
Pocket Queries - limitation of 5 logs


klaymen2


Hi all,

 

I'm from Switzerland and very new to geocaching, just started a week ago but like it quite a lot. So that's also my first posting here :-) A friend recommended GSAK to me, and I just started using it with pocket queries and CacheMate on my PocketPC.

 

Unfortunately I noticed that I only get 5 logs per cache in GSAK. Obviously that's a limitation of pocket queries. Of course, more than 5 logs may accumulate over time for people who have been using it for a while, but for us newbies that's a real bummer. Is there really no solution gc.com could offer us to get more logs?

 

Yes, I understand the network bandwidth problem. But after all, this "full log listing" would be a one-time effort; afterwards, differential data would suffice. I could even accept a one-time payment for such a full download. Another possibility I was considering is using some kind of P2P technology, either by directly putting some base files onto ed2k, or by implementing a similar mechanism directly into a dedicated client. That would of course mean some development, but it may be worth thinking about. An option in GSAK to get just the log entries from another database, instead of syncing it completely, would already help a bit, so people could at least help others out with older data.

 

I'd really be glad to have some solution for this problem. Regards, klaymen

Link to comment

Also want to let you know that sharing a database is against the terms of use, and "scraping" the database or setting up a program that automatically downloads all the data from the web site will get you banned from the site.

 

That being said, after you have done this for a while, 5 logs will be plenty. I rarely look at the logs any more.

Link to comment

At LEAST 95% of the time, I only GLANCE at the most recent logs (say, the last five or so) to determine a hit-or-miss ratio of finds to DNF's. If there are five finds, my chances are pretty good. If I see three or more DNF's out of five, I MAY skip that cache for the time being. If I see mention of grizzly sightings or UFO abductions, I bring my blow dart gun along...just in case.

:)

 

I'm really just lookin' at the log for the number of recent smileys to let me know if the cache is probably there.

Edited by tabulator32
Link to comment

I recently had a couple caches where I wish I'd had more logs. Both for the same reason.

 

Couldn't find the cache; it was not at ground zero. There were no DNFs in the past 5 logs. When I returned home, I read in log #6 that a finder had posted corrected coordinates. I didn't have those coords, while other finders did. :)

 

Of course, this problem would also have been solved if the cache owner had maintained their cache and made sure coords were right... :)

Link to comment

Huh. I'm no novice and do like having a larger selection of logs. I've been keeping them since GPX files were available. They do come in handy at times, especially now that I'm filtering based on the average number of words per find log.

 

Oh, and I did some experimenting a while back on the sizes of GPX files with differing numbers of logs. IIRC, a maximum of 50 logs only doubles the size of the file, yet it would allow that file to be pulled less than half as often. While there would be a bandwidth hit in some instances, I think the trade-off would more than make up for it.

Link to comment
Oh, and I did some experimenting a while back on the sizes of GPX files with differing numbers of logs. IIRC, a maximum of 50 logs only doubles the size of the file, yet it would allow that file to be pulled less than half as often. While there would be a bandwidth hit in some instances, I think the trade-off would more than make up for it.

No it wouldn't reduce frequency, sorry to say. People are more concerned with "fresh" data, so they would pull it just as often. The topics regarding PQ's and wanting more than 5 per day more than illustrate that point.

Link to comment

No it wouldn't reduce frequency, sorry to say. People are more concerned with "fresh" data, so they would pull it just as often. The topics regarding PQ's and wanting more than 5 per day more than illustrate that point.

 

This wouldn't really be an issue if it had an option for differential results, where the first time it returned x number of logs, and subsequent runs only returned new logs since your last search. Subsequent runs would often use less bandwidth as a result.

Link to comment
No it wouldn't reduce frequency, sorry to say. People are more concerned with "fresh" data, so they would pull it just as often. The topics regarding PQ's and wanting more than 5 per day more than illustrate that point.

This wouldn't really be an issue if it had an option for differential results, where the first time it returned x number of logs, and subsequent runs only returned new logs since your last search. Subsequent runs would often use less bandwidth as a result.

Ah, but now you are talking about a secondary database for every Premium Member on the site cross-referenced to every cache listed on the site. There would have to be a database built that would keep track of whether you are a PM, whether you ever ran a full PQ for every cache and then what logs you got for that cache each time you ran a PQ -- for every PM cacher and for each and every cache on the site. While it might be less bandwidth, I would hate to see the server lag while it does all of this database recalculation.

 

The last 5 logs are fine by me.

Link to comment

We're doomed to have this conversation every quarter or two with the same sides replaying the same roles repeatedly. So I'll take my turn at the adversarial side. Again.

 

There is no "secondary database" needed. (The argument about needing to have a secondary database to determine if a PQ has been run by a Premium Member is just weird - only PMs can run PQs.) The database knows the timestamp the last PQ ran. It knows the timestamp of all transactions. It's a trivial SQL inner select to return the caches/logs modified since the last PQ was run. Providing truly differential PQs (some of the caches will have > 5 logs since last run; some will not have changed at all) wouldn't be hard.
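
Purely as an illustration of that inner select - the table and column names here (caches, logs, pocket_queries, last_run_at) are invented, since the real schema isn't public:

SELECT c.cache_id, l.log_id, l.log_text
FROM   caches c
JOIN   logs   l ON l.cache_id = c.cache_id
WHERE  l.logged_at > (SELECT last_run_at        -- timestamp of this PQ's previous run
                      FROM   pocket_queries
                      WHERE  pq_id = @this_pq);

The point is simply that the filter hangs off one timestamp comparison; the rest of the query is whatever runs today.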

 

The topics about people concerned with fresh data are exactly why 5 logs isn't enough. In areas of heavy caching activity, five logs isn't even one day. So even pulling a PQ daily, you WILL miss logs. The current site behaviour conditions those people to run PQs on their areas daily.

 

The various threads here have established that most people running the same PQ repeatedly are doing it to capture the updated logs and, to a much lesser extent, cache page prose. If a PQ could return even one day's worth of logs - and even more so if it could contain an interesting subset's worth of logs - there could be *fewer* PQs run.

 

A PQ that could be run that didn't run the risk of losing logs would be a win for both sides.

 

The arguments about returning > 1,000 logs per cache are just not getting what people are asking for. Fortunately, there are a few integers between "five" and "infinity". It's not about seeing a "TNLNSL" log from 2002; it's about seeing log #6 that said "found 890 feet away using fone-a-friend at coords X/Y" when the last five logs say "found using coords given in log #6".

 

I understand database design and the reality of distributing data updates. This is a totally solvable problem if it got the attention of the right people, and I'm convinced it could *reduce*, not increase, the total number of bytes shipped if implemented effectively, with the support of the site's data partners, and with a little thought put into how the process worked.

 

Return logs > 5 - perhaps up to 20/25 since the PQ was last run. In many cases, it will return 0 logs. That's a clear win. A consultation with the guys that make the software that actually matters to the site (Clyde, Lil Devil, et al.) that resulted in sensible handling of logs and of archived/temporarily disabled caches could be a win for everyone.

 

You just have to get past The Way Things Have Always Been. That's the frustrating part for the guys that have been asking for > 5 logs.

 

I'm one. I don't want infinite logs. I want more than five when I order up a weekly PQ. Five logs when you're behind a cache train is a ticket to frustration.

Link to comment

Perhaps the request should be

 

Can I have the logs that contain corrected coordinates in PQs

 

or

 

When corrected coordinates have been posted, they are included as additional waypoints

 

or

 

When corrected coordinates have been posted more than once, the needs maintenance flag is placed on the cache.

Link to comment

I'm not sure that it would be a good idea to add steps for the PQ server. People already complain when it gets tied up and they don't get their daily PQ. It seems to me that adding one more step would just slow it down a little more.

 

As such, I like markandlynn's suggestion to add a maintenance flag if corrected coords are added more than once. The flag would notify seekers that there might be a problem with the cache and the owner could easily reset it if he either changes the coords or verifies that his are correct.

Link to comment
There is no "secondary database" needed. (The argument about needing to have a secondary database to determine if a PQ has been run by a Premium Member is just weird - only PMs can run PQs.) The database knows the timestamp the last PQ ran. It knows the timestamp of all transactions. It's a trivial SQL inner select to return the caches/logs modified since the last PQ was run. Providing truly differential PQs (some of the caches will have > 5 logs since last run; some will not have changed at all) wouldn't be hard.

Right now, all the database has to worry about is the timestamp. It doesn't have to worry about anything else. While what you are saying wouldn't, in your opinion, be hard, people complain that the server from time to time does not deliver their PQ's until the next day. That tells me that the current server might be overworked from time to time. Hopefully one day they will upgrade that server. For now though, it seems to me that asking a taxed server to do more tasks might not be in the best interest of those using that server.

 

And "secondary database" was not the right thing to say, no. A modification to make the existing database do more is really the right thing, yes. That goes to my argument above. I still fall back on my argument of "let's get the web site stable and running fast seven days a week first please".

Link to comment

The point that's lost on the "change is bad" crowd is that a simple change could *reduce* server load, the number of PQs requested, and the total number of bytes shipped over the MTA/network hose. What does it need to worry about? The timestamp. No change is necessary in the SQL data structure. Not one extra byte of disk is taken. You need a few extra 'where' clauses in the SQL and if you're applying for the bonus round of more PQs up front, a few dozen lines of UI goo to offer the menu options to select the log count.

 

I get that stability is good. Many of us have been waiting in that line a very long time through periods of ups and downs.

 

Why do people configure recurring PQs?

 

1) To get around the 5 log limit.

Most people don't want ALL logs; they want the logs since the last time their PQ ran and they want enough logs to be able to read past that cache train that came through town or that family of five kids that just got their own accounts but have gone back and, on the same day, logged every cache they found as a family, thus blowing the logs for seekers in the area for a while.

 

2) To figure out when caches are NLA.

Searching for caches that aren't there is bad. The most practical way to get notice from the site that caches aren't there now is to request the same PQ and observe what's missing.

 

3) To see when caches materially change. Perhaps the coords changed, the cache type changed, or the cache description changed.

 

If you're in an area of high cache density (at this moment, 500 caches is a 5.3 mile circle around me. About 20% of that is water...) the "pull the PQ before you leave" thing just doesn't work.

 

A few simple changes to the PQ selection process could reduce the number of PQs requested to satisfy those common demands.

 

1) Offer options of returning zero, five, and 15 logs the first time a PQ is run.

Why zero? If you're surveying an area for density or pulling puzzle pages, you might not really use *any* logs. If I'm just scouting an area, I need more info than what's in the .loc but I don't really need logs. Save everyone the expense of preparing them.

 

2) Make incremental PQs work sensibly. Don't make people pull a PQ each day just to see if anything has changed. If it's a PQ that's been run before, just return the data that's changed - additional logs (all of them), edited cache pages, etc. If a cache that would have been in this PQ has been archived since the last time this PQ has run, return it with "archived=true" so the client knows it's been shot down. It knows when the PQ was last run. It knows what's changed since then. Why resend unchanged data? Why not send all of what HAS changed?

 

If you have a user that's running a PQ daily to get data that's important to them, and you can wean them to "on demand" or even weekly (the coarsest grain of automation allowed here) by handing them JUST the data that they want, that could be a big win. The nice thing is that if the site makes intelligent use of this data and there is little or no change in the PQ data, it's nothing but a win for the site - and reconditioning the user to *NOT* request dailies is a win in itself. (In very few areas would 100% of any PQ be dirtied on any given day.)

 

...and all the database has to worry about is the timestamp of when the data was last delivered - something it's already tracking. It just needs to be smarter and not blindly regurgitate the entire selection with the entire log set exactly as it did last time.
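
To make that concrete, here is a minimal sketch of an incremental selection. The table and column names (pq_result_set, last_updated_at, logged_at) are stand-ins for whatever the real schema uses: return only the caches in this query's result set that have been touched since its last run, including any archived in the interim so the client can drop them.

SELECT c.cache_id, c.name, c.archived
FROM   caches c
WHERE  c.cache_id IN (SELECT cache_id FROM pq_result_set WHERE pq_id = @this_pq)
  AND  (c.last_updated_at > @last_run_at          -- page edits, coord changes, archival
        OR EXISTS (SELECT 1 FROM logs l           -- or any new log since the last run
                   WHERE  l.cache_id = c.cache_id
                     AND  l.logged_at > @last_run_at));

Unchanged caches simply fall out of the result set, which is where the bandwidth saving would come from.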

Link to comment
The point that's lost on the "change is bad" crowd is that a simple change could *reduce* server load, the number of PQs requested, and the total number of bytes shipped over the MTA/network hose. What does it need to worry about? The timestamp. No change is necessary in the SQL data structure. Not one extra byte of disk is taken. You need a few extra 'where' clauses in the SQL and if you're applying for the bonus round of more PQs up front, a few dozen lines of UI goo to offer the menu options to select the log count.

 

I get that stability is good. Many of us have been waiting in that line a very long time through periods of ups and downs.

 

Why do people configure recurring PQs?

 

1) To get around the 5 log limit.

...

2) To figure out when caches are NLA.

...

3) To see when caches materially change. Perhaps the coords changed, the cache type changed, or the cache description changed.

...

 

A few simple changes to the PQ selection process could reduce the number of PQs requested to satisfy those common demands.

 

1) Offer options of returning zero, five, and 15 logs the first time a PQ is run.

Why zero? If you're surveying an area for density or pulling puzzle pages, you might not really use *any* logs. If I'm just scouting an area, I need more info than what's in the .loc but I don't really need logs. Save everyone the expense of preparing them.

 

2) Make incremental PQs work sensibly. Don't make people pull a PQ each day just to see if anything has changed. If it's a PQ that's been run before, just return the data that's changed - additional logs (all of them), edited cache pages, etc. If a cache that would have been in this PQ has been archived since the last time this PQ has run, return it with "archived=true" so the client knows it's been shot down. It knows when the PQ was last run. It knows what's changed since then. Why resend unchanged data? Why not send all of what HAS changed?

 

If you have a user that's running a PQ daily to get data that's important to them, and you can wean them to "on demand" or even weekly (the coarsest grain of automation allowed here) by handing them JUST the data that they want, that could be a big win. The nice thing is that if the site makes intelligent use of this data and there is little or no change in the PQ data, it's nothing but a win for the site - and reconditioning the user to *NOT* request dailies is a win in itself. (In very few areas would 100% of any PQ be dirtied on any given day.)

 

...and all the database has to worry about is the timestamp of when the data was last delivered - something it's already tracking. It just needs to be smarter and not blindly regurgitate the entire selection with the entire log set exactly as it did last time.

I'm certainly not the tech guy that you are (and I don't talk smack about you in other forums either :unsure: ), but I think that mtn-man kinda already responded to your points:
No it wouldn't reduce frequency, sorry to say. People are more concerned with "fresh" data, so they would pull it just as often. The topics regarding PQ's and wanting more than 5 per day more than illustrate that point.
Right now, all the database has to worry about is the timestamp. It doesn't have to worry about anything else. While what you are saying wouldn't, in your opinion, be hard, people complain that the server from time to time does not deliver their PQ's until the next day. That tells me that the current server might be overworked from time to time. Hopefully one day they will upgrade that server. For now though, it seems to me that asking a taxed server to do more tasks might not be in the best interest of those using that server.

 

And "secondary database" was not the right thing to say, no. A modification to make the existing database do more is really the right thing, yes. That goes to my argument above. I still fall back on my argument of "let's get the web site stable and running fast seven days a week first please".

Edited by sbell111
Link to comment

I agree with you in theory. I think there are some problems with it in practice, though (and not just "because it's always been that way").

 

The inefficiency may indeed be by design.

If the PQs are more efficient, people will draw more caches in a bigger radius. That's just human nature when it comes to programming and querying data.

 

Right now, if I were to query the caches surrounding my house, regardless of my finds, I would be limited to a 36.25 mile radius. That would give me 2499 caches. If I hit that as a single-shot first run and then did a differential search later, and there was a 20% change weekly (arbitrary number), I would only need one PQ to accomplish the differential change. So what's a cacher to do with the extra four? Increase the radius. Now if I can get recurring differential PQs that only return 20% on a daily basis, I should be able to get 5 times the data, so I should be able to get 12,000 caches. So, just increase the radius to 148.87 miles.

 

I know that you're saying "If it's a PQ that's been run before, just return the data that's changed - additional logs (all of them), edited cache pages, etc.", so it wouldn't work in such a way that people would be able to get the 12,000 caches and keep them differentially updated.

 

But then you'll have people complaining that they have 5 PQs CAPABLE of 500 caches per day, and they SHOULD BE ABLE TO get 2500 in their in-box every day. If the change is 20% and the cacher ran an efficient 2500 the first time, the 5 differential PQs would only return about 100 each, or 500 per day. People will complain about that.

 

Differential PQs will not take into account new caches that didn't show up before.

If I blanket an area with PQs right now, I divide them up by date ranges. The last PQ I have now runs caches placed between 7/1/2007 and 12/31/2008. In your scenario, I would have a set return, and then it would find caches that have changed since the last time it ran. New caches would not have an initial timestamp to compare.

 

But even if the system DID say "if the earlier comparison timestamp is null, include this cache", there's no way for me to know if the PQ is currently going to hit the max of 500. Let's say I've got the "newest" PQ set up and it's covering about 450 caches, returning change data on roughly 90 of them (20% of 450). If there are 150 new caches placed around a Mega event, and 400 of the 450 existing caches in the area are updated because of changed logs and bug drops due to the Mega event, the returned list will exceed the 500 limit where it had only been returning 90. That's a problem.

 

Ultimately, running differential PQs would be a design change to increase efficiency in maintaining an offline database.

TPTB have stated that the maintenance of an offline database is NOT what PQs were designed for. They were designed as a way to grab the data and plug it into your GPS. All the talk about downloading all of the changes facilitates maintenance of the offline database in GSAK or other software. Since that's not something they encourage, what is the motivation to enhance the ability to do that?

 

Like I said, I agree in theory, but it is not a simple flip of a switch.

Edited by Markwell
Link to comment

If the PQs were not designed to allow for offline databases, then why are there features for it? No one has pointed out to me a reason for the "changed in the last 7 days" option that is not for offline databases. In fact, I think a discussion related to that feature happened a while back because it originally didn't return caches whose only new logs were something other than "found it" logs. It was agreed that these non-find logs were important, and the feature was changed to include them. I'd figure if such a feature wasn't meant to be, it wouldn't have been changed to better accommodate offline databases.

 

Additionally, the idea that all PQs would expand to fill the void is complete and total supposition with little to no basis in fact. It's my experience, as I actually pay attention to such things, that over a week anywhere from 25 to 35 percent of the caches change in some way--not 20%. It's about half over a month's period. Of course, this is just my stomping grounds which includes a little under 8000 caches spread over one whole state and parts of three others.

 

You simply can't expand a differential PQ out to 500 caches without missing a lot of caches. It would be completely counterproductive. The number of caches that changes, as has already been mentioned, fluctuates a good deal. You might be able to safely double the number of caches you can get without missing a change here and there. Then you'd better watch out for heavy caching weekends, like three-day spring weekends and such.

 

My response to folks getting more of a selection of caches in their PQ scheme is "so what?" It'd still be a reduction.

 

In addition to robertlipe's suggestions above, I know for a fact that GSAK can digest PQs without descriptions or most of the other data. If the data is missing, it won't change what is already there. Do we really need the same data over and over and over and over again? Pull out the unnecessary, unchanged data while including all, but only, the changed data, and the size of the PQ file will drop dramatically. Not only that, but folks would only have to pull the PQ weekly--another drop in bandwidth and server load.

 

Finally, if you're so worried about folks getting their money's worth then make the differentials be pulled from only the first 500 cache selections after filtering. I'd much rather get a full set of logs and a smaller selection of caches than miss information that is useful to me.

Link to comment

Use a program such as GSAK. I get 2 pocket queries each day (what I found, and local ones I haven't).

 

As time goes on, your logs will grow and grow.

 

Granted, at first you may not have too many logs per cache, but in time, you will have more past logs than you can shake a stick at.

 

Then when you print them out or download to a PDA, all those logs will go with you. Or if you've got lots of money, you can take a laptop with you.

Link to comment
Couldn't find the cache; it was not at ground zero. There were no DNFs in the past 5 logs. When I returned home, I read in log #6 that a finder had posted corrected coordinates. I didn't have those coords, while other finders did.

 

I've run into this more than once myself. Since I can't control the behavior of the owners who don't maintain caches properly, what I have done in these situations is as follows:

 

If I am logging a Find in my log I mention that "I found the cache using so-and-so's coordinates. I've listed them here to keep them near the top of the logs for those doing paperless caching."

 

If I end up logging a DNF because the correct coordinates were not in the last five logs I'll note: "Wish I had the corrected coordinates of xxx yyy handy. It's useful to keep them near the top of the logs for those of us doing paperless caching."

 

I've noticed on a couple of the caches where I have done this, people actually noted "Thanks for keeping the coordinates listed, DanOCan. We'll do the same for the people coming after us." Made me smile.

 

Anyway, back to topic:

 

I too think it would be nice to have an option to download more than 5 logs with my query. I don't see any reason to go more than ten back, however.

Link to comment

It doesn't happen often, but I'm on the same side of the fence as CR.

 

In addition to robertlipe's suggestions above, I know for a fact that GSAK can digest PQs without descriptions or most of the other data. If the data is missing, it won't change what is already there. Do we really need the same data over and over and over and over again? Pull out the unnecessary, unchanged data while including all, but only, the changed data, and the size of the PQ file will drop dramatically. Not only that, but folks would only have to pull the PQ weekly--another drop in bandwidth and server load.

 

This paragraph (emphasis mine) is probably the best argument for this. By eliminating things that haven't changed, you leave filespace for things that have. Simply use the "last run" date (which is already in the database somewhere -- it shows on the PQ page) and use that as the basis for the PQ.

Link to comment

You people are forgetting about those of us who use Pocket Queries the way they were designed to be used. I don't maintain an offline database and I do run a fresh query (almost) each time I go out caching. I want all the data in my PQ and not just the changed data. Five logs has always been enough for me. When the rare occasion pops up where I need an older log, I just go online with my phone/PDA and get it live. This happens maybe once every couple of hundred caches.

 

The system works as it is. I see no reason to change it.

Link to comment
For me, the system works as it is. I see no reason to change it.

Fixed.

 

No one is saying it should be changed in a way that would adversely affect you.

 

Also, not everyone has a way to get the internet live while in the field. Even the phone-a-friend option is not always reliable as they, too, may be out caching or simply unreachable.

Link to comment

And of course, this morning we have a topic about someone not getting a regular PQ and then doing a fresh one and getting it. The system does not need more stress at this time. Maybe someday when the PQ server is modified to handle more traffic, but not now.

 

Adding more logs will add more stress and will affect people like me and Lil Devil and others that use the PQs for what they were designed for. I also see no reason to change it at this time.

Link to comment

First let me say that I use GSAK to keep an offline database. I use PQ's to keep up with the geocaches in a 75 mile circle around me; then when I get a chance to go geocaching (which is rare lately), I use the GSAK filters to build a custom set of geocaches for the mood and direction I am heading. The PQ's come once a week. So while GSAK accumulates logs, on the more popular geocaches it will miss some.

 

But one thing has struck me about the people saying they need the log where someone gives corrected coordinates in order to find some geocaches: they are ignoring the fact that the first person who found the geocache and gave the corrected coordinates found it using the coordinates on the page. So those coordinates are obviously good enough to find it.

Link to comment

Whatever happened to customer satisfaction? I would be significantly more satisfied if I could get data the way Robert Lipe suggests.

 

If I was out caching and could pull down the website on my phone to check on some older logs, then I wouldn’t need an off-line database either. But I don’t have that luxury.

 

One simple feature change could actually address a lot of the mentioned issues. If I select the “Updated in last 7 days” selection, then it makes complete sense to me to be sent all the logs from the past 7 days and I could understand it if it did not include any older ones. It would be significantly more satisfying for me to get to see all the logs in the PQs by running each only once per week.

 

Arguments about server load are all nonsense. We are talking about the PQ server, which is a different machine from the server that gives us grief on some weekends. Most scheduled PQs are sent before noon, and ones I nudge to run in the afternoon and evening run right away. I can only assume that it has plenty of idle capacity. But even if that were not the case, I am the customer and I want my data right away so I can go out geocaching.

 

I can occasionally see what is important to TPTB. Cool features, one or two bug repairs a month, some new glitz,...all very nice for everyone to enjoy while browsing the site. Customer satisfaction does not seem to be as important. I believe the fundamental problem was pointed out many months ago by Jeremy: those of us who are power cachers are a minute minority; we simply don't make an economic difference for them.

Link to comment
... Arguments about server load are all nonsense. We are talking about the PQ server, which is a different machine from the server that gives us grief on some weekends. Most scheduled PQs are sent before noon, and ones I nudge to run in the afternoon and evening run right away. I can only assume that it has plenty of idle capacity. But even if that were not the case, I am the customer and I want my data right away so I can go out geocaching. ...
No offense, but I don't think that you understand how the PQ server prioritizes its jobs.

 

Basically, it runs them in inverse order of how recently they last ran.

 

If I create a new PQ, the machine looks at the job and sees that it's never run before. That request gets the highest priority.

 

Next are the PQs that haven't run in a long time. I have some PQs built that I only need a few times per year. When I go in and ask for these to be run, they are sent to me pretty quickly.

 

The weekly ones are next. These still come to us pretty well. Mine very often find their way to me about the time I get up and get ready for work.

 

The lowest priority jobs are the daily PQs. These get low priority because yesterday's data is still pretty fresh. One could certainly go geocaching using yesterday's info. However, people still want their data the way that they want it, so they request it to be sent to them daily. Unfortunately, there are only so many nanoseconds in the day, and the PQ server sometimes doesn't have enough time to get to everybody's jobs. Therefore, some of these daily PQs don't run. Tomorrow, those PQs will have slightly higher priority because they didn't run the day before, but the person who 'needs' their daily data is still going to be unhappy and create threads complaining about not getting their PQs.
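
A rough sketch of that ordering, with made-up table and column names (pocket_queries, last_generated_at); whether the real scheduler works anything like this is pure guesswork, but it matches the behavior described above. Never-run queries sort first because their timestamp is NULL, then the ones that have waited longest:

SELECT pq_id
FROM   pocket_queries
WHERE  is_scheduled_today = 1
ORDER BY CASE WHEN last_generated_at IS NULL THEN 0 ELSE 1 END,  -- new PQs jump the line
         last_generated_at ASC;                                  -- then oldest last-run first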

 

If TPTB were to implement Robert's suggestion, the PQ machine would have to take an additional step when generating each PQ. It's not a huge step, but it is one more thing to do, which would cause each PQ to take just a tiny bit longer to create and send. This would result in the machine not having enough time to run even more of the low priority PQs, making even more people unhappy.

Edited by sbell111
Link to comment

But one thing has struck me about the people saying they need the log where someone gives corrected coordinates in order to find some geocaches: they are ignoring the fact that the first person who found the geocache and gave the corrected coordinates found it using the coordinates on the page. So those coordinates are obviously good enough to find it.

Not true. I've occasionally done caches where the listed coords are so far off they aren't any help, but by making some guesses I was able to get the cache. For example, the decimal portion of the lat and lon were duplicated (both listed as 10.xxx when one should have been 22 - that one was off by over 12 miles), or the hint was enough to find the cache over 150 feet off (after walking around for over an hour checking the few stumps to be found).

Link to comment
when I get a chance to go geocaching ... I use the GSAK filters to build a custom set of geocaches

In the same time it takes you to determine your filter criteria and run it against your database, I can run a fresh PQ for the same criteria.

Unless there is a backlog on the PQ server. GSAK returns results in seconds; the PQ takes minutes (in my experience) to arrive - and then I still have to load it into GSAK so I can get it into the PDA and GPSr. BTW, GSAK can slice and dice the data many more ways than the PQ selector can.

Link to comment
If TPTB were to implement Robert's suggestion, the PQ machine would have to take an additional step when generating each PQ. It's not a huge step, but it is one more thing to do, which would cause each PQ to take just a tiny bit longer to create and send. This would result in the machine not having enough time to run even more of the low priority PQs, making even more people unhappy.

 

Nice hyperbole. Any change at all results in loss of PQs and unhappy children.

 

The point of the suggested change is so the system has to generate and deliver FEWER PQs with fewer cache pages in them. Yes, there is an additional 'where' clause in the SQL when it generates the candidate set of records. Unless every cache in the PQ has been logged since the last time there will be fewer records to process, generate GPX for, compress, and deliver. So you've traded a little work up front (and I'd expect the unit of measure to be "microseconds" and not even "milliseconds") for doing less work later.

Link to comment
If TPTB were to implement Robert's suggestion, the PQ machine would have to take an additional step when generating each PQ. It's not a huge step, but it is one more thing to do, which would cause each PQ to take just a tiny bit longer to create and send. This would result in the machine not having enough time to run even more of the low priority PQs, making even more people unhappy.
Nice hyperbole. Any change at all results in loss of PQs and unhappy children.

 

The point of the suggested change is so the system has to generate and deliver FEWER PQs with fewer cache pages in them. Yes, there is an additional 'where' clause in the SQL when it generates the candidate set of records. Unless every cache in the PQ has been logged since the last time there will be fewer records to process, generate GPX for, compress, and deliver. So you've traded a little work up front (and I'd expect the unit of measure to be "microseconds" and not even "milliseconds") for doing less work later.

Thanks, but there was no hyperbole. In order for hyperbole to exist, there must be exaggeration.

 

Adding steps adds time. Added time results in fewer PQs being run. Fewer PQs being run means a greater number of cachers that go without their daily PQ.

 

I don't buy the argument that your plan would result in fewer PQs, because history (and reading threads from just the last several days) shows that people would want a larger and larger personal database and will create more and more PQs to make it happen.

Edited by sbell111
Link to comment

It is a rare moment when having additional logs is going to do anything to help you find the cache. You are looking for additional hints, aren't you?

 

But when it helps, it REALLY helps!

I cache with a team once a week, and there are more than 5 of us, so 5 past logs might only reflect that yes, the team members found the cache. There could be corrected coords, or someone 6 or more logs back could have posted that the cache was found 45 feet off from GZ. I'd prefer to see 10-20 past logs in the PQ's.

Link to comment

I just thought of a great solution. I hope Opinionate passes this one upstairs for future consideration. I am fine with things as they are. But... if you want more, you pay more for it. You pay per additional log. You pay to run the same exact query every day. Make it a sliding scale. The same query once a week is a given part of your Premium Membership. You want it more often, you pay a bit. Those that want to maintain an offline database pay for that additional service. The current PQ service isn't designed for this usage, but if you want to use it in that manner then you pay extra for the extra workload.

Link to comment

I just thought of a great solution. I hope Opinionate passes this one upstairs for future consideration. I am fine with things as they are. But... if you want more, you pay more for it. You pay per additional log. You pay to run the same exact query every day. Make it a sliding scale. The same query once a week is a given part of your Premium Membership. You want it more often, you pay a bit. Those that want to maintain an offline database pay for that additional service. The current PQ service isn't designed for this usage, but if you want to use it in that manner then you pay extra for the extra workload.

 

That's actually not that bad of an idea.

Link to comment

Adding steps adds time. Added time results in fewer PQs being run. Fewer PQs being run means a greater number of cachers that go without their daily PQ.

In the computer business, that is a wrong assumption.

Since all the data is stored in a giant SQL database, all the info that is needed for the GPX files has to be extracted using SQL queries. Often the total time that a query needs is directly proportional to the amount of data extracted.

Since we will be extracting fewer caches and probably fewer logs, the overall execution time needed for one query will be less than when every cache is exported. The post-processing of formatting the SQL reply into a GPX file will also be faster because there is less to do.

 

Whether it works out this way is something that only Groundspeak can test, but I would not be surprised if it ran significantly faster, meaning more PQ's could be run each day.

Edited by Kalkendotters
Link to comment
The current PQ service isn't designed for this usage...

I keep reading this, and I'm not arguing that the primary reason for PQs was to build offline databases. I seriously doubt it was. However, offline databases were probably given some thought, because of the "last 7 days" option built into PQs. I still don't have an answer as to why that option exists, if not for aggregating PQs in some form or fashion.

 

Can anyone explain that?

 

Now, if a function or feature is included, for whatever reason, wouldn't it make sense to make it work in the most convenient and efficient manner possible? It's not as if any modification to the differential methods would be removing a functionality of the PQs in a way that adversely affects those who don't use it--other than changing the workload of the servers.

 

What I don't get is when folks who clearly know what they're talking about offer up technical solutions to a problem folks have repeatedly complained of, and some folks who don't have a clue go "nuh-huh!" The technically minded folks are saying the servers would have less of a workload while those who aren't as technical are saying it would be more. Who is one to believe: the person who knows what they are talking about or the one who doesn't?

Link to comment

Whatever happened to customer satisfaction? I would be significantly more satisfied if Jeremy would just pick me up at my front door in his new Aston Martin and drive me around to the different caches.

 

Jeremy? Did you see that up there?

 

Put that out on eBay every once in a while and I'll bet you get some rich cacher to take you up on it. That would pay for the additional servers required for the new, versatile "a la carte" PQ system where everyone can request as much or as little data as they desire, as many times a day as they like.

 

;)

Link to comment
Interesting question. How much more "work" would the PQ servers have to do if the number of logs was increased to 10 or 20? I'd say the "average log" is about 4 lines of text. How much larger would that make the PQ's?

It wouldn't increase the size of a PQ a significant amount--as long as the PQ is zipped.
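
As a rough back-of-envelope illustration (my own guesses, not anything measured on the site): at about 4 lines of roughly 70 characters, a log is on the order of 300 bytes of text, so going from 5 to 20 logs adds around 4.5 KB per cache, or a bit over 2 MB uncompressed across a full 500-cache PQ. Plain-English log text typically compresses several-fold in a zip, so the delivered file would likely grow by well under 1 MB.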

 

An even more interesting question would be how much smaller the PQs would be if they contained only, but all, the logs written in the last 7 days.

 

Considering my observations are that only about 25% - 33% of all caches in my included area change in any way, and by far most of those changes are because of new logs, what would the savings be if logs written more than 7 days ago (note: this is not the same as logs dated more than 7 days ago) were not included? I'd think it safe to assume that if you're asking for the caches changed in the last 7 days, you already have the changes from beyond 7 days. Why would you want information you already have?

Link to comment
... Considering my observations are that only about 25% - 33% of all caches in my included area change in any way, and by far most of those changes are because of new logs, what would the savings be if logs written more than 7 days ago (note: this is not the same as logs dated more than 7 days ago) were not included? ....
This thought is exactly why this change would result in higher utilization. Cachers wouldn't be satisfied with just those 150 changed caches when their PQs can carry 500. Instead, they will simply increase the size of their off-line databases to maximize their PQs.
Link to comment
... Considering my observations are that only about 25% - 33% of all caches in my included area change in any way, and by far most of those changes are because of new logs, what would the savings be if logs written more than 7 days ago (note: this is not the same as logs dated more than 7 days ago) were not included? ....
This thought is exactly why this change would result in higher utilization. Cachers wouldn't be satisfied with just those 150 changed caches when their PQs can carry 500. Instead, they will simply increase the size of their off-line databases to maximize their PQs.

Remember sbell111, we is too stupid to understand all them there complicated things. CR pretty much told us so.

 

What I don't get is when folks who clearly know what they're talking about offer up technical solutions to a problem folks have repeatedly complained of, and some folks who don't have a clue go "nuh-huh!" The technically minded folks are saying the servers would have less of a workload while those who aren't as technical are saying it would be more. Who is one to believe: the person who knows what they are talking about or the one who doesn't?

Sometimes the technically minded people forget about the human nature side of things.

 

I do trust that if the technically minded people that run the site feel that the PQ server setup is ready to expand out to provide this service to you, they will do what they feel is best. I'll bow out, since I'm not technically minded enough in your eyes to discuss this, CR.

Link to comment
Instead, they will simply increase the size of their off-line databases to maximize their PQs.

To which I again say, "So what?"

 

Who cares how many caches are in their offline databases when server utilization is lower and fewer folks are troubled with late PQs, or PQs that never arrive?

Link to comment
Instead, they will simply increase the size of their off-line databases to maximize their PQs.
To which I again say, "So what?"

 

Who cares how many caches are in their offline databases when server utilization is lower and fewer folks are troubled with late PQs, or PQs that never arrive?

Take another read of my post.

 

The server utilization wouldn't be less. At best, it would be the same. At worst, it would be much greater.

 

First, I'm not convinced that searching through the database to find the posts that were made since I searched last wouldn't be more work than simply sending me a file of 500 caches. Second, based on your posts, we can be reasonably certain that people would maximize their PQs. This would negate any savings that your proposal would create, if in fact it was more efficient.

Link to comment

I think it is just humorous to read all this arguing about how to "reduce server load", "decrease the size", etc.

 

The crux of the argument here is that certain cachers "need" many more logs or they will not be able to find the cache, or determine its status correctly, or sort for certain words, or whatever. However, somebody else found the cache - and without any of that additional information. So I must seriously question this as a "need".

 

CR himself has stated that TPTB have no interest in supporting offline databases. Therefore they will not allow large queries or more logs. I think we can all argue the details of how this should work until we are blue in the face, but until TPTB want to - they just are not going to do anything to enhance offline databases. No more logs - not in any fashion.

 

Jeremy is on record as saying his personal preference is to work on alternative access methods to the online data rather than spend effort helping those with their offline data.

 

Face facts: this database is large and unique in nature, and we really have zero idea of how it has been implemented. Only the folks at HQ can know what is best for their servers and what is best for their business model.

 

Finally, seriously, I have been able to get every PQ I ever asked for delivered to me within 5 minutes of my request, with the exception of 2 occasions over the past 2 years. Pretty good odds in my book. Who are all these people that are seriously impeded by slow-running PQs?

Link to comment
This topic is now closed to further replies.