Pocket Queries - limitation of 5 logs


klaymen2

Recommended Posts

Interesting discussion . . . I've been keeping my local database for so long that I periodically use this option in GSAK . . . :)

 


 

Perhaps if I were traveling and getting brand new PQs for a different area I would need more than five logs . . . but I bet that need would be for only a tiny fraction of the caches I might search for, since I use GSAK's "Last 2 DNFs" filter before loading my GPSr.

Link to comment
The server utilization wouldn't be less. At best, it would be the same. At worst, it would be much greater.

Are you basing this conclusion on any empirical evidence, technical knowledge of databases, or anything else other than a guess?

 

Second, based on your posts, we can be reasonably certain that people would maximize their PQs.
Show me where I've said such a thing. I've actually said something closer to the opposite, at least if you're talking about getting anywhere close to the maximum number of caches per PQ. Again, "so what" if by maximizing you mean actually having a larger offline database?
Link to comment
CR himself has stated that TPTB have no interest in supporting offline databases.

Well, that's not exactly what I've said. As I pointed out, they were interested in making the "changed last 7 days" option operate a bit more intelligently, and they did so. Because of that, I can't say they have no interest in offline databases, and I've asked: if they truly have no such interest, then why offer that option?

Link to comment
CR himself has stated that TPTB have no interest in supporting offline databases.

Well, that's not exactly what I've said. As I pointed out, they were interested in making the "changed last 7 days" option operate a bit more intelligently, and they did so. Because of that, I can't say they have no interest in offline databases, and I've asked: if they truly have no such interest, then why offer that option?

Well, OK, but you pretty much said that you understood that they just don't want to...

 

http://forums.Groundspeak.com/GC/index.php...t&p=3181695

Link to comment
... Finally, seriously, I have been able to get every PQ I ever asked for delivered to me within 5 minutes of my request with the exception of 2 occasions over the past 2 years. Pretty good odds in my book. Who are all these people that are seriously impeded by slow running PQs??
I suspect that you are either just requesting ad-hoc PQs or are having them sent to you once per week, like I do.

 

The people who set their PQs to run every day sometimes don't get their PQs because there are only so many server hours in the day. Of course, they still have yesterday's PQs, so I don't really understand what all the grumblage is about.

Link to comment
The server utilization wouldn't be less. At best, it would be the same. At worst, it would be much greater.
Are you basing this conclusion on any empirical evidence, technical knowledge of databases, or anything else other than a guess?
Here's my thinking: If my PQ only returns the logs since the last time I received a PQ, it would have to 1) look up the last time I got that PQ and 2) filter the logs by that date and time. I don't care who you are, you can see that this adds a step. More steps equals more work per PQ. More work equals more time per PQ. More time per PQ means fewer PQs get processed.
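The "extra step" being argued about above can be sketched as two queries. This is only an illustration against an invented schema; the real Groundspeak tables, column names, and user/PQ identifiers are unknown.

```python
import sqlite3

# Hypothetical schema purely for illustration; the real Groundspeak
# tables and column names are unknown.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE logs (cache_id TEXT, logged_at TEXT, body TEXT);
    CREATE TABLE pq_runs (user_id TEXT, pq_id TEXT, last_run TEXT);
""")
conn.executemany("INSERT INTO logs VALUES (?, ?, ?)", [
    ("GC1234", "2008-01-01", "Found it"),
    ("GC1234", "2008-01-05", "DNF"),
    ("GC1234", "2008-01-09", "Found it again"),
])
conn.execute("INSERT INTO pq_runs VALUES ('some_user', 'pq1', '2008-01-03')")

# Step 1: look up when this user's PQ last ran (the added lookup).
(last_run,) = conn.execute(
    "SELECT last_run FROM pq_runs WHERE user_id = ? AND pq_id = ?",
    ("some_user", "pq1"),
).fetchone()

# Step 2: filter logs to those newer than that date (the added filter).
new_logs = [body for (body,) in conn.execute(
    "SELECT body FROM logs WHERE cache_id = ? AND logged_at > ? "
    "ORDER BY logged_at",
    ("GC1234", last_run),
)]
print(new_logs)
```

Whether that per-user lookup and filter costs more than it saves in smaller result sets is exactly the point being disputed in this thread.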
Second, based on your posts, we can be reasonably certain that people would maximize their PQs.
Show me where I've said such a thing. I've actually said something closer to the opposite, at least if you're talking about getting anywhere close to the maximum number of caches per PQ. Again, "so what" if by maximizing you mean actually having a larger offline database?
Well, you've made similar posts on a number of occasions. Here's one:
... Would folks expand their territory if the PQ system became efficient? You betcha. ...
Edited by sbell111
Link to comment
This is getting old. If you're quoting something, those ellipses sure are convenient for removing the parts that don't agree with what you're saying.
When I truncate someone's post, I always leave in the quick link back to the post so anyone can verify it or read more of the thread for context.
Increasing "territory" != increasing PQ's if they become more efficient.
My original post stated that people would maximize their PQs and that CR has posted the same. CR stated that he never suggested that this is true. I gave one of several examples where he has posted this. I could just as easily have found one where he made the case for larger PQs by arguing that the size/number of PQs is insufficient to support a large enough off-site database.
Link to comment
This is getting old. If you're quoting something, those ellipses sure are convenient for removing the parts that don't agree with what you're saying.
When I truncate someone's post, I always leave in the quick link back to the post so anyone can verify it or read more of the thread for context.
Increasing "territory" != increasing PQ's if they become more efficient.
My original post stated that people would maximize their PQs and that CR has posted the same. CR stated that he never suggested that this is true. I gave one of several examples where he has posted this. I could just as easily have found one where he made the case for larger PQs by arguing that the size/number of PQs is insufficient to support a large enough off-site database.

Nice twisting, but I'm sure you understand that and argue this obfuscation simply to cast doubt on my argument. Funny thing is, folks are picking up on your twisting and not buying it.

 

Expanding territory does not equal your argument that the PQs would be just as large or larger, nor does it mean less efficiency in the server. You're not presenting any valid argument to the contrary other than "nuh-huh."

 

Here's my thinking: If my PQ only returns the logs since the last time I received a PQ, it would have to 1) look up the last time I got that PQ and 2) filter the logs by that date and time. I don't care who you are, you can see that this adds a step. More steps equals more work per PQ. More work equals more time per PQ. More time per PQ means fewer PQs get processed.

Sure, it adds a step, but it also removes one. (Instead of considering the last time the PQ was run, it could simply return the logs of the last 7 days, which is probably the most efficient way of doing it.)

 

There are various ways of getting all of the logs:

  • Send every log the cache has. Not very efficient, as any repeat run of the PQ resends information already sent.
  • Send all logs since the last user PQ run. This is what the OP suggests (what I got out of it, anyway). While this would be efficient in that it doesn't send repeated logs, it's not in that each PQ has to be individualized (not merely that it adds a step). While it would require a slightly more complex SQL statement, the resultant increase in time would be negligible and would be more than offset by the reduced number of PQs being pulled. Not only would the PQs be smaller, but couple this with other saving schemes, like not sending logs that have not changed, not sending descriptions that have not changed, etc., and you'll gain speed in file transfer, zipping the files, and so on.
    I don't particularly care for this scheme for other reasons as well, like how the system would handle a PQ that is expanded to include an older cache that was not in a previous query. You're left with an incomplete log history, which kind of defeats the purpose.
  • Pull all, and only, the information changed in the last 7 days. While this would require weekly downloads, the server is able to pre-compile the logs and changed data so it only has to do it once, not hundreds or thousands of times. This is less efficient in terms of how often one has to pull a PQ (not to mention less custom, individualized, or "slick"), but much more efficient in terms of getting the info out.
    In the query, you're simply replacing a "LIMIT 5" clause with one like "LOG_CHANGED_DATE > (CURR_DATE - 7 days)".
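As a sketch, the two query styles contrasted above might look like this against a hypothetical logs table (all names are invented; the real schema is unknown):

```python
import sqlite3
from datetime import date, timedelta

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logs (cache_id TEXT, logged_at TEXT, body TEXT)")
conn.executemany("INSERT INTO logs VALUES (?, ?, ?)", [
    ("GC1234", "2007-12-20", "old log"),
    ("GC1234", "2008-01-04", "recent log"),
    ("GC1234", "2008-01-09", "newest log"),
])
today = date(2008, 1, 10)  # pretend "now" so the example is reproducible

# Current behavior: the newest five logs per cache, however old they are.
last_five = [b for (b,) in conn.execute(
    "SELECT body FROM logs WHERE cache_id = ? "
    "ORDER BY logged_at DESC LIMIT 5", ("GC1234",))]

# Proposed behavior: only logs from the last 7 days, which the server
# could pre-compile once per week for everybody instead of per user.
cutoff = (today - timedelta(days=7)).isoformat()
last_week = [b for (b,) in conn.execute(
    "SELECT body FROM logs WHERE cache_id = ? AND logged_at > ? "
    "ORDER BY logged_at", ("GC1234", cutoff))]
print(last_five, last_week)
```

The second query returns nothing the user already has, which is the whole efficiency argument in a nutshell.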

I suppose if there were an option to "catch up" on orphaned caches that would handle the incomplete log sets. Have the option turn off automatically after the query is run to prevent automatic weekly downloads of complete sets.

 

Here's something else I'd suggest simply from the usability, "human nature" aspect (differential PQs only; just stating that in case someone is confused):

  • Always include archived and disabled caches in PQs, even if the PQ is truncated at 500 caches. This makes sure the appropriate information is included to get the archived data out of the list of viable caches instead of leaving them orphaned. What this does is:
    • Allow folks to use the "changed in last 7 days" option and not have to worry about archived caches.
    • Not require folks who aggregate caches and pull full datasets to go online to see whether a cache was archived or simply fell out of the filter for some reason. This is another saving on bandwidth.
    • Allow cachers to fully automate the updates of their lists and get into the field that much more quickly.

  • Allow an efficient, automated way to ignore caches. Folks could build a list of caches offline (the method is irrelevant), and then a program like GSAK could send a URL to add that list to the user's ignore list.

    • This would allow users to get caches off their lists that they don't want to do. This might reduce public angst. Maybe. Certainly a desirable side effect.
    • This would certainly save bandwidth over ignoring individual caches, though how much is unknown once folks are able to ignore caches at will and in bulk.
    • This would make the queries more efficient in that they would not be sending caches the user never had any intention of hunting.
    • This would allow automated ignoring of caches. If I choose to ignore caches based on criteria not available via the PQ, like caches older than one year with more than 6 logs and an average log word count of less than 20, I could have GSAK add all such caches to my ignore list and never have to download them again.

  • Only send the information that has changed. There is no need to resend data that hasn't changed. As far as GSAK is concerned, you could drop just about everything other than the cache name and GSAK won't change it. This is a huge savings in the file size, and thus in the speed with which the file is zipped and transferred, and the end GSAK user would never miss it. (I'm not experienced with many other programs, so I don't know how it would affect them until the author updated.) I suspect this would change a complete cache data set with 5 logs to one with only one or two logs in most cases. (Judging from my update logs, this is the vast majority.)
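The "only send what changed" idea can be sketched as a simple record diff. The field names here are illustrative, not Groundspeak's actual GPX schema:

```python
# Sketch of a differential update: diff a stored cache record against a
# fresh one and keep just the cache code plus any fields that differ.
# Field names are invented for illustration.
def differential(old: dict, new: dict) -> dict:
    delta = {"code": new["code"]}  # always keep the identifier
    for field, value in new.items():
        if field != "code" and old.get(field) != value:
            delta[field] = value
    return delta

previous = {"code": "GC1234", "name": "Creek Cache", "status": "active",
            "description": "A nice walk.", "logs": ["Found it"]}
current = {"code": "GC1234", "name": "Creek Cache", "status": "disabled",
           "description": "A nice walk.", "logs": ["Found it", "DNF"]}

delta = differential(previous, current)
print(delta)  # only "code", "status", and "logs" survive the diff
```

The unchanged name and description never hit the wire, which is where the zipping and transfer savings come from.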

So, while it may be true that any one change can increase the time it takes to do that one step, the savings elsewhere can more than offset it. It would be ridiculous to claim that any change at all would increase the server load and risk folks not getting their PQs. We know that Groundspeak just recently fixed the "Most Painful Bug in the History of Time," so we know that not all the code is as efficient as possible. Things can be made better and more efficient, and it's not just in the way the code is written, but also in procedure and policy. Combine several changes and the savings accumulate. The best part is folks getting what they want. What's wrong with that?

Link to comment
Why not just break each state up into one or more areas and have those pre-set as "stock" PQ's? That way, those of us who like to keep a "complete" offline database could just pick from the list of stock queries which should, in theory, make it simpler for GC.

I'm for it. Some are against it because then the PQs aren't personalized. I couldn't care less about that as GSAK interrogates logs for finds and such.

 

This would be a huge savings for Groundspeak: one massive query per region instead of many times that amount of work sent out as individual queries.
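The savings from stock regional PQs come from caching: the expensive query runs once per region, and every subscriber downloads the same file. A minimal sketch (region names and filenames are made up):

```python
# Sketch of "stock" regional PQs: the expensive regional query runs once,
# and every subscriber gets the same pre-built file. Names are invented.
from collections import Counter

query_runs = Counter()  # count how often the expensive query actually runs

def run_regional_query(region: str) -> str:
    query_runs[region] += 1  # the expensive part: one big database query
    return f"caches-for-{region}.gpx"

region_files: dict[str, str] = {}

def fetch_stock_pq(region: str) -> str:
    if region not in region_files:   # build the regional file only once
        region_files[region] = run_regional_query(region)
    return region_files[region]

# A thousand users asking for the same region trigger one query, not 1000.
files = {fetch_stock_pq("north-texas") for _ in range(1000)}
print(files, query_runs["north-texas"])
```

The trade-off, as noted above, is that the file isn't personalized, so a tool like GSAK would still have to mark the user's own finds locally.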

Link to comment
My original post stated that people would maximize their PQs and that CR has posted the same. CR stated that he never suggested that this is true. I gave one of several examples where he has posted this. I could just as easily have found one where he made the case for larger PQs by arguing that the size/number of PQs is insufficient to support a large enough off-site database.
Nice twisting, but I'm sure you understand that and argue this obfuscation simply to cast doubt on my argument. Funny thing is, folks are picking up on your twisting and not buying it.
There was absolutely no twisting. In the linked post, you stated that people would expand their territory if the PQ system became efficient. That is in exact agreement with my post.
Expanding territory does not equal your argument that the PQs would be just as large or larger, nor does it mean less efficiency in the server. You're not presenting any valid argument to the contrary other than "nuh-huh."
Actually, you are twisting my words here. What I stated was that if a cacher were able to update his offline database with a PQ of only the caches that had changed in the period, resulting in a smaller PQ (150 records instead of 500, using your estimate), the cacher would be likely to increase the size of his offline database so the PQ approached 500 records.

 

This change would, therefore, result in no savings to TPTB because their server would be working as hard or harder than before. It would also create the very thing that TPTB don't want, larger offline databases.

Link to comment

I don't think the server load is the reason for not sending out more than 5 logs, and therefore I don't think coming up with a plan that would reduce server load will solve the "problem".

 

If they ever decide to support offline databases, they'll probably make more than 5 logs available as well as sending out Archived Cache waypoint codes so those can be easily filtered out of databases quickly.

 

I'm not holding my breath for either to happen.

Link to comment
This topic is now closed to further replies.