
preview overlapping PQs


ras_oscar

Is there a way to plot a graphical representation of multiple PQs to minimize overlap? I currently have 5 queries in one state that return just under 4000 caches. Obviously, as I make finds and as caches are published and archived, the boundaries change. Looking for a way to manage the changing overlaps without running the same queries multiple times.


Yes, I understand that there is an option to query by date placed. However, that adds a dimension of complexity to the issue without really solving the problem.

 

Consider that I want to generate a series of queries that return all the caches matching my criteria in a given state. In my method I first restrict by state and then place queries in each of the 4 corners of the state, and one in the center. As I cache and run subsequent queries, the boundaries of those queries will change. The visualization tool I suggested would assist me in adjusting the center point of the central query to accommodate the 4 perimeter queries. If I instead use the date placed method, placing a series of queries in the center of the state and restricting each first by state and then by date placed, I have no way of knowing whether each query's date range covered the entire state or maxed out at 1000. Either way I'm still constantly adjusting the query criteria. If I use either method with the addition of the visualization tool I suggested, I can be assured that I have the majority of the caches in the range. I have never used the date placed criteria, so perhaps there is something I am missing that makes that option more usable than I am anticipating. I will play with it after my present queries run and see how it works.
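In the absence of a built-in visualization, the overlap check described above can be approximated offline. Below is a minimal sketch (my own helper names, assuming you keep a list of your PQ center points and radii in miles) that flags any pair of query circles that overlap, using the haversine great-circle distance:

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points, in statute miles.
    r = 3958.8  # mean Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def overlapping_pairs(queries):
    # queries: list of (name, lat, lon, radius_miles).
    # Returns pairs whose circles overlap, with the overlap depth in miles.
    pairs = []
    for i in range(len(queries)):
        for j in range(i + 1, len(queries)):
            n1, la1, lo1, r1 = queries[i]
            n2, la2, lo2, r2 = queries[j]
            d = haversine_miles(la1, lo1, la2, lo2)
            if d < r1 + r2:
                pairs.append((n1, n2, round(r1 + r2 - d, 1)))
    return pairs
```

Two circles overlap exactly when the distance between their centers is less than the sum of their radii, so the reported depth is how far you would need to push a center point to eliminate the overlap.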


Yes, I understand that there is an option to query by date placed. However, that adds a dimension of complexity to the issue without really solving the problem.

 

Consider that I want to generate a series of queries that return all the caches matching my criteria in a given state. In my method I first restrict by state and then place queries in each of the 4 corners of the state, and one in the center. As I cache and run subsequent queries, the boundaries of those queries will change. The visualization tool I suggested would assist me in adjusting the center point of the central query to accommodate the 4 perimeter queries. If I instead use the date placed method, placing a series of queries in the center of the state and restricting each first by state and then by date placed, I have no way of knowing whether each query's date range covered the entire state or maxed out at 1000. Either way I'm still constantly adjusting the query criteria. If I use either method with the addition of the visualization tool I suggested, I can be assured that I have the majority of the caches in the range. I have never used the date placed criteria, so perhaps there is something I am missing that makes that option more usable than I am anticipating. I will play with it after my present queries run and see how it works.

 

I had to go through something similar while trying to build a list of all caches eligible for a certain challenge.

 

I tweaked the radius of each query until the results were under a thousand, then used the "Preview in Geocaching Maps" option in the PQ list to eyeball how much coverage that gave. Then I guesstimated the center of the next adjacent circle. To keep track of coverage, I used the Radius Around Point web tool.

 

On a side note, if the PQ generator interface allowed filtering by more than one set of D/T criteria it would not have taken the 30 or so queries I needed to do this.

 

Without that, I had to import all the query results into a GSAK database and dump out the D/T combos I had already, which was about 70% of the data returned.
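The GSAK step described above boils down to a set difference: drop every cache whose D/T combination is already logged. A minimal sketch of that filter (the function and data shapes are mine, not GSAK's):

```python
def still_needed(caches, found_combos):
    # caches: iterable of (gc_code, difficulty, terrain) tuples.
    # found_combos: set of (difficulty, terrain) pairs already logged.
    # Keeps only caches whose D/T combo is still missing from the grid.
    return [c for c in caches if (c[1], c[2]) not in found_combos]
```

With roughly 70% of the returned data being duplicate combos, a filter like this trims the working list down to just the cells still open on the challenge grid.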

 

While most people would rightly consider this to be a pain in the posterior, I honestly enjoy developing solutions that pound a square peg into a round hole to pull up someone else's slack.

 

EDIT: Spelling, add link

Edited by frinklabs

If I instead use the date placed method, placing a series of queries in the center of the state and restricting each first by state and then by date placed, I have no way of knowing whether each query's date range covered the entire state or maxed out at 1000.

If you're wanting to get caches for only your state using this method, it actually makes things even easier. In that case, you wouldn't want to use a radius at all. You'd select your state in the "States/Provinces" list, then select "None Selected" under "From Origin". When you build a PQ for a date range, you'd tweak the dates until the PQ results in just under 1000 caches. That way you know you aren't missing any. Since you've limited it to your state, that PQ will contain all caches in your state within that date range. Repeat the steps for each subsequent date range until you get to present day, at which point you'll want to set the end date as far in the future as you can to capture Events and so you won't have to keep adjusting that end date. You'll want to go in and make minor tweaks every now and then, and you'll have to keep an eye on the newest date range in case it gets close to 1000, in which case you'd add another PQ. This is by far the most efficient method of doing what you want, because there's exactly zero overlap. There also isn't any fiddling around with radii or overlapping circles.
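If the placed dates are already on hand (say, from an earlier export into GSAK), the date boundaries can be computed instead of tweaked by hand. A rough sketch, assuming a plain list of `datetime.date` objects, one per cache:

```python
from datetime import date

def pq_date_ranges(placed_dates, limit=1000):
    # placed_dates: one datetime.date per cache, any order.
    # Returns (start, end) pairs, each range holding at most `limit` caches.
    ds = sorted(placed_dates)
    ranges, i = [], 0
    while i < len(ds):
        j = min(i + limit, len(ds)) - 1
        # Pull the boundary back so one placed date never straddles two PQs.
        while j > i and j + 1 < len(ds) and ds[j + 1] == ds[j]:
            j -= 1
        ranges.append((ds[i], ds[j]))
        i = j + 1
    return ranges
```

Each resulting (start, end) pair becomes one PQ's date-placed filter; since the ranges partition the timeline, there is no overlap between them by construction.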


Gee - didn't they just raise them?

 

I think a better solution would be a "total count of caches" limit instead of mandating five PQs to get just under 5,000 caches. Why not allow users to download 5,000 all at once with one query if they want?

 

But as others will point out, the API will do this now, and give you 6,000 caches.


I had started a previous thread that proposed an option to subtract one query from another. Set query 1 at origin point XY; return 1,000 caches. Set query 2 at origin point XY, minus query 1; return the next 1,000 caches. Set query 3 at XY, minus queries 1 and 2; return the next 1,000 caches. Each query returns a band at an increasing distance from the origin. Continue until the last query returns fewer than 1,000. That thread sank slowly into oblivion. Is there an option to access the API directly without breaking the TOU? I'd be interested in seeing if I could write the code.
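The subtraction idea amounts to sorting caches by distance from the origin and cutting the list into bands of 1,000. A rough offline sketch of what such stacked queries would return (my own helper names, not any real API):

```python
import math

def miles(p, q):
    # Great-circle distance between (lat, lon) pairs, in statute miles.
    la1, lo1, la2, lo2 = map(math.radians, (*p, *q))
    a = (math.sin((la2 - la1) / 2) ** 2
         + math.cos(la1) * math.cos(la2) * math.sin((lo2 - lo1) / 2) ** 2)
    return 2 * 3958.8 * math.asin(math.sqrt(a))

def distance_bands(caches, origin, limit=1000):
    # caches: list of (code, lat, lon). Band N is "query N minus
    # queries 1..N-1": the next `limit` caches outward from the origin.
    ranked = sorted(caches, key=lambda c: miles(origin, (c[1], c[2])))
    return [ranked[i:i + limit] for i in range(0, len(ranked), limit)]
```

Because each band picks up exactly where the previous one stopped, the union of the bands covers every cache around the origin with zero duplication, which is precisely what the proposed subtraction option would give.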


Gee - didn't they just raise them?

 

I think a better solution would be a "total count of caches" limit instead of mandating five PQs to get just under 5,000 caches. Why not allow users to download 5,000 all at once with one query if they want?

 

But as others will point out, the API will do this now, and give you 6,000 caches.

 

I had always assumed the limits were there to preserve server bandwidth and to prevent people from scooping up the entire database. Remember, the database is really the primary thing GS is "selling". Making large chunks available would bend their business model. It's in everybody's interest to ensure GS has a viable business model moving forward.

Edited by ras_oscar

WARNING! GSAK workaround ahead. ;)

 

Load the queries into GSAK, then use a GSAK macro to display them in Google Earth (or Google Maps if you prefer).

I already use GSAK to export them to MS Streets and Trips. I'm looking for a different way to visualize the results preview, essentially overlapping the radius circles in the web page. Once I have them in GSAK I would just export them to MS Streets and Trips for cache trip planning, and to the eTrex for execution.


How about the download waypoints option on the web page map? I realize there are no filters available, but does it load all the data for each cache like a PQ or only the .LOC information? If I export that to GSAK I could manually sort and remove the caches I normally filter out.

Edited by ras_oscar

Bah humbug....

 

It was going OK until I got to this year, where it works out to a PQ each month (on a 50 mile radius). As I use GSAK I thought I'd just use the API to pull in this year's caches, but I get an API error.

 

Range specified is longer than allowed. Range must be less than 30 days.

 

Why is it I can set a PQ with a range spanning years, but the API only allows you a month?

 


WARNING! GSAK workaround ahead. ;)

 

Load the queries into GSAK, then use a GSAK macro to display them in Google Earth (or Google Maps if you prefer).

I already use GSAK to export them to MS Streets and Trips. I'm looking for a different way to visualize the results preview, essentially overlapping the radius circles in the web page. Once I have them in GSAK I would just export them to MS Streets and Trips for cache trip planning, and to the eTrex for execution.

 

If you have the caches in GSAK the macro PlacedPQ.gsk will work out the Dates Placed for you...


Gee - didn't they just raise them?

 

I think a better solution would be a "total count of caches" limit instead of mandating five PQs to get just under 5,000 caches. Why not allow users to download 5,000 all at once with one query if they want?

 

But as others will point out, the API will do this now, and give you 6,000 caches.

 

I had always assumed the limits were there to preserve server bandwidth and to prevent people from scooping up the entire database. Remember, the database is really the primary thing GS is "selling". Making large chunks available would bend their business model. It's in everybody's interest to ensure GS has a viable business model moving forward.

Right now, if I'm careful and use the date placed method, I can reasonably get 4994 caches per day. I was saying it would be great if I could get that same allotment (actually 5000) using one query, which would deplete my allotted pocket query results for the day. No change in how much data is pulled, just in having to set up multiple queries to do it.


Gee - didn't they just raise them?

Perhaps another fire, server outage, and apology would be a good thing. Otherwise we would still be at half the current limits.

 

Right now, if I'm careful and use the date placed method, I can reasonably get 4994 caches per day. I was saying it would be great if I could get that same allotment (actually 5000) using one query, which would deplete my allotted pocket query results for the day. No change in how much data is pulled, just in having to set up multiple queries to do it.

 

Yes, one click one time per day only for 5000 caches from a centerpoint would be great. I seriously don't expect to see this, but it would be much easier for users and would provide exactly the same data I can currently retrieve with the additional work. And don't forget about all those PQ tests (without day of week selected) required to establish those dates/limits. How much server resource would be saved if all those tests were no longer necessary? In politics, an absurdity is not an impediment. Seems that list of activities could be expanded a bit.

 

I am only assuming one giant PQ (5000) would be as compatible with a paperless GPS as maxing it out with five smaller PQs (1000) is. Does anyone know?

 

I meant to add that I once suggested the current limit of 1000 caches per PQ be set to 1001 if they really want to advertise a 5000 per day limit. Still no guarantee of 5000, but certainly more likely. It was received in the Forum about as well as suggesting dropping a lead balloon on your foot.

Edited by Cardinal Red

Gee - didn't they just raise them?

 

Yes, but...

 

I think a better solution would be a "total count of caches" limit instead of mandating five PQs to get just under 5,000 caches. Why not allow users to download 5,000 all at once with one query if they want?

 

This would make a lot more sense than allowing multiple queries with a limit per query. As it stands, overlapping queries mean people have to choose between overlapping areas (i.e. caches duplicated) and dead spots (i.e. caches omitted). For some areas searches by date placed might be more effective, but I found it too much of a faff trying to figure out how many queries I needed that wouldn't include caches on the other side of town while excluding caches that are only marginally further in terms of distance but vastly closer in terms of time.

 

But as others will point out, the API will do this now, and give you 6,000 caches.

 

Which is great, if you want to use API-enabled software. It would be nice if there were more functionality provided within the site and less requirement to use third party software to perform what should be pretty simple stuff.


Gee - didn't they just raise them?

 

I think a better solution would be a "total count of caches" limit instead of mandating five PQs to get just under 5,000 caches. Why not allow users to download 5,000 all at once with one query if they want?

 

But as others will point out, the API will do this now, and give you 6,000 caches.

 

I had always assumed the limits were there to preserve server bandwidth and to prevent people from scooping up the entire database. Remember, the database is really the primary thing GS is "selling". Making large chunks available would bend their business model. It's in everybody's interest to ensure GS has a viable business model moving forward.

 

There's actually nothing that can be done to technically prevent people from scooping up huge parts of the database. In theory you could have a fairly small number of premium members producing overlapping queries to cover a vast area, email notifications with a huge radius so you see anything that changes between queries being run, and all fed into a single monolithic database.

 

It's reasonable to limit how much data one person can download in a single day. It's silly to say I can have 5000 caches in a day across 5 queries but I can't have a single query that returns 5000 caches.

 

As someone already said there must be a huge amount of server time wasted by people running queries to see how they overlap, shifting the centre points, running them again, repeating until they get the results they want, then running it for download.

I am only assuming one giant PQ (5000) would be as compatible with a paperless GPS as maxing it out with five smaller PQs (1000) is. Does anyone know?

 

Assuming your GPS can cope with that many caches there's no reason to think not. I wrote a piece of software to combine pocket queries (along with a few other functions I wanted) and if I export 7000 caches or so into a single GPX my Montana can cope just fine. Of course people with GPS units with lower limits could run a query to return 2000 caches, 5000 caches, or whatever else their unit can handle.

I am only assuming one giant PQ (5000) would be as compatible with a paperless GPS as maxing it out with five smaller PQs (1000) is. Does anyone know?

 

Assuming your GPS can cope with that many caches there's no reason to think not. I wrote a piece of software to combine pocket queries (along with a few other functions I wanted) and if I export 7000 caches or so into a single GPX my Montana can cope just fine. Of course people with GPS units with lower limits could run a query to return 2000 caches, 5000 caches, or whatever else their unit can handle.

 

With expansion data cards the number of caches that *can* be stored is staggering. I routinely have 6-7000 with no loss of operability. And my GPS is several years old.


If I instead use the date placed method, placing a series of queries in the center of the state and restricting each first by state and then by date placed, I have no way of knowing whether each query's date range covered the entire state or maxed out at 1000.

If you're wanting to get caches for only your state using this method, it actually makes things even easier. In that case, you wouldn't want to use a radius at all. You'd select your state in the "States/Provinces" list, then select "None Selected" under "From Origin". When you build a PQ for a date range, you'd tweak the dates until the PQ results in just under 1000 caches. That way you know you aren't missing any. Since you've limited it to your state, that PQ will contain all caches in your state within that date range. Repeat the steps for each subsequent date range until you get to present day, at which point you'll want to set the end date as far in the future as you can to capture Events and so you won't have to keep adjusting that end date. You'll want to go in and make minor tweaks every now and then, and you'll have to keep an eye on the newest date range in case it gets close to 1000, in which case you'd add another PQ. This is by far the most efficient method of doing what you want, because there's exactly zero overlap. There also isn't any fiddling around with radii or overlapping circles.

 

The other day I deleted all my queries in my home state and instead created a series of queries based on non-overlapping dates placed. Worked like a charm. Thanks. Now I have a new goal: to start ticking off the oldest caches in the state.


I have a modest proposal. Why not offer a rectangular query? There are hundreds of posts from people vexed about having to deal with overlapping circles, and it would be the simplest thing in the world to make queries for adjacent rectangles instead. I don't think it would matter much to the database whether it calculates from a point and a radius or from two points defining a rectangle, and it might even be mathematically easier than a circle. The whole PQ thing would just be so much easier using rectangles.


Right now, if the pocket query criteria match more than 1000 caches, the ones farthest from the center point are eliminated. In a rectangular query, how would you propose eliminating caches when the criteria gather more than 1000? There are approximately 13,000 caches in the Chicago area; if someone sets up a query that encompasses N 41 to N 43 and W 89 to W 85, which caches would be dropped?
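For the sake of argument, one possible dropping rule would mirror the circular behavior: keep the caches nearest the rectangle's center. A hypothetical sketch (this is an assumption to illustrate the question, not how the site works; it uses a flat lat/lon approximation good enough for ranking):

```python
def rect_query(caches, south, west, north, east, limit=1000):
    # caches: list of (code, lat, lon). Keep caches inside the box; if
    # more than `limit` remain, keep those nearest the box center.
    inside = [c for c in caches
              if south <= c[1] <= north and west <= c[2] <= east]
    clat, clon = (south + north) / 2, (west + east) / 2
    # Squared planar distance is fine for ordering over a small box.
    inside.sort(key=lambda c: (c[1] - clat) ** 2 + (c[2] - clon) ** 2)
    return inside[:limit]
```

Under this rule the truncated result is still a compact blob around the box center, so the "which caches get dropped" question has the same answer as today: the outermost ones.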

 

If all you're looking to solve is the overlapping circles, there's a method already (date placed ranges). No overlap.


If all you're looking to solve is the overlapping circles, there's a method already (date placed ranges). No overlap.

Which is OK, but only to a point.

 

Although I don't live in an area like Chicago there is still the same issue. Generating PQ's by date is fine for older caches, but as you get nearer to the current date the number of caches seems to rise at an exponential rate.

 

I can't remember exactly, but I think I was working on a 50 mile radius, and it was getting to where each PQ covered a date span of only about 4 months to stay below 1000... resulting in lots of PQs!!

 

The way around this is to reduce the radius...but then you come back to the problem of overlapping if you want to cache more than 10 miles away or so?

 

If I lived in a cache dense area it would be even harder.


I have a modest proposal. Why not offer a rectangular query? There are hundreds of posts from people vexed about having to deal with overlapping circles, and it would be the simplest thing in the world to make queries for adjacent rectangles instead. I don't think it would matter much to the database whether it calculates from a point and a radius or from two points defining a rectangle, and it might even be mathematically easier than a circle. The whole PQ thing would just be so much easier using rectangles.

 

It would make a lot more sense to increase the number of caches per pocket query and restrict the number of caches downloadable in a single day if bandwidth is a concern.

 

It would make far more sense to say "show the 5000 caches nearest my home" than fuss with multiple queries to build up the same picture. It would also take away the need to keep fiddling with boundary dates and setting up new queries as caches are found and published, or shifting reference points to minimise overlap.

It would make far more sense to say "show the 5000 caches nearest my home" than fuss with multiple queries to build up the same picture. It would also take away the need to keep fiddling with boundary dates and setting up new queries as caches are found and published, or shifting reference points to minimise overlap.

What he said. :)

