
Processing GPX files/GUID


TheCarterFamily


I keep a database of all the GPX files I download so I can geocache without the internet.

 

I notice the GPX files all use ID numbers (i.e., CacheID, LogID, OwnerID), where the website uses GUIDs.

 

Is there any way to decode the IDs from the GUID, or build a GUID from the data in the GPX file?

 

Michael

Link to comment
Are you looking to build URLs to the cache or cacher?

 

If so, just use: (replace '###' with the ID number from the GPX file)

Cacher

http://www.geocaching.com/profile/?id=###

Cache

http://www.geocaching.com/seek/cache_details.aspx?id=###

Log

http://www.geocaching.com/seek/log.aspx?lid=###
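For anyone scripting this, a minimal sketch in Python (the helper names are made up; the templates simply mirror the URLs above):

```python
# Build geocaching.com links from the numeric IDs in a PQ GPX file.
# Helper names are illustrative; the templates mirror the URLs above.
BASE = "http://www.geocaching.com"

def cacher_url(owner_id: int) -> str:
    return f"{BASE}/profile/?id={owner_id}"

def cache_url(cache_id: int) -> str:
    return f"{BASE}/seek/cache_details.aspx?id={cache_id}"

def log_url(log_id: int) -> str:
    return f"{BASE}/seek/log.aspx?lid={log_id}"

print(cache_url(12345))  # http://www.geocaching.com/seek/cache_details.aspx?id=12345
```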

 

I knew about those URLs. So I take it there is no way to translate the IDs into GUIDs or back again?

Link to comment
The GUIDs are not in the PQ and there is no known way to turn GUIDs into GC IDs and back.

 

Too bad. I guess I have to resort to hitting GC.com pages to get them, i.e., query the cache page (all logs), then pull the LogID from the name for the GUID link. Really annoying. :laughing: Especially if I'm trying to pull all logs for a given user. (It makes for interesting reading to follow a person's geocaching journey.) :)

 

So what's the GUID anyway? How's it built? I figure that, being split into parts, it means something and isn't just a random string. (Like the Oracle ROWID, which is built from a block, table, and data file combination.)

Link to comment

So what's the GUID anyway? How's it built? I figure that, being split into parts, it means something and isn't just a random string. (Like the Oracle ROWID, which is built from a block, table, and data file combination.)

A Globally Unique Identifier, or GUID, is a pseudo-random number used in software applications. Each generated GUID is "statistically guaranteed" to be unique. This is based on the simple principle that the total number of unique keys (2^128, or about 3.4×10^38) is so large that the chance of the same number being generated twice is virtually zero.
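A quick illustration with Python's uuid module (this demonstrates the idea; it is not necessarily how GC.com generates its GUIDs):

```python
import uuid

# A version-4 GUID has 122 random bits (6 of the 128 bits are fixed by the
# version/variant fields), so the key space is astronomically large.
guid = uuid.uuid4()
print(guid)        # a random value in the familiar 8-4-4-4-12 format
print(guid.hex)    # the same value without dashes
print(2 ** 122)    # number of possible random values
```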

Link to comment
The GUIDs are not in the PQ and there is no known way to turn GUIDs into GC IDs and back.

 

Too bad. I guess I have to resort to hitting GC.com pages to get them, i.e., query the cache page (all logs), then pull the LogID from the name for the GUID link. Really annoying. :laughing: Especially if I'm trying to pull all logs for a given user. (It makes for interesting reading to follow a person's geocaching journey.) :laughing:

 

So what's the GUID anyway? How's it built? I figure that, being split into parts, it means something and isn't just a random string. (Like the Oracle ROWID, which is built from a block, table, and data file combination.)

Brilliant idea, announcing in the website developer's forum that you're planning on spidering the site. Really annoying, indeed. :laughing:

Link to comment
Brilliant idea, announcing in the website developer's forum that you're planning on spidering the site. Really annoying, indeed. :D

 

Pocket queries only contain 20 logs... I've been told in the past to use a program like GSAK (the Geocaching Swiss Army Knife) to archive the logs... which is great if I had started doing it in the year 2000. (One example: I was working with Ontario Geocaching to get a list of all their splash caches. They use the publish log to validate the entry. Once 20 people have visited a cache, the publish log is no longer in the GPX file.) If the GPX files contained all the logs... no spidering would be required to get the data... :D

 

Also, for spidering... I put in a random sleep of up to 60 seconds between queries, and I spread some of the queries between Google's cache and a Squid caching server... so I consider any spidering I do a negligible load on GC.com. I'm trying to make sure I don't bog down GC.com at all with this. :D Google isn't even that nice.
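A minimal sketch of that throttling idea (modern Python; the URL list and delay bounds are illustrative):

```python
import random
import time
import urllib.request

def polite_fetch(urls, max_delay=60):
    """Fetch pages one at a time, sleeping a random interval between requests."""
    pages = []
    for url in urls:
        with urllib.request.urlopen(url) as resp:
            pages.append(resp.read())
        time.sleep(random.uniform(1, max_delay))  # random pause, up to 60 s
    return pages
```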

 

I wish GC.com would come up with an API or beef up the Pocket Queries.

Edited by TheCarterFamily
Link to comment
Pocket queries only contain 20 logs... I've been told in the past to use a program like GSAK (the Geocaching Swiss Army Knife) to archive the logs... which is great if I had started doing it in the year 2000. (One example: I was working with Ontario Geocaching to get a list of all their splash caches. They use the publish log to validate the entry. Once 20 people have visited a cache, the publish log is no longer in the GPX file.) If the GPX files contained all the logs... no spidering would be required to get the data... :D

 

Also, for spidering... I put in a random sleep of up to 60 seconds between queries, and I spread some of the queries between Google's cache and a Squid caching server... so I consider any spidering I do a negligible load on GC.com. I'm trying to make sure I don't bog down GC.com at all with this. :D Google isn't even that nice.

 

I wish GC.com would come up with an API or beef up the Pocket Queries.

If you need a list of caches, make a bookmark list.

 

Data suckers may think that their actions are harmless. But multiply this example by thousands of others around the world, and you can start to see why the site has been bogged down and running so slowly.

Link to comment

If you need a list of caches, make a bookmark list.

 

Data suckers may think that their actions are harmless. But multiply this example by thousands of others around the world, and you can start to see why the site has been bogged down and running so slowly.

 

Just a few thoughts on this. What I'm after is the log GUIDs and archived caches, which bookmarks and PQs don't give me for other users past the 20 most recent logs, and not at all for archived caches and GUIDs. So the only way of getting at the data is "sucking/spidering", as you call it, which in this case means using my web browser, in a semi-automated way, to capture cached copies of the pages, GPX files, and logs. (And to save bandwidth, I pull the pages as text only.)

 

Now, since the data is old... we can get it from other sources like Google's cache. Thus I'm not actually spidering/sucking GC.com.

 

As for bogging down the site... let's put something into perspective here.

 

1. Google has hit pretty much every page on GC.com and continues to do so on a daily basis.

2. There are many more search engines out there besides Google doing the same thing.

3. A "sucker/spidering" person is only interested in specific pieces of data.

3a. They tend to go after just the pages (not the graphics).

3b. They typically use ID numbers. ID numbers are almost always primary keys, which means the hit to the database/website is almost non-existent.

3c. Since they are after specific data, they also hit GC.com far less frequently than Google does.

4. There are monitors in place on GC.com to stop someone from spidering too fast. (I've seen a proxy server hit this limit when thousands of people use it at once: it looks like a DoS attack when it's really just many users sharing one IP.)

 

So where is the load coming from? Let's look at this.

1. Querying by GUID or ID: indexes are B-trees, so finding a row tends to be a single operation. Querying a single page takes one hit.

2. Querying all caches within a 1000 km radius of a single point needs to do math. Let's assume the worst: 5000 caches within this radius. The distance test is a nice complicated formula using sin and cos (sketched below), so for 5000 caches the server has to do over 5000 of these calculations. That's where the load is coming from.
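For illustration, a sketch of that sin/cos distance test (Python, using the haversine formula; GC.com's actual SQL is not public, so this is only the shape of the work):

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two WGS84 points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))  # Earth's mean radius is ~6371 km

# A radius search evaluates this (or an SQL equivalent) once per candidate
# cache, while an ID lookup is a single B-tree probe.
```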

 

So if you ask me, anyone who is willing to do these complicated calculations on their own computer is a bonus to GC.com. Was it really suicide to let GC.com know that I'm saving them processor time by offloading the spatial queries onto my own laptop?

 

Oh, and another point... ever notice this at the bottom of the page:

 

Current time: 5/21/2007 4:03:36 AM

Last Updated: 5/21/2007 1:44:00 AM

Rendered: From Database

Coordinates are in the WGS84 datum

 

GC.com caches the pages to speed things up for single-page queries. Range queries (i.e., postal code/ZIP) need to be dynamic every time.

Edited by TheCarterFamily
Link to comment

It's late, so I'm going to vent now.

 

I really wish we had a perfect world... where software did what we need. As a programmer, it always frustrates me when I have the knowledge and the skills... but can't do a thing about it. That's why I started writing my own GPX database, so I can do all the weird and wonderful stuff. But I've always got something or someone blocking me. They want me to conform to their standards.

 

GC.com doesn't work on my cell phone. Doesn't work on my PDA. When it uploads to my GPS it's confusing, as Puzzles are lumped in with Traditionals. Can't get caches near the road. Can't get caches by user XXX so we can follow their journey. Can't know who has the highest find count... so I can strive for it...

 

I could get around all my challenges if I just had the data. But noooo... we get people who say: I work fine with 500 caches in a pocket query, so you can too! Or: I don't cache with a stroller... so you don't need that either. I never need to know if the caches are 100 metres from the road... I don't have a child who sleeps in the car, so you don't need that either.

 

GRRRRRR!!!

 

Just give me an API or beef up the Pocket Queries. I'd pay $10 a month to get 2000 caches in a PQ and an option to pull all the logs as a one-time shot. Or hell, just give me a way of pulling caches by ID so the PQ server doesn't get overloaded doing spatial queries.

Edited by TheCarterFamily
Link to comment

wap.geocaching.com not enough??

 

What PDA do you use? On the PPC platform, try GPX Sonar; it looks just like the site. On Palm, use CacheMate.

 

It's very easy to go through a cacher's journey by simply looking at their found caches; they are listed in the order they were logged. Follow the links. I have done this many times.

 

"Caches along a route" helps to find caches near the road.

 

Use a "smart" program to transfer files to your GPS and you can see all the differnt types just fine.

 

Highest find count: several websites track that, minus those who don't log online. The stance of the site is that this is not a contest, so there are no tools to turn it into one. BTW, the highest count is more than you will EVER find.

 

Am I missing something here??

 

Who wouldn't love to have access to ALL the data? However, it is the only thing they really own besides the website, so they have little incentive to grant wide access.

Link to comment

GC.com doesn't work on my cell phone.

Get a cellphone that handles the wap.geocaching.com site. Better yet, spring a few dollars per month for Trimble Geocache Navigator. That saved my butt today when I unexpectedly found myself at the site of a cache that wasn't in my GPS.

Doesn't work on my PDA.

It works on the PDAs used by thousands, probably tens of thousands, of other users. If you're having trouble, ask in the GPS Units and Software forum.

When it uploads to my GPS it's confusing as Puzzles are lumped in with traditional.

GSAK and GPX Spinner are two pieces of software that can easily solve this for you by customizing the waypoint symbols and waypoint names to tell you just the information you need.

Can't get caches near the road.

Try the "caches along a route" feature, and/or a pocket query on some of the park and grab attributes, like "kid friendly" or "takes less than an hour" or a terrain level of 2 or lower.

Can't get caches by XXX user so we can follow their journey.

Try "It's Not About the Numbers" (www.itsnotaboutthenumbers.com) or the new "friends" feature.

Can't know who has the highest find count... so I can strive to it...

Try CacherStats.com, another tolerated spider. Good luck on passing Team Alamo or CCCooperAgency.

Link to comment

Get a cellphone that handles the wap.geocaching.com site.

It's not like Motorola is an unknown in the comms biz. The complaints of problems with the WAP site on Moto gear have been prominent for a long time. Saying that users should shirk their service contracts (or denying the reality that most users have entered a long-term service contract with their providers and/or should chuck that equipment) is just buck-passing.

 

A substantial percentage of the spidering discussion we see in the forums is about bypassing technology deficiencies of this site. The focus is rarely on "how can we reduce the perceived reason to spider"; it much more typically reduces to "programmers with ideas are naughty", and that's just not healthy.

 

I find the recurring "I can log smileys with only the tools/technology that the site provides that happen to work for me - why can't you?" mantra in the forums to be tedious. Any challenge to the status quo seems to be met with "but XXX could find a bazillion caches without becoming an evil page scraper", and it's tiresome. I'm certainly not saying that every page-scraping wannabe is justified, but the "pile on anyone who dares to question the way things have always been" isn't great, either.

 

How does the friends thing, introduced this very week, help solve the "follow the journey" thing? I don't see that as a solution to the question that was asked as it stands now.

Link to comment

GC.com doesn't work on my cell phone.

Get a cellphone that handles the wap.geocaching.com site. Better yet, spring a few dollars per month for Trimble Geocache Navigator. That saved my butt today when I unexpectedly found myself at the site of a cache that wasn't in my GPS.

Doesn't work on my PDA.

It works on the PDAs used by thousands, probably tens of thousands, of other users. If you're having trouble, ask in the GPS Units and Software forum.

When it uploads to my GPS it's confusing as Puzzles are lumped in with traditional.

GSAK and GPX Spinner are two pieces of software that can easily solve this for you by customizing the waypoint symbols and waypoint names to tell you just the information you need.

Can't get caches near the road.

Try the "caches along a route" feature, and/or a pocket query on some of the park and grab attributes, like "kid friendly" or "takes less than an hour" or a terrain level of 2 or lower.

Can't get caches by XXX user so we can follow their journey.

Try "It's Not About the Numbers" (www.itsnotaboutthenumbers.com) or the new "friends" feature.

Can't know who has the highest find count... so I can strive to it...

Try CacherStats.com, another tolerated spider. Good luck on passing Team Alamo or CCCooperAgency.

 

My point exactly. I just choose to get around it by processing PQ GPX files and the Squid cache files of any cache page I've clicked on, or that anyone else using my proxy server has.

Link to comment
wap.geocaching.com not enough??

 

What PDA do you use? On the PPC platform, try GPX Sonar; it looks just like the site. On Palm, use CacheMate.

 

It's very easy to go through a cacher's journey by simply looking at their found caches; they are listed in the order they were logged. Follow the links. I have done this many times.

 

"Caches along a route" helps to find caches near the road.

 

Use a "smart" program to transfer files to your GPS and you can see all the differnt types just fine.

 

Highest find count: several websites track that, minus those who don't log online. The stance of the site is that this is not a contest, so there are no tools to turn it into one. BTW, the highest count is more than you will EVER find.

 

Am I missing something here??

 

Who wouldn't love to have access to ALL the data? However, it is the only thing they really own besides the website, so they have little incentive to grant wide access.

 

 

I'm tired of explaining. I have an 11-year-old PDA, a Sharp. Most of you say I should spend more money. Which would you do: buy new equipment or make new caches? I can't afford both.

 

Also, I have a solution that currently works. I was trying to make it more efficient by asking about the GUID. With my efforts I'm reducing the hits to GC.com (by using GPX data rather than the website). My goal is to reduce my hits down to just logging finds and processing new caches.

Link to comment

I'm tired of explaining. I have an 11-year-old PDA, a Sharp. Most of you say I should spend more money. Which would you do: buy new equipment or make new caches? I can't afford both.

I'd say hop on eBay and pick up a Palm Pilot that can do all the caching stuff for ~$20.

Link to comment

I have trouble understanding exactly what the OP wants. It looks like he is trying to keep an offline database to reduce his need to hit Geocaching.com. Perhaps with the continuing site performance improvements, he won't feel such a need to do this. Geocaching.com is rightfully concerned with people trying to keep substantial parts of the database offline and reduce their hits on Geocaching.com. Part of their revenue is the sale of advertising, and the rates are probably based on the number of hits to Geocaching.com pages.

 

However, TPTB also recognize that they provide a service to geocachers, and in particular to premium members. PQs are provided to premium members to allow some limited offline processing. While many members (including myself) try keeping an offline database of caches in their area, the intent is really to use this to get a limited amount of data that can be used for planning geocaching trips and massaging the data for use in your GPS and PDA or laptop. Many members of the geocaching community have used their programming skills to provide third-party tools that can help in your geocaching experience. Groundspeak has worked with these developers in defining what data is in the GPX files and has even, on occasion, allowed some limited access to Geocaching.com from these tools. I suspect that when the performance issues are addressed, Jeremy will provide some kind of API, perhaps limited to premium members, to allow applications to access the database more directly. This will involve balancing Groundspeak's objectives in protecting its database and its advertising revenue with its need to satisfy the geocaching community, who are its users.

Link to comment

I'm tired of explaining. I have an 11-year-old PDA, a Sharp. Most of you say I should spend more money. Which would you do: buy new equipment or make new caches? I can't afford both.

I'd say hop on eBay and pick up a Palm Pilot that can do all the caching stuff for ~$20.

Yup . . . that's what I would do. A friend got a Palm M500, like the one I paid more than $200.00 for a few years ago, for only $30.00 on eBay, and that included the shipping. :)

Link to comment
I have trouble understanding exactly what the OP wants. It looks like he is trying to keep an offline database to reduce his need to hit Geocaching.com. Perhaps with the continuing site performance improvements, he won't feel such a need to do this. Geocaching.com is rightfully concerned with people trying to keep substantial parts of the database offline and reduce their hits on Geocaching.com. Part of their revenue is the sale of advertising, and the rates are probably based on the number of hits to Geocaching.com pages.

 

However, TPTB also recognize that they provide a service to geocachers, and in particular to premium members. PQs are provided to premium members to allow some limited offline processing. While many members (including myself) try keeping an offline database of caches in their area, the intent is really to use this to get a limited amount of data that can be used for planning geocaching trips and massaging the data for use in your GPS and PDA or laptop. Many members of the geocaching community have used their programming skills to provide third-party tools that can help in your geocaching experience. Groundspeak has worked with these developers in defining what data is in the GPX files and has even, on occasion, allowed some limited access to Geocaching.com from these tools. I suspect that when the performance issues are addressed, Jeremy will provide some kind of API, perhaps limited to premium members, to allow applications to access the database more directly. This will involve balancing Groundspeak's objectives in protecting its database and its advertising revenue with its need to satisfy the geocaching community, who are its users.

 

This was the best response I've seen. GC.com doesn't get any ad revenue from me... I never click on the ads. But I do pay my $3. If they wanted to tier the accounts so that the API was linked to a $6 or $10 account, I'd be willing to get on board with that.

 

Also, on offline DBs: the best place for a cache is where? A forest. Not every place on the planet has an internet access port in a tree. I have reloaded my GPS/PDA from the car on many occasions. The best motto I ever learned was "Be prepared" (yes, Scouts). So for me, the more data I have, the more prepared I am for any situation.

 

On the revenue side... here's an idea for TPTB. Sell GC.com region GPX files (i.e., one massive file for all of Canada, like buying MapSource from Garmin). Then we could load the mass file into our offline systems and use PQs to update specific regions. I'd buy it. It also has the benefit of offloading some of the load: it could be generated once a week or once a month, and it's not dynamic.

Link to comment

On the revenue side... here's an idea for TPTB. Sell GC.com region GPX files (i.e., one massive file for all of Canada, like buying MapSource from Garmin). Then we could load the mass file into our offline systems and use PQs to update specific regions. I'd buy it. It also has the benefit of offloading some of the load: it could be generated once a week or once a month, and it's not dynamic.

 

Geocaching data is dynamic. Every day there are new caches, and old caches are archived. Caches can also be moved (a small distance, without requiring reviewer intervention), temporarily disabled, and re-enabled. Some cachers like having the latest logs to know about the condition of the cache or whether there are any travel bugs. If you could buy a region, how old of information would you use? Geocaching.com prefers that you don't use stale information, because sometimes a cache is moved or archived because an angry landowner has asked for the cache to be removed from his property. Do you want to explain to him why you are hunting a cache that hasn't been there for weeks?

The current PQ policy allows you to download up to 2500 caches per day. You can load them into your laptop and update your GPS if you go to an area other than where you originally were going to be caching. Unless you live in as cache-dense an area as I do, you can pretty much cover the areas you might be going to with 2500 caches. If you want more, just use two days' worth of pocket queries. (I'm sure if you hunt something that was archived just yesterday, you can explain that you can't always get the latest data.)

Jeremy has made it pretty clear that he doesn't want people to have offline databases for cache hunting and isn't likely to sell the data at any price. I could see him having an API for people who are out in the woods and need to see the latest caches. Oh, wait; he does, at least if your name is Trimble.

Link to comment

What would happen when I was caching with stale data was a concept that took me a while to grasp. When I first started caching, I thought once I had the waypoint in my GPSr, I could leave it there until I found the cache. Wrong! :laughing: I have hunted caches Archived a week earlier, two days earlier, or earlier the same day. I would have much preferred not to have wasted that time. :anicute:

 

Now, I try to get a fresh PQ for the area I am heading towards the day before I set out. When I was traveling, I found Wi-Fi hotspots in the most unlikely of locations, like a second-hand store on lonely Colorado Highway 17. Whenever I found a Wi-Fi hotspot, and also needed a break from the road, I downloaded new PQs for the area I was in, or the area I was headed towards.

 

Easy peasy! And no more wasted hunts for Archived or Disabled caches. :wub:

Link to comment
What would happen when I was caching with stale data was a concept that took me a while to grasp. When I first started caching, I thought once I had the waypoint in my GPSr, I could leave it there until I found the cache. Wrong! :D I have hunted caches Archived a week earlier, two days earlier, or earlier the same day. I would have much preferred not to have wasted that time. :D

 

Now, I try to get a fresh PQ for the area I am heading towards the day before I set out. When I was traveling, I found Wi-Fi hotspots in the most unlikely of locations, like a second-hand store on lonely Colorado Highway 17. Whenever I found a Wi-Fi hotspot, and also needed a break from the road, I downloaded new PQs for the area I was in, or the area I was headed towards.

 

Easy peasy! And no more wasted hunts for Archived or Disabled caches. :D

 

 

All I can say is re-read the post. You missed something! :D

Link to comment

On the revenue side... here's an idea for TPTB. Sell GC.com region GPX files (i.e., one massive file for all of Canada, like buying MapSource from Garmin). Then we could load the mass file into our offline systems and use PQs to update specific regions. I'd buy it. It also has the benefit of offloading some of the load: it could be generated once a week or once a month, and it's not dynamic.

 

Geocaching data is dynamic. Every day there are new caches, and old caches are archived. Caches can also be moved (a small distance, without requiring reviewer intervention), temporarily disabled, and re-enabled. Some cachers like having the latest logs to know about the condition of the cache or whether there are any travel bugs. If you could buy a region, how old of information would you use? Geocaching.com prefers that you don't use stale information, because sometimes a cache is moved or archived because an angry landowner has asked for the cache to be removed from his property. Do you want to explain to him why you are hunting a cache that hasn't been there for weeks?

The current PQ policy allows you to download up to 2500 caches per day. You can load them into your laptop and update your GPS if you go to an area other than where you originally were going to be caching. Unless you live in as cache-dense an area as I do, you can pretty much cover the areas you might be going to with 2500 caches. If you want more, just use two days' worth of pocket queries. (I'm sure if you hunt something that was archived just yesterday, you can explain that you can't always get the latest data.)

Jeremy has made it pretty clear that he doesn't want people to have offline databases for cache hunting and isn't likely to sell the data at any price. I could see him having an API for people who are out in the woods and need to see the latest caches. Oh, wait; he does, at least if your name is Trimble.

 

Re-read the post! You missed something!

Link to comment

Funny; the question was whether the GUID or ID could be derived so I didn't have to click on the page to find it. This topic has gotten so far off topic it's unreal.

 

Moderator,

 

Can you move this to the Off-Topic area, since it now has nothing to do with the GC.com site?

 

To answer the offline-DB part:

 

The questions we ask are an attempt to find better ways to keep it up to date. If we are seeking the offline-DB option, then we have a valid reason and have exhausted all other options. So if you're not going to help... STAY OUT! We shouldn't have to justify ourselves to 1,000 people every time we need to ask a question.

 

My offline DB is only ever 12 hours out of date, thanks to the pocket queries. I can knock that down to 1 hour if I click on all the caches I might visit, then take my Squid cache of the pages I visited and dump the log entries into my offline DB.

Link to comment

OK, I went back and re-read this thread. It seems the OP isn't trying to keep an offline database. Instead, he is trying to cache the web pages for some number of caches so he can call them up while caching when he doesn't have an Internet connection. It looks like he wants to process his GPX file to call up the web pages and load them into his cache. In principle this is no different from the user who prints out the web pages for all the caches before going out on a cache hunt. I suspect if he just hit each page manually before disconnecting from the internet he wouldn't be violating any TOU. So the question is: what kind of automated processing is allowed to cache the web pages for access when you are not connected to the internet?

 

I know that he runs Linux and hasn't looked at GSAK, which can generate HTML files from the GPX. Perhaps someone has a Linux application that can do this. Also, since the GPX file doesn't include graphics, you miss any photos or graphics on the page, so it's not a complete solution. Most of the cachers I know who do paperless caching are able to get by with just the information in the GPX file. There are some who use GSAK to collect more than the last four logs, in case some older log had useful information. And there is at least one program (for Windows) that uses the GPX file to grab the photos off each cache page - generally considered a violation of the TOU. I don't have any suggestions for the OP other than to read the Terms of Use and decide what might or might not be allowed.

 

Oops - I re-read the last post and see that he has another step of loading the caches into an offline database. I'm not sure what he uses this for.

Edited by tozainamboku
Link to comment

Funny; the question was whether the GUID or ID could be derived so I didn't have to click on the page to find it. This topic has gotten so far off topic it's unreal.

Is there any way to decode the IDs from the GUID?

No, there isn't.

...or build a GUID from the data in the GPX file?

The GUIDs of the caches are already in the GPX files. They're built into the <url> tags, so you'd just have to extract them from the URL itself. The cache id is the id attribute on the <groundspeak:cache> tag.
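A rough sketch of that extraction (Python; it assumes the <url> value carries a guid= parameter as described above, the namespace URIs are the usual Topografix/Groundspeak 1.0 ones — adjust for your PQ version — and the file name is a placeholder):

```python
import re
import xml.etree.ElementTree as ET

GPX = "{http://www.topografix.com/GPX/1/0}"      # GPX 1.0 namespace
GS = "{http://www.groundspeak.com/cache/1/0}"    # Groundspeak extensions

for wpt in ET.parse("my_pq.gpx").getroot().iter(GPX + "wpt"):
    url = wpt.findtext(GPX + "url", default="")
    m = re.search(r"guid=([0-9a-fA-F-]+)", url)  # cache GUID from the <url>
    cache = wpt.find(GS + "cache")
    cache_id = cache.get("id") if cache is not None else None
    print(cache_id, m.group(1) if m else None)
```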

Link to comment

Funny; the question was whether the GUID or ID could be derived so I didn't have to click on the page to find it. This topic has gotten so far off topic it's unreal.

Is there any way to decode the IDs from the GUID?

No, there isn't.

...or build a GUID from the data in the GPX file?

The GUIDs of the caches are already in the GPX files. They're built into the <url> tags, so you'd just have to extract them from the URL itself. The cache id is the id attribute on the <groundspeak:cache> tag.

 

I guess with all the clutter in the topic you missed the fact that I'm missing the finder/owner GUIDs and log GUIDs. They're not in the GPX file. I know where the cache GUID is.

Link to comment

I guess with all the clutter in the topic you missed the fact that I'm missing the finder/owner GUIDs and log GUIDs. They're not in the GPX file. I know where the cache GUID is.

Ah, you're right... I missed that part.

 

Then no, there isn't any mathematical way for you to convert those log numbers into GUIDs or vice versa. You would need access to a database table that stores both.
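If you're building such a table yourself, a minimal sketch (Python with sqlite3; the table and file names are made up) that records each ID/GUID pair as you encounter it:

```python
import sqlite3

# Since IDs and GUIDs can't be converted mathematically, keep a local
# lookup table and record each pair as you come across it.
con = sqlite3.connect("gc_ids.db")  # illustrative file name
con.execute("CREATE TABLE IF NOT EXISTS log_guid (log_id INTEGER PRIMARY KEY, guid TEXT)")

def remember(log_id, guid):
    con.execute("INSERT OR REPLACE INTO log_guid VALUES (?, ?)", (log_id, guid))
    con.commit()

def guid_for(log_id):
    row = con.execute("SELECT guid FROM log_guid WHERE log_id = ?", (log_id,)).fetchone()
    return row[0] if row else None
```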

Link to comment
This topic is now closed to further replies.