Everything posted by Avernar

  1. You may want to start a new feature request for that one. Tacking it onto an unrelated request isn't going to do much.
  2. It's still a good idea to do so as some people search for Chirp caches using Pocket Queries. The attribute would attract more cachers to visit.
  3. That will show them! They'll have to settle for your id showing up on their cache page when you log it!
  4. WinRAR can open ISO files and extract them to a folder.
  5. That confused me at first, but the "traditional first stage" in the topic title implies he was talking about a multi-cache, and what he really meant was the physical first stage.
  6. Yes. Just list it as a Multi with the Beacon attribute. Not sure if having CHIRP in the cache name is still a requirement or not.
  7. That's good to hear. Once the database is updated no changes to the GPX files would be needed as they are already Unicode with UTF-8 encoding. Not sure why CDATA sections were mentioned as they have nothing to do with character encoding. All they do is let you include lots of <'s and &'s without having to escape them.
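A quick way to see this (a minimal sketch in Python, standard library only; the element name and sample text are made up): the same content can be carried either with entity escaping or inside a CDATA section, and a parser recovers identical text from both, regardless of the file's character encoding.

```python
import xml.etree.ElementTree as ET
from xml.sax.saxutils import escape

raw = "Use <multi> hints & clues"

# Option 1: escape every markup character individually.
escaped = f"<desc>{escape(raw)}</desc>"

# Option 2: wrap the text in a CDATA section; no escaping needed
# (as long as the text doesn't contain the "]]>" terminator).
cdata = f"<desc><![CDATA[{raw}]]></desc>"

# Both parse to exactly the same text; CDATA is purely a convenience.
assert ET.fromstring(escaped).text == ET.fromstring(cdata).text == raw
```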
  8. 1) So what you're saying is your guess is better than mine even though we know that ASP.NET is the current system they use. Nice.

2) I thought you said in #1 that you don't know how the API works. Now you're stating as a fact that the performance difference between TransmitFile and ReadFile/WriteSocket is negligible in the API. Groundspeak has already told us that the unzipping in memory is the bottleneck.

3) And during the 6-12 months it takes them to develop this hybrid system (which sounds like a neat idea) I'd still like to download the My Finds PQ.

4) Wrong again. PQs aren't ZIP files now, and GSAK still writes them to a file before processing. Why? Because it was the easiest and fastest way to do it. GSAK already has code to process GPX files on disk. He reused existing code. That's why GSAK 8 was released a little while ago instead of half a year from now. But even if he spent the time to get it to dump GPX files directly into the database, he could still do it if they were zipped. As I said above, ZIP files can be uncompressed while streamed.

So you'd rather wait months or years for the "proper solution" instead of using a "quick fix" in the meantime. I'd rather have the quick fix so I can use it now.

That works perfectly for me, the forum limit is 10! Nice to know I have an easy way to win an argument.
  9. Sounds like you've never done any database programming. For one-time requests you're not going to hit any in-memory indexes or cached DB rows. Most PQs are set to exclude my finds and the My Finds PQ will be the opposite. Both of which essentially grab a random set of caches from the database. Therefore it's going to be I/O bound.

I get it, but you don't seem to know how data is requested over the internet. You're NOT calling a function!!! You send a bunch of bytes down a TCP/IP connection to request what you want. For Opencaching it's a standard HTTP request; the Geocaching API might be the same or it might be proprietary. Doesn't matter. The response from the server is a bunch of bytes down a TCP/IP connection. For Opencaching it's JSON for most requests and XML for GPX files. The data can be generated on the fly from the database, stored in a memory cache or sent from a pregenerated file. Now here's the part that you don't get: if the data is already zipped then you can just send it as is. The fact that you're using an API doesn't stop you from sending data from a file, zipped or otherwise.

The Geocaching API is no more an API than HTTP or FTP is. It's a protocol. Unless Groundspeak provided a code library that you link into your program it is NOT an API!!! "Groundspeak API" just sounds way cooler than "Groundspeak Database Access Protocol".

I 100% agree that the PQ generators don't need to ZIP the data. But the so-called API would still send a file, only as text/xml. Because it's not an API. It's a protocol. It's more than likely HTTP anyways. I'll have to hunt down the API document to find out for sure if it's HTTP like the Opencaching protocol. Why use it over FTP? Same reason you use NNTP, SNMP, POP, IMAP, etc. over FTP. They've been designed for a specific purpose so they're more efficient at requesting the data you want.

No you don't. Now I know you've never uncompressed a ZIP file on the fly before. I have. Each file has a local header in front of it.

No. It lets you add files to an archive without re-writing the whole thing. While not as important now that disks are freaking huge, back when ZIP was created it was a very good idea.

No. Writing data to a disk makes a file.

I agree. But you keep thinking in terms of a brand new implementation. And when the API is complete then you'd be 100% correct. But it's not. Right now PQs are still necessary. There's already infrastructure in place to generate and ZIP them. Having the API send them as is instead of unzipping them is a good middle step. You can't just develop a perfect system and expect everyone to switch to it instantaneously. The IT world doesn't work that way. During the transition things are not going to be done in the most optimal way.

You've got an oddball definition of dynamic content. Using that definition all files would be considered dynamic. PQs are static. They're generated once when they run and can be downloaded more than once up to 7 days later. During that time the data doesn't change. That's static content, no ifs, ands, or buts. For dynamic content, yes, generating it on the fly and sending it right away is the best way to do it. That's what the Get Caches function does! PQs are not generated on the fly. The PQ previews are, however. Each time you preview a PQ you get data that was generated on the fly.

I am not missing your point: don't ZIP it, don't store it on disk. Got it. Please stop saying that I don't get it. Now can you get my point: this is what the Get Caches API call does. But until the distance and results/day limits are majorly increased it can't be used to get all of your finds.

GSAK would unzip it and process it like any other downloaded zipped PQ. That's what it does with the unzipped GPX files it grabs with the API too. What I would like to see is GSAK grab my finds through the API and dump them directly into its SQLite database. Unfortunately the API doesn't allow that because of the distance and caches/day limits. I can't even replace my regular PQs with the API because of the 50 km distance limit, never mind the My Finds. If Groundspeak could remove those limits in a month then we wouldn't need to download PQs. But I predict it's going to take them a while to scale up their infrastructure before that happens.
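The local-header point above can be demonstrated with a short sketch (Python, standard library only; the in-memory archive is just a stand-in for a downloaded PQ): each ZIP entry is preceded by a 30-byte local file header, so the deflated payload can be decompressed chunk by chunk as it streams in, without ever consulting the central directory at the end of the archive.

```python
import io
import struct
import zipfile
import zlib

# Build a small ZIP in memory to stand in for a downloaded PQ.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("pq.gpx", "<gpx>" + "cache data " * 100 + "</gpx>")
data = buf.getvalue()

# Parse the local file header that precedes the entry's payload
# (fixed 30 bytes, then variable-length file name and extra field).
(sig, ver, flags, method, mtime, mdate, crc,
 csize, usize, nlen, xlen) = struct.unpack("<IHHHHHIIIHH", data[:30])
assert sig == 0x04034B50          # "PK\x03\x04" local header signature
start = 30 + nlen + xlen

# Stream-decompress the deflated payload in small chunks: no temp
# file, and no need to have the whole archive before starting.
d = zlib.decompressobj(-15)       # raw deflate, as stored in ZIP entries
out = b""
for i in range(start, start + csize, 64):
    out += d.decompress(data[i:min(i + 64, start + csize)])
out += d.flush()
```

The 64-byte chunk size is only for illustration; a real downloader would feed the decompressor whatever each network read returns.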
  10. I'm not saying it shouldn't be implemented. Just clarifying that there is a way to prevent PQs from being emailed.
  11. Sounds like you've never done any database programming. Reading from a database is I/O bound, which means the CPU is sitting there twiddling its silicon thumbs. If you compress the data as it comes in you're just using this wasted CPU idle time. But this point is moot. The PQ generators already ZIP the results so we're not putting any additional load anywhere.

I'm the one who should be rolling my eyes at that statement. Why do you have to take things so literally? They called it an API. Doesn't mean it is an API in the strictest sense of the acronym. It's really a protocol, as it's used to transfer data over a network.

I don't understand your fixation on "files". A file is just content stored on disk. That's it. Nothing special. The application receiving the content over the network can put it in a file if it wants to, or it could process it all in memory, either all at once or as the data arrives. I really don't care what data format the "API" sends. It can be uncompressed text or it can be zipped text. I can store it in a file or not. Groundspeak can store it in a file first or it can generate it on the fly. It doesn't matter. Just like in HTTP, a web server can send content generated on the fly or it can send files with the same call. Guess which type of content it can send with less overhead on the server? Yup, files on disk.

The point that you are either completely missing or completely ignoring is that the file already exists and is already zipped. It makes more sense to create another call in the API/protocol to just send the file as it is instead of doing all this dev work storing the data a second time in the database.

No, I'm interested in getting the file. That's why it's the PQ retrieval function and not the Get Caches function. Eventually, when the limits of the Get Caches function are expanded to what a PQ can do, we can get rid of the PQ retrieval function.

1) Wrong. ASP.NET has a TransmitFile call.

2) Huh? All web servers can bulk transmit files. It's what they're best at and why static web sites are so fast.

3) No argument there. And when the Get Caches call duplicates the function of a PQ we can do away with the Get PQ call.

4) I'm quite sure that GSAK writes it to a file and processes it just like any other GPX file.

I fully agree that the proper way is to make a Get My Finds call that returns data like the Get Caches call. But this is more work than a quick fix to the Get PQ call to just return the file as is. The Get PQ call is more than likely temporary anyways, until they can get the performance of the Get Caches call up to what's needed to make a Get My Finds feasible. Currently, with a 6000 cache per day limit on the Get Caches call, getting my finds would pretty much wipe that out.
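The compress-during-I/O-wait idea can be sketched as follows (Python, standard library; the `caches` table and `gpx` column are hypothetical stand-ins for the real schema): each row is fed to the compressor as it comes off the cursor, so the compression work overlaps the database I/O instead of being a separate pass afterwards.

```python
import sqlite3
import zlib

def zipped_stream(conn):
    """Yield compressed chunks while rows stream off the cursor."""
    comp = zlib.compressobj()
    for (fragment,) in conn.execute("SELECT gpx FROM caches"):
        chunk = comp.compress(fragment.encode())
        if chunk:                     # the compressor may buffer small inputs
            yield chunk
    yield comp.flush()                # emit whatever is still buffered

# Tiny in-memory database standing in for the real cache table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE caches (gpx TEXT)")
conn.executemany("INSERT INTO caches VALUES (?)",
                 [(f"<wpt id='{i}'/>",) for i in range(50)])

payload = b"".join(zipped_stream(conn))
assert zlib.decompress(payload).decode().startswith("<wpt id='0'/>")
```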
  12. I would just list that as a Mystery with the Field Puzzle attribute. The requirement for internet access would then be in the description. Most people either don't have or don't check the attributes when cache hunting. They get very annoyed when they start a multi and then find out half way through that they can't finish it.
  13. Just request more than 500 caches in the PQ and it won't be emailed to you. This won't work for PQs based on bookmarks however as you can't set the number of caches.
  14. Really? I'm not aware of any smartphones that can communicate with ANT devices. Please enlighten me. Here you go: GeoBeacon: Garmin Chirp – on an iPhone (or iPad)
  15. Beacon does not mean only Chirp. There are many other types of beacons needing even more specialized equipment. The capability to find a Chirp is quite common.
  16. Looked up Melissa's cache history. Followed her path exactly. Surprise, surprise. I didn't run into Melissa. I did have fun finding all the caches she's been at. All joking aside, there are much easier ways of finding someone than with a list of where they have been. You're more likely to find them at a cache they DNFed as they are likely to return there and those are not listed in their profile.
  17. I'm wondering the same thing about you. So instead of formatting it as GPX you're going to format it as JSON. Still going to use the same amount of CPU time either way. So the only issue is zipping the data. That just becomes a tradeoff between a little CPU usage on the PQ generator and the bandwidth used to send the data. It's user selectable anyways.

As for the API not being FTP, why not? It should be for sending files. You ask for a PQ and it should send it as a byte stream and not convert it to another form first.

Wrong! It needs to load the index into memory to find the record. Then it needs to read the DB row. Since the data won't fit into one DB row it will be stored out of line somewhere else in the DB. Then it needs to read the BLOB (called different things in different DBs) data into RAM and copy the data portion into a send buffer. Then it needs to call the socket send function. All the while causing many kernel/userspace transitions. Lots of CPU, lots of I/O and lots of RAM. To send a file is one Win32 API call. The kernel reads the file into its I/O cache (which is already allocated) and tells the network card to DMA it out. Very little CPU, a little bit of I/O, a little bit of RAM and one userspace/kernel transition.

The filesystem is a simplified database anyways. You're arguing that stress on the filesystem is a bad thing while ignoring the stress on the database. The filesystem has been optimized to find files. A database needs to be generic to find many different types of data.

How is that any less expensive than combing the complete worldwide list of caches for the result set to send through the API? It's not. It doesn't matter what the returned data format is (GPX, JSON or binary). My point is that saving it to a file for direct download is less CPU/IO/RAM intensive than storing it in a database.
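The "one call sends the file" path can be sketched with the standard library (a rough analogy only: `socket.sendfile()` uses the kernel's `sendfile(2)` where available, which is the Unix cousin of Win32 `TransmitFile`, and quietly falls back to ordinary read/send loops elsewhere; the file contents here are made up):

```python
import os
import socket
import tempfile

# A pregenerated file on disk, standing in for a zipped PQ.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"<gpx>pregenerated pocket query</gpx>")
    path = f.name

# A connected pair of sockets, standing in for server and client.
server, client = socket.socketpair()

# One high-level call: the kernel moves the bytes from the file to
# the socket without bouncing them through userspace buffers.
with open(path, "rb") as f:
    server.sendfile(f)
server.close()

received = client.recv(4096)
client.close()
os.unlink(path)
```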
  18. He wants this location set automatically from the coordinates entered. For example, if I enter coordinates for Quebec the site should change the location to Canada/Quebec automatically and not default to Canada/Ontario. It defaults to the Country and State/Province from the mailing address in your profile.
  19. Huh? Can you please explain how not generating the file first does not use the CPU? If it's accessing the database it's using the CPU. My point is that PQ generation is not time critical. If the PQ generator is busy then the PQs take longer to generate. Generating the PQ and sending the file through the API are two separate things. Downloading the PQ is just downloading the PQ. It doesn't trigger the generation. That would be a new API call. That's probably why downloading the My Finds through the API isn't a high priority. You have to visit the website to generate it anyways, might as well download it there too.
  20. That's a different feature: child waypoints. In GSAK you can call them whatever you want and they don't change the posted coordinates. Corrected coordinates however do change the posted coordinates. That's their purpose. No!!! That would let people "battleship" the final coordinates.
  21. You're still missing my point: 1) Transferring a file from a disk = very low CPU/RAM usage 2) Transferring data from a database = medium to high CPU/RAM usage
  22. This is compatible with the old GPX schema. This is the preferred solution but would only be in the new GPX schema.
  23. Except that there's already a separate PQ generator box dedicated to that. The PQ generator doesn't have to be real time. If a hundred people hit the generate My Finds button then they're just queued up and processed over time. The API needs to respond quickly without timing out.
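A toy sketch of that decoupling (Python, standard library; all names are illustrative, not Groundspeak's actual design): requests enqueue a job and return immediately, while a background worker drains the queue at the generator's own pace.

```python
import queue
import threading

jobs = queue.Queue()
results = []

def worker():
    # Background "PQ generator": processes jobs one at a time,
    # independent of how fast they were enqueued.
    while True:
        user = jobs.get()
        if user is None:              # sentinel: shut the worker down
            break
        results.append(f"pq-for-{user}")  # stand-in for real generation
        jobs.task_done()

t = threading.Thread(target=worker)
t.start()

# A hundred "generate My Finds" clicks: each returns as soon as the
# job is queued; nobody waits on the generator itself.
for user in range(100):
    jobs.put(user)

jobs.put(None)
t.join()
```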
  24. No, that won't reduce server load. Windows has a TransmitFile function that sends files over TCP/IP with extremely little CPU overhead. Sending a file from a disk this way is one of the least expensive operations when it comes to server load.