EngPhil

Premium Members · 253 posts

Everything posted by EngPhil

  1. GC4M20C "Bond; Lake Bond." instead of GC3CZJB "Bond, Lake Bond".
  2. http://forums.Groundspeak.com/GC/index.php?showtopic=329891
  3. OMG wow @TheWeatherWarrior that would be, like, so #freaking #cool I am totes for this #yes #bestideaEVAR #makeitso.... ...... erm... To quote the great Eric Bogle, "No, no, a t'ousand times no." I'm with cerberus1. The forums and (more importantly) cache logs aren't Twatter, and for that, at least, I am thankful.
  4. "Found by User" seems to be a badly named column. My understanding was, at least when viewing another user's finds, that this column contained (in black) the date that user found it, along with (in green) the date you found it, if applicable. It seems that the behaviour is different when viewing your own finds, yet the column is still named the same. By definition, if it's "found by user", then the black and green dates will always be the same (assuming unique finds at least), which makes one of them redundant. So having the black date switch to "last found by any other user" kinda makes sense, but (assuming that's what it's doing), the heading should be updated to reflect this.
  5. Surely not. That would make absolutely no sense. They even have an attribute for the beacon. If we assume there's only one kind of beacon, then the mere presence of this attribute is surely an endorsement of that beacon type (and mention of its name is irrelevant as there's no other alternative.) If we assume there are multiple beacon types, then surely the seeker would need to know which kind of beacon it was to ensure his/her device was capable of reading it. Either way, banning mention of the name would be ridiculous.
  6. A hack I've used (for external apps that can search on "corrected coords"): if it's a field puzzle, "correct" the coords to be the same as the published coords.
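The hack above can be sketched as follows (the cache records here are hypothetical dicts, not the real Groundspeak data model; real apps would read corrected coordinates from a GPX file or the API):

```python
def apply_field_puzzle_hack(caches):
    """Give field-puzzle caches corrected coords equal to published coords."""
    for cache in caches:
        if cache.get("type") == "Mystery" and cache.get("field_puzzle"):
            # For a field puzzle, the published coords ARE ground zero,
            # so "correcting" to them is safe, and makes the cache show
            # up in apps that search on "has corrected coordinates".
            cache["corrected_coords"] = cache["published_coords"]
    return caches

caches = [
    {"code": "GC11111", "type": "Mystery", "field_puzzle": True,
     "published_coords": (-33.86, 151.21)},
    {"code": "GC22222", "type": "Traditional", "field_puzzle": False,
     "published_coords": (-33.87, 151.20)},
]
apply_field_puzzle_hack(caches)
```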
  7. For (most?) Polish folks, geocaching in Poland is no problem, but puzzle caches in the USA may be almost impossible to cope with. In, say, Switzerland, a cache listing may be in both French and German. That's bilingual, too, but probably doesn't help you. A "bilingual" attribute says nothing about which languages are used, and an "English" attribute is not universally appropriate either. For a meaningful implementation of this, there would need to be a mechanism for cache owners to select all languages that apply (from a list), and have searches similarly able to be limited to values on that list. But that's complicated, and hard to retro-fit, I'm sure. Certainly harder than a limited-usefulness attribute.
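The mechanism described above could be sketched like this (a hypothetical data model, not anything Groundspeak actually implements): owners tag a listing with every language it's written in, and a search is limited to listings sharing at least one language with the seeker.

```python
def matches_languages(listing_langs, wanted_langs):
    """True if the listing shares at least one language with the seeker."""
    # Both sides are ISO 639-1 language codes; any overlap is a match.
    return bool(set(listing_langs) & set(wanted_langs))

# A Swiss cache listed in both French and German:
swiss_cache = {"code": "GC33333", "languages": ["fr", "de"]}

matches_languages(swiss_cache["languages"], ["en"])        # English-only seeker -> False
matches_languages(swiss_cache["languages"], ["en", "de"])  # also reads German -> True
```

This is why a bare "bilingual" flag doesn't help: the search needs to know *which* languages, on both sides.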
  8. Why is that, I wonder? If the policy becomes "string of DNFs is treated like a NM/NA", do you think that might have the effect of people becoming more reluctant to post DNFs? If so, that's far, far worse.
  9. True, but I would hope that if it got to that stage, an NM would be logged to draw the CO's (and ultimately the reviewer's) attention to it. DNFs are a normal state of play. NM and NA are the exceptions that need handling, and it's up to the players at GZ to flag them.
  10. I'm not sure what wording has been handed down from the Lilypad, but based on what you quote, I would interpret "in need of maintenance or archival" as "has needs maintenance" or "has needs archived", NOT "has a buncha DNFs". I'd certainly support a push for greater scrutiny on caches that have long-term Needs Maintenance attributes, but not for DNFs. Let those on the ground make that call, and log NM/NA as appropriate.
  11. I gotta agree with that. If you don't like all the emails, perhaps power trail ownership is not for you. Reading the logs is your responsibility. https://www.geocaching.com/about/guidelines.aspx#listingmaintenance
  12. The API does provide a mechanism to bookmark a cache (AddGeocachesToBookmarkList). Sounds like this should be a feature request for that app. As to the requests to:
      • display corrected coords on the map
      • search for caches with corrected coords
      • search for caches with personal notes
      I fully agree, these would be very useful to have!
  13. Only one wild guess .. phone apps that have no embedded unzip on phones with no native unzip? ...and download GPX files instead of using the Live API to get PQ results? Is there any such beast? I suppose it's not unlikely. That would indeed be a valid reason to not zip the PQ, I guess.
  14. I'm curious, what's the usage case for having a PQ not zipped?
  15. Works fine for some folks while being ridiculously slow for others? Could it be content served off an iffy CDN, or perhaps a bad backend node coupled with a load balancer with persistence locking people onto it? Perhaps those experiencing problems could do a "view source" on the geocaching.com homepage (once it eventually loads) and check right at the end... there's a comment that shows what server you're hitting. For example, right now this server: <!-- Server: WEB20; Build: Tucson.Main.release-20150113.Release_234 --> seems to be flying along for me.....
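The "view source" check above can be automated; this sketch just pulls the trailing server comment out of the page HTML (the sample string below mirrors the comment format quoted above; it's illustrative, not a live fetch):

```python
import re

def extract_server(html):
    """Return (server, build) from the trailing HTML comment, or None."""
    m = re.search(r"<!--\s*Server:\s*([^;]+);\s*Build:\s*([^>]+?)\s*-->", html)
    return (m.group(1).strip(), m.group(2).strip()) if m else None

sample = ("<html>...</html>\n"
          "<!-- Server: WEB20; Build: Tucson.Main.release-20150113.Release_234 -->")
extract_server(sample)
# -> ('WEB20', 'Tucson.Main.release-20150113.Release_234')
```

Comparing the server name between a fast session and a slow one would show whether the slowness follows a particular backend node.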
  16. We aren't (well, I'm not) talking about allowing 10-fold (or any for that matter) growth in what users can request, just the way they can get it. Right now, users can request 10k caches a day, split across 10 PQs. Instead, the proposal is to allow 10k caches a day, split across up to 10 PQs. That could be one 10k PQ, five 2k PQs, or 10 1k PQs. The total storage remains the same, the only difference is in how it's distributed. You won't get people taking ten times the resources, because after they've generated their 10k PQ, they can't generate any more that day anyway.
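The proposed quota model above amounts to a simple check (the constants and function below just restate the proposal; they're not a real API):

```python
MAX_CACHES_PER_DAY = 10_000
MAX_PQS_PER_DAY = 10

def split_is_allowed(pq_sizes):
    """True if this list of PQ sizes fits within the daily allowance."""
    return (len(pq_sizes) <= MAX_PQS_PER_DAY
            and sum(pq_sizes) <= MAX_CACHES_PER_DAY
            and all(size > 0 for size in pq_sizes))

split_is_allowed([10_000])      # one 10k PQ -> True
split_is_allowed([2_000] * 5)   # five 2k PQs -> True
split_is_allowed([1_000] * 10)  # ten 1k PQs -> True
split_is_allowed([1_000] * 11)  # eleven PQs -> False
```

Any split passing this check consumes the same total storage as today's ten fixed 1k PQs.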
  17. Quite the opposite, actually. If, as mentioned above, you make the limit 10k caches spread across up to 10 PQs, instead of 10 PQs of 1k each (which is the same total number of results), I would expect the server load to be significantly decreased. To get those 10k caches now (which you can do today), you need to hit the server 10 times, with more complicated queries -- extra WHERE clauses in the SELECT to limit by date, for example. If one query with fewer WHERE clauses returning the same number of rows causes more load than ten more complex queries, then there is Something Very Wrong with the database[1]. As an aside, I have heard it claimed that the 1k PQ limit is due to a commercial agreement with a certain GPS vendor. Would be interested to hear if anyone else has any information on this (or can debunk it.) [1]although this would come as no great surprise....
  18. "I am going on an overseas holiday to $bigplace for two weeks, I won't have mobile data, and don't know to any better accuracy than about 100km where I might be from one day to the next.." (Yes, that is a real world example from personal experience.)
  19. I've noticed the same over the last couple of months, with both the website and the API at times, often during the Australian evenings (which is the middle of the night in the USA, a time when I'd expect less traffic load on the platform...)
  20. Hmm, interesting idea. I think I like it..... a maximum of 10K caches and 10 PQs, distribute caches amongst PQs as you see fit within those limits. Yes.
  21. Another good reason to ensure your caches are stored for offline use in whatever app you use.
  22. That's "Teamwork Required" -- icon looks like two people high-fiving each other. The "Partnership" attribute is confusing, and I don't really understand what purpose it serves. (Yes, I know what it means, but I don't really see the point of it.)