Everything posted by ecanderson

  1. A couple of places to start: https://thegeocachingjunkie.com/2016/02/10/mind-your-ps-and-qs-a-how-to-guide-on-pocket-queries/ and
  2. Another one that's designed to keep out the elements is the old surplus 'decon' container, though we've sometimes found that finders don't close them properly. They seem to be getting harder to come by these days; I haven't seen a new cache made with one in a couple of years. Stupid price below (check the shipping!): https://www.ebay.com/itm/325973740888
  3. It's been a long time - asking users to investigate this with their providers usually requires that they supply the IP addresses of the mail server(s) that are sending the mail. It might be time to re-publish these here. I know several users contributed to this effort quite a while back, but finding those messages, assuming those IP addresses are even still current, would be a chore. If you folks can provide them, it's a lot easier than a bunch of us looking up IP addresses on incoming emails to find them all. A few I've noted lately (a quick lookup sketch follows this list):
     Email notifications re caches and left messages:
     63.251.163.224 (mail01.Groundspeak.com)
     63.251.163.225 (mail02.Groundspeak.com)
     63.251.163.228 (mail03.Groundspeak.com)
     63.251.163.229 etc.?
     Newsletters:
     156.70.151.216 (sparkpostmail.com)
     156.70.151.217 (sparkpostmail.com)
  4. Being a software 'Product Owner' (and lead developer), I would mention the concept of "regression testing", but it probably wouldn't mean anything to anyone at gc.com who reads these threads. If you do NOT apply resources to that task before you let a new release into the wild, you'll still likely have to do the same amount of work to deal with your bug, but now you've managed to piss people off in the interim. So why not do the work up front instead of annoying people by doing it after the fact?
  5. Raffle off 982 of them at the next Wingin' It event? See you there!
  6. Might have a regression there, Squirrel. Last week, I had the older logs showing up as others have mentioned. Today, no old logs appeared, but the website completely reversed the order of the entries from the geocache_visits.txt file: the first one in the *.txt file ended up at the bottom of the online 'drafts' list, and the last one ended up at the top. (A possible stopgap for re-ordering the file is sketched after this list.)
  7. I wouldn't leave as much of a spoiler as "UPR" in a log - that's just for my own find notes (Unnatural Pile of Rocks, for example) - but could the "UP" be the same for these caches, with the "B" perhaps Bark or Branches? Ring any bells for those caches?
  8. Still seeing the occasional *new* "paper on a flat refrigerator-style magnet" cache here, usually on transformers.
  9. And Comcast doesn't provide a whitelisting feature short of building a FULL exclusive whitelist.
  10. It's not a question of throttling or rejection (which you would see in the logs on your end, usually triggered only if you were sending to a lot of invalid/dead Comcast addresses); it is/was instead Comcast dumping accepted mail into users' SPAM folders rather than their regular inboxes, which was new behavior. I'll continue to monitor it here.
  11. Not specifically a website issue, but it certainly needs to be noted. As of Sunday 4/2, all gc.com email I receive has been thrown by Xfinity/Comcast into my SPAM folder. Sounds like there might be a 'reputation' issue there that should be addressed.
  12. That assumes that the claimed "precision" is even there. I don't really care how precise you make the display - a 4th digit isn't even cute, much less useful at this time. It reminds me of folks who publish puzzle caches that resolve to dd.dddddd. The 64 isn't going to get anyone closer to a cache for having 4 digits of precision. I don't understand the claim that "None of those models were precise enough to support the fourth digit like your new 64 is!" The 64 has no better repeatability to a 4th digit (precision) than other units I've seen in the field. (The arithmetic behind those digits is sketched after this list.) I'm not sure why Atlas doesn't understand that repeatability is precision; he's usually the first to get the terms right. As long as I can hit the target in a small group, even if it's way off to one side, that's repeatability. It likely also means I have a sight-adjustment problem or a bad habit, but that's for another day...
  13. I'm aware of that. Again, we've been down this road before. If results aren't repeatable to the 4th digit, it doesn't much matter to the person using the device to locate something whether it displays 3 or 4, does it?
  14. When the technology allows for repeatability down to less than a 1-foot square for consumer units, get back to me on that. Haven't we been down this road before?
  15. No doubt edited after the fact. Do these show the Wheelchair attribute? I ask because once a cache is set up as 1.0 T + wheelchair, you cannot switch off the wheelchair attribute or use the 'negated' version.
  16. I swear I recall that back in the early days, 1.0 T meant flat, level ground - end of discussion. If one could reasonably expect to reach a cache from a wheelchair, one was expected to use the wheelchair attribute to flag the accessibility of the cache.
  17. Although it's not mentioned in the 66's manual, it appears from the 'help' section on Garmin's support pages that this process is still performed using the old POILoader program. It looks as though this is still done with *.gpi files, which may be built from *.csv and *.gpx files by POILoader (a sample of that CSV layout is sketched after this list). Can anyone confirm?
  18. It used to be a relative piece of cake to upload custom POIs to a Garmin automotive device of fairly recent generation: drop a file into the right folder, and bingo. (The older units were a bit more work, but workable.) Is it still easy to do this on the most recent generation of their automotive units (DriveSmart)? Do they still support the *.gpi and *.gpx formats? As I read the current manual, it bothers me that the only interface for POIs appears to be via FourSquare. Tell me I'm wrong, please!
  19. Well said. My original point exactly. Over the last 12 months, it's become difficult to know which logs to trust without first checking find counts. That's certainly no guarantee of an accurate result, but I'm having to discount many logs by those with smaller counts this year. Rather than trusting DNF and found logs as one might expect to, I'm having to spend extra time checking up on my own caches, not all of which are easy to access. 'Found' logs on caches that are actually MIA don't help me or other finders, and may delay my action to correct things.
  20. They're logging both types. Logging the unfindable ones is the bigger issue, since that's prone to throwing off finders - and also the none-too-swift gc.com algorithm that alerts reviewers to potential problem caches. If all their logs get deleted, perhaps they'll get tired of the armchair game?
  21. I read that thread and noted that the "20" issue might be the root cause, but I'm seeing counts all over the map for these sorts of things - one finder with 61, another stuck somewhere around 10. So I'm not sure whether the "20" is playing into it here, but it doesn't seem to be; those who do reach 20 don't appear to be hiding anything. It's more like kids playing with phone apps. And as I say, they never have validated email accounts, which is something else that has been beaten to death, but that I find to be a big problem with the gc.com approach, especially with the advent of the app.
  22. As I say, I don't really care so much if people are claiming bogus finds on caches that aren't issues ... there will always be armchair loggers out there ... it's the finds on problem caches that are the larger issue. The bogus finds will also throw off the algorithm that flags to the local admins that there may be a problem with a cache. When you say "Definitely a thing this summer", I assume you mean you've been seeing this in your area as well?
  23. Is it just our area, or are the rest of you also seeing an extraordinary number of bogus finds from 'cachers' with low (< 100) find counts this summer? It's as though people are downloading the app and just logging things. Some of these logs are on caches that aren't even there, some 'finds' come after a long string of DNFs by experienced cachers, and none include a signature in the physical log. Online logs are of the three-letter "lol" sort, a short string of meaningless characters, or something equally unhelpful, and none of these folks reply to queries about their finds at the message center. If you're not paying special attention to the counts of these 'finders', it's easy to be led into thinking a cache with a string of DNFs and a few recent 'finds' might be worth looking for after all.
     I hate to get into 'cache police' mode, but all of these finds on caches that aren't there to be found are getting pretty annoying. I really don't care if they claim ones that ARE there to be found but that they never visited. Time to go back to the old practice of comparing online logs to those in the cache, I guess. I'm going to start doing it, and it would be good if my fellow COs would consider doing it, too. The process isn't very hard: once you see that the first half dozen you check have no signatures, or that they've been logging caches that are disabled and truly MIA, you just delete all of their logs on your caches to clear things out.
  24. To the latter -- evidently! As to the former... The receiver takes the satellite signals and calculates its distance from each satellite by computing the difference between the time the signal was sent from the GPS satellite and the time the receiver received it, using time data previously obtained from one of the satellites (as you say) and backing out the 'time of flight'. Keeping accurate on-unit time derived from a satellite, and the time-differential calculation itself, can both induce errors in the result (the rough arithmetic is sketched after this list). I haven't found that consumer devices include electronics with tolerances tight enough to produce sub-0.001 fixes ... yet. I've played around a bit with selective use of the various constellations and the results have been interesting, though I'm not seeing a great deal of variation.
  25. First, it wouldn't matter if Galileo could generate timing signals capable of producing a good fix to 1 cm on the ground. If the differential internal timing circuitry of the GPSr can't resolve a fix to better than 0.001 minutes, it really doesn't matter. Moreover, the timing signals of the birds aren't the only factor in obtaining a better fix on the ground. Even if a satellite had a timing signal good enough that the geometry could mathematically provide resolution to 1 cm at ground level, an MEO at 20,000+ km up wouldn't actually produce a 1 cm fix. That's why ground-based references are always used when a tighter fix is needed. I'm not confusing precision and accuracy; in fact, that's the whole point of the exercise. Unless the direction and amplitude of the offset caused by a lack of accuracy in a high-precision device are repeatable, the additional precision isn't of any value. In short, if the 4th digit after the decimal on a Garmin consumer device doesn't provide useful information - and I'm arguing that the device itself doesn't allow for it to - then there's no point in displaying it and leading the owner to believe there's a level of accuracy involved that simply doesn't exist.
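
Re item 3: a minimal lookup sketch in Python, assuming the hostname/IP pairs noted in that post are the ones worth tracking, to spot when those server addresses drift and whitelist entries need refreshing. Whatever DNS returns today is authoritative, not the list above.

```python
# Check whether the mail-server hostnames from item 3 still resolve to the
# IPs noted there, so a provider whitelist can be kept current.
import socket

EXPECTED = {
    "mail01.Groundspeak.com": "63.251.163.224",
    "mail02.Groundspeak.com": "63.251.163.225",
    "mail03.Groundspeak.com": "63.251.163.228",
}

for host, expected_ip in EXPECTED.items():
    try:
        # gethostbyname_ex returns (canonical_name, aliases, [ip, ip, ...])
        _, _, addresses = socket.gethostbyname_ex(host)
    except socket.gaierror as exc:
        print(f"{host}: lookup failed ({exc})")
        continue
    status = "matches" if expected_ip in addresses else "CHANGED from"
    print(f"{host}: {', '.join(addresses)} ({status} the noted {expected_ip})")
```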
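
Re item 6: a hypothetical stopgap while the drafts-ordering bug stands - rewrite the field-notes file with its lines reversed before uploading, so the site's currently reversed display comes out oldest-first again. The geocache_visits.txt name comes from the post; the UTF-16 encoding is an assumption about how the unit writes the file, so adjust it if yours differs.

```python
# Reverse the line order of a Garmin field-notes file before upload.
# Assumes UTF-16 text with one visit per line; this is only a sketch.
from pathlib import Path

SRC = Path("geocache_visits.txt")
DST = Path("geocache_visits_reversed.txt")

lines = SRC.read_text(encoding="utf-16").splitlines()
DST.write_text("\n".join(reversed(lines)) + "\n", encoding="utf-16")
print(f"Wrote {len(lines)} visits to {DST} in reverse order.")
```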
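
Re items 12-14: a quick sketch of the arithmetic behind those digits - roughly how much ground the 3rd and 4th decimal digits of a minute cover in the degrees-and-decimal-minutes format the units display, using the usual ~1,852 m per arc-minute of latitude. The example latitude is just an assumption for the longitude scaling.

```python
# Ground distance covered by each decimal digit of minutes.
import math

M_PER_MIN_LAT = 1852.0   # approx. metres per arc-minute of latitude
LATITUDE_DEG = 40.0      # example mid-latitude, assumed for E/W scaling

for digits in (3, 4):
    step_min = 10 ** -digits                      # smallest displayed step
    lat_m = step_min * M_PER_MIN_LAT
    lon_m = lat_m * math.cos(math.radians(LATITUDE_DEG))
    print(f"{digits} decimal digits: {lat_m:.2f} m N/S, {lon_m:.2f} m E/W per step")

# Typical consumer-GPS repeatability is on the order of 3 m or worse, so the
# ~0.19 m step implied by a 4th digit sits well below the noise floor.
```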
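
Re items 17-18: a sketch of the *.csv layout POILoader has historically accepted (longitude, latitude, name, optional comment); treat that column order as an assumption to verify against Garmin's documentation, and note that POILoader itself still does the conversion to *.gpi. The POIs here are made up purely for illustration.

```python
# Write a custom-POI CSV in the (assumed) longitude, latitude, name, comment
# column order, ready for POILoader to convert to a *.gpi file.
import csv

pois = [
    (-105.123456, 40.123456, "Trailhead parking", "Gravel lot, opens at dawn"),
    (-105.234567, 40.234567, "Ranger station", ""),
]

with open("custom_pois.csv", "w", newline="", encoding="utf-8") as fh:
    writer = csv.writer(fh)
    for lon, lat, name, comment in pois:
        writer.writerow([lon, lat, name, comment])
print("Wrote custom_pois.csv for POILoader.")
```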
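
Re items 24-25: rough numbers behind the time-of-flight argument - how much pseudorange error a given amount of receiver clock error produces, and roughly how tight the timing would have to be before the 4th decimal digit of minutes could mean anything. This ignores ionosphere, multipath and satellite geometry, all of which only make things worse.

```python
# Pseudorange error per unit of clock error, and the timing resolution the
# 4th decimal digit of minutes would demand.
C = 299_792_458.0          # speed of light, m/s
M_PER_MIN_LAT = 1852.0     # approx. metres per arc-minute of latitude

for clock_err_ns in (10.0, 1.0, 0.1):
    range_err_m = C * clock_err_ns * 1e-9
    print(f"{clock_err_ns:>5.1f} ns of clock error -> ~{range_err_m:.2f} m of range error")

fourth_digit_m = 1e-4 * M_PER_MIN_LAT            # ~0.19 m on the ground
needed_ns = fourth_digit_m / C * 1e9
print(f"Resolving {fourth_digit_m:.2f} m would need timing good to ~{needed_ns:.2f} ns")
```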