pppingme

Premium Member · 1238 posts
Everything posted by pppingme

  1. I too have reported this bug and have seen a couple of threads on it, with NO RESPONSE from gs addressing the issue. THIS IS A BUG. If I check the box that says "Are not on my ignore list", then obviously I don't want those caches; HOWEVER, if I don't check that option, THEN I EXPECT THOSE CACHES TO SHOW.
  2. The proper way to handle this, and I continue to be amazed that gs doesn't support it (I have brought it up before), is for the cache owner to be able to "hide" (not encrypt) the log contents or pics. That way the finder doesn't lose a find (as they shouldn't), the cache owner gets to prevent spoilers (as they should), and there are far fewer hurt feelings. Then the log owner (yes, I believe the log belongs to the log owner, not the cache owner, so no, I don't support cache owners editing logs) can edit it to everyone's satisfaction.
  3. They actually have, and it works quite effectively without hurting real users, which is the reason most people don't even know it's there.
  4. 80,000 premium members, so 0.01% would be a grand total of 8 users being throttled? I kinda think, from the volume of this thread, that estimate may be off just a tad.
  5. No, it wouldn't be. If people knew they were running a bad script, they might stop running it. Also, without naming it, most people would still see this as generic blame of greasemonkey scripts overall, which suggests gs is still guessing about why their throttling code is so buggy.
  6. This is a good argument for why notifies should have home coordinates as an option. Someone is regularly posting with this exact problem. GS, are you listening?
  7. The .img server seems to be running REALLY REALLY slowly, but it's not down completely.
  8. The option "Are available to all users" specifically excludes PM caches; do you have that selected?
  9. I'm not a big fan of IE6, nor do I use it, but I find this an interesting decision considering these points:
     - IE6 is STILL a current product supported by Microsoft
     - IE6 is still in use by 12% of users overall
     - GS has stated that 10% of its users are still using IE6
     Is GS really willing to cut off 10% of its users? Using numbers that have floated around, there are about 80,000 premium users, so approximately 8,000 users would be impacted; that's revenue of $240,000/year (8,000 times $30; see the quick revenue check after this list). Wow, doesn't seem like the smartest decision I've ever seen.
  10. When I turn on the bike routes, I see two different things: a dashed green line, which seems to be what you are describing, and a solid green line, which marks true bike/pedestrian paths and routes (not accessible to cars at all). Are you seeing both, or just the dashed lines? Also, the gs google maps can show multiple caches, where the real google maps only shows one cache.
  11. What about Delorme supporting their customers? After all, gs put out attributes on 1/12; that's two months ago. In the meantime, they've implemented a workaround for Delorme's broken software (although the workaround has been found to be buggy). DURING THAT TIME, WHAT HAS DELORME DONE TO FIX THE PROBLEM? From what I've seen, nothing. This isn't a matter of what I find of value; it's a matter of something that shouldn't have broken to start with, because a 3rd party app doesn't correctly read the files, and now gs has had to waste development time dealing with the situation. The site has to make progress, and a broken 3rd party app should not hold that progress back.
  12. Why should gs spend ANY time implementing a workaround for an already fragile PQ system? With the workaround, certain pieces of code have to be written twice, once for attributes and once for no attributes. The reality should be that gs pulls the ability to choose, puts the .gpx out with attributes, and quits wasting time on something that isn't their problem to start with. PQ's are already fragile; gs needs to spend time fixing that, not making workarounds for someone who can't properly read an xml file.
  13. Since the change was a simple addition of a new tag, no one needed advance notice (I'm NOT saying gs shouldn't have put out notice, they very much should have, BUT it shouldn't have mattered). I would agree with you if a tag were being removed or changed, since some program could depend on that tag, but that's not the case here. No application reading an xml file should break just because there is a new/unexpected tag, period. So, notice or not (and yes, they should have notified), the app shouldn't have broken, and it did. I also disagree with gs having made a workaround to accommodate a broken app; those are development hours that would have been much better spent elsewhere. GS's response should have been to clarify that the .gpx is indeed good, clean xml data and then ignore the complaints.
  14. This really IS a Delorme problem, and they need to fix their software. The gpx files are simply xml files, nothing fancy; version 1.0.1 simply adds an extra tag (attributes). The "problem" is that Delorme can't handle an unknown tag (in this case the attributes) in the file. The PROPER way to handle this is to ignore the tag, not choke on it (see the parsing sketch after this list). This issue really should be raised on the Delorme side, for them to fix their software to politely ignore an unknown tag rather than choke on it, as the rest of the world does when processing xml files. Does this mean that every time gs wants to include something new, they have to wait for Delorme to catch up? Absolutely not. The rest of the world didn't choke on the new data, and Delorme shouldn't have either.
  15. I don't think this has anything to do with the fact that you just updated yours. I haven't updated mine since January, and they are showing up the same way for me. I'm not sure when I last looked (I doubt it's been more than a couple of days), but this was fine then. So it appears GS is making changes again (huh, didn't Jeremy say something just yesterday about being more conscientious about notifying us before making changes?).
  16. Wow, you got everyone's hopes up. Is there any chance it was a corrupt file? Do you zip the files?
  17. It's been longer than 2 days; it's been since the "database upgrade" or whatever they did on 2/23. What makes it worse is that no one seems to be actively monitoring the situation over the weekends. There was a very large gap between the time people complained on the forums and the first "we are looking at it" post. One would think that a company that just made major changes (the database "upgrade" on 2/23 and a site upgrade on 2/24) to an already unstable system (the pq generator has been acting up for quite a while, but the most recent upgrades seem to have all but broken it) would at least monitor it during peak usage times (the weekends).
  18. Because I'm a cacher of opportunity. Look here: http://www.geocaching.com/seek/nearest.asp...617&dist=25 More than 5000 caches within 25 miles. My daily work could easily take me in a circle larger than that.
  19. You won't. Once Friday is over, that's it. You have to reschedule for Saturday (or, by now, Sunday).
  20. Huh, interesting. No maps, no description, and the sidebar and stuff are at the bottom. I'm running FF 3.0.18 and it's not quite right there either.
  21. Since PM status isn't in PQ's, I rarely even know whether a TB/coin I'm picking up or dropping is in a PM cache or not. Which makes me wonder, do most people even pay attention?
  22. A quick analysis shows that a $1 premium wouldn't even come close to supporting a $100 payout. Think about it: if 100 caches are "insured", there is $100 of revenue; over the span of a year, if even ONE of those caches comes up missing, that's the whole pool paid out, and if a second goes missing, the insurance company is broke. If you pick 100 caches (even with high standards, like requiring the last 3 or 4 logs to be finds with no DNFs in the mix) and monitor them for a year, you will find that at least 20 come up missing. Here's a quick stat: for every 2 caches that get "approved", one is archived. There are currently 900k active caches, yet 1.6 million gc codes have been handed out. It would take a premium of over $25/year, probably around $35 to $40, just for the "insurance company" to break even on a $100 payout (see the break-even sketch after this list). Even assuming it could be done for $25/year (which it really can't), I just don't see most CO's willing to pay that much.
  23. So you think a cache hidden in honor of a soldier, teaching history from a global war, shouldn't exist? Wow, by that standard there are 52 caches honoring Obama that should be archived immediately; every one of them offends me.
  24. Sadly, feature and bug requests are second to revenue; they focus on revenue first (ads on the pages and in forums, making sales pitches out of the gps reviews "feature", etc), and only after they quit working on that do they finally take a serious look at bug fixes and features. It's nothing about what current paying customers want; they figure you are already on the hook, and new customers aren't aware of the bugs and much-requested features until they are on the hook too. The only thing that changes a company's attitude on things like this is a mass exodus of customers, and too many people want their pq's more than they want to make a statement to get bugs fixed and usable features implemented. I'm not saying features and bug fixes never happen, they are just low on the totem pole, and gc has no incentive to prioritize them. For example, it's been asked since I started gc'ing for the keyword search to return results in order of distance, or at least limited to a state or area. Currently, and as it has been since I started using the site, keyword searches are returned by date, a totally useless order; I couldn't care less about caches that are 5000 miles away. When I search for the word "bridges" I have to page through 10 or 15 pages before even finding one in my area. No one has said they like it the way it is. The problem? There is no revenue incentive to fix it. Another example: what have they said they've been working on for an upcoming release? PQ's. Why? Because they know they've lost paying customers over the overall unreliability and weakness of the current pq system.
  25. The two most common causes of this are: 1 - a double log, meaning you've logged one cache twice; 2 - latency, as several people have reported that if they run their My Finds PQ right after logging, it may not pick up the most recent logs.
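
A quick check of the revenue estimate in post 9 above. These are the thread's assumed figures (80,000 premium members, 10% of users on IE6, $30/year), not official Groundspeak numbers; the snippet is just a calculator:

    # Back-of-the-envelope revenue estimate from post 9.
    # All inputs are the thread's assumptions, not official numbers.
    premium_members = 80_000
    ie6_share = 0.10      # GS's stated fraction of users still on IE6
    annual_fee = 30       # USD per premium membership per year

    impacted = premium_members * ie6_share
    revenue_at_risk = impacted * annual_fee
    print(f"{impacted:.0f} users, ${revenue_at_risk:,.0f}/year")
    # -> 8000 users, $240,000/year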
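A minimal sketch of the "politely ignore" behavior described in posts 13 and 14. The GPX snippet is made up for illustration, but any standards-compliant XML parser (Python's built-in ElementTree here) reads right past tags it doesn't recognize:

    # A conforming XML parser skips elements it doesn't recognize
    # instead of choking on them. The GPX snippet is made up.
    import xml.etree.ElementTree as ET

    gpx = """<gpx version="1.0.1">
      <wpt lat="47.6" lon="-122.3">
        <name>GCXXXX</name>
        <attributes>
          <attribute id="1" inc="1"/>
        </attributes>
      </wpt>
    </gpx>"""

    root = ET.fromstring(gpx)
    for wpt in root.iter("wpt"):
        # Read only the tags we care about; the new <attributes>
        # tag is simply ignored unless explicitly requested.
        print(wpt.findtext("name"), wpt.get("lat"), wpt.get("lon"))
    # -> GCXXXX 47.6 -122.3

An app that walks the file this way keeps working no matter how many new tags a future gpx version adds; it only breaks if a tag it depends on is removed or renamed, which is exactly the distinction post 13 draws.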
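And the break-even arithmetic behind post 22. The 20% annual loss rate is the post's own estimate; the $100 payout comes from the proposal:

    # Break-even premium for the $100 cache "insurance" idea.
    payout = 100        # USD paid out per missing cache
    loss_rate = 0.20    # post's estimate: ~20 of 100 caches/year

    break_even = payout * loss_rate
    print(f"break-even premium: ${break_even:.0f}/year per cache")
    # -> $20/year per cache before any overhead or margin,
    #    which is how a real premium lands in the $25-$40 range.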