Everything posted by thebruce0

  1. The new UI/style does have an interface that is paged and more compact than the search results list. Maybe the found/hidden lists could mimic the layout of the Your published hides page. Heck, if you're looking at your own profile, why not make that Hides link go to that page? I also keep a bookmark for the page that shows unpublished/archived caches just for completion's sake. For viewing your own hides, maybe that 'your published hides' list could have an option to include archived caches - then we'd be one big step towards the informative functionality of the old list. Make some adjustments for a 'Hides' version as well, and we're back much closer to the old list but with the updated interface (presumably the goal).
  2. I'm having very odd happenings with the search map when making use of the Not Found By search parameter, and inconsistently enough that I can't narrow down specific causes.
     1. Occasionally, the entire map tab will go white when clicking the Filters button.
     2. Typing a name and hitting enter often doesn't add the highlighted name but a different name no longer in the popup list. E.g., if I type the full name of a user and hit enter, the one user now displayed may not be the one added to the list - and the text entered remains in the box while a new user item is also inserted into the box. The code used for this manner of user entry needs some bug testing.
     3. Loading a bookmarked search with saved parameters in the URL won't load additional users in the list. (E.g., with NFB having User1, User2, User3, loading the resulting URL makes the map consider only one user, not all 3, yet all 3 appear in the filter parameter.) I'm guessing this may be due to having three identical "nfb" parameters in the querystring, which may be interpreted differently based on conventions for providing multiple values for one variable (such as nfb[]=value1&nfb[]=value2 rather than nfb=value1&nfb=value2), so the server only recognizes the first "nfb" while the front end parses all the "nfb" values in the querystring and populates the filter parameter. (A small sketch of this querystring behavior follows after this list of posts.)
     In any case, playing with this filter parameter has revealed some bugs in the search map page code. I have tested with all user scripts disabled, and I'm using Chrome, up to date.
  3. Did you fly barefoot too? Airport security must be a little easier. My Cachiversary: I revisited my first cache (now archived) and did loads of research about the subject, then went to visit a remote lighthouse and the believed site of the mysteriously missing shipwreck.
  4. There are lots of pieces of information that many use at some point, and almost never all at once. The old list contained all that information in a compact form. The new search result list, being bulky and spaced out, obviously doesn't have the room to display it all. That's one reason why there's so much pushback. People who are seeing what they use aren't as upset as people who are now missing data they use. The 'less upset' opinions don't outweigh the 'more upset' opinions. The fact is the new interface wasn't built to be as informational as the old layout, and clearly that's a problem here. It can be rectified by providing the option to show more data columns (ie, choosing which to display from all data made available), yet that would be a fix in line with the bulky layout we have now - and it wasn't an issue before. I'm hoping, and expect, that the devs at HQ are indeed working on something, and since silence on these issues is pretty common until/if/when something is rolled out, I don't think we should expect much confirmation that the issues raised are being heard, let alone worked on. Perhaps we should make sure there's at least one post per thread page that provides an "update" of reported issues, so that when new people join the thread ranting about something that's been known since page 1 (heh), we can point them to a "what we know so far" sort of guide.
  5. Typically it comes down to "if their name is in the logbook, you can't delete their log." But if it's a very exceptional circumstance and HQ is as convinced as the CO that they were not there, it may be possible to delete the log and not have it disputed (or have the dispute fail). If you're 100% sure, delete the log, but make sure you can prove it, or reasonably convince appeals if the geocacher decides to appeal and have it reinstated.
  6. This is precisely what I did with one cache I own. The prior CO archived it without realizing adoption had to be done first, and it can't be unarchived to be adopted. So I just republished it, preserving the original container (and logbook - which is disgusting and is waiting for me to record all the pages' history; it's no longer in the geocache, heh).
  7. So now, if I understand this right, we have a third general statistic... GC profile statistics (Lab Cache Locations count as smileys), which are different from Project-GC statistics (Lab Cache stats included only for PGC premium users?), which are different from the AL app statistics (only complete ALs show as a +1).
  8. We seem to have the same handful of people playing each game, and often the FTFs go either to other hiders who hid an alternate, or to someone who was a hider and is finding this round.
  9. Interesting hypothesis, but when it comes to an opinion, you could post in the forums and still share the same opinion as a majority of non-forum users. So I'd say you could still resemble an average participant - but as for the strength of conviction behind that opinion when it comes to making it known, well, the statistics certainly imply one would be an outlier on that front = )
  10. And, it looks like they removed the ability to adjust the "archived" search option that appeared with the new links. Now the "Active and Archived" filter parameter is locked to only the Hides/Finds special search result function. That's very unfortunate. Not sure why they'd remove that. The worldwide search for archived geocaches was super fast and informative. It didn't need to plot results on the map (and it didn't). I would love to see that archived parameter make a return, even if it is a 'hidden' (URL querystring only) parameter.
  11. Or, ALs could have all their waypoints on the same coordinates with a fence that surrounds a large area, and each 'location' you complete provides instructions to 'find' the next answer somewhere in that radius. At least there you know where the AL takes place in its entirety - likely, if you've read the description, you know that you don't need to keep going back to the center where the location waypoints indicate. There are variations to the 'default' mechanic that reading the description will absolutely make clear. So while I wouldn't say reading the description is universally required (of course it isn't), there are certainly cases in which it can be. And legitimate cases. It would really suck to get to a location and find out the answer isn't there - one would expect it to be there; but perhaps there's a reason it's set up like that? Personally though, I'd probably avoid that in creating one, just because people who don't read may give up and bomb the AL rating out of frustration. Another reason 'quality' ratings (as opposed to "+1" style positive bumps) aren't always such a good idea...
  12. This is a very intriguing limbo. There's no guideline clause that says if you've found the challenge cache at one point, you are entitled in perpetuity to log it as found even once it's archived and locked. Is it that you've found it and the qualification ALR is a technicality? Or is it that qualifying is part of the essential act that validates the "Found it" log? Must you find and qualify while the geocache is active? If part of the process is the responsibility of the CO to verify challenge qualifications, then it follows the CO must be active in order to allow the valid find. But from the finder's perspective, it's clear that once you do qualify (and you have the proof), then you did actually find the geocache. I think the key here, beyond the Challenge Cache scope, is the locking of the listing. For any geocache, regardless of how 'valid' the find is, if the listing is locked at the behest of the CO (or a reviewer), the cache simply cannot be logged. If a Trad gets archived for whatever reason, and you then go and find and sign the log because it wasn't removed by the CO, that doesn't mean you can claim the find on the website if the listing is locked. Perhaps what you could do in the case of the challenge cache is see if anyone else would be willing to republish the series as a new set of challenges. If you've qualified already, then you'd be able to go out and log those ones as found. Here's a big benefit - once you've qualified for the cache, if you haven't logged it as found though you've signed its logbook, and the listing gets archived and locked, at least that doesn't affect your past qualifications! 1] you can still rest easy in that you still qualify for the challenge itself, and 2] you already qualify for any future challenge of the same type. I'm still working towards qualifying for some crazy challenges I signed while on vacation. If I qualify long after they're archived, I can only hope the CO doesn't lock them; if they're still loggable, I'll absolutely log them as found. If not, it sucks (like, really), but there's nothing I can do. And I wouldn't want to dig into whatever drama may have caused it. You can still keep a pseudo-challenges-completed bookmark list to which you add all the geocaches you've qualified for, whether found and signed or not.
  13. But it raises a good point - the wording (on a website) tends to imply reporting a problem with the website. It would make more sense, imo, if it were clarified, and obvious, that it's to "Report a problem with this cache". That might help reduce some confusion.
  14. Love that you pull out these stats @Moun10Bike
  15. As a webapp developer I can tell you that's not always the case. When rewriting the base code from the ground up for an improved server-side experience, sometimes the interface simply can't be recreated to 'mimic' the old way - at least not easily, quickly, or cheaply. What needs to be done in such a case is a use case analysis of the features and functionality people want or need the most; prioritize the new UI design around that, and then go through the testing and release process. Designing what one might think is a UI improvement (whether or not the underlying code is improved) and releasing it without heavy user testing is a VERY risky move...
  16. Or even the ones they own. Why is that kept secret? I'm guessing now it's because the links all go to the advanced search, and ALs have no web-based search function.
  17. They may be new and not realize the value in different log types (or even that they posted a find). Also, 'bad cacher' could apply to the CO who isn't contacting them and asking them to edit their log to be appropriate (I try to do that if it's clearly an inconsistent log type). It also bugs me when there are (valid) finds all saying the cache coords are inaccurate, upwards of 10m off, without really providing any other description, and especially without providing their own coordinates to perhaps make things easier... and then those COs who, when it seems many are reporting bad coordinates, make no effort to adjust their cache. Maybe this belongs in an irk topic
  18. There are a few threads in the Website forum already discussing the PQ map issue.
  19. Yeah, that's just the browse map defaulting to no filtering because the PQ filter isn't being loaded (as opposed to the PQ errantly returning all worldwide results). Same effect as enabling everything and zooming all the way out.
  20. One way to look at it is that the underlying system has been improved, but the interface has lost functionality that many used. Ideally, they'll listen to the issues (again ideally, constructively critical ones, not raging, insulting, angry rants, though those have their appropriate effect) and continue to adjust and/or fix the interface. Conceptually speaking, if the links imply 'searching' for all caches a user has found or placed, then the link makes sense. But if the link/location context implies an archival list, then the loss of pagination is a very big interface loss for usability's sake. If the server end of the functionality is in fact better/sleeker/quicker/lighter weight, then finding a way to (re-)implement paging through tens of thousands of chronological records should be on the list of projects to complete on top of the new underlying system (a rough sketch of one such paging approach follows after this list of posts). Not because the new code is bad, but because the interface and user experience demand it for their context.
  21. Is there a way to inform the myriad of users worldwide who might suffer from this, without them having to come to the forum to see they need to log back in, rather than smashing their phone? Or can it be fixed in the app so that, however the (old) names got broken, they still show as expected?
  22. And, a new dry sheet added to the container doesn't imply that it will become THE logbook and never have issues. It's a bandaid. I'd do that and say so in my log, so that the CO can decide whether and when to come and fix the 'bad' logbook - but at least the old log is there for them, and it allows, for at least a short while, other geocachers to continue to sign something. So, knowing that local community norms vary from place to place, the 'safest' way to not step on anyone's toes is the hands-off approach. If you're not sure whether doing something would be seen as helpful or hostile, especially if you're from out of town, then no matter how nicely you word your explanation of what you did and your good intentions, the best thing is to do what you need to do for your own logging according to cache-logging rules, and leave everything as close to the way you found it as possible. I'd do that and explain in the log text how I found it and how I left it. If someone wants to get on my case for not 'proxy' helping, so be it - that's not the finder's responsibility. And I'm long gone by then too.
  23. So for example:
     * User posts a Find log; during posting, 3 souvenirs are awarded, each effectively having the same "timestamp". Understandable.
     * If a user posts 3 logs, each awarding a different souvenir, they should therefore have different "timestamps" - assuming it's not restricted to only a "datestamp".
     * Back to the first example: if a user posts a Find log on a different date, the souvenir "timestamp" should be date-marked the same date as the Find log. In that case, the time portion would likely match the log (midnight, or whatever they're currently stored as in Pacific Time).
     I.e., the only time souvenirs should have the exact same time is if they are all awarded by the posting of one log. And the awarded date portions should match the date of the log that triggered them. In the case of identically-timed souvenirs, perhaps the secondary sort should be the souvenir ID, or a 'souvenir date', or whatever it might be that sorts those into a kind of 'narrative' order. A lower-milestone souvenir should be considered "awarded earlier" than a higher-milestone souvenir, assuming they are awarded with an identical timestamp. (A small sketch of this ordering follows after this list of posts.) I haven't tested the above thoroughly with the current system, so I'm just pointing out the differences and the expected results.
  24. I can't reproduce the sorting by distance for a Basic member account; when I test, it consistently sorts by date (FoundDateOfFoundByUser). Played a bit more... the URL does appear correct, and on closer look the dates do appear to be in descending order, but the sorting indicator is over the Distance column, and the Found On column can't be clicked.
  25. Nice. I hope they don't all have the same tracking code.
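
Regarding post 2 above: a minimal sketch, assuming the page's front-end script collects every repeated querystring key while the server reads only the first occurrence. This is illustrative TypeScript, not the site's actual code; only the parameter name "nfb" comes from the observed URL.

```typescript
// Hypothetical illustration only - not Geocaching.com's code.
// The parameter name "nfb" comes from the observed search URL; everything
// else here is an assumption about how each side might parse it.
const query = "nfb=User1&nfb=User2&nfb=User3";
const params = new URLSearchParams(query);

// A parser that reads only the first occurrence of a key sees one user:
console.log(params.get("nfb"));      // -> "User1"

// A parser that collects every occurrence sees all three, which would match
// the filter UI showing three names while the map applies only one:
console.log(params.getAll("nfb"));   // -> ["User1", "User2", "User3"]

// The bracketed convention mentioned in the post would instead look like:
//   nfb[]=User1&nfb[]=User2&nfb[]=User3
```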
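
Regarding post 20 above: a rough sketch of cursor-style paging that could sit on top of a rebuilt backend, so a client steps through tens of thousands of chronological records one page at a time. The endpoint, parameters, and field names here are assumptions for illustration, not Geocaching.com's real API.

```typescript
// Hypothetical sketch only; the endpoint, parameters, and field names are
// assumptions, not the site's actual API.
interface FindsPage {
  items: { code: string; name: string; foundDate: string }[];
  nextCursor: string | null;  // opaque pointer to the next page, null at the end
}

async function fetchFindsPage(user: string, cursor?: string): Promise<FindsPage> {
  const url = new URL("https://example.com/api/finds");  // placeholder host
  url.searchParams.set("user", user);
  url.searchParams.set("pageSize", "50");
  if (cursor) url.searchParams.set("cursor", cursor);

  const res = await fetch(url);
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return (await res.json()) as FindsPage;
}

// Usage: walk the full history page by page instead of loading it all at once.
async function listAllFinds(user: string): Promise<void> {
  let cursor: string | undefined;
  do {
    const page = await fetchFindsPage(user, cursor);
    for (const find of page.items) {
      console.log(`${find.foundDate}  ${find.code}  ${find.name}`);
    }
    cursor = page.nextCursor ?? undefined;
  } while (cursor);
}
```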
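
Regarding post 23 above: a small sketch of the expected ordering, assuming each souvenir record carries the timestamp inherited from the log that awarded it plus an ID or milestone rank usable as a tiebreaker. The types and field names are made up for illustration, not the site's actual schema.

```typescript
// Hypothetical types and field names - illustration only.
interface Souvenir {
  id: number;        // stand-in for a souvenir ID or milestone rank; lower = "earlier"
  title: string;
  awardedAt: Date;   // expected to inherit the date of the triggering log
}

function sortSouvenirs(souvenirs: Souvenir[]): Souvenir[] {
  return [...souvenirs].sort((a, b) => {
    // Primary: the timestamp inherited from the log that awarded the souvenir.
    const byTime = a.awardedAt.getTime() - b.awardedAt.getTime();
    if (byTime !== 0) return byTime;
    // Secondary: when several souvenirs were awarded by one log (identical
    // timestamps), fall back to a stable 'narrative' order so a lower
    // milestone sorts before a higher one.
    return a.id - b.id;
  });
}

// Usage: three souvenirs awarded by one Find log share its timestamp and
// fall back to milestone order.
const logDate = new Date("2024-03-01T00:00:00-08:00");
console.log(
  sortSouvenirs([
    { id: 300, title: "300 Finds", awardedAt: logDate },
    { id: 100, title: "100 Finds", awardedAt: logDate },
    { id: 200, title: "200 Finds", awardedAt: logDate },
  ]).map(s => s.title)
); // -> ["100 Finds", "200 Finds", "300 Finds"]
```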