
Moratorium update



The announcement so far is that checkers are required, but not whether those checkers will have to include output (i.e., "your qualifying caches are...") as well. The output for some CC's is lengthy, and maybe GS doesn't want to store all that text in their Log tables. I like to include screenshots for my CC proof, but maybe requiring screenshots would be too complicated for some cachers (especially smartphone users). My guess is that including the qualification details will not be required for post-moratorium CC's.

Right. If previously CCO's could require evidence of qualification to be included with the log (a bookmark list, log text, a screenshot, or just pointing to GC stats), then a stored, referable passing checker result could take the place of all that evidence.

Alternatively, if passing the checker isn't required to qualify (GS hasn't said that specifically) then the finder would just have to resort to the way things were pre-moratorium and post the evidence with their log. That way, GS's "challenge caches require a checker" is reasonable, and the benefits of having a checker include reducing the amount of work a CO has to do to verify a finder's qualification. If PGC confirms it was run and the user passed? Good to go. Otherwise, back to the way it was - post your evidence with your log.

And in theory, yes, if a human can simply look at data that's reported and presented to them, then a checker can retrieve that data and check automatically.

 

I don't know what theory you are referring to, but these are two very different problems.

 

The problem for the human is simply to look at data that's reported and presented to them (the solution) and verify whether that particular solution is valid. The problem for the checker is to find a valid solution among the universe of finds. This can be a very different problem.

 

Let me give an example. Consider a challenge to find two caches that are exactly 1000 miles apart (or within 0.1 miles of 1000, say 999.9 to 1000.1 miles apart). You can see that if I provide you a solution: "I found GC12345 and GC23456 and they are 999.94 miles apart" it is very easy to verify whether or not that statement is true. However, if I provide you a universe of 10000 cache finds and ask you to find a pair in there that meets the requirement, it's like looking for a needle in a haystack. Very hard. Once you do find the needle it's easy to prove that indeed it's a needle, but actually finding it is a very difficult problem for the checker.
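To make the asymmetry concrete, here's a rough Python sketch (actual PGC checkers are written in Lua; the coordinates and GC codes below are made up): verifying a proposed pair is a single distance calculation, while searching blindly means testing every pair of finds.

```python
import itertools
from math import radians, sin, cos, asin, sqrt

def miles_between(a, b):
    """Great-circle distance in miles between two (lat, lon) points (haversine)."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 3959.0 * asin(sqrt(h))  # mean Earth radius ~3959 miles

def verify_pair(finds, gc1, gc2, target=1000.0, tolerance=0.1):
    """Verify one proposed solution: a single distance check, O(1)."""
    return abs(miles_between(finds[gc1], finds[gc2]) - target) <= tolerance

def find_pair(finds, target=1000.0, tolerance=0.1):
    """Search the whole find history: every pair must be tested, O(n^2)."""
    for (gc1, c1), (gc2, c2) in itertools.combinations(finds.items(), 2):
        if abs(miles_between(c1, c2) - target) <= tolerance:
            return gc1, gc2  # found the needle
    return None  # no qualifying pair in the haystack
```

With 10,000 finds, `find_pair` has to examine roughly 50 million pairs; `verify_pair` always does exactly one.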

The problem for the human is simply to look at data that's reported and presented to them (the solution) and verify whether that particular solution is valid. The problem for the checker is to find a valid solution among the universe of finds.

In the first context, the human is presented with two caches to check (solution already proposed, now verify). But in the latter, the checker is not presented any caches to check (no solution given, just the complete data set to analyze). Different situations. Reverse the roles - what if the human were not presented those two specific caches and had to iterate through all 10,000? Or the script was directly given two specific caches to check and verify?

 

Case in point: There could be two types of checkers for a challenge like this. One where the user just provides their entire catalogue of finds, and one where the input requests two specific GC codes to check. Whether performed by a human or script, the first would be a much lengthier verification process than the latter :)

Edited by thebruce0

It's not about "me" being worried about it or not. It's about the fact that we (thebruce0 and WPTC) were discussing potential solutions to the issue of temporal changes in qualification.

I was setting the stage for my second paragraph, in which I explain why it's not really a problem worth solving.

 

I'm not sure how easy it would be for all CCO's to decrypt keys. Personally, I would prefer a solution that doesn't require the cache finder to paste something specific into their log and also something that doesn't require the CCO to copy-paste something for every finder when checking whether those finders qualified or not.

I'm proposing an alternate implementation in which the data is preserved by the person that cares about it -- the seeker -- instead of by the web site that just happens to be supporting the checker. It would be exactly as easy for the CCO to decrypt as it would be for the CCO to retrieve the stored information of your proposal. I'm not sure why you don't want the information that confirms meeting the challenge in the log that announces meeting the challenge: it strikes me as the perfect place for it, right where anyone can inspect it at any time. For your solution, the CCO already has to cut and paste the user's name, so I think it would be just as easy to cut and paste a token from the log.

 

However, solutions for such instances became a discussion point in this thread and so some of us are discussing possible solutions.

My point is that the solutions being discussed are overkill considering the problem being solved.

 

I understand where you're coming from with all your comments about how CC finders and CCO's should just 'work it out', but if all cachers were so rational then we wouldn't have needed this moratorium in the first place. Realistically, it's not going to happen. Personally, I'd rather discuss possible solutions that will work without relying on cachers to change their personalities.

My thrust in those other conversations is that irrational cachers are always going to be a problem no matter what you do, so it's a waste of time and an unnecessary burden to accept them as the norm and design the system to handle them.

 

I'm not so much worried about any given individual's personality: I'm more worried about a culture in which such personalities are considered acceptable.


I understand where you're coming from with all your comments about how CC finders and CCO's should just 'work it out', but if all cachers were so rational then we wouldn't have needed this moratorium in the first place.

 

**facepalm**

 

Exactly how many times do we have to say it? The initial announcement of the moratorium stated, and Groundspeak has confirmed, that disputes between COs and finders were not a factor in the moratorium.

 

What you are discussing has never been a significant problem. You guys can continue your discussion about "solving" it all you want, but I have to tell you that folks are merely providing fodder for those who claim that forum discussions are pointless and idiotic.


I'm not sure how easy it would be for all CCO's to decrypt keys. Personally, I would prefer a solution that doesn't require the cache finder to paste something specific into their log and also something that doesn't require the CCO to copy-paste something for every finder when checking whether those finders qualified or not.

I'm proposing an alternate implementation in which the data is preserved by the person that cares about it -- the seeker -- instead of by the web site that just happens to be supporting the checker. It would be exactly as easy for the CCO to decrypt as it would be for the CCO to retrieve the stored information of your proposal. I'm not sure why you don't want the information that confirms meeting the challenge in the log that announces meeting the challenge: it strikes me as the perfect place for it, right where anyone can inspect it at any time. For your solution, the CCO already has to cut and paste the user's name, so I think it would be just as easy to cut and paste a token from the log.

For the record, I never said my idea was perfect and I've already admitted that keeping track of 'fail' instances, which someone else pointed out, wouldn't make much sense.

 

Anyway, about the keys. If running the checker creates output that tells the CC finder "paste this into your Found It log" then great, but I'm thinking it's better not to rely on the CC finder cutting and pasting anything. I'm not against the idea, I just don't think that it makes things easier for the CC finders. The announcement says "challenge checkers will make it easier for players to determine their qualifications for challenge caches". I'm inferring that TPTB want to make proving qualifications and logging finds on CC's easier as well, but I could be wrong. I don't think that making things easier for CCO's is a big priority.

 

My idea about the CCO checking a History page is as follows:

-- CCO opens their CC cache page and right-clicks the checker link, which will have to be included in the cache description, so that the checker opens in another window.

-- CCO looks at the recent Found It logs and then looks at the History page. Depending on how 'busy' the History page is, there may not be any need to cut-and-paste or search.

OR

-- CCO creates a bookmark for their checker's History page.

-- CCO gets an email notification about a new Found It log on their CC.

-- CCO opens a new window in their browser and clicks the bookmark for the History page and looks to see if the name is on the History page. Depending on how 'busy' the History page is, there may not be any need to cut-and-paste or search.

OR

-- Some combination of the above two processes.

OR

-- Something else entirely.


I understand where you're coming from with all your comments about how CC finders and CCO's should just 'work it out', but if all cachers were so rational then we wouldn't have needed this moratorium in the first place.

 

**facepalm**

 

Exactly how many times do we have to say it? The initial announcement of the moratorium stated, and Groundspeak has confirmed, that disputes between COs and finders were not a factor in the moratorium.

 

What you are discussing has never been a significant problem. You guys can continue your discussion about "solving" it all you want, but I have to tell you that folks are merely providing fodder for those who claim that forum discussions are pointless and idiotic.

Sorry, it wasn't clear enough that the two parts of my sentence were not directly related. My point was that not all cachers are rational. They're not rational enough to 'work things out' amongst themselves. And they're not rational enough to not complain to the point that the moratorium was instituted. I'm not saying that the latter is only about the former.

 

How about if I wrote it like this...

I understand where you're coming from with all your comments about how CC finders and CCO's should just 'work it out', but if all cachers were so rational and didn't argue with Reviewers when their challenges were rejected and didn't submit appeals to GS when their challenges weren't published and didn't disagree with the CCO or the CC finder then we wouldn't have needed this moratorium in the first place.

 

The fact remains that some cachers wanted to create complicated challenges, however one wants to define 'complicated'. For published CC's, some cachers couldn't understand the requirements and some complained about the work required to determine whether they qualified or not. The checker requirement is meant to relieve some of that. If cachers were rational, then they could work with each other to educate each other, but instead we'll just throw a checker requirement into the mix so CCO's and CC Finders don't have to work it out amongst themselves.

 

Edit: Splitting up the long sentence a bit more to make things clearer.

Edited by noncentric

 

Anyway, about the keys. If running the checker creates output that tells the CC finder "paste this into your Found It log" then great, but I'm thinking it's better not to rely on the CC finder cutting and pasting anything. I'm not against the idea, I just don't think that it makes things easier for the CC finders. The announcement says "challenge checkers will make it easier for players to determine their qualifications for challenge caches". I'm inferring that TPTB want to make proving qualifications and logging finds on CC's easier as well, but I could be wrong. I don't think that making things easier for CCO's is a big priority.

 

My idea about the CCO checking a History page is as follows:

-- CCO opens their CC cache page and right-clicks the checker link, which will have to be included in the cache description, so that the checker opens in another window.

-- CCO looks at the recent Found It logs and then looks at the History page. Depending on how 'busy' the History page is, there may not be any need to cut-and-paste or search.

OR

-- CCO creates a bookmark for their checker's History page.

-- CCO gets an email notification about a new Found It log on their CC.

-- CCO opens a new window in their browser and clicks the bookmark for the History page and looks to see if the name is on the History page. Depending on how 'busy' the History page is, there may not be any need to cut-and-paste or search.

OR

-- Some combination of the above two processes.

OR

-- Something else entirely.

I'm coming around to the idea of a history page. And, the history page would only need to contain a history of the pass/fail transitions.

 

As is pointed out above, this would benefit CCO's that are trying to verify completion of the requirements.

 

But it can also benefit the cacher, especially if the history contained the reason why someone qualified or didn't at that point in time. It would allow a cacher to determine what changed to cause them to no longer be qualified. It may also tell them (depending on the particular checker script) what they need to do to re-qualify.

 

With the history page, both the CCO and cacher would benefit. Heck, PGC could go so far as to send an e-mail notification to a cacher if circumstances caused them to be no longer qualified. If I was PGC, I'd consider adding this feature.

The problem for the human is simply to look at data that's reported and presented to them (the solution) and verify whether that particular solution is valid. The problem for the checker is to find a valid solution among the universe of finds.

In the first context, the human is presented with two caches to check (solution already proposed, now verify). But in the latter, the checker is not presented any caches to check (no solution given, just the complete data set to analyze). Different situations. Reverse the roles - what if the human were not presented those two specific caches and had to iterate through all 10,000? Or the script was directly given two specific caches to check and verify?

 

Case in point: There could be two types of checkers for a challenge like this. One where the user just provides their entire catalogue of finds, and one where the input requests two specific GC codes to check. Whether performed by a human or script, the first would be a much lengthier verification process than the latter :)

Here's another odd scenario that came to mind as I was reading the above. I don't really have a solution, but it is a problem that could happen.

 

I seem to recall one of the PGC script writers mentioning that scripts are only allowed to run for a finite period of time. (I think it was 30 seconds.) Using the example above, it wouldn't be difficult to write a checker for this using an iterative algorithm. It's not the most efficient algorithm for this, but that's not the point. The point is that it could take a while to run, and the amount of time depends on the number of finds it has to check. If there is a time limit, that means that checkers run the risk of being cancelled without completing the test. This could happen with any checker whose runtime depends on the volume of finds.
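One defensive pattern (sketched in Python here; real checkers are Lua, and the 30-second figure is a recollection rather than a documented limit) is for the script to watch its own clock and return an explicit "timed out" verdict before the host kills it:

```python
import itertools
import time

def check_with_budget(finds, pair_qualifies, budget_seconds=30.0):
    """Iterate over all pairs of finds, but bail out with an explicit
    'timeout' result before the host's time limit kills the script."""
    start = time.monotonic()
    for gc1, gc2 in itertools.combinations(finds, 2):
        if time.monotonic() - start > budget_seconds:
            return ("timeout", None)  # ran out of budget; result unknown
        if pair_qualifies(finds[gc1], finds[gc2]):
            return ("pass", (gc1, gc2))
    return ("fail", None)
```

An explicit "timeout" verdict at least tells the cacher the checker gave up, rather than silently never finishing.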

 

So, what happens when there is a checker for a challenge, but it won't work for me because of the volume of my finds?

So, what happens when there is a checker for a challenge, but it won't work for me because of the volume of my finds?

 

I suppose the checker writer could request another parameter that would filter the check down to a smaller segment, like a date range, or some other 'window' that can move across multiple separate checks until the entire data set is scanned. Problem is, one might never get a green overall, yet could collect the outputs from each block as combined evidence of qualification. That's certainly just a workaround though; it takes a lot more work, and it means the official Pass couldn't be required for a valid Find.

 

That said, how many challenges require scanning that much data as relevant to the challenge? I think most checkers, even with a couple tens of thousands of finds, would first filter down to relevant cache properties and applicable caches, and then do the more gritty analysis. I've never coded a PGC script, but is it possible to use temporary tables and whatnot? It seems like it should be possible to optimize most any script to run under 30 seconds. In my SQLite instance I've only got a couple of queries/batches that do such intense calculations that they take more than 30 seconds, and those have distance calculations and comparisons; but that's running on my single local PC. I presume PGC's servers are quite a bit more powerful :)
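That filter-first idea can be sketched like so (Python for illustration; PGC scripts are actually Lua, and the challenge, property names, and GC codes here are invented):

```python
import itertools

def check_challenge(finds, is_relevant, pair_qualifies):
    """Filter-then-analyze: apply cheap per-cache filters first, so the
    expensive pairwise analysis only sees the relevant subset."""
    relevant = {gc: c for gc, c in finds.items() if is_relevant(c)}
    for (g1, c1), (g2, c2) in itertools.combinations(relevant.items(), 2):
        if pair_qualifies(c1, c2):
            return g1, g2
    return None

# Hypothetical challenge: a pair of Traditionals whose difficulties sum to 6.
finds = {
    "GC1": {"type": "Traditional", "difficulty": 5},
    "GC2": {"type": "Mystery", "difficulty": 1},
    "GC3": {"type": "Traditional", "difficulty": 1},
}
result = check_challenge(
    finds,
    is_relevant=lambda c: c["type"] == "Traditional",
    pair_qualifies=lambda a, b: a["difficulty"] + b["difficulty"] == 6,
)
```

Cutting 20,000 finds down to a few hundred relevant caches turns ~200 million pair comparisons into a few tens of thousands, which should sit comfortably inside a 30-second budget.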


At the grave risk of asking a stupid question...

 

Does PGC store lots of stats for every single geocacher, or calculate them on demand from live data using the API? (I think it's the former.)

 

So adding, say, a single hash for every single geocacher would require minimal extra storage space?

 

So, is it possible for PGC to calculate, for every single geocacher, a single hash which, when decoded, would indicate the qualification status of an individual geocacher for every single checker on PGC?

 

(I realise that may be several stupid questions but what the heck - in for a penny).

 

^BUMP - just because nobody commented - not even to shoot it full of holes - so it must be a good idea B)


At the grave risk of asking a stupid question...

 

Does PGC store lots of stats for every single geocacher, or calculate them on demand from live data using the API? (I think it's the former.)

 

So adding, say, a single hash for every single geocacher would require minimal extra storage space?

 

So, is it possible for PGC to calculate, for every single geocacher, a single hash which, when decoded, would indicate the qualification status of an individual geocacher for every single checker on PGC?

 

(I realise that may be several stupid questions but what the heck - in for a penny).

 

39,000 active cachers (Canada, to date this year) × 14,000 unique challenges (worldwide)

Say 16 bytes per cacher per challenge for a standard GUID (for argument's sake)

That's 39,000 × 14,000 × 16 bytes ≈ 8.7GB uncompressed in raw hash data, just for active Canadian cachers.

 

I wouldn't store data for every cacher for every challenge. I still think there's not really a feasible reason to store anything but a flag/date for a successful check for a user against the challenge [tag] they checked. Basically just a simple lookup table. UserID + TagID + Date.
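That lookup table could be as simple as this sketch (Python for illustration; the class and field names are my invention, not anything PGC has announced):

```python
from datetime import date

class PassLog:
    """Minimal sketch of the proposed lookup: store only successful checks,
    keyed by (user_id, tag_id), with the date the user last passed."""

    def __init__(self):
        self._passes = {}  # (user_id, tag_id) -> date of most recent pass

    def record_pass(self, user_id, tag_id, when):
        self._passes[(user_id, tag_id)] = when

    def last_pass(self, user_id, tag_id):
        """Date of the user's most recent pass for this tag, or None."""
        return self._passes.get((user_id, tag_id))
```

A CCO verifying a Found It log then needs just one lookup, and failed runs cost no storage at all.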

 

Also, I don't think PGC stores stats, I think it just keeps track of the finds (per the update schedule and your delayed find count), and generates the stats display from that data. I suppose it's possible the results could be cached so it's not run to analyze the same data every single time until the next update.

So, what happens when there is a checker for a challenge, but it won't work for me because of the volume of my finds?

 

I suppose the checker writer could request another parameter that would filter down checking to a smaller segment, like a date range, or some other 'window' that can move with multiple separate checks until the entire data set is scanned. Problem is, one might never get a green overall, yet collect the outputs from each block as evidence in total of a qualification. That's certainly just a workaround though, takes a lot more work, and means the official Pass couldn't be required for a valid Find.

 

That said, how many challenges require scanning that much data as relevant to the challenge? I think most checkers, even with a couple tens of thousands of finds, would first filter down to relevant cache properties and applicable caches, and then do the more gritty analysis. I've never coded a PGC script, but is it possible to use temporary tables and whatnot? It seems like it should be possible to optimize most any script to run under 30 seconds. In my SQLite instance I've only got a couple of queries/batches that do such intense calculations that they take more than 30 seconds, and those have distance calculations and comparisons; but that's running on my single local PC. I presume PGC's servers are quite a bit more powerful :)

Actually, I was asking this question from a whole different point of view. I do agree that there are technical means to reduce the likelihood of this happening. I was asking this from the point of view of a frustrated cacher who is trying to use the provided checker, and it is not working for them. What action should I, as a frustrated cacher, take?

 

In most of the discussion in this topic, we have assumed that the act of running a checker will return one of two results. But in reality, there can be more than two. Two are the obvious ones: the checker script runs cleanly and we get a definitive pass or fail. But we can also get a 'timed out' result. Granted, this isn't coming directly from the script itself, but I am looking at the larger overall act of running the checker. There may be other results, such as the script crashing upon execution. Anyhow, my point is that the act of running a checker script might return something other than a pass/fail result.
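In other words, the outcome of "running the checker" is really a small set of states, something like this illustrative Python enum (my own naming, not anything PGC actually exposes):

```python
from enum import Enum

class CheckerOutcome(Enum):
    PASS = "pass"            # script completed; requirements met
    FAIL = "fail"            # script completed; requirements not met
    TIMED_OUT = "timed_out"  # host cancelled the script before it finished
    ERROR = "error"          # script crashed or failed to start

def is_definitive(outcome):
    """Only a clean run gives a definitive answer about qualification."""
    return outcome in (CheckerOutcome.PASS, CheckerOutcome.FAIL)
```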

 

So, if I execute a checker, and it doesn't complete successfully, what would I see, and what action (if any), should I take? Do I need to ask the pool of script writers to look at it? Or does PGC keep track of which checkers frequently time out, and take action for me? Maybe PGC would refer it to the pool of script writers, or they could invest in more processing power. Or maybe they tweak the timeout values when running this particular script?

 

I guess my explanation of my question provided me with the answer. Just document how I should handle this situation. Document what (if any) action I need to do if a checker times out, or has some other problem. I'm hoping that this documentation will be displayed (or linked to) on the results screen to make it easy for me to find.

 

Now here I go, off on another tangent. (It's so hard to stop myself.) I wonder if the timeout value for the automatic checker that runs in the background is different from the timeout value used when running the checker from the web page? If it is, then the automatic checker could show a pass/fail result, but the manual one could time out. (Or, vice versa, depending on the values.) Anyhow, if they are different, that could also cause confusion for someone.


hm. It would be interesting to know the structure of the auto-checking process, and whether auto-checkers that time out or error are flagged so the authors know.

 

Manual timeouts - well, as a user I'd probably first contact the CO to let them know that the checker isn't giving a valid response and they should look at it, so they'd probably get a hold of the author (if not themselves) and see if it can be fixed. Might require a bit of back & forth as a test case for the error. I'd assume though that if the checker can't be fixed to reliably check masses of data, the cache may perhaps become 'subject to archival' since the checker can't be trusted to work (and an unfixable checker likely indicates a pretty complex analysis algorithm, which may say something about the complexity of the challenge itself, *shrug*)

 

But it's a good question, in general, how does PGC deal with potential timeouts and script errors, especially if the script is already made live and is being used?


Most of my questions and musings have been focused on how this change to the rules will affect the average cacher. When the moratorium is lifted, and the flood gates are opened, what kinds of new problems will the rule changes bring?

 

I thank Rock Chalk and GS for letting part of the cat out of the bag. There has been some great discussion here about the impact, and how that impact can be managed. My hope is that GS and PGC are monitoring this topic, and will take some of the points that are being raised into consideration while they are finalizing the rollout.

 

I do have a suggestion on how to handle the rollout. I would hope that when the moratorium is lifted, and the flood gates are opened, they are opened slowly. Start with a region, and allow challenges caches in that region. Work out some of the unexpected kinks in the process. Then open it up to more and more. As has been pointed out in this topic, what we know of this change is that there are new questions that should be answered. If the flood gates are opened wide on day one, I see the potential for mass confusion, and therefore, mass frustration. And, TPTB will become so overloaded that they will not have the time to address the underlying problems.


hm. It would be interesting to know the structure of the auto-checking process, and whether auto-checkers that time out or error are flagged so the authors know.

 

Manual timeouts - well, as a user I'd probably first contact the CO to let them know that the checker isn't giving a valid response and they should look at it, so they'd probably get a hold of the author (if not themselves) and see if it can be fixed. Might require a bit of back & forth as a test case for the error. I'd assume though that if the checker can't be fixed to reliably check masses of data, the cache may perhaps become 'subject to archival' since the checker can't be trusted to work (and an unfixable checker likely indicates a pretty complex analysis algorithm, which may say something about the complexity of the challenge itself, *shrug*)

 

But it's a good question, in general, how does PGC deal with potential timeouts and script errors, especially if the script is already made live and is being used?

 

And don't forget, a single checker script may be used on multiple caches. So, 'subject to archival' could affect a number of caches.

 

Wow, this could actually be a result of the parameters in the tag that is used. I forgot about that portion of the process. Maybe a script works fine for one set of parameters, and not for another. So, I think I agree, the first contact should be back to the CO. Thanks!

I was asking this from the point of view of a frustrated cacher who is trying to use the provided checker, and it is not working for them. What action should I, as a frustrated cacher, take?

If that were me, I'd ignore the PGC checker, run my own checker in GSAK, or look at my stats page if applicable, and log the cache.

 

The guidelines only say that the CCO must provide a checker. It doesn't say that the finders must use it. :ph34r:


Most of my questions and musings have been focused on how this change to the rules will affect the average cacher. When the moratorium is lifted, and the flood gates are opened, what kinds of new problems will the rule changes bring?

 

I thank Rock Chalk and GS for letting part of the cat out of the bag. There has been some great discussion here about the impact, and how that impact can be managed. My hope is that GS and PGC are monitoring this topic, and will take some of the points that are being raised into consideration while they are finalizing the rollout.

I appreciate your thoughtful questions and comments. I was thinking some of the PGC folks would jump in to answer some of the questions, but I suspect they're pre-occupied with implementing the changes at PGC. I noticed today that the "challenge checker request" forum has been flagged as obsolete and a new forum has already been created. In fact, it looks like a checker request has already been initiated and completed, and I think it might be similar to challenges that CR mentioned back in post #620 (oldest caches).

 

I do have a suggestion on how to handle the rollout. I would hope that when the moratorium is lifted, and the flood gates are opened, they are opened slowly. Start with a region, and allow challenges caches in that region. Work out some of the unexpected kinks in the process. Then open it up to more and more. As has been pointed out in this topic, what we know of this change is that there are new questions that should be answered. If the flood gates are opened wide on day one, I see the potential for mass confusion, and therefore, mass frustration. And, TPTB will become so overloaded that they will not have the time to address the underlying problems.

This is a good point. There are some CC's already in the queue for when the final CC guidelines are announced. If they all meet the new guidelines and are published immediately, then there may be a 'rush' of cachers trying to get FTF - even more of a rush if previous finds are still allowed, as they were with the pre-moratorium guidelines. Realistically, I don't think there will be any throttling of the rollout. It will be a good stress test of the new framework, and if everything goes smoothly, that would bode well for the future.


I'm proposing an alternate implementation in which the data is preserved by the person that cares about it -- the seeker -- instead of by the web site that just happens to be supporting the checker.

Speaking of fizzymagic, I was thinking of his Mind Your Master as "a good example" of the kind of confirmation token I'm proposing, but it occurs to me I really just stole the idea straight up from that cache and should have given fizzymagic credit for the idea.

 

If you want to see what I'm talking about, take a look to see how fizzymagic's puzzle cache works. When I completed the game, it printed the token q0uqQQstjwHeaLb7jS9GFTjnJ9VyXkm8 for me to post in my log. Anyone, including fizzymagic, can confirm I completed the game by taking that token and pasting it to fizzymagic's token decryptor. A challenge checker could print a similar token, allowing anyone with that token to confirm that that user satisfied the challenge requirements at that point in time.
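One standard way to build such a token is a keyed hash (HMAC) over the user, challenge, and date, verifiable by anyone who can reach the issuing site. This is only a guess at the mechanism - fizzymagic's actual scheme isn't published, and the secret key, field layout, and truncation below are all invented for illustration:

```python
import base64
import hashlib
import hmac
from datetime import date

SECRET = b"checker-site-secret"  # hypothetical: known only to the checker host

def mint_token(username, challenge_tag, when):
    """Mint a pass token binding a user to a challenge on a given date."""
    msg = "|".join([username, challenge_tag, when.isoformat()]).encode()
    mac = hmac.new(SECRET, msg, hashlib.sha256).digest()
    return when.isoformat() + ":" + base64.urlsafe_b64encode(mac)[:22].decode()

def verify_token(username, challenge_tag, token):
    """Recompute the token from its claimed date and compare."""
    claimed_date = token.split(":", 1)[0]
    expected = mint_token(username, challenge_tag, date.fromisoformat(claimed_date))
    return hmac.compare_digest(expected, token)
```

The CCO pastes the token from the log into the site's decryptor; because only the site knows the key, a valid token proves the checker really issued it for that user on that date.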


If PGC includes a hash decryptor, then yeah, no additional data would need to be stored at PGC, since the verification is in the encrypted hash itself. It just means that the decryption algorithm really needs to remain entirely secure, or it loses its value (even if it's not worth it to 'cheat' anyway), since it really becomes just like pasting result text in the log. That is to say, asking PGC whether a user has previously Passed a checker is secure, so moving the verification into a hash implies the hash should retain that level of integrity.

 

Anyway, it's certainly a feasible option, I think... Also, love the setup for that mastermind puzzle! I wouldn't have thought something like that was allowed, but fizzymagic pulled out all the stops, it seems, to make sure it's an acceptable puzzle :) I would think that requiring the verification code from the game in the Find log would be treated as an ALR, if the finder gets the coordinates to the cache. What if someone finds the cache but doesn't paste their code in the Find log? Is he allowed to delete the Find?

blah that's a subject for a different thread... never mind.


And in the case of an unreasonable, though disputed, rejection?

If someone's being unreasonable, then it's good that GS gets involved, but not to arbitrate the dispute. Someone being unreasonable needs to be educated or, if it continues, disciplined. They shouldn't be allowed to continue wasting everyone's time generating unreasonable disputes.

In which case formal arbitration would be a good thing - I thought so :)

You say that as if I've spoken against formal arbitration.

 

In the real world, the more subtle the conflict, the more important arbitration is, so it's natural to take that thinking into this context. But, in fact, in a game, the reverse is true: the more subtle the conflict, the more obvious it is that both parties are being bad sports by not giving in. The discussions of how checkers can handle subtle corner cases, while interesting, just emphasize to me that the effort is going into preventing or handling disputes that we shouldn't tolerate to begin with.

 

But that doesn't mean that there aren't cases where one of the parties is being unreasonable. Those cases might require arbitration just as they do with any other cache type. But at that end of the spectrum, I'm not convinced a checker will do anything to prevent or reveal such abuse. Indeed, if a CCO is determined to be mean, it's easy to imagine the checker itself being used to game the system in a way that annoys seekers.

Link to comment

I'll wait and see how the challenge checker works on the Noah's Ark Challenge (find thirteen pairs of caches with an animal's name in the title).

Will the checker be able to find the 'hen' in:

Hell's Kitchen's Kitchen!

POPS - Stonehendge

If it cannot, then Lua must be a shockingly poor programming language. In most languages, finding the substring "hen" is trivial; what's actually harder is matching "hen" only as a complete word.

 

And a really good checker would still discard that pair of caches, as two hens do not a Noah's pair make.
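The substring-versus-whole-word distinction is easy to demonstrate (the animal list and extra cache titles here are just assumptions for the example):

```python
import re

titles = [
    "Hell's Kitchen's Kitchen!",
    "POPS - Stonehendge",
    "The Hen House",
    "Ox Bow Bend",
]

# Naive substring match: "hen" is found inside "Kitchen" and "Stonehendge".
substring_hits = [t for t in titles if "hen" in t.lower()]

# Whole-word match: \b word boundaries only accept "hen" standing alone.
word_hits = [t for t in titles if re.search(r"\bhen\b", t, re.IGNORECASE)]

print(substring_hits)  # the first three titles, including the false positives
print(word_hits)       # ['The Hen House']
```

Either behavior is a one-liner; the real question is which one the challenge intends, and that has to be decided by whoever configures the checker.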

Link to comment

I don't understand what you don't understand. You claim to have a checker that works. But it doesn't work. That seems fairly simple. So why do you claim that it does, when it doesn't?

Check my found cache list for the names of animals.

Sorry. Cannot do that. Need a list of acceptable words.

Doh!!! (Do people actually still use that word?) Then your checker does not work! Seems simple to me. I do not understand why you keep claiming that you have a checker for this, when your checker fails abysmally.

What a complete failure! Yet you keep claiming you have a checker that works, when you do not.

The fact that you don't get that there is a massive difference between the code for the checker, which is written and works, and the parameters for the code, which is the list of words, just shows your ignorance.

 

Try looking up parameters.

 

An analogy you might understand. A car manufacturer can supply a customer a working vehicle. However if the customer doesn't supply the fuel then the car won't work.

 

In this case the car is the checker, the car manufacturer is the checker writer. The customer is the CO and the fuel is the parameter list, e.g. the list of words.

 

The checker/car is supplied and works perfectly but it's up to the customer to ensure they keep it running.

 

So I repeat: such a checker is written; it just needs the fuel to make it go.
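The code-versus-parameters split being argued here can be sketched in a few lines: the checker script is generic and fixed, while the CO supplies the word list as configuration. Everything below (function name, data shapes, the simplified substring match) is hypothetical illustration, not Project-GC's actual API:

```python
def animal_title_checker(finds, config):
    """Return the finds whose title contains one of the configured words.

    finds  -- list of (gc_code, title) tuples from the user's find history
    config -- dict supplied by the CO, e.g. {"words": ["hen", "cat", "ox"]}
    """
    words = config.get("words")
    if not words:
        # No fuel in the car: the generic code cannot run without its parameters.
        raise ValueError("CO must supply a word list")
    return [
        (code, title) for code, title in finds
        if any(w.lower() in title.lower() for w in words)
    ]


finds = [("GC111", "The Hen House"), ("GC222", "Riverside Walk")]
print(animal_title_checker(finds, {"words": ["hen"]}))  # [('GC111', 'The Hen House')]
```

The script itself never changes from one challenge to the next; whether it "works" for a given challenge depends entirely on whether the CO has supplied a sensible word list.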

Edited by ShammyLevva
Link to comment

I don't understand what you don't understand. You claim to have a checker that works. But it doesn't work. That seems fairly simple. So why do you claim that it does, when it doesn't?

Check my found cache list for the names of animals.

Sorry. Cannot do that. Need a list of acceptable words.

Doh!!! (Do people actually still use that word?) Then your checker does not work! Seems simple to me. I do not understand why you keep claiming that you have a checker for this, when your checker fails abysmally.

What a complete failure! Yet you keep claiming you have a checker that works, when you do not.

The fact that you don't get that there is a massive difference between the code for the checker, which is written and works, and the parameters for the code, which is the list of words, just shows your ignorance.

 

Try looking up parameters.

 

An analogy you might understand. A car manufacturer can supply a customer a working vehicle. However if the customer doesn't supply the fuel then the car won't work.

 

In this case the car is the checker, the car manufacturer is the checker writer. The customer is the CO and the fuel is the parameter list, e.g. the list of words.

 

The checker/car is supplied and works perfectly but it's up to the customer to ensure they keep it running.

 

So I repeat: such a checker is written; it just needs the fuel to make it go.

 

They claimed to have made a checker that works. It does not work. They claim that we have not set up the right parameters. The fact is that it does not work.

Seems fairly simple to me.

"We have a checker that works." "No. It does not work."

I don't care which gas gives me the best fuel economy. Or which car has the best fuel economy.

Does the checker work? No. It does not. Why lie to me? "Oh. You have not crossed your i's and dotted your t's. And clicked your ruby slippers."

We've been lied to. The checkers do not work.

Can I set up a challenge cache where the checker works? Only in a very simplistic set.

Sorry. We've been lied to. In most cases, the checker will not work.

Link to comment

Meanwhile, when can we expect GSpeak to lift the moratorium? After they've crossed the i's and dotted the t's?

 

Geocaching HQ is nearly ready to announce the end of the moratorium on "challenge cache" submissions. However, a few details remain to be addressed. We will complete the process and present an updated framework for challenge caches within the next few weeks.

Link to comment

I don't understand what you don't understand. You claim to have a checker that works. But it doesn't work. That seems fairly simple. So why do you claim that it does, when it doesn't?

Check my found cache list for the names of animals.

Sorry. Cannot do that. Need a list of acceptable words.

Doh!!! (Do people actually still use that word?) Then your checker does not work! Seems simple to me. I do not understand why you keep claiming that you have a checker for this, when your checker fails abysmally.

What a complete failure! Yet you keep claiming you have a checker that works, when you do not.

The fact that you don't get that there is a massive difference between the code for the checker, which is written and works, and the parameters for the code, which is the list of words, just shows your ignorance.

 

Try looking up parameters.

 

An analogy you might understand. A car manufacturer can supply a customer a working vehicle. However if the customer doesn't supply the fuel then the car won't work.

 

In this case the car is the checker, the car manufacturer is the checker writer. The customer is the CO and the fuel is the parameter list, e.g. the list of words.

 

The checker/car is supplied and works perfectly but it's up to the customer to ensure they keep it running.

 

So I repeat: such a checker is written; it just needs the fuel to make it go.

 

They claimed to have made a checker that works. It does not work. They claim that we have not set up the right parameters. The fact is that it does not work.

Seems fairly simple to me.

"We have a checker that works." "No. It does not work."

I don't care which gas gives me the best fuel economy. Or which car has the best fuel economy.

Does the checker work? No. It does not. Why lie to me? "Oh. You have not crossed your i's and dotted your t's. And clicked your ruby slippers."

We've been lied to. The checkers do not work.

Can I set up a challenge cache where the checker works? Only in a very simplistic set.

Sorry. We've been lied to. In most cases, the checker will not work.

 

Ok, more simply: checkers work just fine, as described ad nauseam, assuming the user isn't completely clueless. I'm sorry if they don't work for you, but that does then mean you are completely clueless. It really, really isn't difficult to understand.

 

Note you completely misunderstood my analogy and went on about fuel economy??? I was talking about fuel being present or not. If the CO provides the list of words (the fuel for the checker), then the checker will work perfectly. If the CO fails to supply the fuel, that doesn't mean the car manufacturer is to blame; the blame lies with the guy who doesn't put fuel in the car.

 

BTW have you noticed that for almost 70% of the old challenges out there there is a working checker? No you probably haven't.

Link to comment

Ok, more simply: checkers work just fine, as described ad nauseam, assuming the user isn't completely clueless. I'm sorry if they don't work for you, but that does then mean you are completely clueless. It really, really isn't difficult to understand.

 

Note you completely misunderstood my analogy and went on about fuel economy??? I was talking about fuel being present or not. If the CO provides the list of words (the fuel for the checker), then the checker will work perfectly. If the CO fails to supply the fuel, that doesn't mean the car manufacturer is to blame; the blame lies with the guy who doesn't put fuel in the car.

 

BTW have you noticed that for almost 70% of the old challenges out there there is a working checker? No you probably haven't.

 

Ah. Throwing in insults. Great way to make your point!

 

Sorry. I've never heard the word 'fuel' used in the way you used it.

 

And, again, you are incorrect. I have checked the challenges that I have found, and none of them have a checker on the cache page. So shouldn't the answer be 0%?

 

Trying to find a checker on the web page looks almost impossible. All I see is a long list of checkers.

Link to comment

And, again, you are incorrect. I have checked the challenges that I have found, and none of them have a checker on the cache page. So shouldn't the answer be 0%?

 

Trying to find a checker on the web page looks almost impossible. All I see is a long list of checkers.

 

I assume you are looking at http://project-gc.com/Tools/Challenges to see the list of checkers. That page has a search box at the top. Just enter (parts of) the name of the challenge and the list will be filtered as you go. Or enter the GC code of the challenge.

 

I haven't checked all your challenge cache finds, but I checked some of them, and while none of the ones I checked had checkers listed on the cache page, many of them do have checkers that can be easily found using the search page mentioned above. (I also happened to note that one of them had a checker that imposed more restrictions than the cache page did, so I created a new one with a note about this.) That none of the challenges you happened to log have links to the checkers of course doesn't mean anything since all these challenges are (obviously) published pre-moratorium when there was no requirement for such a link.

Link to comment

That none of the challenges you happened to log have links to the checkers of course doesn't mean anything since all these challenges are (obviously) published pre-moratorium when there was no requirement for such a link.

 

It does mean that there are challenges already in existence which have no checker written for them - for whatever reason - and that, as of this moment, those challenges would not be published if they were submitted after the moratorium is lifted, unless a checker were written for them.

 

So yeah - it does mean something :)

Link to comment

 

BTW have you noticed that for almost 70% of the old challenges out there there is a working checker? No you probably haven't.

...

 

And, again, you are incorrect. I have checked the challenges that I have found, and none of them have a checker on the cache page. So shouldn't the answer be 0%?

 

Trying to find a checker on the web page looks almost impossible. All I see is a long list of checkers.

 

I'm not going to start throwing around absolute numbers, but the fact that the ones you looked at didn't have a checker on the page isn't statistically significant, unless you looked at a large sample size, with a representative set of challenges.

 

Anyway the fact is that a lot of existing challenge caches already have checkers listed on their pages, and of the ones that don't have one listed many do have a functionally useful checker available if the CCO or finders choose to use it.

 

As for it being almost impossible to find a checker, you may be right, but if that means a CCO is going to have to put more effort into setting up their CC by finding a suitable checker, then for me that's no bad thing. The more convoluted the challenge, the more effort will be required to find or set up the Checker.

Edited by MartyBartfast
Link to comment

I'm not going to start throwing around absolute numbers, but the fact that the ones you looked at didn't have a checker on the page isn't statistically significant, unless you looked at a large sample size, with a representative set of challenges.

 

But it is still relevant and important.

Link to comment

I'm not going to start throwing around absolute numbers, but the fact that the ones you looked at didn't have a checker on the page isn't statistically significant, unless you looked at a large sample size, with a representative set of challenges.

 

But it is still relevant and important.

 

No it isn't, in isolation it's a meaningless number, so how is it relevant?

Link to comment

I'm not going to start throwing around absolute numbers, but the fact that the ones you looked at didn't have a checker on the page isn't statistically significant, unless you looked at a large sample size, with a representative set of challenges.

 

But it is still relevant and important.

 

No it isn't, in isolation it's a meaningless number, so how is it relevant?

 

See my previous post.

 

Plus statistics are only part of the picture - they tend to ignore important exceptions.

Link to comment

That none of the challenges you happened to log have links to the checkers of course doesn't mean anything since all these challenges are (obviously) published pre-moratorium when there was no requirement for such a link.

 

It does mean that there are challenges already in existence which have no checker written for them - for whatever reason - and that, as of this moment, those challenges would not be published if they were submitted after the moratorium is lifted, unless a checker were written for them.

 

So yeah - it does mean something :)

Seriously? You're applying the new guidelines to challenges published BEFORE the new guidelines (at least the one we know about) were announced, and then saying that they wouldn't be published NOW, even though they're already published, because they don't have a checker? It's the same as saying that old caches which have been grandfathered in, for whatever reason, wouldn't meet the current guidelines and therefore wouldn't be published.

 

I'm not sure what meaning we're to infer from this, other than the obvious, that new checkers would have to be written for NEW challenges that mimic or copy the old challenges, which don't have (and weren't required to have) checkers. As far as we know, the old challenges aren't going to be required to have a checker. IF that changes, then you have a more meaningful point.

Link to comment

That none of the challenges you happened to log have links to the checkers of course doesn't mean anything since all these challenges are (obviously) published pre-moratorium when there was no requirement for such a link.

 

It does mean that there are challenges already in existence which have no checker written for them - for whatever reason - and that, as of this moment, those challenges would not be published if they were submitted after the moratorium is lifted, unless a checker were written for them.

 

So yeah - it does mean something :)

Seriously? You're applying the new guidelines to challenges published BEFORE the new guidelines (at least the one we know about) were announced, and then saying that they wouldn't be published NOW, even though they're already published, because they don't have a checker? It's the same as saying that old caches which have been grandfathered in, for whatever reason, wouldn't meet the current guidelines and therefore wouldn't be published.

 

I'm not sure what meaning we're to infer from this, other than the obvious, that new checkers would have to be written for NEW challenges that mimic or copy the old challenges, which don't have (and weren't required to have) checkers. As far as we know, the old challenges aren't going to be required to have a checker. IF that changes, then you have a more meaningful point.

 

Seriously? You think that's what I'm doing?

 

What has gone has gone - let's live in the now.

 

It does seem though that having thought about it you've understood the point I was making. How much meaning you assign to the point is entirely up to you :)

Link to comment

What has gone has gone - let's live in the now.

 

It does seem though that having thought about it you've understood the point I was making. How much meaning you assign to the point is entirely up to you :)

 

You were the one that referenced the old caches and how they wouldn't meet new guidelines, not me.

 

The meaning you imply is already understood based solely on the announced guideline change. There's nothing new here. New challenges will have to have checkers, even if they're based on older challenges, which might not currently have a checker written for it. The possible outcome, of old challenges being required to follow suit, is unlikely to happen IMO, based on previous guideline changes and the outcome of those caches, which were grandfathered in rather than archived or required to meet the new guidelines.

Link to comment

What has gone has gone - let's live in the now.

 

It does seem though that having thought about it you've understood the point I was making. How much meaning you assign to the point is entirely up to you :)

 

You were the one that referenced the old caches and how they wouldn't meet new guidelines, not me.

 

The meaning you imply is already understood based solely on the announced guideline change. There's nothing new here. New challenges will have to have checkers, even if they're based on older challenges, which might not currently have a checker written for it. The possible outcome, of old challenges being required to follow suit, is unlikely to happen IMO, based on previous guideline changes and the outcome of those caches, which were grandfathered in rather than archived or required to meet the new guidelines.

 

Yes, yes and yes :)

 

All I was getting at was that there are forms of challenge out there now which people might want to duplicate in their local area - and they won't be able to because there isn't a checker.

 

That's all.

 

Not sure why you're taking issue with it.

Link to comment

 

All I was getting at was that there are forms of challenge out there now which people might want to duplicate in their local area - and they won't be able to because there isn't a checker.

 

 

I'm not taking issue so much with what you're saying as with the importance you attached to it. I just don't see any meaning or relevance to old challenges not having checkers. Just because a checker doesn't exist for an old challenge does not mean that one can't be written for a new challenge that copies or mimics the old one. It also does not mean that they actually can write one for one of the older ones that they're hoping to copy or mimic either. It goes both ways.

Link to comment

 

All I was getting at was that there are forms of challenge out there now which people might want to duplicate in their local area - and they won't be able to because there isn't a checker.

 

 

I'm not taking issue so much with what you're saying as with the importance you attached to it. I just don't see any meaning or relevance to old challenges not having checkers. Just because a checker doesn't exist for an old challenge does not mean that one can't be written for a new challenge that copies or mimics the old one. It also does not mean that they actually can write one for one of the older ones that they're hoping to copy or mimic either. It goes both ways.

 

I don't remember saying it was a matter of life and death, simply something which is likely to have an impact on the landscape going forward that is worth bearing in mind.

Link to comment

I don't remember saying it was a matter of life and death, simply something which is likely to have an impact on the landscape going forward that is worth bearing in mind.

 

I get that and I don't disagree, but the impact is for every new challenge to be published, regardless of whether or not it's a new concept for a challenge or a copy of an old challenge. Singling out old challenges without a checker has the same value as singling out new challenges that are also without checkers, at least to me. The amount of work to create a checker is the same, regardless of whether or not the challenge in question is a new one or based on an old one. Neither one can get published without a checker.

 

Those challenges that have checkers already written (or some variation that might be able to be tweaked) will have a built in advantage. Those that have nothing will require more time to get published. That's certainly one way to keep workload down a bit!

Link to comment

Just because a checker doesn't exist for an old challenge does not mean that one can't be written for a new challenge that copies or mimics the old one. It also does not mean that they actually can write one for one of the older ones that they're hoping to copy or mimic either. It goes both ways.

Just to be clear about this issue...

 

It should be noted that while some checker-less old challenge caches can have checkers written for them (or for future versions of them), this doesn't apply to all checker-less old challenge caches. For example, a meta-challenge (find 100 challenge caches), is very unlikely to ever have a checker written for it.

Link to comment

Just because a checker doesn't exist for an old challenge does not mean that one can't be written for a new challenge that copies or mimics the old one. It also does not mean that they actually can write one for one of the older ones that they're hoping to copy or mimic either. It goes both ways.

Just to be clear about this issue...

 

It should be noted that while some checker-less old challenge caches can have checkers written for them (or for future versions of them), this doesn't apply to all checker-less old challenge caches. For example, a meta-challenge (find 100 challenge caches), is very unlikely to ever have a checker written for it.

 

Bolded because it has already been stated. There's no guarantee that ANY checker can be written for ANY challenge, new or old.

Link to comment

Just because a checker doesn't exist for an old challenge does not mean that one can't be written for a new challenge that copies or mimics the old one. It also does not mean that they actually can write one for one of the older ones that they're hoping to copy or mimic either. It goes both ways.

Just to be clear about this issue...

 

It should be noted that while some checker-less old challenge caches can have checkers written for them (or for future versions of them), this doesn't apply to all checker-less old challenge caches. For example, a meta-challenge (find 100 challenge caches), is very unlikely to ever have a checker written for it.

 

However, if you define a challenge cache as one that has the word challenge in the title, then there is already a checker that can do that. In fact, there is even a challenge of that nature in my area, and it has a checker tagged to it on project-gc (but it is not on the cache page at this point).

Link to comment

Just because a checker doesn't exist for an old challenge does not mean that one can't be written for a new challenge that copies or mimics the old one. It also does not mean that they actually can write one for one of the older ones that they're hoping to copy or mimic either. It goes both ways.

Just to be clear about this issue...

 

It should be noted that while some checker-less old challenge caches can have checkers written for them (or for future versions of them), this doesn't apply to all checker-less old challenge caches. For example, a meta-challenge (find 100 challenge caches), is very unlikely to ever have a checker written for it.

Bolded because it has already been stated. There's no guarantee that ANY checker can be written for ANY challenge, new or old.

Sorry. I misread your second sentence.

Link to comment

I don't remember saying it was a matter of life and death, simply something which is likely to have an impact on the landscape going forward that is worth bearing in mind.

 

I get that and I don't disagree, but the impact is for every new challenge to be published, regardless of whether or not it's a new concept for a challenge or a copy of an old challenge. Singling out old challenges without a checker has the same value as singling out new challenges that are also without checkers, at least to me. The amount of work to create a checker is the same, regardless of whether or not the challenge in question is a new one or based on an old one. Neither one can get published without a checker.

 

Those challenges that have checkers already written (or some variation that might be able to be tweaked) will have a built in advantage. Those that have nothing will require more time to get published. That's certainly one way to keep workload down a bit!

 

My bold.

 

The point is that there is an inequality here - they are not the same so they don't have the same value.

 

An old challenge (of type X) without a checker = YES

 

A new challenge (of type X) without a checker = NO

Link to comment

Just because a checker doesn't exist for an old challenge does not mean that one can't be written for a new challenge that copies or mimics the old one. It also does not mean that they actually can write one for one of the older ones that they're hoping to copy or mimic either. It goes both ways.

Just to be clear about this issue...

 

It should be noted that while some checker-less old challenge caches can have checkers written for them (or for future versions of them), this doesn't apply to all checker-less old challenge caches. For example, a meta-challenge (find 100 challenge caches), is very unlikely to ever have a checker written for it.

 

However, if you define a challenge cache as one that has the word challenge in the title, then there is already a checker that can do that. In fact, there is even a challenge of that nature in my area, and it has a checker tagged to it on project-gc (but it is not on the cache page at this point).

 

Also, after the moratorium, theoretically there will be a cache property that identifies challenge caches, so hopefully they open that to the API, in which case checkers will be able to explicitly identify challenge caches (instead of missing some that may or may not match the Unknown+"Challenge" title requirement).

Link to comment

Just because a checker doesn't exist for an old challenge does not mean that one can't be written for a new challenge that copies or mimics the old one. It also does not mean that they actually can write one for one of the older ones that they're hoping to copy or mimic either. It goes both ways.

Just to be clear about this issue...

 

It should be noted that while some checker-less old challenge caches can have checkers written for them (or for future versions of them), this doesn't apply to all checker-less old challenge caches. For example, a meta-challenge (find 100 challenge caches), is very unlikely to ever have a checker written for it.

 

However, if you define a challenge cache as one that has the word challenge in the title, then there is already a checker that can do that. In fact, there is even a challenge of that nature in my area, and it has a checker tagged to it on project-gc (but it is not on the cache page at this point).

 

Also, after the moratorium, theoretically there will be a cache property that identifies challenge caches, so hopefully they open that to the API, in which case checkers will be able to explicitly identify challenge caches (instead of missing some that may or may not match the Unknown+"Challenge" title requirement).

 

You have inside information? :ph34r:

Link to comment

Also, after the moratorium, theoretically there will be a cache property that identifies challenge caches, so hopefully they open that to the API, in which case checkers will be able to explicitly identify challenge caches (instead of missing some that may or may not match the Unknown+"Challenge" title requirement).

You have inside information? :ph34r:

No. See bold.

There easily may not be. Hopefully there will be.

That is to say, after all this, I hope that the only change is not merely the addition of the checker rule to the concept of challenge caches.

Edited by thebruce0
Link to comment
Guest
This topic is now closed to further replies.