Target.

Everything posted by Target.

  1. Forgot to search for old bug reports. But since the first one was posted on 01 November 2010, a reminder might be needed, as this has not been fixed in almost 6 years. It can't be hard to fix: one SQL update for each incorrect version, and removing (or making unselectable) the wrong country names in the table of existing countries.
  2. For some reason there are three countries for caches in Saint Kitts and Nevis:
     Saint Kitts and Nevis https://www.geocaching.com/play/search?ot=4&c=264 (25 caches)
     St Kitts https://www.geocaching.com/play/search?ot=4&c=172 (4 caches)
     Nevis and St Kitts https://www.geocaching.com/play/search?ot=4&c=142 (only 2 archived events: http://coord.info/GC4X9AD and http://coord.info/GC6B1AJ)
     All three can be selected for new caches. The problem is that it is hard to get all caches for the country in a PQ, and the stats will show that you have logged caches in more countries than you actually have. All the caches are in the same country, Saint Kitts and Nevis (https://en.wikipedia.org/wiki/Saint_Kitts_and_Nevis), and they should be merged into the correct "Saint Kitts and Nevis", which also has the most hides. (A sketch of the merge is below.)
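     Such a merge is, in principle, one re-pointing UPDATE per duplicate plus retiring the duplicate rows. A minimal sketch against a toy SQLite database in Python; the table and column names are my own assumptions, not Groundspeak's actual schema:

         import sqlite3

         con = sqlite3.connect(":memory:")
         con.executescript("""
             CREATE TABLE countries (id INTEGER PRIMARY KEY, name TEXT, selectable INTEGER);
             CREATE TABLE caches (code TEXT PRIMARY KEY, country_id INTEGER);
             INSERT INTO countries VALUES
                 (264, 'Saint Kitts and Nevis', 1),
                 (172, 'St Kitts', 1),
                 (142, 'Nevis and St Kitts', 1);
             -- Toy data; GCTOY01 is a made-up placeholder code.
             INSERT INTO caches VALUES
                 ('GC4X9AD', 142), ('GC6B1AJ', 142), ('GCTOY01', 172);
         """)
         # One UPDATE per incorrect country: move its caches to the correct id
         # and make the duplicate unselectable for new caches.
         for wrong_id in (172, 142):
             con.execute("UPDATE caches SET country_id = 264 WHERE country_id = ?", (wrong_id,))
             con.execute("UPDATE countries SET selectable = 0 WHERE id = ?", (wrong_id,))
         con.commit()
         print(con.execute(
             "SELECT country_id, COUNT(*) FROM caches GROUP BY country_id").fetchall())
         # -> [(264, 3)]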
  3. Search functionality in the API is limited for external apps, as indicated in another suggestion thread ignored by GS: [FEATURE] Extend API functionality to include searching. It appears that the official app can search with parameters not available in the open API, presumably because it is official and they can control what happens. I'd like to see the functionality of GSAK become "official" -- this way they don't have to enhance the existing PQ system to fulfill multiple different requests for it to include other useful parametrization. If I had to guess (and I do -- this is a one-way no-feedback forum) I'd say the limitations on the open API are due to processing or bandwidth issues. Controlling the app-ized hosted version of GSAK would theoretically assuage these concerns.
     I forgot to ask in my other post: what search options are available in the official app? I could find no search options in the app, only download of caches around a point and a local filter on the result, but I don't use that app and have likely missed something.
  4. Search functionality in the API is limited for external apps, as indicated in another suggestion thread ignored by GS: [FEATURE] Extend API functionality to include searching. It appears that the official app can search with parameters not available in the open API, presumably because it is official and they can control what happens. I'd like to see the functionality of GSAK become "official" -- this way they don't have to enhance the existing PQ system to fulfill multiple different requests for it to include other useful parametrization. If I had to guess (and I do -- this is a one-way no-feedback forum) I'd say the limitations on the open API are due to processing or bandwidth issues. Controlling the app-ized hosted version of GSAK would theoretically assuage these concerns.
     If you look at GSAK, it is an interface to a local SQL database, with macro support, written in an old Borland Delphi version. To port that to a web version running on a server, a complete rewrite has to be done. Even the database structure has to be changed if multiple users are to share the same database, because of the way your data is stored. And limitations have to be added, because some filter options, like advanced regexp search on all cache logs, will be slow. Direct SQL access is something you likely can't keep, because it is too easy to overload the database.
     The limitation in the API that keeps you from having an updated database in GSAK is not only a processing or bandwidth problem. The limitation is an oversight, or a deliberate choice, in the API functions. It looks to me like the API is designed for use in a mobile app, where you want info on the caches around you or on the cache you are looking at. It is not designed for keeping a local database updated. You can download all logs for a cache, but you can't ask for logs changed after X so that you can update incrementally after your last query. The same goes for new caches and changes in cache status. It might be deliberate, to make it harder to copy the database.
     If "changed after X" parameters were available in the API, database and bandwidth usage would decrease. If I want all caches in my state to have the latest cache data, I have to refresh all of them, and all of them have to be fetched from the database and sent to me. If I could ask for changes after X, the database would only have to look at a last-changed field and return only the caches with changes. That is much less data. (The difference is sketched below.)
     But I would like some GSAK functionality on the PGC site, like adding custom waypoints to caches. It would be nice if I could add all stages of a multicache for future reference on gc.com. And the option to add custom data to group caches: in GSAK I can easily add a field for field puzzles, bonus caches and more, change the name with a script, and easily export so that unsolved mysteries, field puzzles and bonuses are not included.
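     To make the "changed after X" point concrete, here is a hedged sketch in Python of the two update strategies against a hypothetical endpoint. The URL and the changed_after parameter are inventions; the real API has no such call, which is the complaint:

         from datetime import datetime
         import requests

         API = "https://api.example.com/caches"  # placeholder, not a real endpoint

         def refresh_all(codes):
             # Today's reality: every cache is fetched again on every update,
             # whether or not anything changed.
             return [requests.get(f"{API}/{code}").json() for code in codes]

         def refresh_changed(last_sync: datetime):
             # With a changed_after parameter the server can answer from an
             # index on a last-modified column and send only what changed.
             resp = requests.get(API, params={"changed_after": last_sync.isoformat()})
             return resp.json()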
  5. As an aside, why in the world did PGC choose Lua? Lua is a language designed for a use case that is very, very different from what PGC does. It seems very ill-suited to the task. Lua would be great for GSAK, but for a database-driven Web app? Not so much. ETA: yeah, yeah, Turing complete blah blah blah. Lua doesn't even have INTEGERS, fer crying out loud!
     My somewhat informed guess is that Lua was used because it was easy to add a Lua sandbox; the checker system has to be secure and must not add security holes to the website. What would be a more appropriate language for checkers? It is only the checkers that run Lua; the rest of the site is PHP. Lua is used in many video games as a scripting language, and that is similar to how PGC uses it. I have written checkers, and the lack of integers is not a problem, since a double can hold a 32-bit integer exactly. (A quick check is below.)
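     The integer point is easy to verify: an IEEE 754 double represents every integer up to 2^53 exactly, which covers the 32-bit range with room to spare. A quick check in Python, whose float is the same double type Lua 5.1 uses for all numbers:

         # Every 32-bit integer survives a round trip through a double.
         assert all(int(float(n)) == n for n in (0, 1, 2**31 - 1, 2**32 - 1))
         # Exactness is only lost beyond 2**53.
         assert float(2**53) == float(2**53 + 1)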
  6. Ok I think I understand... basically you'd need a snapshot of your DT grid when you start the challenge to prove you chose valid DT combos to use for the challenge, since the quantities in the grid could change over the course of the challenge, in that the checker might show at the end that the DT isn't one of the lowest-count spots. That is an interesting quandary. I wonder if a reviewer could respond as to whether that type of challenge would be (should have been) publishable, at least with the most recent publish guideline set. How is it verifiable? Technically, the CO has to keep a list of claimed start stats for cachers who do the challenge. So how does the reviewer reconcile the issue of unresponsive COs? Verification is 100% on the CO, and if there's no record of the DT grid as of a past date, it would be impossible to verify (without analyzing the cacher's MyFinds log history). Now theoretically, if PGC does have access to the find log history with dates, then it would be possible to run the checker. If the user inputs the DTs they chose for the challenge as well as the start date, then the script could verify that the DTs were among the lowest as of that date, and whether the cacher qualifies with the subsequent finds.
     I don't think it should have been allowed at the time of publication, because of the rule in force then: "3.2 Challenge geocaches cannot include restrictions based on 'date found'; geocaches found before the challenge geocache publication date can count towards the achievement of the challenge." https://web.archive.org/web/20140916072453/http://support.Groundspeak.com/index.php?pg=kb.page&id=206 That doesn't allow a requirement to post a note to start the challenge, in my opinion. With the modification that you have to have a week where you found the same thing, without posting the note, it would in my interpretation be allowed. You might also want to add a requirement that x% of the DT matrix has to be filled before you start; it might be that too many started with a streak of 7 days, since it is quite easy to do in the beginning. With that modification it can be checked automatically. (A sketch of the check is below.)
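     If the checker can see the full find history with dates, the verification described above is mechanical. A hedged sketch of the logic in Python; real PGC checkers are written in Lua, and the data layout here is invented for illustration:

         from collections import Counter
         from datetime import date

         def qualifies(finds, chosen_dts, start: date) -> bool:
             """finds: (found_date, difficulty, terrain) tuples from the history.
             Check every chosen D/T cell was among the least-filled grid cells
             as of the start date, and was found again on or after it."""
             grid = Counter((d, t) for day, d, t in finds if day < start)
             # All 81 cells, D and T each running 1.0 to 5.0 in 0.5 steps.
             cells = [(d / 2, t / 2) for d in range(2, 11) for t in range(2, 11)]
             lowest = min(grid[c] for c in cells)  # Counter gives 0 for unfound cells
             found_after = {(d, t) for day, d, t in finds if day >= start}
             return all(grid[dt] == lowest and dt in found_after for dt in chosen_dts)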
  7. How would YOU program a computer to deal with a list of unknown items? For this specific case (caches containing the name of an animal) I'd use an existing comprehensive dataset of animal names such as the Integrated Taxonomic Information System. It contains over 700 thousand scientific names and 125K common names. Sure, that list might not have every animal name, but odds are if a cache contains an animal name it would probably be on a comprehensive list. As a programmer I don't have to maintain that list myself, but can call upon a service with a name and it will tell me whether or not the name is on the list.
     That might solve the problem for animals in English. I tried to enter hund (dog in Swedish) and got no result. I have seen many challenges with x number of y in the name, and I am not sure I have seen any with a language requirement. And it doesn't even solve the problem in English by itself: Rook is in the database, but the plural form rooks is not, and the cache "Rooks Nest Wood" http://coord.info/GC5YRCJ should be an animal cache if I am not mistaken. The database does not even contain the word dog, only Dogs as in the family and genus, because dog is not a species but the singular of a family of animals. The same goes for cow, and I suspect both should be accepted. I am not sure that adding s is the correct plural for all animals in English. And English is, in this case, easy as far as word forms go: for dog I can think of dog, dogs and dog's. In Swedish there would be hund, hunden, hundens, hundar, hundars, hundarna, hundarnas. And you can't take all words starting with hund, because hundra is one hundred and not related to dogs. It gets worse, because many words that are separate in English become compound words in Swedish, and the number of possible compound words is unlimited. The cache "Dogs Bowl" would be hundskålen and "Town dogs" stadshundar. Try matching that while ignoring all word combinations that refer to hundra (one hundred): hundrastgården (the dog exercise yard) is dog related, but hundraåringen (the hundred-year-old) is not. (The sketch below shows why naive matching fails.) It will be worse for ren (reindeer), ko (cow) and words related to myra (ant). And a list of approved words can't be used, because if I call a cache Koträdsmysteriet (The cow tree mystery), it is a valid Swedish word about a mystery cache related to a tree that looks like a cow. It might never have been used before and get 0 hits on Google.
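     The Swedish compound problem is easy to demonstrate with a naive substring rule in Python (the word list is mine, for illustration only):

         import re

         names = ["Hundskålen", "Stadshundar", "Hundrastgården", "Hundraåringen"]

         # Naive rule: any name containing 'hund' is dog related.
         print([n for n in names if re.search(r"hund", n, re.I)])
         # -> all four match, but Hundraåringen means 'the hundred-year-old'

         # Excluding 'hundra' (one hundred) fails the other way:
         print([n for n in names if re.search(r"hund(?!ra)", n, re.I)])
         # -> Hundrastgården (the dog exercise yard) is now wrongly dropped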
  8. You have logged 10 lab caches. They are counted in the number of finds, but they are not included in the My Finds Pocket Query.
  9. After I noticed some markdown bugs in GSAK I looked for more, and found some strange markdown-to-HTML conversions on geocaching.com. When trying unordered lists I found some strange results in the log editor. The bugs are described in the images, and the log text is included below each image.
     A blank line between two second-level unordered list elements has a strange effect on the surrounding lines, but works fine on the first level:

     * level 1 unordered list
     * level 1 unordered list
     * level 1 unordered list
      * level 2 unordered list

      * level 2 unordered list
      * level 2 unordered list
     * level 1 unordered list

     Correct list:

     * 0 space before *
      * 1 space before *
      * 1 space before *

     The "1 space" line will have a space shown before the 1 if the next line has more than 4 spaces before the *:

     * 0 space before *
      * 1 space before *
          * 5 space before *

     If there are more than 11 spaces before the * on the last line, it will not be interpreted as a list:

     * 0 space before *
      * 1 space before *
                 * 12 space before *
  10. The problem is that the "How to Format" text doesn't describe all the requirements or possibilities. For italic, bold and bold italic there must be a space before the first * and after the last. It was changed today, I suspect so that the popular {*FTF*} tags and some user names don't become {FTF} in italic, like they did when it was released yesterday. Add a space after the second * and it will be italic. Space is not completely correct: it can also be beginning of line or end of line, and after the second * (but not before the first *) it can also be , . ; : and perhaps some other characters I did not test. (A rough regex for the observed rule is below.)
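     The observed rule can be approximated with a regex. This is my reconstruction from the behaviour described above, not Groundspeak's actual parser:

         import re

         # '*text*' renders italic only when preceded by start-of-line or a space,
         # and followed by end-of-line, a space, or one of , . ; :
         ITALIC = re.compile(r"(?:^|(?<=\s))\*([^*\n]+)\*(?=$|[\s,.;:])", re.M)

         print(ITALIC.sub(r"<i>\1</i>", "I was *FTF* today"))    # becomes italic
         print(ITALIC.sub(r"<i>\1</i>", "I was {*FTF*} today"))  # left untouched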
  11. The nice thing about markdown is that it degrades nicely. On a device (such as a GPS), I'd much rather see "I'm so great, I was *FTF* on the cache" than "I'm so great, I was <i>FTF</i> on the cache". So getting the "raw" logs is the best way of handling this, as the 3rd party software stack is likely to be completely different than geocaching.com's.
     Markdown does degrade nicely, but non-markdown logs don't work as markdown as nicely. Look at this cache http://coord.info/GC31Z54 : the first log is mine, and the two following ones have unintentional header-2 text because of lines with -----. Old logs from before markdown should have been flagged as non-markdown in the database and shown as plain text, not treated as markdown until they are edited. A quick and dirty change would be to insert a blank line before lines consisting only of - or =, because those result in large text and are quite common. (A sketch of that preprocessing is below.) Lines starting and ending with #, which also change the text size, are not as common. The announcement had stats on how many logs used BB code and HTML, but I have seen no info on how many logs have unintentional markdown in them. I think that unintentional formatting is worse than missing formatting.
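     The quick-and-dirty fix is only a few lines: before rendering a legacy log, break the Setext-header interpretation by inserting a blank line ahead of any line made up solely of - or = characters. A sketch of my own, assuming logs could be preprocessed before the markdown pass:

         import re

         def defuse_setext(log_text: str) -> str:
             """Insert a blank line before ---/=== lines so the line above
             them is no longer turned into a header by the renderer."""
             return re.sub(r"(?m)^(?=[ \t]*[-=]+[ \t]*$)", "\n", log_text)

         print(defuse_setext("Nice cache, TFTC\n-----\nOut on a road trip"))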
  12. I started to fix my old logs with BB code or HTML in them, and found with some regular expressions and GSAK that I had 120 logs with that formatting, most of them href links. That was as many as I expected, 2.2% of my finds. The only problem I could not fix was that links are not possible in preformatted text, so some challenge logs of mine have lost some links.
     But then I found the most common markdown error in my logs: some text is rendered as header 2 and is quite large on the cache page. The reason is that one or more - alone on a row tags the line above as header 2. If there are more than three - you get a horizontal rule, as described in the markdown guide in the editor, but only if the line above is blank or just spaces; otherwise the line above is formatted as header 2. With some more regexps I found that I had 818 logs that will be incorrectly formatted, and that is 15% of my logs. There were more logs on the caches that got strange formatting because of -----.
     If someone is interested in checking their own logs and uses GSAK, this is the SQL query that I used (replace Target. with your nick):

         select name, parent, ltype, ltext
         from (
             select b.lparent as parent, ltype, ltext
             from (select llogid, lparent, ltype from logs where lby = "Target.") as a
             inner join LogMemo as b
                 on a.llogid = b.llogid and a.lparent = b.lparent
             where ltext regexp "\r\n.+\r\n(\-)+\r"
         )
         inner join caches on code = parent

     When I am on a caching trip I often write a short suffix about the trip, separated with ----, with a unique text for that cache above it; now the last line of the unique text is quite large. If there is an empty row above the --- a nice horizontal rule is displayed instead.
     The online editor on the log page is from http://www.toptensoftware.com/markdowndeep/dingus and that format includes more markdown than the log editor's "How to Format" guide. The Setext-style headers are not listed on gc.com, only the atx style.

     Setext-style:

         Header 1
         ========

         Header 2
         --------

     atx-style (closing #'s are optional):

         # Header 1 #
         ## Header 2 ##
         ###### Header 6

     I have seen stats that 3.5% of 560 million logs used HTML and BB code and will render incorrectly. But are there any stats on how many old logs have unintentional markdown in them? Logs with unintentional markdown result in much less readable log pages than missing HTML/BB code formatting does. I also noticed that the closing # is required in gc.com markdown but is optional in other markdown implementations; if that were not the case I would have 2073 logs with a heading 1 on the first row. It will be interesting to see whether mobile apps implement that when they render markdown.
  13. When I tried to create preformatted code spanning multiple lines I entered the following:

         `foo
         foo1
         foo2
         foo3`

     and it renders as below. Notice that foo3 exists twice in the output and only once in the input:

         foo foo1 foo2 foo3 foo3`
  14. There is a standard for saying you are FTF? I don't think I've seen that syntax. Most people, on my caches, just say FTF. No, but people playing the FTF side game use {*FTF*} or {FTF} or [FTF] in order to let a third-party programme know that this cache was FTF-ed. So they can move over to [FTF] anyway.
     The reason it was impossible for me to write {*FTF*} is that I use a Swedish keyboard. } is on the 0 key with AltGr, and AltGr triggers the key codes for Alt and Ctrl, while the markdown editor uses Ctrl for shortcuts. Shortcut Ctrl+0 is H0, i.e. normal text, so the line is set to normal and no } is added. If I switch to an English keyboard it is possible to write }, because only Shift is used. For the same reason a Swedish user can't write @ £ $ in logs, since they are on AltGr + 2 3 4.
  15. I tried the staging.geocaching.com site. It is impossible to add the standard {*FTF*} tag to a log: first the *FTF* becomes FTF in italic, and when the } is added the row is selected and the } is not added to the text. And please don't say that FTF is not a Groundspeak thing; you have made ads for premium accounts saying get premium to get notifications and FTFs. The `preformatted text` that I have seen in the markdown spec works, but it is not in the "How to Format" text. Will it always be supported and is it just missing from the guide, or is the inclusion temporary? It is quite useful when writing logs on challenges, to keep text with GC codes and cache names readable.
  16. I tried to use the script to strip the HTML from my old received logs and found a problem. The GmailApp.sendEmail function is limited to 100 emails per day, so the script soon stopped working. The limit "Email recipients per day" looks like it counts the number of mails, not the number of different recipients. It seems the limit also applies to mail sent to your own account, and there appear to be no API methods to add/edit messages in your own account. I would like to receive all logs from all caches near me, and for that the script won't work, because there will be more than 100 mails most days. But I had not seen the possibilities of Google Apps Script and Gmail before; it will be a great replacement for my current Gmail filters that set appropriate labels on geocaching mails. You should probably remove your email address from the updated script, like it was in the original.
  17. Are you sure? I've never seen such a function, and couldn't find either a macro or even an API call that can do it. V8. Groundspeak.com access > Get Caches - Page 2, select Owned by, add your name. Haven't tried it. I only own one cache. Nope. I just tried that and there's only an option for showing disabled caches on page one. When I ran it with our name inserted on page 2, I only got my current caches.
     If I am not mistaken, the API doesn't return archived caches in searches. But you can do it in a few steps, by extracting the GC codes from a web page:
     1: Find all GC codes for your caches. Look at the source of the webpage http://www.geocaching.com/my/owned.aspx (e.g. right click and View Source in Firefox). Select all and copy the code; the GC codes of your hides are in there.
     2: Extract the codes. Use grep to extract only the codes, for example at http://www.online-utility.org/text/grep.jsp . Use the regular expression GC[0-9A-Z]{2,6} and select Only Matching in the options. Paste the HTML code into the text field and press Process Text. The result is all your GC codes.
     3: Format the codes for GSAK. Use for example http://projects.lambry.com/findnreplace/ and replace \n with , . Paste in the codes and run the replace. You now have a comma-separated list of GC codes. (Steps 1-3 can also be scripted; see the sketch below.)
     4: Import into GSAK. Use Get Geocaches with the GCXXXXX code format. Paste the codes and all your caches will be downloaded. You can now add them to a bookmark list for PQ generation on gc.com, or create the GPX in GSAK.
     If you add all your new caches to the bookmark list in the future, they will all be in the PQ. If you have more than 1000 caches you have to use multiple bookmark lists, because the PQ from one list only contains 1000 caches. I have no idea whether the owned-caches page is split into multiple pages if you have a lot of hides; it is one page for me with 127 hides. If it is, the process has to be done for each page.
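     Steps 1-3 collapse into a few lines of Python if you'd rather not use the online tools. A sketch under the assumption that you have saved the logged-in owned.aspx page source as owned.html (the filename is mine):

         import re

         # Pull every GC code out of the saved page source and de-duplicate,
         # keeping first-seen order.
         with open("owned.html", encoding="utf-8") as f:
             codes = re.findall(r"GC[0-9A-Z]{2,6}", f.read())
         unique = list(dict.fromkeys(codes))

         # Comma-separated list ready to paste into GSAK's Get Geocaches dialog.
         print(",".join(unique))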