DECLINED - [FEATURE] Provide API access to My Finds PQ


W8TTS

[FEATURE] Re-activate the "My Finds" pocket query so it's compatible with the iPhone app. This was a really nice feature that we used to have.

 

1. It showed your total number of finds within the app - this is not currently available :-(

2. It listed all your finds in one saved list, just from running the already existing pocket query - which is already in place on the GC website (for premium members only, I think...)

 

Anyone who agrees, please reply "yes" on this thread :D

 

Happy days

 

Thanks


I've spent about an hour trying to find the merged topic. Sorry for reopening a "closed" topic, but I have two ideas that I think are appropriate.

 

I have read through the whole thread and I understand that there is more to the API processing than just connecting to the PQs and downloading them - the API actually pulls the data out of the query file. The idea behind the API is to allow for more mobile access, and in that light I agree that there is no valid reason for making this PQ available. However, mobile access isn't the API's only use - in many cases it is to automate things that used to take several steps to do. The use of an API almost always ends up being broader than its intended purpose.

 

So, first of all, the reason for wanting the Finds PQ seems to be in question quite a bit. I am writing a script to download all of my PQs into GSAK. If I can download all of my PQs but one, why use the API at all? If I still need to visit the download PQ page, it defeats the whole purpose of having an API to automate the process. Adding the Finds query to the queue is available from a fixed URL (http://www.geocaching.com/pocket/), but the download page requires the user to click the "Pocket Queries Ready for Download" tab, which makes it harder to automate.

 

So perhaps what we are looking for is something that would bypass the processing of the API and simply be a method that returns the PQ download file's URL or GUID. (Perhaps the GUID is the same for the entire lifespan of the PQ, but I haven't tested that, since I only thought of this workaround today. It would still be nice to have a method to make this process easier for the less gifted among us.) That way, those who are just downloading the files can still automate their processes, while those who actually need the data from the PQ can use the current processes. It puts no more undue stress on the server (it may actually relieve some in the long run) and still allows users access to Geocaching.com information for automation scripts.
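
To make the proposal concrete, here is a rough sketch of what such a flow could look like. Nothing in it is a real endpoint - the method name, URL, and token parameter are invented purely to illustrate "return the download URL, skip the data processing":

#!/bin/sh
# HYPOTHETICAL: no such API method exists. The endpoint, method name,
# and token are placeholders illustrating the proposed feature.
PQ_URL=$(curl -s "https://api.example.com/GetMyFindsPQDownloadUrl?token=$API_TOKEN")

# With the URL in hand, the script never has to screen-scrape the site.
wget -q -O myfinds.zip "$PQ_URL"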


You might be able to automate this with batch files and/or shell scripts.

 

I did a test and it seems that the URL for the My Finds download link never changes so you can use a command-line downloader to retrieve the file.
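
As a minimal sketch, the download step then reduces to a one-liner (the path and GUID below are placeholders, not a verified link format - copy the actual My Finds link from your own browser; and if the link turns out to require a logged-in session, exporting your browser's cookies and adding --load-cookies cookies.txt to the command is one possible workaround):

# Placeholder URL -- substitute the real My Finds link from your account.
wget -O myfinds.zip "http://www.geocaching.com/pocket/downloadpq.ashx?g=your-pq-guid-here"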

 

The trickier part is triggering the query to run.

 

I had a look at the source of the page and the operational part of the Add to Queue button, and I think a POST request to that page will work. On a UNIX box, a shell script like this should theoretically work:

 

# Single quotes keep the shell from expanding the $-prefixed ASP.NET field names.
wget --post-data 'name=ctl00$ContentBody$PQListControl1$btnScheduleNow&id=ctl00_ContentBody_PQListControl1_btnScheduleNow' http://www.geocaching.com/pocket

 

I'll have to wait three days to test that because I already used today's query to verify the URL doesn't change.

 

Slightly more troublesome is verifying that the query actually ran before you download it. You could use the query-completion email to trigger the download, wait a day before downloading, or compare the new file against the old version.
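
A sketch of that last option - keep the previous copy around and only accept a download that actually differs (file names are assumptions):

#!/bin/sh
# PQ_URL is the fixed My Finds download link captured earlier.
wget -q -O myfinds_new.zip "$PQ_URL"

# cmp -s exits 0 when the files are byte-identical, i.e. the query
# has not re-run since the last download.
if cmp -s myfinds_new.zip myfinds.zip; then
    rm myfinds_new.zip
    echo "Query has not re-run yet; keeping the old file."
else
    mv myfinds_new.zip myfinds.zip
    echo "New My Finds PQ downloaded."
fi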


I am realizing now, as I wait for the next opportunity to test, that the command-line method is unlikely to work unless their web servers allow username/password authentication that way. Doubtful.

 

I will work up an AutoIT script and get back to you.

Wouldn't doing anything like you are talking about violate the TOU for the site because you would effectively be 'scraping' or 'automating' the site?

 

You agree that you will not use any robot, spider, scraper or other automated means to access the Site for any purpose without our express written permission.

I am realizing now, as I wait for the next opportunity to test, that the command-line method is unlikely to work unless their web servers allow username/password authentication that way. Doubtful.

 

I will work up an AutoIT script and get back to you.

Wouldn't doing anything like you are talking about violate the TOU for the site because you would effectively be 'scraping' or 'automating' the site?

 

You agree that you will not use any robot, spider, scraper or other automated means to access the Site for any purpose without our express written permission.

 

That's a good question. I guess there are degrees of automation. Theoretically I would be in violation of the TOU if I used my browser feature to store my password instead of typing it in every time.

 

If they are unwilling to allow an automated day-of-the-week option for the My Finds PQ (even though any manually-created PQ can do it) then I don't see a problem with working around their limitations.

 

I think the intent of the spider/scraper rule is more to prevent bulk querying of mass cache data. Isn't that how c:geo works?


Theoretically I would be in violation of the TOU if I used my browser feature to store my password instead of typing it in every time.

No you wouldn't. You're still accessing the site manually. They just don't want programs automatically bringing up cache pages without any user input.

 

I am not sure who said anything about bringing up cache pages without user input.

 

This is about automatically clicking the "Add to Queue" button in the My Finds section of the Your Pocket Queries page, on a schedule.


Theoretically I would be in violation of the TOU if I used my browser feature to store my password instead of typing it in every time.

No you wouldn't. You're still accessing the site manually. They just don't want programs automatically bringing up cache pages without any user input.

 

I am not sure who said anything about bringing up cache pages without user input.

 

This is about automatically clicking the "Add to Queue" button in the My Finds section of the Your Pocket Queries page, on a schedule.

I'm aware of that. I was responding to your post, clarifying that having your browser store your password wouldn't violate the TOU. I was also mentioning what I understood the rationale behind that clause to be.

Anyway, having a script that clicks the button for you also sounds to me like "automated means to access the Site". Basically, if a human isn't clicking something or the API isn't being used, I don't think it's allowed.


Theoretically I would be in violation of the TOU if I used my browser feature to store my password instead of typing it in every time.

No you wouldn't. You're still accessing the site manually. They just don't want programs automatically bringing up cache pages without any user input.

 

I am not sure who said anything about bringing up cache pages without user input.

 

This is about automatically clicking the "Add to Queue" button in the My Finds section of the Your Pocket Queries page, on a schedule.

I'm aware of that. I was responding to your post, clarifying that having your browser store your password wouldn't violate the TOU. I was also mentioning what I understood the rationale behind that clause to be.

Anyway, having a script that clicks the button for you also sounds to me like "automated means to access the Site". Basically, if a human isn't clicking something or the API isn't being used, I don't think it's allowed.

 

Like c:geo?

 

I believe the rationale behind the clause is to prevent automated scripts and scraping tools from overwhelming the server with excessive data requests.

 

Since the My Finds query is self-throttling due to its once-in-three-days limitation, it would not apply in this case.

 

I'd make a feature request asking for a checkbox, something like "Run the My Finds PQ Automatically Every Three Days", but I think I will wait until the team finishes their massive queue of current requests, which must be overwhelming. I base this assumption on the rate at which new features appear to be implemented.


I gave up on AutoIT -- it doesn't play well with browsers, only Windows apps. Instead, I cobbled together this process with iMacros and batch files.

 

Install the iMacros add-on for Firefox: http://www.iopus.com/imacros/firefox/?ref=fxtab

 

Create two scripts: one to run the PQ and the other to download it.

 

RunPQ.iim (replace geocaching_username and geocaching_password with appropriate values)

VERSION BUILD=7500718 RECORDER=FX
TAB T=1
' Log in to geocaching.com
URL GOTO=https://www.geocaching.com/login/default.aspx?RESET=Y
TAG POS=1 TYPE=INPUT:TEXT FORM=NAME:aspnetForm ATTR=ID:ctl00_ContentBody_tbUsername CONTENT=geocaching_username
' The password below is supplied as plain text (no iMacros encryption)
SET !ENCRYPTION NO
TAG POS=1 TYPE=INPUT:PASSWORD FORM=NAME:aspnetForm ATTR=ID:ctl00_ContentBody_tbPassword CONTENT=geocaching_password
TAG POS=1 TYPE=INPUT:SUBMIT FORM=ID:aspnetForm ATTR=ID:ctl00_ContentBody_btnSignIn
' Open the Pocket Queries page and click the Add to Queue button
URL GOTO=https://www.geocaching.com/pocket/
TAG POS=1 TYPE=INPUT:SUBMIT FORM=ID:aspnetForm ATTR=ID:ctl00_ContentBody_PQListControl1_btnScheduleNow

 

DownloadPQ.iim (replace geocaching_username and geocaching_password with appropriate values)

VERSION BUILD=7500718 RECORDER=FX
TAB T=1
' Log in to geocaching.com
URL GOTO=https://www.geocaching.com/login/default.aspx?RESET=Y
TAG POS=1 TYPE=INPUT:TEXT FORM=NAME:aspnetForm ATTR=ID:ctl00_ContentBody_tbUsername CONTENT=geocaching_username
' The password below is supplied as plain text (no iMacros encryption)
SET !ENCRYPTION NO
TAG POS=1 TYPE=INPUT:PASSWORD FORM=NAME:aspnetForm ATTR=ID:ctl00_ContentBody_tbPassword CONTENT=geocaching_password
TAG POS=1 TYPE=INPUT:SUBMIT FORM=ID:aspnetForm ATTR=ID:ctl00_ContentBody_btnSignIn
' Open the Pocket Queries page, switch to the download tab,
' and click the My Finds Pocket Query link
TAG POS=1 TYPE=A ATTR=ID:ctl00_hlSubNavPocketQueries
TAG POS=1 TYPE=A ATTR=TXT:Pocket<SP>Queries<SP>Ready<SP>for<SP>Download<SP>(1)
TAG POS=1 TYPE=A ATTR=TXT:My<SP>Finds<SP>Pocket<SP>Query
' Save the file to C:\ and wait for the download to finish
ONDOWNLOAD FOLDER=C:\ FILE=* WAIT=YES
WAIT SECONDS=5

 

To run these scripts on a schedule, you need to create a batch file to launch each one:

 

run_pq.bat

start /b cmd /c "C:\Program Files\Mozilla Firefox\firefox.exe"
REM ping is an improvised delay to give Firefox time to load iMacros
ping 127.0.0.1
start /b cmd /c "C:\Program Files\Mozilla Firefox\firefox.exe" imacros://run/?m=RunPQ.iim

 

(The reason you have to launch Firefox twice is that the iMacros library needs to be loaded before an attempt to run the imacros:// link. The ping statement is there as an improvised wait function.)

 

download_pq.bat

start /b cmd /c "C:\Program Files\Mozilla Firefox\firefox.exe"
ping 127.0.0.1
start /b cmd /c "C:\Program Files\Mozilla Firefox\firefox.exe" imacros://run/?m=DownloadPQ.iim

 

You schedule each of these batch files with Task Scheduler to run at appropriate times.
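
(For the UNIX route sketched earlier in the thread, cron would play the Task Scheduler role. An illustrative crontab with made-up script paths - note that */3 in the day-of-month field resets at month boundaries, so it only approximates "every three days":)

# Queue the PQ in the morning, fetch the result that evening.
0 6 */3 * * /home/user/bin/run_pq.sh
0 18 */3 * * /home/user/bin/download_pq.sh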

 

The downside of this method is that Firefox is left running and needs to be closed manually.

 

If you don't use Firefox as your regular browser, you can write another batch file to kill the process a minute after it is launched. You'll need the PsKill utility:

 

http://technet.microsoft.com/en-us/sysinternals/bb896683.aspx

 

You wouldn't want to do this if you are using Firefox as your regular browser, because this batch file will kill all instances:

 

killfox.bat

pskill /accepteula firefox.exe

 

Let me know if you want more specifics on anything I have outlined here.

