Posts posted by Hynr

  1. Why is everyone jumping to the conclusion that this is due to too many users using the database and/or the PQ service? That is certainly not the only possibility. In fact, we know that the PQs run on a separate machine on a copy of the database, and that it normally finishes processing the scheduled PQs fairly early in its 24-hour cycle. So there are plenty of spare clock cycles in that operation.

     

    Here are some other potential culprits: maybe the firm that provides the internet service now has too much traffic on its resources; maybe there is some hardware defect in the server, router or firewall (e.g. cables, connectors, NICs, etc. go bad); maybe someone hacked into the server and is using it to serve out MP3s or similar (it happened to one of my machines). I suspect many of us here could list many more possibilities.

     

    Jeremy - What say you? What tests have you run? What's the throughput capacity, and how close are we to it at various times? What's the max number of users the system can handle at any one time, and why are you letting them all on, only for everyone to get lousy service? Let's see the dirty laundry.

     

    Just now even the Forum is timing out occasionally - that's silly - you don't need a lot of hamsters for that service even if hundreds of folks are on (153 are on-line right now). Seems like someone needs to check on whether GC's ISP is short-changing geocaching.

  2. But tell me how to know which caches have been moved to temp inactive or archived so they can come off the list that a person is keeping.

     

    I have ranted I don't know how many times about that. I don't know of a way to do that. Tell me how and I will shut up.

    I'll tell you the trickery I am going through to deal with this (despite which I do NOT expect you to shut up).

     

    I have a separate PQ which I run to send me the "inactive" ones in the same circle as a PQ that gives me the active ones. All goes into my off-line database as managed by GSAK. I filter for the same area that the PQ covered (my database covers a lot more ground than any of my PQs). Within GSAK I sort by last update. Then I have to do some manual work by looking at those caches on-line that did not get updated with the last PQ. Of these, some are unavailable, others are OK. It's usually easy to find the ones in this set where I have to manually toggle the archived status: they are usually the ones showing a couple of DNF indicators.

     

    It is a klutzy work-around for which I would like a better solution, especially since it still means I have to manually interrogate the on-line database at a time when it is typically very busy and when I could be out caching.
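    A rough sketch of the comparison step in that workaround, assuming two GPX files from the PQs described above (the filenames are hypothetical): caches present in the larger off-line set but absent from the fresh active-caches PQ are the candidates to check by hand.

```python
# Sketch of the stale-cache check described above. Assumes two GPX files
# (filenames hypothetical): an older "all caches" set and a fresh PQ of
# active caches only.
import xml.etree.ElementTree as ET

GPX_NS = {"gpx": "http://www.topografix.com/GPX/1/0"}

def waypoint_codes(gpx_path):
    """Return the set of waypoint codes (GCxxxx) found in a GPX file."""
    root = ET.parse(gpx_path).getroot()
    return {wpt.find("gpx:name", GPX_NS).text
            for wpt in root.findall("gpx:wpt", GPX_NS)}

def likely_archived(all_caches_gpx, active_pq_gpx):
    """Codes in the old off-line set but missing from the fresh active PQ."""
    return waypoint_codes(all_caches_gpx) - waypoint_codes(active_pq_gpx)
```

This only narrows the list; as noted above, each candidate still has to be verified on-line before toggling its archived status.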

  3.   But you have to get the data from geocaching.com.  That is, I suspect, your unasked question.

    The asked question clearly refers to PQs before they are sent, not afterwards. I too want more power in specifying PQs.

     

    Robert, what would it take for Jeremy to install your great software on the PQ server and to give us this sort of capability within the PQ before they are sent to us? If he whines about command line prompts, then would you be willing to help him get this capability inserted? You might have to build a DLL for him or some such thing.

  4. Another good reason to use the PQ's for their intended purpose, planning the day's caching, instead of trying to use them to keep an offline database of all the caches in the state/region/whatever.

    What are you talking about?!? Keeping an off-line database is part of the intended use. Here is the quote from the license agreement that we accept when we use PQs: "Licensee may modify the Data and merge other data sets with the Data for Licensee's own internal use." What the license prohibits is sharing such a database with others or marketing it in some way. But you are free to do whatever you want beyond that.

  5. I think I'll just find somewhere in my yard and let the tin sit over winter.  If the paint job I gave it helps it survive, I put it out next spring.

    Why not test it in operation? Put it out where it can do some good. That is the only way you will get any real data on suitability. We already know how well this will last in your backyard: if you painted it properly, placed it out of the sun, never opened it, etc., it will last longer than 15 years. If it gets handled, it won't last as long, but easily long enough if you service it occasionally.

     

    ;) My advice: use it to place a cache in an interesting location, ... as is, (content in a zip-lock bag inside the tin) ... now, ... not next spring! I mean TODAY!!! :blink: I hope it's near my home, 'cause I could use some new ones nearby.

  6. Clearly national events need much more advance planning and announcement. I just mentioned this event to my wife and we are now officially "thinking about it" - largely because we were able to see the information at the web site. If we had found this out in December it would already have been too late for us to thread this into our very busy lives.

     

    Also, having planned national events myself, I know that organizers need to be able to gauge interest level much earlier than 6 months in advance.

     

    The solution is to implement the necessary tools at the web site to accommodate them. The solution is definitely NOT to abuse the folks who are taking the initiative to create a great event. (Thanks, "the Federation", whoever you are.)

     

    So I'll make a suggestion (Jeremy, are you listening?): for event caches there should be a bit of data collected at set-up as to whether the event is local, regional, or national. Those who routinely want calendar information would then be given the option of which types they want to see. The three types would have different advance-posting limits (I would suggest the maximum advance postings allowed be as follows: local - 2 months, regional - 6 months, and national - 364 days).

     

    And while I am in idea-sharing mode (or mood): it would be useful if one could click a button on an Event Cache page to explicitly indicate interest, to sign up for an e-mail list for updates, and to give organizers a vehicle for communicating with those who are interested in attending. (Maybe listserv is the right tool?) The watch-list is NOT the tool I am suggesting - that tool is anonymous and gives the organizers no info as to who and how many might actually be considering attending.

  7. The point being made is that you go to the place in your browser set-up where you can customize the toolbar and make an item there that goes to http://www.geocaching.com/my/

    If you are not logged in, then you will be asked to do so, and after doing so you are immediately taken to your own page. If you are already logged in, then you go directly to your page.

     

    With Netscape this is done by creating a bookmark and then, in the bookmark manager, dragging it into the "Personal Toolbar Folder". The process is similar in other browsers.

     

    By doing this on the toolbar you can jump to your page from any web page you might be on, not just the forums.

  8. I understand what the cache approvers are saying here. But I think something was lost when the "standards" were made more restrictive. There are tons of great virtuals that were placed before then. These are still very popular finds (just look at the logs). I see very few new ones like that these days. That should tell you that the change had a negative impact.

     

    By deeming that virtuals need to be off-set caches with a film canister placed at a nearby lamp post or park bench, you have degraded the virtual. If the cache owner wants it without the mindless micro, then what is wrong with that?

     

    Geocaching is a lot of different things to different folks. Some love the old virtuals and saw no need for the fix that was made in 2003. Many have told you so. I attend geocaching meetings and it frequently comes up. No one seems to defend the decision.

     

    There is another compelling reason to go back to the way it was before: the virtual allowed cachers to meet on-line. Nothing replaced that. There is a forum thread somewhere about hard-core geocachers getting bored with things here. In that thread you will find that there are some who say that this sport is about people communicating. The organization should foster that, rather than inhibit it. The only reason I know anything about Ron Streeter is because of his virtuals. If I ever meet him we'll have something to talk about. One of my all-time favorite caches is his "Out of Africa" virtual cache. By today's standards that would not get approved because one can easily hide a film canister nearby. But that would take away from the cache as Ron designed it.

     

    Jeremy - I have seen a lot of paying clients tell you to bring the old-style less-restrictive virtuals back and to return to the cache owner the artistic freedom to execute them as s/he wishes. Remember that the customer is always right! I don't want to hear any whining about not having enough funds to upgrade the servers if you can overlook hundreds of paying customers.

  9. Regular caches are one-stage multi caches - I doubt that we're going to apply the suggested logic here.

     

    I would suggest splitting the multi cache category into two new definitions:

    off-set cache: a two-stage cache where the two stages are separated by less than 0.1 mile

    multi cache: a cache with more than two stages, or one where two stages are separated by more than 0.1 mile

     

    We could theoretically split further for the sake of greater resolution, but no one is asking for that. There are folks who find the off-set cache definition useful. What more rationale is needed?

     

    Jeremy - what, to you, is "compelling"? Perhaps we are wasting your time and ours asking for greater resolution in categorizing caches if the only thing that is compelling is the repair of some error or the invention of some new cache type.

  10. I propose a new log type:

    Mark Waypoint

    which would read:

    X marks a waypoint for Y

    Use: Locationless caches

    Replaces: "Found It"

    It seems to me that this is one of those things where "if it ain't broke, don't fix it". But since I have not done any Locationless ones, perhaps I'm not understanding the problem.

     

    I think the concept of enhancing one's ability to communicate what is going on is good. So perhaps it is not a matter of adding a log type, but of adding (1) a set of check boxes for: "no comment", "log sheet is full", "geosheet needs replacing", "cache is in bad shape", "cache was mowed into little pieces" ... and (2) a set of edit boxes where one can enter "corrected coordinates".

     

    I would also like to have checking one of these count as a log comment.

  11. ... GSAK is free, if you keep reinstalling it or you can BUY THE PROGRAM.  Is anyone seeing a trend here?  Now on to my proposal...

    <snip>

        I really DO like GSAK and intend to continue uninstalling, redownloading and reinstalling every couple of weeks... just don't like having to do it... 

    I wonder about your priorities, given the trouble you go through with GSAK, which you obviously value fairly highly. Maybe you should have a conversation with Clyde England. He is a person (a human being) who works for us all to keep improving that marvelous piece of software. Every time you download the latest version, he has built in new features and fixed things that the users have identified. The amount he suggests you contribute is less than what it would cost you to treat him to one dinner out, or about the price of a new CD. I presume that you have the disposable income to play this sport; so why not treat this individual to something nice for all he is doing for you.

     

    I probably shouldn’t focus on any one particular software developer here (you brought up GSAK). What I just wrote is true for a number of the PERSONS who are here in this forum reading what you are writing. These folks have a lot of class, so they are not going to point themselves out.

  12. I think that there are a lot of folks jumping to incorrect conclusions in this thread about what the $3/month pays for. There are many different economic models that businesses can use, and if you think that the only source of revenue for GC.com is the premium membership revenue, then you are obviously mistaken. It might not even be the primary source (only Jeremy knows for sure, and he would be a fool to disclose details about GC's budgets to us). Thus to link the premium membership fees to anything that has not been specifically identified as its benefit would be a delusion. The premium membership does not buy you priority login or prioritized throughput. If that is what you want, then you can suggest to Jeremy to set up a separate server for premium login. But you might want to be careful about what you wish for, because it might be that the premium members are the ones who tie up the service the most.

     

    I would speculate that the GC.com economic model depends on having a large base of individuals who see the ads. Thus interactive access to cache data is granted freely for everyone. It obviously costs money, but it increases user numbers and allows collection of more funds from advertisers and sponsors (part of the economic model). Some “premium” services are available at a separate cost that is not necessarily driven by what it costs to deliver that service (e.g. PQs, restricted access to one’s "premium" cache pages, and the ability to view "premium" web pages). If these three services are not worth $3/month to the end-user, then GC has a problem.

     

    If a user does not intend to use any of these three services, then why would s/he pay for premium membership? Why would someone subscribe to something and then never use it? Surely you don’t do that with anything else (TV, phone, utilities, newspapers, etc).

     

    Now I know that you don’t have to do a lot of research to find out that Hynr is not a premium member. At the same time, if you think that I have not paid, then you have jumped to the wrong conclusion. Things aren't always what they seem to be.

  13. FYI: In related thread: "Logging Caches By E-mail And Batch", I suggested a file format that might be used to transmit the information involved in logging a cache find.

     

    If the security issue can’t be solved, then perhaps a file consisting of “proposed logs” or “proposed actions” (e.g. TB drops) can be accepted as an ASCII file, parsed, and displayed on one (long?) web page where we could scroll through to view and click a check-box to accept or delete each proposed action. Then proceed with writing all the entries to the database by having the user click a button at the bottom of the page. That would allow some proofreading and verification by the person who supposedly submitted the list of logs. It would deal with security because it would require that the user be logged in.

  14. Errors would be handled as follows:

    - Picking up TBs: if TBxxxxxx is not in the GCxxxx cache, then an error message is generated and no change is made in the database.

    - Dropping a TB into the wrong cache: the same problem occurs as we have now, but at least you get some confirmation as to what you did.

    - If an invalid GC code is entered then an error message is generated and no change is made in the database.

    - If a valid GC code is used but it is the wrong cache (only user can know this), then that would have to be fixed the same way as is done now. You have to go in and fix it by hand by deleting your log and then logging the correct cache.

     

    To deal with security, the content of the file could be considered as "proposed" actions. One would then go on-line at the GC.com web site, view all the "proposed" logs (along with the cache names and error messages) on one web page, and be given the opportunity to accept/reject them all, or each individually. I realize that this adds some server time back in, but there would be a lot less back-and-forth and the total length of time on-line would be significantly reduced (giving others a chance to log their finds).

     

    Note that there is another thread right now on a very similar topic called: Web Services

  15. I would like to suggest that we need a method for submitting our logs of found geocaches via batch file. We could create our logs off-line and submit them all at once in one text file, either at the GC.com website or in the body of an e-mail message (or as an ASCII text file attachment) sent to a server that is programmed to interpret it. The server could do this when there is bandwidth available, perhaps just prior to running PQs.

     

    I propose the following format for logging geocache finds this way:

     

    Line 1: username

    Line 2: today’s date

    Each subsequent line has GCcode followed by comma followed by text:

    GCXXXX, sentence/note all on one line (...or:)

     

    Any line that does not have a valid 6-character GCcode at the start of a line is a continuation of text from the previous line

     

    Travel bug drops/pickups could be included anywhere in the file after line 2, with something like:

    GCXXXX, TB#######, {drop, pickup}

    where ###### is the number on the metal TB tag

     

    Obviously this would only be used by folks who know how to create such a file. I’ll wager that there are enough of us to make a difference in server performance on weekends.

     

    The first line would be used to verify the submission against the e-mail address on record. Anything that does not match is returned to sender with refusal note.

     

    I am proposing a standard that only handles "Finds". All other logs would still be done interactively.

     

    Files that are not strict ASCII are returned to sender with error message.

     

    An e-mail is generated to confirm each log and identify any errors.
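    To show the rules above are simple enough to machine-check, here is a rough parser for the proposed file format (this is only a sketch of the proposal; nothing like it exists on the server, and the field names are illustrative):

```python
# Rough parser for the proposed batch-log format:
#   line 1: username, line 2: date, then "GCXXXX, note" log lines,
#   "GCXXXX, TB######, drop|pickup" TB lines, and continuation lines.
import re

TB_LINE = re.compile(r"^(GC[A-Z0-9]{4}),\s*TB(\d+),\s*(drop|pickup)$")
GC_LINE = re.compile(r"^(GC[A-Z0-9]{4}),\s*(.*)$")

def parse_batch(text):
    lines = text.splitlines()
    header = {"username": lines[0].strip(), "date": lines[1].strip()}
    logs, errors = [], []
    for line in lines[2:]:
        tb = TB_LINE.match(line)          # check TB lines first: they also
        if tb:                            # start with a valid GC code
            logs.append({"cache": tb.group(1), "tb": tb.group(2),
                         "action": tb.group(3)})
            continue
        gc = GC_LINE.match(line)
        if gc:
            logs.append({"cache": gc.group(1), "note": gc.group(2)})
        elif logs and "note" in logs[-1]:
            logs[-1]["note"] += " " + line.strip()   # continuation of note
        else:
            errors.append(line)           # per the error rules: report, skip
    return header, logs, errors
```

Anything collected in `errors` would go back to the sender in the confirmation e-mail, per the error handling sketched earlier.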

  16. When I first read about the “Earthcache” concept I got excited and thought it a great idea. Upon further reflection I find it a bit misguided. If it is going to be the GSA’s cache, then call it a GSA-cache and make similar arrangements available to other scientific societies. I guess at some point we’ll see extension to any organization or firm that has something interesting that can be found at specific coordinates; so we’ll see Disney-caches, etc; followed by Halliburton-caches, followed by any company that has some funds to spend on us: e.g. Subaru-caches, etc... Hey, let’s not do that. If we need to rethink the concept of the virtual cache, then let’s do that (many of us want them back), but let’s not set up a bunch of narrowly defined cache categories.

     

    What I really like about the earthcache concept is that it is informational and educational. Why not create a cache category called “Infocache” and find reputable entities to check the validity/importance of proposed caches. In the case of geologically oriented caches this could be done by the GSA in the US and similar associations in other countries. By creating an “Infocache” instead of an “earthcache” you create an opportunity for much more significant growth in the sport. In fact, by creating a cache type that has credibility you actually create pages that represent credible information.

  17. I wonder if it might be useful to have an XML tag for URLs to images that are vital to the cache (not the fluff like icons or background images, but the pictures in picture caches or the puzzle images of some puzzle caches). I understand that in the future there would need to be a mechanism for having the cache owner list which images are critical. The cache approver might check to make sure the list does not include fluff.

     

    Currently there seems to be no technological way to get critical pictures into PDAs and this would set the stage to have that problem resolved without resorting to including them in the zip files.
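    As a sketch of what such a tag might look like inside the Groundspeak extensions of a PQ GPX file (the element names and the image URL here are purely hypothetical, not part of any existing schema):

```xml
<!-- Hypothetical extension: flag images the cache cannot be done without -->
<groundspeak:cache>
  ...
  <groundspeak:critical_images>
    <groundspeak:image url="http://example.com/cache/puzzle1.jpg" />
  </groundspeak:critical_images>
</groundspeak:cache>
```

An off-line tool could then fetch just the listed images and bundle them for the PDA, leaving the fluff behind.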

  18. I would also like to see an "offset" cache type. Yes, it would be a one-hop multi, with one additional trait: the hop would always be less than 500 feet or 0.1 mile. In other words, you have some assurance that you will not be run all over town (or country) or have to backtrack several miles when you are on a cache sweep.

     

    I would agree that it is not "needed"... but the very best things in life are things that are not needed.

  19. ... is the stupid action of cutting and pasting from Word ...

    I don't think this action is stupid. I routinely compose with a word processor and paste into edit boxes. That way I get spell checking and other features that the browser does not provide. What is stupid is the programming that does not deal automatically with the special characters or departs from standards that are in place for exactly this reason.
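    As an illustration of how little programming such a fix would take, here is one way a site could map the common Windows-1252 "smart" characters from Word to plain ASCII before storing a log (a minimal sketch, not anything GC.com actually does):

```python
# Map Word's common "smart" punctuation to plain ASCII equivalents.
SMART_MAP = {
    "\u2018": "'", "\u2019": "'",    # curly single quotes
    "\u201c": '"', "\u201d": '"',    # curly double quotes
    "\u2013": "-", "\u2014": "--",   # en dash and em dash
    "\u2026": "...",                 # ellipsis
}

def normalize_smart_chars(text):
    """Replace each smart character with its plain-ASCII stand-in."""
    for smart, plain in SMART_MAP.items():
        text = text.replace(smart, plain)
    return text
```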

  20. Here is another way (the way I do it), using GSAK (under Windows) and Cachemate (on a Palm device). GSAK uses the Cachemate conversion program and GPSBabel, but you don’t need to worry about installing these because GSAK takes care of that. With everything installed this is the procedure:

    1. Schedule pocket queries (gpx version) for the geographic area that you want to follow (this could be thousands of caches split up over various PQs- it’s up to you). You will want to err on the side of getting too much rather than too little (as you will note below). Have these sent to your e-mail address of choice.

    2. Use GSAK to read the files into one large database on your computer. Now you have the ability to look at the information that was in the PQ off-line in GSAK (it has a browser window that you can turn on). Every time the PQs run in the future, just have GSAK read them in (it automatically unzips the zip files for you) and it will refresh the information in the database.

    3. Within GSAK you create a subset of your database either by selecting each one in a check-box or by creating a filter. (You’ll have to read through the GSAK documentation for that - if it is a bit technical for you, be aware that it is well worth the effort). Once you see the selected set of caches in the table on the screen, you can go on to the next steps:

    4. With your GPSr hooked up as you would with EasyGPS, have GSAK download the points into your GPSr (remember that you might need to make room in the GPS first). Note that you can have GSAK automatically create waypoint names to suit your style (it is very flexible). Do this for each GPSr in your team.

    5. (If you use a Palm device) In GSAK export the same stuff to a Cachemate file; if you do it on the computer where you sync your Palm device, then it automatically sets it up for you so that next time you sync, the file goes where it needs to go. You would want to use the same waypoint name creation here as for the GPSr so that the names are the same.

    6. If you are using mapping software (which I do) then you can also export the same info into a file that the map software can read. Now you can load waypoints on the maps and print that out. So you will be able to plan your caching trip.

    7. If you have not graduated to PDA usage and are still converting lots of paper into scrap paper, then GSAK can also set up an html file (designed to be printed out with your browser) of all the selected caches. This reduces the paper utilization significantly (4 to 7 cache descriptions per page). Make sure you have the table sorted in GSAK so it is the way you want the printed list.

     

    I think all the software is available at the link Jeremy posted. Total cost for all registrations is less than $25 and all well worth it, in my opinion.

  21. Hey you guys are too easy to distract....back to the thread:

    The same logic that applies to circles (i.e. what to include if more than 500 are found) can be applied to rectangles. In fact, if one were to allow the use of squares in addition to circles, then one can use the same input (center and radius) and provide a check-box for "circle" and "square". The square would be the one that completely includes the circle. You would still need to do the same amount of math, but you could pre-screen a set of coordinates very easily so you would never have to calculate the distance for a location that is outside the square. I suspect that Jeremy already does this for the current circular situation, because I cannot imagine that the current programming is so inefficient as to calculate the distances from the center to every single data point in the database.

     

    Still, I would rather not go this route, because (1) I want to be able to specify a rectangle and not just a square, and (2) I want the computers to do the complicated math and let me simply find the max/min information on the map and enter it in.

     

    I would suggest that for rectangle input of two coordinates the routine would start at the coordinate listed first, sort by latitude, within that by longitude. Return however many are requested starting at the top of the resulting list. If distance calculation is trivial (might be, because it’s already done), then I have no trouble having the distance from the first corner be used as criterion. But then I don’t want to hear any whining about computational intensity as justification for the arbitrary 500 cache max limit.
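    The square pre-screen described above can be sketched as follows (a rough illustration only; the 69-miles-per-degree figure is an approximation, and this is not a claim about how GC.com's code actually works):

```python
# Cheap bounding-square test first; exact great-circle distance only for
# the points that survive it.
import math

def within_radius(center, radius_mi, points):
    clat, clon = center
    # 1 degree of latitude is roughly 69 miles; longitude shrinks by cos(lat)
    dlat = radius_mi / 69.0
    dlon = radius_mi / (69.0 * math.cos(math.radians(clat)))
    hits = []
    for lat, lon in points:
        # square pre-screen: two comparisons, no trig
        if abs(lat - clat) > dlat or abs(lon - clon) > dlon:
            continue
        # haversine distance (Earth radius ~3959 miles) for the survivors
        d = 2 * 3959 * math.asin(math.sqrt(
            math.sin(math.radians(lat - clat) / 2) ** 2 +
            math.cos(math.radians(clat)) * math.cos(math.radians(lat)) *
            math.sin(math.radians(lon - clon) / 2) ** 2))
        if d <= radius_mi:
            hits.append((lat, lon))
    return hits
```

Points well outside the square never reach the haversine step, which is the whole efficiency argument above.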

  22. I don’t think 24 hours is long enough. Many times the edit of a log is in response to the cache owner providing some feedback. You do have to give time for this; it may take the owner a day or two to get back to the person who wrote the log entry and it may take that person a little time to make a change.

     

    I recently posted a log that the cache owner was upset with. He did not want to delete it but he was unhappy with what I said. We were able to resolve our differences by having me change my log. It took a few days to get that resolved.

     

    I would say that giving us a week to edit a log would cover all instances that I am aware of.

  23. If there is a chance that some new programming will be added to the PQ feature...

     

    I would like to request a new PQ feature. I would like an alternative to having the search area be circular (i.e. center & radius). Instead, I would like to be able to specify a rectangle. It seems that this can be done with two latitudes (a min and a max) and two longitudes (a min and a max), or alternately opposite corners (the same thing, just different words).

     

    Justification: As cache densities increase more and more, the 500 cache limit will mean that we will need to run multiple PQs so as to get a region covered. With adjacent circles this is ridiculously inefficient.
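    The rectangle selection itself is trivial to program, which is part of the point. A minimal sketch, assuming two opposite corners are given (it ignores rectangles that cross the 180° meridian):

```python
# Keep only the points inside the rectangle spanned by two opposite
# corners; sorting the coordinates means corner order doesn't matter.
def in_rectangle(corner1, corner2, points):
    lat_lo, lat_hi = sorted((corner1[0], corner2[0]))
    lon_lo, lon_hi = sorted((corner1[1], corner2[1]))
    return [(lat, lon) for lat, lon in points
            if lat_lo <= lat <= lat_hi and lon_lo <= lon <= lon_hi]
```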
