Everything posted by Volvo Man

  1. Oh, this has really got the mathematical wheels turning for me. It has just occurred to me, purely from a cost-to-implement point of view, just how logical the request for a Platinum Membership would be:
Cost of a suitable server to satisfy this "minuscule demand": $1,000
Cost to copy the PQ software to a second server and change the number 5 to 20 and the number 40 to 160: $200
Cost of a simple broadband connection per year: $400
Electricity to run the above per year: $200 (generous)
Total: $1,800
Number of "Platinum $120" members required to recover Year 1 costs: just 15 (a total of 300 MB/day upload, well within basic broadband capability and TOU). Annual profitability if no further members upgrade: $1,200.
Wow, now that's what you call a return on investment. A 12-month break-even and a 200% markup on ongoing costs are dream scenarios for any business in the world, except perhaps the recording industry. Naturally, economies of scale would make this even more profitable, as bulk bandwidth charges are much lower per gig, and it's as easy to clone 5 hard drives as it is to clone 1; plus that $1,000 server has a massively greater capacity to run queries than is being used in the above example.
How about it, is that a good enough business case? I have at least 15 cachers in my address book who would be willing to pay $120/year for this, but I really can't be bothered to trawl the forum to count how many others would be pro-Platinum. I'm willing to bet that server would pay for itself pretty quickly. Don't forget all those subscriptions would be up-front payments, so the break-even could be as short as Day 1 if it was properly announced and advertised to the membership. (I'd make the Platinum payment 12 months only and not allow split payments.)
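Purely to make the arithmetic above easy to check, here is a minimal sketch of the same break-even sum; every figure in it (the $1,000 server, the $600/year running costs, the $120 fee, the 15 subscribers) is an assumption taken from the post, not a real Groundspeak number.

```python
# Back-of-envelope break-even sketch for the hypothetical "Platinum" tier.
# Every figure here is an assumption taken from the post above, not real data.

server_cost = 1000          # one-off: suitable second server
setup_cost = 200            # one-off: copy PQ software, raise limits 5->20, 40->160
broadband_per_year = 400    # recurring: basic broadband connection
electricity_per_year = 200  # recurring: power for the server (generous)

platinum_fee = 120          # proposed annual Platinum subscription

one_off = server_cost + setup_cost
recurring = broadband_per_year + electricity_per_year
year1_cost = one_off + recurring                          # $1,800

# Members needed to cover year-1 costs, rounding up to whole subscriptions.
members_to_break_even = -(-year1_cost // platinum_fee)    # ceiling division -> 15

# Ongoing profit if those same members simply renew each year.
annual_profit = members_to_break_even * platinum_fee - recurring   # $1,200
markup_on_running_costs = annual_profit / recurring                # 2.0 = 200%

print(members_to_break_even, annual_profit, f"{markup_on_running_costs:.0%}")
```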
  2. Wow, I've just seen the length of my previous post, sorry about that, but it was the best way to say what I wanted to say. (See, I can do short posts too.)
  3. Firstly, let me say that I'm neutral on the PQ sharing issue raised here; my agenda is merely to advocate the proposal to raise the PQ limits by some means, financial if necessary. I do, however, know that there are PQ syndicates around with reciprocal sharing agreements (all PMs contributing PQs equally). For this to be viable, for instance in the UK, would take 70 PQs a day, which adds up to a significant number of members in the syndicate, assuming that they do not each use all 5 of their PQs for this purpose. There's no point arguing about TOU violations; everybody knows PQ sharing is a violation of the TOU, but still it happens, and I'm sure that GS commit some effort to tracking down such activity and sanctioning such persons. Perhaps such efforts should be directed at the cause, not the symptom, as this will result in a more lasting solution.
With regard to database load, comparing each query to a public ignore list would not place as much load on the system as some people think; it just depends on your methodology for when the comparison is executed. As far as I know, GS use MySQL for the DB, and this is a pretty powerful piece of software. Add to that the fact that processor speed has grown (with a continuing downward price trend) at a far faster rate than geocaching. By my calculations, the size of a database that includes the last five logs for all the caches in the world, and all the data that would be included in a PQ, would be around 2.7 GB in XML format, which is not so big when you consider the power of a quad-processor machine these days. Database speed and load rely not on the size of the dataset so much as on the quality and relevancy of the indices. It's also far more efficient to query a large dataset for a large number of results a few times than to query for few results many times.
Incidentally Markwell, your suggestions on narrowing down the search by adding more and more criteria actually increase the load on the server quite significantly. Just consider the implications of calculating the distance of every cache in the dataset from an arbitrary centrepoint; this is something that just can't be indexed. The best you can do is have the system initially eliminate all caches outside a top-left/bottom-right bounding box, then calculate the distance for each cache in that box (there's a rough sketch of this after the post). This load climbs if you just go for the nearest 500 and ignore the radius parameter; in effect this method requires the constant creation of temporary indices. It's far simpler and quicker for the system to just eliminate all caches that don't meet a criterion on a fixed index, such as State, and then spit out all results that do meet it.
There is another point to make: my experience of tuning a query to give around 490 results is that you need at least 5 previews to get it right, and this climbs a lot when you are not familiar with the area and the caching activity there. Each one of these previews places a similar load on the system as a full PQ, so again, by tuning, you are placing more and more strain on the server, effectively running 25 PQs to get 5, for a total of 30.
Let's examine the math of that for a moment. Imagine two cachers who are lucky enough not to have to work, and who spend 6 days a week caching and one day planning the week. One of them tunes his queries and stays within the 5/500 limit; the other has convinced Jeremy to create a Platinum account, has access to, say, 20 PQs a day, and maintains an offline database of the area he's covered with that. Then let's assume that each continues this for 12 months. We'll also assume that they use the seventh day's PQs to add to the week's data.
Cacher 1 - each day runs 5 PQs and 25 previews to tune them: 30 a day, 210 per week, 10,920 per year, to generate a total of 1,820 individual PQs. If we assume a low overlap of repeating data of 60%, that gives us 364,000 caches queried.
Cacher 2 - each day runs 20 PQs with the corresponding 100 previews to tune them. Say he runs different PQs each day of the week but keeps the week's PQs constant for the year: that's 120 a day for 1 week, 840, with a further 7,140 for the year, giving a total of 7,980 per year and 70,000 caches queried. We'll also allow another 300 previews per year for re-tuning, which can be achieved first time without previews if you use your offline database properly. We'll also say he's finding 20 a day and tuning his queries to use that space, but does so with his offline database, not affecting the server query load; that gives 76,240 caches queried.
In summary:
Cacher 1 - 10,920 queries, 364,000 caches queried, $30 subscription paid, 6,240 finds, 52 days spent online at GC.com tuning. (Found 1.7% of caches queried.)
Cacher 2 - 7,980 queries, 76,240 caches queried, $120 subscription paid, 6,240 finds, 1 hard day spent tuning queries, 51 days spent with his feet up after checking his offline map and saying "I'll go there next week." (Found 8.1% of caches queried.)
We'll forget the logging of caches, as they are equal for both cachers in this example. The question is, now who's putting more load on the server, even without factoring in centrepoint queries versus state-based date queries? Also, who's got more data than he needs? And finally, who's the better customer from a revenue-to-product point of view? Who's contributing more towards equipment and bandwidth costs?
There are other advantages: Cacher 2 will be able to grab nearby caches along his route to the main caching grounds on impulse, whereas Cacher 1 has to plan to do this. Such tools as Trimble Navigator are a great idea, but come nowhere near the level of functionality that a laptop with an offline database, coupled to a sat nav with a POI database, can give; not to mention my previous point about cell tower coverage. Check out the coverage maps for Northern California, for instance; your iPhone wouldn't be a lot of help there. On top of this, my current combination can be utilised anywhere in the world without massive overseas roaming charges. With international roaming as high as $8/minute for calls, I'd hate to think what data roaming would cost.
As for the elitism comment, that seems to be more prevalent amongst the opponents of a Platinum tier than the proponents; I certainly note they use the word "newbie" a lot. One of the reasons I weighed in on this discussion was to show that some old hands are for it too. And finally, in answer to the I5 route, simply "the road less travelled": there's a reason there is a series of caches titled "I Hate I5", and yes, they are on my finds list.
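To illustrate the bounding-box point above: the usual way to do a radius search over a flat list is to throw away everything outside an indexable top-left/bottom-right box first and only compute real distances for the survivors. This is a generic sketch of that idea, not a claim about Groundspeak's actual query code; the (code, lat, lon) tuple layout is just an assumption for the example.

```python
import math

EARTH_RADIUS_MI = 3959.0

def haversine_mi(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two lat/lon points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_MI * math.asin(math.sqrt(a))

def caches_within_radius(caches, centre_lat, centre_lon, radius_mi):
    """caches: iterable of (code, lat, lon) tuples, e.g. loaded from an offline DB."""
    # Step 1: cheap, indexable bounding-box cut (the top-left/bottom-right box).
    dlat = math.degrees(radius_mi / EARTH_RADIUS_MI)
    dlon = dlat / max(math.cos(math.radians(centre_lat)), 1e-6)
    box = [
        c for c in caches
        if abs(c[1] - centre_lat) <= dlat and abs(c[2] - centre_lon) <= dlon
    ]
    # Step 2: exact distance only for the survivors, which is the expensive part.
    return [
        c for c in box
        if haversine_mi(centre_lat, centre_lon, c[1], c[2]) <= radius_mi
    ]
```

The contrast the post draws is that a filter on a fixed, indexed attribute such as State needs no per-row arithmetic at all, which is why it scales better than a centrepoint search.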
  4. Oh, and just in case, let's call that upgraded membership "Gold" and reserve "Platinum" and "Diamond" for later requirements. One other thing to remember: when 5% of your customers want to double their payments, they effectively become 10% of revenue (no, I'm not going to do the precise math at this time of night) (oh, it got the better of me: 2 x 5 / (95 + 2 x 5) = 10/105, or 9.5238%).
  5. Oh my god, I can't believe I get to correct the famous Markwell on a point, minor though it may be! If TPTB were not interested in changing their original PQ model, there would not be logs attached to them, and they would not have doubled the saved query limit to 40, which is enough to cover a full week's worth of individual PQs, or 5 duplicated ones that run quickly when the server resets the counts at midnight PST. Up-to-a-week-old data is, IMHO, good enough for my purposes: the percentage chance that any particular cache has been archived in the last week is pretty low, and if I have trouble finding a cache, the history file I construct offline will give me a good enough idea to judge whether it's disappeared, been archived, or is just a good hide. This database is held on my laptop, which lives in my glove compartment.
As I've previously stated, if you're planning a road trip without a specific route, one day's worth doesn't get you very far at all, and 17,500 caches doesn't get you that far in places either. My next road trip is Northern California and southern Oregon. I won't know which direction I'm heading until I turn onto the freeway from my centrepoint; 17,500 caches gives me a 200-mile radius to work with, OK for my purposes this time as I'm doing a centre-based trip. My last road trip covered California, Oregon and Washington, took 7 days and covered over 3,000 miles. I didn't do any caching on that trip as we had other priorities, but had I had my TomTom with its POI database, it would probably have been a different story, as I am sure that the places we went would have had caches. It was a Twin Peaks locations tour, which was a lot like going caching, but with cool locations from our favourite TV show instead.
As for filtering, I don't do a lot of multis but will do the shorter ones in passing; same for puzzles. Traditional caches, though: if it's there, I will go find it, no particular preference for size or difficulty. And talking about missing the point, geocaching's popularity is due to the flexibility of the sport/hobby. You make your own rules for how you do it; there are only a few basic common-sense guidelines in place regarding conduct in the field and placement of caches, mostly about protecting the environment we are using and our fellow cachers. The GS TOU are not the rules of geocaching.
What this thread is about is the standard commercial practice of supply and demand. In this case, some customers are "demanding" more data from the "supplier" in the hope that the supplier will recognize that "demand" and set a price to fulfil our requirements. In any other supply-and-demand model, the supplier is only too happy to supply more product to the customers, and often encourages this through price breaks for larger deliveries. Very few companies are interested when some other customers say "we don't want any more product than we are currently receiving", and they certainly don't prevent larger customers from obtaining more product (limited-supply special sale prices excepted, as these are usually loss leaders). In its most basic terms: they have data, I have money, let's do a swap.
  6. If the reset time for your 5 PQs was linked to your home time zone, this would have advantages for both GS and members. Server load peaks would be spread across 24 hours, although naturally some time zones have more members than others. The time zone currently used is fine for the US, but European cachers don't get their daily quota until around 9am, which can cause delays getting out caching if you've used the previous day's allocation. Just after midnight in each time zone is the least likely time for new caches/logs to be submitted, thereby keeping the data as fresh as possible. I would suggest that, to prevent abuse and 'tweaking', the ability to change your time zone/location be limited to once per month. I guess this could be accomplished by duplicating the script currently used at midnight to run hourly, against a subset of data produced once daily by a new script run at around 12pm GMT (midnight in New Zealand, roughly the first time zone to start the new day). If this is not possible, would it be possible to switch the server zone to EST? This would have little effect on US cachers, but would mean most European cachers get their daily quota 3 hours earlier, in time for an early start caching.
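To show that the per-member reset is mechanically simple, here is a rough sketch of the idea only; I have no knowledge of how the real PQ scheduler works, and the member list, time zones and hourly window are all made up for illustration.

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # Python 3.9+

# Hypothetical member records: name -> IANA home time zone stored in the profile.
members = {
    "Volvo Man": "Europe/London",
    "ExampleCacher": "America/Los_Angeles",
    "DownUnderCacher": "Australia/Sydney",
}

def members_to_reset(now_utc: datetime, window: timedelta = timedelta(hours=1)):
    """Run once an hour: return members whose local midnight fell in the last hour."""
    due = []
    for name, tz in members.items():
        local = now_utc.astimezone(ZoneInfo(tz))
        midnight = local.replace(hour=0, minute=0, second=0, microsecond=0)
        if timedelta(0) <= local - midnight < window:
            due.append(name)  # their PQ quota (and fresh data) is released now
    return due

print(members_to_reset(datetime.now(ZoneInfo("UTC"))))
```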
  7. I think Team2Hunt's point was to clearly indicate caches that are safe from stray supersonic copper-jacketed projectiles during the season for such things, so that cachers can enjoy a day out without the need for orange vests, and so they don't turn up only to find that hunting is allowed and the placer hasn't added these attributes.
  8. "Use four of the PQs on your main account, and five from another (for a total of 4,500 caches!). Load them into a database. Then use the All My Finds query and have it grab your finds; it will grab them all once a week. Load that query AFTER the others, and the caches will be updated with the status as found by you. I know it's a kludgy work-around, but it can be done with the tools in place now."
"Very kludgey, indeed! Here's the problem. Like most folks, I'm sure he would rather not get the caches he's already found. Why waste a slot for a cache you've found and will get in your All My Finds file anyway? The second account will not know which caches have been found and therefore can't exclude them. The only way for it to work (i.e. not have overlapping downloads) is that once you log with one account, you switch to the other account and ignore that cache. That's just a flat-out ridiculous solution. Not only that, but any cache you ignore on the first account you'd have to duplicate on the other, otherwise you'd still get it in your offline database."
Quite right, CR. If I'm paying for a full second membership, I'm going to want the same level of service, and that is not to include all the found caches that I've already eliminated from my main account's PQs. Since I've changed jobs, I can now look forward to getting back to some serious caching, and every 500 finds means one less PQ to have to scrape together. As I use a defined area and tune my PQs by date placed, I lose about 120 caches of PQ capacity a week as it is. Going back to my business model example: if I'm paying double, I want a little extra service as a sweetener, rather than increasing my workload by ignoring all my finds. I have enough trouble finding time to log caches as it is (if only I could log them as I go and sync up when I finish for the day). That's not to mention going back and ignoring all my previous finds.
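Since the complaint above is really about having to strip your own finds out of a second account's queries by hand, here is a minimal sketch of doing it offline instead (roughly what GSAK does for you when it flags founds on GPX load): read the GC codes out of the All My Finds GPX and drop those waypoints from the other PQ files. The file names and the GPX 1.0 namespace are assumptions.

```python
import xml.etree.ElementTree as ET

NS = {"gpx": "http://www.topografix.com/GPX/1/0"}  # namespace used by PQ GPX files (assumed)

def cache_codes(gpx_path):
    """Return the set of GC codes (waypoint <name> elements) in a GPX file."""
    root = ET.parse(gpx_path).getroot()
    return {wpt.findtext("gpx:name", namespaces=NS) for wpt in root.findall("gpx:wpt", NS)}

def strip_found(pq_path, finds_path, out_path):
    """Remove every waypoint already present in the All My Finds file."""
    found = cache_codes(finds_path)
    tree = ET.parse(pq_path)
    root = tree.getroot()
    for wpt in list(root.findall("gpx:wpt", NS)):
        if wpt.findtext("gpx:name", namespaces=NS) in found:
            root.remove(wpt)
    # ElementTree rewrites namespace prefixes, but the output is still valid GPX.
    tree.write(out_path, encoding="utf-8", xml_declaration=True)

# Usage (hypothetical file names):
# strip_found("second_account_pq.gpx", "all_my_finds.gpx", "second_account_filtered.gpx")
```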
  9. Hmmm, I see a possible solution here. OK, so I buy additional subscriptions to get the extra PQs, fine. How about a feature request instead: the ability to link multiple Premium accounts together, with finds from one account treated as finds for all as far as the PQ server is concerned. Maybe treat the extra account as a slave account, enabling 10 queries a day instead of 5 and a library of 80. On the library point, it would be nice if I had the ability to submit my PQs remotely, thereby enabling me to keep an unlimited library on my machine and have my machine send them in daily as needed (or is this already in the works? I seem to remember hearing a rumour). Sounds like a possible "bell & whistle" that would get additional Premium subscriptions sold. From a data-crunching point of view, it would probably only take a few extra lines of code in the member profile system and the PQ handler. (I'm no SQL expert, but I've done similar things in Access and GSAK with little extra effort.)
  10. If we're to talk of business models here, and as it's already been mentioned, let's look at a standard ISP service plan model. In the UK, broadband ISPs price roughly as follows for a 3-tier service:
£5 - 2 Mbps service, minimal webspace, email, but no other bells or whistles; the service is unable to reach the daily data caps set by the provider; full charge for install and wireless.
£10-£12 - 10 Mbps service, loads of webspace, email, bells & whistles etc., but a daily data cap somewhere between 2 and 4 GB (usually enforced only for continual heavy downloaders); small charge for install and wireless.
£20-£25 - 20 Mbps+, all bells & whistles, no caps, truly unlimited, free wireless router, no install charges, etc.
From this pricing model, we see that the primary service at least doubles when the price does, but there are loads of extra add-ons which may be of interest and help seal the deal. Now, this seems like it's all in the customer's favour, but the ISP is gambling (in the same way that Vegas casinos gamble) that the customer will not make full use of all the features/capabilities of their service level. If they do, no worries: the extra revenue means the ISP can buy their bandwidth and equipment in greater bulk and get better price breaks. This same business model is used by many successful companies the world over, especially in the tech sector.
So what I am saying is that sure, I'll pay an extra annual subscription (I've already set up the accounts), but I'd much rather pay twice the subscription and have the ability to run larger or more PQs on a daily basis from the one account, to save the hassle (it would also mean I could eliminate my finds/hides from all the queries, not just half). If a state/country subscription service became available, I'd rather have that too.
As to those who say you don't need all the data:
1 - "Be prepared" is a good motto, and just as applicable to impulse caching days as planned ones.
2 - 2,500 caches in my area gives me a 37-mile radius (and I'm on the coast), not much distance for a good road trip; from my mother-in-law's house, 2,500 is reached in just a 21-mile radius, and as I have said, my road trips can exceed 3,000 miles.
3 - The point of geocaching is FUN, and I'd rather be out there having fun while my automated GSAK macros do all the data maintenance than sitting in front of the computer planning tomorrow's fun.
4 - I'm not a newbie; I've been a premium member for 3 years and seen the database grow and grow. All we want is for the service options to grow along with it. When I first signed up for PQs, it took just 12 to cover the entire UK; now it takes 69, and it's growing ever quicker.
5 - I'm offering to pay pro rata for the service level; please take my money. (GS, that is; you can't all have it.)
  11. A topic close to my own heart. I agree with the comments on sharing and the business model; it could cause damage to income, although I suspect that sharers would probably expect others to contribute their fair share to the collaboration, maintaining the status quo on memberships.
On subscribable files (i.e. all caches in a State/Country): there are a lot of GSAK users out there, and it is just 3 clicks to filter out all caches found by user name, and they automatically highlight on GPX load anyway. If I could subscribe to a regular download of a single file for the UK and a single file for California, I would ditch all of my PQs. Maybe GS could run a trial to see if this would be viable in reducing their bandwidth charges.
On the 500 limit, I would welcome an increase to 1,000 or more. After all, as has already been pointed out, cache placing is increasing at an ever faster rate. This can be demonstrated using the date-placed method to build complete query sets: my current set shows a change from over a year covered by each of the earliest 2 queries in the radius, down to filling a query in 20 days in midsummer (spring and August are the main peaks in placings). I would happily pay extra for such enhancements rather than go through the hassle of maintaining 2 sets of PQs from an additional premium membership. Perhaps a Platinum Membership for $60 would be appropriate; sounds like a win-win to me, though it would need to include both larger/more PQs and a subscription file service at a minimum.
On the point of mobile internet devices, that's OK if you cache in built-up areas, but out in the sticks you can't always get a phone signal, let alone data, and some countries don't have well-developed cell networks. It can also get kind of pricey if you're roaming out of your home country etc.
Why do I want all this data? Firstly, I enjoy road trips and don't always plan my caching; it just happens. Ever been somewhere and thought, "I bet there's a cache round here somewhere", then checked when you got home to find that you were probably stood right next to it? Thanks to advances in technology, devices now have near-unlimited storage capacity. I can generate a file for my TomTom with GSAK which shows all caches on the driving view, plays a sound in close proximity, can be searched for the nearest caches, and so on. As this can be automated with a simple macro (there's a rough sketch of the idea after this post), I can have all the data loaded on my caching tech in a very short time, chuck it all in the car and decide where I'm going while on the road. The "Magical Mystery Tour" has always been a lot of fun no matter how it's done, and hardly ever results in a bad day; being able to combine it with caching makes it even more fun. Unfortunately, this is not easy to do when limited by cache density: 2,500 caches in my area limits me to a 30-mile radius, in certain areas of California it's as low as 15 miles, and a week's PQs in Northern California limits me to a 200-mile radius, while my road trips have been known to rack up 3,000 miles without leaving the West Coast. Impulse caching is also a lot of fun when you've got a few minutes to spare here and there on trips for other purposes; yes, I keep a basic caching kit in my car. If only Magellan would produce a WiFi GPSr that could be updated in the driveway (and TomTom too).
The other reason for the regular data is the limited logs in GPX files. A regular PQ plus GSAK builds a comprehensive history of each cache, and as this database lives on my laptop, I can look back in the history for clues on a tricky cache; on their own, 4 DNFs in a row on a 3.5+ difficulty cache are meaningless and no help at all. I've also found "lost" caches and returned them to their correct hiding place by going through the history of the cache. Just a final point to those who may wish to comment on my recent lack of posted logs: I've still been caching, I just haven't been logging for a while.
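GSAK already has macros and exports for this, so the following is only an illustration of how little is involved in turning a Pocket Query into a sat-nav POI file: one record per cache with longitude, latitude and a name. The CSV layout, file names and GPX namespace here are assumptions; TomTom's native OV2 format is binary, so you would run output like this through whatever POI converter your device uses.

```python
import csv
import xml.etree.ElementTree as ET

NS = {"gpx": "http://www.topografix.com/GPX/1/0"}  # PQ GPX namespace (assumed)

def gpx_to_poi_csv(gpx_path, csv_path):
    """Write a simple lon,lat,name POI file from a Pocket Query GPX."""
    root = ET.parse(gpx_path).getroot()
    with open(csv_path, "w", newline="", encoding="utf-8") as fh:
        writer = csv.writer(fh)
        for wpt in root.findall("gpx:wpt", NS):
            lon = wpt.get("lon")
            lat = wpt.get("lat")
            code = wpt.findtext("gpx:name", default="", namespaces=NS)
            desc = wpt.findtext("gpx:desc", default="", namespaces=NS)
            writer.writerow([lon, lat, f"{code} {desc}".strip()])

# gpx_to_poi_csv("week_of_pqs_merged.gpx", "geocaches_poi.csv")  # hypothetical names
```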
  12. I think 6 months is way too short a time, for one thing; I dropped out of sight for nearly that long due to work commitments. Also, along with ditching a username would have to go dropping their logs, unless the name was changed to something like "whoever (archived)". In some cases, the reason a cacher no longer logs caches is that they have passed away, and I feel that ditching their logs would be a real shame. Even when a cache is permanently archived, it can be found on the site if you know where to look; I've even found archived caches from banned users. I do agree that a username with no finds or hides should be recycled after a given time, perhaps 12 months. GC is one of the few places where spaces are allowed in the username, since the name doesn't have to double as an email address (i.e. there's no @GC.com handle); I think this allows a lot more creativity in usernames. My own is a duplicate: a Swedish volvoman signed up before me (I'm getting used to that), so I added a space and capitalised the first letters, which I prefer as a display format anyway.
  13. You could just place a cache in your area, then see who logs it first; it would most likely be your current most local cachers.
  14. On reflection on recent events, I believe it may be wise to consider temporarily archiving physical caches within the City of London, and perhaps some of the other major city centres too, just for a couple of months until the nerves of police and the public are calmer. This could of course be left up to cache owners to decide, if their caches' locations might arouse suspicion. When the news came out that a 5th "bomb" had been destroyed in a park-like area and was a plastic Tupperware container hidden in a hedge, I immediately thought "they've blown up a cache". I'll throw this open for consensus.
  15. Not if 5 queries have run in the last hour; unfortunately, I now cannot run another query (for the GPX file) for 24 hours.
  16. All my PQs bounced, and I'm supposed to be going caching tomorrow. My mailbox is now fixed; is it possible an admin could reset my 24-hour limit or re-send the PQs??? Pretty, pretty please, with cherries on top and a light sprinkling of sugar.
  17. The archived caches shown by GCUK are just the temporarily disabled ones. If they have been permanently archived (so they don't show), then the only way for regular users to find them is to trawl profiles. By this method I have found a couple of "banned" users whose caches are actually still out there. I came across this when I found what I thought was an exclusively listed cache on "another" site; I was puzzled why there were lots of logs in the book, yet I was the first to find the cache on said site, and the first to log it in over a year. It can be a bit of a pain not being able to generate a list of permanently archived caches. If someone permanently archives a missing cache, rather than temporarily disabling it for a while, it doesn't update the record in GSAK when I load up my pocket queries, and it still shows as active. I've ended up going after several caches that are missing because of this.
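The usual offline workaround for the "permanently archived caches never update" problem is to track when each cache was last seen in a fresh PQ and treat anything that hasn't turned up for a while as probably archived or missing (GSAK's last-GPX date can be filtered to the same effect). A rough sketch of the idea, with a made-up dictionary standing in for the offline database:

```python
from datetime import date, timedelta

# Minimal offline database: GC code -> date the cache last appeared in a PQ.
offline_db = {
    "GC1ABCD": date(2008, 3, 1),
    "GC2WXYZ": date(2007, 11, 20),
}

def refresh(db, pq_codes, run_date):
    """Record that every cache in today's PQ results was seen today."""
    for code in pq_codes:
        db[code] = run_date

def probably_gone(db, run_date, stale_after=timedelta(days=14)):
    """Caches not seen in any PQ for a while are likely archived (or missing)."""
    return [code for code, seen in db.items() if run_date - seen > stale_after]

refresh(offline_db, {"GC1ABCD"}, date(2008, 3, 8))
print(probably_gone(offline_db, date(2008, 3, 8)))   # -> ['GC2WXYZ']
```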
  18. Hear, hear!! As for the TB issues, my take:
Some TBs have the objective of visiting as many caches as possible, so I log them in all the caches they accompany me to.
Some TBs just want to go to specific places; I move them in that direction and log them where I leave them.
Some TBs have no goal; I leave them in a random cache and log them there.
My personal TB goes everywhere, as I have no other way of tracking the mileage and order of the caches I do. My personal TB is virtually dropped and grabbed at one of my caches 200 yards from my house at the end of every cache trip.
I leave it up to the bug owner to make the rules for their bug; also, some send a note with the bug setting out those rules, some don't.
  19. I believe the record for this is one of seasider's caches, which was actually found by a cacher looking for a place to hide a cache of their own. When they got home, they found that the cache had not yet been listed on GC!! As soon as it was approved, it was claimed by said cacher.
  20. What a great site; I will get mine on there soon. One suggestion for those who run the site is to add a section where you can enter your own personal capability and be presented with a list of all rated caches that are within it. A distance search would be even better. Whilst I am able-bodied myself, I have always maintained that geocaching should be for everybody, and in placing my caches I have tried to keep a certain number accessible to those with physical limitations.
  21. Yeah, but it would be nice to pull up in a shiny new Jeep to go walk in the woods, eh?
  22. Technically speaking, if it was listed as a regular (or was that perhaps the container?) and you find a log book in the first container, you are justified in signing said log and forgetting the rest. The series mentioned above should really have had only 1 log book, in the final; otherwise it should have been listed as 3 separate caches, to log 3 finds and 3 log signings. Chester's Bone-us cache could be a mystery cache; I would count it as one, although that is subjective, and it is at least listed as a multi. If you want to change the listing, a quick email to our friendly reviewers will do it. My personal feeling is that bonus final caches should be mystery caches due to the sheer workload of finding them. Examples I can think of are Dodgydaved's Sunninghistory series, my own Parks series to be released soon, and Jessex's series in Brentwood.
I can see what Pharisee is saying, but trads should be doable by casual cachers. By this I mean I should be able to have my GPSr loaded with waypoints for a particular area that I am going to, and be able to go find them if I have a spare moment. OK, in my particular case I have CacheMate on my Clié, but others rely on cache pages for multis, which means they may spend loads of time searching for a cache that was never there if they make an unscheduled stop at a nearby cache in passing. One other small annoyance, being a little bit of a stats junkie: my stats show me as having done just 8 multis, when I've done at least twice that many that aren't correctly listed.
  23. I was going after one particular cache and had to make a sharp side turn off a dual carriageway. I got to the turning, which was marked on the map as a road with a fork in it, about 1/2 mile long. I made the turn and proceeded up the very narrow "lane" that was the fork I needed, or so I thought. As I continued up this lane, I became very aware that I could not safely reverse down it in my van, as branches kept knocking the mirrors flat. Shortly thereafter, the lane turned into a very bumpy public footpath, still with nowhere to turn around. I had to keep going, hoping against all hope that the footpath had somewhere for me to turn. Fortunately, there was a farm track about 200 yards further down the path, but I had to drive back down the path to get out: all this off-roading in a 2WD Astra, avoiding 18-inch ruts. When I got back to the main road, I turned at the next turning (the right one) and drove right up to the cache. It was at that moment I decided I needed to buy a 4x4.
  24. I have to disagree with Leoness; what she describes is a serial cache, which usually ends with a puzzle cache but has many individually loggable stages, some of which will be traditional, some multi, some virtual. A multi-cache has 2 simple criteria:
1: The posted co-ords are not those of the final cache (so it cannot be a virtual or traditional).
2: To get the final co-ords, you must visit at least one different location, which is not loggable individually, to determine the final co-ordinates. Clues may be in the form of hidden containers with co-ordinates in them (but no log), or in the form of virtual clues such as getting a number from a plaque.
A puzzle/mystery would generally be either a massive serial cache to get to, or a treasure-map-style clue chain, such as "20 paces east, turn right" and so on. It is also used for unusual or unexpected cache containers, even though the co-ords are actually listed.
  25. It might be easier to believe the time if you ensure your watch is fully wound before setting out