Everything posted by rickrich

  1. Since the P2P application is running on the user's computer, it can pull maps from the internet. Then the P2P application overlays the track on the map image. You can't push maps from a server to multiple clients without running afoul of copyright, but a user can pull maps for their own use with no such trouble, and fair use allows you to do anything you want with those maps once you get them. The P2P application just assists in presenting them in the desired way.
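     Here's a rough sketch (Python; the map-tile origin, zoom level, and track points are just placeholders) of the standard Web Mercator projection math such an application could use to place track points on a pulled map image:

     import math

     # Standard Web Mercator ("slippy map") projection: lat/lon to global
     # pixel coordinates at a given zoom level.
     def latlon_to_pixel(lat, lon, zoom, tile_size=256):
         n = tile_size * (2 ** zoom)
         x = (lon + 180.0) / 360.0 * n
         lat_rad = math.radians(lat)
         y = (1.0 - math.log(math.tan(lat_rad) + 1.0 / math.cos(lat_rad)) / math.pi) / 2.0 * n
         return x, y

     # Overlay: shift each track point by the pixel origin of the fetched
     # map image, then draw the resulting polyline on the image locally.
     track = [(44.98, -93.27), (44.99, -93.25)]        # placeholder track
     ox, oy = latlon_to_pixel(45.00, -93.30, zoom=13)  # image's NW corner
     pixels = [(x - ox, y - oy)
               for x, y in (latlon_to_pixel(la, lo, 13) for la, lo in track)]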
  2. One of the beauties of a P2P solution is that no mapping server is needed.
  3. I think a rating system, properly implemented, would work very well to separate the really excellent caches from the really lousy ones. So what if there is also a large middle class of caches? Any rating system would obviously have to be anonymous, as runner_one points out. I agree, it would not be hard to implement. I find it hard to believe that anyone could argue against a system that solves many issues in a democratic way.
  4. IMHO, the way to do this, or any gc.com-like service, is to create an entirely distributed database using peer-to-peer (P2P) technologies and application-specific cross-platform peer software. Central servers don't scale well, and they ultimately force the provider to adopt a business model just to pay for the servers and bandwidth.
  5. My shirt wardrobe contains two and only two styles: tee shirts ("play" clothes) and long sleeve tee shirts ("good" clothes). Ask my wife. But I never buy white tee shirts for any reason. I get my white tee shirts "for free" from vendors, employers, etc. I don't need any more white tee shirts. Ask my wife. I do pay for colored tees and colored long sleeve tees. I have to buy these in order to balance out my wardrobe so I have a color to fit any occasion or mood. I would definitely consider a purchase of a colored tee. Oh, and I'm going to need the long sleeve tee if you want me to wear it to work and do a little advertising for you.
  6. The problem with virtuals is that some are lame. However, traditional caches also suffer this very same problem. It is arguable which category has more lame caches than the other. There is no way to know when you set out to find a cache whether it is lame or not. Most people are too polite to write in the log "this cache is lame, stay away". The solution is to provide a way to anonymously rate a cache when you log it. Plus, you should be able to search for nearest caches with a quality rating greater than some level. Caches (whether traditional or virtual) that get poor ratings will fall into disfavor, and can eventually be archived. This simple technical feature will allow natural selection to decide which caches survive. Let the people rate the caches, and take the onus and load off the approvers to decide the quality.
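     To give an idea of how little is needed, here is a minimal sketch (Python + SQLite; the table and column names are my own invention, not anything gc.com uses) of anonymous ratings plus a "nearest caches above a quality level" search:

     import sqlite3

     db = sqlite3.connect("caches.db")
     db.executescript("""
     CREATE TABLE IF NOT EXISTS cache  (id INTEGER PRIMARY KEY, name TEXT,
                                        lat REAL, lon REAL);
     -- No user column: ratings are stored anonymously, one row per rating.
     CREATE TABLE IF NOT EXISTS rating (cache_id INTEGER, stars INTEGER);
     """)

     # "Nearest caches with a quality rating greater than some level";
     # crude planar distance is good enough for a short-radius search.
     rows = db.execute("""
         SELECT c.id, c.name, AVG(r.stars) AS quality
         FROM cache c JOIN rating r ON r.cache_id = c.id
         GROUP BY c.id
         HAVING quality >= ?
         ORDER BY (c.lat - ?) * (c.lat - ?) + (c.lon - ?) * (c.lon - ?)
     """, (3.5, 45.0, 45.0, -93.0, -93.0)).fetchall()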
  7. I use a Magellan GPS Companion plugged into a Handspring Visor springboard slot. But I never bother to load any software, waypoints, or maps into the PDA; I just use the GPS Companion with the basic lat/lon display that comes in its ROM. I got this a couple of years ago, before I knew what geocaching was. Even though this setup is capable of handling maps, cache description pages, etc., I don't use it that way because I have a superior solution (IMHO)... I have a Toshiba laptop with a 15" display that I picked up new from Best Buy for $650 last XMAS, and added a Deluo USB GPSr ($80) to it. I run Red Hat Linux and the free GpsDrive software on the laptop. That's what I use in the car for maps, waypoints, cache description pages, and navigating between caches. The display is large enough that I can read it easily while I'm driving, and my kids like to run it as well. It also doubles as a game machine and car DVD player to entertain the kids. :-)
  8. If you log a find and you don't get the "Visit the cache" page in response, then hit back in your browser and submit it again.
  9. That data is available in the database:

     $ geo-pg -n Starbucks -otxt sacramentocaggm.pdb
     Restaurants, Coffee|Starbucks Coffee Co|421 Pioneer Ave|Woodland CA 95776|5304060898|38.36783|-121.80301|
     Restaurants, Coffee|Starbucks Coffee|4424 Freeport Blvd|Sacramento CA 95822|9164512747|38.29759|-121.67666|
     [SNIP]

     However, I'm quite sure that republishing it on a website would require more than the $25 fee to license the data for personal use. -Rick
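     For anyone scripting against that output, the pipe-delimited fields are easy to pick apart. A small sketch (Python; the field meanings are inferred from the sample lines above):

     def parse_line(line):
         # Field order as seen in the sample output; trailing "|" yields
         # an empty final field that we discard.
         cat, name, addr, city_st_zip, phone, lat, lon, _ = line.split("|")
         return {"category": cat, "name": name,
                 "lat": float(lat), "lon": float(lon)}

     sample = ("Restaurants, Coffee|Starbucks Coffee Co|421 Pioneer Ave|"
               "Woodland CA 95776|5304060898|38.36783|-121.80301|")
     wpt = parse_line(sample)   # {'category': 'Restaurants, Coffee', ...}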
  10. For $25 you can get a database of 1.5 million USA placenames from www.mapopolis.com. The database includes the name, address, phone number, and waypoint. Here are some of the categories in the database:

     Amusement, Banks, Bars, Drug Stores, Food Stores, Gas Stations, General Merchandise, Government, Hotels, Libraries, Movies, Museums, Post Office, Public Safety, Religious, Transportation, and Restaurants (African, Asian, Bars, Coffee, Deli, European, Family, Fast Food, Franchise, Ice Cream, Miscellaneous, North American, Seafood, South American, Steak).

     I have tools, pgpdb2txt.c and a companion wrapper script "geo-pg", that will convert these database files into text or into any of the formats that GpsBabel supports. These tools should run on any Linux/Unix system. I have no connection to Mapopolis other than as a customer. Google on geo-pg or pgpdb2txt to find these. -Rick
  11. If only there were a way to mark a cache "ignored", then there would be no reason for people to use the DNF and notes to remember caches that they do not want to do or can't do.
  12. Hey robert, great summary of the problems with the site. Let me add this: I would reckon that most pocket query users are getting updates multiple times per week, so it should be no surprise that the diffs between two updates are very small. Not only that, but we have our own client-side tools to filter the queries. I'd like to be able to download the GPX file for an *entire state* on a monthly basis, then download the weekly diff against it, followed by the daily diff to bring it completely current. Some states might have to be broken up into two or more parts, but I think you get the drift of my idea. This would significantly reduce the load on the servers and allow people to filter queries client side as they see fit. It also avoids the need for "on-demand" scrapes when you find you are going to be making an unscheduled trip, or when a bunch of new FTF opportunities turn up. Basically, it's recognizing that power users want a replicated database for their region or state(s). -Rick
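     The client side of this scheme would be almost trivial. A sketch (Python; the file names, and the assumption that a diff file carries complete replacement entries keyed by the waypoint code, are mine):

     import xml.etree.ElementTree as ET

     NS = {"gpx": "http://www.topografix.com/GPX/1/0"}

     def waypoints(path):
         """Map waypoint code (GCxxxx) -> <wpt> element for a GPX file."""
         root = ET.parse(path).getroot()
         return {w.findtext("gpx:name", namespaces=NS): w
                 for w in root.findall("gpx:wpt", NS)}

     full = waypoints("mn_full_monthly.gpx")  # the big monthly state dump
     diff = waypoints("mn_diff_weekly.gpx")   # only caches changed since then
     full.update(diff)                        # now a current replicated database
     print(len(full), "caches after merge")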
  13. Some may find statistics to be politically incorrect, but why should they decide this for the entire community? The dirty secret about statistics is that many of the xxGCA organizations are already scraping this website in order to construct stats for their members. Banning the competitive interest will not prevent it. It seems to me that gc.com could provide a wealth of interesting statistics with lower server load. Or, they could open up a more useful interface for others to provide the statistics. Polling the "200 recent finds" list (as Dan's site does) is an approximation at best, and the alternative currently causes more page loads than is optimal.
  14. Just today my user stats have started showing that I have found 1 USA Geocoins 2003. I've never found a Geocoin, and wouldn't know how to log it if I did. I notice the same problem with other user profiles as well. Database corruption?
  15. Hmm, looks like my response crossed in the mail with Moe's. I defer to Moe for the definitive status. But what I want to know is why Moe isn't out right now getting FTF on the cache I just placed!!!!!!
  16. I'm pretty sure I heard at the recent MnGCA Summer Fling that the MN DNR would have a geocaching policy out sometime this fall. A year and a half is pretty good response time from the MN DNR. We are still waiting for a determination on whether we can use the Sonar aquacide in our lake to knock out the Eurasian watermilfoil, and that study has been going on for 10 years now (meanwhile, we dump far nastier chemicals in, and get less control, but I digress). I thought the DNR would milk this geocaching thing for 10 years of funding in order to "study" it, and *then* continue to ban it anyway. Glad it looks like I will be wrong. But who knows until the policy is actually published. So it can't hurt to send in your thoughts.
  17. I do have a complete set of the Pittsburgh geocache web page and logs up until 3PM this afternoon. I'm pretty sure that Magellan will do the right thing and pay for the ticket(s) people got, once they sort things out. But if not, well, the evidence is not lost.
  18. quote: Originally posted by Chrissy_Skyking & Blaze: "And, if you want to know the reason for the change, here's the most likely answer: for each and every Search Results page for a logged-in member, the Server must query the database FOUR times! Four queries (for Not Found / Found / Hidden / Archived) means the server has to do FOUR times the amount of backend processing. After it makes the first query, it must sort the results, build that into a part of the web page, and then move on to the next query for the next category, repeat, repeat..."

     There is no reason to execute four separate database queries in order to form the grouped web page. If that was what the old system was doing, then that is simply bad or naive programming. There are enough geek programmers in this sport that if the source code to the servers were opened up, we could have this web site humming without having to give up useful features.
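     To illustrate: one query, ordered by a status column, is enough to render all four groups in a single pass (Python + SQLite sketch; the schema is invented for illustration, not taken from the actual site):

     import sqlite3

     db = sqlite3.connect("site.db")
     rows = db.execute("""
         SELECT status, name, placed   -- status: 0=not found, 1=found,
         FROM search_results           --         2=hidden,    3=archived
         WHERE user_id = ?
         ORDER BY status, placed DESC
     """, (42,))

     group = None
     for status, name, placed in rows:
         if status != group:           # emit one header per status change
             group = status
             print("=== status group", status, "===")
         print(name, placed)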
  19. This bias against virtuals seems to me to be solvable at the technical level. First, it should be made easy to filter (by inclusion and by exclusion) on any characteristic of a cache. That would allow virtual lovers and virtual haters to coexist peacefully. Second, there should be a rating/lameness system for caches. Lame caches would fall into disfavor, and after a sufficiently long period in which no one visits them, they could be archived.
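     The filtering half is a few lines of client-side logic. A sketch (Python; the type names and the empty-set convention are my assumptions):

     INCLUDE = {"traditional", "virtual"}  # empty set would mean "allow all"
     EXCLUDE = {"webcam"}

     def visible(cache_type):
         """Apply the user's own include/exclude lists, client side."""
         if cache_type in EXCLUDE:
             return False
         return not INCLUDE or cache_type in INCLUDE

     shown = [t for t in ("virtual", "webcam", "event") if visible(t)]
     # -> ['virtual']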
  20. quote: Originally posted by Mopar: "_SOME_ reason has been explained already. There really isn't a choice. If you wanted a more realistic poll it should be: [ ] Do you want the old way back, with the website grinding to a halt the last 5-6 nice weekends? (and expect it to get worse to the point of NEVER being able to actually use the website) [x] Do you like the new way, that has lots of cool features and I'm sure will get even better with some CONSTRUCTIVE input? (and reduces the server load to 1/4 of what the old format was)"

     I'm pretty sure the #1 complaint with the new format is the lack of grouped display of the cache status. A poll could be run to determine whether this is the case. Grouped display, if implemented in an intelligent manner, should cause no more load on the server than an ungrouped display; it is simply a rendering choice. In fact, the new system (without grouped display) is probably causing unnecessary database queries to be initiated.
  21. Imagine a system where the geocache database is scalably distributed across the computers of all users, instead of the current unscalable centralized system. A custom peer-to-peer (P2P) network and backend program(s) would be used to automatically distribute small portions of the database (say, the 100 or 1000 nearest entries) to each machine participating in the network. Each machine would have a "home" zip code, "home" bandwidth value, "home" radius value, and other parameters which would control the number of database entries its backend program would hold (cache) locally. Of course, the backend could be asked to query any area of the world and (for a time) also hold non-home data. Any number of frontend programs and browser plug-ins could be written and used to query this distributed database. All filtering, rendering, and visualization of the database would occur on the local system, rather than on an overloaded central server. Some frontends might allow the full power of an arbitrary SQL query.

     New geocache entries could be handled with either approval or disapproval servers. Note that this is different from the current system, where all geocaches must be approved first. Approval or disapproval servers would dole out only lists of geocache IDs, and each user's backend would use these lists to accept or reject caches. Multiple approval/disapproval servers could exist, to account for regional taste, custom, and law, and to foster competition. An approval server could be of either the inclusive or exclusive variety. Users could run their local backend with or without filtering from one or more disapproval or approval servers. Each user would control what level of approval they prefer for their own backend filtering, but all submitted caches would be available somewhere on the P2P network. Each user would have the choice of whom to trust for approval/disapproval. Geocaching.com could run their own (dis)approval server(s), and I imagine that many people would opt to use their judgement for their own backend filter. At least, as long as they continue to provide a valuable service. The (dis)approval service could be free or by subscription.

     Some stats:
     Average GPX entry size: 3600 bytes
     Average compressed GPX entry size: 1200 bytes
     Guess: average entry size with full logs: 12000 bytes
     Guess: average compressed entry size with full logs: 4000 bytes
     Size of million-entry worldwide database: 4 GB
     Size of thousand-entry local database: 4 MB
     Size of hundred-entry local database: 400 KB
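     A sketch of the backend's accept/reject step under this proposal (Python; the IDs and list contents are hypothetical). The (dis)approval servers publish only lists of cache IDs, and each user's backend applies whichever lists it has been configured to trust:

     # Lists of IDs published by servers this user has chosen to trust.
     APPROVED    = {"GC1234", "GC2345"}  # from an approval server (inclusive)
     DISAPPROVED = {"GC0666"}            # from a disapproval server (exclusive)

     def accept(cache_id, use_approval=True, use_disapproval=True):
         """Decide whether this backend stores and serves a cache entry."""
         if use_disapproval and cache_id in DISAPPROVED:
             return False
         if use_approval and cache_id not in APPROVED:
             return False
         return True

     home_db = [cid for cid in ("GC1234", "GC0666", "GC9999") if accept(cid)]
     # -> ['GC1234']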
  22. Do you regularly censor polls? Or is this just a one time mistake?
  23. quote: Originally posted by maldar: "I liked it the old way better. Maldar"

     I have to agree 100% with Maldar. The old way was better. The new search page is nearly useless. Also, if you remain in love with using color, please remember not to use GREEN. About 10% of the male population cannot distinguish red from green. Use blue instead of green.
  24. quote: "... be patient ..."

     Are we there yet? Are we there yet? My kids just couldn't wait for the official post. I had to give them the coordinates just to shut them up. Off they went on their bikes, in the rain, with the GPS in a ziploc. Silence is golden. An hour later that plan backfired, because now they want to log it :-). Can we log it? Can we log it? Can we log it?
  25. IMHO, caches should be automatically approved when submitted, on a conditional basis. Then, when a reviewer gets around to it, they can decide whether to revoke the approval. Only one cache submission per day would get the default approval; any additional caches would require human approval (to prevent programs from flooding the site with bogus caches). Depending on abuse, the reviewer could also flip a flag and change the default for that account to "not approved". New accounts would also get "default: not approved" until they have logged at least one cache find. To me, it's a matter of being a pessimist or an optimist. I say, assume people are good and aren't going to post bogus caches. My submission today seems to have disappeared into some kind of black hole, and now I don't know whether or not to resubmit it, and the only email I get is from a bot.
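     A sketch of those rules (Python; the account fields and state names are invented for illustration):

     def initial_state(account, submissions_today):
         """Optimistic default: the first submission of the day goes live
         immediately, subject to later human review (which may revoke it)."""
         if account["default_not_approved"]:  # flag a reviewer flips after abuse
             return "awaiting approval"
         if account["finds"] < 1:             # new accounts must log a find first
             return "awaiting approval"
         if submissions_today >= 1:           # only one auto-approval per day
             return "awaiting approval"
         return "active (pending review)"

     print(initial_state({"default_not_approved": False, "finds": 12}, 0))
     # -> active (pending review)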