Problem with Cache Map


dani_carriere


I was trying to see the Cache Map of Canada. I loaded it, then tried to zoom in to the area near the southern Manitoba/Ontario border and received the error below. I tried yesterday and today and got the same message both times. Can anyone out there in geocaching.com land fix this?

 

quote:
The page cannot be displayed

There is a problem with the page you are trying to reach and it cannot be displayed.

 

--------------------------------------------------------------------------------

 

Please try the following:

 

Click the Refresh button, or try again later.

 

Open the www.geocaching.com home page, and then look for links to the information you want.

HTTP 500.100 - Internal Server Error - ASP error

Internet Information Services

 

--------------------------------------------------------------------------------

 

Technical Information (for support personnel)

 

Error Type:

Microsoft OLE DB Provider for ODBC Drivers (0x80040E31) [Microsoft][ODBC SQL Server Driver]Timeout expired /map/mapviewer.asp, line 344

 

Browser Type:

Mozilla/4.0 (compatible; MSIE 5.0; Windows 95; ZDNetSL)

 

Page:

POST 132 bytes to /map/canada.asp

 

POST Data:

lat=&lon=&Cmd=ZoomIn&Left=-141.324403603294&Bottom=27.2696287474577&Right=-52.2930356667258&Top=83.9259537980009&Map.x=302&Map.y=211

 

Time:

Sunday, August 11, 2002, 8:40:20 PM


Link to comment

I've been seeing this error constantly. I go to Oregon and try to zoom in on my area, and never make it. Or if I do, I can't zoom in again.

 

I used the map to find caches to visit. It was very helpful, since the terrain around Bend varies greatly depending on which direction you travel. It was also very helpful for planning multi-cache day trips.

 

I hope it works again soon. :(

 


Link to comment

I'm having the same problem here for Oregon caches. I don't know if it's a system limitation (i.e. too many people trying to update maps at the same time) or what. About a month ago it worked fine and I never received this error. The suggestion about going back and trying again does seem to work, but I get this error on just about every search I do. Has this been mentioned to one of the admins?

Link to comment

I've been getting this message a lot in the past few days. I'm now getting it more than once (I hit reload and get the same message again). I use Netscape. Is that a problem? We had trouble with the forums in Netscape a few months ago, which was eventually fixed.

Link to comment

The message above is a database timeout error. It occurs when the web site makes a request to the database and the database doesn't respond in the expected amount of time.
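
(For anyone curious what that looks like from the code side, here's a rough Python sketch of the general pattern; the site itself is classic ASP talking to SQL Server, so this is only an illustration and the names in it are made up:)

code:
# Illustration of a database timeout: give the query a deadline, and if the
# loaded server doesn't answer in time, give up and report the failure.
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError as QueryTimeout

def run_query(seconds_of_work: float) -> str:
    """Stand-in for the real database call; sleeps to simulate a busy server."""
    time.sleep(seconds_of_work)
    return "rows for the map page"

def fetch_with_timeout(timeout_seconds: float = 30.0) -> str:
    with ThreadPoolExecutor(max_workers=1) as pool:
        # Simulate a query that takes longer than we're willing to wait.
        future = pool.submit(run_query, timeout_seconds + 2)
        try:
            return future.result(timeout=timeout_seconds)
        except QueryTimeout:
            # This is the point where the page gives up and shows "Timeout expired".
            return "HTTP 500.100 - Timeout expired"

if __name__ == "__main__":
    print(fetch_with_timeout(timeout_seconds=1.0))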

 

This is partly due to the increased traffic on the web site, but is mostly due to people using automated tools to suck down data from the site in bulk. The load placed on the database when these tools run is astronomical, and basically causes a denial of service to other geocachers. This is the reason we work to prevent automated tools from accessing the site.

 

We're working to optimize the database and some of our code to make it more efficient. Our effort to convert the site to the .NET platform will help as well. We're working hard to address these performance issues and are confident that we'll come up with a scalable long-term solution.

 

-Elias

Link to comment

Would it help if we backed off using the new Pocket Query Generator?

 

I can't get on the most recent logs page or the list of new caches for my state, both of which are necessary for outwitting other players of the sweet little cach-u-nut game we have here in Utah.

Link to comment

quote:
Originally posted by Elias:

[ slowdown ] is mostly due to people using automated tools to suck down data from the site in bulk. The load placed on the database when these tools run is astronomical, and basically causes a denial of service to other geocachers. This is the reason we work to prevent automated tools from accessing the site.


 

While I don't know the details of the current data flow, of course, I find the generalization faulty because well-designed "data suckers" can serve a multitude of users and actually result in fewer page requests on the central servers. As long as the fetches aren't greedy (i.e. they pull only the pages that humans would have pulled anyway) and implement caching/sharing so the pages are served to multiple users, the total load would be less. (I have no way of knowing whether the ones that are hammering you are built that way or not; I'm only offering that some suckers can be your friends.)
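
(For illustration only, here's a rough Python sketch of that caching/sharing idea; the one-hour lifetime and the URL handling are made up, not anything the site publishes:)

code:
# One fetch serves many users: later requests for the same page come out of the
# local cache instead of generating another hit on the central server.
import time
import urllib.request

_CACHE: dict[str, tuple[float, bytes]] = {}   # url -> (fetched_at, body)
CACHE_SECONDS = 3600                          # reuse a page for an hour

def fetch_shared(url: str) -> bytes:
    """Return a cached copy if we have a fresh one; otherwise fetch once and share it."""
    now = time.time()
    cached = _CACHE.get(url)
    if cached and now - cached[0] < CACHE_SECONDS:
        return cached[1]                      # no new request to the central server
    with urllib.request.urlopen(url) as resp: # the only request the server ever sees
        body = resp.read()
    _CACHE[url] = (now, body)
    return body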

 

This is why I've asked several times for published guidelines on such suckers so that they can hit the servers during times of low load, access data in lightweight formats, etc. So far, I've heard no answers.
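
(Such guidelines might boil down to something like the sketch below: fetch only inside an off-peak window and pace the requests. The window and the delays are invented for illustration, since no official numbers have been published:)

code:
# "Good citizen" rules a bulk fetcher could follow, per hypothetical guidelines.
import time
from datetime import datetime

OFF_PEAK_HOURS = range(2, 6)      # roughly 2am-6am server time, hypothetically
SECONDS_BETWEEN_REQUESTS = 10.0   # never hammer the site

def is_off_peak(now: datetime | None = None) -> bool:
    now = now or datetime.now()
    return now.hour in OFF_PEAK_HOURS

def polite_fetch_all(urls, fetch):
    """Fetch each URL with `fetch`, but only during the off-peak window, well spaced out."""
    for url in urls:
        while not is_off_peak():
            time.sleep(300)       # check again in five minutes
        yield fetch(url)
        time.sleep(SECONDS_BETWEEN_REQUESTS)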

 

Personally, I'm still pinning my hopes on GPX. Although this will ultimately result in more trips to the database and likely more total bytes served (each user will get their own query instead of sharing one), it can likely be done by lighter-weight code because you don't have to do the formatting of the data on the way to HTML.

Link to comment

quote:
Originally posted by UtahJean:

Would it help if we backed off using the new Pocket Query Generator?


No, feel free to use the Pocket Queries as much as you like. We try to run the Pocket Queries when the load isn't as high, and we built into the code some pretty good caching such that each cache is only queried from the database once, no matter how many queries request that cache.
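
(Roughly speaking, that kind of caching looks like the Python sketch below; this isn't the site's actual code, just an illustration of "each cache hits the database at most once per batch run":)

code:
# While a batch of Pocket Queries runs, each cache is pulled from the database
# at most once, no matter how many queries ask for it.
_cache_rows: dict[str, dict] = {}   # cache id -> row already fetched this run

def get_cache(cache_id: str, query_db) -> dict:
    """query_db is whatever actually talks to the database; it runs once per cache."""
    if cache_id not in _cache_rows:
        _cache_rows[cache_id] = query_db(cache_id)
    return _cache_rows[cache_id]

def run_pocket_queries(queries, query_db):
    # Every query still gets full results, but overlapping caches cost one DB hit.
    return {q["name"]: [get_cache(cid, query_db) for cid in q["cache_ids"]]
            for q in queries}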

Link to comment

quote:
Originally posted by robertlipe:

While I don't know the details of the current data flow, of course, I find the generalization faulty because well-designed "data suckers" can serve a multitude of users and actually result in fewer page requests on the central servers.


I wasn't talking about proxy servers or other "well-designed 'data suckers'". I've watched over a dozen AOL proxy servers query the site simultaneously. In general, they're pretty good about how they grab pages, but it's still a reasonable load. However, taking the hit from those is clearly orders of magnitude better than having all the users behind them hitting the site directly.

 

I was really talking about home-brew applications that people build to grab huge sections of the data in bulk, or applications that increment cache ids and grab thousands of cache detail pages (using logs=y, of course) as fast as we can serve them. It doesn't take very many of these to brutally impact the server. And it's almost scary how many of these are happening right now as I type this.

 

The issues are really a combination of things: traffic on the site continues to grow, more and more automated tools are mining the site, and we've got some old code that isn't as efficient as it could be.

 

It's just a growing-pains issue and we'll sort it out. :)

 

-Elias

Link to comment

quote:
Originally posted by UtahJean:

Just very curious about what those 'home brew' database suckers might be doing with the data.


 

My guess is that they're using it for caching. :) Or using it for their own websites (e.g. some of the alternative geocache map sites).

 

I'm in the process of setting up a community site for cachers in my area. One of the first things that comes to mind is "hey, wouldn't it be cool to show the newest local caches on the main page?"

 

Thankfully, I'm not an inconsiderate dolt. :) Pocket Query files work just fine for that, and they're even easier to automate and extract data from since they're in XML and e-mailed out automatically.
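
(Something like the Python sketch below would do it; the GPX element names are my guess at the Pocket Query layout, so check a real file before relying on them:)

code:
# Pull the newest caches out of a Pocket Query GPX file for a "newest local
# caches" box on a community site.
import xml.etree.ElementTree as ET

def local(tag: str) -> str:
    """Strip the XML namespace so we can match on bare element names."""
    return tag.rsplit("}", 1)[-1]

def newest_caches(gpx_path: str, how_many: int = 5):
    waypoints = []
    for wpt in ET.parse(gpx_path).getroot():
        if local(wpt.tag) != "wpt":
            continue
        fields = {local(child.tag): (child.text or "") for child in wpt}
        waypoints.append({
            "name": fields.get("urlname") or fields.get("name", ""),
            "placed": fields.get("time", ""),
            "lat": wpt.get("lat"),
            "lon": wpt.get("lon"),
        })
    # ISO-style timestamps sort correctly as plain strings.
    waypoints.sort(key=lambda w: w["placed"], reverse=True)
    return waypoints[:how_many]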

 


Link to comment

quote:
Originally posted by Elias:

We try to run the Pocket Queries when the load isn't as high, and we built into the code some pretty good caching such that each cache is only queried from the database once, no matter how many queries request that cache.


 

Hmmm... Does that mean that if a cache description or coordinates are changed, it won't show up in a Pocket Query? I don't like that idea, 'cause you don't get the latest logs to determine whether the cache has been plundered/wet/in need of repair, either.

 

As for the overload, I've gotten that error every time for the last two days... FWIW...

 

---------------

Go! And don't be afraid to get a little wet!

Link to comment

quote:
Originally posted by VentureForth:

Hmmm... Does that mean that if a cache description or coordinates are changed, it won't show up in a Pocket Query? I don't like that idea, 'cause you don't get the latest logs to determine whether the cache has been plundered/wet/in need of repair, either.


Yes and no. The caching is designed to only ask the database for each cache once per day. So subsequent days will requery the DB for that cache. Since the pocket queries take several hours to execute, the most the data can be stale is just a few hours.
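
(In other words, something along the lines of this sketch, where the cache key includes the date so the next day's runs requery everything; again, an illustration rather than the real code:)

code:
# "Ask the database for each cache at most once per day": keying the cache on
# today's date means tomorrow's Pocket Query runs naturally requery everything.
from datetime import date

_daily: dict[tuple[str, date], dict] = {}

def get_cache_daily(cache_id: str, query_db) -> dict:
    key = (cache_id, date.today())
    if key not in _daily:
        _daily[key] = query_db(cache_id)   # first request today hits the DB
    return _daily[key]                     # later requests today reuse that copy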

 

-Elias

Link to comment

There must have been a large increase in the number of automated programs sucking down data in the past couple of weeks. As it is, the maps are basically unusable.

 

I know there was a big to-do with Buxley in the distant past over a related issue, but I see his maps are still going strong.

 

Is there any way that you can enforce some kind of throttling to keep the automated data suckers from ruining the performance for the rest of us? For example, limiting DB queries to 1 per second, or something like that? Or will people just come from multiple IP addresses?
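
(A per-client throttle along those lines could look like the sketch below; the one-second interval is just the number suggested above, and, as noted, a determined scraper could still rotate IP addresses around it:)

code:
# Allow at most one DB-backed request per second per client address.
import time

_last_query_at: dict[str, float] = {}
MIN_INTERVAL = 1.0   # seconds between database-backed requests per client

def allow_request(client_ip: str) -> bool:
    now = time.time()
    last = _last_query_at.get(client_ip, 0.0)
    if now - last < MIN_INTERVAL:
        return False           # tell the client to back off and retry later
    _last_query_at[client_ip] = now
    return True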

 

My idea for how to fix it is to replicate the database to a server that is dedicated to automated queries only, and to prohibit automated queries on the main DB. Given that you're using SQL Server, that ought to be fairly straightforward. But, of course, I am no expert...
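
(Conceptually the split might look like the sketch below: anything that looks automated, or any bulk endpoint, gets pointed at a replicated copy. The server names and the way a request is classified here are entirely made up:)

code:
# Route automated/bulk traffic to a read-only replica, and interactive pages to
# the main database.  Hypothetical connection strings and heuristics.
PRIMARY_DB = "Server=sql-main;Database=geocaching"
REPLICA_DB = "Server=sql-replica;Database=geocaching"

AUTOMATED_AGENTS = ("bot", "spider", "wget", "libwww")   # crude, made-up heuristics

def connection_string_for(user_agent: str, is_bulk_endpoint: bool) -> str:
    """Send anything that looks automated, or any bulk endpoint, to the replica."""
    looks_automated = any(marker in user_agent.lower() for marker in AUTOMATED_AGENTS)
    return REPLICA_DB if (looks_automated or is_bulk_endpoint) else PRIMARY_DB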

Link to comment

quote:
Originally posted by fizzymagic:

As it is, the maps are basically unusable.


 

If you have any kind of mapping program on your local computer, the combination of pulling the *.loc from the pocket query and importing that into Delorme/Mapsend/Mapsource/whatever is a powerful one.

 

You want only caches you've found? Click that button on the pocket query. Want to lose the virtuals? No problem. Want to zoom and pan at full local processor speed? The computer's yours; go nuts. Want to simulate the "green" box behaviour? Just select "new within the last week".
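
(The same filtering is easy to do yourself before loading the waypoints into your mapping program; here's a rough sketch, assuming the waypoints have already been parsed into dicts with "type" and "placed" fields, which is my assumption and not a documented format:)

code:
# Mimic the web map's checkboxes locally: drop unwanted cache types and
# optionally keep only caches placed in the last N days (the "green box" feel).
from datetime import datetime, timedelta

def filter_waypoints(waypoints, drop_types=("Virtual Cache",), newer_than_days=None):
    cutoff = None
    if newer_than_days is not None:
        cutoff = datetime.now() - timedelta(days=newer_than_days)
    kept = []
    for w in waypoints:
        if w.get("type") in drop_types:
            continue
        # Compare on the date portion of the placed timestamp, e.g. "2002-08-11".
        if cutoff and datetime.fromisoformat(w["placed"][:10]) < cutoff:
            continue
        kept.append(w)
    return kept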

 

Yes, the first time you do it there's a lag, but you can have these delivered to your mailbox automatically.

 

The site did a bit of a weird thing by mushing the "PDA data" and "search and bulk waypoint" concepts together under the new Pocket Query voodoo. I think the message sort of got lost that they're two independent things just billed under one name. Even if you don't care about the PDAs, the latter can be your friend.

Link to comment
This topic is now closed to further replies.