+Marky Posted July 25, 2004
I keep getting the Server Too Busy error with the following stack trace:

Stack Trace:
[HttpException (0x80004005): Server Too Busy]
   System.Web.HttpRuntime.RejectRequestInternal(HttpWorkerRequest wr) +146

Not a good sign of health. --Marky
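For context, this error comes out of the ASP.NET request queue: when more requests are waiting than the httpRuntime section's appRequestQueueLimit allows, HttpRuntime.RejectRequestInternal turns new ones away with "Server Too Busy". The limit is set in machine.config; a sketch of the relevant section (values illustrative, not gc.com's actual settings):

```xml
<configuration>
  <system.web>
    <!-- appRequestQueueLimit: requests queued beyond this number are
         rejected with "Server Too Busy" (the default is 100 in
         ASP.NET 1.x). The values here are only illustrative. -->
    <httpRuntime appRequestQueueLimit="200"
                 executionTimeout="90" />
  </system.web>
</configuration>
```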
+Teach2Learn Posted July 25, 2004 Me too, most of this afternoon on the geocaching.com site when trying to load pages other than the home page. Haven't had the same problem in the forums.
+Ambrosia Posted July 25, 2004 It's also super slow on the main pages, too. I thought it was my connection at first, but you're right. I'm not having any problems with the forums.
+mtn-man Posted July 25, 2004 Moving to Jeremy's Forum, Geocaching.com Web Site Discussion.
+Marky Posted July 25, 2004 Funny, I thought I posted it here. I wonder where I did post it...
+chstress53 Posted July 25, 2004 Me too. I even checked all my connections. Sometimes I get an error message that the server is busy.
shadango Posted July 25, 2004 (edited) Same here... very very slow. It took 4 tries to upload my latest log for a cache, and then when it went through it double posted. Trying to delete now. Edited July 25, 2004 by shadango
+Firehouse16 Posted July 25, 2004 No matter what they say or do, the server connection sucks! One of the worst connections I've ever had to deal with consistently. It makes doing my work here painful.
robertlipe Posted July 26, 2004 [ checks calendar ] You're trying to use geocaching.com interactively on a day that starts with an "S"? "That trick never works." --Rocky the Squirrel
+Firehouse16 Posted July 26, 2004 (edited) S or M or T or W, lately it's been almost every day again. Either way, this is ridiculous. Once they took on paying members, they took on the obligation of giving us a site that works. It's taking me almost 45 seconds to get to any page I want to look at, and believe me, I have a FAST connection! The only thing that seems to be OK is the forums. Edited July 26, 2004 by Firehouse16
+Teach2Learn Posted July 26, 2004 Still very slow, often "too busy" or "timed out," and not just because it's a Sunday/weekend. There is another problem, probably the same one Jeremy mentioned this past week. Very frustrating on a day when the weather wasn't good for caching either and I was trying to log some earlier finds, but I'm sure that he or another admin is "workin' on it."
Jeremy Posted July 26, 2004 It turned out that the firewall was the culprit. Imagine a drunk bouncer (it's the best analogy I have). The bouncer wasn't even recognizing its own members half the time and was booting them, which caused a cascading problem throughout the servers. The machines have been rebooted (including the firewall) and it seems to have cleared up somewhat. We'll continue to monitor the site to make sure it is ok.
+Boot Group Posted July 26, 2004 I couldn't get in for about an hour, but all is well now. It's running fast again. Thanks!
+Sparky-Watts Posted July 26, 2004 No matter what they say or do, the server connection sucks! One of the worst connections I've ever had to deal with consistently. It makes doing my work here painful. Not getting your 8 cents worth out of the site today? Sheesh, lighten up!
+Teach2Learn Posted July 26, 2004 Yes, the server works again, the server works again! (shouted to imitate Steve Martin's enthusiasm upon receiving the new phone books in the movie "The Jerk")
+Sparky-Watts Posted July 26, 2004 Yes, the server works again, the server works again! (shouted to imitate Steve Martin's enthusiasm upon receiving the new phone books in the movie "The Jerk") <sighting target through scope>There you are, you typical random cacher!</sight>
+Teach2Learn Posted July 26, 2004 Yes, the server works again, the server works again! (shouted to imitate Steve Martin's enthusiasm upon receiving the new phone books in the movie "The Jerk") <sighting target through scope>There you are, you typical random cacher!</sight> Yes, you spotted me, but please don't shoot. Even if I am "typical" and "random," I try to be proactive instead of negative, and that's the only reason I even mentioned the slowness of the server, not to complain. Returning to the topic, I was trying to show my gratitude for Jeremy fixing the problem with the gc site's firewall.
+Sparky-Watts Posted July 26, 2004 Yes, you spotted me, but please don't shoot. Even if I am "typical" and "random," I try to be proactive instead of negative, and that's the only reason I even mentioned the slowness of the server, not to complain. Returning to the topic, I was trying to show my gratitude for Jeremy fixing the problem with the gc site's firewall. Yeah, I know you were..... it's always a big relief from panic when that first page of gc.com finally pops back into view after some downtime!!!
+RuffRidr Posted July 26, 2004 It turned out that the firewall was the culprit. Imagine a drunk bouncer (It's the best analogy I have). The bouncer wasn't even recognizing its own members half the time and booting them, which caused a cascading problem throughout the servers. The machines have been rebooted (including the firewall) and it has seemed to clear up somewhat. We'll continue to monitor the site to make sure it is ok. I'm curious as to what kind of firewall you are using (hey, I'm an IS Security Analyst) and what platform it runs on. Anyway, I can understand if you don't want to answer that question; some companies are touchy about giving that out. Hell, if I poked around a bit, I could probably figure it out myself, but I know better than to do that kind of thing. --RuffRidr
lowracer Posted July 26, 2004 I think I found the problem. The can's empty. Time to re-order.
+Sparky-Watts Posted July 26, 2004 I think I found the problem. <snipped photo> The can's empty. Time to re-order. Oh, I figured he ran out of hamster food!
+Firehouse16 Posted July 26, 2004 Actually it was my $.10! LOL and it's been several days of slugging around. But thanks for finding the fix Jeremy!
+TotemLake Posted July 26, 2004 It turned out that the firewall was the culprit. Imagine a drunk bouncer (It's the best analogy I have). The bouncer wasn't even recognizing its own members half the time and booting them, which caused a cascading problem throughout the servers. The machines have been rebooted (including the firewall) and it has seemed to clear up somewhat. We'll continue to monitor the site to make sure it is ok. I knew that as soon as you tried to log your finds you would discover the problem.
Jeremy Posted July 26, 2004 Oops. I was wrong about the firewall. It's really a traffic issue. We're running into a situation where traffic gets to a point that the db is completely tapped, which means the web server is waiting too long for the database to return data. It then goes into a vicious circle of crashes as it tries to keep up. There are two solutions we're working toward. One, we're adding a BigIP load balancer that will allow us to create a web farm. Much of the data on the site is stored in memory, but it runs out of memory too quickly. By using BigIP we can split the number of users connecting to each machine, which will give each one a breather. Additionally, we will be adding another database and making it read-only. This will allow the more detailed queries to go to one box, and updates to go to another machine. These changes will occur over the next month. Afterwards, we can just add more web servers or databases as traffic increases.
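Jeremy's read-only copy idea is a standard read/write split. As a rough sketch (all names here are hypothetical, not Groundspeak's actual code), the routing logic amounts to:

```python
# Hypothetical sketch of a read/write split: updates (logging a find)
# go to the writable primary database, while read-only queries
# (searches, cache pages) are served by the read-only copy.

class SplitRouter:
    """Route SQL statements to a primary or a read-only replica."""

    def __init__(self, primary, replica):
        self.primary = primary
        self.replica = replica

    def route(self, sql):
        # Anything that modifies data must hit the primary;
        # plain SELECTs can be answered by the read-only box.
        is_read = sql.lstrip().upper().startswith("SELECT")
        return self.replica if is_read else self.primary


router = SplitRouter(primary="db-write", replica="db-read")
print(router.route("SELECT * FROM caches WHERE id = 42"))       # goes to the replica
print(router.route("UPDATE logs SET text = 'TFTC' WHERE id = 7"))  # goes to the primary
```

The payoff is the one Jeremy describes: the expensive detailed queries can hammer one box without slowing down the box that accepts updates.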
+Hynr Posted July 27, 2004 I hate to state the obvious here, especially where caching seems to be everybody's expertise... but maybe the solution is to buy a couple of gigs of RAM and cache the whole db? I guess the one obvious catch is that it would be expensive. Is that the only limitation?
thorin Posted July 27, 2004 Considering most consumer boards can only handle 1GB of RAM, and can only address a max of 2GB, just throwing RAM at the situation isn't a solution. You need a server that can actually handle the needed amount of RAM properly. Thorin
+GrizzlyJohn Posted July 27, 2004 My guess is that if they are talking about a farm, they have servers that are likely on the high end. Servers that are really servers, not desktop boxes running server software, and that can handle gigs of RAM. I also suspect they have added RAM already; that just tends to be an easier solution to try before going to a farm.
Jeremy Posted July 27, 2004 We have 4 gigs of RAM on tigger, but we get over 15 million pageviews a month. Since most of the site accesses the database, we store a lot of data in memory on the web server, but it runs out too quickly.
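"Runs out too quickly" is exactly the problem a bounded LRU cache is meant to manage: with memory capped, the least recently used entries get evicted to make room for new ones. A minimal sketch (illustrative only, not the site's actual caching code):

```python
from collections import OrderedDict

class BoundedCache:
    """A small LRU cache with a hard cap on the number of entries."""

    def __init__(self, max_entries):
        self.max_entries = max_entries
        self._data = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None          # cache miss: caller falls back to the db
        self._data.move_to_end(key)  # mark as recently used
        return self._data[key]

    def put(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        if len(self._data) > self.max_entries:
            self._data.popitem(last=False)  # evict the least recently used
```

With 15 million pageviews a month, the cap gets hit fast, and every eviction means another round trip to the already-tapped database, which is why splitting the load across a farm helps more than RAM alone.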
thorin Posted July 27, 2004 (edited) My guess is that if they are talking about a farm, they have servers that are likely on the high end. Servers that are really servers, not desktop boxes running server software, and that can handle gigs of RAM. I also suspect they have added RAM already; that just tends to be an easier solution to try before going to a farm. Agreed. I was just trying to point out to Hynr that throwing RAM at the problem isn't really a solution on its own. I have no doubt that gc.com is actually running on some halfway decent hardware, but even then, throwing RAM at it doesn't solve the problem if the server is maxed already. I wish you could just keep throwing RAM at servers, but unfortunately they max out eventually too. We have a few Sun boxes here that are maxed at 4GB and 2GB respectively, and we'd love to just add more RAM but can't. Edit: Just checked out the F5 Networks BigIP stuff; that's pretty sweet HW they have in the offering. Thorin Edited July 27, 2004 by thorin
+GrizzlyJohn Posted July 27, 2004 Agreed. I was just trying to point out to Hynr that throwing RAM at the problem isn't really a solution on its own. Yeah, but remember the good old days when throwing RAM at the problem always worked? And covered up a bunch of sloppy coding that you never felt like going back and cleaning up. Man, life was so simple then. Funny to talk about 4 gigs of RAM when I remember a 30 meg hard drive being a big deal.
thorin Posted July 27, 2004 Agreed. I was just trying to point out to Hynr that throwing RAM at the problem isn't really a solution on its own. Yeah, but remember the good old days when throwing RAM at the problem always worked? And covered up a bunch of sloppy coding that you never felt like going back and cleaning up. Hey, who told you about that? No one was ever supposed to dig that piece of code up. Thorin
+WeightMan Posted July 27, 2004 Agreed. I was just trying to point out to Hynr that throwing RAM at the problem isn't really a solution on its own. Yeah, but remember the good old days when throwing RAM at the problem always worked? And covered up a bunch of sloppy coding that you never felt like going back and cleaning up. Man, life was so simple then. Funny to talk about 4 gigs of RAM when I remember a 30 meg hard drive being a big deal. Hey, I remember when 32K was a big chunk of RAM. I think it was about that time that Bill Gates supposedly said 640K was as much as anyone would ever need. (At least I think that is what he said. I know if I am wrong someone will come up with the correct quote and markwell it.) Does anyone else remember RAM in kilobytes?
+RuffRidr Posted July 27, 2004 Edit: Just checked out the F5 Networks BigIP stuff; that's pretty sweet HW they have in the offering. We use Radware here where I work. Very similar to the stuff from F5. It is pretty sweet how that stuff works. Here, if our firewalls, webservers, etc. get overloaded, we can throw another one together and put it behind the Radware, and it instantly grabs off some of the load. Cool technology. Not cheap. --RuffRidr
Jeremy Posted July 27, 2004 The cost difference between Radware and BigIP was nominal, and we liked more of the features from BigIP. We haven't purchased the solution yet, so it may be good to pick your brain about it. Eliasbone is spec'ing out the requirements, so I'll ask him.
+Hynr Posted July 27, 2004 If there is 4 gig of RAM in the system, how much of it is being used to cache the database and its index files? Is there a way to get a dedicated cache for just those files?

Another way to address this issue is to see if there is a way to have less data transfer per user action. If we assume that the most common user action is pulling up a single web page, then perhaps there is stuff on that web page that could be omitted. I notice that there are two maps on each cache page and I never ever look at one of them (the little one). Typically just under 2000 bytes. I wouldn't even notice if you dropped that. Replace the yellow and green graphic stars (168 and 164 bytes respectively) with simple asterisk characters (1 byte each, and maybe 15 bytes more for the html code to make them colorful). I see other icons on that screen that could be eliminated or replaced as well. I know that you want to have a snazzy looking page, but I think it could be leaner.

Another common request is the "My Cache Page" listing. Give us more options to turn stuff off (e.g. I can live without the calendar display, and most of the time I don't even see the bottom 80% of that page, which is cache finds from a month ago). Give us a setting to show the last Y number of finds, or just the last 10 days rather than 30 days. Suddenly there is 66% less stuff to look up in the database, and 66% less stuff to transmit to us.

The listing of caches ("Filter Finds" for instance) seems quite lean to me. I think it is very well done as it is, without any undue graphic glitz. Here I only wonder what database access occurred to make it happen, and whether all the implied database look-ups happened for the pages that are not displayed. I hope not. I'd hate to think that every time I run that page, the search engine goes and reads all 4216 records which it claims to have "found" as a result of my query.
I'm going to assume it ain't so; if it is in fact doing all that work just to show me 25 records, then there is another place where some efficiencies could be generated. I would also like to suggest that we get some sort of off-line mechanism for logging caches. If we could do all the work off-line and transmit the logs in one batch, that would save tons of server activity.
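Hynr's worry about reading all 4216 records just to show 25 is the classic pagination question. The usual fix is to ask the database for only the rows on the requested page. A hypothetical sketch (table and column names are made up; generic LIMIT/OFFSET syntax is shown, and the SQL Server of that era would have needed a TOP-based workaround instead):

```python
def paged_query(page, per_page=25):
    """Build a query that fetches only one page of results
    instead of pulling back every matching row."""
    offset = (page - 1) * per_page
    # "caches" and "placed_date" are illustrative names only.
    return (f"SELECT * FROM caches ORDER BY placed_date DESC "
            f"LIMIT {per_page} OFFSET {offset}")

print(paged_query(3))  # asks the database for rows 51-75 only
```

Whether this actually saves work depends on the engine: with a usable index on the sort column, the database can stop after one page, which is exactly the efficiency Hynr is hoping for.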