
Site Performance Being Addressed


mtn-man
Followers 7


It seems a little better on the review side this morning too.

 

Maybe the new server that Jeremy talked about a couple of days ago is in? (crosses fingers).

 

It's more likely the new improved hamster food they started using. "Speedy Hamster Flakes, now with Dexedrine!"

It seems a little better on the review side this morning too.

 

Maybe the new server that Jeremy talked about a couple of days ago is in? (crosses fingers).

 

It's more likely the new improved hamster food they started using. "Speedy Hamster Flakes, now with Dexedrine!"

:) :) :anicute: :D :) B) :)

 

As they say... "Up the dosage!" :D:D


The hamster factory is working 24/7 to turn out more hamsters. I've got Barry White music blaring on the stereo... not just to set the mood on the production floor, but also to drown out all the squeaking.

 

This emergency has completely depleted my supply of signature items. Yet I am happy to make the sacrifice.


The hamster factory is working 24/7 to turn out more hamsters. I've got Barry White music blaring on the stereo... not just to set the mood on the production floor, but also to drown out all the squeaking.

 

This emergency has completely depleted my supply of signature items. Yet I am happy to make the sacrifice.

 

The visual of this is rather disturbing!!!

 

But hey... whatever it takes, right!?!


It's nice to see that the powers that be have acknowledged the performance problems and that it bothers them plenty as well. In my experience, applications take time to optimize. It's a non-trivial process, especially with a production system that's been built up over the years. Rather than complaining, I offer them my thanks and wish them luck finding optimizations that solve the problems as quickly as possible. I know we all look forward to getting the problems down to a manageable level and keeping them there as geocaching continues to grow in popularity.

 

Are there perhaps any .NET optimization specialists who are geocachers and would love a faster site? Perhaps just inviting them to volunteer their expertise would turn up some low-hanging fruit.

 

Mike


I got my regular two weekly pocket queries delivered right on time Saturday morning, and I logged 16 finds Saturday evening, all with no delays. This morning I requested the "My Finds" query, and a route query for a day trip tomorrow, and received them both within a minute or two of the requests.

 

Whatever you are doing to make things better, it is working. I really appreciate the time and effort going into keeping the site running smoothly. It stinks for you, though. I'm sure you'd rather be out caching!


The hamster factory is working 24/7 to turn out more hamsters. I've got Barry White music blaring on the stereo... not just to set the mood on the production floor, but also to drown out all the squeaking.

 

This emergency has completely depleted my supply of signature items. Yet I am happy to make the sacrifice.

Barry White??? :blink:

 

I guess that's one way to get them in the mood to make more hamsters. :o


I just submitted a new cache and moved about 10 TBs/coins around, in and out of caches, with not one single problem. :o

Now I'm not getting e-mail notifications (started a new thread on this). Could this have something to do with the nice speed and no "server too busy" errors? :o

Where are the crybabies now...btw? :blink:


Whoops . . . I just got this message:

 

Server Error in '/' Application.

Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.

Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

 

Exception Details: System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.

 

Source Error:

An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

 

Stack Trace:

[SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.]
   System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream) +742
   System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior) +45
   System.Data.SqlClient.SqlCommand.System.Data.IDbCommand.ExecuteReader(CommandBehavior behavior) +5
   System.Data.Common.DbDataAdapter.FillFromCommand(Object data, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior) +304
   System.Data.Common.DbDataAdapter.Fill(DataSet dataSet, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior) +77
   System.Data.Common.DbDataAdapter.Fill(DataSet dataSet) +38
   Groundspeak.Web.SqlData.SqlConnectionManager.FillDataSet(String sql, Database database, Int32 Timeout) +208
   Groundspeak.Web.SqlData.SqlWaypointController.GetUserLogTypeListData(Int64 UserID, WptDataSources Datasource) +70
   Groundspeak.Web.GPX.WptLogType.GetUserLogTypeList(Int64 UserID, WptDataSources DataSource) +38
   Geocaching.UI.my_geocaches.BuildLogFilters(WptDataSources Datasource) +50
   Geocaching.UI.my_geocaches.Page_UserLoggedIn(Object sender, EventArgs e) +265
   Geocaching.UI.WebformBase.IsLoggedIn() +1088
   Geocaching.UI.my_geocaches.Page_Load(Object sender, EventArgs e) +92
   System.Web.UI.Control.OnLoad(EventArgs e) +67
   System.Web.UI.Control.LoadRecursive() +35
   System.Web.UI.Page.ProcessRequestMain() +772

 

Version Information: Microsoft .NET Framework Version:1.1.4322.2300; ASP.NET Version:1.1.4322.2300

:laughing:
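The "Timeout expired" SqlException quoted above is what ADO.NET raises when a query runs longer than the command timeout against an overloaded database. A common client-side mitigation on any stack is retry with exponential backoff; this is a generic sketch of the pattern in Python, not Groundspeak's actual code.

```python
import time

def with_retries(operation, attempts=3, base_delay=1.0, sleep=time.sleep):
    """Run operation(), retrying on TimeoutError with doubling delays.

    Delays are base_delay, 2*base_delay, 4*base_delay, ... so a briefly
    overloaded server gets breathing room between attempts.
    """
    for attempt in range(attempts):
        try:
            return operation()
        except TimeoutError:
            if attempt == attempts - 1:
                raise  # out of retries: surface the error to the caller
            sleep(base_delay * (2 ** attempt))
```

A wrapper like this only papers over load spikes; it does not fix the underlying slow queries, which is presumably what the server work described earlier in the thread is about.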

Just a thought, but could it be the pocket queries slowing down the system? I hunt in a large area and like to have an up-to-date database of the area's caches. This takes 16 pocket queries to get all of them, because Utah is a big state with a lot of caches. Could these be done with just one query of the state in the future? That would free up a lot of server time, and probably make for a much simpler search of the database. What about splitting larger states into multiple areas? I've never found a cache near St. George, but I have a lot of them in my database; they're 300 miles away from me. I've found caches in Idaho just 150 miles away, but I have none of those in my database.

I became a premium member for this reason mainly, but also because I found the hobby so fun I figured it was worth it. Thanks to all who started this site and made it so great.


Geocaching.com is not the only site experiencing these problems. Every site that has experienced phenomenal growth goes through it, and it's never solved overnight. Anyone who is on Second Life, for example, knows that the community there is making precisely the same complaints that we're seeing here. It's part of the nature of the human-machine interaction. No one wants to see participants unhappy, and you can be sure that people *are* in fact working on the problem.

 

Is logging weekend finds on a Wednesday out of the question?

 

-- Jeannette


Geocaching.com is not the only site experiencing these problems. Every site that has experienced phenomenal growth goes through it, and it's never solved overnight. Anyone who is on Second Life, for example, knows that the community there is making precisely the same complaints that we're seeing here. It's part of the nature of the human-machine interaction. No one wants to see participants unhappy, and you can be sure that people *are* in fact working on the problem.

 

Is logging weekend finds on a Wednesday out of the question?

 

-- Jeannette

 

I don't have a problem as such with logging finds a few days later, but a real problem is that if you log TBs or coins a few days later, someone may have already picked them up and logged them as "grabbed from somewhere else", which makes the tracking a little more difficult.

 

I recently experienced this with one of our TBs. It had been logged as left in a cache, but was grabbed and taken somewhere else before I had the chance to log it, and someone else logged it as "grabbed from somewhere else". So I then had to log that I had grabbed it, post a note to the cache it had never been logged to and "drop" it there, then go back to our TB's log and move it back to the last location (the person who has it now)...

 

It was a long process: it took me over 45 minutes just to work out how to do this, as there was no information on it in any of the TB documentation I found, and during this "virtual movement" I had two run-ins with "server not responding" errors, which added a further 25 minutes. So for one missing piece in the track of our TB it took just over 1 hour and 10 minutes to fix.......... Imagine if you had 10 travelers' logs to "fix".


I think the TB problem is only solvable by community ethics: "Don't log a TB before the last holder has logged it." Due to holidays etc., it could take weeks before a TB is logged. Not everyone goes online on their holidays!

 

A performance-improving change could be bulk logging. The process as it is: look up a cache (which you have already done, so you know its GC number), click "Log your visit", and then write your note. The unnecessary display of the cache info should be avoided.

 

A bulk procedure, where you request a table of 10-20-30 rows and fill in your GC number, your log type, and your comment, would improve performance in the stress-logging period by several thousand percent!!

 

A more moderate procedure could be to have one log flow lead directly into the next. Today the process goes: display cache, log it; display cache, log it; etc. An improvement could be: display the first cache, log it, put in the next GC number, log the next, put in the next GC number, log the next, etc.
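As a rough illustration of the bulk-logging idea above, here is a hypothetical Python sketch. The field names and batch shape are invented for illustration, not the site's actual interface: the point is that all logs are collected client-side and sent as one payload, instead of two page loads (cache display plus log form) per cache.

```python
def build_bulk_log_batch(entries):
    """Turn (gc_code, log_type, note) tuples into one batch payload.

    One payload per batch means one round trip to the server, rather than
    loading and rendering a full cache page for every single log.
    """
    batch = []
    for gc_code, log_type, note in entries:
        batch.append({
            "gc": gc_code.upper(),   # GC numbers are conventionally uppercase
            "type": log_type,        # e.g. "Found it", "Didn't find it"
            "note": note.strip(),
        })
    return {"logs": batch}

# A day's finds, submitted together:
payload = build_bulk_log_batch([
    ("gc12345", "Found it", "Quick find, nice view. "),
    ("gc67890", "Didn't find it", "Searched 20 minutes."),
])
```

Whether this actually saves the server much depends on where the cost is; if the expensive part is the database write per log rather than the page render, batching helps less than the post hopes.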


:D So hard to navigate the website lately. 2am and there are still performance problems... I'd think most geocachers are asleep now.

 

Hope you get it fixed soon. I know/appreciate that you're trying, but this is tough to work around. It took me like 40 minutes to log 3 caches the other day because the server wasn't responding.


:D So hard to navigate the website lately. 2am and there are still performance problems... I'd think most geocachers are asleep now.

 

Hope you get it fixed soon. I know/appreciate that you're trying, but this is tough to work around. It took me like 40 minutes to log 3 caches the other day because the server wasn't responding.

But it's 7am in the UK and Europe, where cachers are looking up the caches for the day...


One issue that I haven't noticed being mentioned is the server (I would guess it's a different server anyway) that hosts the images. It's very often impossible to view gallery images when reading logs or profiles. Is this actually a separate server and, if so, is the issue being looked into or is it simply on hold until the regular server issues are ironed out?


One optimization that might help would be to limit the number of duplicate emails sent. If I post a note on a cache that I own, I get three (or is it 4?) copies of the note in my email. If I post a note to a cache that I'm watching, I get two copies of that in my email. Etc.
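The suggestion above amounts to de-duplicating notification recipients per event: an owner who also watches their own cache should get one email per log, not one per subscription. A minimal sketch of that idea, with invented names rather than the site's actual schema:

```python
def notification_recipients(event_subscriptions):
    """Collapse multiple subscriptions per address into a single send.

    event_subscriptions: list of (email, reason) pairs for one log event.
    Returns a dict mapping each address (once) to the set of reasons it
    subscribed, so one email can list all of them.
    """
    merged = {}
    for email, reason in event_subscriptions:
        merged.setdefault(email.lower(), set()).add(reason)
    return merged

# One "note posted" event, four subscriptions, two actual emails:
subs = [
    ("owner@example.com", "cache owner"),
    ("owner@example.com", "watchlist"),
    ("owner@example.com", "bookmark list"),
    ("friend@example.com", "watchlist"),
]
recipients = notification_recipients(subs)
```

Besides cutting mail volume, grouping by address also halves (or better) the work the mail queue does per event, which matters when notifications are already backing up as described elsewhere in the thread.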


The NON-fantastic service of the server continues!!! I spent over 10 minutes TRYING to log a simple find....

 

What gives????

 

I might as well go back to dial-up for the amount of time I spend waiting for Geocaching.com to load its bloody pages!

 

So much for high speed....

 

Maybe they should start a mail-in cache logging system - it would be faster some days!

 

DD


Site performance recently seems to be terrific. Based on some of the other messages, I know there must be problems but I've noticed a vast improvement. Reaction time actually seems "snappy" at this point. Just thought I'd throw in my two cents....


It's taking all day to update the TBs. I logged one this morning, and the search page still doesn't show it, and the log doesn't show updates either. That, and getting SQL errors all day....

 

How many hours does it take to update things?


How many hours does it take to update things?

 

Software development is not an easy or quick thing. Just look at how many times Windows Vista's release date was pushed back. This stuff takes time, especially on a production system: you have to change the code, test it in a testing environment, and get all the bugs out of those changes before migrating to the production environment and going 'live'. It may be a slow process, but if done right (and I have faith that Jeremy et al. are doing things right) it will be well worth the wait.


How many hours does it take to update things?

 

Software development is not an easy or quick thing. Just look at how many times Windows Vista's release date was pushed back. This stuff takes time, especially on a production system: you have to change the code, test it in a testing environment, and get all the bugs out of those changes before migrating to the production environment and going 'live'. It may be a slow process, but if done right (and I have faith that Jeremy et al. are doing things right) it will be well worth the wait.

 

I guess I phrased that wrong - I meant how long for the TB logs, etc... to update. My stuff finally did get updated - took about 8 hours.


I guess I phrased that wrong - I meant how long for the TB logs, etc... to update. My stuff finally did get updated - took about 8 hours.

That's another issue that they are aware of. It has to do with the syncing of the servers. It's also why approved caches show as "Unapproved Cache" in the title on the search page.

 

Someone forgot to feed the hamsters and Keystone had to overnight an emergency supply of Chemical-X; that's why your issue is OK now. :shocked:


I have 3 Pocket Queries that are supposed to run daily and email me the GPX files, but this is Friday and the last 2 I received were on Sunday and Tuesday. Two of my queries return 350-400 caches, so the query email size is around 5 MB. Is anybody else having problems receiving their pocket query emails? Should I be alternating days with the 2 large query requests? Thanks,


I have 3 Pocket Queries that are supposed to run daily and email me the GPX files, but this is Friday and the last 2 I received were on Sunday and Tuesday. Two of my queries return 350-400 caches, so the query email size is around 5 MB. Is anybody else having problems receiving their pocket query emails? Should I be alternating days with the 2 large query requests? Thanks,

 

First, I would ask yourself: do you really need 3 PQs every day? If so, you're probably going to have to make a copy of each one every day, let the copy run once, then delete it; the copy will run within minutes.

But the best thing to do is to uncheck all 3 PQs from running every day and run each one only when you 'really' need it; then it will run almost instantaneously.

Maybe once the server issues are fixed all premium members can run 5 PQs every day :laughing:


I have 3 Pocket Queries that are supposed to run daily and email me the GPX files, but this is Friday and the last 2 I received were on Sunday and Tuesday. Two of my queries return 350-400 caches, so the query email size is around 5 MB. Is anybody else having problems receiving their pocket query emails? Should I be alternating days with the 2 large query requests? Thanks,

 

First, I would ask yourself: do you really need 3 PQs every day? If so, you're probably going to have to make a copy of each one every day, let the copy run once, then delete it; the copy will run within minutes.

But the best thing to do is to uncheck all 3 PQs from running every day and run each one only when you 'really' need it; then it will run almost instantaneously.

Maybe once the server issues are fixed all premium members can run 5 PQs every day :anicute:

 

You're correct, I do not need 3 PQs run each day. I was just doing that so I could at least get them every 2-3 days. I had no idea that copying a query and letting the copy run once before deleting it would give me instantaneous results; I'll give that a try. For my original PQs, I changed them so that no days are selected and "Uncheck the day of the week after the query runs" is selected. Thanks!


Except for the fact that it might take a few hours to synchronize (getting check marks next to my "found" caches), the site performance over the past week has been great. Quick page loading, quick uploads of pics and quick search results. If enhancements have been made, they certainly are apparent.

 

Thanks for the hard work on this.


My family and I like to geocache; it's a great family activity. We particularly enjoy seeking geocaches at interesting spots we would have never stumbled across otherwise. We also like the history of geocaching and the genesis of geocaching.com.

 

However, the website performance has become so poor that dealing with the site has become a reason not to geocache. I've given up logging all of our finds - it's just too slow and frustrating. I'd pay more for speed; I'd go to a new website for speed. Today (Sunday) I planned to log our Saturday finds and maybe find a few more targets for today, but I'm giving up.

 

Sorry to be so grumpy - I just hope something is done.


Logged a bunch of caches this afternoon. Things went amazingly quickly for a Sunday, so whatever changes have already been put in place: thumbs up!

 

All weekend though, I haven't gotten a single notification for caches or travel bugs, although I submitted a new cache today and I did get the "We have received your submission" email, so I know my email isn't being blocked. (Which it shouldn't be anyway as I host my own email and would know if stuff was getting blocked.)

 

PQs seem to be working again, as the PQ I have scheduled to run on Sundays ran just fine, coming in while I was logging caches.

 

I fear what's going to happen when all this backed up email suddenly gets turned loose... :blink:

This topic is now closed to further replies.