
Dedicated Server For Premium Members


lowracer

Recommended Posts

We paid. We shouldn't have to wait for clogged servers. Give us our own dedicated machine, with a guaranteed quality of service. This would easily be worth $50-75 a year.

 

Not trying to be elitist, but since we paid the money, this would be the best benefit you could offer your premium members. No more waiting, no more server errors, no more 'can't load the page after trying for 60 seconds.'

Link to comment

Interesting idea. Of course, I paid because I thought it was a good price for the services I already received. The additional 'premium' services are just cake.

 

That being said, many people would not be members if the price of membership was raised above a certain threshold. Profit is maximized up to that threshold and lost beyond it. $50-$75 seems a little steep, in my opinion.

 

I say leave it as it is. I'm quite sure that they are working on the server issue.

Edited by sbell111
Link to comment

Interesting. The way I see it, right now premium members help pay for the site as a whole, for both free and premium members. I would be uncomfortable asking for a dedicated server at the current premium member price, because I want to see the free option remain for others, and providing that without raising prices could tax Groundspeak. Would I pay more for speed greater than what those who don't pay get? Sure! But I can see why an increase in price would be necessary there. I would happily pay more, but others might cringe. I have no idea there.

Link to comment

I do not understand all there is to know about servers and forums, but I am a member of another forum that has thousands of members and very few issues. The only cost is to volunteer 10 bucks every year or so.

That site has no speed issues and runs very well.

It is www.yotatech.com

I am not criticizing or arguing, just sharing what I know, which isn't much.

So, no one take offense at this; I'm just making a comparison.

Link to comment
I do not understand all there is to know about servers and forums, but I am a member of another forum that has thousands of members and very few issues. The only cost is to volunteer 10 bucks every year or so.

That site has no speed issues and runs very well.

It is www.yotatech.com

I am not criticizing or arguing, just sharing what I know, which isn't much.

So, no one take offense at this; I'm just making a comparison.

The Groundspeak forums very rarely have problems. The problems are with the database and the server where all of the cache pages are. This is totally separate from the forum server.

 

southdeltan

Link to comment
I do not understand all there is to know about servers and forums, but I am a member of another forum that has thousands of members and very few issues. The only cost is to volunteer 10 bucks every year or so.

That site has no speed issues and runs very well.

It is www.yotatech.com

I am not criticizing or arguing, just sharing what I know, which isn't much.

So, no one take offense at this; I'm just making a comparison.

It's not the forums that have the problem, it's the Geocaching web site. That site has a lot more volume. If you're curious as to how many logs and loggers that site has, check out About Geocaching.

Imagine each log entry requires calling up (probably two to three times) a web page with a maximum size of, I think, 8,000 characters, from a database containing 117,476 active caches - and I have no idea of the number of pending and archived caches. Then throw in all the photos uploaded and the TBs that are logged in and out. It's a lot of hits to process.

 

TPTB are trying hard to keep up but it will still be a while.

 

As the poster that beat me to it while I was typing said, the two sites are on different servers.

 

Cache Well

Link to comment

There were only 72,000 logs filed in the past week.

 

There are database systems that log that much activity in a day.

 

More to the point, if even a portion of the current machines were to be premium only, you would probably draw more people to become premium members and that money *might* be enough to get more hardware support for the cluster...but in the meantime, I have a feeling that the premium member server would get slammed hard enough to make it as useless as the current situation (in fact, enough people might upgrade their membership such that non-members would have faster access... :D ).

 

Of course, a better option may be to have a "cache logging" page where, if you have the GC# of the cache, you can log your find on a page similar to the logging page now, but instead of requiring you to load the cache page first or returning you to the cache page after your log, it leaves you at a static page like the front page.

 

This means fewer queries during peak logging hours, which means happier DB'ing.
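
As a very rough sketch of how small the per-log work could then be - with table, column, and procedure names (Caches, Logs, QuickLog, WaypointCode) that are pure guesses for illustration, not Groundspeak's actual schema:

-- Hypothetical quick-log path: one indexed lookup, one insert, no cache page render.
CREATE PROCEDURE dbo.QuickLog
    @WaypointCode varchar(10),
    @UserID int,
    @LogTypeID int,
    @LogText nvarchar(4000)
AS
BEGIN
    DECLARE @CacheID int;
    SELECT @CacheID = CacheID FROM dbo.Caches WHERE WaypointCode = @WaypointCode;

    IF @CacheID IS NULL
        RAISERROR('Unknown waypoint code.', 16, 1)   -- fat-fingered GC number
    ELSE
        INSERT INTO dbo.Logs (CacheID, UserID, LogTypeID, LogDate, LogText)
        VALUES (@CacheID, @UserID, @LogTypeID, GETDATE(), @LogText);
END

Whether the real write path could be that lean obviously depends on things we can't see from the outside (triggers, stats, counters and so on), but it has to beat rendering the full cache page twice.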

Link to comment

Sounds to me like a perfect example of when database optimization intersects with availability:

 

http://www.sqlskills.com/blogs/kimberly/

 

http://www.sqlskills.com/blogs/kimberly/Pe...75-8f9f32682a78

 

This isn't a facetious comment - she knows her stuff and the .NET Rocks webcast was very informative, and it sounds to me like you've entered that zone.

Edited by caderoux
Link to comment
I'm the first to admit I'm tapping my SQL skills.

Didn't you post an opening for a "computer guru type dude(ette)" a while back?

 

Whatever happened to that? No nibbles?

 

I guess everybody's willing to give free advice but nobody wants to get paid to actually work (of course some of the more capable people wouldn't be able to take the probable pay cut plus move).

 

sd

Link to comment

Every one of these points has been made several times over the past four weeks now.

It generally involves someone suggesting that it is a financial problem.

Then it becomes clear that the current equipment should be able to handle the load.

Then someone suggests the PQs are at fault.

Then it becomes clear that the PQs are on a separate machine.

At some point Jeremy mentions that the bottleneck is in the database.

Then we start looking for ways that the SQL could be faster, added equipment might be useful, etc., and explore workarounds.

 

I would like to know what action Groundspeak has taken in the past month to resolve the problems. I'm guessing there have been "discussions" and "diagnostics"; what has been concluded, and what action is planned?

Link to comment
I'm the first to admit I'm tapping my SQL skills.

 

Tapped out?

 

I don't want to intrude on your process, but I assume you are using the Profiler and Query Analyzer to understand your stored procedure execution plans, have updated statistics on your tables, and have at least done some review of the indexes and clustered indexes? Are you sure your stored procedures are using the execution plans you think they are?

 

Are you using implicit joins (don't)? If you are performing a lot of inner joins, have you pushed the criteria up into the join instead of the WHERE clause and seen if indexes can be modified to turn table scans into index scans? Are you doing dirty selects (with NOLOCK)? I don't normally have to bother, but you probably safely can in most cases, which will remove locking contention.
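
To make those last two suggestions concrete, a throwaway example - the table and column names are invented, and NOLOCK is only safe where slightly stale reads are acceptable:

-- Hypothetical read: criteria expressed in the join, dirty reads to sidestep lock contention.
SELECT c.CacheName, l.LogDate
FROM dbo.Caches AS c WITH (NOLOCK)
INNER JOIN dbo.Logs AS l WITH (NOLOCK)
    ON l.CacheID = c.CacheID
    AND l.LogDate >= '20050601'   -- filter moved up into the join rather than the WHERE clause
WHERE c.Archived = 0              -- an index on Logs(CacheID, LogDate) turns the scan into a seek
ORDER BY l.LogDate DESC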

 

Are there single steps which always run slower than you'd expect or is there only general slowdown noticed at times of high activity? If so, are there hot spots due to your clustered indexes? Are logs and data files on separate disks? Windows performance counters can also show you some things about the OS performance handling your file access.

 

Sorry, I can't offer my services, but in any case, I'm not a real DBA - more of a jack of all trades - it sounds to me like a DBA is exactly what you need - a hired gun to come in for just a few weekends to help you profile the app and get some things ironed out.

Link to comment
There were only 72,000 logs filed in the past week.

 

There are database systems that log that much activity in a day.

You fail to mention the number of searches performed, the number of web pages loaded, the number of maps generated, etc. That is quite a bit more than 72,000, I'd bet.

 

Jeremy, what is the average daily hit rate on the site?

 

--Marky

Link to comment

I know you guys are just trying to help, but you really don't understand what's going on behind the scenes. And it's certainly not your fault either; we don't disclose enough information about the inner workings of the site or supply any statistics you can use to estimate or guess as to what's happening. Suffice it to say we're *way beyond* the basic performance monitoring and tuning Caderoux mentions a couple of posts back.

 

We may be tapping our SQL skills, but don't read that to mean we're inexperienced or don't know what we're doing. I've worked with enough consultants to be confident in saying that 80% of the Microsoft Certified DBAs wouldn't be able to help us, or they'd offer nonsensical suggestions. I'm not trying to imply that we write better SQL or haven't introduced problems in various places, but I can say the problems that do exist aren't trivial and we certainly haven't missed anything obvious.

 

Ok, enough ranting. :mad:

 

Now about that 72,000 number. I won't completely correct it (since we don't give out numbers), but I'll give you a hint. In terms of daily SQL "activity", that number is off by about an order of magnitude. You can guess in which direction. :huh:

 

Believe me, we're not happy with the performance of the site either. It's as frustrating to us as it is to all of you. We may not know exactly what's going on, but we do know how to figure it out, and we will get it resolved.

 

:D Elias

Link to comment
Now about that 72,000 number. I won't completely correct it (since we don't give out numbers), but I'll give you a hint. In terms of daily SQL "activity", that number is off by about an order of magnitude. You can guess in which direction. :D

I think the 72k number was referenced as the number of logs. I'd imagine the primary site is 90%+ reads and 10% or fewer writes.

 

As someone without any MS certification (like those matter anyway :mad:), but who has been doing this for 25 years and has built very large multi-user database systems (databases in the terabytes, with web and client-server front ends), I can appreciate the issues you are running into. I've even written (albeit 12+ years ago now) queries to return the closest set of rows based on geocoding; Oracle didn't do it well, but Sybase/MS returned results over tens of thousands of rows within a second or two. If I recall, there are even multiple formulas that can be used, with the more complicated and mathematically intensive ones providing a bit more accuracy.
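
For the curious, that kind of query is roughly the following shape (spherical law of cosines; the Caches table with Latitude/Longitude columns in decimal degrees is an assumption, and the bounding box is only there so an index can prune rows before the trig runs):

-- Hypothetical nearest-caches query around a search point given in decimal degrees.
DECLARE @Lat float, @Lon float
SELECT @Lat = 47.6, @Lon = -122.3

SELECT TOP 50
    CacheID,
    CacheName,
    3959 * ACOS(SIN(RADIANS(@Lat)) * SIN(RADIANS(Latitude))
          + COS(RADIANS(@Lat)) * COS(RADIANS(Latitude)) * COS(RADIANS(Longitude - @Lon))) AS DistanceMiles
FROM dbo.Caches
WHERE Latitude BETWEEN @Lat - 1 AND @Lat + 1          -- crude bounding box so an index gets used
  AND Longitude BETWEEN @Lon - 1.5 AND @Lon + 1.5
ORDER BY DistanceMiles

The haversine form behaves a bit better for very short distances, which is probably the accuracy trade-off I was half-remembering.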

 

In all cases (other than a Sybase conversion to Oracle, which never took off) all performance issues could be overcome with either code or DB (typically application DB) tweaking.

 

So I wish you luck, and if you do want help, just ask. I'd imagine there's a good number of folks here who have practical hands-on experience and would be willing to help make things better.

 

David

Link to comment
I know you guys are just trying to help, but you really don't understand what's going on behind the scenes. And it's certainly not your fault either; we don't disclose enough information about the inner workings of the site or supply any statistics you can use to estimate or guess as to what's happening. Suffice it to say we're *way beyond* the basic performance monitoring and tuning Caderoux mentions a couple of posts back.

 

We may be tapping our SQL skills, but don't read that to mean we're inexperienced or don't know what we're doing.  I've worked with enough consultants to be confident in saying that 80% of the Microsoft Certified DBAs wouldn't be able to help us, or they'd offer nonsensical suggestions.  I'm not trying to imply that we write better SQL or haven't introduced problems in various places, but I can say the problems that do exist aren't trivial and we certainly haven't missed anything obvious.

 

Ok, enough ranting.  :D

 

Now about that 72,000 number.  I won't completely correct it (since we don't give out numbers), but I'll give you a hint.  In terms of daily SQL "activity", that number is off by about an order of magnitude.  You can guess in which direction.  :mad:

 

Believe me, we're not happy with the performance of the site either. It's as frustrating to us as it is to all of you. We may not know exactly what's going on, but we do know how to figure it out, and we will get it resolved.

 

:huh: Elias

Two points.

 

1. Thank you for the comments and all the efforts.

 

2. It is cruel, but I am also thankful that it's you who gets the calls when the site is down, and not me. How you handle the stress when your labor of love is having problems and we are all complaining is beyond me.

 

Did I bother to say Thank you?

 

Michael

Edited by CO Admin
Link to comment
I've worked with enough consultants to be confident in saying that 80% of the Microsoft Certified DBAs wouldn't be able to help us, or they'd offer nonsensical suggestions.

That's for sure (I'd say you're probably in the 95% region). You're well outside the general experience zone, and a lot of the certifications are not worth the paper they're written on. You can always do it on your own, and taking the time to find the right person is a huge opportunity cost.

 

You shouldn't rule out the effects of the other parts of your network also possibly improving your "perceived" database performance. Improving the firewall and web server as you're doing will eliminate barriers to users, which may actually have caused more database requests than necessary as failed attempts were retried. Eliminating these bottlenecks can cause more load, as you mentioned, but it can also have the opposite effect, because it can be a lot more complex than that.

Edited by caderoux
Link to comment
Now about that 72,000 number. I won't completely correct it (since we don't give out numbers), but I'll give you a hint. In terms of daily SQL "activity", that number is off by about an order of magnitude. You can guess in which direction. :D

My intention was not to provide it as a bottom line on the total number of SQL calls in a week. In fact, I feel it better serves the point that came last in my post, which no one has taken into account.

 

I don't know exactly which pages require SQL calls to the DB (for example, does the homepage, because of the events listings and latest logs?), but in order to actually write one of those 72,000 logs, you have to hit:

 

Homepage

My Page

Filtered Finds List

Cache Page

Log Page

Cache Page (after sending Log)

 

And of course, I can see a few of those pages requiring multiple calls (like My Page and the Cache Page).

 

In fact, that the homepage more than likely causes a few DB calls is probably not smart site management, since even someone going straight to the forums or somewhere else that wouldn't query anything still takes a shot at the DB (sure, you could cache the events...but if someone put in a last-minute one or changed one, it'd be incorrect...and there's no way you're caching the latest logs).

 

You may have the most efficient SQL queries in the entire world...but it won't help you survive if you're spending 15-20% of the DB's access just getting people in the front door. That's a site design issue, not a DBA issue.

Link to comment
Of course, a better option may be to have a "cache logging" page where, if you have the GC# of the cache, you can log your find on a page similar to the logging page now, but instead of requiring you to load the cache page first or returning you to the cache page after your log, it leaves you at a static page like the front page.

Something like this?

 

[Attached screenshots: mock-ups of the proposed quick-logging page.]

That would certainly cut down on the DB accesses during peak times. Just enter your Waypoint ID, your log, and hit submit. No need for the DB to serve you the cache page at all.

 

Of course, you'd have to be careful not to fat-finger the waypoint ID, but there could be ways to double-check that - for instance, put a "CHECK ID" button next to the waypoint ID field and have it print the name of the cache next to it, just as a sanity check.
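
That check could be as cheap as a single indexed lookup - something like this, with made-up column names since none of us outside Groundspeak know the real schema:

-- Hypothetical "CHECK ID" lookup: returns just the cache name for the typed waypoint code.
DECLARE @WaypointCode varchar(10)
SET @WaypointCode = 'GCABCD'   -- whatever the user typed into the form
SELECT CacheName FROM dbo.Caches WHERE WaypointCode = @WaypointCode

An index on WaypointCode makes that a single seek, which is nothing compared to serving the whole cache description page.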

 

Edited to suggest that this could be a Premium Member Feature and suggest one possible branding strategy.

Edited by lowracer
Link to comment

Judging by the error messages I get today, I would avoid using datasets and use the more lightweight datareaders - they use less server (DB) and client (web) resources. But I'm sure you know that and all this is pointless Tuesday-morning quarterbacking.

 

Good luck.

Link to comment

There could be a confirmation page (using lowracer's example) that only retrieved the cache name based on the waypoint (or maybe a little more info, like state/country), where you would then confirm the log submission.

 

Good idea though - I like it.

 

Also, if there's a way (maybe there already is) to call a URL for a cache based on its code or some other identifier from programs like GSAK and others, we could do it through that interface as well. This could save at least one additional click to get to the site and then to the page to enter the log.

Link to comment

The problems with log scalability were mentioned at least a year ago.

 

http://forums.Groundspeak.com/GC/index.php...=0entry357599

 

Note the date - even then the site reliability was taking on water. A few of the things mentioned in that thread have since been addressed.

 

Sometimes "do less" really is a better answer than "do it faster". Some discussion on logging interfaces has been given at http://forums.Groundspeak.com/GC/index.php?showtopic=77810 and http://forums.Groundspeak.com/GC/index.php?showtopic=68251 but an optimized logging interface closer to what many before have described (and what lowracer has just sketched out) - and especially if it doesn't require page-specific viewstates- would cut the number of pages that had to be served during logging without the cost of a "real" web services API.

 

I don't know how things are going to improve from here, but I'm looking forward to it.

Link to comment
I know you guys are just trying to help, but you really don't understand what's going on behind the scenes. And it's certainly not your fault either; we don't disclose enough information about the inner workings of the site or supply any statistics you can use to estimate or guess as to what's happening. Suffice it to say we're *way beyond* the basic performance monitoring and tuning Caderoux mentions a couple of posts back.

 

:huh:

Elias, I really appreciate your comments. And now that we know what is going on (or not) we no longer have to beat up the mystery culprit(s).

 

I would recommend starting a thread by posting this sort of statement as a pinned item at the top of the web site forum. Leave it there until the problem is identified and resolved. Keep it updated as well as possible within the constraints of what the company is comfortable disclosing. Point out to us what it looks like on our screens when this problem is flaring up and what we should report to you. If you are interested in suggestions, then focus that discussion by starting a very specific thread. If unsolicited, pointless threads emerge, you can politely direct folks to the pinned item. If folks are venting, then just let them do that.

 

I feel that by keeping us as informed as possible, we (your clients) will not be as frustrated.

Link to comment

I would be more than willing to buy into a higher level of membership. I would be willing to drop 100 bucks on a higher level of service, not that I am unhappy with what I have now. I can certainly see a dedicated server for "ultra-premium members" being worth the money. Although I can handle the server slowdowns - I just postpone logging - if you tell me that I can get guaranteed access (assuming that nothing is broken), then tell me how much and where to send the check.

 

For the record I am happy with the great job that Groundspeak is doing, but I am always interested in getting more and I don't mind paying for it.

Link to comment

I think the idea is ridiculous. What happens when, with time, all the servers are upgraded and the bandwidth is multiplied, so everything is fast for the basic members as well? Will Groundspeak then insert sleep instructions and empty loops into the scripts so that it stays slow for the basic members (otherwise the premium membership would not be as attractive)? Come on.

Link to comment
There could be a confirmation page (using lowracer's example) that only retrieved the cache name based on the waypoint (or maybe a little more info, like state/country), where you would then confirm the log submission.

 

Good idea though - I like it.

 

Also, if there's a way (maybe there already is) to call a URL for a cache based on its code or some other identifier from programs like GSAK and others, we could do it through that interface as well. This could save at least one additional click to get to the site and then to the page to enter the log.

GSAK does allow you to do that now. Recently I've been doing all my logging from GSAK. I filter to the caches I just did, click on the open page in browser, and enter my log as normal - no main page, my page, hide/seek or such.

Link to comment
Recently I've been doing all my logging from GSAK. I filter to the caches I just did, click on the open page in browser, and enter my log as normal - no main page, my page, hide/seek or such.

That's a step in the right direction, but the GSAK method still requires the geocaching.com database to read and serve up the cache description page, including the images, backgrounds, first few logs, hints, and all that.

 

The suggested SPEEDLOG® method wouldn't require any (or at least not as many) database reads. The only database access would be writing the logs to the cache page. Perhaps behind the scenes there's some short reading of an index to make sure the cache exists before writing, but it couldn't be as much database access as loading the whole cache description page.

 

Disclaimer: The last time I wrote any kind of database software was back in 1992, using stone knives and bear skins (C++). Perhaps things have changed significantly since then.

Link to comment

I don't know how GSAK links to the site, but you can go directly to the log page and bypass the cache page entirely.

 

e.g.

 

http://www.geocaching.com/seek/log.aspx?ID=146042

 

or if you have the longer ID:

 

http://www.geocaching.com/seek/log.aspx?wi...84-2263a44b20af

 

Regarding SPEEDLOG® (or whatever you want to call it)

 

It's a great idea. You will need to refresh the page for the waypoint ID in order to determine the correct log types you can use, but otherwise it would just be a couple of round trips to the database. I'll see how I can integrate this. Additionally I'll have web services in the near future you can also use to log caches, via both SOAP and a standard HTTP Get.

Link to comment
Regarding SPEEDLOG® (or whatever you want to call it)

 

It's a great idea. You will need to refresh the page for the waypoint ID in order to determine the correct log types you can use, but otherwise it would just be a couple of round trips to the database. I'll see how I can integrate this. Additionally I'll have web services in the near future you can also use to log caches, via both SOAP and a standard HTTP Get.

Maybe Speedlog could say "that is not a valid log type for this cache" if they select the wrong one (much like how it now says something like "you must enter a log type")
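
That validation would presumably be one more small lookup - mapping the cache's type to its allowed log types - along these lines (the table names here are completely made up for illustration):

-- Hypothetical lookup of the log types that apply to the cache behind a waypoint code.
DECLARE @WaypointCode varchar(10)
SET @WaypointCode = 'GCABCD'
SELECT lt.LogTypeID, lt.LogTypeName
FROM dbo.Caches AS c
INNER JOIN dbo.CacheTypeLogTypes AS ctl ON ctl.CacheTypeID = c.CacheTypeID
INNER JOIN dbo.LogTypes AS lt ON lt.LogTypeID = ctl.LogTypeID
WHERE c.WaypointCode = @WaypointCode

If the submitted log type isn't in that result, SpeedLog could show the "not a valid log type for this cache" message instead of writing anything.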

Link to comment
I don't know how GSAK links to the site, but you can go directly to the log page and bypass the cache page entirely.

 

e.g.

 

http://www.geocaching.com/seek/log.aspx?ID=146042

 

or if you have the longer ID:

 

http://www.geocaching.com/seek/log.aspx?wi...84-2263a44b20af

 

Regarding SPEEDLOG® (or whatever you want to call it)

 

It's a great idea. You will need to refresh the page for the waypoint ID in order to determine the correct log types you can use, but otherwise it would just be a couple of round trips to the database. I'll see how I can integrate this. Additionally I'll have web services in the near future you can also use to log caches, via both SOAP and a standard HTTP Get.

In regard to the IDs, why are there two different types?

I always preferred the shorter, old ID to the new, longer one.

MarcB

Link to comment
I don't know how GSAK links to the site, but you can go directly to the log page and bypass the cache page entirely.

 

e.g.

http://www.geocaching.com/seek/log.aspx?ID=146042

Geocaching.com Add New Log=http://www.geocaching.com/seek/log.aspx?ID=%gcid

 

is what I added as a Custom URL to GSAK. I right click any cache in GSAK and select Geocaching.com Add New Log and it opens the log page. Neat and quick!

Link to comment
I like the idea of a dedicated server for premium members. I'd be willing to up my annual premium membership fee to $60.00 per year to see a dedicated server.

 

Wait a second, people...most of those who are saying this are single people, or at least the single cacher in a family - what about the nice folks my husband and I met out caching a couple of weeks ago who have FIVE people in the family who all cache? Will they pay for FIVE premium memberships at the higher cost? For that matter, will my husband and I both pay for two higher-priced memberships?

Or do we pay for one premium membership for the household - and it's my guess that would not help the server issues anyway...

Link to comment

I will start by telling you that all this SQL and server stuff is waaaay over my head, but I had a couple of thoughts.

 

It sounded like the bottleneck is in the database; if so, would another server help relieve some of that load? If that were the case, maybe Groundspeak could set up another "donation"-type PayPal account where members could help with a new server fund. Since I have no idea how much a server would cost, maybe this is not even an option.

Link to comment
A big, bad server will only last you so long, if the real problem is the DB. Thought about looking at other DBMS solutions, besides MS?

The difference in performance between different database software is relatively negligible. Each database has its own strengths and weaknesses, and every database vendor claims their database is the fastest under certain test methodologies. If I were a HUGE company, I'd probably worry about those minor nuances, but for us, it really doesn't matter.

 

That said, a few years ago we considered switching to Oracle so we could take advantage of their Spatial module. We couldn't afford it, so it wasn't much of an evaluation, but we did briefly consider it.

 

You are certainly right that hardware gets us only so far. We need to find better and smarter ways of querying the database, improving our caching mechanisms, etc. We know that throwing hardware at a problem isn't a fix; it's just part of the larger performance tuning picture.

 

:P Elias

Link to comment
Geocaching.com Add New Log=http://www.geocaching.com/seek/log.aspx?ID=%gcid

 

is what I added as a Custom URL to GSAK. I right click any cache in GSAK and select Geocaching.com Add New Log and it opens the log page. Neat and quick!

Cool. I get it now. That's the SpeedLog® functionality, built right into GSAK! Makes me glad I spent my $15 wisely.

Link to comment
We had a couple of DL580s lying around. They're cute if you put a good RAID system in them. But don't confuse them with big and bad. :D

That's really funny - I was looking at those a few weeks ago and even printed out their spec sheets (they're still on my desk). :D

 

Unfortunately, it'll take a few more premium members before we'll be able to get one of those.

 

:P Elias

Link to comment
We had a couple of DL580s lying around. They're cute if you put a good RAID system in them. But don't confuse them with big and bad. :D

That's really funny - I was looking at those a few weeks ago and even printed out their spec sheets (they're still on my desk). :D

 

Unfortunately, it'll take a few more premium members before we'll be able to get one of those.

 

:P Elias

/goes and renews my subscription...

Link to comment
This topic is now closed to further replies.