Posts posted by caderoux
I checked off one to run today which hasn't run in a couple days. It still hasn't run after an hour or so.
"Page cannot be displayed" is an IE "friendly HTTP error message" Because of the page state method used for some pages (and several on geocaching.com), the back button cannot properly be used.
Opening in a new window preserves the original window and is really the only solution (other than the designer changing the way the web application keeps track of user state).
The new global replace option will go a long way toward making this easier for you.
However, updates to the macro language in version 4.3 should make this possible.
Cool. Keep up the good work.
Apologies if this has already been covered.
In my script, I bring in various files and then export my hitlist (all user-flagged caches) to GPX. Then I use GPXToMaplet to get that GPX into Mapopolis and put the result on my PDA.
I usually check a filter I've set up to identify new unfound, unflagged caches and click watch and user flag them. Then I re-export the hitlist. But it would be nice if this would happen automatically (watch and userflag all caches in a particular filter after the caches are loaded and before the hitlist export), so that my hitlist is always and automatically updated for these types of caches.
Is there already a way to do this in GSAK?
Thanks in advance,
Mapopolis - program free, pay for maps
GPXToMaplet - converts a GPX file to a Maplet file
I'm currently holding this one - will drop it off in the next few days - been riding out Ivan.
Only thing I would add is that it might not get as far as you expect in a single school year.
I think I messed up a cache without even knowing it - this cache was awesome, but I think I accidentally over tightened it (PVC sewage cleanout plug x 2 encased in concrete).
We have one in New Orleans - haven't done it yet, though.
So maybe the real problem is deciding how much of the existing information to display (size is already an existing piece of information) and how to present it, rather than an issue with the type categories?
This might seem to be a silly question, but if someone is a frequent cacher and hates micros with a passion, don't they already have tools to get them to only caches which will be guaranteed to meet their stringent cache-enjoyment requirements? - like GSAK or Pocket Queries.
What about Fakes? - we've got all kinds of fake stuff popping up. Arguably that should be mystery/puzzle or its own fake type - probably has more merit as being a type than micro - but part of the attraction of doing these is not knowing it is a fake and having to make some kind of mental leap and get the rush.
We could add so many 'types' or 'hunt styles' in a sport/pastime/hobby/game as creative as this, all attempting to define and categorize everything. If micro is used as an indication of hunt style, then I've seen plenty of ammo boxes which are micros. What about mid-sized caches like the smaller peanut butter jars, etc.? They are sometimes like micro hunts and sometimes like full-sized hunts. This ain't baseball; the sport itself defies categorization.
I called my most recent multi-cache a puzzle, because the stages did not contain the coordinates of the next cache - only a number which would be used to build up the next and final locations.
But I was 50/50 on whether to call it a multi or a puzzle (people can't just head to the first location like a traditional multi - they need to read the cache description - so I decided puzzle would be the most considerate choice). Three of the four stages are micro-sized and the final location is an ammo box. I was trying to work a combination lock into the cache, but I've left that for another cache (and another multi-puzzle dilemma); I was trying to fit the game I wanted the player to play into the mold of what I had seen done before while making it original and fun all at the same time.
I voted No. But to me, there's no such thing as a bad cache - I generally find something positive about every caching trip - I love exploring and have an overdeveloped sense of curiosity. When I first started out, I didn't like micros much either, but after the first dozen or so, I relaxed and just had fun. Now I find the trade items rather wearisome, and rarely trade, but I go for all caches no matter what.
True... but you can't query web page comments. And it's tough to sift through 500 caches when you are in a town for only a day or two.
If you're coming to Louisiana, you can use the Lagniappe List which is just getting off the ground.
The purpose is to provide a guide to tourists and visitors to the best caches in the area based on past visitors' ratings. It is a positive-only system. Cache owners have to place the link on their page, so only the maintained and better caches will float to the top.
Speak out. Why not just email the cache owner and, if no result, email an approver to have a cache archived if the owner is neglecting it? Seems to me that you can already get to a win-win situation for everyone without a "rule", just by sending a few emails and having some discussion.
The more rules we add, the more pointless crap everyone will have to go through - new cache owners and approvers.
It's a hobby run by people for people. Rules are for computers; less pointless communication and work for good caches and more communication between people about bad caches will improve caching for everyone.
I love GSAK.
As you can tell from my script above, I always export my updated hitlist anytime I run my prep batch file. Now, if a new cache shows up and I haven't already put it in my hitlist, it will not be exported, so I always have to run my "unfound nearby" filter, check whether there are any new caches, and if there are, mark them and re-export my hitlist.
I think one thing which would be useful (and maybe can already be done) is to be able to automate that part.
I know there is a "mark new/updated caches" option, but I don't want to do that on all the PQs I'm bringing in, nor do I want to mark any which aren't in a particular filter.
Here's my DOS batch file - it actually creates a custom GSAK script every time based on what it sees needs to be done. It uses WinZip command line tool to unzip the new file. It automatically renames the files so I can tell what they are on my iPaq and ONLY imports the file the first time (saving processing time).
There are better and easier ways to do this if you generally have a lot more files (like a perl script with a list of filenames), but my PQs are pretty consistent, and I haven't spent much time on this:
@echo off
rem Builds a custom GSAK script based on which Pocket Query zips are present.
rem Each block unzips with the WinZip command-line add-on (wzunzip), renames
rem the GPX so it's recognizable on the iPaq, and deletes the zip so each
rem file is only imported the first time.
echo # GSAK File Generated By Batch File >geoprep.gsak
if not exist 31327.zip goto next1
echo LOADING Unfound
wzunzip -o 31327.zip
echo LOAD File="C:\Documents and Settings\Cade\Desktop\Geocaching\31327 - Unfound.gpx" >>geoprep.gsak
ren 31327.gpx "31327 - Unfound.gpx"
del 31327.zip
:next1
if not exist 40314.zip goto next2
echo LOADING Watched
wzunzip -o 40314.zip
echo LOAD File="C:\Documents and Settings\Cade\Desktop\Geocaching\40314 - Watched Caches.gpx" >>geoprep.gsak
ren 40314.gpx "40314 - Watched Caches.gpx"
del 40314.zip
:next2
if not exist 40315.zip goto next3
echo LOADING New
wzunzip -o 40315.zip
echo LOAD File="C:\Documents and Settings\Cade\Desktop\Geocaching\40315 - New Caches.gpx" >>geoprep.gsak
ren 40315.gpx "40315 - New Caches.gpx"
del 40315.zip
:next3
if not exist 40940.zip goto next4
echo LOADING Found
wzunzip -o 40940.zip
echo LOAD File="C:\Documents and Settings\Cade\Desktop\Geocaching\40940 - Found Caches.gpx" >>geoprep.gsak
ren 40940.gpx "40940 - Found Caches.gpx"
del 40940.zip
:next4
if not exist 106573.zip goto end
echo LOADING Orlando
wzunzip -o 106573.zip
echo LOAD File="C:\Documents and Settings\Cade\Desktop\Geocaching\106573 - Orlando Caches.gpx" >>geoprep.gsak
ren 106573.gpx "106573 - Orlando Caches.gpx"
del 106573.zip
:end
echo FILTER Name="Hitlist" >>geoprep.gsak
echo EXPORT Type=GPX File="C:\Documents and Settings\Cade\Desktop\Geocaching\hitlist.gpx" >>geoprep.gsak
"C:\Program Files\GSAK3\GSAK" /run "C:\Documents and Settings\Cade\Desktop\Geocaching\geoprep.gsak"
Although it is not meant to be a difficulty rating system, here in Louisiana we have the new Louisiana Lagniappe system, built by Alex at LAGeocaching.com, which allows caches in Louisiana to be rated for overall enjoyability. It's simple to vote on, simple to set up, and if you want to know which caches people liked the most in Louisiana, you can simply work your way down the list.
It's just starting out, but looks like it is going to be a success.
YMMV, of course.
Then again, I know of no tool that will export this format with user-entered logs. GSAK?
GSAK will export your user notes in a GPX file if you tell it to - as a note (not a find or anything specific).
We had a couple of DL580s lying around. They're cute if you put a good RAID system in them. But don't confuse them with big and bad.
That's really funny - I was looking at those a few weeks ago and even printed out their spec sheets (they're still on my desk).
The ES7000's provided me with much OS-architecture entertainment in my pre-geocaching days. (Hmm. They might have been under NDA then, so scratch that.) Once you get into Real Computers where everything including memory and CPUs are hotplug, and you have 32 or more processors honking along in the same cabinet, things get really wild.
I don't know what you were running before, but in the interest of not turning this into Judge Reinhold's demo of the Dominator MX-10s, I'll say the 580s are nice boxes. The off-cache performance is disappointing on them, but that's the point of the 4MB caches that are in vogue now on Xeons. Pair them up with a strong I/O system talking to a RAID cabinet with lots of spindles and a bucketful of memory, and you get Hail Marys for lots of database sins. Ironically, that also makes it harder to observe and measure those sins, because everything happens in core instead of on platters and is therefore faster.
We decided not to buy unpopulated quad-capable machines anymore. We thought of getting them dual-proc to start with and upgrading later for the last SQL Server upgrade we did. But if you got around to populating them in, say, a year, the price premium you had paid for the chassis would buy next year's fastest machine outright at that point - and that's in today's money. The markup on old processors and upgrades is ludicrous. And with data on the SAN, forklifting the data files to the new cluster was simple.
Interestingly, I saw an article on this a little while back (http://www.infoworld.com/article/04/08/13/33FEmyth1_1.html?s=feature)
So we pretty much stuck to only dual-proc for all servers and retired the quads.
Judging by the error messages I get today, I would avoid datasets and use the more lightweight datareaders - they use fewer server (DB) and client (web) resources. But I'm sure you know that, and all this is pointless Tuesday-morning quarterbacking.
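To make the dataset-vs-datareader point concrete, here is a rough analogy in Python with sqlite3 (not ADO.NET; the table and data are made up): `fetchall()` materializes every row in client memory like a DataSet, while iterating the cursor streams one row at a time like a forward-only DataReader, which is the lighter pattern for a busy web page.

```python
# DataSet vs DataReader analogy in Python/sqlite3 (illustrative only).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE caches (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO caches (name) VALUES (?)",
                 [("GC%05d" % i,) for i in range(1000)])

# DataSet-style: the whole result set is in memory before you touch a row.
rows = conn.execute("SELECT id, name FROM caches").fetchall()

# DataReader-style: forward-only cursor, one row in hand at a time.
count = 0
for _row in conn.execute("SELECT id, name FROM caches"):
    count += 1

assert len(rows) == count == 1000
```

Both loops see the same 1000 rows; the difference is how much sits in memory at once, which is exactly what matters under web-server load.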
I've worked with enough consultants to be confident in saying that 80% of the Microsoft Certified DBAs wouldn't be able to help us, or they'd offer nonsensical suggestions.
That's for sure (I'd say you're probably in the 95% region). You're well outside the general experience zone, and a lot of the certifications are not worth the paper they're written on. You can always do it on your own, since taking the time to find the right person is a huge opportunity cost.
You shouldn't rule out the possibility that improvements to other parts of your network will also improve your "perceived" database performance. Improving the firewall and web server, as you're doing, will eliminate barriers which may actually have caused more database requests than necessary as failed attempts were retried. Eliminating these bottlenecks can cause more load, as you mentioned, but it can also have the opposite effect - it's a lot more complex than that.
I'm the first to admit I'm tapping out my SQL skills here.
I don't want to intrude on your process, but I assume you are using the profiler and query analyzer to understand your stored procedure execution plans, have updated statistics on your tables and have at least done some review of the indexes and clustered indexes? Are you sure your stored procedures are using the execution plans you think they are?
Are you using implicit joins (don't)? If you are performing a lot of inner joins, have you pushed the criteria up into the join instead of the where clause, and seen if indexes can be modified to turn table scans into index scans? Are you doing dirty selects (WITH (NOLOCK))? I don't normally have to bother, but you can probably do so safely in most cases, which will remove locking contention.
Are there single steps which always run slower than you'd expect, or is there only a general slowdown at times of high activity? If the latter, are there hot spots due to your clustered indexes? Are logs and data files on separate disks? Windows performance counters can also show you something about how the OS is handling your file access.
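As a small demonstration of the "push criteria into the join" suggestion, here is a sketch using sqlite3 as a stand-in for SQL Server (the tables and names are invented): for an INNER JOIN, a single-table predicate can live in the ON clause instead of the WHERE clause. The result sets are identical; the point is that on some engines the earlier restriction lets the optimizer choose an index-driven plan instead of a scan.

```python
# Equivalent WHERE-clause vs ON-clause filtering for an inner join.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE caches (id INTEGER PRIMARY KEY, state TEXT);
CREATE TABLE logs (cache_id INTEGER, found INTEGER);
CREATE INDEX ix_logs_cache ON logs (cache_id, found);
INSERT INTO caches VALUES (1, 'LA'), (2, 'TX'), (3, 'LA');
INSERT INTO logs VALUES (1, 1), (2, 1), (3, 0);
""")

# Filter entirely in the WHERE clause.
where_form = conn.execute("""
    SELECT c.id FROM caches c
    JOIN logs l ON l.cache_id = c.id
    WHERE l.found = 1 AND c.state = 'LA'
""").fetchall()

# Single-table criterion pushed up into the join's ON clause.
join_form = conn.execute("""
    SELECT c.id FROM caches c
    JOIN logs l ON l.cache_id = c.id AND l.found = 1
    WHERE c.state = 'LA'
""").fetchall()

assert where_form == join_form == [(1,)]
```

For outer joins the two forms are NOT equivalent, so this rewrite is only safe on inner joins, as in the text above.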
Sorry, I can't offer my services, but in any case, I'm not a real DBA - more of a jack of all trades - it sounds to me like a DBA is exactly what you need - a hired gun to come in for just a few weekends to help you profile the app and get some things ironed out.
Sounds to me like a perfect example of when database optimization intersects with availability:
This isn't a facetious comment - she knows her stuff and the .NET Rocks webcast was very informative, and it sounds to me like you've entered that zone.
I saw "Microsoft" in your quote and thought: Okay, there is the reason
Not a likely, nor helpful, reason: the HTTP verbs in play are GET and POST, and the problem has nothing to do with the fact that it happens to be a Microsoft server product.
The page it's requesting doesn't accept either GET or POST, but that's what the form is asking for.
It's just all code and configuration, same as any system... Problem could be on PayPal's side or the destination form.
I don't want to sound like a crying baby, but can anyone explain to me when the site will be fixed once and for all?
It can only be "fixed" when the sport stops growing and evolving. Personally, I hope it never reaches that stage.
I don't think that's how Yahoo! and Google handled their growth, and I don't think this is what you meant, but it comes off like: "As long as our business is growing, the customers will have problems because we can't keep up. Only when the growth rate in demand falls below our capacity growth rate will our customers be happy."
Seriously, we know work is being done to improve performance to be able to handle the growth in users and the hobby/sport.
I, for one, appreciate the difficulty of the task in front of GC.COM, because I work with large/complex databases all the time, and performance tuning is a very important issue when databases become extremely large and complex, especially when viewing that data through different parameter filters such as user preferences (e.g. found/unfound). When the detail becomes extremely large, the processing of these types of joins can be extremely expensive.

And I see all these platform flame wars, which amuse me, since none of these people seem to realize that this technology problem is not unique to any one platform - you would have ALL THE SAME PROBLEMS on every platform when a database gets to this state of size and complexity. It seems like you run into every problem all at once - table size, number of tables, number of joins, clustered indexes which used to be right for the data at one size but are not so good when the data evolves over time.
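The "found/unfound" filter I mentioned is a good example of the pattern. A hedged sketch (sqlite3 again standing in for the real platform; the `caches` and `finds` tables are invented): "caches this user hasn't found" is an anti-join between the cache table and a per-user log table, and it's exactly the kind of query whose cost grows with both tables at once.

```python
# Per-user "unfound caches" anti-join (illustrative schema).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE caches (id INTEGER PRIMARY KEY);
CREATE TABLE finds (user_id INTEGER, cache_id INTEGER,
                    PRIMARY KEY (user_id, cache_id));
""")
conn.executemany("INSERT INTO caches VALUES (?)",
                 [(i,) for i in range(1, 101)])
# Suppose user 7 has found the even-numbered caches.
conn.executemany("INSERT INTO finds VALUES (7, ?)",
                 [(i,) for i in range(2, 101, 2)])

# Every cache with no find logged by this user.
unfound = conn.execute("""
    SELECT c.id FROM caches c
    WHERE NOT EXISTS (SELECT 1 FROM finds f
                      WHERE f.user_id = ? AND f.cache_id = c.id)
""", (7,)).fetchall()

assert len(unfound) == 50  # the odd-numbered caches remain
```

Run this per page view, per user, over millions of rows, and you can see why weekends hurt.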
I think the question, reasonably asked and answered, which most of the frustrated customers probably have, is simply:
We know you're working hard on the problems, but when do you think the site will be able to handle a weekend again?
Is There An Issue With Pocket Queries Today?
Mine still hasn't shown as being run - I'll uncheck it and re-check it again....