Everything posted by russellvt

  1. Geez, you must live in my area. I think I've found that one too! And yeah, no real imagination, there... I agree. Maybe we should start a thread mentioning folks that routinely provide some level of imaginative hides and/or containers (then again, perhaps that just encourages the plunderers to go off and plunder their caches... dunno).
  2. You could always just avoid caches located in parking lots... though there's the occasional LPC on a sidewalk, they seem to be a lot less common (that's not an invite for the "geocache in every wally world parking lot" folks, BTW). "I usually load up 800 or 900 caches at a time. Your suggestion would entail aerial photo inspection of every micro. I'd rather be caching." Point taken... I usually do the same thing, but the implication is that you could always drive by the obvious ones (sorry if I/that wasn't clear). <devil's advocate> Of course, I understand that it's not always obvious it's an LPC, either, until you drive up to it... and sometimes not even then (I've seen plenty of decoys). So, as long as you're there, "might as well grab it to clear the map" tends to be my thinking. </devil's advocate>
  3. Well, if you actually read my response, I indicated that SSL wouldn't address many of those social issues I pointed out. However, it would address an issue such as an upstream sniffer (trivial man in the middle) or other similar or potentially "social" compromises (eg. bribing or compromising links along the chain). Groundspeak also stores passwords (as compared to hashes), so SQL injection through something like URL munging (or similar) is also a consideration. Remember, you don't actually have to compromise the site in question, but only some link along the way (or have a crafty phish/proxy/etc)... and SSL is great for preventing prying eyes from prying (ie. keeping honest people honest) in those scenarios. Even compromise of the browser is possible (though that's obviously a spot where only good password practices might help you). (...but "computer illiterate" who... do the exact same thing? Sounds a bit harsh, to me...) My previous posts were largely directed at your assertions that [insert "worthless" social site here] is not worth an attempt at compromise from a hacker's standpoint. That's a pretty common misconception. Personally, I think it's pretty clear that folks at this site in particular might tend to fall more into the "disposable income" category, so it's certainly a security concern. Note: This is not saying someone should go and hack geocaching.com or any other social or hobby site or similar. It's merely giving you an idea of how some identity theft and related compromises tend to play out in real-world scenarios. And yes, education of your user base also plays a very definite role... social engineering can be a huge problem. I don't think a forum operator's job, however, is "user education" (that's more of a workplace thing). Bottom line: SSL's neither expensive nor difficult/costly to implement unless you're planning a whole new infrastructure around it (ie. new/big hardware). Frankly, I think someone with Groundspeak's traffic already has to have a reasonably impressive network presence as-is just to keep the Windows boxes from falling over. And honestly, I've not implemented SSL across IIS, though I believe Microsoft's pretty much gotten it down to a "single click" solution/purchase at this point (that might be slightly optimistic, though). Given the current round-robin solution on the site, it's unclear if there's already some sort of load balancer in the stream that could easily offload any/all IIS servers... though throwing SSL accelerators in front of the current farm(s) probably isn't tough or really that costly, either, in the grand scheme of things. But really, only Groundspeak can speak to those points. Any more dead horses?
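     As a rough illustration of the plaintext-vs-hash point above: a site that stores only a salted hash can still verify logins, but a database leak then exposes hashes rather than reusable passwords. This is just a minimal, generic sketch in Python (not anything Groundspeak actually runs), and the password string is made up.

        import hashlib, hmac, os

        def hash_password(password):
            # Store the random salt and the derived key, never the password itself.
            salt = os.urandom(16)
            key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
            return salt, key

        def verify_password(password, salt, stored_key):
            candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
            return hmac.compare_digest(candidate, stored_key)  # constant-time compare

        salt, key = hash_password("correct horse battery staple")
        print(verify_password("correct horse battery staple", salt, key))  # True
        print(verify_password("guess", salt, key))                         # False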
  4. I'm experiencing a weird bookmark issue as I'm creating a new bookmark list (for an "alphabetical" challenge). As I'm cresting the one page barrier (20... actually at 24 now) it seems that Firefox only shows 19 (with a "next page" link at the bottom). Clicking the "Next" or the "2" at the bottom of the page just gives me the canned "General Error" page. Also expanding the number per page to something like 50 doesn't actually change the display at all, but it removes the page numbers at the bottom. Of course... opening up IE (Aiyeeee!) and logging in, it all seems to work fine and as-advertised. After a few minutes, Firefox finally displays the 20th cache on the first page and is displaying just a copy of the first page for the "second" page. Seems like some weird session caching stuff is going on, perhaps? Or something odd with the javascript that's used to generate these pages? (why do it the hard way?) I'm currently using Firefox 3.0 on Vista and have page caching turned off (ie. "check every time"); though technically, the "expires" header should be properly set on pages generated from database queries, I'd think (ie. haven't actually checked). Other multi-page bookmarks would appear to work just fine. Note: ten minutes later, it's working fine (sounds like a caching issue somewhere?). I might blame the browser, were it not for the "general error" popping up all the time (though maybe that only needed to happen once before the browser cached it for some time, anyway?).
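     For what it's worth, if anyone wants to check whether those pages actually send an "expires" (or cache-control) header, a one-off request is enough. A minimal sketch in Python, with the URL just a stand-in for the actual bookmark-list address:

        from urllib.request import urlopen

        # Substitute the real bookmark-list URL; example.com is only a placeholder.
        with urlopen("https://www.example.com/") as resp:
            for name in ("Expires", "Cache-Control", "Last-Modified", "ETag"):
                print(name + ":", resp.headers.get(name, "(not set)"))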
  5. Unfortunately, I think current "best practices" come down to things like biometrics, and/or good password security policies with single-use passwords (OTP). But yes, essentially total security boils down to turning off a computer and unplugging it from the wall. (and yes, I know you can defeat fingerprint readers, too) Actually, in the real world, that's exactly where such a hack is probably going to start... probably during part of the process (aka "hack") referred to as enumeration. It's likely to be a place such as a reasonably "low-level" social site (such as a forum or similar) where the security is likely weaker or "not really a concern" (ie. the "low hanging fruit"). Typically, that's one of the best places to find things like unsuspecting users or software glitches that may allow intrusions, SQL injections or privilege escalation type operations (agreed, most of that's outside the realm of SSL). Basically something they'd likely not be able to get away with right away at some place like a financial institution or some other "high security" site (at least they'd likely be detected reasonably quickly). And again, based on all the things mentioned above, it's likely that it'd only be a matter of "following the chain" thereafter... BTW... many/most "secure" wireless networks are also only there to keep honest people honest, so to speak. Most of the weak encryption can be broken by someone determined enough to do so (see the password rotation discussion, above). Short of securing a router in a DMZ and running a VPN across it for "real" access, you're likely not going to be able to tighten even that down so that a determined (or knowledgeable) person can't get into it... Yes, none of this likely applies to someone who actually religiously follows good password guidelines... but really, how many folks actually do? (that's really what we're talking about, here) And with the proliferation of "login required" websites on the net, how many folks re-use credentials at "non-important" sites? Think about that one for a second and feel free to answer it for yourselves (and no, changing one or two letters/numbers or something between sites does not count as using different passwords, I'm sorry to say). If someone really follows good password security guidelines, I commend you -- count yourself as one of the very, very few. Edit: "low hanging fruit"
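     Since single-use passwords (OTP) came up above, here's roughly what a time-based one (TOTP, per RFC 6238) boils down to. A minimal sketch only, with a made-up secret, not tied to any particular site or token:

        import base64, hashlib, hmac, struct, time

        def totp(secret_b32, period=30, digits=6):
            key = base64.b32decode(secret_b32, casefold=True)
            counter = int(time.time()) // period          # advances every `period` seconds
            digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
            offset = digest[-1] & 0x0F                    # dynamic truncation
            code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
            return str(code % (10 ** digits)).zfill(digits)

        print(totp("JBSWY3DPEHPK3PXP"))  # same code only within one 30-second window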
  6. Interesting point, and GREAT bit of history to know! Suddenly sheds an enormous amount of light on the whole small/micro disambiguation... I'll have to keep that in mind while searching for older "micro" caches (any idea how long ago this was done? then again, I'm assuming it probably took a while to filter out completely following the change). So, were the above delineation a bit more clear (ie. the older established caches were properly size-adjusted), I think that would likely address a lot of the issue people are (still) having, since I think most of the complaint currently stems from the "small-sized micros." (Also, I think there would be less of an issue adding a new, smaller size than trying to add a size somewhere in the middle of the current scheme... but that point's probably moot, now) Thanks for the insight, Toz!
  7. Nice, thanks! I had seen that link but was thinking of "what use" it could be (assuming it was for all cache types). Of course, residing in a state that can take 2/3rds of a day to drive through makes this a bit more tedious, but probably still usable. But yeah... I tend to switch residences between states... just means I update my home coordinates as-needed. Sounds like it'd work in this case, too.
  8. With all due respect, by that same token, "large," "medium" and "small" would also have been enough, then. Just, in my opinion, there's usually a very different manner and/or level of search involved in finding a "nano" versus a "micro" -- far more than, say, a small versus a micro, for example (at least this is true in my own area and a couple of others I've seen). Therefore, it just seems like there would also be a logical division between the two sizes is all... But yes... at some point there certainly has to be an imposed cap.
  9. You could always just avoid caches located in parking lots... though there's the occasional LPC on a sidewalk, they seem to be a lot less common (that's not an invite for the "geocache in every wally world parking lot" folks, BTW). I, myself, don't mind the creatively hidden micros... and I have to admit, the first LPC we did was pretty amusing -- can't say I'm much impressed with anyone else's "creativity" following that one, though. Perhaps it's just more impetus for the more comprehensive cache rating system idea that seems to come up, oh, about every two weeks now? (disclaimer: I just pulled that link off the gc.com topic list - I've yet to actually read this installment of it).
  10. I think we could probably both agree that it doesn't sound like a $99 solution any more, does it? There are "Real" reasons NOT to do it, but as always, there are going to be many reasons based on perception and unwarranted fear to implement a solution. I tend to take the pragmatic approach though. If we focus on real issues, there really aren't any good reasons to do it. There are essentially always lots of immeasurable costs in "free" solutions... but those costs also don't tend to appear on the budget/planning sheets that go up the chains of command to scare away the potentially more-costly ideas, either. So yeah, I guess it could be said that it goes both ways... and yeah, you're right - it's not really a $50-$100 solution, overall (but the certs are certainly oodles cheaper than what they used to be). Overall, my general assertions (for clarity):
      - SSL cert costs are just a drop in the bucket, annually *
      - Overall, the performance degradation is probably negligible
      - A competent sysadmin should be able to add a cert to a fleet of load balancers or SSL accelerators in under an hour **
      * Programming man hours are likely dependent on architecture, though shouldn't be huge
      ** If you decide/have to buy additional hardware, this obviously can boost the cost significantly.
      "The day before a breach, the ROI is zero. The day after, it is infinite." - Dennis Hoffman, RSA vice president of enterprise solutions
      Edit: added assertions to help clarify the point
  11. Well, it at least generally contains real name and address (and ICBM address), which some people feel a bit sensitive about, along with possible IM accounts (with an option to turn off public display of it -- which would beg the question why it's there, anyway). So not sure I agree it's a "1" ... but probably not more than a 4 or 5 at worst/best? Well, you make good points but, really... how many folks actually have good password practices? (for that matter, how many sysadmins really even practice what they preach?) Anyone who has worked in IT can tell you of the nightmare you can have implementing even a corporate security policy that includes strong passwords and/or even a regular password rotation, as you very rightly recommend. More to the point, were Groundspeak's network to be somehow compromised and the sniffing were to occur on a larger scale... just because an individual user or group of them may already be irresponsible with their password practices shouldn't make it okay for the corporation-level folks to assume that they can be equally lax (and moreover, an argument can be made that a responsible corporation would just step up and do it anyway). BTW... with the advent of things like SSL accelerators, these days the only big performance hit is really during the establishment of the SSL session. Over an extended session the low-grade encryption isn't really that big a deal, even from a slow device such as a cell phone... you can further mitigate the issue by simply encrypting the login, and leaving the rest of the work to browser sessions (which they already use, anyway). Many sites on the Internet already do this exact thing and are plenty fast. In my own previous "production lives" we've never had a problem forcing a few hundred thousand clients to re-establish SSL sessions on a regular basis, short of one or more of the load balancers deciding that they weren't happy with life. I suspect the tougher technical issue for Groundspeak, here, is having a large number of clients computationally querying tens of thousands of records out of their DBs on a nearly continual basis... especially with a Windows front-end (*grins*). So my argument would be more like why not do it if a simple solution like this might provide users a modicum of peace of mind and, at the same time, make the company appear as though it cares about the security of its users.
  12. Not sure if you know, but events have to be published at least 2 weeks before the actual event, for that very reason, to give people time to plan. Yeah, I was hoping/figuring there was some sort of requirement for such, though I've recently missed "local" events that I didn't see until either the night before or a night or two afterward... though that probably just implies that I wasn't looking in the right spot or something. Thanks for the reply/info!
  13. Generally, such SSL MITM attacks will pop up a big security warning error message. Don't ignore those! I've done it to myself for test purposes - it's pretty scary. I was able to sniff my own credit card info while surfing my cc company's site. But like I said, I had to click through a huge 'invalid certificate' warning from the browser. Generally, perhaps... though many browsers come with certain features such as that one (and a few others) disabled or turned off by default... so the uninitiated/unwary tend to be the ones hurt worst. I think that's largely the case only because a lot of developers still seem to make the mistake of "half encrypting" a page, which tends to cause similar nasty alarms. But this is also all probably getting outside the scope of the original discussion... They're not even that pricey these days, unless you choose to go with someone like Verislime (disclaimer: may or may not bear any resemblance to any known or fictitious company).
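     To put that "invalid certificate" warning in code terms: a client that actually verifies the certificate chain and hostname will refuse the connection outright, which is what the browser pop-up is standing in for. A minimal sketch (badssl.com intentionally hosts broken certificates for exactly this kind of test):

        import socket, ssl

        def try_tls(host):
            ctx = ssl.create_default_context()  # chain + hostname verification enabled
            try:
                with socket.create_connection((host, 443), timeout=10) as sock:
                    with ctx.wrap_socket(sock, server_hostname=host) as tls:
                        print(host, "-> OK:", tls.version())
            except ssl.SSLCertVerificationError as err:
                print(host, "-> refused:", err.reason)

        try_tls("self-signed.badssl.com")  # refused, the script equivalent of the warning
        try_tls("www.example.com")         # verifies cleanly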
  14. Hmmm... evidently I don't manage to read it, much. (seems kinda odd that the feature, as-written, would be of little use, though... /shrug) Thanks much!
  15. Well, not always... most often, it seems to be a film canister or key hider, in my experience. And as you also said, not everyone's on the same page with respect to (WRT) small vs. nano. Myself, a nano is generally a size where there's zero chance of anything fitting in there other than the very-tightly-wound-log (about half the size of the typical waterproof matchstick container that everyone seems to like to wrap camo tape around)... or, as some would say, one where you usually spend more time getting the log back into the container than you did taking the whole thing out. Also, keep in mind that some folks just grab a PQ and a GPS and hit the road... so it's debatable whether the description is often seen much before many try to log a find (or do further research after a DNF). My inlaw-types are more methodical in their approach to caching (ie. research every one before searching), but, well... my fiance and I fall more to the side of keeping more of a list on the GPS in case I/we get time or the urge to cache in the area(s) I/we currently happen to be in.
  16. [browser burp... duplicate post... my apologies]
  17. You can do something similar in GSAK with the "filter" feature... but, like the "notaboutthenumbers" solution already proposed, it requires access to your friends' "finds" list (PQ) or their GSAK database (sharing of which appears to be encouraged between licensed copies of GSAK; it's at least encouraged between different machines). I already do something like this with my s/o and caching partner to see where we "overlap." Unfortunately, there are only really provisions for one-to-one comparisons, and the caveat is that you have to set up GSAK so that it never purges logs from your friends (which can be daunting for the basic user). I imagine with a bit of ingenuity, though, you could probably manage a kludge'y workaround involving filters, database merging, etc., that could get you the result list you were seeking.
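     Outside of GSAK, the same sort of overlap check is basically a set intersection on cache codes. A minimal sketch that pulls the waypoint names (the GCxxxx codes) out of two "My Finds"-style GPX files; the file names are placeholders, and it assumes the usual GPX 1.0 namespace that Groundspeak PQs use:

        import xml.etree.ElementTree as ET

        NS = {"gpx": "http://www.topografix.com/GPX/1/0"}

        def cache_codes(gpx_path):
            # Collect the GC codes (waypoint names) from one GPX file.
            root = ET.parse(gpx_path).getroot()
            return {wpt.findtext("gpx:name", default="", namespaces=NS)
                    for wpt in root.findall("gpx:wpt", NS)}

        mine = cache_codes("my_finds.gpx")          # placeholder file names
        theirs = cache_codes("friend_finds.gpx")
        print("Both found:", len(mine & theirs))
        print("Only me:", len(mine - theirs))
        print("Only them:", len(theirs - mine))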
  18. Don't think that works for your "found caches," though... well, more specifically, it's not listed for the included/canned "My Found Caches" link, and any manually created PQ is limited to a max of 500 caches (a Google Maps max, IIRC) and would appear to be limited to 500 miles. Also, the "Google Maps Preview" section of this is busted ("feature'ized"), as it may not actually show you all 500 miles of that query (looks more like 50). For instance, I currently have around 150 found caches in the San Francisco and Los Angeles areas, total (those cities are about 300 miles apart), but a PQ centered in either the bay area or half-way between (approx Coalinga) only shows a few caches... a closer look at the query suggests that it's cap'd at 50 (not 500) miles, even though it says 500.
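     One quick way to confirm the 50-vs-500-mile suspicion is to compute the great-circle distance from the query center to the caches that come back and look at the maximum. A minimal sketch with rough, made-up coordinates (approximately Coalinga, San Francisco, and Los Angeles):

        from math import asin, cos, radians, sin, sqrt

        def miles_between(lat1, lon1, lat2, lon2):
            # Haversine distance in statute miles.
            lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
            a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
            return 2 * 3959 * asin(sqrt(a))

        center = (36.14, -120.36)                       # roughly Coalinga
        caches = [(37.77, -122.42), (34.05, -118.24)]   # SF and LA city centers
        print(max(miles_between(*center, *c) for c in caches))  # well over 50 miles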
  19. And that, pray tell, I think is the best argument for them... I mean, heck, my s/o (NoSuchCache) tends to h8te (sic) "micros." Personally, I think it's due to the scenario I described, above... most of the "micros" I come across (particularly when she's with me) are usually really nano-sized caches or the like. I think, were the sizes a bit more distinct, it would enable her to avoid the really tiny ones, which can tend to drive us both crazy at times...
  20. For turning on "event notifications" (ie. "Instanotify" Notification Service with an "Event Cache" or "Mega-Event Cache") it would be nice if we could get alerts from further than 50 miles. Particularly for remote areas, 50 miles may not be far enough to be of much benefit to some. It would also be nice if you could just say "any type of event" so as to combine the two. Yeah, I understand that you can still achieve the same thing with a PQ (though it's not "instant," which could be a problem for "last minute events"). Also, with a number cap of 500 caches (ie. a hard DB limit on the SQL query), it seems a little crazy (possibly computationally "expensive") to also worry about the distance in certain instances like this... though I know an event 500 miles away is probably of little concern to most folks, too (*laugh*).
  21. Looks like (as of a few weeks ago now) "caches along a route" are being averaged (ie. at least the points that make up a route are significantly reduced following upload). Just FYI, I'm using Mapsource with the "track" feature to save off my routes as a GPX file before I upload them to the website. So, the scenario I'm noticing is that if I create a route with close-to-five-hundred points, it seems that Groundspeak will effectively "average" those points down to something closer to fifty points. Unfortunately, over a large distance with twisty/turn'y areas and a rather narrow band of search, it seems those are often/possibly very different results (at least as far as my path taken through those points is concerned). Just this last weekend, for example, our "caches within a half mile" along a few hundred mile route ended up off by a couple of miles -- which, of course, we weren't about to change our route to go chase. And, of course, this in turn makes the "caches along a route" less useful (to us) as we most often use it in the exact scenario, above... usually long drives where we want a "quick on/off a highway" (or rest stop caches) where we're not driving any further than we might otherwise to stop for gas. Yeah, I concede there are still a decent number of caches that fall into the "acceptable parameters" I described, but there are still patches where watching the caches fly by a decent distance away is pretty... well... silly. Apologies, as I meant to post this a few weeks ago when I first noticed it, but managed to not do it (and I know this was previously working a bit differently than above)... but I just got bit by this again and wanted to see if there's possibly some feedback... Regards, Russell
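     To make the "averaging" complaint a bit more concrete, here's a purely synthetic sketch (not Groundspeak's actual code or data): thin a wiggly ~500-point track down to every tenth point and measure how far the dropped points end up from the straight segments that remain. On a twisty road, that leftover gap is exactly what pushes caches outside the search band:

        import math

        track = [(i * 0.2, math.sin(i * 0.7)) for i in range(500)]  # fake twisty route
        thinned = track[::10] + [track[-1]]                         # ~50 points kept

        def point_to_segment(p, a, b):
            # Distance from point p to the segment a-b.
            (px, py), (ax, ay), (bx, by) = p, a, b
            dx, dy = bx - ax, by - ay
            if dx == 0 and dy == 0:
                return math.hypot(px - ax, py - ay)
            t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
            return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

        worst = max(min(point_to_segment(p, a, b) for a, b in zip(thinned, thinned[1:]))
                    for p in track)
        print("Largest gap between original track and thinned route:", round(worst, 2))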
  22. SSL ain't going to do squat to protect you from a keystroke logger on the library computer. That's true, but SSL will protect your session if the client is trusted but the network you're on isn't (hotel, cybercafe, freebie wifi on the metro train or airport, etc). SSL just seems like a basic precaution that should be in place to protect our accounts from a trivial method of compromising them. Technically, even that's not really completely/always true... there are still a few other (albeit currently less common) attacks that may allow someone to compromise or snoop on a rudimentary SSL session. (though it certainly blocks out a large percentage of the scr1pt k1dd1ez) But overall, I agree... SSL would be a nice addition, even if they start with a self-signed certificate for now (though GoDaddy and a few others have some pretty cheap ones out there with a CA that should be integrated into most modern browsers, by now).
  23. I think that almost goes without saying when mentioning any sort of "feature request" here... you often need to "don the flame-retardant suit."
  24. I agree... it would be nice to add a "nano" size -- which I personally define as anything small enough that it can be a trick to get the log back in correctly (especially the first few). Seems that there are so many "nanos masquerading as micros" in my main area that I've become more accustomed to searching for those... then get rather surprised at how "big" a "micro" or even "small" container can be elsewhere.
  25. It's been very slow for me lately, and I've seen a lot of database timeouts, particularly on the forums.