Volvo Man

Everything posted by Volvo Man

  1. The thread is about un-published caches which have at one time or another been published before becoming "un-published". This plays havoc with refreshing databases via the API, as the data GC sends out on an API refresh of them is not valid XML. It's got nothing to do with whether they are Earthcaches or not, merely the dubious practice of un-publishing a cache that has been logged in the past; according to correct process, the cache should have been archived, not un-published. And, FYI, GC8D0D is an unpublished cache that has previously been logged, but the CO was banned from GC.com for some reason (he no longer shows as banned, but he did several years ago).
  2. So, it would appear that "unpublishing" is actually a way of performing a cover-up when an error has been made. That's going to be an utter pain, weeding all those covered-up earthcaches out of my DB; I'll have to check each one manually online.
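As a rough illustration of the workaround (plain Python, nothing to do with GSAK's or the API's actual internals - the response structure here is just a placeholder), a refresh routine can separate the records that parse from the ones that don't, and set the bad ones aside for a manual check online:

    import xml.etree.ElementTree as ET

    def split_refresh_response(raw_responses):
        """Split an API refresh into parsable cache records and the
        listings whose data comes back as invalid XML, so one bad
        record doesn't sink the whole batch.
        raw_responses: {gc_code: xml_string} - a made-up structure
        for the example, not the real API payload."""
        good, suspect = {}, []
        for gc_code, xml_text in raw_responses.items():
            try:
                good[gc_code] = ET.fromstring(xml_text)
            except ET.ParseError:
                # Probably one of the de-published listings; flag it for
                # a manual look rather than retrying and burning credits.
                suspect.append(gc_code)
        return good, suspect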
  3. All of Oregon has 27,382 caches as of 12:01am 06Nov2011. Of those, 1,091 are PMO caches. However, the Portland Metro area, consisting of Multnomah, Washington, and Clackamas counties, has 6,605 caches, 911 of them PMOs. Just a note of caution on those figures (I don't have all of Oregon in my GSAK database so I can't double check): the number seems a little on the low side from my experience in southern Oregon & northern CA (State of Jefferson). Whilst GSAK has a PMO field for the cache, this will only be updated when you refresh it with data via the API; this info doesn't get included with PQs. To ensure you have an accurate figure, run a Geocaching.com Access Status Check on the whole database. My State of Jefferson GSAK database has PMO diamonds shown on around 20% of entries; I'll check the figure later when I've run a status update on it, as it's not been done for a couple of weeks.
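For what it's worth, a quick back-of-the-envelope check on those figures (trivial Python, using only the counts quoted above) shows why they look low next to the roughly 20% I'm seeing:

    oregon_pmo = 1091 / 27382     # about 4% PMO statewide
    portland_pmo = 911 / 6605     # about 14% PMO in the three metro counties
    print(f"{oregon_pmo:.1%} statewide, {portland_pmo:.1%} in the Portland metro area")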
  4. Just an additional note to reveal something that I don't think many are aware of, and PMO placers should know: if you do a Get Geocaches Lite search via any API-enabled software, you can download up to 10,000 caches a day without a PM account, but only get .LOC standard results. Included in these results are the actual locations of PMO caches!
  5. You're one month late for Zombie Awareness Month, you know. Actually, this resurrection's more like a vampire, and it needs to be careful, as geocachers know all the best places to obtain a Wooden Stake.
  6. To be honest, I don't think they really had much competition to worry about from "Go Cola"
  7. I'll be honest, I didn't read the whole thread. Why? Because for the large part after the first page, I could predict what would come up, and scanning through, I was right. For the record:
1: The OP speaks for me (except the "if there must be a physical then there can be an offset" part - that's called geotrash, and shame on TPTB for any advocacy of it. If the cache container location isn't the point of doing the cache, don't place it; that was the WHOLE POINT of virtuals: worthy places where you couldn't hide a container).
2: I could swear there used to be a forum vote capability here, but not anymore, otherwise we actually could take a vote on this.
Quality of caches is an entirely separate issue to type; there are dire examples of bad caches in every cache category, and I have to say, a lot of these are multis that are all great fun doing the clues, but there's just no good place to put the container, so when you've done all the clues (and completed the point of the cache), the container's gone, again! So:
The argument about quality - INVALID, applies to all types equally.
The arguments about maintenance - INVALID, applies to all types (but "holiday virtuals" should not return).
The arguments about the flak the reviewers got - INVALID, applies to all types. Simple answer: let the members decide. Give virtuals a vote system; if a virt drops below, say, -2, archive it (write a macro to do it - no reviewer to bitch at then).
Arguments about changes to location affecting the cache - INVALID, applies to all types, particularly the type of changes made with a chainsaw.
Arguments from those that don't want to see virts return - INVALID, don't do them then. I don't really like multis that much, so I don't do them; I've never advocated their removal from the game.
Challenges are the replacement - INVALID. How does being taken to the most incredible volcanic lake I've ever seen, in or out of a book, compare to visiting caches no-one has been to for 6 months (there's probably a reason for that) then having to go to Australia to find a tupperware box??? Totally different concepts. Challenges are the replacement for COs who want to torment people with ludicrous ALRs (which used to apply to most types).
GC doesn't want to so they won't - INVALID. That's what everybody said about increasing cache counts in PQs.
Caches have to have a physical log to sign - WHY?? And BTW, how exactly do I sign a webcam?
Virts were one of the first additional cache types, before all the other stuff that doesn't seem to be all that popular, but virts have maintained their popularity (those that are left anyway) and have the most vocal and loyal fanbase of any kind of cache. I really do not hear anyone standing up for 35mm plastic film containers anywhere near as consistently as fans of virtuals; that's gotta tell you something. BTW, I <3 Micros too.
Someone said "let's have a vote"; I second that motion, and that's usually good enough to go to a vote in the free world. Let's take that vote and all agree to abide by the majority consensus.
Here's a good idea, GC should like this one: I'll pay them for the ability to post a virtual. What's more, I'll pay them on a per-cache basis! Here's my proposal: I'll pay $5 per virtual I submit that is approved, on the condition that I get a refund if it's archived by a reviewer within 24 months. There you go, money on the table, right where my mouth is!
My logic in this: I'll save that much on the pointless container for the offset, and on the return visits to replace it when a squirrel thinks it's a tasty new treat for the winter. BTW, I <3 Squirrels too.
  8. I remember this site back in the day, and I also remember ignoring it as it was kinda empty. I just checked a few caches in my local area, and the obvious cross-listings even have the same ID code! (Except change GC for OX.) Now that's brazen.
  9. I also receive a small commission for resurrecting threads that have faltered and been forgotten long ago. I've recently heard a tale of a guy who makes a good living out of his geocaching-related activities, although it's a very shady tale. It was relayed to me by an acquaintance of mine. Apparently what this guy has done is to get hold of a ton of information about geocaches. He then tempts people to try them out: "go on, what's the harm", "all the cool kids are doing it", "first one's free". He continues to supply free information until the people are addicted. Then, he carefully lets them know that what he's been giving them is only a taster, you know, just the entry level stuff. Then he tells them all about the "special stuff" he keeps in back. The people are all excited 'cause their cravings are getting stronger and what they've been getting just doesn't seem to be enough anymore. So, then he tells them, you can have as much as you can handle, any time you want, but, and here's the rub, they gotta pay him for it. Of course, by this point, the helpless peeps are so desperate for their next fix, they can't type in their 16+4+3 digits fast enough! This sordid tale doesn't end here, oh no. Once these poor helpless addicts are hooked and receiving their regular fix, they find they need to start paying another guy, who doesn't come from all that far away from the first, for updates and enhancements to their computer systems and gadgets to keep the fixes coming. This second guy (known only as Willie Fences) is so sneaky, he's even got a slice of the action from the competition, a bunch of fruit growers who make their money selling eye-related stuff (I did hear that even Forrest Gump gets a slice of this action). So now, I hear that the first guy is starting to drop hints about something stronger to satisfy the real addicts. It's got something to do with Platinum, but that's all I've heard so far..........
  10. Like (although to determine who spends too much time here, just divide the number of posts under their avatar by the number of days since they joined, in your case 1.636585365853659 posts per day - that's quite reasonable, I've seen some non-mods who average over 20 posts a day!)
  11. I used to do the TFTC log thing, but since this, along with cache quality, has been such a hot topic for a while, I am now making the effort to write unique logs for each cache if they are of a high quality, by way of thanking the CO for not placing random film cans in the woods. Unique log writing is a lot easier now that the API has come out; as I have always taken a laptop with GSAK with me, I now write each log as soon as I get back to the car, then just publish them all with one click when I get back to WiFi. This not only spreads out the effort to do unique logs and saves the time of accessing each GC page individually, the cache is still fresh in memory and so it's easier to think of something interesting, or at least unique, to write. I recently did a short run of micro to mid-sized caches along a path; even the micros were really good quality and all sizes were appropriate to their locations. The locations too were mostly worthy of placing a cache; each had at the very least a view, and most had a feature to bring to your attention. In return, when I got back to the car, I made the effort to make each log different to the last, even if it was saying the same thing in effect. I also made the effort to mention how nice it was to see such good quality hides, which take me back to when I started caching. Ironically of course, the worst quality caches also get a unique log, too often along the lines of "I'm pretty sure this is in someone's garden, and from the lack of maintenance I don't think it's the CO's" (whilst I was slightly more diplomatic, I actually wrote a log that basically said this last week). Back on topic though, I vote for more geocaching features being added to Facebook. I'd love to be able to run a daily PQ on FB to only get the stuff I want, and eliminate all the games invitations, reposts, shared photos etc. that some of my "friends" post.
  12. A useful extension to this request would be to include the total number of logs (all types) for a cache in a lite request, or even in one without daily limits (like Get Status). Users of software like GSAK could then write their update macros to only request x number of logs to refresh the cache data, instead of having to guess or do it pokemon style every time. This would result in a massive drop in log requests to the API. My own GSAK log refresh macros are all written to space out the requests to avoid hitting the hourly limit anyway, so it'd be no great shakes to check the status first; it would also save me from wasting those valuable calls on caches that are already up to date. It would also be great to be able to request logs by specifying the range you want, with number 1 being the first log (usually the publish) and so on up to the latest number (which would be an easy way to get the number for my first suggestion above). Then, if you know you have the latest 30 logs from a Get Cache Data, and there are 45 total for instance, your macro would ask for logs in the range 1-15 inclusive. That's a pretty big saving in bandwidth and data collection; it would be interesting to know if it would be a viable tradeoff against any potential extra server processing that would be needed. From the scope of the different efficiency improvements in both bandwidth and server output data, I believe that the tradeoff would be in GC's overall benefit where cost is concerned and the users' benefit where speed and query efficiency are concerned. At the moment, the API is very much a "brute force" approach: even with the most targeted of requests, other than the simple single cache status request, a lot of redundant data changes hands. Whilst this is probably necessary for things such as Refresh Cache Data, I'm betting the logs already have sub-table unique IDs which could be used to identify them individually. Including the log identifier in the data in a PQ or Get Logs would enable already received logs to be filtered out of the request for other logs, thereby eliminating redundant data transfer. At the moment, if I get the logs for a representative group of caches in my local radius, I'm seeing around 30-40% efficiency on Get All Logs (that's 60-70% redundant data) on average, to as bad as 5% efficiency (95% redundant) on recently placed caches (in the last 3 months). I believe that being able to more effectively target requests could see efficiency (in targeted requests) rise to 95% or better.
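Just to make the 30-of-45 example concrete, here's a rough sketch (plain Python, and it assumes a log-range request facility the API doesn't currently offer) of the sum a macro could do client-side: compare the server's total log count with the log numbers already held, and ask only for the gaps:

    def missing_log_ranges(total_logs, stored_numbers):
        """Return the contiguous ranges of log numbers (1 = publish log)
        that are not yet held locally and still need to be requested."""
        stored = set(stored_numbers)
        ranges, start = [], None
        for n in range(1, total_logs + 1):
            if n not in stored and start is None:
                start = n
            elif n in stored and start is not None:
                ranges.append((start, n - 1))
                start = None
        if start is not None:
            ranges.append((start, total_logs))
        return ranges

    # The example above: the latest 30 of 45 logs are already stored,
    # so only logs 1-15 would need to be fetched.
    print(missing_log_ranges(45, range(16, 46)))   # [(1, 15)]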
  13. From the checking I've done, and the replies above, I can only conclude that the best thing to do (unless GC are willing to add another field to the status check, Published=True/False) is for all "retracted" listings to first be archived. At least that way the status check won't keep sticking them back into live databases, and they'd be real easy to filter out of Refresh & Get Logs requests.
  14. A Further Example: GC8D0D - placed by a cacher known to have been banned, found by me, remains on my found count, but not on my public found list
  15. I'll cross-post for Clyde, but the cache I listed as an example has 7 "found it" logs in my GSAK DB, and was updated via PQ over a month after it was first published. I've also got a couple of these in my find count; in one case, it was placed by a user who was subsequently banned. The stats still show in my count, but the cache pages come up unpublished when you click the link.
  16. Ever since the API program came out, I've been having a whale of a time using the rich feature set to make my offline DB so much more useful, and my on-the-day caching activities much simpler (thank you so much GC & GSAK). On the flip side, the API has picked up some irregularities with some of the caches I have previously had delivered via PQ. The oddest of these is the "unpublished" caches that at some time or other in the past have been published and found (in some cases found by me), but for some reason, instead of being archived, they have been "de-published". The API status check of these caches indicates them as Temp Disabled, so they are not picked up by the various things that I do to weed out archived listings from the list. When I come to do a Refresh Cache Data via the API, if the anomalous caches are in the list, they cause an error as the response is unrecognized, and I waste a whole bunch of data for that query (a waste both in terms of my API access credits and GC.com's server bandwidth). An example cache listing is GC1C5QB. This all results in the following questions:
1: Why were these caches "de-published" instead of being archived as normal?
2: Does the API status check response include whether a cache is published or not? If so, I will take this topic to Clyde at GSAK too. If not, please could it?
3: Why does the status check acknowledge their existence, but the other API commands do not?
  17. No matter how many PMO caches are placed, the rate of placement of regulars is still huge. Back in the old days the rate of cache placement was so low that a PQ of caches placed in the last year would give me over a 100 mile radius; now it's down to single-digit miles, and even if half of them are PMOs, that's still 5 times the caches placed. In fact, in the UK for instance (and I know the US is faster), the cache placement rate now is 73 times higher than it was in 2003. That still leaves plenty of room for non premium members.
  18. I'd be surprised if this hasn't come up before, but I think it would be quite useful to make a few changes to the way in which PQs are managed. As we are now allowed 1000 PQs in the list (many thanks GC), and given the incredible rate of cache placement in many places (the UK now sees up to 200 placed a day), I find I have a very large number of PQs on the list in order to keep things up to date. This has however shown up a couple of issues in navigating the PQ list. I therefore propose a few changes that would make our lives a little easier and also reduce the load on the GC.com web server:
1: Persistent grid with a "submit changes" button. Rather than having to uncheck/check each PQ one at a time, submitting the change to GC and downloading a new page with every click, then scrolling all through the list to find the next one to click, the grid should allow you to make your changes without refreshing; then, when you are satisfied with your changes, click a "save changes" button which submits the new grid layout. Also with this, if you've used your PQs for today, you should be able to set your ticks for that day, with the PQs running next week - this can be done with a bit of a workaround already, but it would be nice to do it in the grid. This change would save a massive amount of page requests to the GC server: if I want to change my PQs for the week, it currently requires 70 page refreshes. Not a huge load on my desktop with fast broadband, but a nightmare on a tablet running from mobile data; this way it could be just 1 or 2, and my scroll wheel would be in your debt forever.
2: "Clear all ticks" button. A simple button to clear all the day-of-the-week ticks from the grid in one go. This would be handy with the above change, but would be invaluable without it, and I would imagine pretty easy to implement fairly quickly.
3: PQs reset at midnight local time (rather than PST), as set by the member's home co-ords. This isn't a huge deal for me, as midnight PST is 8am my time and I've generally got what I need the night before anyway, but I think it would ease the load on the server by spreading out PQs throughout the day. I'm also guessing this would mean GC would have to limit the number of times you can change your home timezone in a month to stop people trying workarounds. With this, perhaps it would also be possible for people to choose a reset time that works best for them.
  19. I shall take the risk of being "Markwelled" and disagree with you. As a cacher since 2003 (OK, so I've been inactive a bit through that time), long before the "wow" factor rule, I found many virtuals which were awesome, and none which took me to a dumpster behind Wal-Mart. I do agree with the "could you place a physical cache here" requirement, although I did find that with most of the early virtuals I visited which could have accommodated a physical cache, it would have been purely for the sake of it and not the point of going there; they would also have been subject to disappearance and DNFs, even though you'd accomplished the whole point of the hunt anyway (the location/object etc.). On the point of reviewers' difficulties, it has always been fairly easy for anyone with internet access to determine if a location is in a national park or some such place where physical caches may not be allowed or appropriate; this is even easier now with Google Maps & satellite view. As for the wow factor, why not leave that up to us? Thanks to the wonders of logging and rating, there's nothing stopping TPTB from releasing a thumbs up or down voting ability (kinda like Yahoo Answers); then the first 20 or so visitors could decide if it's a worthy cache or not. (OOPS, I think I've opened a can of worms.) As for COs getting their "feathers ruffled", sorry, that's just a fact of life; it's going to happen no matter what you are doing or where you're doing it, someone somewhere ain't gonna like it. Basically, if you can't handle your ideas being criticised to death, don't tell anyone about them, and sure as all hell, don't put them online. So, to me, TPTB commenting about the hurt feelings of rejected COs being a factor in the end of virtuals is just plainly ridiculous for anyone that does anything even remotely connected with the internet. (This whole part of the discussion is well covered by rules 11, 12, 13, 18 & 57.) In fact, the whole virtual cache argument, along with most things on GC.com, is pretty well covered by Rules 18 & 57; I'm just glad they've managed to keep the proof of Rules 34 & 61 off the site. Don't forget, live by Rule 60 and you'll be OK. PS - I personally vote for the return of virtuals, under the suggestions I've listed above.
  20. So, to add my 2¢ worth to this age-old discussion, I'd like to see micros become a "type", with their own size range - large, medium, small, film can, nano - and maybe even some word to be used for plain-sight camo caches, possibly one of the most inventive variants I have seen. I also subscribe to the view that all the non-traditional types that have micro containers should stay as they are. It's all about the experience of the hunt; I never hunt a mystery looking for a place to drop a TB, for instance - those are about the mystery, not the size of the container you get to write your log in at the end. I'll hunt any kind of cache, but it's often about what mood I'm in on the day. As has been said before, technically even a 5 gallon bucket sized cache could be a micro-style cache if it's filled with concrete and only has a film-can-sized space set into it.
I also don't subscribe to all the hate spew about cache types; if you don't like them, don't look for them and leave it at that. I've had some bad experiences with multis - they're often a lot of effort to invest only to find the end cache gone - so I don't often hunt them, but I don't come on the forums complaining about multi-spew, asking for them to be banned. Incidentally, a few years ago I experienced a rash of multi placements in my area which at the time was a lot like what is often described as spew on the forums. So I just went after the trads I was interested in, just had to go a little further afield, no biggie. All the hate spew is really just a form of elitism; a lot more people are getting into the sport now that GPS is being put into every gadget you can think of, and a basic GPSr can be had for under $100. Not everybody these days can afford to spend $20 on an ammo box filled with quality trades. You can place a micro for free, but most I've seen at least had some effort put into a printed logsheet. I've even seen some micros that would have cost close to that $20 figure.
Having micros as a type would enable them to be filtered out of PQs without losing such things as mysteries with a micro at the end etc. Also, from a database load point of view, a micro type would probably reduce the query load when people filter them out (or in). I'm only guessing, but it would make sense that the system eliminates unwanted types from the examined records before going further, as this would be the fastest way to reduce the size of the dataset it's querying. In data manipulation, only a lunatic would design a system, for instance, that searched for all caches that had a log containing the word "yellow", then picked only the results which were mystery caches from that subset. It would always be the optimum case to pick out the mystery caches first, then query that subset for the word yellow (see the toy sketch below).
So, in summary: micros to have their own type with their own size range; only traditionals with micro as the size to be transferred across to this by GC; definition for this new type to be: caches at the location posted, with only space for a logbook and maybe small trades the size of a quarter or smaller (10p, or a euro for those elsewhere), bring your own pen/pencil advised.
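The toy sketch mentioned above - purely an illustration of the filter-ordering point in plain Python (the cache records are invented for the example, and this says nothing about how GC's servers actually work): apply the cheap type filter first, then run the expensive text scan over the much smaller subset.

    def find_yellow_mysteries(caches):
        """Cheap filter first (cache type), expensive scan second (log text)."""
        mysteries = [c for c in caches if c["type"] == "Mystery"]
        return [c for c in mysteries
                if any("yellow" in log.lower() for log in c["logs"])]

    sample = [
        {"code": "GC00001", "type": "Mystery", "logs": ["Found the yellow tag!"]},
        {"code": "GC00002", "type": "Traditional", "logs": ["Lovely yellow flowers here"]},
    ]
    print(find_yellow_mysteries(sample))   # only the mystery cache comes back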
  21. This could easily be done in GSAK, or else you could just set up your ignore list, simply by using the "all caches hidden by user" feature. Seems like a lot of work to ask of GC because of a spat, and I also don't think it's very in keeping with the spirit of geocaching to have such a feature available. I can see a couple of other uses, such as ignoring caches from users who have a penchant for poor-quality hides, but I still don't think it's in the spirit.
  22. If you uncheck your queries and only run them as needed, as suggested, once the time since they were last run exceeds a week, they will run almost instantly when you enable them. That way you won't be using potentially stale data. I have a set of regular PQs which run across a week, but I also have a handful of "on demand" PQs which are not ticked. When I'm going caching, I pick the ones I need and run them either the night before, or when I get up, or both. That way, by the time I'm dressed and have had my coffee, they are ready and processed by GSAK. I also have a macro, adapted from the GSAK library, to download and update my database, sort by type, remove any recent finds, create a set of TomTom POIs & upload them to my TomTom, create a pair of waypoint files for the GPSr & upload them to the card, and finally create a set of S&T pushpins and add them to the map with different symbols. All this takes about 5 minutes to run (a rough outline of the routine is sketched below). Adding a run-monthly option would likely add a large degree of complexity to the current PQ system and user interface; they would have to add a stack of ticks to the page for day of the month etc. If they added a "run on the first of the month" feature or some such, chances are it would drag the server to its knees on that day after a while.
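The routine referred to above is a GSAK macro, so the real thing is written in GSAK's own macro language; purely as a rough outline of the same pipeline, here's the general shape of it in Python (every folder name and helper here is a made-up stand-in, not anything GSAK actually exposes):

    from pathlib import Path

    def load_pq_gpx(folder):
        """Stand-in loader: pretend each downloaded PQ file is one cache record."""
        return [{"code": p.stem, "type": "Traditional", "found_recently": False}
                for p in Path(folder).glob("*.gpx")]

    def export(caches, dest):
        """Stand-in exporter: in reality this step builds TomTom POIs,
        GPSr waypoint files or S&T pushpins and copies them to the device."""
        Path(dest).mkdir(parents=True, exist_ok=True)
        (Path(dest) / "export.txt").write_text("\n".join(c["code"] for c in caches))

    def morning_run(pq_folder="pq_downloads"):
        caches = [c for c in load_pq_gpx(pq_folder) if not c["found_recently"]]
        caches.sort(key=lambda c: c["type"])
        for dest in ("tomtom_pois", "gpsr_waypoints", "st_pushpins"):
            export(caches, dest)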
  23. Baloo&bd, I was answering some anti-offline-database comments that were made prior to this post, so I believe my reasons as they stand are relevant to the discussion. They are also relevant to the limits in areas of high density. Apart from the fact that I have 4 ways on the road of accessing this data (TomTom, GPSr memory card, PDA & laptop), I have in-car power for the laptop, and this is just an argument for offline DBs, nothing more.
Hate to sound like a tech geek, but don't you keep regular backups? (For me, a mirror fileserver & gmail back up all my data, plus there's a full copy on both the laptop and desktop.)
A lot of people are complaining about Friday PQs here, and most of them from the US; add another 8 hours to my inconvenience (see below). Additional PQs/memberships etc. don't solve this directly, they are a kludgy workaround in themselves, but extra revenue from enhanced memberships would pay for new PQ servers, plus you could plan ahead and schedule more on a Thursday, leaving less important PQs for Friday.
Nope, it complements it: if PQs are slow on a Friday, I can only use my offline database, as I won't get the PQs until the afternoon. Even if the PQs are the first to run, I won't be able to get out before 09:00, as PST midnight is 08:00 GMT.
1 week's data in Northern California gives approx. a 200 mile radius at the moment, the bulk being centred around the I5 corridor; this takes you from SF to southern Oregon, easily done in 6 hours or less. If you're headed in the direction of Washington State, forget it, you're going to be passing some 30,000+ caches.
The whole point here is that people are saying that the request for increasing the data limits is only relevant if you are keeping an offline DB. I do not agree - it's partly the opposite, offline DBs reduce the need for larger PQs - although I am putting forward the positives of offline DBs anyway.
I'm staying at my mother-in-law's for a week; it's out of the area set up with my offline DB, so I can't do a lot of caching unless I do something naughty.
Only if you keep an offline DB and repeat your queries (or get an iPhone, or pay $10/MB for data roaming - that's just 2 pages from the full site).
Yeah. Ignored or not, illegal activity is illegal activity; I think we'd better let that drop, eh. There were a few prosecutions last year in the UK under the Data Protection Act for persons caught by the police using unsecured networks; they do not require the owner of the network to be involved in the prosecution, due to the wording of the Act. Punishments included large fines, a little jail time and confiscation of computer equipment; for me this would also mean the loss of my job. Also, I have never seen a free WiFi hotspot advertised in the UK unless it was of the nature "customers only".
That's great, got one, can't use it in the US without filing for bankruptcy when I get home.
It must be nice for those mega-cachers not to have to work for a living (do the math, they can't possibly have time to work). If they can afford not to work, they have plenty of time to work within the limits and plan ahead. Also, they've cleared out the majority of their local caches, so their first PQ probably takes them out to a 100+ mile radius. Hopefully they don't decide to run the finds query on a Friday and take down the PQ machine. For me, getting home from work 15 hours after I left in the morning, I really don't need to sit down and plan anything; I'd rather let my automated macros do it and grab my gear on my first day off.
Also, see my previous notes about PQ data sharing: it does happen, and the more cachers you know, the more likely you are to have the opportunity to join such syndicates.
Someone mentioned the phrase "anal-retentive cache historians" in regard to how some people appear to see offline database keepers, which gave me an idea: if I could get my finds up to 500/month, I could start with the first 500 placed in the UK, then the next, and so on. During the winter months I'll outstrip the placing and I'll be able to use my PQs more efficiently; I could even try to do them in order of placement. On second thoughts, a second PM will be cheaper than all the extra driving, which I'll be signing up for this week.
My final point: yes, I do want more data for a larger offline DB, because I only want to get involved with planning stuff once a month or so, not every time I feel like going caching, and the extent of planning I want to do is re-arranging my PQs for efficiency.
(Edit Note: Yup, I know I've made a cock-up somewhere on the quotes, but at this time of night, the word quote and its spelling has lost all meaning to me, as has the difference between back and forward slashes and the ability to count past 2, so if anyone would like to point out which tag I've gotten wrong, I'd be happy to change it.)
  24. There are many reasons for keeping an offline database; here are some of mine:
If my internet connection goes down, I can still cache (happens often when the ISP is upgrading).
If GC.com is experiencing problems with the site, I can still go caching.
If the PQ generator is slow, I can still cache.
If I want to leave the house before 09:00 (the time the PQs arrive in the UK), I can still cache.
If I go on a roadtrip that's longer than 200 miles, I can still cache.
If I'm on a roadtrip and the motel doesn't have internet, I can still cache.
If my brother-in-law cancelled my mother-in-law's internet, I can still cache (happened this week).
If I get stuck on a cache, I get more hints from the log history.
I get to play with GSAK macros to automate everything.
One point on wireless hotspots: unsecured networks may be fair game in the US, but it is illegal to use them in the UK except where they are advertised as free wireless hotspots (very rare). I have recently looked into getting an iPhone, and found that my network does not support them, and I know I cannot get a contract with another network thanks to the wonders of credit checks. As I said in a previous post, if the app gets ported to Nokia's answer to the iPhone, I'll possibly sign up. The great thing about the Nokia is that it is Windows Mobile based and therefore would cover a lot more phones than the iPhone app.
  25. I've just read my way through the last few days' posts that I've missed while travelling. Some really creative thinking going on here, and I'm starting to see some support from those who were initially against. This is totally what the forum is about: discussing a suggestion and shaping a final solution. Even if that doesn't always work, when it does, it's worth the effort; even if TPTB don't implement the solution, a few more people are singing from the same songsheet. I've said all along I'd be happy to pay pro rata for the data. I'd also be pleased to see improvements such as stronger updated-data queries; particularly needed there is for the results to be sortable in reverse order. For instance, if the last 2,500 update results for your area go back 5 days, you know you need to run that query every 4 days or so to ensure you catch all the updates. The problem with the current method is that the results are sorted by a different index, and if your query overflows the limit, you don't get all the results in, say, the last 24 hours, but get some from each of the last 7 days. With regard to the iPhone service and Mtn-Man's comments, I would be pretty annoyed if I went out and got a new iPhone specifically because the GC app was available, then support was pulled because of low uptake. This is one of the reasons I will not be getting an iPhone unless I can find another way to justify it, or the app is ported to a Nokia equivalent, as like many Europeans, I always buy Nokia phones as they are simply the most rounded and reliable phone range on the market (as a generalisation on the whole range).