
Groundspeak Downtime


CoyoteRed


This has been yet another downtime on the eve of a major weekend. Yeah, I know: through no fault of Groundspeak's. Nevertheless, it left a bunch of folks who pull spontaneous PQs in the dark. It was a Friday, and if they were leaving Friday afternoon, they were just SOL.

 

Is this yet another stark reminder that we can't always rely on our data getting to us when we want it if we wait until the last second to get it? This time a fire shut down the whole server farm. Last time it was the unfortunate timing of an upgrade. Previous times it was simple site overloads. Sometimes it's glitches in the DNS servers or routing. For some folks on various weekends, it's glitches and hiccups closer to home, like a simple internet outage. Those of us who maintain OLDBs (Off Line Data Bases) are much less susceptible to such outages, but for those who are counting on last-minute PQs to be ready for processing before heading out on a weekend of geocaching...

 

Is it time for Groundspeak to rethink its position on keeping OLDBs and provide the tools so that those of us who already maintain them, and the folks who would if given the tools, could do so guilt-free?

Link to comment

I'm not sure if OLDBs would be the answer.

Granted, PQs don't come seconds after you request them; you need to plan ahead. If you know you are going away for the weekend and leaving on Friday, you should have requested a PQ on Thursday.

 

I personally run a PQ for each of the 4 states I live in/near on Thursday of every week.

Link to comment

I am a guilt-free maintainer of a local database. I was able to continue final planning for a 2-week trip starting Monday even though GC was down. (I was getting a little worried I wouldn't be able to download current information today.)

 

But I don't think this extraordinary event is the "club" to use to convince TPTB that changes should be made to make my task easier.

Link to comment
I am just glad they are back up and running with what was really minimal downtime. I don't see this as a reason to change the whole PQ model.

 

I do. No reason larger amounts of data cannot be downloaded daily from the site.

 

Bandwidth and disk storage are dirt cheap these days. Keep the current system for those who want it. Allow direct DLs for those of us who prefer a more efficient means of keeping an up-to-date OLDB.

Link to comment
Bandwidth and disk storage are dirt cheap these days.

That's true to the extent that you mean the same amount of bandwidth and disk storage which an individual would have been expected to consume 4 or 5 years ago. But if the overall total cost were falling, companies like EMC and Cisco would be going out of business. The cost of "enough bandwidth and storage to run a site growing at over 40% per year" is not falling at any great rate.
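To put rough numbers on that (purely illustrative: the 40% growth figure comes from the paragraph above, while the 25% yearly drop in unit cost is just an assumed figure, not anyone's actual pricing):

```python
# Illustrative only: assumed figures, not Groundspeak's actual costs.
GROWTH = 1.40          # data served/stored grows ~40% per year (from the post)
UNIT_COST_DROP = 0.75  # assume the unit cost of bandwidth/storage falls 25% per year

usage, unit_cost = 1.0, 1.0   # both normalized to year 0
for year in range(1, 6):
    usage *= GROWTH
    unit_cost *= UNIT_COST_DROP
    print(f"Year {year}: relative total cost = {usage * unit_cost:.2f}")

# 1.40 * 0.75 = 1.05, so the total bill creeps up ~5% a year even while
# the per-gigabyte price keeps falling.
```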

Allow direct DL's for us who prefer a more efficient means of keeping an up to date OLDB.

That's probably about 1% or 2% of members. But there are also several people in the shadows who would love a chance to resell or otherwise redistribute the site's data at a better rate than $30/year. Making life easier for them is not likely to be on Groundspeak's agenda any time soon.

Link to comment

But I don't think this extraordinary event is the "club" to use to convince TPTB that changes should be made to make my task easier.

 

If this is not it, then there is none. This is a no-brainer. The old reasons no longer apply (geoJr). It is high time for direct DLs and an increase in the data amount to ensure accuracy. Some of us in our online groups have members who cover some distance. So, having a larger OLDB is advantageous in helping each other.

Link to comment

But I don't think this extraordinary event is the "club" to use to convince TPTB that changes should be made to make my task easier.

 

If this is not it, then there is none. This is a no-brainer. The old reasons no longer apply (geoJr). It is high time for direct DLs and an increase in the data amount to ensure accuracy. Some of us in our online groups have members who cover some distance. So, having a larger OLDB is advantageous in helping each other.

 

If this were mission-critical data, maybe, but it's just a hobby, for crying out loud!

Link to comment
Bandwidth and disk storage are dirt cheap these days.

That's true to the extent that you mean the same amount of bandwidth and disk storage which an individual would have been expected to consume 4 or 5 years ago. But if the overall total cost were falling, companies like EMC and Cisco would be going out of business. The cost of "enough bandwidth and storage to run a site growing at over 40% per year" is not falling at any great rate.

 

I will have to see if I can get some hard data on this from a good friend of mine who is a Cisco engineer.

 

That's probably about 1% or 2% of members. But there are also several people in the shadows who would love a chance to resell or otherwise redistribute the site's data at a better rate than $30/year. Making life easier for them is not likely to be on Groundspeak's agenda any time soon.

 

That is something you could never guess. You have no idea what the rest of us here do, let alone what we would do in the future. The last part above is simply scare tactics - Al Gore at his best. The authorities are there for those who steal. So, try to scare me away - NO, for that is no reason not to do it. The means are in place to prevent such reselling or redistribution. :D

Edited by Frank Broughton
Link to comment

But I don't think this extraordinary event is the "club" to use to convince TPTB that changes should be made to make my task easier.

 

If this is not it, then there is none. This is a no-brainer. The old reasons no longer apply (geoJr). It is high time for direct DLs and an increase in the data amount to ensure accuracy. Some of us in our online groups have members who cover some distance. So, having a larger OLDB is advantageous in helping each other.

 

If this were mission-critical data, maybe, but it's just a hobby, for crying out loud!

 

Hmmmm, excuse me - this is a professional company (as we are told so often around here). So you need to cry up another river with that argument.

 

Here we go again - people saying no for no reason other than because they can. This is getting so old! Leave those of us who want to see innovation and progress alone - you status quo people do not need to come along with us. :D

Link to comment

Sure we were inconvenienced. I was inconvenienced.

 

But it would take light from "inconvenienced" three years to reach where Jeremy was at, standing outside the server farm at 3 in the morning waiting to get in, and a whole lot of work behind the scenes to restore the site. All ....just so we can grab our little GPS units, hoist our backpacks, and run out to the bush to find tupperware hidden under bark.

 

Let's keep it in perspective, and give him a break :D

Link to comment

I am just glad they are back up and running with what was really minimal downtime. I don't see this as a reason to change the whole PQ model.

 

I agree - and I think kudos are deserved for the team who were likely up 24 hours straight to get us all back to our hobby of getting out of the house and searching for caches. Considering the amount of time I spend on geocaching.com and how much I rely on it I think $30/year is a great deal. So - good job to the Groundspeak folks and I'm sure glad we're all back in business. Now it's time to get out the door and find some caches!

Link to comment

 

Here we go again - people saying no for no reason other than because they can. This is getting so old! Leave those of us who want to see innovation and progress alone - you status quo people do not need to come along with us. :D

 

OK, that's not me. I don't think everything GS does is right. I'd like more than 40 PQs, a flexible counting of 5x7x500 caches in PQs during the week, an increased number of caches/week, an ability to select "archived in last week/month".

 

But this event doesn't really argue for any of those changes.

Link to comment
Sure we were inconvenienced. I was inconvenienced.

 

But it would take light from "inconvenienced" three years to reach where Jeremy was at, standing outside the server farm at 3 in the morning waiting to get in, and a whole lot of work behind the scenes to restore the site. All ....just so we can grab our little GPS units, hoist our backpacks, and run out to the bush to find tupperware hidden under bark.

 

Let's keep it in perspective, and give him a break :D

 

I am not griping about GS and the efforts that the CEO and his main man made yesterday. So, if this is in reference to me, I want to make that clear. Just because people feel strongly about something does not mean they do not appreciate what they have and the efforts that have brought us this far. What I have now works, but it also causes more work for the servers. There are better ways, and that is what CR and I have been yapping about for some time now.

 

I am sure he, as I do, much appreciates the efforts made to get the server back up and online. Truth be told, there was not much that had to be done. Once the power came back on, the individual boxes needed to be powered up. Yes, Elias did (I assume - that dangerous word) some checking on the integrity of the data and HD stability after that. But it was rather routine in terms of what it took for GS to get this place back up and running. The DC people and the electrical providers, well, that is a different story. They went through a nasty day.

 

Yes, it was a very long day for Jeremy - thanks, Jeremy. Yes, Elias went above the call of duty and returned from vacation in case the servers needed any maintenance. Thank goodness they all shut down properly and apparently no disk errors were present.

 

So lest I be accused of being an ogre - thank you, Jeremy, for the updates via Twitter and your blog. Thank you, Elias, for being there when needed. That was awesome of you, and I was proud of you when I read about it via a Twitter update.

 

Happy 4th all!

Link to comment

I'm ecstatic that the thing is back up. Billion dollar satellites for me to use and I pay the cost of a good lunch to enjoy the caching experience 24/7.

 

Huge thank you to the Seattle staff who brought it back up so quickly.

 

Heading out the door.......

Link to comment

I too use GSAK to maintain an offline DB. I keep all the caches within 150 km of my house and as of today, that is about 5000.

 

The current PQ process allows me to easily update that twice a week AND keep the most recently published caches updated daily.

 

If I chose to update less frequently, say weekly, I could have close to 10,000 caches in an offline DB. That is quite a few.

 

I have no problem with the current amount of data. It would be nice if there was a bit more flexibility in filtering it.

 

As for yesterday, it was an Act of God and has no relevance to any argument for more data.

 

While geocaching is a 24x7 sport, it is NOT mission critical. No one is gonna die or somehow be harmed if we don't get real time data for 24 - 36 hours.

Link to comment
While geocaching is a 24x7 sport, it is NOT mission critical. No one is gonna die or somehow be harmed if we don't get real time data for 24 - 36 hours.

Worth repeating.

 

I do not disagree, but that is a different argument than the PQ issue. The PQs can be changed. No one can stop a fire.

 

Something to be said for PREPARATION! If you don't wait until the last minute, no harm no foul should something (like yesterday) come up. As for me, I have plenty of caches stored on my unit and still more in my TOPO 8, which will hold me for months to come should I need them!

 

Maybe those harping for a backup should pony up extra money, since the rest of us don't need to come along? I'm assuming the backup won't be free or cheap?

Link to comment

Yes, there is something to be said about preparation. Those who planned their weekend on Thursday got their PQs, but what would have happened if the site or your internet went down on Thursday?

 

My preparation includes running most of my PQs earlier in the week and only a differential on Friday. The PQs are scheduled for two reasons: they'll run earlier than a manually run query, and they'll be there without any interaction from me. So even though those differential PQs didn't run on Friday, I had the PQs from Tuesday. I was ready to go.

 

Before anyone starts jumping on me for wasting bandwidth and server time, that's the point of this thread. IF the PQ scheme were geared more towards folks keeping OLDBs, then a lot of the waste would go away. I've outlined in other threads a scheme that would reduce server load by something in the neighborhood of 95%. (!) I've yet to see anyone who knows what he's talking about in terms of website and database design dispute the logic.

 

The primary argument against differential PQs and OLDBs is the notion that folks would maintain an OLDB larger than what is "allowed." First, I maintain an OLDB of around 5000-6000 caches, and that includes owned and found caches, neither of which I download on a regular basis. The most prolific local cacher, the last time we spoke on the subject, said he keeps his around 11,000. The "allowed" theoretical maximum is 17,500. The practical limit is somewhat less.

 

Getting more than "allowed" can be accomplished now with differentials, but you would be gambling, and losing on a regular basis, that you won't get gaps.

 

A little creative thinking could provide a scheme that would address that issue and still reach the goal of efficient OLDB maintenance and protect against getting more than "allowed" caches. I've got a couple of thoughts on the subject right off the top of my head.
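For the sake of illustration, here is a minimal sketch in Python of the kind of differential update I'm talking about. To be clear, the fetch_caches_changed_since() call is hypothetical - no such feed exists today; that's the whole ask - and the local store is just a dict keyed by GC code in a made-up oldb.json file.

```python
import json
from datetime import datetime, timedelta

# Current PQ ceiling, for reference: 5 PQs/day x 500 caches x 7 days = 17,500/week.
THEORETICAL_WEEKLY_MAX = 5 * 500 * 7

def fetch_caches_changed_since(since):
    """Stand-in for the proposed delta feed: return only caches created,
    edited, or archived after `since`. No such feed exists today."""
    return []   # placeholder so the sketch runs; the real feed is the ask

def update_oldb(path="oldb.json", window_days=7):
    """Merge a differential download into the offline database (OLDB)."""
    try:
        with open(path) as f:
            oldb = json.load(f)             # {gc_code: cache_record}
    except FileNotFoundError:
        oldb = {}

    since = datetime.utcnow() - timedelta(days=window_days)
    for cache in fetch_caches_changed_since(since):
        if cache.get("archived"):
            oldb.pop(cache["gc_code"], None)   # archived: drop from the OLDB
        else:
            oldb[cache["gc_code"]] = cache     # new or edited: insert/refresh

    with open(path, "w") as f:
        json.dump(oldb, f)
    return len(oldb)

if __name__ == "__main__":
    print(update_oldb())
```

The point of the sketch is only that a weekly differential touches a small fraction of the records that a full 17,500-cache refresh pulls every week, which is where the claimed server-load savings come from.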

Link to comment

Another cacher and I were talking about the downtime and he mentioned a mirror set of servers on another continent. A separate but equal set of data that would be available even if something happened in Seattle. I'm not tech-savvy enough to know if being on another continent is needed or if just the other side of the current one would be sufficient. Either way, if there were two or more places the data could be accessed from, would that also help with the other issues, which seem to be related to speed and user volume?

Link to comment

While geocaching is a 24x7 sport, it is NOT mission critical. No one is gonna die or somehow be harmed if we don't get real time data for 24 - 36 hours.

I'll try to keep that in mind... No requests for changes or suggestions for improvements will be considered unless people will die if the changes are not made.

 

"No impending deaths? Then shut up! We don't want to hear it!"

Link to comment

I'm sure certain people will jump all over this, but I have a question for all those posting in numerous threads about the need for redundancy and a backup system so the database never goes off-line again. Do you all have a spare computer at your homes loaded with all the appropriate software and data so that you can keep operating if your primary computer quits? If not, why not?

Link to comment

Another cacher and I were talking about the downtime and he mentioned a mirror set of servers on another continent. A separate but equal set of data that would be available even if something happened in Seattle. I'm not tech-savvy enough to know if being on another continent is needed or if just the other side of the current one would be sufficient. Either way, if there were two or more places the data could be accessed from, would that also help with the other issues, which seem to be related to speed and user volume?

 

My guess is that it can't be wrong to spread the risk to other continents, and MAYBE that will improve the speed and accessibility.

Link to comment
I'll try to keep that in mind... No requests for changes or suggestions for improvements will be considered unless people will die if the changes are not made.

 

"No impending deaths? Then shut up! We don't want to hear it!"

A classic Straw Man argument. Nobody is talking about "no improvements". This is about whether it is worth making a specific, major change to the site's operations which might - if it were to have worked - have saved 29 hours of downtime, the worst outage in the site's 9-year history. Groundspeak has said "it isn't going to happen" and most people here can understand why. Those of us who have actually tried to build and operate redundant data centres can understand even more reasons why - most so-called redundant systems find ways to let you down. (A colleague just discovered that the dual-processor, "keep on running" server on which our accounting system runs, has been running on just one processor for a month, because the alert saying "help, a CPU went down" didn't get through due to some stupid firewall or other filtering issue.)

Link to comment

While geocaching is a 24x7 sport, it is NOT mission critical. No one is gonna die or somehow be harmed if we don't get real time data for 24 - 36 hours.

I'll try to keep that in mind... No requests for changes or suggestions for improvements will be considered unless people will die if the changes are not made.

 

"No impending deaths? Then shut up! We don't want to hear it!"

A classic Straw Man argument. Nobody is talking about "no improvements". This is about whether it is worth making a specific, major change to the site's operations which might - if it were to have worked - have saved 29 hours of downtime, the worst outage in the site's 9-year history. Groundspeak has said "it isn't going to happen" and most people here can understand why. Those of us who have actually tried to build and operate redundant data centres can understand even more reasons why - most so-called redundant systems find ways to let you down. (A colleague just discovered that the dual-processor, "keep on running" server on which our accounting system runs, has been running on just one processor for a month, because the alert saying "help, a CPU went down" didn't get through due to some stupid firewall or other filtering issue.)

See, now those are reasonable-sounding points that are relevant to the original issue, unlike the irritating post that I was responding to. (You left it out of your reply to me, but I have put it back.)

 

I am neither for nor against any of the changes being suggested, since I don't know enough about the technical details to have an informed opinion. I was not responding to the OP, but rather to the maddening habit that some forum posters have when shooting down requests or suggestions or complaints. Responses like:

"No one is going to die if the change isn't made."

"The earth won't stop spinning on its axis if we leave things as they are."

"It's just a hobby; get a life."

"This is the way it's always been done."

"I don't see a need for it, so I don't think they should do it."

"What do you expect for thirty bucks?"

etc.

 

It just sets my teeth on edge when people try to cut off discussion by pointing out the ridiculously obvious: that this is just a hobby, for which there is no "need" at all, and which would kill no one even if Groundspeak were to go out of business tomorrow.

 

Just because it's "only a hobby" doesn't mean that people can't have strong opinions, passionate feelings, and vehement arguments about it.

 

These "no one's gonna die" responses to any discussion here just piss me off -- it doesn't matter which side I agree with, or whether I agree with any side at all -- that kind of response is just pointless and stupid and annoying.

Link to comment

I see no reason not to allow direct downloads as long as they come with similar limits to those currently in place (or a rearrangement of those limits). Groundspeak's primary IP is the listings of the caches. That it has managed to balance data access with data restrictions is a pretty amazing feat and has helped it grow into the locus of online worldwide geocaching activity. That being said, the networld is pretty fickle; consider how many changes to the "preferred" site/system for various web activities have occurred in the last 20 years. Advances in technology and pure innovation make the actual data/IP even more crucial. I'm with ya, Frank (though not as militantly), that GS needs to continue to evolve, but I want to underscore the importance to GS of their proprietary listings database.

 

I'd be in favor of a more direct DL method but also actually really like the email method because I can just have it there on my web based server and DL into the desktop and laptop at various locations and times.

Link to comment

Another cacher and I were talking about the downtime and he mentioned a mirror set of servers on another continent. A separate but equal set of data that would be available even if something happened in Seattle. I'm not tech-savvy enough to know if being on another continent is needed or if just the other side of the current one would be sufficient. Either way, if there were two or more places the data could be accessed from, would that also help with the other issues, which seem to be related to speed and user volume?

 

My guess is that it can't be wrong to spread the risk to other continents, and MAYBE that will improve the speed and accessibility.

 

I seem to remember somewhere that TPTB mentioned that having a mirror setup elsewhere would be cost prohibitive for a service that exists on ad revenue and small membership fees.

Link to comment

Stupid question.

 

I don't live and breathe because of this website. My premium membership money is well spent, and I have no complaints at all. I am thankful for all that they do to keep us in the woods.

 

It's just a damned hobby, people. It's not life or death.

Link to comment

Another cacher and I were talking about the downtime and he mentioned a mirror set of servers on another continent. A separate but equal set of data that would be available even if something happened in Seattle. I'm not tech-savvy enough to know if being on another continent is needed or if just the other side of the current one would be sufficient. Either way, if there were two or more places the data could be accessed from, would that also help with the other issues, which seem to be related to speed and user volume?

 

My guess is that it can't be wrong to spread the risk to other continents, and MAYBE that will improve the speed and accessibility.

 

I seem to remember somewhere that TPTB mentioned that having a mirror setup elsewhere would be cost prohibitive for a service that exists on ad revenue and small membership fees.

 

Ehh, a friend of mine is running a large community here. Let's use some numbers: he is paying $400/month, but my guess is that the need might be 4 times that (just to be sure that there is lots of capacity to be able to have some service), so that's $1,600 x 12 = $19,200... Do you have the numbers for how many premium members there are in Europe?

 

640 premium memberships at $30/year is the number needed to pay for the $19,200.
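Spelled out with those same assumed figures (the $400/month is my friend's bill and the 4x is just my guess at headroom; nothing here is an official Groundspeak number):

```python
monthly_base = 400          # friend's hosting bill for a comparable community
headroom_factor = 4         # guessed margin so there is real capacity to spare
annual_cost = monthly_base * headroom_factor * 12   # 1600 * 12 = 19200
members_needed = annual_cost / 30                   # at $30/year per premium member
print(annual_cost, members_needed)                  # -> 19200 640.0
```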

Link to comment

So, how many of you premium members were upset enough about the downtime to request a refund of all or part of your premium fees?

 

Stuff happens. I don't feel that GS was directly responsible for what happened. Let's see: $30/yr is about $0.08/day. They were down about 29 hours. I'll give them the first four hours for free. I really don't care to ask for my $0.08 back.

 

Jim

Link to comment

 

Ehh, a friend of mine is running a large community here. Let's use some numbers: he is paying $400/month, but my guess is that the need might be 4 times that (just to be sure that there is lots of capacity to be able to have some service), so that's $1,600 x 12 = $19,200... Do you have the numbers for how many premium members there are in Europe?

 

640 premium memberships at $30/year is the number needed to pay for the $19,200.

 

You're not even close. Five cabinets in a colocation facility are not cheap. locuslingua mentioned the cost of the office facility in Seattle was cheaper than the machines. What they have works fine. They might want to fine-tune backups, but I don't have my financial life on their servers, so I really don't see that there was anything to get my knickers in a bunch about. It is a hobby, a pastime (well, not for Jeremy or Elias), so 24/7 uptime through hurricanes, earthquakes, man-made disasters and hamsters is not expected, nor in my opinion needed. Planning for things like a major fire in the data center and that recovery would be prudent. Planning for an outage that extends beyond a couple of days would be prudent. Hey guys and gals, the GS part of the data center was peanuts compared to what else was down for that time: 280,000 vendors had their credit card processing shut down, Verizon had their internet shut down, Bing.com had their travel site shut down. Just be glad our

 

Jim

Edited by jholly
Link to comment
It's just a damned hobby, people. It's not life or death.

I'm not seeing where anyone has said it was. What is being said is folks' plans have been forced to change because of an unforeseen event. A change in scheme can reduce the harm done in such situations.

 

As I've said, this really didn't affect those who keep OLDBs much at all, yet we keep hearing that PQs aren't made for such things. I'd think that in order to better serve its customer base, a site would want to more reliably provide its core offering: getting cache listings into the hands of hobbyists.

Link to comment

 

Ehh, a friend of mine is running a large community here. Let's use some numbers: he is paying $400/month, but my guess is that the need might be 4 times that (just to be sure that there is lots of capacity to be able to have some service), so that's $1,600 x 12 = $19,200... Do you have the numbers for how many premium members there are in Europe?

 

640 premium memberships at $30/year is the number needed to pay for the $19,200.

 

You're not even close. Five cabinets in a colocation facility are not cheap. locuslingua mentioned the cost of the office facility in Seattle was cheaper than the machines. What they have works fine. They might want to fine-tune backups, but I don't have my financial life on their servers, so I really don't see that there was anything to get my knickers in a bunch about. It is a hobby, a pastime (well, not for Jeremy or Elias), so 24/7 uptime through hurricanes, earthquakes, man-made disasters and hamsters is not expected, nor in my opinion needed. Planning for things like a major fire in the data center and that recovery would be prudent. Planning for an outage that extends beyond a couple of days would be prudent. Hey guys and gals, the GS part of the data center was peanuts compared to what else was down for that time: 280,000 vendors had their credit card processing shut down, Verizon had their internet shut down, Bing.com had their travel site shut down. Just be glad our

 

Jim

 

I guess that you didn't see this part: "(just to be sure that there is lots of capacity to be able to have some service)"

Face facts: there is no backup system for info! Twitter and Facebook did work for getting info out, BUT only if the word gets around, and it did on other local geocaching sites.

 

In the last 7 days, 82,000 users wrote some kind of log. How many of us are premium members?

Link to comment

Actually, GC does have a backup of the site info; what you probably mean is that there is no secondary set of servers they can switch right over to in case of an emergency.

 

Having a full secondary set of servers would be cost-prohibitive. I'm sure that either GC's insurance or their server facility's insurance would cover renting a set of servers in case of a long-term disruption.

 

About the only thing that I could suggest as an improvement is an "off-site" server that would only have the capability of sending out mass emails to us in case of a big disruption, along with any updates. That one should be located far enough away that any foreseeable natural disaster that could take the main servers down would not affect it, possibly in an East Coast state.
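Something along these lines is all I have in mind - a rough sketch only, with the addresses, hostname, and recipient list all made up, of a small off-site watcher that does nothing but notice the main site is down and send out a notice:

```python
# Hypothetical off-site notifier: every name here is a placeholder.
import smtplib
import urllib.request
from email.message import EmailMessage

def site_is_up(url="https://www.geocaching.com", timeout=10):
    """Return True if the main site answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except Exception:
        return False

def send_outage_notice(recipients, smtp_host="localhost"):
    """Send a plain-text outage notice via a local SMTP relay (assumed to exist)."""
    msg = EmailMessage()
    msg["Subject"] = "Site outage notice"
    msg["From"] = "status@example.org"          # placeholder address
    msg["To"] = ", ".join(recipients)
    msg.set_content("The main site appears to be down; updates to follow.")
    with smtplib.SMTP(smtp_host) as smtp:
        smtp.send_message(msg)

if __name__ == "__main__":
    if not site_is_up():
        send_outage_notice(["subscriber@example.org"])
```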

Link to comment
I'm sure certain people will jump all over this, but I have a question for all those posting in numerous threads about the need for redundancy and a backup system so the database never goes off-line again. Do you all have a spare computer at your homes loaded with all the appropriate software and data so that you can keep operating if your primary computer quits? If not, why not?

 

 

Although I do not believe we need redundancy with the servers here, as data center outages are rare and a big event when they do happen, with service restored at breakneck speed, the answer to your question is YES! Not only one system but two (PC & laptop), and also a Linux file server that contains a backup of all my music and pictures - many gigs of data.

 

I also have two other systems online here on the home network that I could jump on but my personal data is not on them.

Link to comment
That one should be located far enough away that any foreseeable natural disaster that could take the main servers down would not affect it, possibly in an East Coast state.

Hey, we Europeans want to carry on caching even in the event of a pre-emptive nucular strike on both US seaboards. Let's have that redundant system located somewhere outside the US and that isn't likely to be seen as America's best buddy. That would be France, then. :drama:

 

More seriously, I was intrigued by this tweet from Jeremy in the middle of it all:

Rumor: Seattle is the backup location for Authorize.net - main colo in Dallas flooded last month and still down.

If that's true, the people who designed network operations at Authorize.net will be asking themselves what more they could possibly have done...

Link to comment
Sure we were inconvenienced. I was inconvenienced.

 

But it would take light from "inconvenienced" three years to reach where Jeremy was at, standing outside the server farm at 3 in the morning waiting to get in, and a whole lot of work behind the scenes to restore the site. All ....just so we can grab our little GPS units, hoist our backpacks, and run out to the bush to find tupperware hidden under bark.

 

Let's keep it in perspective, and give him a break :drama:

 

I am not griping about GS and the efforts that the CEO and his main man made yesterday. So, if this is in reference to me, I want to make that clear.

It wasn't in reference to you. It was in reference to the OP.
Link to comment
So, how many of you premium members were upset enough about the downtime to request a refund of all or part of your premium fees?

Hmm. Let's think about this. $30 divided by (24 hours times 365 days) equals about $0.00342 per hour. Multiply that by 29 hours of downtime and we get $0.09931. So Groundspeak owes each of us 10 cents.

 

Considering postage costs and credit card processing fees are both higher than 10 cents, it would be silly to even think of them granting a refund.

 

Now if they want to extend everyone's membership by one day ... :drama:
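For anyone who wants to check that math, here it is as a short calculation (the 29 hours is the downtime figure quoted earlier in the thread):

```python
hourly_rate = 30 / (24 * 365)   # ~$0.0034 per hour of a $30/year membership
refund = hourly_rate * 29       # 29 hours of downtime
print(f"${refund:.2f}")         # -> $0.10, i.e. about ten cents
```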

Link to comment

Do you all have a spare computer at your homes loaded with all the appropriate software and data so that you can keep operating if your primary computer quits? If not, why not?

 

Yes. Haven't you? :drama:

All my "mission critical" info is available if 1 system fails, and most info is available if 2 systems fail. As a radio amateur, my log (all contacts made) is even available if all systems fail. It's called backup and off-site storage.

One system is on 24/7 and data is backed up to a networked drive.

Link to comment
So, how many of you premium members were upset enough about the downtime to request a refund of all or part of your premium fees?

Hmm. Let's think about this. $30 divided by (24 hours times 365 days) equals about $0.00342 per hour. Multiply that by 29 hours of downtime and we get $0.09931. So Groundspeak owes each of us 10 cents.

 

Considering postage costs and credit card processing fees are both higher than 10 cents, it would be silly to even think of them granting a refund.

 

Now if they want to extend everyone's membership by one day ... :drama:

 

Hmm. Let's think about this. $30 times X = the money premium members pay/year

Link to comment
This topic is now closed to further replies.