Marking Cache Coordinates


Indotguy

To believe averaging is a good thing we pretty much have to believe that our first coord is the worst and subsequent coords will be better.  I can't think of any reason to believe this would be true except by chance.

To believe that a single reading is a good thing we pretty much have to believe that our first coord is the best and most reliable reading and subsequent ones are worse. I can't think of a reason to believe this would be true except by chance.

 

:rolleyes:

Quite true.

 

We can't know. So by averaging we might end up with better coords and we might end up with worse coords.

 

What then, is the point of returning to the location 3 times and averaging the coords?

 

With SA we could know that with enough coords to average (which most units did automatically when not moving) we would eventually correct for the random errors and our coords would get better. The averaging at that time was done once per second and the rule of thumb was to let the unit average for 5 minutes which resulted in 300 coords to average. The averaging being done today is usually done, at best, 3 times. So 3 coords to average and for non random errors at that. Even if the errors were random, which they aren't, with only 3 datasets to average the effort would still be pointless.
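To put rough numbers on that rule of thumb, here is a small simulation sketch. The 25 m zero-mean Gaussian noise is an illustrative assumption standing in for SA-era random error (the real SA error model was never published); the point is only to show why averaging 300 readings helped while averaging 3 barely does, even in the best case of truly random error.

```python
import random
import statistics

def simulate(n_samples, error_sd_m=25.0, trials=1000, seed=1):
    """Average n_samples position fixes whose error is zero-mean
    Gaussian noise, and return the mean distance (in metres) of the
    averaged fix from the true point over many trials."""
    rng = random.Random(seed)
    misses = []
    for _ in range(trials):
        xs = [rng.gauss(0.0, error_sd_m) for _ in range(n_samples)]
        ys = [rng.gauss(0.0, error_sd_m) for _ in range(n_samples)]
        mx, my = statistics.fmean(xs), statistics.fmean(ys)
        misses.append((mx * mx + my * my) ** 0.5)
    return statistics.fmean(misses)

print(simulate(1))    # one snap reading
print(simulate(3))    # "take 3 and average"
print(simulate(300))  # 5 minutes at one fix per second, SA-style
```

Under this assumed noise the 300-sample average lands far closer to truth than either the single reading or the 3-sample average, and the gap between 1 and 3 samples is modest. And this is the best case: if the errors are not random, the cancellation argument doesn't hold at all.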

 

We might be getting more close-in coords than far-out ones. We might be getting errors that self-correct, or we might be getting errors that amplify the total error.

 

How do we know?

 

We don't.

 

Therefore the extra effort expended is pointless.

All of this postulating misses one point, you don't know if any one waypoint, taken by itself, is closer to the actual point on the surface of the planet than another.

 

I agree.

 

Secondly, all of the readings a GPS will give you will be within a certain distance a certain amount of the time, but you don't know which one is "more accurate" at any one time.

 

I agree.

 

While I agree GPS signals aren't completely random, they are not organized to the point where averaging is worthless either.  Yes, I agree that an averaged reading may be worse than a single reading, but just as likely it also may be much better.  You wouldn't know unless you take other readings to compare it to.

 

I agree.

 

Regardless, "accuracy" is not a real issue in geocaching.  "Repeatability" is.  There is a difference.  You are wanting a reading than another geocacher can duplicate so they can find a box in the woods.  Who cares if the coordinates given are the actual coordinates on the face of the Earth?  As long as my reading and your reading matches or is pretty close, we're good.

 

I agree.

 

The thing is, not only is a single reading 'good enough' for geocaching, but the practice of averaging will sometimes produce better coords and sometimes worse; we cannot know in any given case which will be true.

 

Since averaging will sometimes produce coords that are, at best, no different and, at worst, less accurate, why would anyone wish to expend the effort to take multiple readings at different times on different days when they can't know whether the end result is better, the same, or worse than the single reading?

 

I don't say averaging CANNOT result in better coords, I only say averaging cannot correct for non random errors. It can't reliably correct for them anyway, but it can, by chance, have that effect at times.

 

So, as you say, we can't know what any given averaging exercise is going to result in, so why do it? If we knew that averaging would reliably produce better coords then some might find it worthwhile. But it doesn't reliably produce better coords. It hasn't since SA was turned off.

Whether you average or not I think it is always a good idea as the cache hider to use those coordinates on another day and time and see where they lead you. If they do not lead within a reasonable distance of the cache you need to refine the coordinates.

 

It may also point out to you that you have entered incorrect coordinates on the cache page. I have seen this more than once.

Whether you average or not I think it is always a good idea as the cache hider to use those coordinates on another day and time and see where they lead you.  If they do not lead within a reasonable distance of the cache you need to refine the coordinates.

 

It may also point out to you that you have entered incorrect coordinates on the cache page.  I have seen this more than once.

I was about to reply pretty much as what Night Stalker said.

 

Let's change the question from "does averaging give better coordinates" to "how much time should I spend getting good coordinates". Most of the time, if you just take a waypoint you'll get coordinates close enough for someone else to find your cache. In fact, most of the time people will post that the coordinates took them right to the cache. In rare instances you or a finder may get a different reading. If you get the bad reading, you'll get a few finds where the finders indicate the coords were a little off, and maybe they will post corrected coordinates. You will either change the cache page to use the corrected coordinates or let future finders see the corrected coordinates in the logs for themselves. If a finder gets a bad reading, most other finders will have posted that the coords are right on and the unlucky guy will post a DNF. I'll agree that averaging or not will not make any difference as to what you will see in your logs.

 

The majority of caches that I've found that had bad coordinates had nothing to do with the GPS. It was because someone had entered the coordinates wrong when they posted the cache.

Edited by tozainamboku
So, as you say, we can't know what any given averaging exercise is going to result in, so why do it?

Actually, I didn't say any particular averaging will result in an unknown. What I said was any one reading, that's one waypoint, not averaging, will be an unknown.

 

Additionally, it doesn't matter if averaging produces a worse waypoint than a single one. I touched on the reason before. What matters is repeatability. What is the hunter's reading going to be? Averaging produces a number that is closer to what any person who comes after will get. A single reading may be a wild flier and you would never know without taking other readings.

 

Averaging produces a higher confidence the numbers are close than taking a single reading alone. That's what counts.

Most of the time if you just take a waypoint you'll get coordinates close enough for someone else to find your cache.

If you know how to properly take a waypoint.

 

Users of SporTraks who walk right up to a spot and snap a waypoint will find their waypoint will be embarrassingly off. However, if they do the average-break-average routine they will find the SporTrak will produce highly accurate waypoints.

  Averaging produces a number that is closer to what any person who comes after will get.  A single reading may be a wild flier and you would never know without taking other readings.

 

I agree. [Edited to clarify I am agreeing only with the second sentence, not the first.]

 

Averaging produces a higher confidence the numbers are close than taking a single reading alone.  That's what counts.

 

I disagree that averaging produces a higher confidence that the numbers are close. What you described isn't averaging, but selecting the waypoint with the least deviation, which is what ImpalaBob advocated a page back.

 

If you take your coords, walk away, then return to the spot and your GPSr tells you that you are 75' from where you're standing, then yes, you need to walk away and take a 3rd reading. If the 3rd reading agrees with the first, toss the 2nd. If the 3rd reading agrees with the 2nd, toss the first. This isn't averaging, though; it is selecting the waypoint with the least deviation.

 

My only argument in this thread is that averaging is pointless and just as likely to give worse coords than better ones. Averaging was defined in the OP as "I usually take at least 3 waypoint readings. If possible mark a waypoint on three sides of the cache and then average."

 

Averaging doesn't involve discarding the outliers; it involves including them in the computation. My argument is that this is a pointless exercise and one can't be sure whether the end result is better or worse than the first result.

 

Discarding the outliers and choosing the waypoint with the least deviation is much more likely to produce a better result. But again, that isn't averaging.
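To make the two procedures concrete, here is a sketch with made-up readings (offsets in feet from the true point; the 75' flier is hypothetical). Plain averaging includes the flier; the least-deviation method keeps the pair that agree best and tosses the odd one out.

```python
import statistics

def plain_average(readings):
    """The OP's method: average every reading, outliers included."""
    return (statistics.fmean(r[0] for r in readings),
            statistics.fmean(r[1] for r in readings))

def _dist(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def least_deviation(readings):
    """The alternative described above: of 3 readings, keep the pair
    that agree best, discard the third, and average the kept pair."""
    a, b, c = readings
    pairs = [((a, b), c), ((a, c), b), ((b, c), a)]
    best_pair, _outlier = min(pairs, key=lambda pc: _dist(*pc[0]))
    return plain_average(list(best_pair))

# A wild flier 75' out drags the plain average far off target,
# while least-deviation simply discards it.
readings = [(0.0, 0.0), (2.0, 1.0), (75.0, 40.0)]
print(plain_average(readings))
print(least_deviation(readings))
```

With these numbers the plain average sits over 25' from the tight pair of readings, while the least-deviation result lands between them.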

Edited by DaveA
But don't tell me that taking a single measurement is going to give coords as good as can be obtained with multiple measurements, because it just isn't true. Coords taken just once will probably be "good enough," but is that really the standard we want to set for ourselves?

 

Good enough is all we need in this sport. We're not surveyors.

 

I'm not saying that a single reading will provide better coordinates than averaging. I agree that most of the time averaging will be more accurate, but averaging does not guarantee better coordinates.

 

My chief point is that it isn't worth all the extra effort to gain a few feet of accuracy. If people want to spend a few hours returning to the cache over a period of several days, more power to them. I'll use that time to find or place a few more caches, or maybe go fishing or read a book and in the end I'll pit the quality of my coordinates against anybody's.

My only argument in this thread is that averaging is pointless and just as likely to give worse coords than better ones.

Well, my argument is that if done properly averaging will produce a better final waypoint than a single reading. I've seen this in the real world, so I have no doubts in the least. A SporTrak will, by default, average readings when it is standing still. The longer it sits, the more readings it averages. I don't have to walk away and come back or any of that nonsense. I just let it sit.

 

Now, if you care to do a real world experiment to demonstrate this then all you have to do is obtain a SporTrak, find a convenient spot in your backyard, and start. Each time you go through the exercise, jot down the result. Take them randomly: a few in one day, wait a few days, wait a week, it doesn't matter. Like I said before, I got ±0.001 (less, actually) deviation. Can you get that by just going out and taking a waypoint? That's my point on "confidence."

My only argument in this thread is that averaging is pointless and just as likely to give worse coords than better ones.

 

Well, my argument is if done properly averaging will produce a better final waypoint than a single reading.

I am not sure I am following you. First, I don't understand why I would need a SporTrak. I have a Meridian so I assume that works too? For what it is worth (not much really) I have done the same thing and the result is comparable. The Meridian always says it is within 7 feet (most often exactly 6 ft) of the original coords. I didn't average anything, although Magellan units do a type of averaging which is not comparable to the manual, take-3-reads-and-average-them technique under discussion.

 

What do you mean by "if done properly averaging will produce a better final waypoint than a single reading"?

 

What is the *proper* way to do it?

 

If it includes using the auto averaging feature of your Magellan unit to discard the outliers for you and weight each reading according to internal, proprietary algorithms or involves manually discarding the outliers and selecting the waypoint(s) with the least deviation then this isn't averaging as described in the OP which is what I have been discussing.

 

Averaging is taking more than 1 reading and averaging those readings together. The usual number I see suggested is 3. Any process of elimination or assigning different weight to some results is *not* what the OP mentioned and is *not* what I am talking about.

 

If you are arguing for the least deviation method it is another topic and I have already agreed with you on it. It isn't averaging.

Edited by DaveA
How about some data?

 

Averaging works, whoda thunkit?

Yes, it does work, but this proves my point. If I'm reading these charts correctly you need to average about 3-4 hours to gain any significant increase in accuracy.

 

And after all that your accuracy improves from roughly 5 meters to roughly 2 meters. Big whoop.

How about some data?

 

Averaging works, whoda thunkit?

I think you didn't read your own source closely enough :rolleyes:

 

It says

The above plots and formulas would seem to imply that 1 to 2 days might get a position-average RMS error down to the 1-meter level.

 

Great, so you average your coords for 24 to 48 hours CONTINUOUSLY and you might get accuracy down to 3 feet. Of course with a WAAS unit you can get to within 9 feet with a single reading. Let's keep in mind the test equipment used to get to this 1 meter accuracy after 24 hours of averaging included an external antenna.

 

However, let's read further:

 

The plot below shows averaging results for 30 days using a Garmin eMap.  As the plot indicates, the 30-day average is displaced about a meter and a half from the true position.  This possible bias indicates that there is a limit in the accuracy that may be obtained from averaging position.

 

So, even with averaging over 30 days and with an external antenna the article concludes you may only be able to get 4.5ft of accuracy instead of the 9 you get with WAAS on a single reading.

 

Yeah, the words might and may are real confidence-inspiring when it comes to getting those extra few feet of accuracy over the course of days. Also take a look at their plot points and tell me if the positions plotted appear to be random or if they are tilted to a particular area (non mathematically random).

 

You want to sit at your cache site averaging for 2 days with your external antenna you go right ahead.

 

Now here is the real kicker, they didn't average.

 

In summary, different GPS receivers perform differently when position-averaging.  Several days of position-averaging appear to be needed to obtain 1-meter level horizontal accuracy.  High-end (survey-grade) units will do significantly better.  Finally, the statistics vary somewhat and extensive measurements may be required to obtain accurate model values.  For this reason, predictions should be taken only as approximations.  Remember that the analysis done here were for simple position-averaging done on the NMEA data.  Any "tricks" or re-configuring of the receiver algorithm for firmware position-averaging have not been analyzed.

 

In other words the averaging was using the unit's internal algorithms to throw out waypoints and assign differing weights to the readings. This is not the same as taking 3 readings and manually averaging them which is what has been under discussion.
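For what it's worth, the difference between the two approaches can be sketched like this. The 1/EPE² weighting and the numbers are purely illustrative assumptions; actual receiver firmware algorithms are proprietary. The point is only that a weighted scheme lets a fix the unit itself estimates as poor count for very little, which a plain 3-reading mean cannot do.

```python
import statistics

def simple_mean(readings):
    """The manual method under discussion: plain mean of the raw readings."""
    return (statistics.fmean(r[0] for r in readings),
            statistics.fmean(r[1] for r in readings))

def weighted_mean(readings, epe):
    """Firmware-style sketch: weight each fix by 1/EPE^2 so fixes the
    unit estimates as poor count for less. The 1/EPE^2 scheme is an
    illustrative assumption, not any vendor's actual algorithm."""
    w = [1.0 / (e * e) for e in epe]
    total = sum(w)
    return (sum(wi * r[0] for wi, r in zip(w, readings)) / total,
            sum(wi * r[1] for wi, r in zip(w, readings)) / total)

# Hypothetical offsets (metres from the true point) and the unit's own EPE.
readings = [(0.0, 0.0), (3.0, 2.0), (60.0, 45.0)]
epe = [4.0, 5.0, 30.0]
print(simple_mean(readings))         # dragged far off by the bad fix
print(weighted_mean(readings, epe))  # the bad fix barely moves the result
```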

 

If anything the statement in the article that says "Several days of position-averaging appear to be needed to obtain 1-meter level horizontal accuracy." ought to indicate that the type of averaging (using external antennas) discussed is not something any cache hider is going to do - ever. Further the averages used the internal weighting algorithms to weight each reading rather than just average them all together.

 

If you looked up this info as a result of this discussion though then I am happy, at least people are looking for facts and that is always a good thing. Nevertheless the article doesn't conclude what you say it does.

...this isn't averaging as described in the OP...

Who cares what the OP described. Proper averaging works. Period.

 

You have a Meridian, then you can duplicate my experiment. When you put your unit down it will not likely show the same coords as after it's averaged a while, but after you've gone through the routine I'll describe yet again it likely will: find your spot and put the unit down, and check to make sure you have good signal from your satellites. Let the unit average for 5 minutes, then move the unit just enough to break averaging, and put it right back. Let it average for 10 minutes, 20 or more if you have an obstructed view of the skies. Take your waypoint.

 

Again, I suggest that anyone who wants to ensure they know how to take good waypoints play with it in their backyard. Can you take a waypoint and get very nearly the same reading each time, day after day?

 

BTW, Dave, when you say that your Meridian is telling you you're within 7 ft. of your coords, is that an EPE of 7 ft? What about how far you are from the actual spot you originally took your waypoint when the unit says "0"? You know, there is a difference.

 

Also, how would you know SA was randomly generated versus pseudorandomly generated? Do you know some state secrets?

 

Disclaimer:

I know what works and works really well for a SporTrak Map and Pro. Users of other units: YMMV.

While we're throwing out references, here's one that convinced me to purchase the SporTrak. Note the larger scatter of the other units and note the testers moved the SporTrak to make sure it was still working because the readings were so tight.

 

The built-in auto-averaging is what convinced me to buy. Of course, I have to deal with the sling-shot effect, but you get used to it.


Who cares what the OP described.  Proper averaging works.  Period.

 

Well, I believe it is customary not to change definitions in the middle of a discussion, so what the OP described is something I care about. It is what I have argued does not work. You have equated the averaging that a GPSr does with the averaging described in the OP. Why? They aren't even remotely the same. You may not care, but it is what the discussion has been about.

 

BTW, Dave, when you say that your Meridian is telling you you're within 7 ft. of your coords, is that an EPE of 7 ft?  What about how far you are from the actual spot you originally took your waypoint when the unit says "0"?  You know, there is a difference.

 

It is the distance field that tells me I am 7 feet, not the EPE.

 

Also, how would you know SA was randomly generated versus pseudorandomly generated?  Do you know some state secrets?

 

It is public knowledge.

 

Here are a couple links:

 

one

 

two

 

It was simply adding noise to the clock and ephemeris in the navigation message.

It is public knowledge.

 

Here are a couple links:

 

one

 

two

 

It was simply adding noise to the clock and ephemeris in the navigation message.

ROFL!

 

Neither of those two links said anything about how the error was generated.

 

Noise? "Noise" can be pseudorandom.

Are you just trying to argue with me for the sake of arguing? You have taken the usage of 'averaging' in this thread and turned it into something different just to make your point. Now you want to harp on me about SA? Google is your friend. Use it.

 

Neither of these things is relevant to whether or not manual averaging, as suggested by the OP, is going to produce a more accurate result.

 

Unless you wish to discuss that please stop with the distractions.

 

Seriously, when I tried to bring you back on topic you said "Who cares what the OP described?". I mean... WHAT? What do you think is being discussed in this thread started by the OP?

 

Please discuss the matter at hand or start a different thread. I am not going to entertain your apparent desire to argue for the sake of arguing.

 

edited to add: Here is one last link on the SA thing for you. Note how many times the article uses the term pseudo random to describe the way GPS operates, but when they discuss SA they say random, not pseudo random. If that isn't enough for you then it is off to google or the library for you.

Edited by DaveA
edited to add: Here is one last link on the SA thing for you. Note how many times the article uses the term pseudo random to describe the way GPS operates, but when they discuss SA they say random, not pseudo random. If that isn't enough for you then it is off to google or the library for you.

Considering the way you've used random, psuedo-random, and "mathematically" random in this thread I'm not impressed that you can interpret a description of SA to fit your argument. I have no idea what "mathematically" random means so it's possible that you are using this term correctly :unsure:

Considering the way you've used random, psuedo-random, and "mathematically" random in this thread I'm not impressed that you can interpret a description of SA to fit your argument. I have no idea what "mathematically" random means so it's possible that you are using this term correctly :unsure:

 

Initially I simply used random and non random. Then it became apparent from people's examples that they were describing non random things that appear random as being random. So I used the term mathematically random to attempt to clarify, since the averaging we are talking about is a mathematical process. The attempt apparently failed, so I switched to pseudo random.

 

I can understand the confusion, but it can't be helped. To further confuse things, pseudo random elements can still be averaged with good results as long as the distribution of the pseudo random elements is uniform enough. The problem with the pseudo random elements people are discussing here, like the errors from multipath and ionospheric distortion, is that there is no guarantee the distribution will be uniform enough for averaging to have a positive impact. Further, taking 3 sets of coords to average all but guarantees the distribution won't be uniform enough due to a woefully inadequate data set. Even the averaging done by GPSrs that weights the coords differently requires dozens, if not thousands, of coords to make any noticeable improvement in accuracy.
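A quick simulation sketch of that point. The 8 m fixed bias standing in for a persistent offset such as multipath, and the 5 m noise figure, are assumptions for illustration only; the takeaway is that averaging shrinks zero-mean noise but cannot touch a bias, no matter how many samples you take.

```python
import random
import statistics

def mean_miss(bias_m, sd_m, n, trials=500, seed=7):
    """Average n readings whose error is Gaussian noise plus a fixed
    bias, and return how far the averaged fix sits from the true
    point (one-dimensional, averaged over many trials)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        xs = [bias_m + rng.gauss(0.0, sd_m) for _ in range(n)]
        total += abs(statistics.fmean(xs))
    return total / trials

print(mean_miss(bias_m=0.0, sd_m=5.0, n=300))  # unbiased: averages in toward zero
print(mean_miss(bias_m=8.0, sd_m=5.0, n=300))  # biased: the 8 m offset survives averaging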

 

BTW, why are you criticizing my usage of terms you don't even spell correctly?

It is pseudo, not psuedo. If you feel I am using a term improperly and it somehow relates to the topic of discussion, present your argument rather than just taking jabs please.

 

Better yet, make a good argument showing that taking 3 readings and averaging them consistently produces more accurate rather than less accurate coords. That would be really helpful if someone wanted to try to argue that.

 

I mean, actually discussing the issue, what a concept. Making and defending an argument, advancing a counter argument. That kind of thing. Taking pot shots doesn't really demonstrate anything other than you have probably been drinking and you have no argument to make.

Well, what I was getting at is that your assertion that you can average SA readings, but not non-SA readings, to get a better waypoint was bogus. In fact, I'd suspect the real error induced is still secret even though it's not needed. Additionally, I've not seen or heard anywhere that the error caused the reading to bounce all around, as it seems you assert, in order to get a better average. While it was before my time, GPS-wise, it is my understanding the error is similar to the error we have now, only a little different and a lot larger. Besides, the military wouldn't want it too easy to average the reading and get a firm grasp on things.

 

Anyway, sure, you want to get back to whether 3 readings are no better than 1, then let's.

 

While I think 3 is a little on the low side, it's still workable. Okay, back to my argument that you can throw accuracy right out the window when it comes to geocaching. Who really cares what the exact spot on earth is where you've placed your cache? Really? The only person who is going to give the slightest bit of concern is another geocacher. Guess what, he's using the exact grade of receiver you are. The only thing he cares about is getting close to the reading you got--which may or may not be any closer to the actual coordinates of that spot. The only real issue is getting the finders close to what you got.

 

Now, each time someone comes along they are going to have a slightly different reading. Eventually, if each person, say, dropped a marble at each of their "zeros" you'd have a scattering of marbles.

 

Now, here's the real issue: is the cache in the middle of that scattering? Or is it off because of a bias induced because you only took one reading? Sure, it could be a bit off. One day a person might say the coords are dead on. Another day, a cacher might find them over 50' off. If the coords have little or no bias folks will be reporting lower maximum errors.

 

See, I think you know what I'm going to say next. The average of that scattering is actually the real world location of the coordinates you took, which may or may not be actually on top of the cache.

 

However, if you had taken several readings and did your own average using the cache as a reference, the resultant finders' scatter would be much closer to the cache's location.

 

Why? Because you're making your own scatter.

Averaging of waypoints is a waste of time.

Even if the hider takes several readings and averages them out, the person looking for the cache will only be working from the one current reading they are getting.

 

Get a bunch of cachers together to find a cache that was hidden by someone that averaged out several readings and see where they all end up. They are not all going to wind up at the same spot.

 

Averaging may have helped in the days of S.A. but in today's world it is just a waste of time.

Well, what I was getting at is that your assertion that you can average SA readings, but not non-SA readings, to get a better waypoint was bogus. In fact, I'd suspect the real error induced is still secret even though it's not needed. Additionally, I've not seen or heard anywhere that the error caused the reading to bounce all around, as it seems you assert, in order to get a better average. While it was before my time, GPS-wise, it is my understanding the error is similar to the error we have now, only a little different and a lot larger. Besides, the military wouldn't want it too easy to average the reading and get a firm grasp on things.

 

OK, first, if the error introduced with SA is being falsely represented by the government I have no way of knowing that. I can only go by what I read, which has stated the error was random, not pseudo random, even in papers that use the terms random and pseudo random fairly precisely, such as the last link I provided. For all I know the error was in fact pseudo random, but with a uniform enough distribution that averaging had a consistently positive effect rather than the sometimes positive, sometimes neutral, sometimes negative effect that we get with manual averaging of non random errors. It is difficult to know whether something is random or simply pseudo random with uniform distribution when the details of how it worked are at best sketchy. If the randomness was computer generated then it is pseudo random 100% of the time, as computers are incapable of true randomness, but most of the time the algorithms used by computers produce uniform distribution, so for the sake of averaging it is the same as random. Such is not the case with environmental errors such as multipath and ionospheric errors. There is no guarantee of uniform distribution of error, therefore averaging cannot be consistently beneficial.

 

Anyway, sure, you want to get back to the 3 reading is no better than 1, then let's.

 

OK, thank you.

 

While I think 3 is a little on the low side, it's still workable.

 

Well, then I would request you show how it is workable. The idea of averaging is that the errors cancel themselves out given enough samples, assuming uniform distribution of the errors. With 3 samples, each with some error, this isn't possible. The best case scenario is that one error is 10' to the north and another is 10' to the south, so they cancel each other out and produce perfect accuracy. If we introduce a 3rd sample with a degree of error, what cancels that out? A worst case scenario is that all 3 samples err in the same direction, but increasing in magnitude: sample 1 is 1' to the north, sample 2 is 10' to the north and sample 3 is 40' to the north.
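The arithmetic of that worst case, spelled out:

```python
# Worst case from above: all three errors to the north, growing in size.
samples_ft = [1.0, 10.0, 40.0]  # feet north of the true point
average = sum(samples_ft) / len(samples_ft)
print(average)  # 17.0 ft north of the true point
# The first reading alone was only 1 ft off, so averaging made things worse.
```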

 

Your previous example of using the waypoints with the least deviation would work much better with 3 samples than averaging would. Well, it would work better on a more consistent basis anyway. Letting the GPSr do the averaging is also much better since the internal algorithms weight each reading differently based upon its estimated accuracy. Of course how well the GPSr does this is make and model specific, but from what I can tell Garmin and Magellan have both gotten better at this with time.

 

Okay, back to my argument that you can throw accuracy right out the window when it comes to geocaching. Who really cares what the exact spot on earth is where you've placed your cache? Really? The only person who is going to give the slightest bit of concern is another geocacher. Guess what, he's using the exact grade of receiver you are. The only thing he cares about is getting close to the reading you got--which may or may not be any closer to the actual coordinates of that spot. The only real issue is getting the finders close to what you got.

 

I agree. However, there is a but coming shortly :unsure:

 

Now, each time someone comes along they are going to have a slightly different reading.  Eventually, if each person, say, dropped a marble at each of their "zeros" you'd have a scattering of marbles.

 

Now, here's the real issue: is the cache in the middle of that scattering? Or is it off because of a bias induced because you only took one reading? Sure, it could be a bit off. One day a person might say the coords are dead on. Another day, a cacher might find them over 50' off. If the coords have little or no bias folks will be reporting lower maximum errors.

 

OK so far.

 

See, I think you know what I'm going to say next. The average of that scattering is actually the real world location of the coordinates you took, which may or may not be actually on top of the cache.

 

However, if you had taken several readings and did your own average using the cache as a reference, the resultant finders' scatter would be much closer to the cache's location.

 

I don't see this as being true. I do certainly agree that it is wise to double check the coords one lists for a cache. This is to reduce the possibility that the coords listed were wildly inaccurate for any number of reasons, whether they be human or 'machine' or environmental error. However, if the second reading (taken seconds later) agrees with the first reading within the GPSr's margin of error then additional readings have no real value in most cases. The only exception would be if one were next to a rock bluff or some other object known to introduce multipath error, in which case the readings should ideally be taken at different times. When dealing with multipath though there really is no defense against it other than the unit rejecting those signals, and the ability to do this is make/model specific. If the cache is near a multipath producing object the end result is that lots of folks are going to have trouble getting close to ground zero regardless of how many coords you took and averaged or tossed out the outliers or used the unit's averaging algorithms for.

 

Why?  Because you're making your own scatter. 

 

Yes, that is true, but a scatter with 3 datapoints isn't a meaningful scatter and averaging those samples may make the listed coords less accurate rather than more accurate.

 

OK, let's get more practical now and simply answer the question: how do you get the most accurate coords possible for a cache listing with a minimum of effort?

 

Easy: use your GPSr's auto-average function and let it sit for 1-2 minutes. Magellan units do it automatically; on many Garmin units you have to turn the feature on; Lowrance, I have no idea. You then get the benefit of averaging with the outliers tossed out and each coord (one per second) weighted by the unit based upon its estimated accuracy. When in doubt, move the unit a few feet to break the averaging and repeat a second time. 99% of the time the second reading will be the same as the first within the unit's margin of error (9' with WAAS). If you really, really want to, you can average the two :blink:

Link to comment
Yes, that is true, but a scatter with 3 datapoints isn't a meaningful scatter and averaging those samples may make the listed coords less accurate rather than more accurate.

This is where you are losing all logic. You say averaging may or may not be more accurate.

 

More accurate than what?

 

So, really what you are saying is any one of the waypoints you are averaging may or may not be more accurate than the average of all of them. A statement like that would mean that it doesn't matter how many of those points you have; the statement would still be true. The problem is which one do you choose?

 

Here's another mental exercise that might show you where you are wrong.

 

We'll take a one hundred waypoint scattering. Average the error to get a radius. Now, randomly take 100 combinations of any three waypoints and average them together to get a new scattering. Average the error of that scattering. Which error average do you suppose will be greater? Hmmm...?
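That exercise can be checked with a quick simulation. The sketch below assumes the errors are random and unbiased (the noise figure of 15' is made up for illustration) and compares the scatter of single readings against the scatter of random 3-point averages:

```python
import math
import random

random.seed(42)

# Simulated GPS error: independent Gaussian noise on each axis, in feet.
# ASSUMPTION: errors are random and unbiased, centered on the true point (0, 0).
def reading():
    return (random.gauss(0, 15), random.gauss(0, 15))

def dist(p):
    return math.hypot(p[0], p[1])

points = [reading() for _ in range(100)]
single_err = sum(dist(p) for p in points) / len(points)

# 100 random combinations of any three waypoints, averaged together,
# exactly as the exercise describes.
triple_errs = []
for _ in range(100):
    trio = random.sample(points, 3)
    avg = (sum(p[0] for p in trio) / 3, sum(p[1] for p in trio) / 3)
    triple_errs.append(dist(avg))
triple_err = sum(triple_errs) / len(triple_errs)

print(f"mean error of single readings:  {single_err:.1f} ft")
print(f"mean error of 3-point averages: {triple_err:.1f} ft")
```

Under that unbiased-noise assumption the averaged scatter is indeed tighter, by roughly a factor of 1/sqrt(3). The whole dispute in this thread is over whether the assumption holds.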

 

See, you can't look at just one waypoint at one cache. You have to look at the whole picture.

Link to comment
This is where you are losing all logic.  You say averaging may or may not be more accurate.

 

That is correct: I am saying that averaging samples that all contain error of non-uniform distribution may or may not produce a result that is more accurate than the first sample. The exception would be if we knew with certainty that the distribution of the error was uniform enough, and the sample size large enough, for averaging to have any hope of a consistently positive effect. If you disagree with this, I strongly recommend you visit your closest university and ask a math professor whether averaging a set of samples, each with a degree of error that is not uniform in distribution, will consistently yield a more accurate result than the first sample. There is simply no room for debate on this subject. It is a mathematical certainty that the result will sometimes be positive, sometimes neutral, and sometimes negative.
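Here is a sketch of what "may or may not" looks like in practice. This hypothetical simulation gives three readings a shared 20' bias (the non-uniform part) plus small jitter, and counts how often the 3-sample average actually beats the first reading; all figures are made up for illustration:

```python
import math
import random

random.seed(0)

def trial(bias):
    # Three readings sharing a common (non-random) bias plus small jitter.
    reads = [(bias[0] + random.gauss(0, 5), bias[1] + random.gauss(0, 5))
             for _ in range(3)]
    first = math.hypot(reads[0][0], reads[0][1])
    avg = math.hypot(sum(r[0] for r in reads) / 3,
                     sum(r[1] for r in reads) / 3)
    return avg < first

# A fixed 20 ft eastward bias dominates the jitter: averaging smooths
# the jitter but cannot remove the shared bias.
wins = sum(trial((20.0, 0.0)) for _ in range(10_000))
print(f"average beat the first reading in {wins / 100:.1f}% of trials")
```

When a shared bias dominates, averaging wins only about half the time, which is exactly the "sometimes positive, sometimes neutral, sometimes negative" result.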

 

More accurate than what?

 

So, really what you are saying is any one of the waypoints you are averaging may or may not be more accurate than the average of all of them.  A statement like that would mean than it doesn't matter how many of those points you have, the statement would still be true.  The problem is which one do you choose?

 

I agree 100%. Which one to choose is a problem for humans, given our insufficient understanding of the probability of any given sample being more accurate than the next. If I say we should always go with the first sample, then sometimes that first sample will be the best, sometimes it will be within a few feet of the best, and occasionally it will be 100' from the best. On the other hand, if I say average them all out, then sometimes the average will be better than the first, sometimes worse, and sometimes the same. The question then is: what is the point of averaging a very small (3-5) set of samples, whose error is not guaranteed to be uniform in distribution, only to end up with a result that may, or may not, be any more accurate than the first sample? It is expending more effort for a result that might be worse. You have hit the nail on the head: the problem is how do we choose? We can't choose, because we can't know which sample is the most accurate. If we could know that, we certainly wouldn't average the sample known to be the best with those known to be inferior, would we? Of course not. We average because we don't know which sample is the best and we assume the average will yield a better result. With a uniform distribution of error this would be true (as it was with SA). It is no longer guaranteed to be true, though. That is why averaging is pointless, particularly on tiny datasets.

 

That is why I said choosing the coords with the least deviation will more consistently produce better results than averaging. Choosing the least deviation at least relies upon an assumption that is going to be true more often than not: the outliers are probably the result of large errors, whereas the tightly clustered samples are probably the result of smaller errors. Even this system isn't all that great, though, as it is possible that the outlier is dead on and the tight cluster is the result of a non-random error affecting the result. It is not likely, though, which is why it is a superior method to averaging.
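A minimal sketch of that "least deviation" idea (the helper name and sample figures are hypothetical): score each reading by its mean distance to the other readings and keep the one that deviates least.

```python
import math

# Hypothetical helper: pick the reading with the smallest mean distance
# to the remaining samples -- the one that deviates least from the rest.
def least_deviation(readings):
    def mean_dist(i):
        p = readings[i]
        others = [q for j, q in enumerate(readings) if j != i]
        return sum(math.dist(p, q) for q in others) / len(others)
    best = min(range(len(readings)), key=mean_dist)
    return readings[best]

# Four tightly clustered readings and one 80 ft outlier
# (offsets from ground zero in feet).
samples = [(2, 1), (0, 3), (1, -2), (-1, 0), (80, 5)]
print(least_deviation(samples))  # picks a cluster point, never the outlier
```

Note how this differs from averaging: the 80' outlier is discarded entirely instead of dragging the result 16' toward it.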

 

Better still is using the GPSr's auto-averaging function, which not only discards outliers but weights each sample according to proprietary algorithms. I have no knowledge whatsoever of any algorithm used by any manufacturer in any model. However, it is safe to assume that in assigning weight to each sample the firmware is taking multipath detection, WAAS correction/non-correction and other factors into account, since every unit is capable of it.
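Nobody outside the manufacturers knows the real algorithms, but the textbook stand-in for "weighted by estimated accuracy" is inverse-variance weighting on the reported EPE. This is only a sketch with made-up fixes, not any vendor's actual method:

```python
# Sketch of EPE-weighted averaging (NOT any manufacturer's real algorithm):
# weight each fix by 1/EPE^2, so tighter fixes count for more.
def weighted_position(samples):
    """samples: list of (lat, lon, epe_feet); smaller EPE gets more weight."""
    weights = [1.0 / (epe * epe) for _, _, epe in samples]
    total = sum(weights)
    lat = sum(w * s[0] for w, s in zip(weights, samples)) / total
    lon = sum(w * s[1] for w, s in zip(weights, samples)) / total
    return lat, lon

fixes = [(35.00010, -97.00020, 9.0),    # tight EPE, e.g. WAAS-corrected
         (35.00030, -97.00050, 30.0)]   # wide EPE, e.g. poor geometry
print(weighted_position(fixes))
```

With a 9' fix against a 30' fix, the 9' fix gets about 92% of the weight, so the result lands almost on top of it rather than halfway between the two.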

 

Here's another mental exercise that might show you where you are wrong.

 

We'll take a one hundred waypoint scattering.  Average the error to get a radius. 

 

If I average 100 waypoints I won't end up with a radius, I will end up with a precise lat/long coord.

 

Now, randomly take 100 combinations of any three waypoints and average them together to get a new scattering.  Average the error of that scattering.  Which error average do you suppose will be greater?  Hmmm...?

 

OK, I see what you are getting at. The problem is you are assuming a uniform distribution of error. If 10 waypoints are within 3' of ground zero and the other 90 are off to the east, your experiment fails. This would be a fairly extreme example, but it demonstrates the futility of relying upon averaging to deal with non-random or, perhaps more accurately, non-uniformly distributed errors.
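The 10/90 example works out as plain arithmetic (east offsets in feet, with a made-up 40' bias for the 90 skewed readings):

```python
# Skewed scatter: 10 readings at ground zero, 90 readings pushed 40 ft
# east by some non-random error (east positive, feet).
readings = [0.0] * 10 + [40.0] * 90
average = sum(readings) / len(readings)
print(f"averaged position: {average:.0f} ft east of ground zero")
```

Even with a 100-sample dataset the average lands 36' east of the cache, because averaging can only cancel errors that scatter evenly around the true point.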

 

In your 100 sample dataset the odds that we would end up with a bad average are much less than if we use a dataset of 3-5 samples which is what is usually suggested by those persons, hopelessly bad at math, who suggest taking 3-5 readings and averaging them. In the real world, even with errors that aren't uniform in distribution, the larger the dataset, the more uniform the distribution often (but not always) becomes.

 

See, you can't look at just one waypoint at one cache.  You have to look at the whole picture.

 

No, you don't. Assuming one is in a position where there is very little error, each and every coord will be within the unit's rated accuracy. If one is in an area of high error, there is no guarantee of uniform distribution of the error, so averaging can't consistently produce a more accurate result.

 

I wonder if you aren't confusing your SporTrak's slingshot effect with averaging? In the case of your SporTrak, its auto-averaging (while moving, no less) introduces positional error. (I deal with the same issue with my Meridian.) To correct the error (no idea why Magellan thought this would be a good idea) you have to wait a while after stopping. This isn't averaging; this is letting the old coords from your previous position age out of the algorithm. As this happens your displayed position becomes more accurate, because your previous location is no longer being averaged into your present position.

 

With units that don't do silly things like this (and let's face it, the slingshot effect is downright stupid) the first coord upon arrival is just as likely as any other to be the most accurate one. In most cases they will all be accurate to within the limitations of the GPSr for the given environment. Averaging 3-5 coords may or may not improve the accuracy. Thus, it is pointless.

Link to comment
Averaging of waypoints is a waste of time.

Even if the hider takes several readings and averages them out, the person looking for the cache will only be working from the one current reading they are getting.

 

Get a bunch of cachers together to find a cache that was hidden by someone who averaged out several readings and see where they all end up. They are not all going to wind up at the same spot.

 

Averaging may have helped in the days of S.A., but in today's world it is just a waste of time.

Right on Dude!!! :unsure:

Link to comment
And don't forget the unfortunate fact that no matter HOW accurate your readings are, or how many hours you spent obtaining them, there's always going to be at least one person who hunts for the cache who claims your coordinates were "way off" or "out."

True. But if you look at a sample of opinions, you can realise this one is an outlier, and average the rest :unsure:

Link to comment
We'll take a one hundred waypoint scattering.  Average the error to get a radius. 

 

If I average 100 waypoints I won't end up with a radius, I will end up with a precise lat/long coord.

Dave, this perfectly illustrates that you are having a comprehension problem. Simply, you just don't get it. No wonder there are others who won't argue with you; no matter how they try to show you why you are wrong, you just don't get it.

 

I'll leave you to your own delusions with this little tidbit: if your theory was correct, no one would ever complain about coords being off. Why? Because just one reading will do. Right?

 

Be sure to let both Magellan and Garmin know that the averaging functions they built into their units are useless. I'm sure they would like to hear from someone who knows what they are talking about.

 

Ciao.

Link to comment

Dave, this perfectly illustrates that you are having a comprehension problem.  Simply, you just don't get it.  No wonder there are others who won't argue with you; no matter how they try to show you why you are wrong, you just don't get it.

 

Well I agree there is no point in continuing this discussion, there isn't anything more to be said.

 

I'll leave you to your own delusions with this little tidbit: if your theory was correct, no one would ever complain about coords being off.  Why?  Because just one reading will do.  Right?

 

Actually, I said one should double-check their reading to make sure they didn't get a far-flung coord. All I have said is that averaging 3 readings may improve upon one reading, or it may make it worse, or it may have no appreciable effect. This is a simple thing to demonstrate, and I have done so.

 

Be sure to let both Magellan and Garmin know that averaging function they built into their units are useless.  I'm sure they would like to hear from someone who knows what they are talking about.

 

I didn't say the built in averaging was useless. I said the opposite. I said it was far superior to any manual averaging method and suggested it be used when getting that single reading. In fact, what I said was this:

 

OK, let's get more practical now and simply answer the question: how do you get the most accurate coords possible for a cache listing with a minimum of effort?

 

Easy, use your GPSr's auto average function and let it sit for 1-2 minutes.

 

Odd that you would accuse me of not comprehending what you are saying when you think I am saying the exact opposite of what I wrote :D

Link to comment
I've seen quite a few new caches disabled due to incorrect coordinates. Having hidden almost 100 caches and had quite a few compliments on my coordinates I thought I would give a few tips for getting the best possible coordinates for your hides.

 

1.) First, study your GPSer's manual and go out and find some caches.  Take some time to learn how your GPSer works before trying to hide any caches. 

2.) Make sure your unit is on with sats acquired well prior to marking waypoints.

3.) Most GPSers have a display screen indicating the approximate accuracy in feet.  From my experience this number can vary depending on which direction you face.  Watch the accuracy number and mark waypoints when it shows the lowest reading.

4.) I usually take at least 3 waypoint readings.  If possible mark a waypoint on three sides of the cache and then average.  In a deep woods situation where sat. signals are poor, it often helps to walk several feet away from the cache in different directions and then walk back to it before marking the waypoints.  Either way, always give your unit time to settle before marking.

5.) Before or after you fill out your cache approval form, check your coordinates with an internet map program like Topozone.  Obviously the "X" should fall somewhere near where your cache is hidden.

6.) And lastly, check and double-check your numbers as you enter them.

 

Hope these tips are helpful to some of you newer cachers.

Keep on Caching!!!

Thank you for those helpful tips! Thanks also go to Thot for providing a useful link on the issue, and fizzymagic for sharing your scientific discipline.

 

I own an older eTrex without the averaging function, so I compensate by looking for a spot with reasonable EPE (18' or less in my case) and then place the GPSr there for a while, have it display the coordinates, and watch. When it's "settling" you can see the coordinates drift in one direction then stop. For "noise" you can see the coordinates jump in unpredictable directions.

 

After it has settled, I move the GPSr about 5' in north/south east/west direction and observe the response time. If the GPSr's coordinates don't change accordingly, then I suspect the signal in the area is not good, so I resort to "walk from different direction" approach fizzymagic mentioned, then take more readings.

 

I've tested this method on three ADJUSTED benchmarks and I got coordinates well within the EPE.

 

As for fizzymagic, I visited one of his very evil hides recently. I used my brother's WAAS GPSr and it consistently zero'd within 5' of the cache's coordinates, which allowed us to focus our search. (Yes, we found it)

 

I've had no problems with caches in our area hidden by owners known to post spot-on coordinates, and all of them use averaging as far as I know.

 

These "data" might not be good enough to convince some people, but what does it matter when I've taken the time in the field to convince myself?

Link to comment

I started to write down some definitions of random, uniform distribution, etc., because the way DaveA misuses these words makes his arguments hard to follow. As I put this list together I realized that Dave is correct in the point he is trying to make. Statistical theory shows that for a sufficiently large sample the average will approach the mean of the underlying population. There are statistical tests that can determine the likelihood that the average is within a given tolerance of the population mean even when the underlying distribution is unknown. But in any case there is a significant probability that, for a sample of three, the average will fall outside of some small tolerance of the mean.

 

It is easier to see what Dave is trying to say by taking a different example of statistical sampling that more people are familiar with. Suppose I was to take a poll of the electorate before the last U.S. presidential election. I select one person at random. That person tells me that he will vote for Bush. If I stop now, the result of my poll would have correctly predicted the election. However I want to be sure of my result so I ask a second person. That person says “Kerry”. So now I have, on average, 50% for Bush and 50% for Kerry. Pretty close to the actual vote. Just to be sure, I ask a third person. They say they will also vote for Kerry. Now my poll shows 2/3 for Kerry, 1/3 for Bush. In order to take a poll that is meaningful, I need a much larger sample. When averaging a small sample there is a significant probability that I will get bad results.
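The polling example can be made exact with the binomial distribution. For a hypothetical true split of 51% Bush / 49% Kerry, a 3-person poll calls the wrong winner whenever 2 or 3 of the 3 respondents back the minority candidate:

```python
from math import comb

# Probability a 3-person poll picks the wrong winner of a 51/49 race:
# the sample majority goes the wrong way when 2 or 3 of the 3 respondents
# back the 49% candidate (binomial probabilities).
p = 0.49
wrong = comb(3, 2) * p**2 * (1 - p) + comb(3, 3) * p**3
print(f"chance a 3-person poll is wrong: {wrong:.1%}")
```

It comes out near 48.5%, almost a coin flip, which is the point: three samples tell you next to nothing.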

 

In order for averaging to work you need a larger sample than 3 measurements. How much larger is hard to say without doing the statistics. My guess is that you don’t need a very much larger sample. However, the original post suggests averaging at least 3 waypoints, and I think it is correct to say that the average of 3 waypoints doesn’t significantly improve the chance of a repeatable measurement over taking three waypoints and selecting the one with the smallest EPE (which is what I believe Dave is suggesting). On the other hand, if you have a Magellan that does auto-averaging, or a Garmin with averaging capability, it is very likely that if you let the unit do averaging for several minutes you will get enough samples to get a highly repeatable measurement. These units might do a weighted average (where measurements with smaller EPE are given more weight in the average). This may give some improvement to the repeatability and therefore need a smaller sample to get reasonable results.
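As a rough guide to "how much larger": under the assumption of unbiased noise, the error of an n-sample average shrinks roughly as 1/sqrt(n). A quick simulation (the 15' noise figure is made up for illustration):

```python
import math
import random

random.seed(1)

# Mean positional error of an n-sample average, assuming unbiased
# Gaussian noise on each axis -- shrinks roughly as 1/sqrt(n).
def mean_error(n, trials=2000, sigma=15.0):
    total = 0.0
    for _ in range(trials):
        xs = [random.gauss(0, sigma) for _ in range(n)]
        ys = [random.gauss(0, sigma) for _ in range(n)]
        total += math.hypot(sum(xs) / n, sum(ys) / n)
    return total / trials

for n in (1, 3, 10, 100):
    print(f"n={n:>3}: mean error {mean_error(n):5.1f} ft")
```

Going from 1 sample to 3 only trims the error by about 40%, while an auto-averaging unit collecting one fix per second for a few minutes gets hundreds of samples and most of the available gain.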

Edited by tozainamboku
Link to comment

First, allow me to note that folks here are using the term "averaging" in three very different ways, and this may be causing confusion among casual readers, who may be mis-reading the intended meaning of some statements using the term. The three different usages of the term that I see employed here are as follows:

1) moving to three or more positions around the cache, and located equidistant from the cache, sometimes called "triangulating" and then storing waypoint readings at each spot, and then later averaging the readings numerically to determine the "center".

2) standing at the cache site and storing two or three or more waypoints while holding the GPSr still, and then later averaging these readings to derive one "average" reading.

3) holding the GPSr in a fixed position -- and one which allows good reception of a number of satellites -- for at least a couple of minutes, to allow the GPSr the time to perform internal lock-in and WAAS averaging. In other words, we are holding the GPSr still and allowing it to do internal "settling" or "averaging". This is what I mean whenever I use the term "averaging", and this is how I take all my waypoint measurements. I often also refer to such long-acquisition-time readings as using "settling time" in addition to calling it "averaging". I think one reason that many of us call this feat "averaging" is that some Magellan units, including the SporTrak Pro, display a "WAAS Averaging" timer on one of the nav screens; this counter/timer shows how long the processor in the GPSr has been able -- without disturbance or resetting -- to not only perform WAAS averaging but also to really integrate the signals from the satellites meaningfully.
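The numerical step in usage #1 is just a centroid computation. A sketch with hypothetical coordinates (for waypoints this close together, a plain arithmetic mean of lat/lon is fine):

```python
# Usage #1 sketch: three waypoints marked around the cache, averaged
# numerically to find the "center". Coordinates are hypothetical
# decimal degrees, not from any real cache.
waypoints = [
    (44.123450, -72.345600),
    (44.123470, -72.345640),
    (44.123430, -72.345620),
]
lat = sum(p[0] for p in waypoints) / len(waypoints)
lon = sum(p[1] for p in waypoints) / len(waypoints)
print(f"center: {lat:.6f}, {lon:.6f}")
```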

 

I personally find, using my SporTrak Pro, that even in dense forest cover, allowing a "settling time" or "averaging time", as described in item #3, of at least 2.5 minutes seems to consistently yield a really tight reading, 4-5 minutes is even better, and holding it still for 8 minutes or longer results in even tighter readings, although the improvement is minimal at this point. However, I usually find -- even in dense forest -- that the point of diminishing returns -- in terms of garnering significantly greater accuracy -- is usually reached after about 3 minutes; any further accuracy gained after this point is usually quite minimal.

 

Thus, I must agree with Quoddy, who wrote earlier that:

A recent "test" I watched at Geo Jamboree 3 should prove useful to many cache setters. In order to get REALLY accurate readings the GPSr should remain at a position for a full 12 minutes. This time frame allows all available birds to establish the correct coordinate readings. After viewing it, this seems to be far superior to the somewhat established paradigm of multiple marks.

 

I must also agree with many of the comments made by DaveA on this matter.

 

We always employ the method I have described in item #3 for taking a waypoint when placing caches. I have never received any complaints about waypoints on my caches and have often received compliments on how tight the waypoints were, even for deep forest cache placements. In my younger days, I spent many years of my life as a ham radio operator and as an electronics/RF design engineer, and my years of working with RF signals steered me, once I started playing with GPSrs, toward the method which I have described in item #3.

 

BTW, if you have noticed that I seem to have consistently talked about holding the GPSr steady in one place while taking a long-settling-time reading, rather than talking about laying it on the ground or on a log, well, that is true, and that is only because all of our team's cache placements are under fairly dense forest cover (and this is also true of many caches which we hunt as well). Thus, holding the GPSr at chest height (or higher!) in such settings almost always allows greater satellite signal strength than if the GPSr were lying on or near the ground. If I were cache hunting in an open field or in a flat desert, these considerations would not be necessary.

Link to comment
