Averaging comments


Guest Kerry

Recommended Posts

Guest brokenwing

Kerry, since you chose to move this discussion, I will post my link here as well: http://www.fs.fed.us/database/gps/mtdc/gps2000/Nav_3-2001.htm This is a U.S. Forest Service report that clearly shows averaging AS WE USE IT WHEN GEOCACHING does indeed improve accuracy, especially in forest conditions, where environmental interference can be a real problem. I will let readers decide which article they believe: one tested in forest conditions, at multiple points, showing the difference between no averaging and the average of 60 one-second readings, or your data about 24-hour averaging pre- and post-SA.

 

No one is arguing that long term averaging is needed post SA. All we are saying is that short term averaging is better than a single reading. This has been my point all along, yet you choose to counter with long term data. The Forest Service data certainly suggests a short term average is more accurate. When we consider a short term average vs. a single reading, your data is irrelevant.
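For anyone curious what this kind of short-term averaging actually amounts to, here is a minimal sketch in Python. The function name and sample values are purely illustrative, not anything from the Forest Service report: averaging a batch of one-second lat/lon fixes is just a component-wise mean.

```python
# Illustrative sketch of short-term position averaging: the arithmetic
# mean of a batch of one-second (lat, lon) fixes. Values are made up.
def average_fix(fixes):
    """fixes: list of (lat, lon) tuples in decimal degrees."""
    n = len(fixes)
    lat = sum(f[0] for f in fixes) / n
    lon = sum(f[1] for f in fixes) / n
    return lat, lon

# e.g. 60 one-second readings scattered around a true position
readings = [(45.500000 + i * 1e-6, -122.600000 - i * 1e-6) for i in range(60)]
print(average_fix(readings))
```

Averaging raw degrees like this is fine over the few metres a GPSR wanders; over long baselines or near the 180th meridian you would need more care.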

 

brokenwing

Link to comment
Guest PharoaH

I think averaging can help you and hurt you. For example:

When I am hiking along a trail to a cache, I am generally moving at a good pace (2.5-3 mph). However, when the GPSR "needle" flips to one side of the path, I slow down considerably and start looking around for clues. So, when I'm getting spotty reception, it takes longer for the averaging to adjust to my new speed. Before you say it, I know averaging is position based, but what is speed but the incremental change in position per unit time? Thus speed should matter. But I digress... Anyway, until the averaging adjusts to my new speed and direction, the GPSR reported position will have some induced error due to the averaging. In other words, in this situation I stop where the GPS says 5 feet away, wait for 5 minutes, and the distance "averages" back to a more accurate 50 feet away.

 

Now, averaging is great when you have a compass and stop to get an accurate distance and bearing from the GPS. Why do you think so many veteran cachers swear by a compass for the last 100' or so? Because of the inherent (in)accuracy of the GPS system, and because of situations like the above.

Link to comment
Guest Markwell

Ah - one more reason to love my little yellow eTrex. No automatic position averaging.

 

Hmmm. The best of both worlds? A nice little yellow eTrex for hunting on the plains of Illinois, and a top of the line Magellan with an external antenna and automatic averaging for hiding and/or dense cover.

 

Christmas is coming if anyone wants to know my mailing address. :D

Link to comment
Guest arffer

I fully concur with Brokenwing. I've stated in similar threads that averaging only needs to be for 2-5 minutes at most. Take a look at diagram 6a in the article Kerry posted. Compare the black single point accuracy plot to the 5 minute plot, and tell me that a short averaging isn't an improvement. 6a also reinforces my opinion that averaging for more than a short interval has minimal value.

 

Regarding the compass comment from PharoaH, I'm by no means a 'veteran' cacher, but I learned REAL quick how valuable the combination of GPSR averaging and a compass is. I follow the GPSR to within 150 feet, give it a two minute average, take the compass bearing, and pace off the remaining distance. Do this once more and I'm usually right on top of the cache. Hasn't failed yet.

Link to comment
Guest bob_renner

Arffer,

 

I took a look at figure 6a and I don't see a marked improvement in accuracy with the averaging. There is a slight deviation of the black line with respect to the averages, but it only provides about 1 meter improvement in accuracy, and only around the 50% region of the vertical axis.

 

Also, in the General Comments section, the author states

quote:
Based on this data long averaging time periods appear generally not practical, suitable or even required now with Selective Availability set to zero. In effect based on these results data averaging does not appear to have any practical purpose or use with SA off. This may be dependent on several other issues and possibly one cannot make a blanket statement for a world utility such as GPS without much more data and analysis.

I think this is the main reason manufacturers have discontinued the averaging feature in some of the newer receivers. They certainly have the capability to average - more and cheaper memory, faster processors, etc. With the addition of WAAS and the demise of SA, I don't think averaging is essential.

 

I will agree that it provides an improvement under heavy forest cover, but with good reception, it is a bell and whistle.

 

Bob

Link to comment
Guest jeremy

I use the shotgun approach at averaging.

 

Similar to when you shoot at a target, I do a series of waypoints over a 10-20 minute period. I make several waypoints, turn off the receiver, walk away, turn it back on and acquire satellites, then return and take more readings.

 

Once I'm done, I take the coordinates (when I return home) and plot them on a map. Visually I choose the one that is closest to the center of the shotgun pattern. Seems to work pretty well.
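Jeremy's "closest to the center of the shotgun pattern" pick can be sketched in a few lines of Python (the function name is mine, purely illustrative): compute the centroid of all recorded waypoints and return the recorded point nearest to it.

```python
import math

# Sketch of the "shotgun" pick: of all recorded waypoints, keep the one
# closest to the centroid of the whole pattern.
def shotgun_pick(points):
    """points: list of (x, y) or small-area (lat, lon) pairs."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    return min(points, key=lambda p: math.hypot(p[0] - cx, p[1] - cy))
```

Unlike a plain average, this always returns a position the unit actually reported, which is the visual-plot method in code form.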

 

If I think I have a good waypoint at the location, I play the seeker and try and locate the cache with my own coordinates (after shutting off and turning on the eTrex). If I get close at least I know it's close enough.

 

Jeremy

Link to comment
Guest Kerry

Brokenwing, I was originally going to put the link in here anyway, as it appeared the more appropriate venue; I just got sidetracked on the way. Definitely the more links, data, and thoughts the better; helps everybody, don't you think?

 

I'm a little lost with your reply regarding long term data. The randomly selected averaging periods (EVERY 5, 19 & 52 minute periods) are of course taken from ALL the data over the full 24 hours to see the trend for EACH and EVERY averaging PERIOD. That article was based around 5, 19 and 52 minute averages over 24 hours, as well as each single point position.

 

So is ~5 minutes (317 seconds) not short enough? Or would you like to see the same thing based on the same data in whatever period you nominate (30, 60, 75, 123, 348 seconds or whatever)? You call it and I'll do it, but in the meantime I'll do your 60 seconds anyway, which technically shows a deficiency in your argument.

 

http://www.cqnet.com.au/~user/aitken/gps/avt_1184_b.jpg

 

There's your short term average, and my point still stands regarding the "practical side" of the averaging effect with regard to the precision of the instrument without SA. That area you refer to is the dark hole of precision. Improvement? Technically it may be (at times, while in other periods the reverse), but by how much, and can one effectively measure it with this type of equipment? No.

 

Maybe you should go back and read it again first to clear up this long term average impression you have.

 

To all, regarding obstructions etc.: that takes in a whole range of "variables" which can't be adequately defined to make a standard statement such as "if in tree cover do this because...". It really is difficult to "prove" obstruction conditions adequately (so many different variables and conditions). One could do a trial at one position in tree cover, and sometimes moving 6? sideways will/can/might totally change the whole deal. Is it that tree, what type of tree, what type of foliage, what type of cover, or that branch? With a handheld, which way should I face? Wet v dry trees, multipath and how much and from where? etc. etc. I realize what some are trying to say, but the question still is: how do you know what the effect is? Over a short period one has the same possibility of getting a better position as making it worse. That's where hindsight is such a wonderful thing. One has to look at a wide range of data to see how one little period fitted into the OVERALL pattern/trend, as well as the periods each side of it, and the periods each side of them, and so on.

 

Averaging bad data (obstructed or otherwise) just gives a bad data average, and don't forget one doesn't have hindsight at the time.

 

Cheers, Kerry.

Link to comment
Guest CharlieP

Although I would agree with you that averaging does not improve accuracy as dramatically as it did with SA on, I think there are still some benefits. This link will support that statement.

 

http://users.erols.com/dlwilson/gpsavg.htm

 

The different formulas Wilson derives for averaging accuracy with SA on vs. SA off are quite interesting in comparison. I would hypothesize that the longer term factors with SA off are related to changing satellite geometry over time and HDOP, whereas the shorter term factors with SA on are more closely related to the SA algorithm cycle. Although my GPS does not average, I often manually average several positions over time. I find that taking several readings at intervals of 10 minutes or more, or taking readings that use different satellites, will give me a more reliable position than a one-time shot, especially if that one-time shot has a high EPE. I weight my averages based on the EPE of each reading, and if a position determined with a high EPE is well away from the others, I discard it.
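CharlieP's manual procedure, weighting each fix by its EPE and discarding high-EPE outliers, might look like the sketch below. The 1/EPE² weighting is one common choice for inverse-variance style weighting; it is my assumption, not necessarily the exact weighting CharlieP uses.

```python
# Sketch of an EPE-weighted average: lower-EPE (better) fixes count for
# more, and fixes over an optional EPE cutoff are discarded outright.
def weighted_fix(readings, epe_cutoff=None):
    """readings: list of (lat, lon, epe); EPE in metres."""
    kept = [r for r in readings if epe_cutoff is None or r[2] <= epe_cutoff]
    wsum = sum(1.0 / r[2] ** 2 for r in kept)
    lat = sum(r[0] / r[2] ** 2 for r in kept) / wsum
    lon = sum(r[1] / r[2] ** 2 for r in kept) / wsum
    return lat, lon
```

Note that EPE is itself only an estimate, so this refines the average rather than guaranteeing it.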

 

FWIW,

CharlieP

Link to comment
Guest brokenwing

quote:
Averaging bad data (obstructed or otherwise) just gives a bad data average and don't forget one doesn't have hindsight at the time.


 

Again, if the data is bad, I'd darn sure better be averaging to get the best possible results. As you have pointed out, we don't know at the time whether the data we are getting is all that accurate or not. If I were to take just a single reading and assume it's good data, that could be a foolish assumption. On the other hand, if I do as I often do when geocaching and take 120 readings (one-second frequency for 2 minutes), I discount any wild fluctuations in my data and increase my probability of accuracy.

 

That is what it's all about. My probability of having a more accurate reading is better with an averaged reading than without. Will it always be better than any single point? No. Obviously, any single reading I took might have been more accurate than the average. The problem is, I have no way of knowing that. If you look at all the point readings the average is derived from, is it likely that my averaged data is more accurate than a randomly selected point? Yes, because some readings are going to be way off, and averaging pulls the result back toward the middle of the range.
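Brokenwing's "discount any wild fluctuations" step is essentially a trimmed mean. A sketch, with the function name and default trim fraction being my own illustrative choices:

```python
# Sketch of a trimmed mean over a batch of fixes: drop the most extreme
# `trim` fraction at each end of the lat and lon distributions, then
# average what remains, so wild readings can't drag the result around.
def trimmed_mean_fix(fixes, trim=0.1):
    """fixes: list of (lat, lon); trim: fraction dropped at each end."""
    k = int(len(fixes) * trim)
    lats = sorted(f[0] for f in fixes)
    lons = sorted(f[1] for f in fixes)
    lats = lats[k:len(lats) - k] if k else lats
    lons = lons[k:len(lons) - k] if k else lons
    return sum(lats) / len(lats), sum(lons) / len(lons)
```

With 120 one-second readings and trim=0.1, the 12 most extreme values at each end of each axis are ignored before averaging.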

 

brokenwing

Link to comment
Guest Kerry

quote:

Quotes originally posted by Brokenwing

 

You say that you have single point positions, but I don't see them on the charts in your document. Am I just missing them? I think that data is an important control for determining the accuracy of averages. Another problem I have here is that you list PA time, but you don't list frequency. Without frequency, this data is meaningless.


 

The BLACK line plot (6/6a) is ALL the unaveraged single point data. That's what it's there for: to compare the averaged data to. Frequency? Or is that distribution? Again, look at the plots.

 

quote:

I'm not sure what you're trying to say here, but it seems you're confusing accuracy with precision. I would assume you know better, since you seem like a smart fellow, but your post confused me.


 

Accuracy v Precision is what all this is about. One can't have Accuracy (averaged or otherwise) if one doesn't have the precision!

 

quote:

I disagree. That is the whole point of plotting averages. We get closer to a median value, that will, by definition discount the importance of extremes.


 

Well, that can depend on the time frame and the trend of the positions over that time frame. Longer time frames will generally give a better distribution around the mean. Again, the trend of the data dictates this; short time frames really "prove" nothing. That is, the integrity of the position is uncertain.

 

That the averaged values may or may not be better is really not the issue and I think we agree there is no way of knowing what the averaging effect really is.

 

The point is, however, that without SA wildly fluctuating values these days are minimal. That is reflected in the way GPS accuracy is defined. Those same definitions applied to the averaged values give the following accuracy improvements when compared to the SINGLE POINT accuracy.

 

"Accuracy Improvement" at the 95% level averaged at

60sec (1 min) = 0.16m

317sec (~5min) = 0.26m

1166sec (~19min) = 0.22m

3124sec (~52min) = 0.37m

 

"Accuracy Improvement" at the 50% (CEP) level averaged at

60sec (1min) = 0.24m

317sec (~5min) = 0.64m

1166sec (~19min) = 0.71m

3124sec (~52min) = 0.80m
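For readers wondering where figures like "95%" and "CEP (50%)" come from: given a set of horizontal error distances from the known point, they are percentiles of the sorted errors. A minimal nearest-rank sketch (the sample numbers are invented for illustration, not Kerry's data):

```python
import math

# Sketch: the radius within which `pct` percent of horizontal errors
# fall, computed as a simple nearest-rank percentile of sorted errors.
def percentile_radius(errors_m, pct):
    """errors_m: horizontal error distances in metres."""
    ranked = sorted(errors_m)
    idx = max(0, math.ceil(pct / 100.0 * len(ranked)) - 1)
    return ranked[idx]

errors = [1.2, 0.8, 3.5, 2.1, 0.5, 4.0, 1.9, 2.7, 1.1, 3.0]  # invented
cep = percentile_radius(errors, 50)   # CEP: half the fixes within this radius
r95 = percentile_radius(errors, 95)   # 95% accuracy figure
```

Comparing these radii for single-point data versus averaged data is exactly the "accuracy improvement" comparison in the tables above.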

 

Again, in the real world, what did averaging achieve OVER and ABOVE the PRECISION capabilities of the GPS?

 

One other comment, which I also find interesting, is the claimed 1 second updates (as you mention) that practically all manufacturers make.

 

Very few appear to actually update at 1 second intervals. It's more like closer to 2 seconds, and if one looks closely at any of the data that can be recorded, there are many identical positions over any time period.

 

Actually the real values even surprised me! From the pre SA data, 73.82% of ALL data recorded (everything the unit is capable of) were "sequential duplicates", or about every 4 positions recorded. Sequential duplicates here refers to a position being "exactly" the same as the previous position.

 

Post SA data it was 92.48% or on average about every 13 positions!

 

Now that difference would appear to be almost totally due to SA being terminated and the smoothing/distribution of the accuracy curve in relation to the capable precision (there's that key word again).

 

That tends to screw averages also, but that's the same data an averaging function would be relying on.

 

Sometimes all is not what it appears.

 

Cheers, Kerry.

 

 

[This message has been edited by Kerry (edited 04 August 2001).]

Link to comment
Guest brokenwing

Well, Kerry, It's pretty obvious to me that I'm not going to change your opinion. Likewise, you're not going to change mine.

 

For those that may be following along, I'd like to try to sum up if I can:

 

Kerry's position: The difference in accuracy made by averaging is smaller than the precision of the handheld GPS units we use. Therefore, you will not see any gain in accuracy because you are saddled by the precision of the GPSR. (Kerry, please correct me if I'm wrong.) Kerry bases this on tests he conducted over a 24 hour period under (I assume) clear conditions.

 

My response: I will accept that under ideal conditions, averaging probably does not do much. This can be seen in the Estacada control point reading in the USFS report I posted above. This is because under ideal conditions, accuracy and precision are very close.

 

My position: Virtually all the situations I have encountered when geocaching place me in less than ideal conditions. Some better than others, but rarely would I call the conditions ideal. When conditions are poor, multi-path errors and poor satellite geometry are likely. This is when averaging helps because averaging decreases random errors. Think about it like this: The errors encountered in less than ideal conditions make a distribution pattern that would look much more like what Kerry probably had for his pre SA data. (Look at the USFS data from Clackamas for example.) Because accuracy and precision have a much wider gap under poor conditions, averaging helps minimize the effect of poor accuracy.

 

I would like to say to anyone that may be reading these posts and is unsure about averaging, to study the information that has been posted, especially the referenced websites. When you do, think about what is really being presented, by whom, and how that information relates to real world use.

 

Thanks,

brokenwing

 

[This message has been edited by brokenwing (edited 04 August 2001).]

Link to comment
Guest Kerry

quote:


Kerry's position: The PRACTICAL difference in accuracy made by averaging is GENERALLY smaller than the precision of the handheld GPS units we use. Therefore, you will not see MUCH gain in accuracy because you are saddled by the precision of the GPSR. Kerry bases this on tests he conducted over a 24 hour period under (I assume) REASONABLE conditions WITH REFERENCE TO THE STANDARDS.


 

Also based on standard specifications for defining Predictable GPS accuracy. Repeatable of course would be slightly harder to totally define in this manner. As stated something that can be quantified and qualified back to a standard.

 

Someone try and define me a standard that could be applied with confidence to obstruction situations to define accuracy!

 

There are so many different variables that almost every trial would show something different on almost any day of the week.

 

Cheers, Kerry.

Link to comment
Guest Markwell

OK - now that we've done all the esoteric talk and discussed various theories and empirical results, what would either Kerry or Brokenwing have done with the seven different readings I received when I placed that cache?

 

Still haven't posted it.

Link to comment
Guest brokenwing

quote:
Originally posted by Markwell:

OK - now that we've done all the esoteric talk and discussed various theories and empirical results, what would either Kerry or Brokenwing have done with the seven different readings I received when I placed that cache?

 

Still haven't posted it.


 

I think you do exactly as you listed in your first post. It is certainly acceptable to throw out any outliers and average the remaining coordinates. Alternately, just average them all. That is the beauty of averaging: either way, you're going to be pretty close. Jeremy's method also works well, and can be pretty accurate. The concept is the same; it discounts any wild readings.

 

Thanks,

brokenwing

 

----

Eschew obfuscation!

Link to comment
Guest brokenwing

quote:
Someone try and define me a standard that could be applied with confidence to obstructions situations to define accuracy!



 

Kerry, you're still missing the point. Obstructions cause errors in accuracy that cannot be adequately estimated at the time the reading is taken. This is no different from when SA was on, and hence averaging is still valid. Think of obstructions as naturally induced selective availability. The fact that no specific standards can be applied to the conditions is the reason we need to average! If we knew the error that would be induced beforehand, we could simply adjust our readings accordingly. The fact that we cannot know this beforehand is the reason we averaged in SA times, and is the reason we still need to average.

 

Thanks,

brokenwing

 

----

Eschew obfuscation!

Link to comment
Guest Kerry

quote:

Originally posted by brokenwing:

 

I would like to say to anyone that may be reading these posts and is unsure about averaging, to study the information that has been posted, especially the referenced websites. When you do, think about what is really being presented, by whom, and how that information relates to real world use.


 

Is that a closed mind tunnel vision statement or what?

 

OK, for obvious reasons I've stayed away from scenario-type obstruction situations, because every scenario is generally different and could tend to be meaningless in providing constructive information.

 

The following link is a very quick comparison of some obstructed data. Now, for those that don't believe a 65% blanket obstruction to a GPS is an obstruction, then I've wasted this time as well.

 

http://www.cqnet.com.au/~user/aitken/gps/gps_obs.htm

 

Like I said, it's just a scenario, and ALL obstructed scenarios will be different. There are so many variables to take into account, and if there's one thing to apply, it's at least awareness of the issues.

 

Also, if obstructions are naturally induced selective availability, then we should be able to define obstruction specs (SA could be specified), but they aren't and we can't.

 

One point that is constantly being missed here is that without SA the practical effect of averaging is questionable. If users think a few metres matters (remembering the equipment one is using) then that's fine, but how do you really know what it was at the time you were there?

 

Technically the next person who comes along could cop the system sweet, or be off the mark.

 

Cheers, Kerry.

Link to comment
Guest Kerry

quote:
Originally posted by Markwell:

OK - now that we've done all the esoteric talk and discussed various theories and empirical results, what would either Kerry or Brokenwing have done with the seven different readings I received when I placed that cache?

 

Still haven't posted it.


 

Markwell, it's sort of a difficult question to answer not knowing under what conditions and what the situation actually looked like at the time.

 

Basically (as I see it), jotting down or averaging a few readings over a short period of time hasn't given the system time to change any of its characteristics. If it's a really bad location, then it's certainly going to be better to have a few readings, but spaced over a longer time period. The situation this afternoon could be entirely different (for better or worse) than what it was this morning.

 

2 periods don't have much of a chance of deciding which group is right or wrong. 3 could help, but of course the more the better, based simply on stats.

 

If using a handheld, it's generally always best to face the equator; there are no satellites 40-45 degrees either side of the poles above about 45 degrees.

 

Cheers, Kerry.

Link to comment
Guest Geoffrey

Here's how the placement of the satellites and the quality of the GPS receiver affect everything:

 

This link talks about satellite geometry.

http://www.ualberta.ca/~norris/gps/DOPdemo.html

DOP = Dilution of Precision (the lower the number, the better the satellites are positioned in the sky)

EPE = Estimated Positioning Error (Accuracy)

 

Six Classes of Errors:

 

Ranging errors are grouped into the six following classes:

1) Ephemeris data--Errors in the transmitted location of the satellite

2) Satellite clock--Errors in the transmitted clock, including SA

3) Ionosphere--Errors in the corrections of pseudorange caused by ionospheric effects

4) Troposphere--Errors in the corrections of pseudorange caused by tropospheric effects

5) Multipath--Errors caused by reflected signals entering the receiver antenna. Signals bouncing off large objects (manmade or natural) can mess up the signal going to your GPS receiver.

6) Receiver--Errors in the receiver's measurement of range caused by thermal noise, software accuracy, and interchannel biases (noise within the receiver itself). Each brand and model of GPS differs in the amount of electronic noise it generates. Try putting your GPS near an AM radio. I find that the Garmin 3plus generates more electronic noise on my headphone radio than my old Garmin 3 GPS. The Magellan 4000 XL was real bad.

 

Averaging has everything to do with satellite positions in the sky. If you are averaging when the positions of the satellites are good, and then they deteriorate, then your averaging is no good. If the satellites are in a straight line across the sky, then averaging might not help until they spread out. I've had days when there were only 3 sats overhead, and days when I had a lock on a lot of satellites spread across the sky. You will only get the overhead satellites (not the lower ones) when you are under a lot of tree cover, especially in dense woods. It could happen that all the overhead satellites are behind a tree, and then you lose the signal completely. I have seen that happen on the satellite page on the GPS.

 

If you want to place a CACHE among some big tall trees, you might want to use the satellite page to determine if the sats are positioned well enough to even try an averaged position fix. Nobody has really talked about poor satellite geometry in taking a position fix, or how satellite positions in the sky affect the quality of an averaged position. Walk around the site where you want to place a CACHE while looking at the satellite page, then take a position fix.

 

The GPS navigation satellites (24 active ones) circle the earth every 12 hours, so they are constantly changing positions in the sky.

 

[This message has been edited by Geoffrey (edited 06 August 2001).]

Link to comment
Guest Markwell

quote:
Originally posted by Kerry:

Markwell, it's sort of a difficult question to answer not knowing under what conditions and what the situation actually looked like at the time.

 

Basically (as I see it) jotting down or averaging a few readings over a short period of time hasn't given the system time to change any of its characterics. If it's a really bad location then it's certainly going to be better to have a few readings but spaced over a longer time peiod. The situation this afternoon could be entirely different (for better or worse) than say what it was this morning.


 

Thanks brokenwing for your answer. Now a little followup for Kerry.

 

OK, I understand what you're saying (at least I think I do). However, in my world of Geocaching, I have a whiny 6 year old tagging along (he HATES hiding caches and keeps asking me if we can just find them). I need some way that I can maximize precision and accuracy in a short amount of time.

 

When I turn my eTrex off and walk a short distance away, the unit has to reacquire the satellites, which sometimes can take up to 5-10 minutes. I'm assuming the delay is that it can't find the satellites on which it just had a lock - and I'm hoping that the newly acquired sats would be a different constellation.

 

I usually don't have an hour to sit there and take readings, and many times, I've driven quite a distance to hide - which eliminates the possibility of driving back later in the day.

 

All I'm trying to do is make it so that I've got the best possible readings to post on Geocaching for others to find. How would you suggest I take my readings on my little eTrex (other than the facing-the-equator thing)?

Link to comment
Guest PharoaH

All this fancy talk and quoting studies... I read the reports and learned some things that I wouldn't have known before, but none of it is really necessary.

 

What I mean is this. The question a conscientious cache placer should really ask is:

 

"Did I make a reasonable effort to ensure that my coordinates are accurate enough for the average geocacher to find my cache?"

 

In my mind, this means:

1. Make sure that you have good satellite geometry. You want at least 4 satellites locked in. You want those satellites to be dispersed across the sky, not all directly above you. If you don't have good geometry, you should try another day. If that isn't an option, make a descriptive hint and state in the description that you are concerned about the coordinates.

 

2. Take several readings. Stand over the cache for a few minutes then record the coordinates. Walk away from the cache. I think 100-150' should do fine. Walk back to the cache. Stand over the cache for a few minutes then record the coordinates. Repeat a few times.

 

3. Arrive at a median value or an average. In the case of a median value, try Jeremy's "Shotgun" method. For an average, just average the coordinates.

 

4. Test your coordinates. Come back on another day and act like a geocacher. Enter your coordinates as a waypoint and see how close you can get to it. This is what I would call "Quality control".

 

5. Monitor your cache. Keep an eye on the logs. If the coordinates aren't good, you'll see it in the logs. Other cachers will occasionally post what they feel the best coordinates are.
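For step 3, the median value can be computed per component if you'd rather not eyeball a map. A sketch (the function name is mine, purely illustrative):

```python
# Sketch of a component-wise median over recorded waypoints: unlike the
# mean, one wild reading barely moves it.
def median_fix(points):
    """points: list of (lat, lon) pairs in decimal degrees."""
    def med(vals):
        s = sorted(vals)
        n = len(s)
        return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2.0
    return med([p[0] for p in points]), med([p[1] for p in points])
```

This is the numeric cousin of Jeremy's shotgun method: it discounts wild readings without you having to decide which ones to throw out.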

 

[Out, out dadgum typos!]

 

[This message has been edited by PharoaH (edited 06 August 2001).]

Link to comment
Guest Kerry

p/epe etc and note what looks like maximum conditions. One can only maximize the results at the time for that particular place, under the constraining conditions.

 

In 5-10 minutes the constellation wouldn't have changed all that much but one never knows it might have been just enough relative to the obstructions to open up a window even if only briefly? Nothing certain in that one, that's just the nature of the beast.

 

Also, that 5-10 minutes to reacquire lock from a warm start sounds like rather a long time?

 

Cheers, Kerry.

Link to comment
