
# Averaging techniques for Handheld GPS

## Recommended Posts

Sometimes you hear that a recreational GPS is "only good for 10 meters" or some other number. And some people will claim their unit is "usually within a couple feet". I undertook to study what I could get out of my old unit with a LOT of averaging and some tricks on reading out finer precision than ddd.ddddd. After several years of occasional work on the problem, I think I've gotten the process nearly as good as I can.

Here is my report to date. The main result is Figure 2 on page 10 of this edition. It shows better agreement of the average with a supposedly known position than I could hope for, and a standard deviation of about 1 ft east-west by 2 feet north-south on 1-hour averaged waypoints. There is a lot of discussion about what I think I've learned in the process.

I'd be interested in other people's accuracy and repeatability data for various models, taken under fairly well controlled conditions, not just anecdotes. Most of what I've seen is "spot" readings, and some reports of the closest readings they got to a known position, not averaged waypoints like I used.

Edited by Bill93

I will try to give some shots during the competition.

I like your little grid and when it gets a little warmer I may do some on Triangulation Stations nearby.

I've been seriously thinking about a similar experiment when warm weather arrives. It would consist of three stages.

First, do exactly as you did, holding the handheld GPS over a geodetic monument of known published coordinates.

Second, with two persons holding two identical handhelds simultaneously over two separate monuments about a mile apart, record the satellites being picked up, and the coordinates of each GPS over a duration of about an hour. The purpose of this is to compare the drift of the two handhelds at a given moment, to see if there is any similarity.

Third, use these techniques to find HU1290 (CREEK). It's somewhere in those dunes in Cape May County, buried for 73 years, undisturbed. I tried unsuccessfully to locate it last summer, even with a metal detector.

One thing we need to take into account, however. The published coordinates in the NGS datasheets are for 1983. The ground has shifted by a few feet since then.

HU1290 is unusual in that the NGS data sheet gives only azimuths and not distances to the reference marks. Still, if some of those RMs could be found and the distance between THEM measured, some resection trigonometry would let you find the distances to the tri station disk. Can you see the lighthouse for an azimuth reference?
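For anyone who wants to try it, here is a rough sketch of that resection arithmetic. Everything here is illustrative (the function name and the sample numbers are made up): you tape the distance between two recovered RMs, measure the azimuth from RM1 to RM2 (say, off the lighthouse), take the published azimuths from the station to each RM, and the law of sines gives the two station-to-RM distances.

```python
import math

def station_distances(az_s_rm1, az_s_rm2, az_rm1_rm2, d_rm1_rm2):
    """Solve for station->RM distances by resection.

    az_s_rm1, az_s_rm2: published azimuths (degrees) from the station to RM1, RM2
    az_rm1_rm2: azimuth (degrees) measured on the ground from RM1 to RM2
    d_rm1_rm2: taped distance between the two RMs (any length unit)
    Returns (distance to RM1, distance to RM2) in the same unit.
    """
    # Interior angle at the station between the two RM directions
    ang_s = abs(az_s_rm2 - az_s_rm1) % 360
    if ang_s > 180:
        ang_s = 360 - ang_s
    # Interior angle at RM1 between the line to RM2 and the back-azimuth to the station
    back_az = (az_s_rm1 + 180) % 360
    ang_rm1 = abs(az_rm1_rm2 - back_az) % 360
    if ang_rm1 > 180:
        ang_rm1 = 360 - ang_rm1
    # The three interior angles of the triangle sum to 180 degrees
    ang_rm2 = 180 - ang_s - ang_rm1
    # Law of sines: each side is proportional to the sine of its opposite angle
    k = d_rm1_rm2 / math.sin(math.radians(ang_s))
    return (k * math.sin(math.radians(ang_rm2)),
            k * math.sin(math.radians(ang_rm1)))
```

Of course this assumes you can actually find two of the RMs and measure that azimuth well; the distance errors scale with the azimuth errors.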

>"Ground has shifted"

Are you talking about local movements or continental tectonic drift? NAD83 is tied to the North American tectonic plate, so the coordinates don't move appreciably from that cause. The WGS84 coordinates of a physical point do change about 1 or 2 cm per year, or 0.5 foot per decade. Your recreational-grade handheld unit won't show any difference if you set it to WGS84 or NAD83. I found the HTDP conversion to work reasonably well for me.

I did a somewhat different experiment, testing the idea of using a GPS receiver for measuring a distance to a reference mark by calculating the coordinates of the reference mark. In my experiment, I used a long wall at the top of a parking lot with no roof. I didn't calculate coordinates, but instead I took lots of individual measurements, not averages, at several different distances along the wall that were measured with a tape measure.

The results were pretty awful, sufficiently awful that I never published the results here. The GPS receiver, as I recall, wasn't all that bad at measuring the distance, but it was pretty terrible at measuring along a straight line. Some of the points it indicated were a few feet off the line. I still have the data written on some 3x5 cards.

Edited by Black Dog Trackers

The hour-long averagings are interesting and your results look surprisingly good to me also.

However, in looking for a horizontal control mark, hour-long averaging isn't what will be done, except possibly in the case where a lot of digging is going to be required, and even then the digging area, as per your graph, is going to be about the size of a king bed. Instead, a person will be holding the GPS receiver in hand and looking at its readings while walking around. This is why, in my experiment, I wrote down every change in readings while the unit was stationary (I also calculated the average of the data). The readings wandered around a 4-foot-square area. Unfortunately, as I was saying in the previous post, it was not the best place for that kind of test.

Does your unit have WAAS? Mine doesn't.

Yes, WAAS turned on. Good point.

Hey, the spread on my data is not king size, it would almost fit on a twin!

But the results of the tests were both pleasing and disappointing. I was pleased that two methods of determining position agreed reasonably well. The triangulation is not an NGS station, but my own triangulation/resection project using a few known points spread over several miles, lots of radio towers, and a transit.

And I was disappointed to find that it does take an excessive amount of time to get down to a region that is still a bigger size of hole than you would usually want to dig.

Edit: If the usual statistical behavior holds, the standard deviation of the average goes down as the square root of the number of waypoints used. So only a few 1-hour averages would give me a 95% chance of being within the (non-Gaussian) internal computation error. As discussed in the report, I estimate the internal computation error at some points as 2 feet, because the straight-line distances (A-B) + (B-C) miss (A-C) by that much. That's what ultimately determines the size of the hole to dig.
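That square-root behavior is easy to sanity-check with a little simulation. This sketch assumes independent, zero-mean Gaussian errors, which real GPS wander over one session is not (readings minutes apart are correlated, which is exactly why spreading sessions over different times and days helps):

```python
import random
import statistics

random.seed(1)

def sd_of_average(n_waypoints, sigma=2.0, trials=2000):
    """Estimate the standard deviation of the mean of n_waypoints readings,
    each with per-reading standard deviation sigma (feet)."""
    means = [statistics.fmean(random.gauss(0.0, sigma) for _ in range(n_waypoints))
             for _ in range(trials)]
    return statistics.stdev(means)

for n in (1, 4, 16):
    # Each 4x increase in waypoints should roughly halve the spread
    print(n, round(sd_of_average(n), 2))
```

With sigma = 2 ft, the three printed values come out near 2, 1, and 0.5 ft, i.e. sigma/sqrt(N).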

If somebody can come up with carefully taken data showing another model works better, then I would seriously consider buying a new one. But I think a lot of the variability is propagation stuff that you can't get around unless you use corrections from something more specific to your locale than WAAS. That's why the pros use OPUS to compare their measurements to the nearest CORS stations. (Of course they also use carrier phase, and keep track of which satellites were used, etc.)

Differential measurement using two similar handhelds a mile apart would be interesting. You should start and stop them at the same times. But you don't have control over which satellites they want to use, and if one can see a satellite that the other doesn't then you would expect some discrepancy in the average.

Edited by Bill93

> Third, use these techniques to find HU1290 (CREEK). It's somewhere in those dunes in Cape May County, buried for 73 years, undisturbed. I tried unsuccessfully to locate it last summer, even with a metal detector.

I might be interested in joining you for that project - I've had HU1290 programmed into my GPS for 3 years now! (I always have passed on it due to time constraints.) I usually get to do recoveries, I mean vacation, there in the fall. A good 2nd location could be HU1295. (Am also still determined to find the one @ the library in C.M.C.H. - you know which one I'm talking about ;)

Incidentally, my daughter's 5th grade science project (How Accurate is a Handheld GPS?) consisted of taking 3 units to a tri-station near our house on 8 different visits (boy, did I love helping w/ that one!), where the average of the individual results was nearly spot on & the individual readings probably covered a queen size, except for 2 anomalous readings on a day w/ really lousy weather. Calculations were '5th grade math', but she did win 1st place! (She still refuses to go on 2 mile hikes to the mountaintop tri-stations, tho..)

Edited by Ernmark

> HU1290 is unusual in that the NGS data sheet gives only azimuths and not distances to the reference marks. Still, if some of those RMs could be found and the distance between THEM measured, some resection trigonometry would let you find the distances to the tri station disk. Can you see the lighthouse for an azimuth reference?

> Are you talking about local movements or continental tectonic drift? NAD83 is tied to the North American tectonic plate, so the coordinates don't move appreciably from that cause. The WGS84 coordinates of a physical point do change about 1 or 2 cm per year, or 0.5 foot per decade. Your recreational-grade handheld unit won't show any difference if you set it to WGS84 or NAD83. I found the HTDP conversion to work reasonably well for me.

I think New Jersey is shifting about 1 inch per year in a NNW direction. I'll have to check that out at work. But that would make a difference of 2 or 3 feet since 1983. If we're talking about narrowing down a handheld's accuracy to within a few feet by long term observations, that would be worth taking into account.

If you look at the photo looking north, you will see a jetty at the horizon. That is the south side of the entrance to the Cape May-Lewes Ferry (the light blue dome to the right being the ferry port). HU2834 (NJGPS 2) is on an old concrete foundation for a navigation light that could probably be seen from HU1290. I don't think the lighthouse was visible. If the second phase of my planned experiment works out, I could have my daughter get up there, chase off the seagulls and scrape off the guano, and read her handheld simultaneously.

Ernmark, I'll contact you if and when I make a second attempt.

If all goes as planned I will have a cool GPS to work with.

It is yet still a handheld.

But before I get it and go off the deep end...let me clarify.

I have 2 Garmin Rino's soon to be 3.

The ones I have now have POSITION REPORTING.

But the transmit limit is about 2 miles, plus or minus.

The new one will be 15 miles plus.......yeah buddy.

Oh, for HAM operators there is a program (APRS) for using your Garmin Rino for Position Reporting through the repeaters.

And the DGPS

I have mapped using this feature and tracks enabled.

I will set up one at a point and then set another, and then cross position report each GPS back to the other.

I did this on a Sink Hole area and it worked pretty well for determining and watching the hole grow.

I now also have a program that will save a running elevation profile and the speed the position is moving.

I will run it all on my laptop if things go just right.

I also need to do it before the leaves come out and cause those awful signal refraction problems.

I'm going to try your 4.2 method but I have a couple of questions.

If I understand correctly:

• I just make up four waypoints that are N, S, E and W of the control
• I switch the position format to User UTM or Lambert
• fiddle with the false Easting and Northing to get the four points on the display

My gps (Garmin Colorado) has two Lambert options, one uses two parallels (?) (I don't have it in front of me right now). If the Lambert projection(?) is better than just using UTM, which Lambert should I use?

The Colorado doesn't do waypoint averaging, but I was thinking that I could just turn the track feature on while it is sitting there, then get all the data into a spreadsheet and do the math there. If this works, how would I chunk up the data to make it better match your averaged waypoints?

The most convenient control for me to use is in the grass between a freeway on-ramp and the freeway, probably a good thirty feet to the pavement of each. Will the cars speeding by skew my results?

thnx

Either Lambert would probably be fine for that method. It only needs to have low map distortion over a few yards and the scale factor only needs to be good enough to be accurate to a fraction of a foot over those yards. UTM is the only user defined option I had, and as noted it has a problem with finding a northing that accommodates the distance from the equator in feet. In Lambert you might even be able to increase the scale factor and work in decimeters or 4-inch units instead of feet.

The arbitrary waypoints are to get a conversion between lat-lon and the user display format by reading out the same points both ways. I am assuming that if you enter ddd.88882 it expresses that value to its best internal representation of ddd.888820 whereas a waypoint captured live that displays that value might internally be anything from ddd.888815 to ddd.888824. I used 4 points because that averages out some of the internal rounding. That rounding is evident when you compare the sum of distances along a line to the total distance the unit displays.
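Here's roughly what those four entered waypoints buy you, sketched in Python with made-up lat/lon and grid numbers (every coordinate below is hypothetical, just to show the arithmetic): the N-S pair gives the feet-per-degree scale in latitude, the E-W pair gives it in longitude, and with those two scales you can convert any averaged fix into user-grid feet.

```python
# Hypothetical calibration data: four entered waypoints (lat, lon in degrees)
# and the easting/northing (feet) the unit displays for each in the user grid.
pts = [
    ((41.00100, -93.00000), (512000.0, 3640364.0)),  # N of the control
    ((40.99900, -93.00000), (512000.0, 3639636.0)),  # S
    ((41.00000, -92.99900), (512276.0, 3640000.0)),  # E
    ((41.00000, -93.00100), (511724.0, 3640000.0)),  # W
]

# Scale factors fall out of the N-S and E-W pairs
(latN, _), (eN, nN) = pts[0]
(latS, _), (eS, nS) = pts[1]
(_, lonE), (eE, nE) = pts[2]
(_, lonW), (eW, nW) = pts[3]
ft_per_deg_lat = (nN - nS) / (latN - latS)
ft_per_deg_lon = (eE - eW) / (lonE - lonW)

def to_feet(lat, lon, ref=(41.0, -93.0), ref_en=(512000.0, 3640000.0)):
    """Convert a lat/lon fix to user-grid feet using the calibrated scales.
    ref is an arbitrary lat/lon anchor, ref_en its grid coordinates."""
    return (ref_en[0] + (lon - ref[1]) * ft_per_deg_lon,
            ref_en[1] + (lat - ref[0]) * ft_per_deg_lat)
```

Over a few yards this linear approximation is far better than a foot, which is all the method needs.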

No waypoint averaging? That's a bummer. But if you can get user format values for track points, that's a great idea. Why didn't I think of that? I'd set the unit to capture track points every 30 seconds. I'm not sure what it would do on automatic - it may only record when it thinks it has moved and that isn't good. I'd set up the unit, let it settle for a while, and then either clear the track log or note a starting time and ignore points before that.

You could do an interesting plot with that data, by averaging blocks of N minutes worth of points into block averages, calculating the standard deviation of the block values, and plotting it versus N.
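That plot would amount to something like this sketch (plain Python, no spreadsheet; the 30-second logging interval is an assumption from the earlier post):

```python
import statistics

def block_sd(values, points_per_block):
    """Standard deviation of block averages: chop the track into consecutive
    blocks of points_per_block fixes, average each block, and take the stdev
    of those block means."""
    blocks = [values[i:i + points_per_block]
              for i in range(0, len(values) - points_per_block + 1, points_per_block)]
    means = [statistics.fmean(b) for b in blocks]
    return statistics.stdev(means) if len(means) > 1 else float("nan")

# With 30-second track points, N minutes of data is 2*N points per block:
# eastings = [...]  # feet, one per track point
# for n_min in (1, 2, 5, 10, 20):
#     print(n_min, block_sd(eastings, 2 * n_min))
```

If the errors were independent the curve would fall as 1/sqrt(N); where it flattens out tells you how correlated the wander really is, i.e. where longer single-session averaging stops paying off.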

I used an hour to be a fairly long average. I didn't want to stand out there for several hours doing something every 5, 10, or 15 minutes. I just sat in the house and went out hourly to take data.

Cars going by at that distance may have a small momentary effect but probably not consistent enough to be serious at the foot resolution level. I'd be a little afraid of setting up something in the location you describe. I've read several tales of surveyors doing real work and getting fined by CalTrans for not having a permit to be on the right of way.

Edited by Bill93

> But if you can get user format values for track points

Nope, tracks get recorded as decimal degrees. I haven't been able to decode all of the stuff coming out of the USB port yet, but the RMC sentence is in decimal minutes. Think I'll try it with the tracks and see what happens - seems like you can get fractions of meters in your result with all that averaging going on.
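For anyone capturing that serial output: decoding the lat/lon fields of an RMC sentence (ddmm.mmmm and dddmm.mmmm, i.e. degrees followed by decimal minutes) into decimal degrees is just a little arithmetic. A sketch - the example sentence is the standard textbook one, not a real fix from my unit, and it skips checksum validation:

```python
def rmc_to_degrees(sentence):
    """Convert the lat/lon fields of a $GPRMC sentence to signed decimal degrees.
    Latitude is ddmm.mmmm, longitude is dddmm.mmmm; hemisphere letters give sign."""
    f = sentence.split(",")
    lat_raw, lat_hem, lon_raw, lon_hem = f[3], f[4], f[5], f[6]
    lat = int(lat_raw[:2]) + float(lat_raw[2:]) / 60.0   # degrees + minutes/60
    lon = int(lon_raw[:3]) + float(lon_raw[3:]) / 60.0
    if lat_hem == "S":
        lat = -lat
    if lon_hem == "W":
        lon = -lon
    return lat, lon

# Made-up example fix:
lat, lon = rmc_to_degrees(
    "$GPRMC,123519,A,4807.038,N,01131.000,E,022.4,084.4,230394,003.1,W*6A")
```

The minutes field carries three decimal places here, about 2 meters of resolution, so averaging many parsed sentences can indeed get you below the display rounding.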

Anyway, I learned more about map projections and now those arbitrary waypoints I made match up very well with the SPC coordinates on the station data sheet. While out hunting I've often thought in terms of "x feet north and y feet east" when looking around at the vicinity of a station - so those eastings and northings will come in handy - thnx.

> and getting fined by CalTrans

I've found a couple of much more suitable/enjoyable stations that I can sit down and read a book at (or all the stuff I've been accumulating since I started this) - no need to attract Smokey's attention too often.

I find that with my Garmin 76s and MapSource program, the data transfers (in some format I haven't investigated) with sufficient precision that tracks or waypoints are displayed in whatever format I select in MapSource. I can set up the same USR format in Mapsource and see the same 1-foot numbers for waypoints as I had in the handheld unit. They all agree except for a very few and I assume those were so close to being rounded either way that it doesn't matter.

My USR UTM is calibrated to State Plane Coordinates like on the data sheets, but since I'm in a Lambert zone it only tracks with SPC for a short distance. It's off 10 feet by the time I get a few miles away. USR LAMBERT is a feature I'm going to want in any upgrade unit I buy.

I used Bill Wallace's excellent suggestion to use the Track Log, setting the Garmin MapSource program to my USR UTM grid display format. I saved the file as tab delimited text, did some manipulation, and ended up with an Excel spreadsheet. I ran a 10-minute moving average over the data and made these graphs.
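For anyone repeating this, the spreadsheet steps amount to something like the following sketch (the tab-delimited column names are a guess - check your own export - and the 20-point window assumes 30-second track points):

```python
import csv

def load_track(path, easting_col="Easting", northing_col="Northing"):
    """Read a tab-delimited track export into easting/northing lists (feet).
    Column names are assumptions; adjust to match the actual file header."""
    with open(path, newline="") as fh:
        rows = list(csv.DictReader(fh, delimiter="\t"))
    return ([float(r[easting_col]) for r in rows],
            [float(r[northing_col]) for r in rows])

def moving_average(values, window=20):
    """Running mean over `window` consecutive fixes.
    20 points = 10 minutes at a 30-second logging interval."""
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]
```

Plotting `moving_average(eastings)` against time reproduces the 10-minute smoothing in the graphs.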

You can see that the North coordinate seems to shift from morning to afternoon or next day. Note that the short-dashed line stays under the other lines most of the time. So even averaging for a couple hours in one session would not give as good a result as taking some 10 or 20-minute averages at different times or days.

There is a guy who has been doing GPS averaging experiments for years that people might be interested in.

His site is: David Wilson GPS Accuracy

Some of the data is pre WAAS, but there are WAAS results also.

- jlw

> So even averaging for a couple hours in one session would not give as good a result as taking some 10 or 20-minute averages at different times or days

So I was thinking that I might run the data through an FFT and that it might give an indication of the optimum period for averaging. But I guess playing around with the periods is easy enough in Excel. I have to get some time off and some data first.... sigh. In your graphs the solid line is the running average? And the short and long dashed lines are...?

> I find that with my Garmin 76s and MapSource program, the data transfers (in some format I haven't investigated) with sufficient precision that tracks or waypoints are displayed in whatever format I select in MapSource. I can set up the same USR format in Mapsource and see the same 1-foot numbers for waypoints as I had in the handheld unit.

So when you export from Mapsource it stays in your user grid format? I'll have to try that.

> There is a guy that has been doing GPS averaging experiments for years people might be interested in.

Thnx.

Wilson has a lot of interesting data. Very little of it relates to what can be achieved with long averages using WAAS. The ultimate accuracy is hard to predict from shorter-term rms error (even with a lot of data) because the real behavior is not Rayleigh. The computational accuracy and display rounding get into it when your averaging is long enough to get the HDOP and other effects under control.

The heavy line is the "accepted" position, which I expect to have an accuracy better than 1 foot.

The other lines are 3 different runs on an afternoon, a morning, and a different afternoon.

I tried an FFT and didn't learn a lot right away. It may take a LOT of data to get a smooth enough plot to see much. And note that if the propagation differs from day to day you would need many days worth of data to get those frequency components represented in the FFT plot.
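For the record, the FFT attempt looked roughly like this sketch (function name and the 30-second sample spacing are my own choices): remove the mean, transform, and read off the periods of the strongest bins.

```python
import numpy as np

def dominant_periods(values, dt_seconds=30.0, top=3):
    """Rough spectral look at one track coordinate: FFT of the mean-removed
    series, returning the periods (in minutes) of the `top` strongest
    non-DC components."""
    x = np.asarray(values, dtype=float)
    x = x - x.mean()                       # remove the DC (average position) term
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=dt_seconds)
    # Rank bins by magnitude, skipping bin 0 (DC)
    order = np.argsort(spec[1:])[::-1][:top] + 1
    return [1.0 / freqs[i] / 60.0 for i in order]
```

As noted, a single session only resolves periods shorter than the session itself, and day-to-day propagation changes never show up at all without many days of data.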

I believe the transfer from the Garmin 76s to MapSource uses some format with sufficient precision to retain most of the accuracy in the data. That needs to be investigated for confirmation. The handheld unit and the MapSource program each perform conversions to whatever display format you select, and they agree except for an occasional least-digit difference where I suspect one rounded xxxx499 down and the other up. You have to set up the USR format in each place if you want to see that display, but I don't think it controls the transfer.

Not as fancy as yours, but I have been averaging during the contest and have one ready.

This one was within 2 feet or less.

HD1354 SALEM

Not as lucky with some of the others but close.

I will add a few as I get better at it.

Bills (both!):

I'm not a math expert, nor the best GPS expert on the forum, but:

1) I'm reasonably sure the GPS HDOP "variation" is not completely random. It is tied to all sorts of physical phenomena (day/night, ionospheric variations, clock and orbital variations of 20+ sats, etc), each with its own time cycle.

2) Your "averaging" methods all assume (or depend on, or are designed to work with) variation that is random.

3) At best, I think that a very long term average will eventually converge on a composite of the various physical phenomena cycles, possibly with an underlying random noise floor.

There is a reason survey grade GPS receiving "systems" (including off-line NGS corrections, etc) go through all sorts of wickets to correct the collected data. It would be nice if a long term average of a consumer grade GPSr would result in a "really good" position. But - I don't think it will happen. A little better, sure. Worth 10 minutes when placing a cache, or maybe 30 minutes before digging for a triangulation station. But I wouldn't expect huge improvements. Just my opinion / experience.

Larry

(Klemmer)

Edited by Klemmer & TeddyBearMama

1) Yes there is probably determinism behind some of the effects. If you don't know the details and there is no obvious pattern in your results, then treating them as random is a reasonable thing to do. As an extreme sort of analogy, physics could determine how a flipped coin will land given all the linear and angular velocities, distances, etc, but you don't know the details so you treat it as random.

2) The most important assumption is not that the variation is random but that it is unbiased (zero mean) when sampled at "random" times. Even a pure sinusoid meets that test. If you take a reading every day when the constellation is very similar, then unbiased might be a shaky assumption. If you have a bunch of different constellations (times) and different ionospheric conditions (days), then I think it isn't a bad assumption.

3) Your post has a tone of "you shouldn't try to do that." This data is an attempt to find out what can be done, and what the limitations actually are, rather than giving up, guessing, or relying on anecdotes like "yesterday I was within 2 feet".

I certainly need similar data for more known positions before having great confidence in the results. Particularly so regarding whether it is unbiased and the limits on internal computation precision. What I have so far seems like a reasonable and informative start.

Survey grade equipment and processing gets positions to centimeters by using more of the GPS signal features, primarily carrier phase tracking at both the rover and reference stations. I know we won't get there with a C/A-code receiver and its internal processing accuracy.

What I seem to be finding is that you CAN do better than a 10 or 30 minute average if you have a reason to put the time into it. It is better to get data at multiple times rather than one long average, and HTDP is better than the datum (non-) conversion in the handheld. The paper also discusses some of the problems that are encountered.
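On point 2 above, the claim that even a purely cyclic error averages out under random-time sampling is easy to demonstrate with a toy simulation (the 12-hour period and 3-foot amplitude are made-up illustration values, not measured ones):

```python
import math
import random

random.seed(7)

def session_mean(n_readings=50, period_hours=12.0, amp_feet=3.0):
    """Mean error over one 'session' of readings taken at random times,
    when the error is a pure sinusoid (e.g. a constellation-repeat cycle)."""
    times = [random.uniform(0.0, 10 * period_hours) for _ in range(n_readings)]
    return sum(amp_feet * math.sin(2 * math.pi * t / period_hours)
               for t in times) / n_readings

# Averaging many sessions taken at random times drives the bias toward zero;
# sampling at the same time each day (same constellation) would not.
overall = sum(session_mean() for _ in range(200)) / 200
```

The individual session means scatter around zero, and the grand average lands well inside a tenth of the sinusoid's amplitude, which is the "unbiased, not random" point exactly.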

Bill(s):

Oh, no, I didn't mean you shouldn't try it. Sorry if it sounded that way. I was just trying to point out some of the pitfalls and other issues. Try away! I've been thinking of trying some long period averaging myself, which is why I gave the concept some thought. But, I don't really have a practical way to do it, that I have figured out yet.

Larry

(Klemmer)

Although my benchmarking experience is bordering on nil:

It seems that it might be worthwhile if you could determine

• that taking x waypoints over a period of time will bracket the true location
• that inaccuracies are biased towards a certain quadrant at certain times of day

Generalized things like that might make finding a pesky station easier ( for us beginners anyway): narrowing your search to a few square feet instead of a few square meters.

I happened on an interesting plot in a thread on another forum, showing what a professional GPS receiver measured on one point on several different days, before applying any corrections. His receiver should be better than ours because it is using more parts of the satellite signals, and his corrections, when later applied would be better than WAAS.

I tried playing around post-processing hand held GPS data a while ago.

I was able to capture the raw data from my Garmin and convert it to RINEX format using the free Gar2rnx program: http://artico.lma.fi.upm.es/numerico/miembros/antonio/pd/

Topcon Tools (commercial GPS software) flat out refused to have anything to do with the data.

So then I tried sending it to be post-processed through Natural Resources Canada's Online Global GPS Processing Service:

http://www.geod.rncan.gc.ca/products-produits/ppp_e.php

I got some real interesting results from this. Much more detailed than you get from OPUS. These included Estimated Parameters & Observations Statistics with graphs. The sad thing is, after all this it was still about 2 meters off from the station. This is when the statistics showed the standard dev. leveling out at 0.5 meters after about 20 min. My conclusion is: for less than $150 my Garmin will get me closer than I ever would have gotten without it. But if I want survey accuracy I will need survey equipment.

This was still fun and if you want OPUS like results from your hand held try the Canadian site. You can see your "Tropospheric Zenith Delay" in meters and your "Station Clock Offset" in nanoseconds. Don't know what that means but it sure sounds technical.

Was it giving you NAD83 or ITRF positions? For California the difference is 1.25 meters more or less which may be a major part of your difference.

- jlw

> Was it giving you NAD83 or ITRF positions? For California the difference is 1.25 meters more or less which may be a major part of your difference.

CSRS

CSRS is the same as NAD83

I think the problem is in the receiver, too many cycle slips. And the chosen antenna type is "none" - I don't know where the phase center on a Garmin Etrex is.

Edit: I first thought you were depending on the Garmin conversion between datums, which is a null transform, but on rethinking it, I guess your technique really does end up in NAD83 (which realization, though?).

Edited by Bill93
