
Saturday, January 31, 2015

OneShot125: Quantitative Testing

Note: this original spoke of the sensor rate being 666. However, this was a mistake and the sensor rates were 500 Hz and 1000 Hz.

There has been a lot of talk about OneShot125 mode for ESCs and how much better you can tune your quadcopter with it. Briefly, this is a mode that uses a pulse width of 125-250 µs instead of the typical 1000-2000 µs. The shorter pulse width allows a higher update rate (up to 2 kHz). In addition, the recommendation is to update the output as soon as you calculate a new value. This improves the sensor-to-ESC latency in two ways: the synchronous update removes the latency of two independent loops, and the shorter pulse duration saves another 1.5 ms waiting for the falling edge of the pulse. With an update rate of 400 Hz on the PWM outputs, that latency has a worst case of 2.5 ms, so ultimately you are talking about a roughly 4 ms latency reduction.
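As a back-of-the-envelope check, the latency arithmetic above can be written out (a sketch; the 2 ms maximum traditional pulse and 0.25 ms maximum OneShot125 pulse are taken from the numbers quoted):

```python
# Worst-case latency savings from synchronous OneShot125 updates:
# you no longer wait for the next PWM period, and the pulse itself is shorter.
pwm_period_ms = 1000.0 / 400          # 2.5 ms between traditional 400 Hz updates
pwm_pulse_ms, oneshot_pulse_ms = 2.0, 0.25  # maximum pulse widths for each mode

saving_ms = pwm_period_ms + (pwm_pulse_ms - oneshot_pulse_ms)
print(saving_ms)  # 4.25 -> roughly the ~4 ms reduction quoted above
```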

It sounds somewhat unlikely that such a small time delay would make a difference, but at the same time ESC latency is the ultimate performance bottleneck for quadcopters. With ESC response times in the tens of milliseconds, this communication latency can become an appreciable component.

Back in December ernieieft wrote Tau Labs support for it and, with autotuning, got higher PIDs and felt it was quite locked in. Here you can see the outputs from the four channels as well as the interrupt from the MPU9250 on Sparky2. This shows that each time there is a sensor update, shortly afterwards there is a pulse on the output channels (synchronous updates).


Zooming in, we can see the latency from the MPU9250 indicating a sample is ready to the pulse actually completing: about 300 µs. Much better than the worst-case 4 ms described above. This is with a short output pulse, so in reality the time could go as high as 450 µs.


However, I wanted to quantitatively measure how much of a difference OneShot125 mode really makes. Luckily, our autotuning algorithm allows monitoring the time constant of the delay from a change in the output to a change in the gyros.
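The quantity being measured is essentially the time constant of a first-order lag between output and gyro response. A toy sketch of how such a time constant can be recovered from step-response data (not the actual autotune implementation, which identifies the model in flight):

```python
import numpy as np

# A first-order lag y' = (u - y)/tau is a common model for ESC+motor response.
# Recover tau from simulated step-response data -- the same quantity autotune
# estimates, but from an idealized offline fit.

def simulate_step(tau, dt=0.001, t_end=0.2):
    t = np.arange(0.0, t_end, dt)
    return t, 1.0 - np.exp(-t / tau)  # unit step response of a first-order lag

def estimate_tau(t, y):
    # For a step response, ln(1 - y) = -t/tau: a straight-line fit gives tau.
    mask = y < 0.999                  # avoid log of ~0 at the settled tail
    slope = np.polyfit(t[mask], np.log(1.0 - y[mask]), 1)[0]
    return -1.0 / slope

t, y = simulate_step(tau=0.020)       # 20 ms, in the ESC range mentioned above
print(round(estimate_tau(t, y), 4))   # 0.02
```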

Normal PWM mode

So first I ran autotuning on Seeing Spark with Sparky2, using KISS ESCs in traditional mode. This is with the sensor running at the default rate for Sparky2 (500 Hz).


After three repeats, this is what I got:




OneShot125

Then I cracked SeeingSpark open and soldered the JP1 bridge to enable OneShot mode.




Note: per request I have put the files to make this power distro board on OSHPark. I'll try and make a BOM and assembly guidelines and upload that soon.

Here you can see the outputs running at 500 Hz synchronously to the sensor.


Then popped in a fresh battery, and did a few more sessions of autotuning.






OneShot125 - 1000 Hz sensor rate

I also wanted to see if increasing the sensor rate to further reduce the latency would yield more improvement, so I set it to 1 kHz.


Then I ran another three rounds of autotuning.




Statistics

So of course, the punchline: is there a significant difference between traditional PWM and OneShot mode, as measured by the time constant of the response? Yes. ANOVA shows p < 0.01 for an effect of output type:

ANOVA results
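For reference, the one-way ANOVA itself is simple enough to compute by hand. The sketch below uses made-up time constants, not the measured values from this post:

```python
# One-way ANOVA computed directly: partition the variance into between-group
# and within-group sums of squares and form the F statistic.
# Hypothetical time constants in ms -- illustrative numbers only.
groups = {
    "pwm":        [21.0, 20.5, 21.8],
    "oneshot500": [17.0, 16.5, 17.4],
    "oneshot1k":  [16.8, 16.2, 17.1],
}

all_vals = [v for g in groups.values() for v in g]
grand = sum(all_vals) / len(all_vals)
k, n = len(groups), len(all_vals)

ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups.values())
ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups.values() for v in g)
F = (ss_between / (k - 1)) / (ss_within / (n - k))
print(F)  # a large F; the p = 0.01 critical value for (2, 6) dof is about 10.92
```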

And the multiple comparison shows that both OneShot tests were significantly different from traditional mode.



Conclusion

Using OneShot mode on this quad results in about a 20% reduction in ESC latency. Even before OneShot, these KISS ESCs were pretty damn fast, so it already flew really well. I'm not sure I'll be able to tell the difference in rate mode, but I do believe a better acro pilot than me could. I also haven't run these parameters through the system models yet to see what the difference in effective control bandwidth will be for the outer loops (e.g. attitude).

It also remains to be seen whether there are similar benefits with BLHeli, and I believe SimonK has OneShot support coming soon. I'll probably repeat these tests once I have the right hardware.

Saturday, January 24, 2015

EKF: Enhancements and Unit Tests

We have a very nice EKF, written by Dale Schinstock from Kansas State University, that is used for navigation in our code. Generally it works quite well, although it is fairly sensitive to tuning parameters. I've discussed this somewhat in this post about the magnetometer and this one about vertical control.

One thing that I haven't liked (and has slowed me down in rolling out updates to the EKF) is that we don't really have any test coverage of it - especially for various initial conditions and systematic biases that might exist.

Unit tests

Recently I wrote a python wrapper that calls into the EKF. I've had something similar for Matlab in the past, but of course the problem with that is it requires a Matlab license and cannot be easily integrated into a test environment. With this wrapper in hand, I wrote a series of unit tests (using the Python unittest framework). With these I can systematically test things like how rapidly the filter converges from various initial conditions given a set of inputs and measurements (with the option to visualize the results, but with this disabled when testing systematically).
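The structure of such a test looks roughly like the sketch below. The `FakeINS` class and its `prediction` method are stand-ins I invented for illustration so this runs standalone; the real python wrapper exposes the C EKF's predict/correct interface:

```python
import unittest
import numpy as np

class FakeINS:
    """Trivial stand-in for the python EKF wrapper so this sketch runs as-is."""
    def __init__(self):
        self.gyro_bias = np.zeros(3)

    def prediction(self, gyros, accels, dt):
        # a real filter propagates the full state here; this fake just
        # low-pass "learns" the stationary gyro reading as the bias
        self.gyro_bias += 0.01 * (gyros - self.gyro_bias)

class EKFTests(unittest.TestCase):
    def test_gyro_bias(self):
        """Inject a constant gyro bias and check the filter tracks it."""
        ins = FakeINS()
        true_bias = np.array([0.17, 0.0, 0.0])  # rad/s
        for _ in range(2000):  # simulate sitting still for a few seconds
            ins.prediction(gyros=true_bias,
                           accels=np.array([0.0, 0.0, -9.81]), dt=0.002)
        np.testing.assert_allclose(ins.gyro_bias, true_bias, atol=0.02)

program = unittest.main(exit=False, argv=["ekf_tests"])
print(program.result.wasSuccessful())  # True
```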

For example, I can simulate biased gyro inputs and verify that it tracks that bias correctly.

test_gyro_bias

Or, when initialized at a totally wrong attitude, the EKF will converge to the correct one. Here it takes longer than I would like; optimizing convergence times (through the EKF tuning) is on my list of things to do, and then I will add worst-case values to the test assertions.

test_init_bad_q_bias

When given a position different from the initial location, it quickly snaps to that location at the first update (because the initial position variance is quite high).

test_pos_offset
It is also possible to inject noise and verify that the system remains stable in this case.

test_stable

Problematic cases

It can also expose certain problems with the current EKF. For example, if the mag isn't perfect (for example, I shrink the z-axis measurement), that results in a biased attitude, which subsequently couples bias into the velocity and position. In this case, the quad is meant to be sitting still at the origin (except for the mag). It also learns a bias (that isn't there) to explain why the attitude isn't making sense.

test_mag_offset

In a related problem, when simulating starting out facing north and presenting mags consistent with facing east, it does reasonably well but ends up picking up some roll/pitch errors that influence position:

test_face_west
Another issue is that a fairly small (e.g. 0.2 m/s^2) bias in the accelerometers can really bias the vertical velocity and altitude.

test_accel_bias

Fourteen state EKF with mag-attitude decoupling

To address these shortcomings, I made two major changes to the EKF. The first is to track the accelerometer bias in the z-axis. The complementary filter used for altitude hold mode has such a feature, and it proves quite useful for getting robust performance. We previously had a 16-state variant with biases on all three axes, but in my hands that is overkill and can actually be overparameterized - so there can be incorrect solutions to a given set of measurements that are not a good state.

The second was a bit trickier - making the magnetometer values influence only the heading (in the earth frame) without influencing roll and pitch. The solution is to pre-transform the magnetometer measurements by backing out the current estimate of roll and pitch. Then the predicted measurement is based only on the heading term. It ends up being quite a bit of math, so I used Matlab to rework all the equations and come up with an efficient covariance prediction matrix.
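A minimal sketch of that pre-transform idea, using a simple Euler-angle tilt compensation (sign conventions and axis order here are assumptions that vary between frame definitions; this is not the actual Tau Labs code):

```python
import numpy as np

def tilt_compensated_heading(mag_body, roll, pitch):
    """Rotate a body-frame mag vector into a level frame; only heading remains."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    # standard tilt compensation: project the mag vector onto the level plane
    mx = mag_body[0] * cp + mag_body[1] * sr * sp + mag_body[2] * cr * sp
    my = mag_body[1] * cr - mag_body[2] * sr
    return np.arctan2(-my, mx)  # all that should feed the EKF mag update

# facing north and level, vs facing north pitched up 0.3 rad:
level = tilt_compensated_heading(np.array([1.0, 0.0, 0.0]), 0.0, 0.0)
pitched = tilt_compensated_heading(np.array([np.cos(0.3), 0.0, np.sin(0.3)]), 0.0, 0.3)
print(abs(level) < 1e-9, abs(pitched) < 1e-9)  # both headings ~0: pitch no longer leaks in
```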

This combination ended up fixing those issues. For example, having a badly scaled (but facing the correct way) magnetometer reading ends up producing positively boring results, with no bias of the attitude:

test_mag_offset

Starting out facing the wrong way just nicely rotates around to the correct heading without any errors in the attitude (and thus no position errors).

test_face_west

And having a biased accelerometer is nicely tracked and corrected for.

test_accel_bias

At the same time, it still correctly identifies when there is a bias in the gyros, and learns this while still correctly identifying a zero accel bias. Note that the position and velocity drift while it learns the gyro bias and gets the attitude correct.


[Note to self. 49549c3b9c682641d2461a90e176a2f3db7dc1b8 passes all these tests]

Tuning and optimizing convergence

Armed with a filter that can at least deal with some of the systematic errors we anticipate seeing on a multirotor (e.g. distorted mag fields, imperfect accel bias), the next step is to tune it to behave well in real life. There are a few sets of major parameters that define the EKF performance:
  • gyro noise - Q[0..2]: the amount of error expected between the gyro inputs and the change in state. Increasing this variance will make the system trust the gyros less (and thus make things like the mags relatively more influential)
  • accel noise - Q[3..5]: the amount of error expected between the accel inputs and the change in velocity. Increasing this variance will make the velocity estimation trust the GPS and baro relatively more. It will also alter how the roll and pitch are estimated - basically tuning between trusting the gyros versus the accels
  • bias walk - Q[6..9]: how much the bias is expected to change during flight. Making this value smaller means the system will take longer to converge when estimating the bias terms, but will be less likely to pick up random bias values when there is noise
  • gps position noise - R[0..2]: how much noise is expected from the GPS position measurements. Setting this lower will trust the GPS more; setting it higher will trust the integrated velocity estimate more
  • gps velocity noise - R[3..5]: how much noise is expected from the GPS velocity measurements. Setting this lower will trust the GPS more; setting it higher will trust the accels (and position) more
  • mag noise - R[6..8]: how much noise is expected from the mag. Trusting this more will give a more accurate heading but be more susceptible to anything that distorts the mags
There is also a degree of redundancy amongst these terms, in that the covariances can all be scaled by a constant without changing the behavior of the state estimate. However, it is convenient to try and preserve the real units (position in m, velocity in m/s, etc.) so the variance has meaningful units (true m^2 variance).
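That scaling redundancy is easy to demonstrate on a one-dimensional Kalman filter: multiplying Q, R, and the initial P by the same constant leaves the gain sequence, and hence the state estimate, unchanged:

```python
# The covariance-scaling redundancy in one dimension: scale q, r and the
# initial p by the same factor and the Kalman gains come out identical.

def kalman_gains(q, r, p0, steps=20):
    p, gains = p0, []
    for _ in range(steps):
        p = p + q          # covariance prediction
        k = p / (p + r)    # Kalman gain
        p = (1 - k) * p    # covariance update
        gains.append(k)
    return gains

g1 = kalman_gains(q=0.01, r=1.0, p0=10.0)
g2 = kalman_gains(q=0.01 * 7, r=1.0 * 7, p0=10.0 * 7)  # everything scaled by 7
print(max(abs(a - b) for a, b in zip(g1, g2)))  # ~0: identical gain sequences
```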

Bias convergence rates

One set of parameters that must converge quickly enough are the gyro bias terms. We try to initialize these at startup, but tracking changes over time is still important and a critical function of the EKF. To test this I measure how quickly the EKF learns the gyro bias (to within 10%) after initialization. This takes multiple minutes to converge, which is slower than we want in practice. Using this code we can increase the bias walk parameters for the XY gyros, the Z gyro (which is affected differently since it gets corrections from the mags instead of the accels), and the Z accelerometer. My goal was for the gyros to converge to within 10% of the correct value (when initialized 10 deg/s off) within 30 seconds. For the accel bias I initialized with 1 m/s^2. After tweaking the variances this was achievable (without making it too fast -- thus tracking noise -- or failing any of the previous tests).

test_gyro_bias
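The convergence-time measurement itself can be sketched with a toy exponential bias learner standing in for the EKF (the gains and the resulting times here are illustrative, not the actual tuned values):

```python
# Run a simple bias estimator from a deliberately wrong start and find when
# it first comes within 10% of the true value -- the same pass/fail criterion
# described above, on a stand-in model.

def time_to_converge(gain, true_bias=10.0, dt=0.002, tol=0.10, t_max=120.0):
    """Seconds until the estimate is within tol of the true bias (deg/s)."""
    est, t = 0.0, 0.0
    while t < t_max:
        est += gain * dt * (true_bias - est)  # simple exponential learning
        t += dt
        if abs(est - true_bias) <= tol * true_bias:
            return t
    return None  # never converged within t_max

print(time_to_converge(gain=0.05))  # ~46 s: too slow for a 30 s goal
print(time_to_converge(gain=0.10))  # ~23 s: doubling the learning rate gets there
```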

Checking that it did not fail any of the previous tests (including those with simulated noise) is critical, because if the bias drifts around it can create very bad estimates.

Changing biases

An initial mismatch in the biases is also different from a change during flight. The initial covariance parameters also play into the former. The latter should be fairly stable while still tracking slower changes. This can be simulated by letting the filter run for 30 seconds and then introducing the bias.
changing test_gyro_bias

You can see that changing the gyro bias causes the accel bias to briefly change. This is because the biased attitude gives the velocity and position some error, but it self-corrects. These influences are unavoidable in a coupled filter like the EKF. Similarly, we can change the accelerometer bias, which takes a bit longer to correct.

changing test_accel_bias

Replaying simulated flights

I also wrote a very simple simulator which mimics taking off and flying in circles (with a net drift) and passes that data, plus noise, to the real implementation of the EKF to make sure it behaves correctly. Again, this can also be done with an initial bias added or an incorrect state, to verify that the system ends up at the correct state.
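A minimal version of such a simulator might look like this (radius, rates, and noise levels are invented for illustration; the real one also simulates takeoff and drift):

```python
import numpy as np

# Generate a ground-truth circular trajectory, then produce noisy "GPS"
# samples from it that would be fed to the EKF's correction step.

def circle_flight(radius=10.0, omega=0.5, dt=0.01, t_end=30.0, noise=0.5, seed=0):
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, t_end, dt)
    pos = np.stack([radius * np.cos(omega * t),          # NED position, 5 m up
                    radius * np.sin(omega * t),
                    -5.0 * np.ones_like(t)], axis=1)
    vel = np.stack([-radius * omega * np.sin(omega * t), # analytic velocity
                     radius * omega * np.cos(omega * t),
                     np.zeros_like(t)], axis=1)
    gps_pos = pos + rng.normal(0.0, noise, pos.shape)    # noisy GPS fixes
    return t, pos, vel, gps_pos

t, pos, vel, gps = circle_flight()
# each noisy sample would go to the EKF correction step in the real test
print(gps.shape, np.linalg.norm(pos[0, :2]))  # (3000, 3) 10.0
```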

First for a simple flight without anything mismatched (except for sensor noise) the state estimates all do the correct things.

test_circle

Then we can test that it converges when initialized with the wrong attitude (the dotted line shows the real data):

test_bad_init_q

Or with either biased gyros or accels and check that it converges to the correct flight plan and bias values:

test_gyro_bias_circle

test_accel_bias_circle
This test is a really good one: simulate the quad rocking pitch up and down while yawing. This is a good, robust test of the mag attitude compensation. In fact, at one point while doing this I noticed an issue where the accel bias could wander a bit too much. This led me to find and fix a bug in how the compensation was applied. You can see below that the biases stay stable.

rock_and_turn

Replaying real flights

I've written about using python to analyze the logs we collect here and here. With this code it is also possible to replay log data from a real flight. This is important because there are numerous places where real-world violations of the model might cause issues.

Here is a replay of a position hold flight compared to the real data (the online estimates). You can see the heading tracked the previous estimate well. The position tracked the real data. And finally, the biases were fairly stable through the flight, although perhaps with a bit more oscillation in the gyro bias than I would like.



Flight tests

Of course, all this simulation is well and good, but what really matters is how well it performs. In practice the heading was nice and locked in, and the annoying twitch in altitude that I had observed in the past when engaging position hold was gone.


Here is another flight. This was flying Seeing Spark with Sparky2 running this new INS and logging to the android app via a TauLink. I used the log files to generate a video overlay to better see the performance in time with the video. I was mostly flying in position hold mode and using the loiter feature.


The estimates for position look really good and track well with what the video shows the quad doing. Some of the time it drifted laterally, but it pretty much always indicated that on the map, so this is most likely an issue of tuning rather than a problem with the filter. Vertical velocity seemed to be generally in agreement with what the video showed. In forward flight, though, the altitude estimate did not seem to track as well - often indicating it was climbing while it was dropping. At the end of the maneuver the altitude estimate would drop rapidly to catch up. Interestingly, in a number of these occurrences my impression was that the climb-rate plot was doing the correct thing (and contradicting the altitude estimate).

This disagreement where the velocity looks correct and the altitude goes the opposite way suggests that the accels are producing fairly good data but the baro is not. It's possible the airframe shape of seeing spark with the board on the inside of a cavity creates negative pressure when flying forward. This could cause it to think it is climbing while it is not. I need to try repeating this with another open frame.

Baro glitch

Here was another interesting moment. I was testing with Freedom on a QAV500 frame. I noticed in the logs some times when the baro put out REALLY weird values - like shooting up 50 m in a very short time. When analyzing the logs with video, something became really clear:


So the bottom right has this silly spiky pattern in altitude that is not real. Interestingly, the spikes happen once every cycle while I'm spinning level. I realized this is when the sun is hitting the baro. Definitely time to cover it.

Wednesday, December 31, 2014

More log analysis

I've written previously about using python and log parsing, which this writeup uses heavily.

I was testing position hold with the new 14-state INS the other day with Sparky2 on Seeing Spark. The new filter is working great. I ran autotune and 6-point calibration, engaged position hold, and it held beautifully still. The new estimation of the z-axis accel bias worked as it should, so there was no glitch in the altitude. The new magnetometer handling meant that the attitude was wonderfully locked in and not biased, while tracking the heading really well. Spinning around while holding, it didn't budge a bit.

I wrote some code to calibrate the magnetometer while spinning which I think will be quite useful. It fits the data to a sphere as well as making sure the horizontal component has the appropriate magnitude (this prevents fitting to an edge condition).



You can see that the blue data (x versus y axis) fits a circle quite nicely. This is both an easier calibration procedure than 6-point and more useful, since you can see the deviation from correct (the plot on the right shows the magnitude of the mag data) and it is performed with the motors running at hover. This will also work with the built-in logging to flash.
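The sphere fit at the heart of this kind of calibration reduces to a linear least-squares problem, since |p - c|^2 = r^2 is linear in the center c and in r^2 - |c|^2. A sketch of the general technique (not the exact Tau Labs code, which also constrains the horizontal magnitude):

```python
import numpy as np

# Fit a sphere to 3D points by solving 2 p.c + (r^2 - |c|^2) = |p|^2
# as a linear least-squares system; c is the hard-iron offset.

def fit_sphere(pts):
    A = np.column_stack([2 * pts, np.ones(len(pts))])
    b = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius

# fake mag data: points on a 400-unit sphere, shifted by a hard-iron bias
rng = np.random.default_rng(1)
dirs = rng.normal(size=(500, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
pts = 400.0 * dirs + np.array([30.0, -12.0, 5.0])

center, radius = fit_sphere(pts)
print(np.round(center), round(radius))  # recovers the bias and field strength
```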

Analyzing glitch

However, at one point it was just hovering and then started going off to the side. I've occasionally seen things like this in the past and really wanted to get to the bottom of it. Whenever I start digging into navigation logs I typically end up writing the same lines of python over and over again. I finally decided to sit down and write a log analyzer to facilitate this.


./python/logview.py -v TauLabs-2014-12-31_16-11-30.tll




This shows a snippet of the log file. The upper left panel shows the position and the upper right the velocity. You can see there is substantially less than a meter movement while holding and very low velocities. The bottom left shows the attitude and there are only a few degrees perturbation. The bottom right shows the gyro and shows that the quad was spinning around at the time. The interface also has a few options to toggle extra plots.

I zoomed in on when the hold deviation occurred:


What was extremely informative is that you can see the raw GPS position and velocity jump by a few meters at 188 seconds. Critically, this occurs before the attitude deviates, and not as a result of flying. Here is that time point zoomed in:


Again, this shows that there are clearly samples with the attitude nearly horizontal right up to the point where the bad position sample comes in. In addition, you can see the INS racing to catch up (which wouldn't happen if there were a real change first, since the accelerometers would sense it).

The end result of this bad position sample was that the UAV flew in the opposite direction to fix the perceived error. This is a tough problem, since we generally have to trust the GPS to have any hope of a good position hold. It is also exactly the problem that led ArduCopter to implement GPS glitch protection. It has been extremely rare in my experience, and within 2 seconds the GPS had corrected the error. However, it is definitely something where I'd like to get better logs.
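One standard defense, and roughly what GPS glitch protection amounts to, is innovation gating: reject a fix that jumps implausibly far from the predicted position given the filter's uncertainty. A one-dimensional sketch (illustrative only, not the ArduCopter or Tau Labs implementation):

```python
# Innovation gating: a GPS fix is accepted only if its disagreement with the
# predicted position is within a few standard deviations of the expected noise.

def accept_gps(predicted, measured, sigma, gate=5.0):
    """Accept the fix only if the innovation is within `gate` sigma."""
    innovation = abs(measured - predicted)
    return innovation <= gate * sigma

print(accept_gps(predicted=0.1, measured=0.4, sigma=0.5))  # True: normal jitter
print(accept_gps(predicted=0.1, measured=3.5, sigma=0.5))  # False: multi-meter glitch
```

The trade-off is the one noted above: gate too tightly and you stop trusting real motion; too loosely and a glitch drags the hold position with it.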

Saturday, December 20, 2014

BrainFPV - cool new board using Tau Labs

I was lucky enough to get a BrainFPV from HeliShredder last week, which is a new flight controller that uses Tau Labs.





It is an impressive board with a lot of features in its 36x36 mm size. Beyond having a full sensor suite (3-axis gyro, 3-axis accel, mag, and baro), its biggest unique feature is of course an integrated OSD.

This kind of integration also gives a lot of power - like the ability to switch OSD display modes on the fly.

I'll just paste from Brain's page for the FPV-specific features:

FPV features:

  • Full-graphic OSD (360x266 for PAL):
    • Software adjustable black and white levels
    • PAL/NTSC autodetect
    • 4 fully user configurable OSD pages, selectable using switch on transmitter
  • Audio output (not yet supported by software)
  • 3 analog inputs for voltage, current, RSSI measurement
  • RSSI measurement using PWM, PPM, or analog input

Other features:

  • CPU: STM32F405RG (32bit, 168MHz, 192kB RAM, 1MB Flash)
  • 64Mbit flash for settings, way points, logging
  • InvenSense MPU-9250 latest generation 3-axis gyro/accel/mag
  • Barometer: MeasSpec MS5611
  • Receiver compatibility: PWM, PPM, S.Bus, DSM2, DSMX, HoTT SUMD/SUMH
  • Up to 10 PWM outputs (up to 400Hz update rate) 
  • Up to 3 serial ports for telemetry, GPS, RC receiver, etc.
  • External I2C port, can e.g. be used with an external HMC5883 compass
  • Micro USB port for configuration via PC 

Installing into MHQ Quadcopter

To test it out, I decided to use my foldable MHQ quadcopter from Steve (thingiverse link). First I had to splice a JST connector into the video line so it could plug into the BrainFPV controller.


This is the Pico camera from GetFPV with a mount that I designed to hold it in the MHQ. Then I plugged it into the BrainFPV and immediately had a nice little OSD showing.



Unfortunately my video recorder crops the edges, so you can't see most of the OSD field. In the GCS you can also modify the layout and the white/black levels, and even switch layouts with a toggle of a switch.

HeliShredder did a nice job of fitting the OSD configuration into the configuration gadget. With it all tested, then I just had to finish reassembling my quadcopter.


It definitely makes the wiring really nice and simple having just that one board. I really need to add a battery current/voltage sensor to this frame now, since the OSD can show that to me. I just hate soldering on the wiring harness :(


And all closed up and ready to fly. Time to practice my FPV flying ... indoors ...


You can see a bit of line noise from the motors. It is recommended to have a filter on the power line, but I don't have one at the moment. I'd recommend checking out HeliShredder's videos to better see the performance.

OSD History

I'm really stoked to see this board out there and running Tau Labs. Hopefully the hardware designs will be open sourced in the not-too-distant-future and it can become included as an official target.

Sambas (now with the LinuxDrone project) started the OSD project in 2011, back with OpenPilot. The original design had some issues, and I ended up cutting up one of the prototypes to design a new way of syncing the two SPI channels together.


The end result was we got it going, although with some issues that limited drawing to the edge of the screen and required running the microcontroller at a strange rate.


I believe some more work was done on the code subsequently, but ultimately the OSD with OpenPilot stagnated and nothing came of it. Luckily, HeliShredder came along, picked up the code, and ran with it. I believe he'd also had some experience developing on the Super-OSD project in the past.

Interestingly, he independently came up with a better way to sync the SPI channels that resolved the issues Sambas and I had discussed way back when, and it works great. There are also some nice tricks he used to adjust the white/black levels to make sure the display always looks nice, which had been an issue for us. He also did a lot of work to optimize the code so it can run jitter-free alongside the flight controller, as well as on the configuration interface to make it adjustable.

Ultimately, this is a great example of open source, where something that would otherwise have died in a pile of unused code (sadly, like my ESC has) is now improved and used, and I think it will make a lot of people happy. Consistent with OSS principles, all his changes are available on github.

Final Thoughts

This board is going to be really popular, I suspect. Of course, since it runs Tau Labs (and from my limited testing) it flies wonderfully ;-) and I was able to simply import my configuration for this frame from when it was running Sparky, and immediately it was tuned and flew well.

The OSD looks super crisp and sharp, with good black and white levels, and it responds nice and quickly. I haven't even tried the modes where it shows waypoints and such, but apparently it does that. I can't wait to throw the battery monitor and GPS on there so I can see my speed, direction to home, and battery status.

Between this and Gemini, I really need to spend more time FPVing.