The setup for this lab required installing the needed Artemis libraries and verifying that the examples ran as expected. For the examples to work I had to raise the baud rate to 115200.
The first example was a simple program that caused the LED on the board to blink repeatedly and it worked as expected.
The next example tested that serial communication was working with a simple echo program, which returned the inputs as expected.
This example tested the temperature reading on the board. As the video demonstrates, the example runs as expected - the temperature reading increases from about 32400 to 33000 when held in my warm hands, and decreases back to about 32400 when blown on.
This example tested the microphone on the chip. It again gave the expected results: the frequency reading increased when I whistled and otherwise read 0, aside from occasional background noise.
To begin we needed to make a Python virtual environment and get Jupyter Lab working. I experienced a lot of issues with Python configurations and with getting Jupyter Lab to work. The solution ended up being moving the location of my Python installations, deleting the older versions, and running the Jupyter Lab command with the `--no-browser` option to prevent it from crashing. Once this was done and the codebase was downloaded, I could check that it had worked by burning the sketch onto the board and observing the MAC address being printed out.
The codebase uses a UUID and the MAC address of the board to facilitate the connection between the board and the laptop. They are used as identifiers so the board and computer know they are connecting to the correct device. The codebase makes use of BLECharacteristics to send messages (strings) between the board and the laptop.
The configuration required that I generate a new UUID so my devices could identify each other among all the other devices advertising over Bluetooth. I generated one and, in connections.yaml, updated the ble_service field to the new UUID and the artemis_address field to the MAC address that was displayed earlier. I also had to change line 53 in base_ble.py from `if AT_LEAST_MAC_OS_12` to `if True` in order for things to run correctly.
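Generating the UUID itself is a one-liner with Python's standard library, run from a notebook cell (the printed value below is illustrative, not my actual UUID):

```python
# Generate a fresh random UUID to paste into the ble_service field of connections.yaml
from uuid import uuid4
print(uuid4())   # e.g. 6f4c1b2a-....  (illustrative output only)
```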
The video below demonstrates the demo file running as expected.
I implemented the echo command as was recommended in the handout, so the board responds with "Robot says -> [input] :)" Because the requirements are very similar to the PING command, I used that as a framework and edited the strings that were appended.
This command, GET_TIME_MILLIS, required more work to implement because it did not exist yet. To make it a valid command it needed to be added to the CMD enum in cmd_types.py, to the CommandTypes enum in ble_arduino.ino, and as a case in the handle_command() function, also in ble_arduino.
Constructing the string was similar to what happened above, and the time was obtained using a built-in function.
Here I created a notification handler, which is called whenever a message is received from the board. This is the final form of my handler, which handles all the commands from this lab. It stores temperature and time readings in their own lists, keeps a log of every message received, and prints each message as it arrives.
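For reference, the core logic of a handler like this looks roughly as follows. This is a simplified sketch: I'm assuming the callback receives the characteristic identifier and the raw bytes, and the `T:<ms>|C:<temp>` message format is a hypothetical stand-in for however the strings are actually built on the board.

```python
messages, times, temps = [], [], []

def notification_handler(uuid, byte_array):
    """Runs whenever the board pushes a new string over the RX characteristic."""
    msg = byte_array.decode()          # assuming UTF-8 strings from the Artemis
    messages.append(msg)               # full log of everything received
    # Hypothetical format "T:<ms>|C:<temp>" -- parse out time and temperature fields
    for field in msg.split("|"):
        if field.startswith("T:"):
            times.append(float(field[2:]))
        elif field.startswith("C:"):
            temps.append(float(field[2:]))
    print(msg)

# Registered with the BLE helper along the lines of:
# ble.start_notify(ble.uuid['RX_STRING'], notification_handler)
```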
This task called for the board to send as many messages as it could in a window I chose to be 5 seconds. To accomplish that I assigned the sending loop to a command called SAMPLE_TIME. The process for adding this command was the same as for GET_TIME_MILLIS, and the function was a loop of that command.
As the image above shows, the board sent 634 messages in the 5 seconds it was running, meaning it has a transfer rate of about 0.127 floats per millisecond, or roughly 0.5 bytes per millisecond.
In this task the board takes some number of time readings (I chose 100) and sends them all at once instead of as they are recorded. This command, SEND_TIME_DATA, required a global list, and even though the measurements are all taken before any are sent, they must still be sent one by one because the BLE characteristic places a cap on the length of string that we can send.
As can be seen in the screenshot above, the total time elapsed while the 100 measurements were taken was only 2 milliseconds, meaning it recorded about 50 measurements per millisecond. Clearly, sending measurements as they are taken significantly decreases how often a measurement can be taken. So if the focus is on taking as many measurements as possible, say for analyzing what happened while the board was running, it is better to send data in batches so you have as detailed a dataset as possible.
To complete this task, taking a temperature measurement for each of the time measurements from the section above, another new command called GET_TEMP_READINGS was needed to store and send the data.
As you can see in the picture below, the function seems to be working as intended, but taking the extra temperature measurement does significantly slow the rate at which measurements are taken. In the second screenshot you can see that the total time elapsed for 100 time and 100 temperature measurements is about 6400 milliseconds.
As mentioned at the end of the All at Once Efficiency section, sending data in batches allows for a much higher data recording rate. As shown above, the batching method allows around 200 bytes per millisecond to be recorded, far more than the roughly 0.5 bytes per millisecond obtained when sending each measurement individually. In that sense there isn't much reason to send measurements as you take them; it is more efficient to take a group of measurements and send them all at once. The amount of data you allow to build up before sending probably depends on the application you need the data for. As in the example I gave earlier, if you are simply logging, it is fine to only occasionally send data so that your logs are very precise, but if you need the measurements for calculations critical to the application running, sending in small-to-medium batches is probably the best way to go. Assuming the data rate of purely recording the time (200 bytes per millisecond), it would take a bit under 2 seconds to fill the board's 384 kB of RAM.
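The arithmetic behind that last estimate, ignoring any per-message overhead:

```python
# Rough check: time to fill the Artemis's RAM at the batched recording rate
ram_bytes = 384 * 1000          # 384 kB of RAM
rate_bytes_per_ms = 200         # ~50 four-byte samples per millisecond
print(ram_bytes / rate_bytes_per_ms, "ms")   # 1920 ms, a bit under 2 seconds
```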
Above shows the IMU connected to the Artemis via the QWIIC connect cable. The only setup I needed to get the example code running was to change the value of AD0_VAL from 1 -> 0, which changes the starting I2C address.
Below is a video of the accelerometer (above) and gyroscope (below) data. The pink, blue, and yellow lines represent the x, y, and z axes respectively. They are all being measured, and a notable difference between them is that the accelerometer data returned to 0 when the board was not moving.
To test the accuracy and noise of the accelerometer I made a new command to collect 1000 data points and measured the pitch (first) and roll (second) over a range of -90 -> 90º using the equations below. Included with a graph of the two is the Fourier transform frequency spectrum.
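For reference, the standard accelerometer tilt equations have this form (the exact axis assignment depends on how the IMU is mounted):

$$\theta_{pitch} = \frac{180}{\pi}\,\operatorname{atan2}(a_x,\, a_z), \qquad \phi_{roll} = \frac{180}{\pi}\,\operatorname{atan2}(a_y,\, a_z)$$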
I got these measurements by rotating the board on its edge on the table. The data seems to be pretty accurate although quite noisy, so below is the same data passed through a low pass filter with a cutoff frequency of 10 Hz. It does a good job of mitigating the noise as well as muting any shakes from my hand.
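The filtering was done offline over the logged arrays. Here is one way to do it with SciPy; the 10 Hz cutoff is the value I used, while the sampling rate `fs` is a placeholder that should be computed from the logged timestamps.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass(data, cutoff_hz=10.0, fs=300.0, order=2):
    """Zero-phase Butterworth low-pass filter over a 1-D array of angle samples."""
    b, a = butter(order, cutoff_hz / (0.5 * fs))   # cutoff normalized to Nyquist
    return filtfilt(b, a, np.asarray(data))

# pitch_filtered = lowpass(pitch_raw, cutoff_hz=10.0, fs=measured_sample_rate)
```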
Below are the graphs for the pitch (left), roll (center), and yaw (right) as recorded by the gyroscope. The board was not moved during these measurements, so the drift is very clear, but there is little spread in the measurements, so they are precise, although not accurate. Below those is data from me rotating about each axis sequentially from -90 -> 90º, and it takes seemingly precise and steady readings.
I implemented a complementary filter with an alpha value of 0.3. I chose that number to mitigate the noise from the accelerometer. Below is a screenshot of the pitch recordings from the gyroscope and from the complementary filter. They follow the same trends, but the complementary filter is more responsive to quick changes and does not suffer from the drift of the gyroscope.
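The filter itself runs on the board, but the same logic written out in Python makes the role of alpha clear. In this sketch alpha weights the accelerometer term, which is why a small value like 0.3 keeps its noise damped while still correcting the gyroscope's drift (an illustration, not the exact on-board code):

```python
import numpy as np

def complementary_pitch(acc_pitch, gyro_rate, t_ms, alpha=0.3):
    """Fuse accelerometer pitch (degrees) with gyro pitch rate (deg/s)."""
    fused = np.zeros(len(acc_pitch))
    fused[0] = acc_pitch[0]
    for i in range(1, len(acc_pitch)):
        dt = (t_ms[i] - t_ms[i - 1]) / 1000.0
        gyro_est = fused[i - 1] + gyro_rate[i] * dt          # integrate the gyro
        fused[i] = (1 - alpha) * gyro_est + alpha * acc_pitch[i]
    return fused
```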
After moving sampling to the main loop and removing print statements, I was able to record new IMU data every 2-10 ms. I believe it fluctuated that much because 2 ms (0.5 kHz) is about the max rate of the IMU's accelerometer and gyroscope data collection. This sampling is faster than necessary, though, and would only fill the Artemis's storage more quickly. The average sampling rate came out to about 3 ms.
So while sampling at this rate the Artemis can store and send up to about 1500 data points in 5 seconds.
After playing with the robot for a bit I found that it is a shockingly quick car. It can turn on a dime at any point, and doing so while moving quickly makes the car roll over. Similarly, it drives at the same speed forward and backward, and swapping between the two quickly makes it flip over. One thing I'll need to be careful about is how instantaneously it changes direction: it happens fast enough that the wheels lose their grip, and as mentioned it is prone to flipping and rolling.
We will be connecting two identical Time of Flight sensors, which means they have the same I2C address, 0x52. Using both simultaneously requires using the XSHUT pin to change the address of one of the sensors. I will be placing the sensors on the front and back ends of the robot. Since, from my understanding, our robot will be acting in static settings, it will not need to worry about an obstacle coming from the side, which will be its blind spot. The 180º separation of the sensors will make it easier to swap between driving the robot forwards and backwards. Below is a diagram of how I connected the sensors to the Artemis board.
Above is the final product of connecting the first sensor to the Artemis and scanning for it. The address printed is 82, which is 0x52 in hexadecimal. This means I will need to choose an address other than 82 for the other sensor to use. In choosing the distance mode, I decided to test the short range mode. It is more accurate than the long range mode and less prone to noise. However, I believe that for stunts that require the robot to drive forward at high speed, the short range mode will not see far enough ahead for the robot to react. In the meantime I have tested the short range mode by gathering 50 readings at 0.3, 0.6, 0.9, and 1.2 meters and averaging them. Below is a graph that compares the measured and actual distances.
The graph shows the readings deviating more as the distance increases. I believe some of this is likely due to error in my measurements of the actual distance. The screenshot on the right shows the variance of each measurement set in mm. It shows that the sensors are very precise, if a little inaccurate at the edge of their range.
To use both sensors at once, one sensor must have its XSHUT pin driven low. This shuts it down so that it ignores what is on the I2C bus, even traffic addressed to it. Then the other sensor has its address changed, XSHUT can be returned to high, and the first sensor resumes receiving I2C communications normally.
Above is my first attempt at gathering readings as quickly as possible along with the code that was being executed. A new datapoint was being recorded about every 100ms. I thought this was being slowed by all the Serial.prints, so I changed it so that it would only print time when a new point was available, but this resulted in a more consistent 100ms, not a faster read time.
This sampling rate is much slower than the VL53L0X data sheet's 50Hz sampling rate. So the limiting factor is the Artemis, although I am uncertain why. Finally I created a method to send sensor data over bluetooth and plotted it using matplotlib.
I chose to use pins A0 and A1 for the right motor, and A14 and A16 for the left motor. I mostly chose these pins because I wanted the pins to be on the same side of the Artemis as the wheels they were driving. This made it easier to follow wires when I was having issues.
Using a separate battery for the motor drivers and the Artemis prevents the Artemis from seeing jumps in current that the motor drivers can cause for example when they start up.
To test using the board, I set up the power supply at 3.7 V and cycled the PWM value from 255 to 0 every 50 ms. The oscilloscope showed the board behaving as expected, generating a square wave whose duty cycle followed the value being written.
Unfortunately I forgot to record the wheels spinning independently before fully assembling the components, but above is a video of the wheels spinning in both directions individually at the end. The code I used for the original testing was very similar to what was used with the oscilloscope, but for the video I used a Bluetooth command that sets all 4 pin values.
Once I had tested that the motor drivers were connected properly, I assembled everything onto the chassis. Not visible are the motor drivers, which are under the tape that the Artemis sits on.
To find the lowest PWM value that could drive the car, I set it on the ground and started with a value I was confident would work (100 for both forward pins) and decremented by 10, alternating between decreasing the left and the right. I ended up determining that the lowest I could drive with is 40 forward for both sides. To find the minimum value for an on-axis turn, I started with the left forward at 40 and the right backward at 40, and did the same increment/decrement testing to find the balance, which came out to forward value = 2 * backward value for a ~90º turn.
Unfortunately, as I attempted to make it drive in a straight line, I found that the minimum being the same for both sides was a coincidence, because to drive straight the left motor needed a much higher value than the right. I began with 100 for both forward pins, to ensure it could cover the minimum 6 ft distance, and saw that the path leaned heavily left, i.e. the right motor was spinning faster than the left. I guessed and checked my way to a balance and settled on the right PWM being 0.6 times the left. The guess-and-check method provided some gems such as this:
I was able to find the values this way because I made a Bluetooth command to update the PWM values, which made it very simple to continuously try new values. The Bluetooth command code is shown above with the demonstration of both wheels spinning. Below is the final video of the car driving straight and the PWM values that I settled on for the test.
For my open loop routine, I had the robot drive forward, do a spin, then reverse out of the spin.
To implement the PID controller I added a Bluetooth command that sets a flag; in my main loop, if the PID flag is set, it calls the Foot() function, which attempts to get a reading and send motor inputs. I also had a STOP command to interrupt the control, stop the robot, and send the data immediately. Those functions are shown below.
I also added a record flag that calls a logging function every time the Foot() function is called. Once the arrays were full or STOP was called, it would send the stored data. This allowed me finer control over what data I recorded and let me run the controller for as long as I wanted.
To implement this controller I started with a P controller. I decided that a Kp gain of 10 would be good because I am measuring distance in feet, so my max distance value is about 13 feet in long distance mode (which the sensor is in), and a gain of 10 makes my P value range from about -10 to 100, which gives easy scaling of motor inputs by percentages. The else case is for the linear extrapolation discussed later; the rest simply checks for a new sensor reading and updates the current distance and error estimates.
Since u = P and represents a percentage of error, it is easy to scale into a range of motor inputs. Because my motors drive weaker backwards, the range for those values is [140, 255], and forwards it is [90, 255]. To keep the bot straight I account for my right motor being stronger and give it 30% of that value backwards and 50% forwards.
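Put together, the scaling logic looks roughly like the sketch below, written in Python for readability (the real version lives in the Arduino sketch). The right-motor factors are my reading of the percentages above, interpreted as fractions of the left PWM, and the clamp on u is an assumption about how the controller output was bounded.

```python
def scale_to_pwm(u):
    """Map the P controller output u (a rough percentage of error) onto PWM values."""
    u = max(-100.0, min(100.0, u))
    if u > 0:                                    # still too far from the wall: drive forward
        left = 90 + (255 - 90) * (u / 100.0)
        right = 0.5 * left                       # right motor is stronger going forward
    elif u < 0:                                  # overshot: back up
        left = 140 + (255 - 140) * (-u / 100.0)
        right = 0.3 * left                       # right motor is stronger in reverse
    else:
        left = right = 0.0
    return int(left), int(right)
```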
As you can see in the video, the bot stops before it hits the wall (barely) and, after some oscillation, settles at about 1 ft from the wall. As you can also see, though, my sensor was starting to experience large, semi-random spikes in its readings, which made it very difficult to implement the integral and derivative parts of the controller. Below is a graph I got from holding the robot still for a few seconds; as you can see, it occasionally reads very high. Before the sensor started producing erroneous readings I had settled on kp = 8, ki = 3, kd = 0.25. I used the data from that run (below) to determine that the sensor readings were coming in about every 100 ms. I then added the else case from above so that the bot could estimate its position based on its previous readings.
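The extrapolation in that else case is just a linear extension of the last two readings. A Python illustration of the idea (the board version also needs to guard against a zero time difference, which is the bug that comes up again in the stunt lab):

```python
def extrapolate_distance(t_now, t1, d1, t2, d2):
    """Estimate the current distance by extending the slope of the last two TOF readings."""
    if t2 == t1:                       # avoid an infinite slope when readings share a timestamp
        return d2
    slope = (d2 - d1) / (t2 - t1)      # mm per ms between the two most recent readings
    return d2 + slope * (t_now - t2)
```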
Below is a video of the P controller running with the extrapolation functionality. I have also added the controller output graph; the spikes correspond to the erroneous readings, and those distance values are large enough that the graph isn't very usable. However, I have a screenshot of the sent data, and you can see it settles at just under a foot from the wall.
I initially started by having the PID function use the data stored in the logging arrays, which allowed the orientation control to run for about 7 seconds. I then modified the Arduino code so that it could run the controller, store data, and then send it for the 7 seconds without interrupting the control. I created a command that sets a flag to run the PID controller, and in the main loop, if the flag is set, the function is called. This command also sets the target angle for the bot to align itself with, relative to its initial position. Similarly, there is a command and flag for recording data, and when the logs are full another flag is set to start sending on each loop iteration instead of recording.
I also added a stop command so that I could stop the robot at any point during its operation and send any data it had recorded, and a command to change gain. Because of these, I could run the PID controller indefinitely while collecting data and adjusting gains as needed, and pause anytime the robot got out of control.
The ICM does have some drift, as you can see in the graph above. Those measurements were taken with the bot sitting still, and it slowly drifts positive. One way to accommodate this would be to use the complementary filter we created in Lab 2, but I found that the drift here is small enough that it does not impact the goal of orienting the robot unless it is left to sit for several minutes. Another limitation of the gyroscope is its measurement range: the fastest rotation it can register is 2000 degrees per second, which is the range I configured so that even quick turns are captured. I did this by adding the code below to the setup section of the Arduino sketch.
I started by implementing just a P controller. In my PID command I read in a goal angle and make sure it is between 0 and 360. Then, in the PID control function, I set my goal to be that angular distance from the first measured angle. This helps mitigate the effect of drift from letting the bot sit before starting the controller. This worked pretty easily, somewhat by chance: I arbitrarily selected 2 as my starting Kp value and it just so happened to be a good one.
The Python code above sets the minimum PWM values to account for the difference in motor strength, as well as helping scale up the rate at which the robot turns. It then sets the gains so that only the P term is used, and sets the recording flag. As you can see in the video, this setup has the bot snap into orientation with little oscillation.
I tried to add in the I and D components, but both made the orientation worse in one way or another. The integral term did technically make the orientation more accurate, but this resulted in the motors stuttering a bit at the end of the alignment to slightly tweak the angle, because the integral term adds back some of the slop that I built into my controller by allowing it to be +/- 5º of the desired angle. The derivative term could have benefited from some filtering, as it made the output oscillate a lot at the end. The end behavior with the derivative term was the bot rapidly switching the direction it was turning and "walking" forward instead of adjusting its angle.
An issue with using live sensor readings to control our robot is that we are limited by the data rate of our sensors. In particular, the TOF sensors read rather slowly and can significantly hinder precise control. Using a Kalman filter to fill in the gaps between readings should help make our control systems better.
To build the model we need to account for as many forces as possible through the "drag" and "mass" we use. To do this I gathered data by driving the bot at the wall at about 50% speed to record its terminal velocity. Below are the graphs of the readings.
I assumed the final velocity was approximately the terminal velocity; using that, I determined that 90% of the final velocity occurs at about 1.3 seconds. Using this and the formulas from lecture above, I found m = -0.000155 and d = -0.000503. For the A and B matrices I also used the provided formulas, but for the C matrix I used [1, 0] because my distance is the positive distance from the wall, not the negative. For the covariance matrices, I set the measurement sigma to 20 mm, because that is approximately the error according to the TOF sensor documentation. I then set the other two to 100 so that the filter favors the measured values.
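Plugging those numbers into the lecture formulas gives the matrices below. This is a sketch of how they can be set up in the notebook; the sampling period dt and the squaring of the sigmas into covariances are assumptions on my part.

```python
import numpy as np

d  = -0.000503        # "drag" from the step response above
m  = -0.000155        # "mass" from the 90%-rise-time formula
dt = 0.1              # TOF sample period in seconds (assumed; use the measured spacing)

# Continuous-time model, state = [distance from wall, velocity]
A = np.array([[0.0, 1.0],
              [0.0, -d / m]])
B = np.array([[0.0],
              [1.0 / m]])
C = np.array([[1.0, 0.0]])            # distance measured as positive distance from the wall

# Discretized versions used in the filter loop
Ad = np.eye(2) + dt * A
Bd = dt * B

# Noise: 20 mm measurement sigma (TOF spec), 100 for the two process sigmas
Sig_u = np.diag([100.0**2, 100.0**2])   # trust in the model
Sig_z = np.array([[20.0**2]])           # trust in the TOF readings
```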
I defined my function and ran it with the while loop above, with t incremented by roughly the sampling period. This was done as a sanity check to see that the filter follows the recorded values, and you can see in the graph that it does just that, while slightly overestimating. I use 1 as the u input because I am driving the motors at a constant input value. To see how well it predicts values, the final condition in the while loop and the incrementing of t need to be altered. As shown below, the predicted values drift to be much lower than the measured values with each additional prediction made.
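The filter step called inside that loop is the standard predict/update pair. A sketch using the matrices defined above; the `update` flag is how I think of skipping the measurement step to test pure prediction, not necessarily how my loop was literally written.

```python
import numpy as np

def kf(mu, sigma, u, y, update=True):
    """One Kalman filter step. mu is the 2x1 state [distance; velocity],
    u the (constant) motor input, y the latest TOF reading in mm."""
    mu_p    = Ad @ mu + Bd * u                   # predict the state forward by dt
    sigma_p = Ad @ sigma @ Ad.T + Sig_u          # and grow the uncertainty
    if not update:                               # no new TOF reading: prediction only
        return mu_p, sigma_p
    y_hat = C @ mu_p
    S     = C @ sigma_p @ C.T + Sig_z
    K     = sigma_p @ C.T @ np.linalg.inv(S)     # Kalman gain
    mu    = mu_p + K @ (np.array([[y]]) - y_hat)
    sigma = (np.eye(2) - K @ C) @ sigma_p
    return mu, sigma
```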
I found that the issue appears to be the velocity component of the state-estimation model. It very rapidly approaches terminal velocity and stays there, meaning that whenever it makes a prediction it assumes the bot is moving at terminal velocity. Increasing sigma 2, the uncertainty of the velocity component of the state estimate, helps slightly as shown, but only resets the error when a new data point is taken.
I chose to implement the stunt by chaining together the two P controllers I made in the previous labs. To make them work together, the distance controller runs first and, once in the desired range, stops itself and starts the orientation controller by setting the icm flag. The commented-out if clauses are from the original distance controller; since I no longer need to settle at a distance, the behavior for overshooting was no longer necessary. I similarly changed the end behavior of the orientation controller so that it drives back toward the start position for 1.5 seconds once it completes its turn, instead of just turning.
This is the design that I ultimately went with, but I started encountering issues that hadn't been fully resolved during the distance control lab. My distance readings would semi-randomly spike into the millions or negative millions, which made the control logic difficult. I came to find, after much difficulty, that the issue was with my extrapolation: my calculation of dt when deriving the slope was sometimes causing the slope to become infinite because of the speed of the loop. I am still not entirely sure why this occurred so sporadically, but to resolve it I added a 1 ms delay after each run of the distance controller.
Below are the videos of the code running successfully and the data gathered from them. I struggled to get the robot to do a smooth drift-and-drive, and settled on the behavior where it drifts into its turn, then drives.
I chose to tackle this lab using the P controller built in the previous labs. The only modification I made to the controller was its terminating conditions. I had it attempt to take a reading at every angle, and every time it reached a goal state it would pause until it gathered a measurement before moving to the next one. The controller stops when the accumulated angle passes 370º; I start the goal at 0 and increment by 25º, so there are always at least 14 measurements. I did two spinning readings in each location. Below you can see the changes to the goal state.
As you can see from the video, it mostly spins in place and makes almost exactly a 360º turn. Because of that, I believe it is very safe to approximate the measurements as coming from a robot staying still. As can be seen in the code above, I changed my goal by -25º each step; I chose this because my motors spin better in that direction, and since numpy's radians conversion handles negative angles, it did not alter my transform matrices.
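Translating each spin's (angle, distance) pairs into map-frame points is a small polar-to-Cartesian transform. A sketch of what I mean, where the sensor offset from the robot's center is an assumed value and the robot's pose for each spin is the known spot where it was placed:

```python
import numpy as np

def tof_to_world(angles_deg, dists_mm, x_robot_mm, y_robot_mm, sensor_offset_mm=70.0):
    """Convert one spin's readings into map-frame x/y points."""
    theta = np.radians(np.asarray(angles_deg))            # handles negative angles fine
    r = np.asarray(dists_mm) + sensor_offset_mm           # range measured from the robot center
    x = x_robot_mm + r * np.cos(theta)
    y = y_robot_mm + r * np.sin(theta)
    return x, y
```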
Above is the first spinning measurement taken for each part of the map. As is evident in some of the maps, sometimes when the bot overshot the angle it would take a measurement and continue sending the wrong distance despite the angle change. So for the translated graph I left out distance measurements that were identical to the one before them, which mostly cleaned up the issues.
Below is the map I got with each spin, as well as where I believe the measurements indicate a wall. I chose these places for the walls because they are where the most data points are clustered. The top left and bottom right of the area have large clusters that I don't believe are walls, because their color indicates they were taken from the opposite side of the map, nearing the end of the sensor's range, so I believe they are erroneous readings.
This lab had us simulate using a Bayes filter to accomplish grid localization. To make this possible we discretized the map into a 3D grid of size (12, 9, 18), representing x, y, and theta respectively. We used an odometry model to provide the transition probabilities.
To find the probability of getting to a new state, we must first calculate what moves (an initial rotation, a translation, and a final rotation) were necessary to get from the previous position to the new one.
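A sketch of that control-extraction step, with poses given as (x, y, theta in degrees); angle normalization is left out for brevity:

```python
import numpy as np

def compute_control(cur_pose, prev_pose):
    """Back out the (rot1, trans, rot2) that moves the robot between two poses."""
    dx = cur_pose[0] - prev_pose[0]
    dy = cur_pose[1] - prev_pose[1]
    delta_rot_1 = np.degrees(np.arctan2(dy, dx)) - prev_pose[2]   # turn toward the new position
    delta_trans = np.hypot(dx, dy)                                # drive straight to it
    delta_rot_2 = cur_pose[2] - prev_pose[2] - delta_rot_1        # turn to the final heading
    return delta_rot_1, delta_trans, delta_rot_2
```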
This function computes the probability that you took the estimated action u, assuming a Gaussian distribution around the action calculated in the previous function.
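A sketch of that motion model: the probability of the actual transition is the product of three Gaussians, one per component of the control, using the compute_control sketch above. The noise sigmas here are placeholders for the values set in the lab's localization config, and the gaussian helper is redefined so the sketch is self-contained.

```python
import numpy as np

def gaussian(x, mu, sigma):
    """1-D Gaussian pdf."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def odom_motion_model(cur_pose, prev_pose, u, rot_sigma=20.0, trans_sigma=100.0):
    """P(cur_pose | prev_pose, u) under the odometry model."""
    rot1, trans, rot2 = compute_control(cur_pose, prev_pose)   # moves needed for this transition
    return (gaussian(rot1, u[0], rot_sigma)
            * gaussian(trans, u[1], trans_sigma)
            * gaussian(rot2, u[2], rot_sigma))
```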
In this function we use the previous two functions to compute the new bel_bar values. This requires iterating through every possible position to calculate the probability that it is the prior state of each other state, but to save some iterations, any position with a belief less than 0.1% is skipped.
This function exists separately from the next one so that the next one is easier to read. It provides the probability of each measurement given the true state. This is also modeled as a Gaussian around the actual value.
This step goes through each position in our grid and multiplies its bel_bar probability by the product of the probabilities calculated in the sensor_model function, then normalizes the result to form the new belief.
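The update itself is then a cell-by-cell multiply followed by a renormalization so the grid sums to 1. A compact sketch, with names that are illustrative rather than the exact ones in the lab skeleton (obs_prob would hold, for each cell, the product of the sensor_model probabilities over the range measurements):

```python
import numpy as np

def update_step(bel_bar, obs_prob):
    """Combine the prediction grid with the sensor likelihood grid."""
    bel = bel_bar * obs_prob            # element-wise over the (12, 9, 18) grid
    return bel / np.sum(bel)            # renormalize into a proper belief
```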
In the video you can see that the Bayes filter (blue) stays quite accurate to the true state (green) despite having a very inaccurate odometry model (red) to work from. It seems to work worst at the points where the robot changes direction; it tends to overestimate the change, at least for the first movement, but otherwise it remains very true.
To have this class work with the code from the previous labs, I needed to make small adjustments to the Arduino code and to the class. I added 2 fields to avoid needing a global variable and 2 functions to enable message handling. In the Arduino code I had to change the setup and execution of my PID command. In previous labs I sent multiple setup commands to set the gains and motor strengths without needing to re-upload the Arduino code. Since the action being performed here is a repeated scan, I didn't need that functionality and could have the PID function handle all of that. Additionally, for mapping I had the function set up to take as many distance readings as it could in a rotation, which did not produce a consistent number of readings, so I limited it to one reading every 20º.
Next, to actually do the observation, I followed the handout instructions for making the function async. I have my robot rotate clockwise because the left motor spins weakly backwards; to counteract that in my calculations I added 360 to each angle (since they are negative and are sent in the opposite order from which they were recorded).
I chose to approach this lab using the P controllers that I made in previous labs. I was initially going to use localization as well, but I started to experience odd bugs with that code and decided instead to preplan a path using the P controllers. I recorded the path as a set of alternating distance and orientation control set points, shown below. I modified my PID command so that the goal is an index into the list of set points.
I initially had the controllers run until they reached their goal, then stop and move to the next, but overshooting was a really big issue with this approach. To keep that from happening, I instead run each set point for a set amount of time: 5 seconds for distance control and 3 seconds for orientation. This mostly resolved the issue, but unfortunately, just as I got this part working, my motor drivers started to fail. To check that this was actually the case, I tried several batteries that I fully recharged before using, and ran the P controller with a gain of 10 (which maxes out the PWM value sent to the motors), and it still failed to turn. It even sometimes fails to spin the wheels in the air with no friction from the ground. Due to this, the robot can only turn occasionally; it can do distance control without assistance, but the wheels usually cannot generate enough torque to turn. Below is what the run looks like with me assisting the robot at the points where it is supposed to turn.
The top video shows the robot mostly following the path, with me doing the turns. You can see in the first long run that the momentum from the forward motion carries into the turn and lets the bot use less power to start the turn before losing the battle against friction. Prior to the motors giving out, I had active braking at the end of the distance control segments to avoid this. Aside from that, it follows the planned path very accurately, except for when I over-correct the final turn. The bottom video shows an issue I accidentally replicated that I had also been experiencing while the motors were working. My TOF sensor is mounted on the front left side, so if the distance control takes it too far along the tile, or if the rotation does not turn far enough to the right, the sensor reads the distance from the box, or randomly reads a scatter between the box and the far wall. This makes it either stop prematurely or overshoot far past the goal set point, and it can't recover in the 5 seconds allotted for distance control.