The Rover

I have been wanting to make a self-navigating rover for some time. I finally found the time to build the base platform.

Base Platform

The platform is a four-wheel car with two caster wheels in the back and two 125x58 mm driving wheels in front. It has two 30 RPM DC motors, each driven by a VNH2SP30 board. The motor drivers are controlled by an Atmega8A, which receives commands from a remote control box. Communication between the car and the remote control is done by two nRF24L01+ modules. These are cheap Chinese modules; mine don't work at any data rate except 2 Mbps, but at that rate they work pretty well. Steering is differential: to make a turn, the two motors run at different speeds. The wheels and motors are firmly attached to the body. Code for the remote control box and the motor control circuit is here.
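
To illustrate the differential steering, here is a minimal sketch of how joystick deflection could be mixed into per-wheel speeds. The names and scaling are my own, not taken from the actual firmware:

    #include <stdint.h>

    /* Mix joystick input into per-wheel speeds. fwd and lat are the stick's
       forward and lateral deflection in -127..127; outputs are signed wheel
       speeds in the same range. */
    void mix_stick(int8_t fwd, int8_t lat, int16_t *left, int16_t *right)
    {
        int16_t l = (int16_t)fwd + lat;   /* deflecting the stick to the right */
        int16_t r = (int16_t)fwd - lat;   /* speeds up the left wheel, slows the right */

        if (l > 127) l = 127;  if (l < -127) l = -127;
        if (r > 127) r = 127;  if (r < -127) r = -127;

        *left  = l;
        *right = r;
    }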

Currently, the remote control directly tells the rover how fast each wheel should turn. Ideally, the information from the remote control would be used to travel in the given relative direction: for example, when I push the stick forward with no lateral deflection, the heading of the rover should not change. Unfortunately this is not the case, for various reasons.

An easy solution is to add some acceleration sensors, either one in the middle of the assembly or one at each end. In any case, the final robot will adjust the motors continuously in order to follow a planned path (see the sketch below), so I won't do this for now.
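
What that continuous adjustment might look like is sketched below; heading_error() and set_wheel_speeds() are hypothetical stand-ins for whatever sensor feedback and motor interface the final robot ends up with:

    #include <stdint.h>

    extern int16_t heading_error(void);                  /* degrees off the planned path; hypothetical */
    extern void set_wheel_speeds(int16_t l, int16_t r);  /* hypothetical motor interface */

    /* One step of a proportional controller: steer against the heading
       error by speeding up one wheel and slowing down the other. */
    void follow_path_step(int16_t base_speed)
    {
        const int16_t kp = 2;                            /* gain, to be tuned by hand */
        int16_t err = heading_error();
        set_wheel_speeds(base_speed + kp * err, base_speed - kp * err);
    }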

The rover is powered by a 12 V 7 Ah lead-acid battery. I made a charger for it using the design here. I also added a transistor to the LM317 for digital shutdown, so that the charger can be stopped occasionally in order to measure the battery voltage.

I now understand that measuring the battery voltage immediately after switching off the charger doesn't really work, because the battery needs some time for the chemical reactions inside it to settle down. In any case, here is the code for the Atmega328 inside the charger, and here is another explanation of the charging process.
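
For illustration, here is a sketch of the stop-and-measure idea on an Atmega328. This is not the actual charger code; the pin assignment and settle delay are assumptions, and as noted above even a short delay may not be enough:

    #define F_CPU 1000000UL         /* assumed clock */
    #include <avr/io.h>
    #include <util/delay.h>
    #include <stdint.h>

    #define CHARGER_OFF (1 << PB1)  /* drives the LM317 shutdown transistor; assumed pin,
                                       DDRB setup omitted */

    static uint16_t read_adc0(void)
    {
        ADMUX  = (1 << REFS0);                  /* AVcc reference, channel 0 */
        ADCSRA = (1 << ADEN) | (1 << ADSC) | 7; /* enable, start, clk/128 prescaler */
        while (ADCSRA & (1 << ADSC))
            ;                                   /* wait for the conversion */
        return ADC;
    }

    uint16_t measure_battery(void)
    {
        PORTB |= CHARGER_OFF;       /* stop charging */
        _delay_ms(2000);            /* let the battery settle (probably still too short) */
        uint16_t raw = read_adc0();
        PORTB &= ~CHARGER_OFF;      /* resume charging */
        return raw;                 /* scale by the voltage-divider ratio elsewhere */
    }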

Visual Navigation

The robot will navigate using artificial landmarks. Each landmark will be made of three solid-color balls (initially I can use ball-pit balls), laid out on a circle with 120° separation. Two will be red and one will be blue. Using the color information, the robot will be able to tell on which side of the landmark it is. From the distances between the balls, it will be able to tell its bearing, and the perceived size of the balls will give the distance. I don't expect this to give very high accuracy; I will use it to plan an approximate route. While travelling, the robot will drive itself using another mechanism to avoid obstacles or to stay on a path if there is one.
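
The distance estimate follows from the usual pinhole-camera relation. A minimal sketch; the focal length and ball diameter are placeholder values to be replaced after calibrating the real camera:

    /* distance = focal_length_px * real_diameter / apparent_diameter_px */
    double ball_distance_m(double apparent_diameter_px)
    {
        const double focal_length_px = 500.0;  /* assumed, from camera calibration */
        const double ball_diameter_m = 0.07;   /* assumed ball size */
        return focal_length_px * ball_diameter_m / apparent_diameter_px;
    }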

I have implemented a circle detection algorithm here. I now need to build the onboard computer (a Raspberry Pi) in order to test it in a real-life situation.

The Raspberry

That's now done. The Raspberry Pi captures video and sends frames to a PC. Steering is still done by the nRF24 remote control. The next step is to make an SPI connection between the Atmega and the Raspberry Pi so that the Pi can control the steering. The current code for the PC and the Raspberry Pi is here.

In the new version 1.1, the RPI sends steering commands to the Atmega through SPI. I didn't use the hardware chip-select pin because I don't fully understand how it works (maybe spidev0.0 vs spidev0.1?). Instead, I connected RPI (gpio0 = pin 27) to Atmega (D3) as a device-enable input; the Atmega responds to SPI communication only while this line is high. The MISO line is also connected, and the Atmega echoes back the bytes it receives. This is not very useful in itself, but it let me work out the code for transferring data from a slave device to the RPI.
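
From the RPI side, the enable-line handshake could look like the sketch below. This is my reconstruction, not the project's code; it assumes gpio0 is already exported through the sysfs GPIO interface, and spi_transfer() is a helper shown further down:

    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    extern int spi_transfer(int fd, const unsigned char *tx, unsigned char *rx, int len);

    static void gpio0_set(int on)
    {
        int fd = open("/sys/class/gpio/gpio0/value", O_WRONLY);
        if (fd >= 0) {
            write(fd, on ? "1" : "0", 1);
            close(fd);
        }
    }

    int send_steering(int spi_fd, const unsigned char *cmd, unsigned char *echo, int len)
    {
        gpio0_set(1);                                  /* Atmega starts listening */
        int ret = spi_transfer(spi_fd, cmd, echo, len);
        gpio0_set(0);                                  /* back to ignoring the bus */
        return ret;
    }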

When you use the SPI interface on an RPI, there are two important parameters to set: the clock polarity (SPI_CPOL) and the clock phase (SPI_CPHA). The clock polarity describes at what voltage the clock line sits when the bus is idle. If you set SPI_CPOL, the clock is high when the bus is idle, which means the slave should sample the MOSI line on the falling edge of the clock. I cleared this bit so that MOSI is sampled when the clock goes high. The second parameter, the phase, tells the SPI driver on which clock edge to sample the MISO line. If you set SPI_CPOL=0 and SPI_CPHA=1, the MISO line is sampled on the falling edge of the clock. I initially had SPI_CPHA=0, but this caused problems such as the input data being one clock cycle behind. Here are some links on the subject.
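
To make these settings concrete, here is a minimal sketch of configuring them through spidev ioctls (my illustration, not the project's code):

    #include <fcntl.h>
    #include <stdint.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/spi/spidev.h>

    int open_spi(const char *dev, uint32_t speed_hz)
    {
        int fd = open(dev, O_RDWR);
        if (fd < 0)
            return -1;

        uint8_t mode = SPI_CPHA;    /* CPOL cleared, CPHA set: "mode 1" */
        uint8_t bits = 8;

        if (ioctl(fd, SPI_IOC_WR_MODE, &mode) < 0 ||
            ioctl(fd, SPI_IOC_WR_BITS_PER_WORD, &bits) < 0 ||
            ioctl(fd, SPI_IOC_WR_MAX_SPEED_HZ, &speed_hz) < 0) {
            close(fd);
            return -1;
        }
        return fd;
    }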

When using SPI on a Raspberry Pi, there are other things to watch out for too. First, you need to make sure that the kernel headers for your cross compiler match the kernel on the RPI. In my case they didn't, and this caused some EINVALs until I copied the headers from the RPI to my cross-compiler directory. Second, you need to fill out all non-padding fields in the transfer struct. Just zeroing it out doesn't work; you need to set the speed, the frame width, and so on.
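
In practice that means every meaningful member of struct spi_ioc_transfer gets an explicit value. A sketch of the spi_transfer() helper used above; the 8000 Hz speed is an assumption matching the slow software slave:

    #include <stdint.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/spi/spidev.h>

    int spi_transfer(int fd, const unsigned char *tx, unsigned char *rx, int len)
    {
        struct spi_ioc_transfer tr;
        memset(&tr, 0, sizeof(tr));          /* clears the padding... */
        tr.tx_buf        = (unsigned long)tx;
        tr.rx_buf        = (unsigned long)rx;
        tr.len           = len;
        tr.speed_hz      = 8000;             /* ...but the speed must still be set */
        tr.bits_per_word = 8;                /* the "frame width" */
        tr.delay_usecs   = 0;
        tr.cs_change     = 0;
        return ioctl(fd, SPI_IOC_MESSAGE(1), &tr);
    }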

On the Atmega side, I used a home-brew software SPI interface because the hardware SPI pins were taken for other purposes.
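
Such a bit-banged slave might look roughly like the sketch below, written for the mode-1 timing described above. The pin mapping is made up, DDRD setup is omitted, and it ignores the inversion introduced by the level shifter described next:

    #include <avr/io.h>
    #include <stdint.h>

    #define SCK_HIGH() (PIND & (1 << PD4))          /* assumed pin mapping */
    #define MOSI_BIT() ((PIND >> PD5) & 1)
    #define ENABLED()  (PIND & (1 << PD3))          /* the RPI's enable line */

    /* Exchange one byte as an SPI mode-1 slave: shift MISO out on the
       rising clock edge, sample MOSI on the falling edge. */
    uint8_t spi_slave_byte(uint8_t out)
    {
        uint8_t in = 0;
        for (uint8_t i = 0; i < 8 && ENABLED(); i++) {
            while (ENABLED() && !SCK_HIGH()) ;      /* wait for rising edge */
            if (out & 0x80) PORTD |= (1 << PD6);    /* put next MISO bit out */
            else            PORTD &= ~(1 << PD6);
            out <<= 1;
            while (ENABLED() && SCK_HIGH()) ;       /* wait for falling edge */
            in = (uint8_t)((in << 1) | MOSI_BIT()); /* sample MOSI */
        }
        return in;
    }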

The MISO connection is handled by a single transistor doing the level shifting from the Atmega (5 V) to the RPI (3.3 V). It's a really basic circuit: the base of the transistor (a 2N4400) is connected to the Atmega output through a 1K resistor, the RPI input is pulled up by a 1K resistor and connected to the transistor's collector, and the emitter is grounded. This makes an inverting level shifter which works without problems at slow speeds (I tried up to 8 kHz). It might work at higher speeds, but definitely not over something like 100 kHz, since the Atmega runs at 1 MHz and has to process each incoming bit in software.

So, the overall usage is as follows. First, I run the control-and-view program on my Linux PC, which listens for a socket connection from the RPI. Then I run the RPI control program with two arguments: the path to the SPI device node and the SPI speed in Hertz. This program connects to the PC and, when asked to start, begins sending out captured images. I hate doing things this way; I need some sort of meta-control program to start things automatically and set their configuration from a GUI.

For emergencies, the nRF24 remote control still works: the Atmega acts on whatever packet comes in, whether from the remote control or from the RPI.

The system works pretty well, but I have one major problem: noise from the motor power lines. In the first version, I had the circuits scattered around and simply taped to the rover's body, so the power lines to the motor controllers ran pretty far from the RPI. Since then I have packed everything into one container, and the power lines now run close to the RPI. This causes two problems.

First, when there is a lot of load, because of some obstacle for example, the power lines emit so much noise that the RPI's outputs change state. This corrupts SPI packets (I have packet integrity checks that catch this) and also causes unintentional switching of the enable output.
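
The integrity check itself isn't described above; one minimal scheme (a sketch, not necessarily what the actual code does) is a checksum byte at the end of each packet:

    #include <stdint.h>

    /* XOR checksum over the payload; the sender appends it, the receiver
       recomputes it and discards the packet on mismatch. */
    uint8_t packet_checksum(const uint8_t *data, uint8_t len)
    {
        uint8_t sum = 0;
        for (uint8_t i = 0; i < len; i++)
            sum ^= data[i];
        return sum;
    }

    int packet_ok(const uint8_t *pkt, uint8_t len)   /* 1 = keep, 0 = drop */
    {
        return len >= 2 && packet_checksum(pkt, len - 1) == pkt[len - 1];
    }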

Second, the power-line noise interferes with WiFi. This makes the rover lose its WiFi connection in places where the signal is only slightly weaker than normal. For instance, after the rover lost the WiFi signal in one spot, I went there and checked the reception with my cellphone, and it still showed a strong signal.

My current proposed solution is to cover the power lines with grounded copper tape, so that the noise is carried away through the tape instead of spreading around. However, such a big metal surface may still affect WiFi operation, so I might have to take the RPI out of the box and put it in a separate container. In fact, that seems to be the best option so far.