News

Creating a Smart Hydrophone Processing System – Software

This article is a continuation of the previous article, Creating a Smart Hydrophone Processing System – Hardware.

It is assumed that you have already set up the Raspberry Pi with remote desktop and SSH access enabled. I have also set up my network to assign a specific IP to this device for both wired and wireless operation. It is also assumed that you have already installed git and any other tools you require for development. We used the default Raspbian Linux. Logging in with SSH should present something similar to the next picture. Of course, you may use a remote desktop environment instead; select whichever best fits your taste.

Raspberry Pi Login Screen

Notice that this is a Linux 5 kernel. This is important mostly for driver compatibility with the DAC+ADC Pro board. However, we did not face any real issues with our setup.

Our development plan was based on using Python tools and machine learning for fast prototyping. We based our classification code on this GitHub repo: seth814/Audio-Classification: Code for YouTube series: Deep Learning for Audio Classification (github.com)

Python can get messy with packages and proper system configuration. Note that the Raspberry Pi image comes with both Python 2 and Python 3 interpreters: running python invokes the 2.7 version, while python3 calls the Python 3.x interpreter. Keep this in mind.

In our case, we used the Python 3 setup. First, we created a virtual environment to install packages into. There are many ways to do the setup. You may try the Anaconda system (although the TensorFlow link below says this will not work), which will take care of any dependencies and install all the packages, or you can follow along here and see the more tedious manual approach. The idea is to have a system set up on our Raspberry Pi that matches the one we use in our desktop environments, to ensure maximum compatibility and help us with testing.

Install the virtualenv package:

$ sudo pip3 install virtualenv

Then we can select a directory to do our development and create the virtual environment there:

 $ cd UnderwaterSoundProcessing
 $ virtualenv audioml
 $ cd audioml
 $ source bin/activate
 … do something…
 $ deactivate

If you check the requirements.txt in the GitHub repo, you will notice a long list of items that must be installed, along with their version numbers. If you try the simple command:

$ pip3 install -r requirements.txt

You will fail miserably. The reason is simple: Kapre requires at least TensorFlow 1.15, but the default TensorFlow for the Raspberry Pi is 1.14.

TensorFlow 2 cannot be installed by default on the Raspberry Pi 4. We followed the instructions from this link:

https://itnext.io/installing-tensorflow-2-3-0-for-raspberry-pi3-4-debian-buster-11447cb31fc4

The above link makes the following points:

  1. The script used is hosted on Google Drive (check its contents; this is a security and stability concern).
  2. TFLite can provide significant improvements in prediction speed. You may convert your normal models to TFLite for faster processing, but you need to test in each application whether accuracy is maintained.
  3. When installing in a venv, do not use sudo. In case you did (as I did), reinstall TensorFlow (the last step) without sudo. Test that TensorFlow is installed from Python before continuing.
  4. If you are able to import the tensorflow package in the Python interpreter as stated in the link, you are good to go (see the sketch below).
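
As a quick check, and to illustrate the TFLite conversion mentioned above, something like the following could be run inside the venv. This is a minimal sketch; the model path is a hypothetical placeholder, not from the original project.

import tensorflow as tf

# Verify the install: should report the 2.3.0 build installed above
print(tf.__version__)

# Hypothetical: load a previously trained Keras model and convert it
model = tf.keras.models.load_model('models/audio_classifier.h5')  # assumed path
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

with open('models/audio_classifier.tflite', 'wb') as f:
    f.write(tflite_model)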

For the TensorFlow and NumPy packages, we install the ATLAS library:

$ sudo apt install libatlas-base-dev

Another issue we faced was with the LLVM library: Numba 0.48 requires LLVM v7, not v9.

 $ sudo apt install llvm-7 
 $ LLVM_CONFIG=llvm-config-7 pip3 install llvmlite 

Now running the dependencies command should work:

$ pip3 install -r requirements.txt
Installing packages for ML

We ran our predict module to see it working. We had previously trained the machine learning network on a desktop PC to classify a mixture of sea sounds collected in a previous experiment and from the web.

 $ python predict.py 
Running Prediction on Raspberry Pi
Running Training on Raspberry Pi

All this is nice, but where is the hydrophone? The next steps describe the process of properly interfacing the DAC+ADC Pro module.

We followed the procedures from the manufacturer: Configuring Linux 4.x or higher | HiFiBerry

Although the kernel is version 5, we did not need to apply the workaround presented there for the EEPROM. First, we tested playback. Please note that the device is not the default one, but rather sysdefault, as seen in the sound device list below.

Sound Devices List

The following commands will output sound from the left and right channels, respectively.

 $ speaker-test -D sysdefault -t sine -f 440 -c 2 -s 1
 $ speaker-test -D sysdefault -t sine -f 440 -c 2 -s 2

Now, to test recording, try the following command: it records for 3 seconds in signed 16-bit WAV format and shows a VU meter.

$ arecord -D sysdefault -r 44100 -d 3 -f S16 -t wav -V mono test.wav

You will notice that the hydrophone signal is very low. This is expected, and we need to boost the signal with the following command:

$ amixer -D sysdefault cset name='ADC Capture Volume' 96,96

The value of this command is in steps between 0 and 104, each step setting the ADC volume by 0.5 dB, so 96 steps correspond to 48 dB. You may adjust this value to a lower level depending on the sensitivity; a 33 dB gain should work just fine, as the helper below illustrates. You may then record again and observe the VU meter levels.
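
A tiny helper makes the dB-to-steps arithmetic explicit. This is just an illustration of the conversion described above, not part of the original code.

def gain_db_to_steps(gain_db):
    # 0.5 dB per step, clamped to the mixer's 0..104 range
    steps = int(round(gain_db / 0.5))
    return max(0, min(steps, 104))

print(gain_db_to_steps(48))  # -> 96, the value used in the amixer command
print(gain_db_to_steps(33))  # -> 66, the suggested lower gain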

For Python to access the audio device, we installed the ALSA development library and the Python bindings:

 $ sudo apt-get install libasound2-dev
 $ pip install pyalsaaudio 

To ensure that the exact same code is used for both off-line and on-line prediction, we used a file wrapper in Python. The sampling function records for a fixed duration (such as 1 second) and stores the result in a file. The main loop then acquires this file and processes it just like off-line content.

def readdaq(self):
    # Number of chunk-sized reads needed to cover the sample duration
    loops = int(self.sample_dur_seconds * round(self.fs_Hz / self.chunk))
    frames = []
    while loops > 0:
        loops -= 1
        # Read one period of data from the ALSA capture device
        l, data = self.inp.read()
        if l > 0:
            if l != self.chunk:
                print("Sampling Error ", l)
            frames.append(data)

    # Store the captured frames as a 16-bit WAV file on the RAM drive
    wf = wave.open('/mnt/tmpfs/sample.wav', 'wb')
    wf.setnchannels(self.channels)
    wf.setsampwidth(2)
    wf.setframerate(self.fs_Hz)
    wf.writeframes(b''.join(frames))
    wf.close()

    # Read the file back through the same path used for off-line data
    batch = self.readfile('/mnt/tmpfs/sample.wav')

    return batch
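
For reference, here is a minimal sketch of how the capture device used as self.inp above might be configured with pyalsaaudio. The device name, rate, and format follow the arecord test earlier, but the exact settings of the original class are assumptions.

import alsaaudio

# Open the HAT's capture device in blocking mode (the device keyword is
# used by recent pyalsaaudio releases; older ones used card= instead)
inp = alsaaudio.PCM(alsaaudio.PCM_CAPTURE, alsaaudio.PCM_NORMAL,
                    device='sysdefault')
inp.setchannels(1)                          # mono hydrophone input
inp.setrate(44100)                          # matches fs_Hz
inp.setformat(alsaaudio.PCM_FORMAT_S16_LE)  # 16-bit samples (sampwidth 2)
inp.setperiodsize(1024)                     # the chunk returned per read()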

Note that the file is saved in a temporary location. This is a RAM drive, created as follows:

 $ sudo mkdir /mnt/tmpfs
 $ sudo mount -o size=1M -t tmpfs none /mnt/tmpfs 
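
Note that this mount does not survive a reboot. To make it persistent, a standard tmpfs entry could be added to /etc/fstab; the line below is a suggestion, not from the original setup.

 $ echo 'tmpfs /mnt/tmpfs tmpfs defaults,size=1M 0 0' | sudo tee -a /etc/fstab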

The reason for using a RAM drive is that we do not want to wear out our SD card or suffer slow-downs due to file system activity. We use this scratchpad area to write the samples and use them for processing; each sample is then overwritten by the next one.

This way, we streamline the testing process and are able to run the same code with or without a hydrophone, either on a desktop or on the Raspberry Pi.

Conclusion

Using open-source software and off-the-shelf hardware, we are able to build a platform for sound classification using machine learning. We created a uniform testbed that can be used to test the methods under evaluation either on-line or off-line.

Acknowledgment

The SMART Cable was developed through the SMART Cable Project. This project is part of the Research & Innovation Foundation Framework Programme RESTART 2016-2020 for Research, Technological Development and Innovation and co-funded by the Republic of Cyprus and the European Regional Development Fund with grant number ENTERPRISES/0916/0066.

For more information you may visit: SMART Cable : Cyprus Subsea Consulting & Services (cyprus-subsea.com)

This project is also part of the MARI-Sense Project INTEGRATED/0918/0032. The MARI-Sense project develops intelligent systems that allow human operators to make sense of the complex maritime environment for applications including transport and shipping, coastal tourism, search and rescue, and maritime spatial planning.


Creating a Smart Hydrophone Processing System – Hardware

In scientific projects, it is often necessary to sample sounds at remote locations for classification or other purposes. As data link rates may be low or unreliable, transmitting raw samples to onshore processing centers may not be an option. An alternative is to do off-line processing in batches: raw data are stored in non-volatile memory until a physical visit replaces the storage media, so the first batch is processed only after any events have already occurred. It is obvious that a module performing real-time classification would instead have to send only a very small amount of data to possibly cloud-based data centers.

In this direction, we will show how to build the basic elements needed for such systems using simple parts and open-source software packages, providing a platform on which to build classification systems for hydrophone sensors.

Hardware

The hardware we are going to use for this example is based on the Raspberry Pi 4.

Raspberry Pi 4 module

To interface a hydrophone, an analog-to-digital converter is needed. For our experiments, we used the DAC+ADC Pro from HiFiBerry (HiFiBerry DAC+ ADC Pro | HiFiBerry). This module offers both audio input and output; in our case, we are only interested in the audio input.

HiFiBerry DAC+ ADC Pro

This HAT is placed on top of the Raspberry Pi. To mount the module properly, we used the following items, as seen in the next picture.

ADC and Raspberry Pi side by side

We used two of the nylon standoffs to hold the front side of the HAT. We skipped the back standoffs, as they would not allow the connector to be fully inserted. In the picture above, we have assembled the front standoffs on the Raspberry Pi 4 and are ready to connect the HAT. Also note the small nylon bolts for securing the HAT and the two jumpers we will need for the hydrophone.

Next, we connect the HAT on top of the Raspberry Pi module. We fully insert the connector; no bare pins should protrude from the bottom, as seen in the picture.

ADC on top of Raspberry Pi, connector side

We then place the bolts, ready for tightening the HAT. The HAT will have a slight inclination, as the front standoffs keep it slightly raised relative to the fully inserted connector.

ADC front side mount

After completing the mechanical fit come the jumper settings. The two jumpers J1 and J3 must be placed to provide power to the hydrophone. This is because our hydrophones behave like condenser microphones and do not generate their own voltage. Make sure you check which type of hydrophone you use: depending on the hydrophone technology, you may not need this power, and failing to properly identify the type of hydrophone may damage your equipment.

ADC Jumper Settings

Our hydrophone is the H2a from Aquarian Audio: H2a Hydrophone (aquarianaudio.com). The next pictures show the hydrophone and its connector, ready to be connected to the platform.

H2a Hydrophone Just Before Connection

Next, we connect the hydrophone to the 3.5 mm jack input of the DAC+ADC Pro HAT, and we are ready to go from the hardware standpoint.

H2a Hydrophone Connected

Conclusion

This ends the first part of the smart hydrophone series, concluding the hardware. We presented a setup showing how to integrate off-the-shelf components to support hydrophone sound sampling from the sea.

Acknowledgement

The SMART Cable was developed through the SMART Cable Project. This project is part of the Research & Innovation Foundation Framework Programme RESTART 2016-2020 for Research, Technological Development and Innovation and co-funded by the Republic of Cyprus and the European Regional Development Fund with grant number ENTERPRISES/0916/0066.

For more information you may visit: SMART Cable : Cyprus Subsea Consulting & Services (cyprus-subsea.com)

This project is also part of the MARI-Sense Project INTEGRATED/0918/0032. The MARI-Sense project develops intelligent systems that allow human operators to make sense of the complex maritime environment for applications including transport and shipping, coastal tourism, search and rescue, and maritime spatial planning.

MARISENSE Experiment #1

Last May, we conducted the first experiment of the program. The purpose was to create a dataset of underwater sounds and corresponding surface images that would be used to develop and test our algorithms. Various kinds of equipment were used, from RIBs and sailboats up to underwater ROVs and gliders. You can get a taste of how the experiment was conducted and, at the same time, hear the respective underwater sounds.

Embedded Hour, Episode 02

In May 2020, we presented a compact heterogeneous computing embedded platform.

The platform is based on a ColdFire microcontroller and a Spartan-6 LX9 device, all on a two-layer board. Simple and effective. We also demonstrated two of the prototype systems live, showing the configuration of the FPGA from the microcontroller and then the seamless register mapping into the microcontroller memory space.

Perseus CLE goes DVI

Some time ago, I wanted to test the capabilities of the PerseusCLE board. I created an expansion card which supported motor drivers for DC brushed or stepper motors, analog front ends, etc.

I always wanted to try outputting a DVI/HDMI signal using TMDS, and I knew that my Spartan-6 device was capable of doing this. However, when I initially designed the PerseusCLE, I did not think of trying this at all; I just wanted a stripped-down version of my bulky PerseusCFE as a more cost-effective solution.

What do CLE/CFE stand for, anyway? Well, I started with the CFE: ColdFire Full Edition.

Perseus CFE Board

This board had all the bells and whistles I wanted at the time: dual switching power supplies (logic and motor power), a second crystal for the FPGA clock, SDRAM on the FPGA, Ethernet connectivity, USB connectivity, SD card, CAN bus, model servo PWM outputs, and lots of Olimex UEXT connectors for UEXT modules. All on just a 2-layer PCB.

The board is large, though, and I wanted something smaller and cheaper. Hence, I decided to strip down many of the features of the Full Edition, creating the CLE: ColdFire Light Edition.

Perseus CLE Board

Features were reduced to a minimum: SD card, native USB only, no separate FPGA clock (it shares the MCU clock), still plenty of connectors, and a single switching power supply.

So, while designing the expansion board, I thought I would give it a try and add an HDMI connector, plus a crystal oscillator to provide the missing external clock to my FPGA. I tried to match the lengths of the TMDS signals from the FPGA to the expansion board, as I had not originally planned for equal signal lengths up to the PerseusCLE connectors; driving such high-speed signals was never my intention back then. I had to use Excel, measuring the length of each signal on the main board, calculating the actual signal length, and adding the corresponding missing length on the I/O board. Pretty challenging.

You can find an explanation of how DVI/HDMI works as a concept, along with a Verilog implementation, at FPGA4FUN. However, I am using VHDL, and searching the net I found various implementations, some from Xilinx and some derived from Mike Field's work. I used a mix of the available sources; I particularly liked this repo from drxzc. I also created and tested Xilinx IP, like PLL and SERDES modules, with GHDL. The core trick is the TMDS encoding, sketched below.
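
As a rough illustration of what that encoding does (a Python model for exposition only, not the VHDL used on the board), the first TMDS stage turns each 8-bit pixel byte into a 9-bit transition-minimized word; the DC-balancing stage that produces the final 10-bit symbol is omitted here.

def tmds_transition_minimize(d):
    # Stage 1 of TMDS: choose XOR or XNOR chaining to minimize the
    # number of transitions in the serialized bit stream
    bits = [(d >> i) & 1 for i in range(8)]     # LSB first
    ones = sum(bits)
    use_xnor = ones > 4 or (ones == 4 and bits[0] == 0)
    q = [bits[0]]
    for i in range(1, 8):
        nxt = q[i - 1] ^ bits[i]
        q.append(1 - nxt if use_xnor else nxt)  # XNOR inverts the XOR
    q.append(0 if use_xnor else 1)              # bit 8 records the choice
    return q                                    # 9 bits, LSB first

print(tmds_transition_minimize(0xAA))  # alternating byte -> XNOR path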

I was so anxious that I procrastinated over checking the actual hardware. After creating the interconnections and verifying that the setup was probably good, I decided to give it a try.

Perseus CLE and DVI Expansion

Although I expected to fail, I hoped for the best. Everything was against this working: the TMDS signals had to pass through a simple flat cable to interconnect the boards, and my reference 25 MHz clock had to travel back to the main board over wires. To reduce the signal integrity effects, I used a low resolution of 640×480. For simplicity, I added a simple pattern generator; the idea, if this worked, was to replace it with video memory written by the microcontroller. The bit rate on the data lanes would be 10 times my 25 MHz clock, giving 250 Mbps per lane. This is where the TV shows say: "Don't try this at home; experiment executed by experts." Well, I would stick to the first part, "Don't try this at home"; I see no expert around…

I put my FPGA configuration on my SD card and modified the COFILOS code to load this DVI configuration. I checked that my reference clock was running. My poor 100 MHz DPO had no real chance of capturing the high-speed data lanes at the serializer outputs.

DVI Test Setup

When my full setup was up and running, I connected the HDMI cable… Silence. Excitement. Fear. Waiting to see the result. Nope; I needed to select the correct HDMI input on the television. OK. Let's see. Oh!

My test monitor with the noisy test pattern on it

It worked! Well, not as it should, but given the circumstances and the implementation I had to follow, I am more than happy. The next boards will be tailored to provide proper signal integrity and produce a clean signal.

I did a small redesign in my VHDL to make sure that the issue I was seeing was not related to internal FPGA timing: instead of driving with my test pattern generator, I tried driving a constant RGB value. Retrying this on another monitor, I got very similar results. I need more specialized hardware to drive it with proper signal integrity and clock signals. No surprise there.

At a later time, I also tried to use the internal PLL to generate my clock frequencies, as I was not happy with my external 25 MHz clock running around the boards. I also made some modifications to my VHDL code, as follows.

First, I created generic inputs for the various VESA timings, so the design is now parametric (see the sketch below for the kind of values these carry). I also changed the color values to be zero during sync. To reduce timing issues in place and route, I used registered outputs from the test pattern generator.
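
For illustration, these are the standard VESA 640×480@60 numbers such generics would carry, written here as a Python sketch for exposition; the actual generic names in the design are not shown in the article.

# Standard VESA 640x480@60 timing parameters (illustrative values;
# the design approximates the 25.175 MHz pixel clock with 25 MHz)
VESA_640x480_60 = {
    'h_active': 640, 'h_front_porch': 16, 'h_sync': 96, 'h_back_porch': 48,
    'v_active': 480, 'v_front_porch': 10, 'v_sync': 2,  'v_back_porch': 33,
}

h_total = sum(VESA_640x480_60[k] for k in
              ('h_active', 'h_front_porch', 'h_sync', 'h_back_porch'))
v_total = sum(VESA_640x480_60[k] for k in
              ('v_active', 'v_front_porch', 'v_sync', 'v_back_porch'))
print(h_total, v_total)  # 800 x 525; 800*525*60 Hz ≈ 25.2 MHz pixel clock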

I started the experiments again with the clock coming from my MCU and the PLL creating the clock frequencies, but still got the same results.

DVI Setup, 2nd try

As this setup had the same behavior as the original configuration, I reverted to the external 25 MHz clock. It seems that this worked after the last changes! I had my DVI output on my monitor. Sometimes wiggling the HDMI cable could lose the stability of the signal, or maybe the stability of my clock signal traveling around on cables was not good enough for a clean output, but nevertheless, the proof of concept was complete.

Working DVI pattern

It was really fun to work with SERDES and proprietary vendor IPs and see how they actually work. Getting into these details provides a good background for other applications.