How to use Edge Impulse AI/ML tools on SAMA7G54 MPU
Objective
This tutorial shows you how to use Edge Impulse AI/ML tools on Microchip MPUs under Linux® using the SAMA7G54-EK.
The SAMA7G54 is a high-performance, Arm® Cortex®-A7 CPU-based embedded microprocessor (MPU) running up to 1 GHz. It supports multiple memories such as 16-bit DDR2, DDR3, DDR3L, LPDDR2, LPDDR3 with flexible boot options from octal/quad Serial Peripheral Interface (SPI), SD/eMMC as well as 8-bit SLC/MLC NAND Flash.
The SAMA7G54 integrates complete imaging and audio subsystems with 12-bit parallel and/or MIPI-CSI2 camera interfaces supporting up to 8 megapixels and 720p @ 60 fps, up to four I2S, one SPDIF transmitter and receiver and a 4-stereo channel audio sample rate converter.
The device also features a large number of connectivity options including Dual Ethernet (one Gigabit Ethernet and one 10/100 Ethernet), six CAN-FD and three high-speed USBs. Advanced security functions like secure boot, secure key storage, and high-performance crypto accelerators for AES, SHA, RSA and ECC are also supported.
Microchip provides an optimized power management solution for the SAMA7G54. The MCP16502 has been fully tested and optimized to provide the best power vs. performance for the SAMA7G54.
Hardware Prerequisites
- SAMA7G54 Evaluation Kit
- Linux machine
- USB camera
- USB-to-serial cable (TTL level)
- USB cable (micro USB to type-A cable)
- Ethernet cable
- SD card
The hardware setup is shown in the accompanying figure.
Software Tools
PC development environment: This demo needs an Ubuntu PC to build the SD card image.
Docker environment: Building the SD card image uses Docker. If Docker is not installed in your environment, please follow these instructions to install it: How to install docker on Ubuntu.
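If Docker is not installed yet, the commands below sketch one minimal way to set it up on Ubuntu using the docker.io package from the Ubuntu repositories (an assumption about your setup; the official instructions linked above remain the reference):
user@at91:~/$ sudo apt-get update
user@at91:~/$ sudo apt-get install -y docker.io
user@at91:~/$ sudo docker run hello-world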
Edge Impulse account: This demo is operated in the Edge Impulse online studio. To use the Edge Impulse online studio, a user account is needed. Sign up at: https://edgeimpulse.com/.
Hardware Connection
Jumper Settings
Software Setup
Users can use the pre-built image or build the SD-card image manually.
Use a pre-built image:
SD card image downloading address: https://cdn.edgeimpulse.com/build-system/microchip.sama7.100724.sdcard.img.zip
This pre-built image has edge-impulse-linux preinstalled.
Download the zip file, unzip it, and flash the image to the SD card. Please click here for guidance on how to flash an SD card.
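As a command-line alternative to a graphical flashing tool, the downloaded archive can be unzipped and written with dd, as sketched below. The extracted image file name and the /dev/sdX device path are assumptions; double-check the device with lsblk before writing, as dd will overwrite it completely:
user@at91:~/$ unzip microchip.sama7.100724.sdcard.img.zip
user@at91:~/$ sudo dd if=microchip.sama7.100724.sdcard.img of=/dev/sdX bs=4M status=progress conv=fsync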
Build the image manually:
Use Docker and Buildroot-at91 to build the SAMA7G54-EK SD card image. First, we need to clone the source code.
Prepare for source code:
Download the SAMA7G54 Edge Impulse example source code:
user@at91:~/$ git clone https://github.com/edgeimpulse/example-microchip-sama7g54.git
user@at91:~/$ cd example-microchip-sama7g54
user@at91:~/example-microchip-sama7g54$ ls
buildroot-config Dockerfile LICENSE sharp-linux-armv7.node
Config.in example-standalone-inferencing-linux.mk README.md
- Build the image in Docker:
- Build the docker container and run it with:
user@at91:~/example-microchip-sama7g54$ sudo docker build . -t microchip
user@at91:~/example-microchip-sama7g54$ sudo docker run -it -v $PWD/build:/buildroot-microchip/buildroot-at91/output/images microchip
- Now let’s build the SD card image in the Docker environment:
root@a4d260547a56:/# cd buildroot-microchip/buildroot-at91/
root@a4d260547a56:/buildroot-microchip/buildroot-at91# ls
CHANGES Config.in.legacy Makefile.legacy board dl linux sharp-linux-armv7.node toolchain COPYING DEVELOPERS README boot docs output support utils Config.in Makefile arch configs fs package system
- Run the command "make menuconfig" if the application depends on any additional packages and add them to the Buildroot configuration. Otherwise, exit menuconfig without making changes.
root@a4d260547a56:/buildroot-microchip/buildroot-at91# make menuconfig
- Then continue to build the image with make:
root@a4d260547a56:/buildroot-microchip/buildroot-at91# make
After completing the make process, go to the example-microchip-sama7g54 folder (outside of the container). You will find that a build folder was generated:
user@at91:~/example-microchip-sama7g54$ ls
build Config.in example-standalone-inferencing-linux.mk README.md buildroot-config Dockerfile LICENSE sharp-linux-armv7.node
The built SD card image is in this build folder; copy build/sdcard.img and flash it to the SD card (the same dd approach sketched above also works here). Please click here for guidance on how to flash an SD card.
Hands-On (Train the Model)
In this hands-on chapter, let’s learn how to train a model and design a complete AI/ML project using Edge Impulse. The example used here detects SD cards using a USB camera on the SAMA7G54-EK.
Log in to the Edge Impulse website.
Log in to your account at: https://studio.edgeimpulse.com/login.
Create an Edge Impulse project.
- After logging in successfully, you will see the + Create new project button on your home page. Click this button to create a new Edge Impulse project.
- Name the project. Then, choose your project setting by selecting Public or Private. Lastly, click Create new project.
Review Dashboard information.
Now we have a project overview. In the Dashboard menu, we can see the project information, add data, etc.
Choose the target device.
In the Dashboard menu, click the top right button to add the target device type. Choose Microchip SAMA7G54 Evaluation Kit in the Target device drop-down box, and then click the Save button to save the configuration.
Choose a device to sample data.
- Click the left menu item Devices to connect to a device that can sample training data. An easy way to do this is to connect to a smartphone that can take pictures quickly.
- Use the smartphone to scan the QR code and allow it to use the camera if a pop-up prompts you to do so. Pictures you take will be uploaded automatically to the online studio of the project you just created. Alternatively, you can upload sample pictures from your computer or development board.
- Once your smartphone connects to Edge Impulse web successfully, check the smartphone screen and click Collecting images?
- Give the web page permission to access the camera of the smartphone.
- Now the smartphone can take photos of SD cards.
- At this step, you can click on the Label: unknown text box to set the label for the image. This can help distinguish between different sample acquisition sessions.
- Click on the Capture button to transfer the SD card picture to the online studio automatically.
Check sampled data.
Sample a lot of pictures to train a good model. In this example, 63 SD-card photos are taken at different angles as experimental data.
Label the sampled data.
- Label the sampled data for training.
- Now save the labeled data. Normally, a lot of work is required to label all the data.
Create the training model.
- Now let’s create the training model named “Impulse”. Click the menu Experiments. Then, click Create a new impulse as shown in the accompanying image.
- The sampled data should be preprocessed before training. When passing an image to the input block, raw features are extracted. The processing block allows you to filter these raw features to keep only the most significant ones; the ones that the learning block will need to use. Click the Add button to add preprocessing.
- Add a learning block to train data.
- Finally, save the configured Impulse.
Image processing.
- Set the color mode to Grayscale. Of course, you can set RGB as well.
- After configuring the parameters, generate the features by clicking Generate features on the top menu. Generating features is a key step for converting raw data (Raw Features) into feature vectors for training machine learning models. This step is often referred to as feature extraction, which enhances the performance of the model by extracting meaningful information from raw sensor data.
- Check for job completion on the bottom left output console.
Train the model.
- From the left menu, click Object detection to choose a model to begin training.
- In this demo, we choose the FOMO MobileNetV2 0.35.
- Click the Save & train button after the model is chosen. The training process will start. You can see the training output log message in the top right window.
Live classification
- In this section, you can test the trained model’s performance on your smartphone, PC, or another device that can capture data.
- For example, the fastest way to test live classification is by using your smartphone. Scan the QR Code with a smartphone.
- Pay attention at this step: when your smartphone is connected, don’t click the Collecting images? button. Instead, click the Switch to classification mode button.
- In classification mode, your trained model will be downloaded and deployed to your smartphone.
- The classification runs in real time on the smartphone.
- The SD cards were detected within 1 ms. Another way to test the model is to detect the database pictures that we captured before.
- For example, the picture sd.563gdjtr was selected for testing.
Model testing.
- Let’s recall the previous part, Data acquisition. We can see that the captured data was split into two parts:
A. TRAIN: 83%
B. TEST: 17%
- The TEST data was not used in the previous training process; it will be used to test the trained model. Click the left menu Model testing to check the test results of the TEST data detected by the model.
- So far, we have trained an initial model on the Edge Impulse cloud servers. In the next chapter, we will introduce how to deploy this trained model to our SAMA7G54-EK target for real testing.
Hands-On (Deploy the model on SAMA7G54-EK)
Flash the SAMA7G54-EK SD card image to the SD card using balenaEtcher or another tool. When the SAMA7G54-EK runs, please be aware that the login username and password in the SAMA7G54-EK serial terminal are:
- Username: root
- Password: edgeimpulse
If you would like to use SSH to connect to the board, some additional steps are necessary:
- cd /etc/ssh/
- nano sshd_config
- Uncomment and change PermitRootLogin prohibit-password to PermitRootLogin yes
- Uncomment PasswordAuthentication yes
- Press CTRL+X, then Y, then Enter to save and exit
- Run reboot to restart SSH
- Run ifconfig to get the board's IP address
- On your host machine, run ssh root@www.xxx.yyy.zzz (using the board's IP address)
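The same sshd_config changes can also be made non-interactively with sed instead of nano, as sketched below (run on the board; it assumes the stock sshd_config shipped in the image and a sed that supports the \? quantifier, as GNU and BusyBox sed do):
# sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
# sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/sshd_config
# reboot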
Deploy .eim file.
- From the left menu, click Deployment, and search for the board: Microchip SAMA7G54 Evaluation Kit (ARMv7).
- Then click the Build button to build the executable file.
- The browser will download the executable file automatically when the build is complete. You can find the file in your browser’s default download location.
- The file name is: detect-sd-card-linux-armv7-v9.eim
- Edge Impulse Model (EIM) files are native Linux and macOS® binary applications that contain your full impulse created in Edge Impulse Studio. The impulse consists of the signal processing block(s) along with any learning and anomaly block(s) you added and trained. EIM files are compiled for your particular system architecture and are used to run inference natively on your system. Detailed documentation about EIM files can be found at https://docs.edgeimpulse.com/docs/run-inference/linux-eim-executable.
- Upload the detect-sd-card-linux-armv7-v9.eim file to the SAMA7G54-EK using an FTP tool such as the FileZilla client, and then check the file on the SAMA7G54-EK serial terminal:
# ls
detect-sd-card-linux-armv7-v9.eim video-capture-at91utils.sh
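If you prefer the command line to an FTP client, the .eim file can also be copied from the host over SSH with scp (a sketch; replace www.xxx.yyy.zzz with the board's IP address obtained from ifconfig, and note that SSH root login must be enabled as described above):
user@at91:~/$ scp detect-sd-card-linux-armv7-v9.eim root@www.xxx.yyy.zzz:/home/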
Make sure the file is executable:
# chmod +x detect-sd-card-linux-armv7-v9.eim
- Start the demo:
# edge-impulse-linux-runner --model-file detect-sd-card-linux-armv7-v9.eim
[RUN] Starting the image classifier for Wayne Jia / Detect SD card (v9)
[RUN] Parameters image size 96x96 px (1 channels) classes [ 'SD' ]
[GST] checking for /etc/os-release
[RUN] Using camera /base/soc/ehci@500000-3 starting...
[RUN] Connected to camera
- Want to see a feed of the camera and live classification in your browser? Go to http://10.160.138.25:4912.
- Open the URL in a browser and point the camera at the SD cards. You will see the real-time inferencing in the browser window as shown in the accompanying image:
- The camera captured two SD cards, and both are detected as “SD” with a confidence score. Also, the time per inference is 9 ms on the SAMA7G54-EK.
Deploy C++ library.
- We can run the SD card detection demo with the .eim file built on the Edge Impulse cloud servers. However, you may want to build a customized application rather than using an unmodifiable .eim binary. In this chapter, we’ll demonstrate how to build your own C++ application with the trained model in Buildroot-at91 for the target SAMA7G54-EK.
- From the left menu, select Deployment, select C++ library, and then click the Build button.
- The browser will download the C++ library zip package automatically once the build is complete. You can find the file in your browser’s default download location.
- Unzip the detect-sd-card-v10.zip file and copy the three folders framed in red (see the accompanying image) to the Buildroot-at91 tree in Docker.
- Copy the three folders to the build folder shared between your Ubuntu host and Docker. Please note that the build folder is mapped to the container directory /buildroot-microchip/buildroot-at91/output/images
user@at91:~/example-microchip-sama7g54$ sudo cp -r edge-impulse-sdk model-parameters tflite-model build
user@at91:~/example-microchip-sama7g54$ ls build
at91bootstrap.bin rootfs.tar sama7g5ek_pdmc0.dtbo at91-sama7g5ek.dtb sama7g5ek_at25ff321a_click1.dtbo sama7g5ek_wilc3000.dtbo
boot.bin sama7g5ek_i2s0_pcm5102a.dtbo sama7g5-sdcardboot-uboot-4.0.9-rc1.bin boot.vfat sama7g5ek_i2s0_proto.dtbo sdcard.img edge-impulse-sdk sama7g5ek_isc_imx219.dtbo tflite-model model-parameters sama7g5ek_isc_imx274.dtbo u-boot.bin rootfs.ext2 sama7g5ek.itb uboot-env.bin rootfs.ext4 sama7g5ek.its zImage
- Check your Docker container ID:
user@at91:~/example-microchip-sama7g54$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a4d260547a56 microchip "/bin/bash" Less than a second ago Up 4 seconds gallant_jones
- Enter the Docker container:
user@at91:~/example-microchip-sama7g54$ sudo docker exec -it a4d260547a56 /bin/bash
- Copy the three unzipped folders to the /buildroot-microchip/buildroot-at91/package/example-standalone-inferencing-linux folder.
root@a4d260547a56:/# cd /buildroot-microchip/buildroot-at91/output/images
root@a4d260547a56:/buildroot-microchip/buildroot-at91/output/images# cp -r edge-impulse-sdk model-parameters tflite-model ../../package/example-standalone-inferencing-linux/
root@a4d260547a56:/buildroot-microchip/buildroot-at91/output/images# cd ../../package/example-standalone-inferencing-linux/
root@a4d260547a56:/buildroot-microchip/buildroot-at91/package/example-standalone-inferencing-linux# ls
Config.in build-opencv-linux.sh inc tensorflow-lite tidl-rt LICENSE build-opencv-mac.sh ingestion-sdk-c tflite utils Makefile edge-impulse-sdk model-parameters tflite-model README.md example-standalone-inferencing-linux.mk source third_party
- Modify the customized code in source/custom.cpp if necessary.
root@a4d260547a56:/buildroot-microchip/buildroot-at91/package/example-standalone-inferencing-linux# ls source
audio.cpp camera.cpp collect.cpp custom.cpp eim.cpp
- Add example-standalone-inferencing-linux to the image build configuration in menuconfig:
root@a4d260547a56:/buildroot-microchip/buildroot-at91/package/example-standalone-inferencing-linux# cd ../..
root@a4d260547a56:/buildroot-microchip/buildroot-at91# make menuconfig
- Select Example Standalone Inferencing Linux under the path Target packages → Miscellaneous. Save the setting in menuconfig and build the image again:
root@a4d260547a56:/buildroot-microchip/buildroot-at91# make
make: Warning: File 'docs/manual/manual.mk' has modification time 1087962 s in the future…
- Flash the SD card image after the build is complete. Boot up the SAMA7G54-EK with the SD card, and you will find an application called custom in the /home folder. The custom application is built from custom.cpp in example-standalone-inferencing-linux/source.
# ls /home/
custom
- Let’s recall Step 11: Live classification from the chapter Train the Model. There is a button to copy the Raw features, which can be used as testing data for the custom application.
# ./custom raw_features.txt
Predictions (DSP: 5 ms., Classification: 80 ms., Anomaly: 0 ms.):
#Object detection results:
SD (0.886719) [ x: 48, y: 40, width: 8, height: 8 ]
# ls
custom debug.bmp raw_features.txt
In this demo, we use C++ code to run inference on raw RGB image data from a text file and print the inferencing result to the log. Click the copy button next to the Raw features, paste them into a text file named raw_features.txt, and upload it to the /home folder of the SAMA7G54-EK board; the C++ application can then be run as shown above. The demo also generates an inferenced picture, debug.bmp, which can be downloaded to the host computer for inspection. We have shown a generic way of using the C++ library and how to integrate it within Buildroot to build a custom SD card image. You are then free to customize the C++ code to build and run a more complete application.
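If SSH was enabled earlier, scp offers a quick way to push the raw_features.txt test vector to the board and pull the generated debug.bmp back to the host (a sketch; www.xxx.yyy.zzz is a placeholder for the board's IP address):
user@at91:~/$ scp raw_features.txt root@www.xxx.yyy.zzz:/home/
user@at91:~/$ scp root@www.xxx.yyy.zzz:/home/debug.bmp .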
Advanced: How to optimize the model
Optimizing an object detection model trained by Edge Impulse involves several steps to improve its accuracy, reduce false positives/negatives, and ensure it runs efficiently on the target hardware. Here are suggested optimization methods for an object detection model:
- Improve Data Quality and Quantity
- Use EON Tuner
Improve picture quality and quantity
- Let’s capture more data by taking pictures on a consistent background. Go to Data acquisition on the left menu and connect your smartphone to take more SD card pictures. Make sure the SD card is on a clean background.
- Click the chip icon to capture SD card pictures from the smartphone (you need to scan the QR Code). In this test, 7 SD card pictures are captured with a white background.
- Go to the Retrain model menu and train again. Then check the Model testing results. We can see that the accuracy improved to 61.54%.
- However, the number of samples here is small for ML; we only use it to illustrate the training process. In an actual application, you need to collect a large amount of raw training data to get a better model.
Use EON Tuner
- As you can see, we get only 61.54% accuracy when processing the test dataset. This number is too low. The final result really depends on the settings of the model used during the training phase. This is why we need a way to fine-tune these settings.
- One of the most powerful tools that Edge Impulse provides is dedicated precisely to this task: the EON Tuner, a tool that will automatically test different models with different training settings and show the user the best ones.
- To access the EON Tuner, go to Experiments and then EON Tuner:
- When you click on Run EON Tuner, you will see a window like what is shown in the accompanying figure.
- It is possible to customize how the EON Tuner runs. However, we will focus on how to run from a template. Click on Use Case Templates and select Object Detection (centroids).
- As you can see, the Search space configuration has been updated with the selected template. You can give a specific name for each EON Tuner run in the top right side text field. See the accompanying image as an example.
- Once you are ready, you can start the Tuner. It will take quite some time to run. After completion, you can see the different results and select the Impulse you prefer based on your application requirements. For this example, we will select the first one, as shown in the accompanying image.
- Now, you can see on the Object Detection tab, the accuracy has been significantly improved:
- When running the model testing, see the accompanying figure for the improved accuracy.
- Using the EON Tuner is a good way to easily test various model architectures. You can customize the way it works; to test more or different settings, please have a look at the Edge Impulse documentation.