How to use Edge Impulse AI/ML tools on SAMA7G54 MPU

Last modified by Microchip on 2024/10/03 10:35

Objective

This tutorial shows you how to use Edge Impulse AI/ML tools on Microchip MPUs under Linux® using the SAMA7G54-EK.

The SAMA7G54 is a high-performance, Arm® Cortex®-A7 CPU-based embedded microprocessor (MPU) running up to 1 GHz. It supports multiple memories such as 16-bit DDR2, DDR3, DDR3L, LPDDR2, LPDDR3 with flexible boot options from octal/quad Serial Peripheral Interface (SPI), SD/eMMC as well as 8-bit SLC/MLC NAND Flash.

The SAMA7G54 integrates complete imaging and audio subsystems with 12-bit parallel and/or MIPI-CSI2 camera interfaces supporting up to 8 megapixels and 720p @ 60 fps, up to four I2S, one SPDIF transmitter and receiver and a 4-stereo channel audio sample rate converter.

The device also features a large number of connectivity options including Dual Ethernet (one Gigabit Ethernet and one 10/100 Ethernet), six CAN-FD and three high-speed USBs. Advanced security functions like secure boot, secure key storage, and high-performance crypto accelerators for AES, SHA, RSA and ECC are also supported.

Microchip provides an optimized power management solution for the SAMA7G54. The MCP16502 has been fully tested and optimized to provide the best power vs. performance for the SAMA7G54.

Hardware Prerequisites

  • SAMA7G54 Evaluation Kit
  • Linux machine
  • USB camera
  • USB-to-serial cable (TTL level)
  • USB cable (micro USB to type-A cable)
  • Ethernet cable
  • SD card

    The hardware setup is shown in the accompanying figure.

    Hardware Setup

Software Tools

  • PC development environment: This demo needs an Ubuntu PC to build the SD card image.

  • Docker environment: Building the SD card image uses Docker. If Docker is not installed on your machine, follow the instructions in How to install docker on Ubuntu; a minimal installation sketch is also shown after this list.

  • Edge Impulse account: This demo uses the Edge Impulse online studio, which requires a user account. The sign-up address is: https://edgeimpulse.com/.
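
If Docker is not yet available on the Ubuntu host, the commands below are a minimal installation sketch (two common options; the docker.io package and the get.docker.com convenience script are generic Docker/Ubuntu mechanisms, so defer to the linked guide if it differs):

# Option 1: install Docker from the Ubuntu repositories
sudo apt-get update
sudo apt-get install -y docker.io

# Option 2: use Docker's convenience script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Optional: run docker without sudo (log out and back in afterwards)
sudo usermod -aG docker $USER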

Hardware Connection

Hardware Connection

Jumper Settings


Software Setup

Users can use the pre-built image or build the SD-card image manually.

  1. Use a pre-built image:

  2. Build the image manually:

    Use Docker and Buildroot-at91 to build the SAMA7G54-EK SD card image. First, we need to clone the source code.

    • Prepare the source code:

      Download the SAMA7G54 Edge Impulse example source code:

      user@at91:~/$ git clone https://github.com/edgeimpulse/example-microchip-sama7g54.git

      user@at91:~/$ cd example-microchip-sama7g54

      user@at91:~/example-microchip-sama7g54$ ls

      buildroot-config  Dockerfile  LICENSE    sharp-linux-armv7.node

      Config.in  example-standalone-inferencing-linux.mk  README.md

    • Build the image in Docker:
      • Build the docker container and run it with:

      user@at91:~/example-microchip-sama7g54$ sudo docker build . -t microchip                                                                                  
      user@at91:~/example-microchip-sama7g54$ sudo docker run -it -v $PWD/build:/buildroot-microchip/buildroot-at91/output/images microchip

      • Now let’s build the SD card image in the docker environment:

      root@a4d260547a56:/# cd buildroot-microchip/buildroot-at91/

      root@a4d260547a56:/buildroot-microchip/buildroot-at91# ls

      CHANGES     COPYING     Config.in   Config.in.legacy   DEVELOPERS   Makefile   Makefile.legacy   README
      arch        board       boot        configs            dl           docs       fs                linux
      output      package     sharp-linux-armv7.node         support      system     toolchain         utils

      • Run "make menuconfig" if your application depends on additional packages and add them to the Buildroot configuration. Otherwise, exit menuconfig without changes.

      root@a4d260547a56:/buildroot-microchip/buildroot-at91# make menuconfig

      • And then, you can continue to build the image with make:

      root@a4d260547a56:/buildroot-microchip/buildroot-at91# make

      • After the make process completes, go to the example-microchip-sama7g54 folder (outside of the container). You will find that a build folder was generated; a sketch for flashing the generated image follows the listing:

      user@at91:~/example-microchip-sama7g54$ ls

      build  Config.in   example-standalone-inferencing-linux.mk  README.md buildroot-config  Dockerfile  LICENSE sharp-linux-armv7.node
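
The generated build folder contains the complete SD card image, sdcard.img. As a minimal sketch, it can be flashed from the Ubuntu host with dd (assuming the SD card enumerates as /dev/sdX; double-check the device node, because dd overwrites it):

user@at91:~/example-microchip-sama7g54$ sudo dd if=build/sdcard.img of=/dev/sdX bs=4M conv=fsync status=progress
user@at91:~/example-microchip-sama7g54$ sync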


Hands-On (Train the Model)

In this hands-on chapter, let's learn how to train a model and design a complete AI/ML project using Edge Impulse. The example used here detects SD cards with a USB camera on the SAMA7G54-EK.

Log in to the Edge Impulse website.

Log in to your account at: https://studio.edgeimpulse.com/login.

Edge Impulse Account Setup



Create an Edge Impulse project.

  • After logging in successfully, you will see the + Create new project button on your home page. Click this button to create a new Edge Impulse project.

Create an Edge Impulse project

  • Name the project. Then, choose the project visibility by selecting Public or Private. Finally, click Create new project.

Name the Edge Impulse project



Review Dashboard information.

Now we have a project overview. In the Dashboard menu, we can see the project information, add data, etc.

Add Dashboard Information



Choose the target device.

In the Dashboard menu, click the top right button to add the target device type. Choose Microchip SAMA7G54 Evaluation Kit in the Target device drop-down box, and then click the Save button to save the configuration.

Choose Target Device



Choose a device to sample data.

  • Click the left menu item Devices to connect to a device that can sample training data. An easy way to do this is to connect to a smartphone that can take pictures quickly.

Choose device to sample data

  • Use the smartphone to scan the QR code and allow your smartphone to use the camera if a pop-up prompts you to do so. The pictures you take are uploaded automatically to the online studio of the project you just created. Of course, you can also choose sample pictures from your computer or from the development board.

Mobile QR Code

Connect Mobile with Edge Impulse

  • Once your smartphone connects to the Edge Impulse studio successfully, check the smartphone screen and click Collecting images?

Data Collection

  • Give the web page permission to access the camera of the smartphone. 

Allow access to Camera

  • Now the smartphone can take photos of SD cards.

                     Upload images from Mobile phone

  • At this step, you can click on the Label: unknown text box to set the label for the image. This can help distinguish between different sample acquisition sessions.

                     Labelling the images

                     Uploading images from Mobile phone to Edge Impulse

  • Click on the Capture button to transfer the SD card picture to the online studio automatically.



Check sampled data.

Sample a lot of pictures to train a good model. In this example, 63 SD-card photos are taken at different angles as experimental data.

Check sampled data



Label the sampled data.

  • Label the sampled data for training.

Label sampled data

Enter label

  • Now save the labeled data. Normally, a lot of work is required to label all the data.

Save labelled data



Create the training model.

  • Now let’s create the training model, which Edge Impulse calls an Impulse. Click the Experiments menu. Then, click Create a new impulse as shown in the accompanying image.

Create training model

  • The sampled data should be preprocessed before training. When an image is passed to the input block, raw features are extracted. The processing block allows you to filter these raw features and keep only the most significant ones; the ones that the learning block needs. Click the Add button to add a processing block.

Add processing block

  • Add a learning block to train on the data.

Learning block

  • Finally, save the configured Impulse.

Save impulse



Process the images.

  • Set the color mode to Grayscale. Of course, you can use RGB as well.

Image pre-processing

  • After configuring the parameters, generate the features by clicking Generate features in the top menu. Generating features is a key step that converts raw data (Raw Features) into feature vectors for training machine learning models. This step is often referred to as feature extraction, which enhances the performance of the model by extracting meaningful information from the raw sensor data.

Generate features

  • Check for job completion in the bottom left output console.

Job Completion



Train the model.

  • From the left menu, click Object detection to choose a model and begin training.

Train the model

  • In this demo, we choose FOMO MobileNetV2 0.35.

Choose a model

  • Click the Save & train button after the model is chosen. The training process will start. You can see the training output log messages in the top right window.

Save and train



Live classification

  • In this section, you can test the trained model's performance on your smartphone, PC, or any other device that can capture data.

Live classification

  • For example, the fastest way to test live classification is to use your smartphone. Scan the QR code with the smartphone.

Classify images from mobile phone

  • Pay attention at this step when your smartphone is connected: do not click the Collecting images? button. Instead, click the Switch to classification mode button.

                        Classification mode

  • In classification mode, your trained model will be downloaded and deployed to your smartphone.

                          Download trained model to mobile phone

  • The classification runs in real time on the smartphone.

                         Run classification on mobile

  • The SD cards were detected within 1 ms. Another way to test the model is to classify the pictures in the database that we captured earlier.

Test the model

  • For example, the picture sd.563gdjtr was selected for testing.

Test the model from the captured images



Model testing.

  • Let’s recall the previous Data acquisition step. The captured data was split into two parts:

                    A. TRAIN: 83%

                    B. TEST: 17%

Data acquisition

  • The TEST data was not used in the previous training process; it is reserved for testing the trained model. Click the left menu Model testing to check how well the model detects the TEST data.

Model Testing

  • So far, we have trained an initial model on the Edge Impulse cloud servers. In the next chapter, we will show how to deploy this trained model to the SAMA7G54-EK target for real-world testing.


Hands-On (Deploy the model on SAMA7G54-EK)

Flash the SAMA7G54-EK SD card image to an SD card using balenaEtcher or another tool. When the SAMA7G54-EK runs, be aware that the login username and password on the SAMA7G54-EK serial terminal are:

  • Username: root
  • Password: edgeimpulse
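
To reach that serial terminal, connect the board's serial console to the Linux host and open it with a terminal program. A minimal sketch is shown below, assuming the adapter enumerates as /dev/ttyUSB0 and the usual 115200 baud, 8N1 console settings:

# Find the serial adapter on the host (often /dev/ttyUSB0 or /dev/ttyACM0)
ls /dev/ttyUSB* /dev/ttyACM*

# Open the console (picocom shown; minicom or screen also work)
picocom -b 115200 /dev/ttyUSB0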

If you would like to use SSH to connect to the board, some additional steps are necessary (a non-interactive alternative is sketched after the list):

  1. cd /etc/ssh/
  2. nano sshd_config
  3. Uncomment and change PermitRootLogin prohibit-password to PermitRootLogin yes
  4. Uncomment PasswordAuthentication yes
  5. CTRL+X then Y then Enter
  6. reboot to restart SSH
  7. ifconfig to get IP address
  8. On your host machine ssh root@www.xxx.yyy.zzz
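
If you prefer a non-interactive edit for steps 2 through 5, the same sshd_config changes can be made with sed, as sketched below (assuming the image uses OpenSSH with the standard /etc/ssh/sshd_config keywords):

# Enable root login and password authentication (uncommented)
sed -i 's/^#*PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
sed -i 's/^#*PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/sshd_config

# Restart to apply the change
reboot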

Deploy .eim file.

  • From the left menu, click Deployment, and search for the board: Microchip SAMA7G54 Evaluation Kit (ARMv7).

Deployment

  • Then click the Build button to build the executable file.

Build the model

  • The browser will download the executable file automatically when the build is complete. You can find the file in your browser’s default download location.

Download executables

  • The file name is:  detect-sd-card-linux-armv7-v9.eim
  • Edge Impulse Model (EIM) files are native Linux and macOS® binary applications that contain your full impulse created in Edge Impulse Studio. The impulse consists of the signal processing block(s) along with any learning and anomaly block(s) you added and trained. EIM files are compiled for your particular system architecture and are used to run inference natively on your system. Detailed documentation about EIM files can be found at https://docs.edgeimpulse.com/docs/run-inference/linux-eim-executable.
  • Upload the detect-sd-card-linux-armv7-v9.eim file to the SAMA7G54-EK using an FTP tool such as the FileZilla client (or scp, as sketched after the following listing), and then check the file on the SAMA7G54-EK serial terminal:

# ls
detect-sd-card-linux-armv7-v9.eim  video-capture-at91utils.sh

Make sure the file is executable:
# chmod +x detect-sd-card-linux-armv7-v9.eim
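
Alternatively, because SSH root login was enabled earlier, the file can be copied with scp from the Ubuntu host (a sketch; replace <board-ip> with the address reported by ifconfig, and adjust the destination directory if needed):

user@at91:~$ scp detect-sd-card-linux-armv7-v9.eim root@<board-ip>:/root/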

  • Start the demo:

# edge-impulse-linux-runner --model-file detect-sd-card-linux-armv7-v9.eim
[RUN] Starting the image classifier for Wayne Jia / Detect SD card (v9)
[RUN] Parameters image size 96x96 px (1 channels) classes [ 'SD' ]
[GST] checking for /etc/os-release
[RUN] Using camera /base/soc/ehci@500000-3 starting...
[RUN] Connected to camera
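
If the runner cannot find a camera, it can help to confirm on the SAMA7G54-EK serial terminal that the USB camera was enumerated (a quick check; the exact /dev/video node depends on your camera):

# ls /dev/video*
# dmesg | grep -i uvc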

  • The runner also prints a URL for a live view: "Want to see a feed of the camera and live classification in your browser? Go to http://10.160.138.25:4912" (the IP address will be your board's address).
  • Open that URL in a browser and point the camera at the SD cards. You will see the real-time inferencing in the browser window as shown in the accompanying image:

Classify images on real time

  • The camera captured two SD cards, and they are detected as “SD” with an F1 score. The time per inference is 9 ms on the SAMA7G54-EK.



Deploy C++ library.

  • In the previous section, we ran the SD card detection demo with the .eim file built on the Edge Impulse cloud servers. You may want to build a customized application rather than use an unmodifiable .eim binary. In this chapter, we’ll demonstrate how to build your own application with the trained model’s C++ library in Buildroot-at91 for the SAMA7G54-EK target.
  • From the left menu, select Deployment, select C++ library, and then click the Build button.

Deploy C++ library

Configure the deployment

  • The browser will download the C++ library zip package automatically once the build is complete. You can find the file in your browser’s default download location.

Downloaded package

  • Unzip the detect-sd-card-v10.zip file and copy the three folders framed in red below (edge-impulse-sdk, model-parameters, and tflite-model) to Buildroot-at91 in Docker.

Extracting the required folders

  • Copy the three folders to the build path shared between your Ubuntu host and Docker. Please note that the build folder is mapped to the Docker container's directory: /buildroot-microchip/buildroot-at91/output/images

user@at91:~/example-microchip-sama7g54$ sudo cp -r edge-impulse-sdk model-parameters tflite-model build
user@at91:~/example-microchip-sama7g54$ ls build
at91-sama7g5ek.dtb                       sama7g5ek_at25ff321a_click1.dtbo
at91bootstrap.bin                        sama7g5ek_i2s0_pcm5102a.dtbo
boot.bin                                 sama7g5ek_i2s0_proto.dtbo
boot.vfat                                sama7g5ek_isc_imx219.dtbo
edge-impulse-sdk                         sama7g5ek_isc_imx274.dtbo
model-parameters                         sama7g5ek_pdmc0.dtbo
rootfs.ext2                              sama7g5ek_wilc3000.dtbo
rootfs.ext4                              sdcard.img
rootfs.tar                               tflite-model
sama7g5-sdcardboot-uboot-4.0.9-rc1.bin   u-boot.bin
sama7g5ek.itb                            uboot-env.bin
sama7g5ek.its                            zImage

  • Check your Docker container ID:

user@at91:~/example-microchip-sama7g54$ sudo docker ps
CONTAINER ID   IMAGE       COMMAND       CREATED                  STATUS         PORTS     NAMES
a4d260547a56   microchip   "/bin/bash"   Less than a second ago   Up 4 seconds  gallant_jones

  • Enter the Docker container:

user@at91:~/example-microchip-sama7g54$ sudo docker exec -it a4d260547a56 /bin/bash

  • Copy the three unzipped folders to the /buildroot-microchip/buildroot-at91/package/example-standalone-inferencing-linux folder:

root@a4d260547a56:/# cd /buildroot-microchip/buildroot-at91/output/images
root@a4d260547a56:/buildroot-microchip/buildroot-at91/output/images# cp -r edge-impulse-sdk model-parameters tflite-model ../../package/example-standalone-inferencing-linux/
root@a4d260547a56:/buildroot-microchip/buildroot-at91/output/images# cd ../../package/example-standalone-inferencing-linux/
root@a4d260547a56:/buildroot-microchip/buildroot-at91/package/example-standalone-inferencing-linux# ls
Config.in              LICENSE                Makefile               README.md
build-opencv-linux.sh  build-opencv-mac.sh    edge-impulse-sdk       example-standalone-inferencing-linux.mk
inc                    ingestion-sdk-c        model-parameters       source
tensorflow-lite        tflite                 tflite-model           third_party
tidl-rt                utils

  • Modify your customized code in source/custom.cpp if necessary:

root@a4d260547a56:/buildroot-microchip/buildroot-at91/package/example-standalone-inferencing-linux# ls source
audio.cpp  camera.cpp  collect.cpp  custom.cpp  eim.cpp

  • Add the example-standalone-inferencing-linux package to the image build configuration in menuconfig:

root@a4d260547a56:/buildroot-microchip/buildroot-at91/package/example-standalone-inferencing-linux# cd ../..
root@a4d260547a56:/buildroot-microchip/buildroot-at91# make menuconfig

Modify code

  • Select Example Standalone Inferencing Linux under the path: Target packages → Miscellaneous. Save the setting in menuconfig and build the image again:

root@a4d260547a56:/buildroot-microchip/buildroot-at91# make
make: Warning: File 'docs/manual/manual.mk' has modification time 1087962 s in the future…

  • Flash the SD card image after the build is complete. Boot the SAMA7G54-EK with the SD card, and you will find an application called custom in the /home folder. custom is built from custom.cpp in example-standalone-inferencing-linux/source:

# ls /home/
custom

  • Let’s recall the Live classification step from the Train the Model chapter. Next to the Raw features field there is a button to copy them; these raw features can be used as test data for the custom application. Paste the copied raw features into a text file named raw_features.txt, upload it to the /home folder of the SAMA7G54-EK board, and run the application:

# ./custom raw_features.txt
Predictions (DSP: 5 ms., Classification: 80 ms., Anomaly: 0 ms.):
#Object detection results:
SD (0.886719) [ x: 48, y: 40, width: 8, height: 8 ]
# ls
custom  debug.bmp raw_features.txt

In this demo, the C++ code runs inference on raw RGB image data read from a text file and prints the inference result to the log. The demo also generates a picture of the inference result, debug.bmp, which can be downloaded to the host computer for inspection. We have shown a generic way of using the C++ library and how to integrate it into Buildroot to build a custom SD card image. You are then free to customize the C++ code to build and run a more complete application.
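
For example, debug.bmp can be pulled back to the Ubuntu host with scp for inspection (assuming the SSH access configured earlier; replace <board-ip> with the board's address and adjust the path if custom was run from a different directory):

user@at91:~$ scp root@<board-ip>:/home/debug.bmp .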


Advanced: How to optimize the model

Optimizing an object detection model trained by Edge Impulse involves several steps to improve its accuracy, reduce false positives/negatives, and ensure it runs efficiently on the target hardware. Here are suggested optimization methods for an object detection model:

  • Improve Data Quality and Quantity
  • Use EON Tuner

Improve picture quality and quantity

  • Let’s capture more data by taking pictures on a consistent background. Go to Data acquisition in the left menu and connect your smartphone to take more SD card pictures. Make sure the SD card is on a clean background.

Improve picture quality

  • Click the chip icon to capture SD card pictures from the smartphone (you need to scan the QR code). In this test, seven SD card pictures are captured against a white background.

Capturing images from different background

  • Go to the Retrain model menu and train again. Then check the Model testing results. We can see that the accuracy improved to 61.54%.

Retrain model

  • However, this number of samples is small for machine learning; we use it only to illustrate the training process. In an actual application, you need to collect a large amount of raw training data to get a better model.


Use EON Tuner

  • As you can see, we get only 61.54% accuracy when processing the test dataset. This number is too low. The final result strongly depends on the model settings used during the training phase, which is why we need a way to fine-tune these settings.
  • One of the most powerful tools that Edge Impulse provides is dedicated precisely to this task: the EON Tuner, a tool that automatically tests different models with different training settings and shows the user the best ones.

Note: The EON Tuner is a feature of the Professional and Enterprise plans of Edge Impulse. You can use it with a free 14-day trial (no credit card required). Sign up for the free trial here.

  • To access the EON Tuner, go to Experiments and then EON Tuner.

Access EON tuner

  • When you click Run EON Tuner, you will see a window like the one shown in the accompanying figure.

Run EON tuner

  • It is possible to customize how the EON Tuner runs; however, we will focus on running it from a template. Click Use Case Templates and select Object Detection (centroids).

Search Configurations

  • As you can see, the Search space configuration has been updated with the selected template. You can give each EON Tuner run a specific name in the text field on the top right side. See the accompanying image as an example.

Configure EON tuner

  • Once you are ready, you can start the Tuner. It will take quite some time to run. After it completes, you can see the different results and select the Impulse you prefer based on your application requirements. For this example, we select the first one, as shown in the accompanying image.

Start tuner

  • Now, on the Object detection tab, you can see that the accuracy has improved significantly:

Accuracy level detection

  • When running the model testing, see the accompanying figure for the improved accuracy.

Run model testing

  • Using the EON Tuner is a good way to easily test various model architectures, and you can customize the way it works. To test more or different settings, please have a look at the Edge Impulse Documentation.
