Download the SR9900 driver that matches your operating system.
Windows: Extract the driver package to the desktop. Open Device Manager and locate the currently unrecognized device (named "USB 10/100 LAN" or containing "SR9900"). Right-click it, select Update Driver, browse to the extracted folder, click Confirm, and wait for the update to complete.
macOS: Extract the driver package, double-click the SR9900_v1.x.pkg file, and click Next to install as prompted. (The package includes a detailed PDF driver-installation tutorial.) If the network interface does not come up after installation, restart it manually:
sudo ifconfig en10 down
sudo ifconfig en10 up
After USB power is connected, UnitV2 starts automatically; the power indicator lights red and white, then goes out once startup is complete. UnitV2 integrates M5Stack's basic AI recognition application with multiple built-in recognition functions (such as face recognition and object tracking) so users can quickly build AI recognition applications. Using either of the two connection methods below, a PC or mobile device can open the recognition preview page in a browser via the domain name unitv2.py or the IP address 10.254.239.1. During recognition, UnitV2 continuously outputs recognition sample data (JSON format, UART: 115200bps 8N1) through the serial port (the HY2.0-4P interface at the bottom).
Note: The built-in recognition service has some compatibility issues with the Safari browser; the Chrome browser is recommended.
Ethernet mode connection: UnitV2 has a built-in wired network card. When you connect it to a PC through the Type-C interface, a network connection with UnitV2 is established automatically.
AP mode connection: After UnitV2 starts, an AP hotspot (SSID: M5UV2_XXX / PWD: 12345678) is enabled by default. You can establish a network connection with UnitV2 by joining this WiFi network.
Switch between recognition functions by clicking the navigation bar on the function page or by sending JSON commands over the serial port. Note: the command string must not contain any line breaks except the single newline at its end.
//The value of the function key can be specified as any of the following functions
Audio FFT
Code Detector
Face Detector
Lane Line Tracker
Motion Tracker
Shape Matching
Camera Stream
Online Classifier
Color Tracker
Face Recognition
Target Tracker
Shape Detector
Object Recognition
//Please note that args must be a list.
{
"function":"Object Recognition",
"args":[
"yolo_20class"
]
}
//If the function switch is successful, a reply will be received
{
"msg":"function switched to Object Recognition."
}
//If the specified function does not exist, a reply will be received
{
"error":"function Object Recognition not exist"
}
//If the function switching fails, a reply will be received
{
"error":"invalid function."
}
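The function-switch commands above can be sent from a host machine over the serial port. Below is a minimal Python sketch (using pyserial; the port path /dev/ttyUSB0 and the helper name are assumptions, not from the document) that builds a single-line command with the required trailing newline:

```python
import json

def build_function_command(function, args=()):
    # The device requires a single-line JSON string terminated by exactly
    # one newline; no line breaks may appear inside the payload.
    return json.dumps({"function": function, "args": list(args)}) + "\n"

# Example usage with pyserial (port path is an assumption):
# import serial
# with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as port:
#     port.write(build_function_command("Object Recognition", ["yolo_20class"]).encode("utf-8"))
#     print(port.readline().decode("utf-8"))
```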
480P real-time video preview.
Switch function to Camera Stream
{
"function": "Camera Stream",
"args": ""
}
Identify QR codes in the frame and return their coordinates and content.
Please switch the function to Code Detector
{
"function": "Code Detector",
"args": ""
}
{
"running":"Code Detector",
"num":2, // Number of QR codes
"code":[
{
"prob": 0.987152, // confidence rate
"x":10, // 0 ~ 640
"y":10, // 0 ~ 480
"w":30,
"h":30, // QR code bounding box
"type":"QR/DM/Maxi", // include "Background", "QR/DM/Maxi", "SmallProgramCode", "PDF-417", "EAN", "Unknown"
"content":"m5stack"
},
{
"prob": 0.987152, // confidence rate
"x":10,
"y":10,
"w":30,
"h":30, // QR code bounding box
"type":"QR/DM/Maxi", // include "Background", "QR/DM/Maxi", "SmallProgramCode", "PDF-417", "EAN", "Unknown"
"content":"m5stack"
}
]
}
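A received Code Detector sample can be parsed on the host side. The Python sketch below (the helper name is our own) extracts the type, content, and confidence of each detected code:

```python
import json

def extract_codes(sample):
    # Parse one Code Detector JSON sample and return
    # (type, content, prob) for each detected code.
    doc = json.loads(sample)
    if doc.get("running") != "Code Detector":
        return []
    return [(c["type"], c["content"], c["prob"]) for c in doc.get("code", [])]
```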
Target detection based on YOLO Fastest and NanoDet. Support V-Training.
Please switch the function to Object Recognition
//Select the parameter "yolo_20class" to switch to this function
{
"function": "Object Recognition",
"args": ["yolo_20class"]
}
//Select the parameter "nanodet_80class" to switch to this function
{
"function": "Object Recognition",
"args": ["nanodet_80class"]
}
Objects recognized by the built-in model
yolo_20class: [
"aeroplane", "bicycle", "bird", "boat", "bottle", "bus", "car", "cat", "chair", "cow", "diningtable", "dog",
"horse", "motorbike", "person", "pottedplant", "sheep", "sofa", "train", "tvmonitor"
]
nanodet_80class: [
"person","bicycle","car","motorbike","aeroplane","bus","train","truck","boat","traffic light",
"fire hydrant","stop sign","parking meter","bench","bird","cat","dog","horse","sheep","cow",
"elephant","bear","zebra","giraffe","backpack","umbrella","handbag","tie","suitcase","frisbee",
"skis","snowboard","sports ball","kite","baseball bat","baseball glove","skateboard","surfboard",
"tennis racket","bottle","wine glass","cup","fork","knife","spoon","bowl","banana","apple",
"sandwich","orange","broccoli","carrot","hot dog","pizza","donut","cake","chair","sofa","pottedplant",
"bed","diningtable","toilet","tvmonitor","laptop","mouse","remote","keyboard","cell phone","microwave",
"oven","toaster","sink","refrigerator","book","clock","vase","scissors","teddy bear","hair drier","toothbrush"
]
{
"num": 1,
"obj": [
{
"prob": 0.938137174,
"x": 179,
"y": 186,
"w": 330,
"h": 273,
"type": "person"
}
],
"running": "Object Recognition"
}
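On the host side, a sample like the one above is usually filtered by confidence before acting on it. A small Python sketch (the helper name and the 0.5 default threshold are our own choices):

```python
import json

def best_detection(sample, min_prob=0.5):
    # Return the highest-confidence detection at or above min_prob, or None.
    doc = json.loads(sample)
    objs = [o for o in doc.get("obj", []) if o["prob"] >= min_prob]
    return max(objs, key=lambda o: o["prob"]) if objs else None
```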
Detect the specified color area and return the coordinates of the color area.
You can directly adjust the LAB threshold slider to filter out the background and obtain the color area of interest. You can also directly frame the color area of interest on the screen. The system will automatically calculate the color with the largest proportion of the target area and filter out the background. You can further adjust the slider based on the calculation to achieve better filtering effects. Clicking the "To Mask Mode" button will switch to Mask mode, in which you can directly see the filtering effect. Clicking the "To RGB Mode" button again will switch back to RGB mode.
Before performing the following operations, please switch the function to Color Tracker
{
"function": "Color Tracker",
"args": ""
}
4.3.1 Specify LAB threshold
// *LAB threshold mapping is 0~255
{
"config":"Color Tracker",
"l_min":0, // 0 ~ 255
"l_max":0, // 0 ~ 255
"a_min":0, // 0 ~ 255
"a_max":0, // 0 ~ 255
"b_min":0, // 0 ~ 255
"b_max":0 // 0 ~ 255
}
{
"running":"Color Tracker",
"msg":"Data updated."
}
4.3.2 Specify ROI (automatically calculate threshold)
{
"config":"Color Tracker",
"x":0, // 0 ~ 640
"y":0, // 0 ~ 480
"w":30,
"h":30
}
// *va and vb refer to the degree of color dispersion within the ROI. If the degree of dispersion is high, the tracking effect will be poor.
{
"running":"Color Tracker",
"a_cal":0.0,
"b_cal":0.0, //Calculate threshold
"va":0.0,
"vb":0.0, //Color dispersion rate
"l_min":0, // Fixed value 0
"l_max":255, // Fixed value 255
"a_min":0, // a_cal - (10 + (int)(va / 2.0f))
"a_max":0, // a_cal + (10 + (int)(va / 2.0f))
"b_min":0, // b_cal - (10 + (int)(vb / 2.0f))
"b_max":0 // b_cal + (10 + (int)(vb / 2.0f))
}
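The commented formulas in the reply above can be reproduced on the host, for example to sanity-check thresholds before sending them back to the device. A Python sketch of the same arithmetic (the function name is our own):

```python
def thresholds_from_roi(a_cal, b_cal, va, vb):
    # Reproduce the device's documented formulas:
    #   a_min = a_cal - (10 + int(va / 2.0)), a_max = a_cal + (10 + int(va / 2.0))
    #   b_min = b_cal - (10 + int(vb / 2.0)), b_max = b_cal + (10 + int(vb / 2.0))
    margin_a = 10 + int(va / 2.0)
    margin_b = 10 + int(vb / 2.0)
    return {
        "l_min": 0, "l_max": 255,  # fixed values, per the reply above
        "a_min": int(a_cal) - margin_a, "a_max": int(a_cal) + margin_a,
        "b_min": int(b_cal) - margin_b, "b_max": int(b_cal) + margin_b,
    }
```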
{
"running":"Color Tracker",
"cx": 0, // Center X-axis coordinate
"cy": 0, // Center Y-axis coordinate
"r": 0, // radius
"mx": 0, // moment x position
"my": 0 // moment y position
}
Detect the road lines in the screen, fit them into straight lines, and return the straight line angles and coordinates.
You can directly adjust the LAB threshold slider to filter out the background and obtain the color area of interest. You can also directly frame the color area of interest on the screen. The system will automatically calculate the color with the largest proportion of the target area and filter out the background. You can further adjust the slider based on the calculation to achieve better filtering effects. Clicking the "To Mask Mode" button will switch to Mask mode, in which you can directly see the filtering effect. Clicking the "To RGB Mode" button again will switch back to RGB mode.
Before performing the following operations, please switch the function to Lane Line Tracker
{
"function": "Lane Line Tracker",
"args": ""
}
5.3.1 Specify LAB threshold
// * LAB threshold mapping is 0~255
{
"config":"Lane Line Tracker",
"l_min":0, // 0 ~ 255
"l_max":0, // 0 ~ 255
"a_min":0, // 0 ~ 255
"a_max":0, // 0 ~ 255
"b_min":0, // 0 ~ 255
"b_max":0 // 0 ~ 255
}
{
"running":"Lane Line Tracker",
"msg":"Data updated."
}
5.3.2 Specify ROI (automatically calculate threshold)
{
"config":"Lane Line Tracker",
"x":0, // 0 ~ 640
"y":0, // 0 ~ 480
"w":30,
"h":30
}
//* va and vb refer to the degree of color dispersion within the ROI. If the degree of dispersion is high, the segmentation effect will be poor.
{
"running":"Lane Line Tracker",
"a_cal":0.0,
"b_cal":0.0, // Calculate threshold
"va":0.0,
"vb":0.0, // Color dispersion rate
"l_min":0, // Fixed value 0
"l_max":255, // Fixed value 255
"a_min":0, // a_cal - (10 + (int)(va / 2.0f))
"a_max":0, // a_cal + (10 + (int)(va / 2.0f))
"b_min":0, // b_cal - (10 + (int)(vb / 2.0f))
"b_max":0 // b_cal + (10 + (int)(vb / 2.0f))
}
{
"running":"Lane Line Tracker",
"x":0,
"y":0, //The base point of the fitting line
"k":0 // Slope of fitting line
}
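The base point and slope returned above describe the fitted line. Assuming k is the slope dy/dx of the line through the base point (x, y) — the document does not spell this convention out — the line's x position at any other image row can be estimated:

```python
def lane_x_at(y_target, x, y, k):
    # Estimate the fitted line's x coordinate at row y_target,
    # assuming k = dy/dx through the base point (x, y).
    if k == 0:
        raise ValueError("horizontal line: x is undefined at other rows")
    return x + (y_target - y) / k
```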
Select the target on the screen and track it, using the MOSSE algorithm.
Just frame the object of interest on the screen.
Before performing the following operations, please switch the function to Target Tracker
{
"function": "Target Tracker",
"args": ""
}
{
"running":"Target Tracker",
"x":0,//0~640
"y":0,//0~480
"w":0,
"h":0
}
Detect and track moving targets, and return the target's coordinates and angles.
Click the 'Set as background' button to set the background. The algorithm can adapt to slowly changing backgrounds.
Before performing the following operations, please switch the function to Motion Tracker
{
"function": "Motion Tracker",
"args": ""
}
7.3.1 Determine the background
Send
//Sending this command will determine the background
{
"config":"Motion Tracker",
"operation":"update"
}
Receive
{
"running":"Motion Tracker",
"msg":"Background updated."
}
{
"running":"Motion Tracker",
"num":2,
"roi":[
{
"x":0,
"y":0,
"w":0,
"h":0,
"angle":0.0,
"area":0
},
{
"x":0,
"y":0,
"w":0,
"h":0,
"angle":0.0,
"area":0
}
]
}
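A host program will often only care about the dominant moving region. A minimal Python sketch (the helper name is our own) that picks the ROI with the largest area from a Motion Tracker sample:

```python
import json

def largest_motion(sample):
    # Return the ROI with the largest area, or None if nothing is moving.
    doc = json.loads(sample)
    rois = doc.get("roi", [])
    return max(rois, key=lambda r: r["area"]) if rois else None
```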
This function trains on and classifies objects inside the green target box in real time; the feature values obtained from training can be saved on the device for later use.
Before performing the following operations, please switch the function to Online Classifier
{
"function": "Online Classifier",
"args": ""
}
8.3.1 Train
Send
//This command will put the device into training mode and extract features once and store them under the specified category. If class_id does not exist, this class will be created.
{
"config":"Online Classifier",
"operation":"train",
"class_id":1, // Integer (0 ~ N), ID of class
"class":"class_1" // String, the name of the class
}
Receive
{
"running":"Online Classifier",
"msg":"Training [class name] [num of training] times"
}
8.3.2 Save&Run
Send
{
"config":"Online Classifier",
"operation":"saverun"
}
Receive
{
"running":"Online Classifier",
"msg":"Save and run."
}
8.3.3 Reset
Send
//This command will put the device into training mode and clear all categories.
{
"config":"Online Classifier",
"operation":"reset"
}
Receive
{
"running":"Online Classifier",
"msg":" Please take a picture."
}
{
"running":"Online Classifier",
"class_num":2, //The number of classes identified
"best_match":"class_1", //best matching class
"best_score":0.83838, //best match score
"class":[ //Score for each class
{
"name":"class_1",
"score":0.83838
},
{
"name":"class_2",
"score":0.66244
}
]
}
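When consuming these samples, it is usually wise to accept best_match only above a score threshold. A Python sketch (the 0.7 default is our own choice, not from the document):

```python
import json

def classify(sample, min_score=0.7):
    # Return best_match only when best_score clears the threshold.
    doc = json.loads(sample)
    if doc.get("best_score", 0.0) >= min_score:
        return doc["best_match"]
    return None
```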
Detect and recognize faces.
Before performing the following operations, please switch the function to Face Recognition
{
"function": "Face Recognition",
"args": ""
}
9.3.1 Train
Send
//To create a new face, provide face_id in order (0 ~ N).
{
"config":"Face Recognition",
"operation":"train",
"face_id":1, // Integer (0 ~ N), face ID
"name":"tom" // String, the name of the face
}
//For example, there are already 3 faces (0~2). To create a new face, you need to specify the id as 3.
Receive (Success)
{
"running":"Face Recognition",
"msg":"Training tom" // Name of the face being trained
}
Receive (Error)
{
"running":"Face Recognition",
"msg":"Invalid face id"
}
9.3.2 Stop Train
Send
{
"config":"Face Recognition",
"operation":"stoptrain"
}
Receive
{
"running":"Face Recognition",
"msg":"Exit training mode."
}
9.3.3 Save&Run
Send
{
"config":"Face Recognition",
"operation":"saverun"
}
Receive
{
"running":"Face Recognition",
"msg":"Faces saved."
}
9.3.4 Reset
Send
//This command will delete all faces.
{
"config":"Face Recognition",
"operation":"reset"
}
Receive
{
"running":"Face Recognition",
"msg":"Reset success"
}
9.4.1 Training Mode
{
"running":"Face Recognition",
"status":"training", // training or missing
"x":0,
"y":0,
"w":0,
"h":0, // Facial recognition bounding box
"prob":0, // Detect confidence rate
"name":0
}
9.4.2 Normal Mode (match score > 0.5)
{
"running":"Face Recognition",
"num":1, //Number of faces recognized
"face":[
{
"x":0, // 0 ~ 320
"y":0, // 0 ~ 240
"w":30,
"h":30, // Facial recognition bounding box
"prob":0, // Detect confidence rate
"match_prob":0.8, // Match confidence rate
"name": "tom",
"mark":[ // landmarks
{
"x":0,
"y":0
},
{
"x":0,
"y":0
},
{
"x":0,
"y":0
},
{
"x":0,
"y":0
},
{
"x":0,
"y":0
}
]
}
]
}
9.4.3 Normal Mode (match score <=0.5)
{
"running":"Face Recognition",
"num":1, // Number of faces recognized
"face":[
{
"x":0, // 0 ~ 320
"y":0, // 0 ~ 240
"w":30,
"h":30, // facial recognition bounding box
"prob":0, // confidence rate
"name": "unidentified",
"mark":[ // landmarks
{
"x":0,
"y":0
},
{
"x":0,
"y":0
},
{
"x":0,
"y":0
},
{
"x":0,
"y":0
},
{
"x":0,
"y":0
}
]
}
]
}
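A host can reduce a Face Recognition sample to the list of confidently matched names. Python sketch (the helper name is our own; the 0.5 cutoff mirrors the mode split above):

```python
import json

def recognized_names(sample, min_match=0.5):
    # Names of faces whose match confidence exceeds min_match;
    # "unidentified" entries carry no match_prob and are skipped.
    doc = json.loads(sample)
    return [f["name"] for f in doc.get("face", [])
            if f.get("match_prob", 0.0) > min_match and f["name"] != "unidentified"]
```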
Detect faces in the picture and give 5 landmarks.
Before performing the following operations, please switch the function to Face Detector
{
"function": "Face Detector",
"args": ""
}
{
"running":"Face Detector",
"num":1, // Number of faces recognized
"face":[
{
"x":0,
"y":0,
"w":30,
"h":30, //Facial recognition bounding box
"prob":0, // confidence rate
"mark":[ // landmark
{
"x":0,
"y":0
},
{
"x":0,
"y":0
},
{
"x":0,
"y":0
},
{
"x":0,
"y":0
},
{
"x":0,
"y":0
}
]
}
]
}
Detect shapes in the frame and calculate their area. Able to identify squares, rectangles, triangles, pentagons, and circles.
Click the 'Set as background' button to set the background. The algorithm can adapt to slowly changing backgrounds.
Before performing the following operations, please switch the function to Shape Detector
{
"function": "Shape Detector",
"args": ""
}
Send
// Sending this command sets the background
{
"config":"Shape Detector",
"operation":"update"
}
Receive
{
"running":"Shape Detector",
"msg":"Background updated."
}
{
"running":"Shape Detector",
"num":2,
"shape":[
{
"name":"Rectangle", // "unidentified", "triangle", "square", "rectangle", "pentagon", "circle"
"x":0,
"y":0,
"w":0,
"h":0,
"angle":0.0, // Can be used when the shape is square or rectangular
"area":0
},
{
"name":"Rectangle", // "unidentified", "triangle", "square", "rectangle", "pentagon", "circle"
"x":0,
"y":0,
"w":0,
"h":0,
"angle":0.0, //Can be used when the shape is square or rectangular
"area":0
}
]
}
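Since the angle field is only meaningful for squares and rectangles, a host-side filter like the Python sketch below (the helper name is our own) can extract just the oriented shapes:

```python
import json

def oriented_shapes(sample):
    # (name, angle) for shapes where the angle is meaningful
    # (squares and rectangles only).
    doc = json.loads(sample)
    return [(s["name"], s["angle"]) for s in doc.get("shape", [])
            if s["name"].lower() in ("square", "rectangle")]
```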
Match any given shape (the shape should not contain curves). Uploaded shapes are converted into feature data and saved on the device for later use.
Click the add button to add a shape. You need to upload a shape template image as shown below (PNG format, black shape on a white background); the file name becomes the shape's name.
Click the reset button to clear all uploaded shapes.
Click the 'Set as background' button to set the background. The algorithm can adapt to slowly changing backgrounds.
To be developed, not supported yet.
// The shape returned here is the file name of the uploaded template. Please note that if the confidence rate is lower than 30%, it will be marked as unidentified.
{
"running":"Shape Matching",
"num":2,
"shape":[
{
"name":"arrow", // Your custom shape name; marked "unidentified" when the confidence score is below 30
"max_score":83, // Confidence score; absent when the shape is unidentified
"x":0,
"y":0,
"w":0,
"h":0,
"area":0
},
{
"name":"unidentified", // Your custom shape name; marked "unidentified" when the confidence score is below 30
"x":0,
"y":0,
"w":0,
"h":0,
"area":0
}
]
}
Capture audio through the device's microphone, perform a real-time FFT (Fast Fourier Transform) and plot a time-frequency plot. The green graph below is the RMS of the audio, indicating the current loudness.
None
During recognition, UnitV2 continuously outputs recognition sample data (JSON format, UART: 115200bps 8N1) through the serial port (HY2.0-4P interface at the bottom). The following are example programs for reading recognition results on different platforms.
JSON strings can be parsed using the ArduinoJson library.
void setup() {
  Serial.begin(115200);                       // USB serial for debug output
  Serial2.begin(115200, SERIAL_8N1, 16, 17);  // UART wired to UnitV2
}

void loop() {
  if (Serial2.available()) {
    String recvStr = Serial2.readStringUntil('\n');  // read one JSON line
    if (recvStr[0] == '{') {
      Serial.print(recvStr);
    }
  }
}
import machine
import json

uart1 = machine.UART(1, tx=16, rx=17)
uart1.init(115200, bits=8, parity=None, stop=1)
PROTOCOL_START = b'{'[0]

while True:
    if uart1.any():
        data = uart1.readline()
        if data[0] == PROTOCOL_START:
            json_data = json.loads(data)
from json.decoder import JSONDecodeError
import subprocess
import json
import base64
import serial
import time
from datetime import datetime
from PIL import Image
import os
import io
uart_grove = serial.Serial('/dev/ttyS0', 115200, timeout=0.1)
recognizer = subprocess.Popen(['/home/m5stack/payload/bin/object_recognition',
                               '/home/m5stack/payload/uploads/models/nanodet_80class'],
                              stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
recognizer.stdin.write('_{"stream":1}\r\n'.encode('utf-8'))
recognizer.stdin.flush()
byte_data = b''  # latest JPEG frame received from the recognizer
while True:
    today = datetime.now()
    newpath = "/media/sdcard/" + today.strftime("%Y_%m_%d") + "/"
    line = recognizer.stdout.readline().decode('utf-8').strip()
    if not line:
        break  # process finished or empty line
    try:
        doc = json.loads(line)
        if 'img' in doc:
            byte_data = base64.b64decode(doc["img"])
        elif 'num' in doc:
            for obj in doc['obj']:
                uart_grove.write(str(obj['type'] + '\n').encode('utf-8'))
                if obj['type'] == "aeroplane" and byte_data:
                    print('aeroplane ' + today.strftime("%Y_%m_%d_%H_%M_%S"))
                    if not os.path.exists(newpath):
                        os.mkdir(newpath)
                    image_path = newpath + today.strftime("%Y_%m_%d_%H_%M_%S") + ".jpg"
                    img = Image.open(io.BytesIO(byte_data))
                    img.save(image_path, 'jpeg')
                    time.sleep(1)
        else:
            print('Not detect ' + today.strftime("%Y_%m_%d_%H_%M_%S"))
    except JSONDecodeError as e:
        print("Error: Invalid JSON string")
        print("JSONDecodeError:", str(e))