
UnitV2 built-in recognition service

Driver Installation

Download the corresponding SR9900 driver according to the operating system used.

Windows 10

Extract the driver package to the desktop. Open Device Manager and locate the currently unrecognized device (named "USB 10/100 LAN" or containing "SR9900" in its name). Right-click it, select Update Driver, browse to the path of the extracted package, then click Confirm and wait for the update to complete.

MacOS

Extract the driver package, double-click the SR9900_v1.x.pkg file, and click Next to install according to the prompts. (The package also contains a detailed PDF driver installation tutorial.)

  • After installation, if the network card cannot be enabled normally, open a terminal and re-enable it with the following commands.
sudo ifconfig en10 down
sudo ifconfig en10 up

Connect device

UnitV2 starts automatically when powered over USB; the power indicator lights red and white, then goes out once startup completes. UnitV2 integrates a basic AI recognition application developed by M5Stack with multiple built-in recognition functions (face recognition, object tracking, and other common tasks), helping users build AI recognition applications quickly. Using either of the two connection methods below, a PC or mobile device can open the recognition preview page in a browser via the domain name unitv2.py or the IP 10.254.239.1. During recognition, UnitV2 continuously outputs recognition sample data (JSON format, UART: 115200bps 8N1) through the serial port (the HY2.0-4P interface at the bottom).

Note: The built-in recognition service has some compatibility issues on the Safari browser. It is recommended to use the Chrome browser to access it.

  • Ethernet mode connection: UnitV2 has a built-in wired network card; when connected to a PC through the Type-C interface, a network connection with UnitV2 is established automatically.

  • AP mode connection: after startup, UnitV2 enables an AP hotspot by default (SSID: M5UV2_XXX, PWD: 12345678); users can establish a network connection with UnitV2 over Wi-Fi.

Built-in functions

Function switching

Switch between recognition functions by clicking the navigation bar on the function page, or by sending JSON instructions over the serial port. Note: the command string must not contain line breaks anywhere except at its end.

//The value of the function key can be specified as any of the following functions
Audio FFT
Code Detector
Face Detector
Lane Line Tracker
Motion Tracker
Shape Matching
Camera Stream
Online Classifier
Color Tracker
Face Recognition
Target Tracker
Shape Detector
Object Recognition
//Please note that args must be a list.
{
    "function":"Object Recognition",
    "args":[
        "yolo_20class"
    ]
} 

Status responses for function switching


//If the function switch is successful, a reply will be received
{
    "msg":"function switched to Object Recognition."
}

//If the specified function does not exist, a reply will be received
{
    "error":"function Object Recognition not exist"
} 

//If the function switching fails, a reply will be received
{
    "error":"invalid function."
}
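The switch command and its replies can be driven from a host machine. The sketch below (the helper names are ours; any 115200bps 8N1 serial link, e.g. via pyserial, is assumed for transport) builds a single-line command and parses one reply line:

```python
import json

def build_switch_command(function, args=None):
    """Encode a function-switch command as one line; UnitV2 forbids
    line breaks anywhere except at the end of the command string."""
    cmd = {"function": function, "args": list(args or [])}
    return (json.dumps(cmd) + "\n").encode("utf-8")

def parse_reply(line):
    """Parse one JSON reply line (success carries "msg", failure "error")."""
    return json.loads(line.decode("utf-8"))
```

Write the result of `build_switch_command(...)` to the serial port, then feed each received line to `parse_reply`.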

1. Camera Stream

1.1 Description

480P real-time video preview.

1.2 Serial port operation

Switch function to Camera Stream

{
    "function": "Camera Stream",
    "args": ""
}

2. Code Detector

2.1 Description

Identify the QR code in the screen and return the coordinates and content of the QR code.

2.2 Web page operation

2.3 Serial port operation

Please switch the function to Code Detector

{
    "function": "Code Detector",
    "args": ""
}

2.4 Sample output

{
    "running":"Code Detector",
    "num":2, // Number of QR codes
    "code":[
        {
            "prob": 0.987152, // confidence rate
            "x":10, // 0 ~ 640
            "y":10, // 0 ~ 480
            "w":30,
            "h":30, // QR code bounding box
            "type":"QR/DM/Maxi",  // include "Background", "QR/DM/Maxi", "SmallProgramCode", "PDF-417", "EAN", "Unknown"
            "content":"m5stack"
        },
        {
            "prob": 0.987152, // confidence rate
            "x":10,
            "y":10,
            "w":30,
            "h":30, // QR code bounding box
            "type":"QR/DM/Maxi",  // include "Background", "QR/DM/Maxi", "SmallProgramCode", "PDF-417", "EAN", "Unknown"
            "content":"m5stack"
        }
    ]
}
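Each output line can be parsed on the host with the field names from the sample above; a minimal sketch (the function name is ours):

```python
import json

def extract_codes(line):
    """Return (content, (x, y, w, h)) pairs from one Code Detector
    output line, using the field names shown in the sample output."""
    doc = json.loads(line)
    if doc.get("running") != "Code Detector":
        return []
    return [(c["content"], (c["x"], c["y"], c["w"], c["h"]))
            for c in doc.get("code", [])]
```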

3. Object Recognition

3.1 Description

Object detection based on YOLO Fastest and NanoDet. Supports V-Training.

3.2 Web page operations

  1. Click Add to upload a model file (tar format). For training custom models, see the UnitV2 V-Training tutorial.
  2. After selecting a model, click Run to run it. (The built-in models nanodet_80class and yolo_20class can be run directly.)

3.3 Serial port operation

Please switch the function to Object Recognition


//Select the parameter "yolo_20class" to switch to this function
{
    "function": "Object Recognition",
    "args": ["yolo_20class"]
}
//Select the parameter "nanodet_80class" to switch to this function
{
    "function": "Object Recognition",
    "args": ["nanodet_80class"]
}

Objects recognized by the built-in models


yolo_20class: [
    "aeroplane", "bicycle", "bird", "boat", "bottle", "bus", "car", "cat", "chair", "cow", "diningtable", "dog", 
    "horse", "motorbike", "person", "pottedplant", "sheep", "sofa", "train", "tvmonitor"
]

nanodet_80class: [
        "person","bicycle","car","motorbike","aeroplane","bus","train","truck","boat","traffic light",
        "fire hydrant","stop sign","parking meter","bench","bird","cat","dog","horse","sheep","cow",
        "elephant","bear","zebra","giraffe","backpack","umbrella","handbag","tie","suitcase","frisbee",
        "skis","snowboard","sports ball","kite","baseball bat","baseball glove","skateboard","surfboard",
        "tennis racket","bottle","wine glass","cup","fork","knife","spoon","bowl","banana","apple",
        "sandwich","orange","broccoli","carrot","hot dog","pizza","donut","cake","chair","sofa","pottedplant",
        "bed","diningtable","toilet","tvmonitor","laptop","mouse","remote","keyboard","cell phone","microwave",
        "oven","toaster","sink","refrigerator","book","clock","vase","scissors","teddy bear","hair drier","toothbrush"
]

3.4 Sample output

{
    "num": 1,
    "obj": [
        {
            "prob": 0.938137174,
            "x": 179,
            "y": 186,
            "w": 330,
            "h": 273,
            "type": "person"
        }
    ],
    "running": "Object Recognition"
}
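Detections can be filtered by confidence on the host; a minimal sketch over the sample format above (the function name and the 0.5 default threshold are ours):

```python
import json

def detections(line, min_prob=0.5):
    """Yield (type, prob, (x, y, w, h)) for each detection at or above
    min_prob from one Object Recognition output line."""
    doc = json.loads(line)
    for obj in doc.get("obj", []):
        if obj["prob"] >= min_prob:
            yield obj["type"], obj["prob"], (obj["x"], obj["y"], obj["w"], obj["h"])
```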

4. Color Tracker

4.1 Description

Detect the specified color area and return the coordinates of the color area.

4.2 Web page operations


You can directly adjust the LAB threshold slider to filter out the background and obtain the color area of interest. You can also directly frame the color area of interest on the screen. The system will automatically calculate the color with the largest proportion of the target area and filter out the background. You can further adjust the slider based on the calculation to achieve better filtering effects. Clicking the "To Mask Mode" button will switch to Mask mode, in which you can directly see the filtering effect. Clicking the "To RGB Mode" button again will switch back to RGB mode.

  • About CIELAB color space
  • LAB threshold is mapped to 0~255.
  • The L channel in LAB represents lightness. This threshold (0~255) is usually left unset, meaning the effect of brightness is ignored; note, however, that this makes the algorithm unable to distinguish black from white.
  • The algorithm will only return the largest target

4.3 Serial port operation

Before performing the following operations, please switch the function to Color Tracker

{
    "function": "Color Tracker",
    "args": ""
}

4.3.1 Specify LAB threshold

  • Send
    // *LAB thresholds are mapped to 0~255
    {
        "config":"Color Tracker",
        "l_min":0, // 0 ~ 255
        "l_max":0, // 0 ~ 255
        "a_min":0, // 0 ~ 255
        "a_max":0, // 0 ~ 255
        "b_min":0, // 0 ~ 255
        "b_max":0  // 0 ~ 255
    }
  • Receive
    {
        "running":"Color Tracker",
        "msg":"Data updated."
    }
    

4.3.2 Specify ROI (automatically calculate threshold)

  • Send
    {
        "config":"Color Tracker",
        "x":0, // 0 ~ 640
        "y":0, // 0 ~ 480
        "w":30,
        "h":30
    }
    
  • Receive
    // *va and vb refer to the degree of color dispersion within the ROI. If the dispersion is high, the tracking effect will be poor.
    {
        "running":"Color Tracker",
        "a_cal":0.0,
        "b_cal":0.0, // Calculated threshold
        "va":0.0,
        "vb":0.0, // Color dispersion rate
        "l_min":0, // Fixed value 0
        "l_max":255, // Fixed value 255
        "a_min":0, // a_cal - (10 + (int)(va / 2.0f))
        "a_max":0, // a_cal + (10 + (int)(va / 2.0f))
        "b_min":0, // b_cal - (10 + (int)(vb / 2.0f))
        "b_max":0  // b_cal + (10 + (int)(vb / 2.0f))
    }

4.4 Sample output

{
    "running":"Color Tracker",
    "cx": 0, // Center X coordinate
    "cy": 0, // Center Y coordinate
    "r": 0, // Radius
    "mx": 0, // Moment x position
    "my": 0 // Moment y position
}
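A common use of the tracker output is steering toward the blob; this sketch (the function name and the idea of using the offset as a steering error are ours) computes the blob's offset from the 640x480 frame center:

```python
def center_offset(cx, cy, frame_w=640, frame_h=480):
    """Offset of the tracked color blob from the frame center; can serve
    as the error term in a pan/tilt or line-follower control loop."""
    return cx - frame_w // 2, cy - frame_h // 2
```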

5. Lane Line Tracker

5.1 Description

Detect the road lines in the frame, fit them to a straight line, and return the line's angle and coordinates.

5.2 Web page operations

You can directly adjust the LAB threshold slider to filter out the background and obtain the color area of interest. You can also directly frame the color area of interest on the screen. The system will automatically calculate the color with the largest proportion of the target area and filter out the background. You can further adjust the slider based on the calculation to achieve better filtering effects. Clicking the "To Mask Mode" button will switch to Mask mode, in which you can directly see the filtering effect. Clicking the "To RGB Mode" button again will switch back to RGB mode.

  • About CIELAB color space
  • LAB threshold is mapped to 0~255.
  • The L channel in LAB represents lightness. This threshold (0~255) is usually left unset, meaning the effect of brightness is ignored; note, however, that this makes the algorithm unable to distinguish black from white.

5.3 Serial port operation

Before performing the following operations, please switch the function to Lane Line Tracker

{
    "function": "Lane Line Tracker",
    "args": ""
}

5.3.1 Specify LAB threshold

  • Send
    // *LAB thresholds are mapped to 0~255
    {
        "config":"Lane Line Tracker",
        "l_min":0, // 0 ~ 255
        "l_max":0, // 0 ~ 255
        "a_min":0, // 0 ~ 255
        "a_max":0, // 0 ~ 255
        "b_min":0, // 0 ~ 255
        "b_max":0  // 0 ~ 255
    }
    
  • Receive
    {
        "running":"Lane Line Tracker",
        "msg":"Data updated."
    }
    

5.3.2 Specify ROI (automatically calculate threshold)

  • Send
    {
        "config":"Lane Line Tracker",
        "x":0, // 0 ~ 640
        "y":0, // 0 ~ 480
        "w":30,
        "h":30
    }
    
  • Receive
    // *va and vb refer to the degree of color dispersion within the ROI. If the dispersion is high, the segmentation effect will be poor.
    {
        "running":"Lane Line Tracker",
        "a_cal":0.0,
        "b_cal":0.0, // Calculated threshold
        "va":0.0,
        "vb":0.0, // Color dispersion rate
        "l_min":0, // Fixed value 0
        "l_max":255, // Fixed value 255
        "a_min":0, // a_cal - (10 + (int)(va / 2.0f))
        "a_max":0, // a_cal + (10 + (int)(va / 2.0f))
        "b_min":0, // b_cal - (10 + (int)(vb / 2.0f))
        "b_max":0  // b_cal + (10 + (int)(vb / 2.0f))
    }

5.4 Sample output

{
    "running":"Lane Line Tracker",
    "x":0,
    "y":0, // Base point of the fitted line
    "k":0 // Slope of the fitted line
}

6. Target Tracker

6.1 Description

Select a target on the screen and track it using the MOSSE algorithm.

6.2 Web page operations

Just frame the object of interest on the screen.

6.3 Serial port operation

Before performing the following operations, please switch the function to Target Tracker


{
    "function": "Target Tracker",
    "args": ""
}

6.4 Sample output

{
    "running":"Target Tracker",
    "x":0, // 0 ~ 640
    "y":0, // 0 ~ 480
    "w":0,
    "h":0
}

7. Motion Tracker

7.1 Description

Detect and track moving targets, and return the target's coordinates and angles.

7.2 Web page operations

Click the 'Set as background' button to set the background. The algorithm can adapt to slowly changing backgrounds.

7.3 Serial port operation

Before performing the following operations, please switch the function to Motion Tracker

{
    "function": "Motion Tracker",
    "args": ""
}

7.3.1 Set the background

  • Send

    //Sending this command will set the background
    {
        "config":"Motion Tracker",
        "operation":"update"
    }
  • Receive

    {
        "running":"Motion Tracker",
        "msg":"Background updated."
    }
    

7.4 Sample output

{
    "running":"Motion Tracker",
    "num":2,
    "roi":[
        {
            "x":0,
            "y":0,
            "w":0,
            "h":0,
            "angle":0.0,
            "area":0
        },
        {
            "x":0,
            "y":0,
            "w":0,
            "h":0,
            "angle":0.0,
            "area":0
        }
    ]
} 
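Since several moving regions can be reported at once, a host often wants only the dominant one; a minimal sketch over the sample format above (the function name is ours):

```python
import json

def largest_motion(line):
    """Return the ROI dict with the largest area from one Motion Tracker
    output line, or None when nothing is moving."""
    rois = json.loads(line).get("roi", [])
    return max(rois, key=lambda r: r["area"]) if rois else None
```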

8. Online Classifier

8.1 Description

This function trains on and classifies objects inside the green target box in real time; the feature values obtained from training can be saved on the device for later use.

8.2 Web page operations

  1. Click the 'Reset' button to clear existing categories and enter training mode.
  2. Click the '+' button to create a new category.
  3. Select the category you want to train.
  4. Place the object to be trained within the green target box.
  5. Click the 'Train' button to complete a training session.
  6. Change the angle of the object and repeat the training until you think you have achieved the desired effect.
  7. Click the 'save&run' button to save the training results on the device, and exit the training mode for object recognition and classification.

8.3 Serial port operation

Before performing the following operations, please switch the function to Online Classifier


{
    "function": "Online Classifier",
    "args": ""
}

8.3.1 Train

  • Send

    //This command puts the device into training mode, extracts features once, and stores them under the specified category. If class_id does not exist, the class will be created.
    {
        "config":"Online Classifier",
        "operation":"train",
        "class_id":1, // Integer (0 ~ N), ID of the class
        "class":"class_1" // String, name of the class
    }
    
  • Receive

    {
        "running":"Online Classifier",
        "msg":"Training [class name] [num of training] times"
    }
    

8.3.2 Save&Run

  • Send

    {
        "config":"Online Classifier",
        "operation":"saverun"
    }
    
  • Receive

    {
        "running":"Online Classifier",
        "msg":"Save and run."
    }
    

8.3.3 Reset

  • Send

    //This command will put the device into training mode and clear all categories.
    {
        "config":"Online Classifier",
        "operation":"reset"
    }
    
  • Receive

    {
        "running":"Online Classifier",
        "msg":" Please take a picture."
    }
    

8.4 Sample output

{
    "running":"Online Classifier",
    "class_num":2, // Number of classes identified
    "best_match":"class_1", // Best matching class
    "best_score":0.83838, // Best match score
    "class":[ // Score for each class
        {
            "name":"class_1",
            "score":0.83838
        },
        {
            "name":"class_2",
            "score":0.66244
        }
    ]
}
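A host will usually accept the best match only above some score; a minimal sketch over the sample format above (the function name and the 0.5 default threshold are ours):

```python
import json

def classify(line, min_score=0.5):
    """Return the best-matching class name from one Online Classifier
    output line, or None when the best score falls below min_score."""
    doc = json.loads(line)
    if doc.get("best_score", 0.0) >= min_score:
        return doc.get("best_match")
    return None
```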

9. Face Recognition

9.1 Description

Detect and recognize faces.

9.2 Web page operations

  1. Click the Reset button to clear all existing faces.
  2. Click the + button to create a new face.
  3. Select the faces you want to train.
  4. Look into the camera to make sure the face you want to train is in the right position.
  5. Click the Train button.
  6. During training, when the bounding box is yellow, it means training is in progress. At this time, you can slowly turn your head to sample different angles to achieve better recognition results.
  7. If the bounding box turns red, it means the target has been lost, usually because the face has changed too much. Please adjust the face position until the face is found again.
  8. Click Stop when you think the desired effect is achieved. The device is now able to recognize this face.
  9. Click the Save button to save the feature data to the device for next time.

9.3 Serial port operation

Before performing the following operations, please switch the function to Face Recognition

{
    "function": "Face Recognition",
    "args": ""
}

9.3.1 Train

  • Send

    //To create a new face, provide face_id in order (0 ~ N).
    {
      "config":"Face Recognition",
      "operation":"train",
      "face_id":1, // Integer (0 ~ N), face ID
      "name":"tom" // String, the name of the face
    }
    //For example, there are already 3 faces (0~2). To create a new face, you need to specify the id as 3.
    
  • Receive (Success)

    {
        "running":" Face Recognition ",
        "msg":"Training tom" // Training surface name
    }
    
  • Receive (Error)

    {
        "running":"Face Recognition",
        "msg":"Invalid face id"
    }
    

9.3.2 Stop Train

  • Send

    {
        "config":"Face Recognition",
        "operation":"stoptrain"
    }
    
  • Receive

    {
        "running":"Face Recognition",
        "msg":"Exit training mode."
    }
    

9.3.3 Save&Run

  • Send

    {
        "config":"Face Recognition",
        "operation":"saverun"
    }
    
  • Receive

    {
        "running":"Face Recognition",
        "msg":"Faces saved."
    }
    

9.3.4 Reset

  • Send

    //This command will delete all faces.
    {
        "config":"Face Recognition",
        "operation":"reset"
    }
    
  • Receive

    {
        "running":"Face Recognition",
        "msg":"Reset success"
    }
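The train / stoptrain / saverun operations above form a typical enrollment sequence; this host-side sketch (the helper name is ours; each line would be written to the 115200bps 8N1 serial port) builds the command lines:

```python
import json

def face_command(operation, **fields):
    """Build one Face Recognition config line (train / stoptrain /
    saverun / reset), ready to write to the serial port."""
    cmd = {"config": "Face Recognition", "operation": operation, **fields}
    return (json.dumps(cmd) + "\n").encode("utf-8")

# A typical enrollment session: train a new face, stop, then persist it.
session = [
    face_command("train", face_id=0, name="tom"),
    face_command("stoptrain"),
    face_command("saverun"),
]
```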
    

9.4 Sample output

9.4.1 Training Mode

{
    "running":"Face Recognition",
    "status":"training", // training or missing
    "x":0,
    "y":0,
    "w":0,
    "h":0, // Face bounding box
    "prob":0, // Detection confidence
    "name":0
}

9.4.2 Normal Mode (match score > 0.5)

{
    "running":"Face Recognition",
    "num":1, // Number of faces recognized
    "face":[
        {
            "x":0, // 0 ~ 320
            "y":0, // 0 ~ 240
            "w":30,
            "h":30, // Face bounding box
            "prob":0, // Detection confidence
            "match_prob":0.8, // Match confidence
            "name":"tom",
            "mark":[ // landmarks
                {
                    "x":0,
                    "y":0
                },
                {
                    "x":0,
                    "y":0
                },
                {
                    "x":0,
                    "y":0
                },
                {
                    "x":0,
                    "y":0
                },
                {
                    "x":0,
                    "y":0
                }
            ]
        }
    ]
}

9.4.3 Normal Mode (match score <= 0.5)

{
    "running":"Face Recognition",
    "num":1, // Number of faces recognized
    "face":[
        {
            "x":0, // 0 ~ 320
            "y":0, // 0 ~ 240
            "w":30,
            "h":30, // Face bounding box
            "prob":0, // Confidence
            "name":"unidentified",
            "mark":[ // landmarks
                {
                    "x":0,
                    "y":0
                },
                {
                    "x":0,
                    "y":0
                },
                {
                    "x":0,
                    "y":0
                },
                {
                    "x":0,
                    "y":0
                },
                {
                    "x":0,
                    "y":0
                }
            ]
        }
    ]
}

10. Face Detector

10.1 Description

Detect faces in the picture and give 5 landmarks.

10.2 Web page operations

10.3 Serial port operation

Before performing the following operations, please switch the function to Face Detector

{
    "function": "Face Detector",
    "args": ""
}

10.4 Sample output

{
    "running":"Face Detector",
    "num":1, // Number of faces detected
    "face":[
        {
            "x":0,
            "y":0,
            "w":30,
            "h":30, // Face bounding box
            "prob":0, // Confidence
            "mark":[ // landmarks
                {
                    "x":0,
                    "y":0
                },
                {
                    "x":0,
                    "y":0
                },
                {
                    "x":0,
                    "y":0
                },
                {
                    "x":0,
                    "y":0
                },
                {
                    "x":0,
                    "y":0
                }
            ]
        }
    ]
}

11. Shape Detector

11.1 Description

Detect shapes in the frame and calculate their area. Able to identify squares, rectangles, triangles, pentagons, and circles.

11.2 Web page operations

Click the 'Set as background' button to set the background. The algorithm can adapt to slowly changing backgrounds.

11.3 Serial port operation

Before performing the following operations, please switch the function to Shape Detector

{
    "function": "Shape Detector",
    "args": ""
}
  • Send

    //Sending this command will set the background
    {
        "config":"Shape Detector",
        "operation":"update"
    }
  • Receive

    {
        "running":"Shape Detector",
        "msg":"Background updated."
    }
    

11.4 Sample output

{
    "running":"Shape Detector",
    "num":2,
    "shape":[
        {
            "name":"Rectangle", // "unidentified", "triangle", "square", "rectangle", "pentagon", "circle"
            "x":0,
            "y":0,
            "w":0,
            "h":0,
            "angle":0.0, // Valid when the shape is a square or rectangle
            "area":0
        },
        {
            "name":"Rectangle", // "unidentified", "triangle", "square", "rectangle", "pentagon", "circle"
            "x":0,
            "y":0,
            "w":0,
            "h":0,
            "angle":0.0, // Valid when the shape is a square or rectangle
            "area":0
        }
    ]
}

12. Shape Matching

12.1 Description

Matches any given shape (the shape should not contain curves). The uploaded shape is converted into feature data and saved on the device for later use.

12.2 Web page operation

Click the add button to add a shape. You need to upload a shape template image as shown below (png format, the shape is black and the background is white). The file name will be the name of the shape.

Click the reset button to clear all uploaded shapes.

Click the 'Set as background' button to set the background. The algorithm can adapt to slowly changing backgrounds.

12.3 Serial port operation

To be developed, not supported yet.

12.4 Sample output

// The shape name returned here is the file name of the uploaded template. Note that a shape is marked as unidentified when its confidence score is below 30.
{
    "running":"Shape Matching",
    "num":2,
    "shape":[
        {
            "name":"arrow", // Custom shape name; marked unidentified when the confidence score is below 30
            "max_score":83, // Confidence score; absent when the shape is unidentified
            "x":0,
            "y":0,
            "w":0,
            "h":0,
            "area":0
        },
        {
            "name":"unidentified", // Custom shape name; marked unidentified when the confidence score is below 30
            "x":0,
            "y":0,
            "w":0,
            "h":0,
            "area":0
        }
    ]
}

13. Audio FFT

13.1 Description

Capture audio through the device's microphone, perform a real-time FFT (Fast Fourier Transform) and plot a time-frequency plot. The green graph below is the RMS of the audio, indicating the current loudness.

  • The microphone's response cutoff frequency is around 10 kHz.

13.2 Web page operation

13.4 Sample output

None

Serial port reading

During the identification process, UnitV2 will continuously output identification sample data (JSON format, UART: 115200bps 8N1) through the serial port (HY2.0-4P interface at the bottom). The following are case programs for reading recognition results on different platforms.

Arduino

JSON strings can be parsed using the ArduinoJson library.


void setup() {

  Serial.begin(115200);
  Serial2.begin(115200, SERIAL_8N1, 16, 17);

}

void loop() {

 if(Serial2.available()) {
   String recvStr = Serial2.readStringUntil('\n');
   if(recvStr[0] == '{'){
     Serial.print(recvStr);
   }
 }
  
}

MicroPython


import machine
import json

uart1 = machine.UART(1, tx=16, rx=17)
uart1.init(115200, bits=8, parity=None, stop=1)

PROTOCOL_START = b'{'[0]

while True:
  if uart1.any():
    data = uart1.readline()
    if data[0] == PROTOCOL_START:
        json_data = json.loads(data)

Using Python to call model files


from json.decoder import JSONDecodeError
import subprocess
import json
import base64
import serial
import time
from datetime import datetime
from PIL import Image
import os
import io

uart_grove = serial.Serial('/dev/ttyS0', 115200, timeout=0.1)
recognizer = subprocess.Popen(['/home/m5stack/payload/bin/object_recognition', '/home/m5stack/payload/uploads/models/nanodet_80class'],
                              stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

# Enable image streaming so detections can be saved as JPEG snapshots.
recognizer.stdin.write("_{\"stream\":1}\r\n".encode('utf-8'))
recognizer.stdin.flush()

byte_data = b''  # Last frame received from the recognizer (base64-decoded)

while True:
    today = datetime.now()
    newpath = "/media/sdcard/" + today.strftime("%Y_%m_%d") + "/"

    line = recognizer.stdout.readline().decode('utf-8').strip()
    if not line:
        break  # Process finished or empty line

    try:
        doc = json.loads(line)
        if 'img' in doc:
            byte_data = base64.b64decode(doc["img"])
        elif 'num' in doc:
            for obj in doc['obj']:
                uart_grove.write(str(obj['type'] + '\n').encode('utf-8'))
                if obj['type'] == "aeroplane" and byte_data:
                    print('aeroplane ' + today.strftime("%Y_%m_%d_%H_%M_%S"))
                    if not os.path.exists(newpath):
                        os.mkdir(newpath)
                    image_path = newpath + today.strftime("%Y_%m_%d_%H_%M_%S") + ".jpg"
                    img = Image.open(io.BytesIO(byte_data))
                    img.save(image_path, 'jpeg')
                    time.sleep(1)
                else:
                    print('Not detect ' + today.strftime("%Y_%m_%d_%H_%M_%S"))
    except JSONDecodeError as e:
        print("Error: Invalid JSON string")
        print("JSONDecodeError:", str(e))