# eAI: AI/ML Inference Engine
An on-device AI/ML inference engine that runs machine-learning models without cloud connectivity. Supports neural networks for classification, object detection, NLP, and more.
**Layer:** Intelligence · **Languages:** C/C++, Python · **Status:** Active Development
## Key Features
- **Tiny Runtime**: under 64 KB ROM
- **Model Formats**: TFLite Micro, ONNX Micro, custom eAI format
- **Hardware Acceleration**: CMSIS-NN, NEON, RISC-V vector
- **Model Zoo**: keyword spotting, anomaly detection, image, health, gesture
- **On-Device Training**: federated and transfer learning
- **INT8/FP16 Quantization**
- **Python Training Pipeline**: PyTorch and TensorFlow
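A runtime that fits in a tiny footprint typically avoids `malloc` entirely and serves all tensor buffers from one static arena, so peak RAM is known at build time. A minimal bump-pointer arena sketch (illustrative only; the names here are assumptions, not taken from the eAI source):

```c
#include <stddef.h>
#include <stdint.h>

/* All tensor buffers come from one static block: no heap, no fragmentation,
   peak memory usage fixed at compile time. */
#define ARENA_SIZE 1024

typedef struct {
    _Alignas(8) uint8_t buf[ARENA_SIZE];
    size_t used;
} arena_t;

/* Hand out `size` bytes, 8-byte aligned; NULL when the arena is exhausted. */
static void *arena_alloc(arena_t *a, size_t size) {
    size_t off = (a->used + 7u) & ~(size_t)7u;   /* round offset up to 8 */
    if (off + size > ARENA_SIZE) return NULL;
    a->used = off + size;
    return &a->buf[off];
}

/* Release everything at once, e.g. between inferences. */
static void arena_reset(arena_t *a) { a->used = 0; }
```

Resetting the whole arena between inferences replaces per-buffer `free()`, which is what makes this pattern common in microcontroller ML runtimes.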
## Architecture
```text
Python Training Pipeline (PyTorch/TensorFlow)
├── eAI Model Compiler (Graph Optimization, Operator Fusion)
├── eAI Inference Runtime (C)
│   ├── Model Loader
│   ├── Operator Registry
│   └── Memory Manager
├── Hardware Abstraction (CMSIS-NN, NEON, RV Vector)
└── eos Integration (Task, Timer, DMA)
```
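The Operator Registry in the runtime maps each operator referenced by a loaded model to a kernel function. A minimal sketch of that dispatch pattern (the operator names and signature are illustrative assumptions, not eAI's actual registry):

```c
#include <string.h>

/* Every kernel shares one signature so the graph executor can call any
   operator through the same function pointer. */
typedef void (*op_fn)(const float *in, float *out, int n);

static void op_relu(const float *in, float *out, int n) {
    for (int i = 0; i < n; i++) out[i] = in[i] > 0.0f ? in[i] : 0.0f;
}

static void op_negate(const float *in, float *out, int n) {
    for (int i = 0; i < n; i++) out[i] = -in[i];
}

typedef struct { const char *name; op_fn fn; } op_entry_t;

/* The table is the registry: only kernels listed here are linked into ROM. */
static const op_entry_t registry[] = {
    { "relu",   op_relu },
    { "negate", op_negate },
};

/* Resolve an operator by name; NULL if the model references an unknown op. */
static op_fn op_lookup(const char *name) {
    for (size_t i = 0; i < sizeof registry / sizeof registry[0]; i++)
        if (strcmp(registry[i].name, name) == 0) return registry[i].fn;
    return NULL;
}
```

Keeping the registry as a plain table also lets the linker drop unreferenced kernels, which helps hit a small ROM budget.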
## Code Example
```c
#include <eai/eai.h>
#include <eai/model.h>

void run_hr_quality(void)
{
    /* Initialize the inference runtime once at startup */
    eai_runtime_init();

    /* Load the model from flash */
    eai_model_t *model =
        eai_model_load("/flash/models/hr_quality.eai");
    if (model == NULL)
        return;

    /* Acquire one window of PPG samples */
    float ppg_data[128];
    read_ppg_sensor(ppg_data, 128);

    eai_tensor_t input = {
        .data  = ppg_data,
        .shape = {1, 128},
        .dtype = EAI_FLOAT32
    };

    /* Run inference and read the scalar quality score */
    eai_tensor_t output;
    eai_infer(model, &input, &output);
    float quality = ((float *)output.data)[0];
    (void)quality;
}
```

## API Highlights
| Function | Description |
|---|---|
| `eai_runtime_init()` | Initialize inference runtime |
| `eai_model_load()` | Load model from flash/file |
| `eai_infer()` | Run inference on input tensor |
| `eai_profile()` | Profile inference performance |
| `eai_quantize()` | Dynamic INT8 quantization |