C11 embedded AI framework with two tiers: EAI-Min (50KB MCUs) and EAI-Framework (enterprise). 12 LLMs, ReAct agents, LoRA fine-tuning, federated learning.
AI inference and training at the edge — from tiny MCUs to enterprise gateways
EAI-Min fits in 50KB and provides basic inference on Cortex-M0-class MCUs. EAI-Framework adds full model management, agents, and training for larger devices.
Pre-optimized models from TinyLlama to Phi-3, quantized for embedded targets with INT4/INT8 support and automatic model selection by device profile.
Autonomous AI agents using Reasoning + Acting paradigm. Agents can call device tools — sensors, actuators, network APIs — in a structured loop.
On-device Low-Rank Adaptation fine-tuning for personalizing models with local data without full retraining or cloud connectivity.
Privacy-preserving distributed training across device fleets. Differential privacy, secure aggregation, and bandwidth-efficient gradient compression.
Model encryption, input validation, output sanitization, inference sandboxing, memory isolation, audit logging, access control, and secure model updates.
Dynamic model scaling based on battery level and power budget. Automatic quality/power trade-off with configurable energy profiles.
Pre-configured profiles: Ultra-Low-Power, Battery-Optimized, Balanced, Performance, and Enterprise — each tuning model size, precision, and scheduling.
Build eAI and run your first inference on device
```sh
# Clone the repository
git clone https://github.com/embeddedos-org/eAI.git
cd eAI

# Build the full framework tier
cmake -B build-framework -DEAI_TIER=framework
cmake --build build-framework

# Or build the minimal tier for MCUs (use a separate build directory
# so the two tiers don't share a CMake cache)
cmake -B build-min -DEAI_TIER=min -DEAI_TARGET=cortex-m4
cmake --build build-min
```
Core AI modules
| Module | Description | Header |
|---|---|---|
| `eai/inference` | Model loading, quantization, and inference engine | `eai/inference.h` |
| `eai/models` | 12 curated LLMs with auto-selection by device | `eai/models.h` |
| `eai/agents` | ReAct agents — reasoning, tool calling, planning | `eai/agents.h` |
| `eai/lora` | LoRA fine-tuning — adapters, merge, export | `eai/lora.h` |
| `eai/federated` | Federated learning — aggregation, privacy, sync | `eai/federated.h` |
| `eai/security` | 8-layer security — encryption, sandboxing, audit | `eai/security.h` |
| `eai/power` | Power-aware inference — profiles, scaling, budget | `eai/power.h` |
| `eai/deploy` | Deployment profiles — ultra-low to enterprise | `eai/deploy.h` |