eAI — Embedded AI Framework

A C11 embedded AI framework in two tiers: EAI-Min (50 KB footprint, for MCUs) and EAI-Framework (full-featured, for enterprise-class devices). It ships 12 curated LLMs, ReAct agents, LoRA fine-tuning, and federated learning.


Key Features

AI inference and training at the edge — from tiny MCUs to enterprise gateways

Two Tiers (EAI-Min + EAI-Framework)

EAI-Min fits in 50KB for Cortex-M0 MCUs with basic inference. EAI-Framework adds full model management, agents, and training for larger devices.

12 Curated LLMs

Pre-optimized models from TinyLlama to Phi-3, quantized for embedded targets with INT4/INT8 support and automatic model selection by device profile.
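
As a concept sketch (not the eai/models API), automatic selection can be as simple as picking the largest quantized model whose flash and RAM footprints fit the device profile. `model_desc_t` and `select_model` are illustrative names:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative catalog entry: resource footprint of one quantized model. */
typedef struct {
    const char *name;
    uint32_t    flash_kb;  /* flash needed for the weights        */
    uint32_t    ram_kb;    /* peak RAM during inference           */
} model_desc_t;

/* Pick the largest model that fits the device budget; NULL if none fits. */
static const model_desc_t *
select_model(const model_desc_t *catalog, size_t n,
             uint32_t flash_budget_kb, uint32_t ram_budget_kb)
{
    const model_desc_t *best = NULL;
    for (size_t i = 0; i < n; i++) {
        if (catalog[i].flash_kb <= flash_budget_kb &&
            catalog[i].ram_kb   <= ram_budget_kb &&
            (best == NULL || catalog[i].flash_kb > best->flash_kb))
            best = &catalog[i];
    }
    return best;
}
```

Preferring the largest model that fits favors quality; a battery-oriented policy could just as well prefer the smallest.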

ReAct Agents with Tool Calling

Autonomous AI agents using Reasoning + Acting paradigm. Agents can call device tools — sensors, actuators, network APIs — in a structured loop.
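
The "Act" half of that loop comes down to dispatching a tool by name. A minimal self-contained sketch, with hypothetical `tool_t` and `call_tool` names rather than the eai/agents API (the "Reason" step, producing the tool name and argument, would come from the model):

```c
#include <stdio.h>
#include <string.h>

/* A tool takes an argument string and writes a result string. */
typedef int (*tool_fn)(const char *arg, char *out, size_t out_len);

typedef struct {
    const char *name;
    tool_fn     fn;
} tool_t;

/* Example tool: stand-in for a real temperature-sensor read. */
static int read_temp(const char *arg, char *out, size_t out_len) {
    (void)arg;
    snprintf(out, out_len, "23.5");
    return 0;
}

/* One "Act" step: find the named tool and invoke it. A nonzero
 * return means the agent must re-plan (e.g. unknown tool). */
static int call_tool(const tool_t *tools, size_t n, const char *name,
                     const char *arg, char *out, size_t out_len)
{
    for (size_t i = 0; i < n; i++)
        if (strcmp(tools[i].name, name) == 0)
            return tools[i].fn(arg, out, out_len);
    return -1;
}
```

The observation written to `out` is then fed back into the next reasoning step, closing the loop.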

LoRA Fine-Tuning

On-device Low-Rank Adaptation fine-tuning for personalizing models with local data without full retraining or cloud connectivity.
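
The underlying update is small: LoRA adds a scaled low-rank product to the frozen weights, W' = W + (alpha/r) * B * A. A self-contained sketch of the merge step, with illustrative names rather than the eai/lora API:

```c
#include <stddef.h>

/* Merge a rank-r LoRA adapter in place: W += (alpha/r) * B*A, where
 * W is d_out x d_in, B is d_out x r, A is r x d_in (all row-major). */
static void lora_merge(float *W, const float *B, const float *A,
                       size_t d_out, size_t d_in, size_t r, float alpha)
{
    float scale = alpha / (float)r;
    for (size_t i = 0; i < d_out; i++)
        for (size_t j = 0; j < d_in; j++) {
            float acc = 0.0f;
            for (size_t k = 0; k < r; k++)
                acc += B[i * r + k] * A[k * d_in + j];
            W[i * d_in + j] += scale * acc;
        }
}

/* Convenience wrapper for the 1x1, rank-1 case. */
static float lora_merge_1x1(float w, float b, float a, float alpha) {
    lora_merge(&w, &b, &a, 1, 1, 1, alpha);
    return w;
}
```

Because only B and A (r * (d_out + d_in) values) are trained, the adapter is tiny next to the base weights, which is what makes on-device personalization feasible.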

Federated Learning

Privacy-preserving distributed training across device fleets. Differential privacy, secure aggregation, and bandwidth-efficient gradient compression.
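
At its core the aggregation step is FedAvg: the server replaces the global weights with the sample-count-weighted mean of the client weights. A minimal sketch, independent of the eai/federated API (no privacy or compression layers shown):

```c
#include <stddef.h>

/* FedAvg-style aggregation: global[j] becomes the mean of clients[c][j],
 * weighted by each client's local sample count. */
static void fed_avg(float *global, const float *const *clients,
                    const size_t *n_samples, size_t n_clients, size_t dim)
{
    size_t total = 0;
    for (size_t c = 0; c < n_clients; c++) total += n_samples[c];
    for (size_t j = 0; j < dim; j++) {
        float acc = 0.0f;
        for (size_t c = 0; c < n_clients; c++)
            acc += (float)n_samples[c] * clients[c][j];
        global[j] = acc / (float)total;
    }
}

/* Convenience wrapper: aggregate two scalar "models". */
static float fed_avg_scalar2(float w1, size_t n1, float w2, size_t n2) {
    float g;
    const float *clients[2] = { &w1, &w2 };
    size_t ns[2] = { n1, n2 };
    fed_avg(&g, clients, ns, 2, 1);
    return g;
}
```

In a real fleet, secure aggregation would hide the individual `clients[c]` vectors from the server, and gradient compression would shrink what each device uploads.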

8-Layer Security

Model encryption, input validation, output sanitization, inference sandboxing, memory isolation, audit logging, access control, and secure model updates.
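
As an illustration of one of those layers, input validation can be a cheap pre-inference check: bound the prompt length and reject control characters before anything reaches the model. `validate_prompt` is a hypothetical helper, not part of eai/security:

```c
#include <stddef.h>

/* Input-validation layer: return 0 if the prompt is within bounds and
 * free of control characters (newline and tab allowed), else -1. */
static int validate_prompt(const char *s, size_t max_len)
{
    for (size_t n = 0; s[n] != '\0'; n++) {
        if (n >= max_len)
            return -1;                       /* over the length budget */
        unsigned char c = (unsigned char)s[n];
        if (c < 0x20 && c != '\n' && c != '\t')
            return -1;                       /* embedded control char  */
    }
    return 0;
}
```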

Power-Aware Inference

Dynamic model scaling based on battery level and power budget. Automatic quality/power trade-off with configurable energy profiles.
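
One way to picture the trade-off: map battery level to quantization precision, deferring inference entirely when the budget is exhausted. The thresholds below are illustrative assumptions, not eAI defaults:

```c
#include <stdint.h>

/* Illustrative power policy: lower precision as the battery drains.
 * Returns the quantization bit width to use, or 0 to defer inference. */
static int pick_precision_bits(uint8_t battery_pct)
{
    if (battery_pct < 10) return 0;  /* too low: skip inference       */
    if (battery_pct < 50) return 4;  /* INT4: cheaper, lower quality  */
    return 8;                        /* INT8: full quality            */
}
```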

5 Deployment Profiles

Pre-configured profiles: Ultra-Low-Power, Battery-Optimized, Balanced, Performance, and Enterprise — each tuning model size, precision, and scheduling.
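
A profile can be modeled as a small lookup table. The enum names mirror the five profiles above, but the numeric values are illustrative assumptions, not eAI's shipped defaults:

```c
#include <stdint.h>

typedef enum {
    EAI_ULTRA_LOW_POWER, EAI_BATTERY_OPTIMIZED, EAI_BALANCED,
    EAI_PERFORMANCE, EAI_ENTERPRISE
} profile_id_t;

typedef struct {
    uint32_t max_model_kb;    /* largest model the profile will load */
    uint8_t  precision_bits;  /* default quantization                */
    uint8_t  duty_cycle_pct;  /* share of time inference may run     */
} profile_t;

/* Illustrative values only; real profiles would come from eai/deploy. */
static const profile_t PROFILES[] = {
    [EAI_ULTRA_LOW_POWER]   = {   256, 4,   5 },
    [EAI_BATTERY_OPTIMIZED] = {  1024, 4,  20 },
    [EAI_BALANCED]          = {  4096, 8,  50 },
    [EAI_PERFORMANCE]       = { 16384, 8,  90 },
    [EAI_ENTERPRISE]        = { 65536, 8, 100 },
};
```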

Quick Start

Clone the repository and build the tier that matches your target

# Clone the repository
git clone https://github.com/embeddedos-org/eAI.git
cd eAI

# Build the full framework tier
cmake -B build -DEAI_TIER=framework
cmake --build build

# Or build the minimal tier for MCUs
cmake -B build -DEAI_TIER=min -DEAI_TARGET=cortex-m4
cmake --build build

API Highlights

Core AI modules

Module          Description                                          Header
eai/inference   Model loading, quantization, and inference engine    eai/inference.h
eai/models      12 curated LLMs with auto-selection by device        eai/models.h
eai/agents      ReAct agents: reasoning, tool calling, planning      eai/agents.h
eai/lora        LoRA fine-tuning: adapters, merge, export            eai/lora.h
eai/federated   Federated learning: aggregation, privacy, sync       eai/federated.h
eai/security    8-layer security: encryption, sandboxing, audit      eai/security.h
eai/power       Power-aware inference: profiles, scaling, budget     eai/power.h
eai/deploy      Deployment profiles: ultra-low to enterprise         eai/deploy.h