Drive what’s next in Edge AI
NXP’s Ara family of discrete neural processing units, part of the company’s edge AI portfolio, empowers designers to push the boundaries of intelligent systems. By offloading intensive AI inference from host processors, Ara NPUs deliver low-latency, power-efficient AI execution while providing the flexibility to scale as models and applications evolve.
With Ara-2 accelerating real-time generative AI and Ara-1 serving established deployments, the portfolio allows system designers to future-proof their AI strategies, improving privacy, responsiveness, and overall system efficiency.
Ara-2 is NXP’s latest discrete neural processing unit, optimized for real-time generative AI and modern transformer-based workloads at the edge. Built to handle the compute and memory demands of large language models and multimodal AI, Ara-2 enables on-device execution with low latency, reduced operational costs and enhanced data privacy.
Ara240 is the first product launching in this family and is currently in preproduction. Its balanced architecture combines high compute density, large on-chip memory and high off-chip bandwidth to efficiently execute large and complex models across embedded and AI-enabled compute platforms.
Related: i.MX 8M Plus
Related: FRDM i.MX 8M Plus Development Board
Ara-2 Factsheet
About NXP
Following the completion of NXP’s acquisition of Kinara, Ara discrete NPUs are now part of NXP’s broader intelligent edge portfolio. Ara devices pair naturally with NXP application processors, enabling system-level AI acceleration where NPUs handle inference while the host processor manages control, preprocessing and postprocessing.
The Ara software stack is being integrated into NXP’s eIQ AI and ML software environment, simplifying model optimization, deployment and scaling across embedded platforms.
NXP technologies enable flexible, scalable AI systems that support everything from traditional time-series anomaly detection and vision inference to advanced multimodal generative AI at the edge.