Neurxcore’s NPU Series Is Tailored For AI Inference Tasks

Neurxcore, a leading supplier of AI solutions, has introduced its Neural Processing Unit (NPU) product line for AI inference applications.

The line is built on patent-protected in-house designs together with an expanded and improved version of NVIDIA's open-source Deep Learning Accelerator (Open NVDLA) technology. Neurxcore's SNVDLA IP series, which primarily targets image processing, including object recognition and classification, sets a new benchmark for energy efficiency, performance, and capability. Versatile enough for generative AI applications as well, SNVDLA has already been silicon-validated on a 22nm TSMC process and demonstrated on a development board running multiple applications.

Alongside the IP package, Neurxcore provides its Hieracium SDK (Software Development Kit), built on the open-source Apache TVM (Tensor Virtual Machine) framework, which helps configure, optimize, and compile neural network applications for SNVDLA products. From ultra-low-power to high-performance scenarios, Neurxcore's product line serves a broad range of industries and applications, including robotics, edge computing, AR/VR, ADAS, wearables, smartphones, smart homes, surveillance, Set-Top Box and Digital TV (STB/DTV), smart TV, and more.

Together with this product, Neurxcore offers a comprehensive package for creating customized NPU solutions. This package includes novel operators, AI-enabled subsystem design optimization, and optimized model development covering training and quantization.

As Neurxcore's founder and CEO, Virgile Javerliac, stated: "Inference is involved in 80% of AI computational tasks." Reducing energy use and costs while maintaining high performance, he added, is essential.

He also thanked the team that created the solution and underlined Neurxcore's dedication to customer service and to exploring new opportunities.

Inference, the stage in which trained models are applied to make predictions or generate content, is an essential component of AI. Neurxcore's solutions handle this stage efficiently, making them well suited to a range of applications, including those that serve many users at once.

Compared with the original NVIDIA version, the SNVDLA product line delivers notable gains in energy efficiency, performance, and feature set, while still benefiting from NVIDIA's industrial-grade development. Fine-grained configurability, including the number of cores and the number of multiply-accumulate (MAC) operations per core, enables a wide range of applications across many markets, and its energy and cost efficiency place it among the best in its class. Affordable pricing, combined with the open-source software environment enabled by Apache TVM, ensures flexible and accessible AI solutions.

According to Gartner's 2023 report, Forecast: AI Semiconductors, Worldwide, 2021-2027, the use of artificial intelligence techniques in data centers, edge computing, and endpoint devices requires the deployment of optimized semiconductor devices. Revenue from these AI semiconductors is projected to reach $111.6 billion by 2027, a five-year compound annual growth rate of 20%.