Custom AI Processor Made By Meta


Meta, the parent company of Facebook, Instagram, and WhatsApp, has been building its own custom AI processors to strengthen its artificial intelligence (AI) capabilities and reduce its dependence on external hardware such as Nvidia GPUs. The effort fits a broader pattern among large tech companies of developing domain-specific silicon tailored to particular workloads, especially as AI becomes more central to their products.
The Meta Training and Inference Accelerator (MTIA) is a family of custom-built AI chips aimed at inference workloads, the stage where an already-trained model is applied to new data. MTIA is tailored to Meta’s internal applications and is designed to deliver more compute and better efficiency for those workloads than conventional CPUs.
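To make the inference distinction concrete, the sketch below is a purely hypothetical illustration (not Meta’s code or models): inference simply runs fixed, already-learned weights against fresh inputs, which is the kind of high-volume, repetitive workload MTIA targets.

```python
import numpy as np

# Hypothetical "trained" weights for a tiny linear ranking model --
# illustrative only, unrelated to any real Meta model.
weights = np.array([0.4, -1.2, 0.7])
bias = 0.1

def infer(features: np.ndarray) -> np.ndarray:
    """Inference: apply fixed, already-learned parameters to new data."""
    logits = features @ weights + bias
    return 1.0 / (1.0 + np.exp(-logits))  # sigmoid -> probability score

# New feature vectors arriving at serving time.
fresh_data = np.array([[1.0, 0.5, 2.0],
                       [0.2, 1.5, 0.3]])
print(infer(fresh_data))
```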

The MTIA chip runs at 800 MHz and is fabricated by TSMC on a 7nm process. It is built around the open-source RISC-V instruction set architecture (ISA), an alternative to the x86 and ARM architectures. The chip is designed to handle large volumes of parallel operations, often using lower-precision arithmetic, which is usually sufficient for AI workloads and allows more computation per watt of power.
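As a rough illustration of why lower-precision arithmetic is usually adequate for inference, the hypothetical NumPy sketch below quantizes float32 tensors to int8, performs the multiply-accumulate in integers, and dequantizes the result; the answer stays close to the full-precision one while each value uses a quarter of the memory (and, on real accelerators, far less energy per operation).

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 64)).astype(np.float32)   # activations
w = rng.standard_normal((64, 16)).astype(np.float32)  # weights

def quantize(t: np.ndarray):
    """Symmetric int8 quantization: scale by the tensor's max magnitude."""
    scale = np.abs(t).max() / 127.0
    return np.round(t / scale).astype(np.int8), scale

xq, xs = quantize(x)
wq, ws = quantize(w)

# Integer multiply-accumulate (int32 accumulator), then dequantize.
yq = xq.astype(np.int32) @ wq.astype(np.int32)
y_int8 = yq.astype(np.float32) * (xs * ws)

y_fp32 = x @ w
print("max abs error:", np.abs(y_int8 - y_fp32).max())
print("max rel error:", np.abs(y_int8 - y_fp32).max() / np.abs(y_fp32).max())
```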

Meta’s custom silicon program reflects the industry-wide shift from general-purpose processors to domain-specific silicon built for particular applications. The company has been building custom hardware around GPUs since 2016, but in 2020 it began developing its own chips after concluding that GPUs were not always the most efficient option for Meta’s recommendation workloads.

In addition to the MTIA, Meta has unveiled the Meta Scalable Video Processor (MSVP), an ASIC designed to accelerate the processing of live-streaming and video-on-demand (VOD) content. Encoding and delivering video at the scale of a platform like Facebook, which serves billions of video views every day, is a major engineering challenge, and the MSVP is built to take on that work.
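For a sense of the workload MSVP offloads, the sketch below uses ffmpeg on a CPU as a stand-in, producing a small adaptive-bitrate ladder from one source file; an ASIC like MSVP performs this kind of transcoding in dedicated hardware at far higher throughput per watt. The file names and bitrate ladder here are made up for illustration.

```python
import subprocess

# Hypothetical ABR ladder: (output height, video bitrate) pairs.
LADDER = [(1080, "5000k"), (720, "3000k"), (480, "1200k")]

def transcode(src: str) -> None:
    """Encode one source into multiple renditions, as a VOD pipeline would."""
    for height, bitrate in LADDER:
        out = f"{src.rsplit('.', 1)[0]}_{height}p.mp4"
        subprocess.run(
            ["ffmpeg", "-y", "-i", src,
             "-vf", f"scale=-2:{height}",       # resize, keep aspect ratio
             "-c:v", "libx264", "-b:v", bitrate,
             "-c:a", "aac", out],
            check=True,
        )

transcode("input.mp4")  # placeholder file name
```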

By switching to its own chips, Meta stands to save a significant amount of money each year through lower energy costs and reduced purchases of third-party silicon. Deploying the Artemis inference processors should also optimize datacenter power usage while freeing up Nvidia’s in-demand H100 GPUs for AI training.
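The savings argument is simple arithmetic; the toy calculation below uses entirely hypothetical figures (chip counts, wattages, and prices are placeholders, not reported numbers) just to show how lower power draw and avoided hardware purchases combine.

```python
# All figures below are hypothetical placeholders, not Meta's actual numbers.
num_accelerators = 10_000
watts_saved_per_chip = 150          # custom chip vs. off-the-shelf GPU
hours_per_year = 24 * 365
usd_per_kwh = 0.08

energy_savings = (num_accelerators * watts_saved_per_chip / 1000
                  * hours_per_year * usd_per_kwh)

gpus_not_purchased = 2_000
usd_per_gpu = 25_000
capex_savings = gpus_not_purchased * usd_per_gpu

print(f"energy savings per year: ${energy_savings:,.0f}")
print(f"avoided hardware spend:  ${capex_savings:,.0f}")
```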

The company’s ambitions extend beyond inference acceleration and Artemis. According to reports, Meta is working on a more advanced chip capable of handling AI training workloads, comparable to Nvidia’s H100 GPUs. The project is part of Meta’s broader push to develop silicon in-house and reduce its reliance on Nvidia, although the company does not intend to remove Nvidia GPUs from its datacenters entirely.

The MTIA and MSVP chips are part of an ambitious strategy to build out the next generation of Meta’s AI infrastructure. These initiatives underpin the company’s ability to train larger, more complex AI models and deploy them efficiently at scale, capabilities on which its long-term vision of the metaverse and its AI-driven applications depends.