Meta Breaks Ground by Unveiling Custom AI Chip

Meta Platforms Unveils New Chip for AI Processing: MTIA

Meta, the parent company of Facebook, Instagram, and WhatsApp, has introduced its first custom-designed computer chip, the Meta Training and Inference Accelerator (MTIA). According to Meta, the chip is version one of a family of chips and is tuned for deep learning recommendation models, the programs behind the company’s recommendation engines. It runs software that optimizes programs built with Meta’s PyTorch open-source developer framework, the fruit of close collaboration with the company’s PyTorch developers. The chip consists of a mesh of blocks of circuits that operate in parallel.

Microsoft, Google, and Amazon are among the tech giants that have developed their own chips for AI in addition to using the standard GPU chips from Nvidia. Meta’s announcement was part of a broad presentation in which several Meta executives discussed how they are beefing up Meta’s computing capabilities for artificial intelligence. The company discussed a “next-gen data center” it is building that “will be an AI-optimized design, supporting liquid-cooled AI hardware and a high-performance AI network connecting thousands of AI chips for data center-scale AI training clusters.”

Meta also disclosed a custom chip for encoding video, called the Meta Scalable Video Processor (MSVP), designed to more efficiently compress and decompress video and encode it into multiple different formats for uploading and viewing by Facebook users. The MSVP chip offers a peak transcoding performance of 4K at 15fps at the highest quality configuration with 1-in, 5-out streams and can scale up to 4K at 60fps at the standard quality configuration. Meta believes that dedicated hardware is the best solution in terms of compute power and efficiency for video, as people spend half their time on Facebook watching video, with over four billion video views per day.
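Those transcoding figures translate into raw pixel throughput with some back-of-envelope arithmetic. The sketch below assumes “4K” means 3840×2160 and that the standard-quality figure also covers five output streams; the article does not spell out either detail, so treat both as assumptions.

```python
# Rough pixel-throughput arithmetic for the MSVP figures quoted above.
# Assumption: "4K" means 3840x2160; the five-output stream count is
# assumed to apply at both quality settings.
W, H = 3840, 2160

peak_quality = 5 * W * H * 15   # 1-in, 5-out streams at 4K / 15 fps
standard     = 5 * W * H * 60   # scaled up to 4K / 60 fps

print(f"{peak_quality / 1e9:.2f} Gpix/s at highest quality")
print(f"{standard / 1e9:.2f} Gpix/s at standard quality")
```

On these assumptions, the standard-quality setting moves roughly four times as many pixels per second as the highest-quality setting, since only the frame rate changes.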

Meta has hinted at its chip efforts for years; its chief AI scientist, Yann LeCun, discussed the matter in a 2019 interview with ZDNET. The company kept silent about the details even as its peers rolled out chip after chip, and as startups such as Cerebras Systems, Graphcore, and SambaNova Systems arose to challenge Nvidia with exotic chips focused on AI. The MTIA shares traits with those startups’ designs: at the heart of the chip, a mesh of sixty-four processor elements arranged in an eight-by-eight grid echoes the “systolic array” approach common in AI chips, in which data moves through the elements at peak speed.
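The systolic-array idea can be sketched in a few lines: each processing element holds one running partial sum while operand values stream through the grid in lockstep. The toy simulation below is plain Python, not Meta’s design; it merely shows how an output-stationary n×n grid of elements computes a matrix product, with the grid size matching the matrix size.

```python
def systolic_matmul(A, B):
    """Simulate an output-stationary systolic array computing C = A @ B.

    PE (i, j) owns accumulator C[i][j]. With input skewing, the operand
    pair (A[i][k], B[k][j]) reaches PE (i, j) at time step t = i + j + k,
    so the full product takes 3n - 2 steps.
    """
    n = len(A)
    C = [[0] * n for _ in range(n)]   # one accumulator per processing element
    for t in range(3 * n - 2):        # global clock ticks
        for i in range(n):
            for j in range(n):
                k = t - i - j         # index of the operand pair arriving now
                if 0 <= k < n:
                    C[i][j] += A[i][k] * B[k][j]
    return C

# 8x8 example, mirroring the grid size described for MTIA:
# multiplying by the identity returns the input unchanged.
I8 = [[1 if i == j else 0 for j in range(8)] for i in range(8)]
M = [[i + j for j in range(8)] for i in range(8)]
assert systolic_matmul(I8, M) == M
```

The key property the simulation captures is that every element does one multiply-accumulate per tick on data handed to it by its neighbors, so no element ever waits on a shared memory fetch.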

The MTIA chip is somewhat unusual in being built to handle both main phases of artificial intelligence programs: training, the stage when the neural network is refined until it performs as expected, and inference, the actual use of the network to make predictions in response to user requests. The two stages usually have very different processing requirements and are handled by distinct chip designs. According to Meta, the MTIA can be up to three times more efficient than GPUs in floating-point operations per second per watt of energy expended. However, when tasked with more complex neural networks, the chip lags GPUs, Meta said, indicating more work is needed on future versions to handle complex tasks.

Meta’s engineers emphasized how MTIA benefits from hardware-software “co-design,” where the hardware engineers exchange ideas in a constant dialogue with the company’s PyTorch developers. In addition to writing code to run on the chip in PyTorch or C++, developers can write in a dedicated language developed for the chip called KNYFE. The KNYFE language “takes a short, high-level description of an ML operator as input and generates optimized, low-level C++ kernel code that is the implementation of this operator for MTIA,” Meta said. The company integrated multiple MTIA chips into server computers based on the Open Compute Project that Meta helped pioneer.
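The pattern Meta describes for KNYFE, turning a short, high-level operator description into low-level C++ kernel code, can be illustrated with a toy generator. The code below is hypothetical and unrelated to the real KNYFE syntax, which the article does not show; it only demonstrates the general compile-from-description idea for a simple elementwise operator.

```python
# Toy illustration of the KNYFE-style pattern: a one-line, high-level
# operator spec is expanded into C++ kernel source. This generator and
# its template are invented for illustration; real KNYFE syntax differs.
CPP_TEMPLATE = """\
void {name}_kernel(const float* a, const float* b, float* out, int n) {{
    for (int i = 0; i < n; ++i) {{
        out[i] = a[i] {op} b[i];
    }}
}}
"""

def generate_kernel(name: str, op: str) -> str:
    """Emit a C++ elementwise kernel from an operator name and symbol."""
    return CPP_TEMPLATE.format(name=name, op=op)

print(generate_kernel("add", "+"))
```

The appeal of this approach for co-design is that the high-level description stays stable while the generated low-level code can be retuned as the hardware evolves.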

Meta’s engineers will present a paper on the chip, titled “MTIA: First Generation Silicon Targeting Meta’s Recommendation System,” at the International Symposium on Computer Architecture conference in Orlando, Florida, in June. Meta is committed to advancing artificial intelligence and developing hardware that can optimize AI processing, and the MTIA is only the first of a family of chips the company plans to release. With the growing need for powerful AI processing capabilities, custom-designed chips optimized for AI will continue to be a major area of focus for tech companies.
