Meta just announced that it is developing its first custom silicon chip designed to run AI models.
The Meta Training and Inference Accelerator (MTIA) will be the company’s first in-house custom accelerator chip family, designed specifically for Meta’s internal workloads to deliver better performance, lower latency, and greater efficiency.
As companies race to integrate AI into their workflows, the tech giant is also developing a next-generation AI-optimized data center design and entering the second phase of its Research SuperCluster (RSC), a 16,000-GPU supercomputer dedicated to AI research.
“These efforts — and additional projects still underway — will enable us to develop larger, more sophisticated AI models and then deploy them efficiently at scale,” the company shared in a press release.
“By rethinking how we innovate across our infrastructure, we’re creating a scalable foundation to power emerging opportunities in areas like generative AI and the metaverse,” the company added.
Meta is also deploying CodeCompose, a generative AI coding tool similar to GitHub’s Copilot, to provide code suggestions during internal software development.
Michael Bolin, a software engineer at Meta, said the underlying model is built on top of public research from the company that “we have tuned for our internal use cases and codebases.”
“On the product side, we’re able to integrate CodeCompose into any surface where our developers or data scientists work with code,” he explained.
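Meta has not published CodeCompose’s interface, but the inline code-suggestion flow such tools implement can be sketched roughly as follows. The function name and the canned-pattern lookup below are invented for illustration; a real assistant would send the surrounding file context to a tuned language model.

```python
# Hypothetical sketch of an inline code-suggestion flow, in the spirit of
# tools like CodeCompose or Copilot. The API shown here is invented for
# illustration and is not Meta's actual interface.

def suggest_completion(prefix: str, suffix: str = "") -> str:
    """Stand-in for a model call: return a likely continuation of the
    code between `prefix` and `suffix`."""
    # A real system would query a language model tuned on internal
    # codebases; here we fake it with a lookup of common patterns.
    canned = {
        "for i in range(": "len(items)):",
        "def main(": "):",
    }
    for pattern, completion in canned.items():
        if prefix.endswith(pattern):
            return completion
    return ""

# An editor plugin would call this as the user types:
print(suggest_completion("for i in range("))  # -> len(items)):
```

The key design point, reflected in Bolin’s comment above, is that the suggestion engine is decoupled from any particular editor, so it can be wired into “any surface” where code is written.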
These developments are part of Meta’s long-term plan to build its infrastructure around AI.
“Over the next decade, we’ll see increased specialization and customization in chip design, purpose-built and workload-specific AI infrastructure, new systems and tooling for deployment at scale, and improved efficiency in product and design support,” the company wrote.