MatX will be at MLSys. Come join us at our After Hours in Santa Clara to talk about chips, compilers, partitioning, and optimizing ML models for future hardware. Many of us will be there, including Reiner Pope and Mike Gunter. Tuesday, May 14th at 4pm; see matx.com/meetmatx.
MatX
Computer Hardware Manufacturing
Mountain View, CA 1,081 followers
Making AI better, faster, and cheaper with more powerful hardware.
About us
MatX designs hardware tailored for the world’s best AI models: we dedicate every transistor to maximizing performance for large models. For these models, we deliver 10× more computing power, enabling AI labs to make models an order of magnitude smarter and more useful. Our hardware would make it possible to train GPT-4 and run ChatGPT, but on the budget of a small startup. A world with more widely available intelligence is a happier and more prosperous world: picture people of all socioeconomic levels having access to an AI staff of specialist MDs, tutors, coaches, advisors, and assistants.
- Website
-
https://matx.com
- Industry
- Computer Hardware Manufacturing
- Company size
- 11-50 employees
- Headquarters
- Mountain View, CA
- Type
- Privately Held
Locations
-
Primary
Mountain View, CA, US
Employees at MatX
Updates
-
We need to run LLMs as fast as physical limits allow. Doing that requires coordinated changes across the hardware, the software, and the ML algorithms, without getting distracted by other problems we could also choose to solve. It takes a new kind of company. That's why we created MatX.
Introducing MatX: we design hardware tailored for LLMs, to deliver an order of magnitude more computing power so AI labs can make their models an order of magnitude smarter. Our hardware would make it possible to train GPT-4 and run ChatGPT, but on the budget of a small startup. Our founding team has designed chips at Google and Amazon, and we’ve built chips with 1/10 the team size typically needed.

Here’s how we’re approaching the problem of inefficient and insufficient compute. While other chips treat all models equally, we dedicate every transistor to maximizing performance on the world’s largest models. Our goal is to make the world’s best AI models run as efficiently as physics allows, bringing the world years ahead in AI quality and availability. A world with more widely available intelligence is a happier and more prosperous world: picture people of all socioeconomic levels having access to an AI staff of specialist MDs, tutors, coaches, advisors, and assistants.

Our design focuses on cost efficiency for high-volume pre-training and production inference for large models. This means:
1/ We’ll support training and inference, inference first.
2/ We optimize for performance-per-dollar first (we’ll be best by far) and for latency second (we’ll be competitive).
3/ We offer excellent scale-out performance, supporting clusters with hundreds of thousands of chips.
4/ Peak performance is achieved on large Transformer-based models (both dense and MoE), ideally 20B+ parameters, with inference serving thousands of simultaneous users.
5/ We give you low-level access to the hardware.

We believe the best hardware is designed jointly by ML hardware experts and LLM experts. Everyone on the MatX team, from new grad to industry veteran, is exceptional. Our industry veterans have built ML chips, ML compilers, and LLMs at Google, Amazon, or various startups.
Our CEO, Reiner Pope, was Efficiency Lead for Google PaLM, where he designed and implemented the world’s fastest LLM inference software. Our CTO, Mike Gunter, was Chief Architect for one of Google’s ML chips (at the time, Google’s fastest) and was an Architect for Google’s TPUs. Our CDO Silicon, Avinash Mani, has over 25 years of experience building products and world-class engineering teams in silicon and software at Amazon, Innovium, and Broadcom.

We’re backed by $25M of investment from specialist investors and operators who share our vision, including Daniel Gross and Nat Friedman (lead investors and experts in the AI space), Rajiv K. (CEO at Auradine), Amjad Masad (CEO at Replit), Outset Capital, Homebrew, and SV Angel. Additionally, we have investment from leading AI and LLM researchers including Irwan Bello, James Bradbury, Aakanksha Chowdhery, Ph.D., William (Liam) Fedus, and David Ha.

Check out our Bloomberg profile (https://t.co/kyW43Nph2Y). Learn more at https://matx.com and consider joining us to build the best chips for LLMs.
AI Is Putting the Silicon Back in Silicon Valley
bloomberg.com
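As background on why the post above targets inference with thousands of simultaneous users: serving a large batch amortizes the cost of reading model weights from memory across every user in the batch. A rough sketch of that arithmetic (our own illustration with assumed numbers, not MatX's figures; `arithmetic_intensity` is a hypothetical helper):

```python
# Back-of-envelope arithmetic intensity for batched LLM decoding.
# Each weight is read from memory once per decode step but does work
# for every sequence in the batch, so FLOPs per byte of weight traffic
# grows linearly with batch size.

def arithmetic_intensity(batch_size: int, bytes_per_param: int = 2) -> float:
    """FLOPs per byte of weight traffic for one decode step.

    Assumes ~2 FLOPs (multiply + add) per parameter per sequence and
    16-bit weights (2 bytes per parameter) by default.
    """
    flops_per_param = 2 * batch_size
    return flops_per_param / bytes_per_param

# A single user leaves the step memory-bound...
print(arithmetic_intensity(1))      # 1.0 FLOP per byte
# ...while thousands of simultaneous users make it compute-bound.
print(arithmetic_intensity(2048))   # 2048.0 FLOPs per byte
```

Hardware can only hit peak utilization when the workload's intensity exceeds its compute-to-bandwidth ratio, which is one reason large-batch production inference is a natural design target.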
-
We're very happy to announce MatX, a new company developing specialized chips for LLMs. See matx.com and https://lnkd.in/gBY36xwp for details.
Reiner Pope on Twitter
twitter.com
-
MatX reposted this
HUGE congratulations to my incredible partners Kanjun and Josh on the $200M Series B. I'm grateful to be an advisor to Imbue, and also to get to work with them to build Outset Capital together. I'm also proud of the Outset Capital mention in the Forbes announcement today, which highlights our unique value prop. Giving advice is easy. Building is not. Because I am full-time on the fund, while Kanjun and Josh are also full-time AI founders, we give our founders the best of both worlds: high-touch, empathetic, expert support from folks right there in the trenches with you. Through the fund, we've been lucky to back incredible founders like Michelle, Justine, Tasneem, Zach, Reiner, Banks, Max, Nikhil, Alek, David, Niya, Marie, Michael, Frederik, Kevin, and many more. And this is just the beginning. Kudos to Alex and Kenrick on an excellent piece. Link in comments.
-
MatX reposted this
MatX is hiring https://matx.com/ https://lnkd.in/gKSSyK6K
MatX | Faster chips for LLMs
matx.com