High-throughput chips for LLMs
We dedicate every transistor to maximizing performance for large models. Other products put large models and small models on equal footing; MatX makes no such compromises. For the world's largest models, we deliver 10× more computing power, enabling AI labs to make models an order of magnitude smarter and more useful.
Our product
- We support training and inference.
- We optimize for performance-per-dollar first, and for latency second.
- We'll offer the best performance-per-dollar by far.
- We'll provide competitive latency, e.g. <10ms/token for 70B-class models.
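As a rough illustration of what that latency target means for a single user, here is a back-of-envelope conversion, assuming tokens are generated one at a time (the 10ms figure is the illustrative target above, not a measured number):

```python
# Back-of-envelope: per-user throughput implied by a decode latency target,
# assuming sequential token-by-token generation.
ms_per_token = 10                       # illustrative target from above
tokens_per_second = 1000 / ms_per_token
print(f"{tokens_per_second:.0f} tokens/s per user")  # -> 100 tokens/s
```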
Target workloads
- Transformer-based models with at least 7B (ideally 20B+) activated parameters, including both dense and MoE models.
- Thanks to an excellent interconnect, we can scale up to the largest (e.g. 10T-class) models.
- For inference: peak performance requires at least thousands of simultaneous users, and scales up to many millions.
- For training: peak performance requires at least 10²² total training FLOPs (7B-class), and scales well to very large runs, e.g. 10²⁹ total training FLOPs (10T-class); see the back-of-envelope sketch after this list.
- We offer excellent scale-out performance, supporting clusters with hundreds of thousands of chips.
- We give you low-level control over the hardware, because we know expert users want it.
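To make the training-FLOP figures above concrete, here is a minimal sketch using the common ≈6 · N · D approximation for transformer training compute (N = activated parameters, D = training tokens). The token counts are illustrative assumptions chosen to hit the quoted totals, not MatX figures:

```python
# Back-of-envelope training compute using the ~6 * N * D rule of thumb
# (N = activated parameters, D = training tokens). Token counts below are
# illustrative assumptions, not MatX numbers.
def train_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

# A 7B-parameter model trained on ~240B tokens lands near 1e22 FLOPs.
print(f"7B-class:  {train_flops(7e9, 2.4e11):.1e} FLOPs")   # ~1.0e22

# A 10T-parameter model needs on the order of 1.7e15 tokens
# to reach ~1e29 FLOPs under the same approximation.
print(f"10T-class: {train_flops(10e12, 1.7e15):.1e} FLOPs")  # ~1.0e29
```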
What this enables
- The world's best models will be available 3–5 years sooner.
- Individual researchers can train 7B-class models from scratch every day, and 70B-class models multiple times per month (see the arithmetic after this list).
- Any seed-stage startup can afford to train a GPT-4-class model from scratch and serve it at ChatGPT levels of traffic.
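As a rough sanity check on that cadence, assuming a 7B-class run costs about 10²² FLOPs (as in the Target workloads section above), training one such model per day implies the following sustained compute:

```python
# Sustained compute implied by "train a 7B-class model every day",
# assuming ~1e22 FLOPs per run (see Target workloads above).
flops_per_run = 1e22
seconds_per_day = 24 * 60 * 60
sustained_flops = flops_per_run / seconds_per_day
print(f"~{sustained_flops:.1e} FLOP/s")  # ~1.2e17 FLOP/s, i.e. ~116 PFLOP/s sustained
```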
Investors
Spark Capital, Daniel Gross and Nat Friedman, Jane Street, Triatomic Capital, and many others.