MPT-30B: Raising the bar for open-source foundation models

Description

Introducing MPT-30B, a new, more powerful member of our Foundation Series of open-source models, trained with an 8k context length on NVIDIA H100 Tensor Core GPUs.
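
A minimal usage sketch, assuming access to the mosaicml/mpt-30b checkpoint on the Hugging Face Hub and the transformers library: the prompt, precision, and generation settings below are illustrative choices, not part of the announcement.

```python
# Hedged sketch: load MPT-30B from the Hugging Face Hub and generate text.
# Assumes transformers + accelerate are installed and enough GPU/CPU memory
# is available for ~60 GB of bf16 weights; "mosaicml/mpt-30b" is the
# published base checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mosaicml/mpt-30b"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,   # reduced precision to fit the 30B weights
    trust_remote_code=True,       # MPT ships custom modeling code
    device_map="auto",            # spread layers across available devices
)

# The model was trained with an 8k (8192-token) context window, so prompts
# up to that length are supported out of the box.
prompt = "MPT-30B is an open-source foundation model that"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```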

Related coverage

MosaicML Just Released Their MPT-30B Under Apache 2.0 - MarkTechPost

Democratizing AI: MosaicML's Impact on the Open-Source LLM Movement, by Cameron R. Wolfe, Ph.D.

MPT-30B: MosaicML Outshines GPT-3 With A New LLM To Push The Boundaries of NLP

Benchmarking and Defending Against Indirect Prompt Injection Attacks on Large Language Models

MosaicML, now part of Databricks! on X: "MPT-30B is a bigger sibling of MPT-7B, which we released a few weeks ago. The model arch is the same, the data mix is a …"

Timeline of Transformer Models / Large Language Models (AI / ML / LLM)

Is Mosaic's MPT-30B Ready For Our Commercial Use? by Yeyu Huang

MosaicML Releases Open-Source MPT-30B LLMs, Trained on H100s to Power Generative AI Applications

MosaicML's latest models outperform GPT-3 with just 30B parameters

How to Use MosaicML MPT Large Language Model on Vultr Cloud GPU