Many in the industry consider the AI model market already decided: Big Tech will own it (Google, Meta, Microsoft, and to a lesser extent Amazon), alongside their model makers of choice, chiefly OpenAI and Anthropic.
But tiny 30-person startup Arcee AI disagrees. The company recently released a truly and permanently open (Apache-licensed), general-purpose foundation model called Trinity, and Arcee claims that at 400B parameters, it is one of the largest open source foundation models ever trained and released by a US company.
Arcee said Trinity was compared against Meta’s Llama 4 Maverick 400B and Z.ai’s GLM-4.5, a high-performing open-source model with roots at Tsinghua University in China, in benchmark tests conducted on base models (with very little post-training).

Like other state-of-the-art (SOTA) models, Trinity is intended for coding and multi-step processes such as agents. However, despite its size, it is not yet a true competitor to SOTA models because it supports only text.
More modalities are in the works: a vision model is currently in development, and a speech-to-text version is on the roadmap, CTO Lucas Atkins (pictured above, left) told TechCrunch. In comparison, Meta’s Llama 4 Maverick is multimodal, supporting both text and images.
But before adding more modalities to its roster, Arcee said, it wanted a base LLM that would appeal to its main target customers: developers and academics. The team specifically wants to dissuade US companies of all sizes from choosing open models from China.
“Ultimately, the winners in this game, and the only way to really win in usage, is to have the best open weight model,” Atkins said. “To win the hearts and minds of developers, you have to give them the best.”
Benchmarks show that the Trinity base model, which is still being refined as more post-training takes place, largely holds its own and in some cases slightly outperforms Llama in tests of coding, math, common sense, knowledge, and reasoning.
The progress Arcee has made so far toward becoming a competitive AI lab is impressive. The large Trinity model follows two smaller models released in December: the 26B-parameter Trinity Mini, a fully post-trained reasoning model for tasks from web apps to agents, and the 6B-parameter Trinity Nano, an experimental model designed to push the boundaries of small but chatty models.
The kicker is that Arcee trained them all over six months for a total of $20 million, using 2,048 Nvidia Blackwell B300 GPUs. That budget came out of the roughly $50 million the company has raised to date, says founder and CEO Mark McQuade (pictured above, right).
That kind of money is “a lot for us,” said Atkins, who led the effort to build the model. However, he acknowledged that it pales in comparison to how much large labs are spending today.
The six-month timeline was “very calculated,” says Atkins, whose career before LLMs involved building voice agents for cars. “We are a younger startup that is very hungry. We have a lot of talented and bright young researchers who, when given the opportunity to spend this amount of money and train a model of this size, we trust will rise to the occasion.”
McQuade, formerly an early employee of open-source model marketplace Hugging Face, says Arcee didn’t start out wanting to be the new US AI lab: The company originally did model customization for big business clients like SK Telecom.
“We’re just doing post-training. So we’re going to take the good work of others: We’re going to take a Llama model, we’re going to take a Mistral model, we’re going to take an open-source Qwen model, and we’re going to post-train it to make it better” for a company’s intended use, he said, including through reinforcement learning.
But as the client list grew, Atkins said, having their own model became a necessity, and McQuade worried about relying on other companies. At the same time, many of the best open models come from China, which US businesses either fear or are prohibited from using.
It was a nerve-wracking decision. “I think there are less than 20 companies in the world that have pre-trained and released their own model” of the size and caliber Arcee was aiming for, McQuade said.
The company started small, trying its hand at a 4.5B-parameter model created with data curation company DatologyAI. The project’s success inspired larger efforts.
But if the US already has Llama, why does it need another open-weight model? Atkins says that by choosing the open source Apache license, the startup is committing to keeping its models open permanently. That stance follows Meta CEO Mark Zuckerberg signaling last year that his company would not always make its most advanced models open source.
“Llama can be viewed as not truly open source because it uses a Meta-controlled license with commercial and usage caveats,” he said. This is the reason some open source advocacy organizations argue that Llama is not open source compliant at all.
“Arcee exists because the US needs a permanently open, Apache-licensed, frontier-grade alternative that can actually compete with the current frontier,” McQuade said.
All Trinity models, large and small, can be downloaded for free. The largest version will be released in three flavors. Trinity Large Preview is a lightly post-trained instruct model, meaning it’s trained to follow human instructions rather than just predict the next word, making it suitable for general chat use. Trinity Large Base is the base model without post-training.
Then there’s TrueBase, a version without any instruction-style training data or post-training at all, so businesses or researchers who want to customize it don’t inherit any of Arcee’s data, rules, or assumptions.
Arcee AI says it will eventually offer a hosted version of its general-release model via its competitively priced API. That release is up to six weeks away as the startup continues to improve the model’s reasoning training.
The API price for Trinity Mini is $0.045 per million input tokens and $0.15 per million output tokens, and a rate-limited free tier is also available. Meanwhile, the company still sells post-training and customization services.