Strategic Round II Complete

CHAI AI Cluster

1.4 EXAFLOP CLUSTER

CHAI  |  PALO ALTO

[ CHAI Revenue Growth ]

Incentives & Scale


We have built a consumer platform where users can build their own AI characters and stories. Our B2C business has grown exponentially, reaching $70M/yr in revenue in 2026.

[Graph: CHAI revenue growth, Oct 2022 – Feb 2026]
NOV 2022

CHAI Launches on App Store

We were the first to launch a consumer AI platform, allowing users to create their own ChatAIs—ahead of Character AI and ChatGPT.

FEB 2023

Deploys First In-House 6B LLM

Open-source LLMs no longer satisfied our users' requirements: they needed to be adapted for social and engagement use cases. Our own in-house model delivered a +10% engagement boost.

MAR 2023

Deploys Best-of-4 Reward Model

We continued to iterate on RLHF (Reinforcement Learning from Human Feedback), training a reward model directly on user signals. This led to a huge boost in our day-30 user retention.
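Best-of-N sampling of this kind can be sketched in a few lines: draw several candidate replies and let the reward model pick the winner. The sketch below is illustrative only; `toy_generate` and `toy_reward` are stand-ins for the real LLM and reward model, not CHAI's actual implementation.

```python
import random

def best_of_n(prompt, generate, reward_model, n=4):
    """Best-of-N rejection sampling: draw n candidate replies and keep
    the one the reward model scores highest."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=reward_model)

# Toy stand-ins for the real LLM and reward model (illustration only).
rng = random.Random(0)

def toy_generate(prompt):
    return prompt + " " + rng.choice(["ok", "tell me more", "hmm", "what happened next?"])

def toy_reward(reply):
    return len(reply)  # pretend the reward model prefers longer replies

print(best_of_n("Hi!", toy_generate, toy_reward, n=4))
```

The cost is n forward passes per reply, which is why the move from Best-of-4 to Best-of-8 later in the timeline required more cluster capacity.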

APR 2023

Larger Model Upgrade - 13B Architecture

We found that a bigger model gives greater conversational depth and therefore better retention. We re-trained our LLM from scratch and saw another +10% engagement boost.

MAY 2023

PPO Model Deployed

Using Proximal Policy Optimization, a reinforcement learning technique, we optimized our base foundation model to decrease the probability a chat session ends.
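The core of PPO is its clipped surrogate objective, which keeps each policy update close to the previous policy. A minimal per-sample sketch of that standard objective (not CHAI's training code) looks like this:

```python
import math

def ppo_clip_loss(logp_new, logp_old, advantage, eps=0.2):
    """PPO clipped surrogate for one action/token: the probability ratio
    between new and old policy is clipped to [1-eps, 1+eps] so a single
    update cannot move the policy too far."""
    ratio = math.exp(logp_new - logp_old)
    unclipped = ratio * advantage
    clipped = max(min(ratio, 1.0 + eps), 1.0 - eps) * advantage
    # Take the pessimistic (smaller) objective, negated for minimisation.
    return -min(unclipped, clipped)

# When the new policy doubles an action's probability, the gain is
# capped at 1+eps times the advantage.
print(ppo_clip_loss(math.log(2), 0.0, 1.0))  # -1.2
```

In the RLHF setting described here, "advantage" would come from a reward signal such as the probability the chat session continues.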

JUNE 2023

Deploys Reward Model XL

We continued to scale up our reward model, training it on 100 million user signals to decrease the retry rate and increase chat-session length.

OCT 2023

Efficient Inference & Custom GPU Orchestration

Off-the-shelf load balancing and vLLM were no longer sufficient to support our user base at 500K DAU scale. We implemented custom CUDA kernels together with our own GPU orchestration system.

NOV 2023

Increased GPU Reservation

We hit a scaling issue due to high demand from our users. We reserved an additional 1,000 A100 GPUs from our provider to scale reliably.

NOV 2023

Deployed Model Blending

CHAI invented model blending—ensembling different LLMs trained on different targets at the conversation level. This outperformed GPT-3 in user retention.
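One simple way to realize conversation-level blending is to assign each conversation a single model from the blend, so the chat stays consistent while aggregate traffic is an ensemble. The sketch below is an assumption about how such routing could work; the model names and seeding scheme are hypothetical.

```python
import random

# Hypothetical blend: each entry stands in for an LLM trained on a
# different optimization target.
BLEND = ["model-engagement", "model-retention", "model-safety"]

def pick_model_for_conversation(conversation_id, blend=BLEND):
    """Conversation-level blending: deterministically map a conversation
    to one model in the blend, so every reply in that chat comes from the
    same model while traffic overall is spread across the ensemble."""
    rng = random.Random(conversation_id)  # seeded per conversation
    return rng.choice(blend)

print(pick_model_for_conversation(42))
```

Seeding on the conversation ID makes the assignment sticky without any shared routing state.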

DEC 2023

BO8 Reward Model Deployed

With increased cluster capacity, we implemented Best-of-8 rejection sampling, utilizing our upgraded reward model to its full extent.

MAR 2024

DPO Model Deployed

Utilizing Direct Preference Optimization with user preference datasets, we boosted engagement by 20%. The performance stacked well with our existing reward model.
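DPO trains directly on preference pairs, with no separate reward model in the loop. A minimal per-pair version of the standard DPO loss (illustrative, not CHAI's code) is:

```python
import math

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO loss for one (chosen, rejected) pair:
    -log sigmoid(beta * (policy log-ratio margin - reference log-ratio margin)).
    Minimising it pushes the policy to prefer the chosen reply more than
    the frozen reference model does."""
    margin = (logp_chosen - logp_rejected) - (ref_logp_chosen - ref_logp_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# With no margin over the reference model, the loss starts at log(2).
print(dpo_loss(0.0, 0.0, 0.0, 0.0))  # ~0.6931
```

Here the user preference dataset supplies the (chosen, rejected) pairs, e.g. from retry or regeneration signals.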

AUG 2024

Upgraded All Existing Blends to DPO

Building on the success of DPO, we iterated on optimization targets and data selection, and successfully deployed DPO across all production blends.

SEP 2024

13B Reward Model Deployed

With increased GPU capacity due to cluster upgrades, we were able to serve larger reward models for all users.

OCT 2024

10x 24B Models Deployed

We upgraded our existing production blend to 24B models. With blending enabled, we saw a surge in daily active users and day 30 retention.

JAN 2025

Model Mesh Orchestrator Deployed

To support over 1M Daily Active Users, Model Mesh—an in-house cluster orchestration platform—was deployed to handle multi-cluster, multi-GPU-type serving of hundreds of LLMs in production.

MAR 2025

GRPO Deployed

GRPO (Group Relative Policy Optimization) is an upgrade from Direct Preference Optimization, resulting in a +15% engagement improvement.
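GRPO's distinguishing idea is that advantages are computed relative to a group of samples for the same prompt, standardised by the group's own mean and standard deviation, which removes the need for a learned value function. A minimal sketch of that advantage computation:

```python
def grpo_advantages(rewards):
    """Group-relative advantages: standardise each sample's reward against
    its own group (all completions for the same prompt)."""
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = var ** 0.5
    return [(r - mean) / (std + 1e-8) for r in rewards]

# Rewards for 3 completions of one prompt -> zero-mean, unit-scale advantages.
print(grpo_advantages([1.0, 2.0, 3.0]))
```

These advantages then plug into a PPO-style clipped policy update.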

SEP 2025

AMD Instinct™ MI Series Deployed

All AMD MI325X and MI300X nodes are online and added to Model Mesh, serving over 50% of our production traffic.

NOV 2025

Mixture-of-Experts Training Begins

We began incorporating large Mixture-of-Experts models into our RLHF research, with the goal of increasing user revenue and screentime through our expanded compute capacity.

DEC 2025

2x 235B A22B Deployed

We upgraded our production blend to incorporate 235B-A22B Mixture-of-Experts models. Platform screentime and revenue both increased by 25%.

[ GPU Cluster ]

1.4 EXAFLOPS GPU CLUSTER
FOR AI INFERENCE

At CHAI, we serve hundreds of in-house trained LLMs across several GPU chip types from both AMD and Nvidia. While open-source solutions such as vLLM work well for simple workloads, we've found that we can further improve upon vLLM by almost an order of magnitude through several optimizations, such as custom kernels and compute-efficient attention approximations.

NUMBER OF GPUS
5000 GPUs
NUMBER OF TOKENS SERVED
1.2T Tokens / Day
NUMBER OF UNIQUE LLMS SERVED
51K LLMs
CLUSTER COMPUTE PERFORMANCE
>1.4 Exaflops
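The published stats imply a back-of-the-envelope per-GPU throughput; the calculation below uses only the figures above and assumes load is spread evenly across GPU types, which the page does not state:

```python
# Cluster figures from the stats above.
GPUS = 5_000
TOKENS_PER_DAY = 1.2e12
SECONDS_PER_DAY = 86_400

tokens_per_gpu_per_day = TOKENS_PER_DAY / GPUS
tokens_per_gpu_per_sec = tokens_per_gpu_per_day / SECONDS_PER_DAY

print(f"{tokens_per_gpu_per_day:.2e} tokens/GPU/day")  # 2.40e+08
print(f"{tokens_per_gpu_per_sec:,.0f} tokens/GPU/s")   # 2,778
```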
NVIDIA A100
NVIDIA L40S
AMD MI325X
AMD MI300X

CHAI has seen demand for its AI models grow exponentially, exceeding the capacity of off-the-shelf providers. Out of necessity, we have verticalized and brought inference in-house. Starting small with a cluster of A5000s rented on-demand from CoreWeave in 2023, we've grown to thousands of GPUs spread across 4 regions. Multi-region inference poses real challenges and has pushed CHAI to the cutting edge of the technology.

[Chart: tokens processed per day, growing to 1.2T]

[ Product ]

Building a Platform for Social AI


We believe in platforms. There is huge demand for AI that is not only factually correct but also entertaining and social.
