Comparison

Inside the model, or outside?

Thinking Machines bets interactivity belongs inside the model. We sell HUMA — the behavioral infrastructure companies wrap around any LLM to make their AI products feel humanlike.

Not a feature comparison. An honest read on where each bet leads.

What we both see

Today's AI doesn't feel quite right in real human interaction.

  • Models wait for you to finish.
  • They don't read the room.
  • They don't remember who you are.

TML calls this the collaboration bottleneck. We call it the social gap. Different language, same observation: something is missing between the model and the human.

Where we bet differently

Two opposite architectural bets.

Thinking Machines

Interactivity belongs inside the model.

  • External orchestration is “meaningfully less intelligent than the model itself.”
  • Train interaction as a native model capability.
  • Continuous and bidirectional: audio + video + text at a 200 ms cadence.

Humalike

The social layer belongs outside the model.

  • Bring any LLM. HUMA wraps it.
  • Four primitives: turn-taking, social norms, emotional cues, relational memory.
  • Composable infrastructure that outlives any individual model.
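The "wrap any LLM" claim can be pictured as middleware composition. A minimal sketch, assuming nothing about HUMA's real API: every name and prompt convention below (`with_relational_memory`, `with_social_norms`, the bracket tags) is a hypothetical illustration of composability, not the product interface. Two of the four primitives are shown; turn-taking and emotional cues would compose the same way.

```python
# Hypothetical sketch of the "composable infrastructure" idea.
# HUMA's actual API is not shown in this document; every name below
# is an illustration, not the product interface.
from typing import Callable

LLM = Callable[[str], str]  # any text-in, text-out model

def with_relational_memory(llm: LLM, memory: dict[str, str]) -> LLM:
    """Prepend remembered facts about the user to every prompt."""
    def wrapped(prompt: str) -> str:
        facts = "; ".join(f"{k}={v}" for k, v in memory.items())
        return llm(f"[memory: {facts}] {prompt}" if facts else prompt)
    return wrapped

def with_social_norms(llm: LLM, norms: list[str]) -> LLM:
    """Tag the prompt with deployment-specific norms."""
    def wrapped(prompt: str) -> str:
        return llm(f"[norms: {', '.join(norms)}] {prompt}")
    return wrapped

# "Bring any LLM": a stand-in model, swappable for any provider's call.
def any_llm(prompt: str) -> str:
    return f"<reply to: {prompt}>"

# Compose the layer around the model, outside it.
social = with_social_norms(
    with_relational_memory(any_llm, {"name": "Ada"}),
    ["classroom register"],
)
print(social("hello"))
# → <reply to: [memory: name=Ada] [norms: classroom register] hello>
```

The design point the sketch makes: each primitive is an independent wrapper, so swapping the underlying model (any provider, any future TML model) changes one line, not the layer.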

If TML is right, the social layer collapses into a training choice. If we're right, it outlives any specific model.

Want to wrap your model in the social layer?

HUMA is shipping in production today across robots, ambient devices, gaming, classrooms, and community managers.

What each of us is optimizing for

A good collaborator. An agent that fits.

TML

Bandwidth. Live translation. Coding alongside you. Continuous video. The job is efficient human-AI collaboration on tasks.

Frame borrowed from theories of knowledge sharing (Hayek).

Humalike

Behavior. Robots reading the room. AI companions with relationships. Gaming teammates. Classroom agents. The job is AI that fits into rooms full of people.

Frame borrowed from theories of social communication (Clark & Brennan, multi-party dynamics).

Stage of the work

Different timeframes.

TML

Research preview. TML-Interaction-Small, a 276B-parameter MoE with 12B active. Limited access “in the coming months.”

Humalike

HUMA is in production today. We use it in two of our own products — Jared and NeonAgent — as live proof the infrastructure scales. What we sell is the layer.

Where this might end up

Probably both bets pay off — for different things.

Some interaction patterns get absorbed into models. Others remain better solved as composable infrastructure. Not a winner-takes-all question. A useful disagreement to make explicit.

FAQ

Questions on the comparison itself.

Is Humalike competing with Thinking Machines?

Not in the usual sense. Same gap, different sides. TML is a research lab arguing interactivity should be part of the foundation model. Humalike sells HUMA — the behavioral infrastructure companies wrap around their AI products to make them social and humanlike. Different bets, different timeframes, different shapes of company.

Could I use HUMA with a Thinking Machines model?

Yes. HUMA is model-agnostic. If TML ships a model with interaction baked in, you can still wrap it in HUMA for multi-party group dynamics, persistent relational memory across sessions, and norms tuned to your deployment.

Does Humalike think the social layer should eventually live inside the model?

Honestly: we don't know. We think social behavior is a different kind of problem from language modeling: composable, deployment-specific, and meant to outlive any individual model. The bitter lesson might prove us wrong. We're building for the world where it doesn't.

Are the two visions compatible?

Mostly. The technical bets differ; the underlying observation is shared. The future probably contains both — some interaction patterns absorbed into models, others remaining better solved as separate infrastructure layers.

Useful reads

If you want to go deeper.