ARCARAE LABS · RESEARCH
ARC-RES-001 · ARTIFACT: LIVE
37.7749° N · 122.4194° W · ARC-LABS · SF
STATUS: NOMINAL · v1.0 · 2026
Arcarae · Research Arm

Studying the mind to build & align Cognitive AI.

We study the mechanisms of mind — how minds reflect, remember, form beliefs, make sense of experience, and collaborate with others — and implement them as architectures that give AI the capacity to reason, align, and cooperate effectively.

Cognitive AI
AI Alignment
Research Papers
Approach

We work from first principles, drawing on cognitive science and neuroscience, to study curiosity, metacognition, affective reasoning, theory of mind, generative memory, and self-directed learning — then translate those principles into AI architectures. This work is essential for building AI that genuinely helps us and the world we live in.

Research Areas · Five Directions · 001 – 005
01
Internal State & Memory

Investigating how language models represent, store, and retrieve information over multiple timescales and contexts.
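As a toy illustration of retrieval over multiple timescales (not any specific Arcarae system), the sketch below pairs a small working buffer with a long-term store scored by recency-decayed keyword overlap; all names here are hypothetical:

```python
import math

class TimescaleMemory:
    """Toy two-timescale memory: a short working buffer plus a long-term
    store retrieved by recency-weighted keyword overlap. Illustrative only."""

    def __init__(self, buffer_size=3, decay=0.1):
        self.buffer = []            # short-term: the most recent items
        self.store = []             # long-term: (consolidation time, text)
        self.buffer_size = buffer_size
        self.decay = decay
        self.clock = 0

    def write(self, text):
        self.clock += 1
        self.buffer.append(text)
        if len(self.buffer) > self.buffer_size:
            # items evicted from working memory consolidate into the store
            self.store.append((self.clock, self.buffer.pop(0)))

    def retrieve(self, query, k=1):
        words = set(query.lower().split())
        def score(item):
            t, text = item
            overlap = len(words & set(text.lower().split()))
            # older memories are down-weighted exponentially
            return overlap * math.exp(-self.decay * (self.clock - t))
        ranked = sorted(self.store, key=score, reverse=True)
        return [text for _, text in ranked[:k]]
```

The split between buffer and store stands in for the "multiple timescales" in the description; a real system would replace keyword overlap with learned embeddings.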

02
Metacognition & Reflection

Studying self-awareness, confidence estimation, and reflective reasoning to enable models that understand their own thinking.
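One minimal form of confidence estimation (a sketch, not Arcarae's method) treats agreement across independently sampled answers as a confidence score, in the spirit of self-consistency:

```python
from collections import Counter

def self_consistency_confidence(samples):
    """Estimate confidence in an answer as the agreement rate across
    several independently sampled answers to the same question."""
    counts = Counter(samples)
    answer, votes = counts.most_common(1)[0]
    return answer, votes / len(samples)
```

For example, if four samples yield `"42", "42", "41", "42"`, the majority answer `"42"` gets confidence 0.75.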

03
Belief Formation & Updating

Exploring how models form, revise, and maintain beliefs in the face of uncertainty, evidence, and new information.
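The canonical formal model of revising a belief under new evidence is a Bayesian update; a minimal worked version (illustrative, not a claim about Arcarae's approach):

```python
def update_belief(prior, likelihood_if_true, likelihood_if_false):
    """One Bayesian update: revise P(hypothesis) after observing evidence.

    prior               : P(H) before the evidence
    likelihood_if_true  : P(evidence | H)
    likelihood_if_false : P(evidence | not H)
    """
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator
```

Starting from a 50/50 prior, evidence that is nine times likelier under the hypothesis (`0.9` vs `0.1`) moves the belief to 0.9.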

04
World Modeling & Simulation

Building internal models of the world to support prediction, planning, and counterfactual reasoning.
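The prediction-planning loop in the description can be sketched on a deliberately tiny domain (a 1-D track; all functions here are hypothetical): an internal forward model simulates candidate plans without acting, and planning picks the plan whose predicted outcome lands nearest the goal:

```python
def step(state, action):
    """Tiny deterministic world model: state is a position on a 1-D track."""
    return state + {"left": -1, "right": 1, "stay": 0}[action]

def rollout(state, plan):
    """Simulate a plan in the internal model; returns the predicted states."""
    states = [state]
    for action in plan:
        state = step(state, action)
        states.append(state)
    return states

def best_plan(state, goal, plans):
    """Counterfactual planning: compare imagined outcomes of each plan
    and pick the one whose simulated end state is closest to the goal."""
    return min(plans, key=lambda p: abs(rollout(state, p)[-1] - goal))
```

From position 0 with goal 2, comparing the plans `["right"]`, `["right", "right"]`, and `["left"]` selects the two-step plan without ever acting in the real environment.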

05
Alignment & Values

Advancing methods for aligning models with human values, intent, and societal well-being.

Papers · Index · 001 – 002
01
Paper · Cognitive Architecture
MIRROR: Cognitive inner monologue for persistent reflection and reasoning.
An architecture that gives LLMs a persistent inner monologue: parallel threads across goals, reasoning, and memory, synthesized between turns into a stable self-narrative.
AAAI Spring Symposium · ICLR HCAIR · ICLR MemAgents
+21% avg. gain across seven models
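The paper's architecture isn't reproduced here, but the between-turn loop it describes can be sketched as a toy: parallel reflection threads are updated after each reply and synthesized into a narrative that carries into the next turn. The class, thread names, and `respond_fn` (a stand-in for the LLM call) are all hypothetical:

```python
def synthesize(threads):
    """Merge parallel reflection threads into one self-narrative string."""
    return " ".join(f"[{name}] {note}" for name, note in threads.items() if note)

class InnerMonologueAgent:
    """Toy MIRROR-style loop: between user turns, the agent updates parallel
    threads (goals, reasoning, memory), then re-synthesizes its narrative."""

    def __init__(self, respond_fn):
        self.respond_fn = respond_fn          # stands in for an LLM call
        self.threads = {"goals": "", "reasoning": "", "memory": ""}
        self.narrative = ""

    def turn(self, user_msg):
        # the reply is conditioned on the narrative carried from prior turns
        reply = self.respond_fn(self.narrative, user_msg)
        # between-turn reflection: update each thread, then re-synthesize
        self.threads["memory"] = f"user said: {user_msg!r}"
        self.threads["reasoning"] = f"last reply: {reply!r}"
        self.threads["goals"] = "stay consistent with prior turns"
        self.narrative = synthesize(self.threads)
        return reply
```

In the real architecture each thread would itself be produced by model calls; the point here is only the shape of the loop: respond, reflect in parallel, synthesize, carry forward.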
02
Paper · Multi-Agent Alignment
YOAO: you only align once.
Cooperative behavior trained into a single seed agent propagates to untrained agents through interaction alone — no retraining required.
ICLR LLA Workshop · Lifelong Agents
96% cooperation w/ 5 seeds
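The propagation dynamic can be illustrated with a deliberately simple toy model that is not the paper's training setup: here cooperation is "sticky", so an untrained agent cooperates from the moment it first interacts with a cooperator, and seeding even one agent spreads through round-robin interaction alone. The function name, parameters, and the 100% outcome are artifacts of this toy, not the paper's 96% result:

```python
def simulate_seeding(n_agents=10, n_seeds=1, rounds=3):
    """Toy model of cooperation propagating from trained 'seed' agents
    through interaction alone. Cooperation is sticky: an agent cooperates
    permanently after its first interaction with a cooperator."""
    cooperates = [i < n_seeds for i in range(n_agents)]
    rates = []
    for _ in range(rounds):
        # round-robin: every pair of agents interacts once per round
        for i in range(n_agents):
            for j in range(i + 1, n_agents):
                if cooperates[i] or cooperates[j]:
                    cooperates[i] = cooperates[j] = True
        rates.append(sum(cooperates) / n_agents)
    return rates
```

With zero seeds nothing spreads; with one or more, full cooperation emerges within a single round of this toy, purely through interaction and with no retraining step, mirroring the qualitative claim above.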