Give physical AI persistent memory of the real world
A robot without persistent memory re-learns its environment every boot. With Headkey, your embodied agent remembers object locations, forms probabilistic beliefs about routines and conditions, and maps how spaces, equipment, and tasks connect — across every deployment.
Three Primitives, One Cognitive Architecture
Each primitive serves a different purpose. Here's how they work for this use case.
Memories
Remember spatial layouts and task procedures
Object locations, navigation paths, and step-by-step procedures are stored and searchable by natural language. Your robot never forgets where things are or how to do a task.
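To make the retrieval idea concrete, here is a minimal, self-contained sketch of searching stored memories. The `Memory` class and keyword-overlap scoring are illustrative assumptions, not Headkey's API — a real deployment would use semantic search, but the shape of the interaction is the same.

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    content: str
    tags: list = field(default_factory=list)
    importance: str = "medium"

def search(memories, query):
    """Rank memories by naive keyword overlap with the query.

    Stand-in for semantic retrieval: count query terms that appear
    in a memory's content or tags, and sort by that score.
    """
    terms = set(query.lower().split())
    scored = []
    for m in memories:
        words = set(m.content.lower().split()) | {t.lower() for t in m.tags}
        overlap = len(terms & words)
        if overlap:
            scored.append((overlap, m))
    return [m for _, m in sorted(scored, key=lambda s: -s[0])]

store = [
    Memory("Fire extinguisher located in Warehouse Zone B, aisle 3, top shelf.",
           tags=["safety", "warehouse", "zone-b"], importance="high"),
    Memory("Charging dock is next to the loading bay.", tags=["power"]),
]
hits = search(store, "fire extinguisher")
```

The robot asks in natural language and gets back the matching memory records, regardless of how long ago they were stored.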
```json
{
  "content": "Fire extinguisher located in Warehouse Zone B, aisle 3, top shelf. Last verified during safety audit.",
  "tags": ["safety", "warehouse", "zone-b"],
  "importance": "high"
}
```

Beliefs
Form probabilistic beliefs about the environment
"Dock door 4 is usually locked after 6pm" at 0.9 confidence. "Forklift traffic peaks between 2-3pm" at 0.75. Beliefs update as conditions change — seasonal patterns, shifted schedules, rearranged layouts.
```json
{
  "statement": "Dock door 4 is locked after 6pm on weekdays",
  "confidence": 0.9,
  "subject": "Dock Door 4",
  "object": "access schedule"
}
```

Relationships
Map spatial and operational topology
Zone A connects to Zone B via corridor 2. The conveyor feeds into the packaging station. Tool X is stored near Workstation Y. Navigate by meaning, not just coordinates.
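"Navigate by meaning" can be illustrated with a plain breadth-first search over relationship triples. The triples and `find_path` helper below are hypothetical stand-ins for Headkey's relationship graph, treated as undirected for pathfinding.

```python
from collections import deque

# Hypothetical subject-predicate-object triples, mirroring the examples above.
triples = [
    ("Zone A", "connects via corridor 2 to", "Zone B"),
    ("Conveyor Belt C", "feeds into", "Packaging Station"),
    ("Zone B", "contains", "Conveyor Belt C"),
]

def find_path(triples, start, goal):
    """Breadth-first search over the relationship graph (undirected)."""
    adj = {}
    for s, _, o in triples:
        adj.setdefault(s, []).append(o)
        adj.setdefault(o, []).append(s)
    queue, seen = deque([(start, [start])]), {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        for nxt in adj.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None

path = find_path(triples, "Zone A", "Packaging Station")
```

The answer is a chain of named places and equipment rather than raw coordinates, which is what lets the agent reason about *why* a route works.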
```json
{
  "subject": "Packaging Station",
  "object": "Conveyor Belt C",
  "predicate": "receives items from"
}
```

Flat Memory vs. Structured Cognition
What changes when your agent has a mind, not just a vector store.
| Dimension | Flat Memory (RAG) | Headkey |
|---|---|---|
| Environment knowledge | Re-learns layout every restart | Persistent spatial memory across deployments |
| Changing conditions | Static rules that require manual updates | Beliefs update with confidence as conditions shift |
| Facility navigation | Coordinate-based pathfinding only | Semantic graph of zones, equipment, and connections |
| Multi-robot learning | Each robot learns in isolation | Org-wide visibility shares discoveries across the fleet |
Sensory Event Pipeline
Your Agent Doesn't Have to Decide What to Remember
Stream sensor observations with spatial context. The pipeline automatically extracts memories, forms beliefs, and builds relationships.
What goes in
POST /api/v1/sensory/ingest — events within 10 seconds and 5 meters of each other are grouped into spatial moments
```json
{
  "agentId": "{{agentId}}",
  "modality": "read",
  "spatialContext": {
    "latitude": 37.7749,
    "longitude": -122.4194,
    "locationLabel": "Zone B, Aisle 3"
  },
  "modalityPayload": {
    "textContent": "Picked SKU-4821 from bin 17. Noticed forklift blocking aisle exit. Rerouted via aisle 4.",
    "contentType": "observation",
    "metadata": {
      "robotId": "bot-07",
      "taskId": "pick-2891"
    }
  }
}
```

What comes out automatically
No tool calls needed — the pipeline builds these for you
Memories
- SKU-4821 picked from Zone B, aisle 3, bin 17
- Forklift obstruction in aisle 3 required reroute via aisle 4
Beliefs
- Aisle 3 in Zone B has frequent forklift congestion (0.80, reinforced)
- Aisle 4 is a viable alternate route from Zone B pick area (0.70, new)
Relationships
- Zone B Aisle 3 → alternate route → Zone B Aisle 4
- Forklift Traffic → causes delays in → Zone B Aisle 3
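The grouping rule from the ingest step (within 10 seconds and 5 meters) can be sketched as a pass over time-ordered events, using haversine distance between GPS fixes. The event shape and `group_moments` function are illustrative assumptions, not Headkey's internal implementation.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    r = 6_371_000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def group_moments(events, max_dt=10.0, max_dist=5.0):
    """Group time-ordered events into spatial moments: an event joins the
    current moment if it is within max_dt seconds and max_dist meters
    of the previous event; otherwise it starts a new moment."""
    moments = []
    for ev in events:
        if moments:
            prev = moments[-1][-1]
            dt = ev["t"] - prev["t"]
            dist = haversine_m(prev["lat"], prev["lon"], ev["lat"], ev["lon"])
            if dt <= max_dt and dist <= max_dist:
                moments[-1].append(ev)
                continue
        moments.append([ev])
    return moments

events = [
    {"t": 0.0, "lat": 37.77490, "lon": -122.41940},
    {"t": 4.0, "lat": 37.77492, "lon": -122.41941},   # ~2 m away, 4 s later: same moment
    {"t": 30.0, "lat": 37.77600, "lon": -122.41800},  # far away, 26 s later: new moment
]
moments = group_moments(events)
```

Grouping before extraction is what lets the pipeline treat "picked the SKU, saw the forklift, rerouted" as one coherent episode instead of three disconnected facts.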
See It in Action
A warehouse robot that remembers item locations, forms beliefs about facility patterns, and maps how zones and equipment connect.
Start building your embodied agent
Free to start. Add persistent cognition to any MCP-compatible agent in 60 seconds.