Live · 4 modules deployed at Parsec
In development · 3 modules
Apache 2.0 · ROS2 Native · Sensor Agnostic
Module 01
ROSE CORE
Any Robot, One Stack

ROSE CORE is the hardware abstraction layer that lets every other module run on any robot. You connect your physical platform once. Everything above it stays the same regardless of what that platform is.

Every real deployment involves hardware you don't fully control. Sensors differ. Actuators differ. Communication buses differ. ROSE CORE abstracts all of it, so a navigation or language module written for one robot runs on another without modification. This is what makes ROSE a platform rather than a per-robot project.
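The pattern described above can be sketched as a small abstract interface. This is a hypothetical illustration, not the actual ROSE CORE API: the class names `RobotPlatform`, `SimulatedPlatform`, and `Pose` are assumptions made for the example.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class Pose:
    x: float
    y: float
    theta: float


class RobotPlatform(ABC):
    """Hypothetical hardware interface: modules above it see only this API."""

    @abstractmethod
    def get_pose(self) -> Pose: ...

    @abstractmethod
    def set_velocity(self, linear: float, angular: float) -> None: ...


class SimulatedPlatform(RobotPlatform):
    """One concrete backend; an industrial or research base would
    implement the same two methods and nothing above would change."""

    def __init__(self):
        self._pose = Pose(0.0, 0.0, 0.0)
        self._cmd = (0.0, 0.0)

    def get_pose(self) -> Pose:
        return self._pose

    def set_velocity(self, linear: float, angular: float) -> None:
        self._cmd = (linear, angular)


# Any module written against RobotPlatform runs unchanged on either backend.
robot: RobotPlatform = SimulatedPlatform()
robot.set_velocity(0.5, 0.0)
print(robot.get_pose())
```

Swapping `SimulatedPlatform` for a driver wrapping a real base is the whole porting effort; navigation or language code never sees the difference.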

Live at Parsec
Any Robot. One Interface.
Industrial · Custom · Research
↓
ROSE CORE · Universal Hardware Interface
↓
NLP · NAV · VISION · ORCH · EDGE · SIM
Module 02
ROSE NLP
Language Grounded in Physical Space

ROSE NLP handles the full chain from spoken input to physical action. It does not just transcribe. It maps what someone says to where they are, what is near them, and what a contextually correct response looks like in that specific space.

Real spaces introduce constraints that text-based NLP ignores: multilingual visitors, background noise, ambiguous spatial references, and questions that change meaning depending on where you are standing. ROSE NLP is built around these constraints. It handles spatial grounding, maintains conversation context, and produces actions the robot can execute.

Live at Parsec
From Words to Action
Visitor says
"where is the space exhibit?"
↓ ROSE NLP
Intent Parsed
intent: navigate
target: space_exhibit
context: floor_1, zone_B
lang: en
↓ Action
Navigate + Explain · Room 204 · 40m
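The flow above can be sketched as a data structure plus a mapping step. The `Intent` record and `to_action` function are illustrative assumptions, not the real ROSE NLP interface.

```python
from dataclasses import dataclass


@dataclass
class Intent:
    """Hypothetical parsed-intent record matching the example above."""
    intent: str
    target: str
    context: dict
    lang: str


def to_action(parsed: Intent) -> dict:
    """Map a parsed intent to an executable robot action (illustrative only)."""
    if parsed.intent == "navigate":
        return {"action": "navigate_and_explain",
                "goal": parsed.target,
                **parsed.context}
    # Anything the parser cannot ground becomes a clarification request.
    return {"action": "clarify"}


parsed = Intent(intent="navigate", target="space_exhibit",
                context={"floor": "floor_1", "zone": "zone_B"}, lang="en")
print(to_action(parsed))
# {'action': 'navigate_and_explain', 'goal': 'space_exhibit', 'floor': 'floor_1', 'zone': 'zone_B'}
```

The key property is that the output is always executable: either a grounded navigation goal or an explicit request for clarification, never free text.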
Module 04
ROSE VISION
Social Scene Understanding

ROSE VISION gives the robot social perception. Beyond object detection, it classifies how people are behaving (engaged, lost, moving, waiting) and infers what the robot should do in response.

Physical spaces are full of social signals that standard computer vision ignores: where someone is looking, whether they have stopped moving, whether they are in a group. ROSE VISION processes these signals and feeds structured intent to other modules, so the robot's response is socially calibrated, not just technically functional.

Live at Parsec
ROSE VISION · Scene Analysis
Engaged · Looking at robot · 94%
Possibly lost · Standing still · 81%
Passing through · Do not interrupt · 77%
Recommended action: Approach engaged visitor
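The scene analysis above can be sketched as structured detections plus a selection rule. The detection records, priority table, and `recommend` function are illustrative assumptions; the scores mirror the example, not a real model.

```python
# Hypothetical structured output from a social-perception pass.
detections = [
    {"id": 1, "state": "engaged",       "cue": "looking_at_robot", "conf": 0.94},
    {"id": 2, "state": "possibly_lost", "cue": "standing_still",   "conf": 0.81},
    {"id": 3, "state": "passing",       "cue": "do_not_interrupt", "conf": 0.77},
]

# Social priority: engaged visitors first, lost visitors next,
# people passing through are never interrupted.
PRIORITY = {"engaged": 2, "possibly_lost": 1, "passing": 0}


def recommend(people):
    """Pick whom the robot should respond to: highest priority, then confidence."""
    approachable = [p for p in people if PRIORITY[p["state"]] > 0]
    if not approachable:
        return {"action": "idle"}
    best = max(approachable, key=lambda p: (PRIORITY[p["state"]], p["conf"]))
    return {"action": "approach", "person": best["id"]}


print(recommend(detections))  # → {'action': 'approach', 'person': 1}
```

The output feeds other modules as structured intent, which is what makes the response socially calibrated rather than purely reactive.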
Module 05
ROSE ORCH
Shared Awareness Across Robots

When multiple robots operate in the same space, ROSE ORCH handles task allocation and prevents conflict. Each robot knows what the others are doing and defers accordingly, so they work as a coordinated system rather than independent machines.

Coordination failures in physical deployments are hard to detect until they cause visible problems. Two robots approaching the same visitor. A robot accepting a task that another just completed. ROSE ORCH solves this with a shared state layer that every robot reads and writes in real time, including task handoff when conditions change.

In Development
Task Distribution · 3 Robots
ROSE ORCH · Task Coordinator
Robot A · Greeting · Entrance · Zone 1
Robot B · Navigation · Floor 2 · Active
Robot C · Exhibit Answers · Zone 4
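The shared state layer described above can be sketched as an atomic task board. This is a minimal in-process illustration under assumed names (`SharedTaskBoard`, `claim`, `release`); a real deployment would back the same idea with a networked store rather than a lock-guarded dict.

```python
import threading


class SharedTaskBoard:
    """Hypothetical shared-state layer: robots claim tasks atomically,
    so no two robots ever take the same one."""

    def __init__(self):
        self._lock = threading.Lock()
        self._claims = {}  # task_id -> robot_id

    def claim(self, task_id: str, robot_id: str) -> bool:
        """Return True if this robot now owns the task, False if another does."""
        with self._lock:
            owner = self._claims.setdefault(task_id, robot_id)
            return owner == robot_id

    def release(self, task_id: str, robot_id: str) -> None:
        """Hand a task back (e.g. when conditions change) so another robot can take it."""
        with self._lock:
            if self._claims.get(task_id) == robot_id:
                del self._claims[task_id]


board = SharedTaskBoard()
print(board.claim("greet_entrance", "robot_a"))  # True: robot A owns it
print(board.claim("greet_entrance", "robot_b"))  # False: robot B defers
board.release("greet_entrance", "robot_a")
print(board.claim("greet_entrance", "robot_b"))  # True: handoff complete
```

Because every claim goes through the same atomic check, the failure modes in the text (two robots approaching the same visitor, a task accepted twice) cannot occur.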
Module 06
ROSE EDGE
Built to Stay Running

ROSE EDGE handles the operational layer: deploying updates to live robots without downtime, monitoring system health, recovering from failures, and maintaining the logs that make post-incident analysis possible.

The gap between a robot that works in a demo and one that works every day is operational infrastructure. ROSE EDGE provides the deployment, monitoring, and recovery tooling needed to run Physical AI in production. It is designed to be operated by people who are not software engineers, because real deployments are.

In Development
Deployment Pipeline
01 Build · Done
02 Simulate · Done
03 Stage Deploy · Running
04 Health Check · Pending
05 Live Rollout · Pending
06 Monitor · Pending
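The staged pipeline above can be sketched as a gated sequence: each stage must pass before the next runs, and a failure halts the rollout so the live fleet keeps running the previous version. Stage names and the `run_pipeline` function are illustrative assumptions.

```python
# Hypothetical staged-rollout gate mirroring the pipeline above.
STAGES = ["build", "simulate", "stage_deploy",
          "health_check", "live_rollout", "monitor"]


def run_pipeline(checks):
    """checks maps a stage name to a callable returning True on success.
    Returns (completed_stages, failed_stage_or_None). Missing stages
    default to passing, so only the gates you wire up can block."""
    done = []
    for stage in STAGES:
        if not checks.get(stage, lambda: True)():
            return done, stage  # halt: nothing past this stage runs
        done.append(stage)
    return done, None


done, failed = run_pipeline({"health_check": lambda: False})
print(done)    # ['build', 'simulate', 'stage_deploy']
print(failed)  # 'health_check'
```

The point of the gate ordering is that a bad build never reaches a live robot: it is caught in simulation or staging, and the health check is the last stop before rollout.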
Module 07
ROSE SIM
Closing the Sim-to-Real Gap

ROSE SIM provides a simulation environment calibrated to real deployment conditions: actual floor plans, observed visitor movement patterns, and logged interaction sequences. The transfer pipeline is designed to close the gap between simulated and real-world performance.

Standard simulation achieves 20 to 30% real-world policy transfer. That gap exists because generic physics environments do not reflect how real spaces behave. ROSE SIM closes it by grounding the simulation in the environment's own data: floor plans, crowd patterns, and interaction sequences captured from live deployments.

In Development
Sim-to-Real Transfer Rate
Standard Simulation → Real World · ~27%
In Simulation · 100%
ROSE SIM Target · 70%+
Calibrated to Parsec floor plan and real visitor movement data. Transfer pipeline uses domain randomisation + environment-specific grounding.
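The domain-randomisation side of that pipeline can be sketched as sampling each episode's parameters from calibrated ranges rather than generic defaults. The parameter names and numeric ranges below are illustrative assumptions, not measured Parsec values.

```python
import random

# Hypothetical calibrated ranges: in ROSE SIM these would come from
# logged deployment data, not hand-picked defaults.
CALIBRATED_RANGES = {
    "floor_friction":   (0.55, 0.75),  # illustrative numbers
    "sensor_noise_std": (0.01, 0.05),
    "crowd_density":    (0.10, 0.90),  # visitors per square metre
}


def sample_episode_params(rng=random):
    """Draw one simulation episode's parameters from the calibrated ranges."""
    return {name: rng.uniform(lo, hi)
            for name, (lo, hi) in CALIBRATED_RANGES.items()}


# Each training episode sees a different but plausible environment.
params = sample_episode_params(random.Random(0))
print(params)
```

Narrowing the randomisation to ranges actually observed in the target space is what distinguishes environment-specific grounding from generic domain randomisation.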

Ready to build with ROSE?

Fork the stack, access the dataset, or bring your space into the network.

View on GitHub Get Involved