ROSE is not a single piece of software. It's a layered stack. Each module solves one specific part of the human-robot interaction problem. They work independently or together. All open source. All being tested in a live public environment.
ROSE CORE is the hardware abstraction layer that lets every other module run on any robot. You connect your physical platform once. Everything above it stays the same regardless of what that platform is.
Every real deployment involves hardware you don't fully control. Sensors differ. Actuators differ. Communication buses differ. ROSE CORE abstracts all of it, so a navigation or language module written for one robot runs on another without modification. This is what makes ROSE a platform rather than a per-robot project.
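A minimal sketch of what that contract can look like, in Python. The class and method names below are illustrative assumptions, not ROSE CORE's actual API; the point is the shape: one adapter per physical robot, one stable interface for everything above it.

```python
# Hypothetical hardware abstraction sketch; names are not ROSE CORE's real API.
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class LaserScan:
    ranges: list[float]   # distances in meters, one per beam
    angle_step: float     # angular resolution in radians

class RobotPlatform(ABC):
    """One adapter per physical robot; modules above talk only to this interface."""

    @abstractmethod
    def read_lidar(self) -> LaserScan: ...

    @abstractmethod
    def set_velocity(self, linear: float, angular: float) -> None: ...

class SimulatedPlatform(RobotPlatform):
    """A stand-in adapter; a real one would wrap the platform's drivers."""

    def read_lidar(self) -> LaserScan:
        return LaserScan(ranges=[2.0] * 360, angle_step=0.0175)

    def set_velocity(self, linear: float, angular: float) -> None:
        print(f"cmd_vel linear={linear:.2f} angular={angular:.2f}")

def avoid_obstacles(robot: RobotPlatform) -> None:
    """A behavior written once against the interface; it runs on any adapter."""
    scan = robot.read_lidar()
    front = min(scan.ranges[170:190])   # beams roughly straight ahead
    if front < 0.5:
        robot.set_velocity(0.0, 0.5)    # too close: stop and turn
    else:
        robot.set_velocity(0.3, 0.0)    # clear: move forward
```

Swapping `SimulatedPlatform` for another adapter changes nothing in `avoid_obstacles`; that isolation is the entire job of the layer.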
ROSE NLP handles the full chain from spoken input to physical action. It does not just transcribe. It maps what someone says to where they are, what is near them, and what a contextually correct response looks like in that specific space.
Real spaces introduce constraints that text-based NLP ignores: multilingual visitors, background noise, ambiguous spatial references, and questions that change meaning depending on where you are standing. ROSE NLP is built around these constraints. It handles spatial grounding, maintains conversation context, and produces actions the robot can execute.
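As a rough illustration of spatial grounding, the sketch below resolves the same utterance to different targets depending on where the speaker is standing. The rule-based matching and every name in it are stand-ins for illustration, not ROSE NLP's actual pipeline.

```python
# Toy spatial grounding example; landmark data and matching logic are invented.
import math
from dataclasses import dataclass
from typing import Optional

@dataclass
class Landmark:
    name: str
    x: float
    y: float

LANDMARKS = [
    Landmark("north exit", 0.0, 20.0),
    Landmark("south exit", 0.0, -20.0),
    Landmark("cafe", 5.0, 3.0),
]

@dataclass
class Action:
    kind: str                    # e.g. "guide" or "say"
    target: Optional[Landmark]
    utterance: str

def ground(utterance: str, speaker_xy: tuple[float, float]) -> Action:
    """Map a 'nearest exit' style request to a concrete landmark near the speaker."""
    if "exit" in utterance.lower():
        exits = [l for l in LANDMARKS if "exit" in l.name]
        nearest = min(exits, key=lambda l: math.dist((l.x, l.y), speaker_xy))
        return Action("guide", nearest, f"Follow me, the {nearest.name} is this way.")
    return Action("say", None, "Could you rephrase that?")

# The same words produce different plans in different places:
print(ground("Where is the nearest exit?", (1.0, 15.0)).target.name)   # north exit
print(ground("Where is the nearest exit?", (1.0, -12.0)).target.name)  # south exit
```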
ROSE VISION gives the robot social perception. Beyond object detection, it classifies how people are behaving (engaged, lost, moving, waiting) and infers what the robot should do in response.
Physical spaces are full of social signals that standard computer vision ignores: where someone is looking, whether they have stopped moving, whether they are in a group. ROSE VISION processes these signals and feeds structured intent to other modules, so the robot's response is socially calibrated, not just technically functional.
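A toy version of that signal path, with made-up thresholds and field names rather than ROSE VISION's actual schema: tracked-person features go in, a behavior label and a suggested response come out.

```python
# Illustrative behavior classification; thresholds and names are assumptions.
from dataclasses import dataclass

@dataclass
class TrackedPerson:
    speed: float              # m/s from the person tracker
    facing_robot: bool        # coarse gaze estimate
    seconds_stationary: float
    group_size: int

def classify(p: TrackedPerson) -> str:
    if p.facing_robot and p.speed < 0.2:
        return "engaged"      # stopped and looking at the robot
    if p.seconds_stationary > 10 and not p.facing_robot:
        return "lost"         # lingering without a clear target
    if p.speed < 0.2:
        return "waiting"
    return "moving"

def suggested_response(state: str) -> str:
    """Structured intent handed to other modules, keyed by behavior label."""
    return {
        "engaged": "greet_and_offer_help",
        "lost": "approach_slowly",
        "waiting": "hold_position",
        "moving": "do_not_interrupt",
    }[state]

person = TrackedPerson(speed=0.05, facing_robot=False,
                       seconds_stationary=14, group_size=1)
state = classify(person)
print(state, "->", suggested_response(state))   # lost -> approach_slowly
```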
When multiple robots operate in the same space, ROSE ORCH handles task allocation and prevents conflict. Each robot knows what the others are doing and defers accordingly, so they work as a coordinated system rather than independent machines.
Coordination failures in physical deployments are hard to detect until they cause visible problems. Two robots approaching the same visitor. A robot accepting a task that another just completed. ROSE ORCH solves this with a shared state layer that every robot reads and writes in real time, including task handoff when conditions change.
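The core mechanism can be sketched as an atomic check-and-claim over shared state. The in-process lock and every name below are stand-ins for whatever networked store a real deployment would use; the invariant is that two robots can never hold the same task.

```python
# Toy shared task board; a real system would back this with a networked store.
import threading

class SharedTaskBoard:
    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._owner: dict[str, str] = {}   # task_id -> robot_id

    def try_claim(self, task_id: str, robot_id: str) -> bool:
        """Atomically claim a task; returns False if another robot already holds it."""
        with self._lock:
            if task_id in self._owner:
                return False
            self._owner[task_id] = robot_id
            return True

    def hand_off(self, task_id: str, from_robot: str, to_robot: str) -> bool:
        """Reassign a task when conditions change, e.g. the owner's battery runs low."""
        with self._lock:
            if self._owner.get(task_id) != from_robot:
                return False
            self._owner[task_id] = to_robot
            return True

board = SharedTaskBoard()
print(board.try_claim("greet-visitor-17", "rose-a"))            # True: rose-a takes it
print(board.try_claim("greet-visitor-17", "rose-b"))            # False: rose-b defers
print(board.hand_off("greet-visitor-17", "rose-a", "rose-b"))   # True: handoff
```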
ROSE EDGE handles the operational layer: deploying updates to live robots without downtime, monitoring system health, recovering from failures, and maintaining the logs that make post-incident analysis possible.
The gap between a robot that works in a demo and one that works every day is operational infrastructure. ROSE EDGE provides the deployment, monitoring, and recovery tooling needed to run Physical AI in production. It is designed to be operated by people who are not software engineers, because real deployments are.
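One piece of that tooling, sketched in miniature: a heartbeat watchdog that restarts stale modules and writes structured logs. The names, fields, and timeout here are invented for the example; they are not ROSE EDGE's interface.

```python
# Hypothetical watchdog sketch; names and thresholds are illustrative only.
import json
import time

HEARTBEAT_TIMEOUT_S = 5.0
last_heartbeat = {"rose_nlp": time.time(), "rose_vision": time.time()}

def log_event(event: str, **fields) -> None:
    """Structured, append-only logs are what make incidents reconstructable."""
    print(json.dumps({"ts": time.time(), "event": event, **fields}))

def restart_module(name: str) -> None:
    # A real implementation would call into a process supervisor here.
    log_event("module_restarted", module=name)

def watchdog_tick() -> None:
    """Run periodically: flag and restart any module whose heartbeat went stale."""
    now = time.time()
    for module, seen in last_heartbeat.items():
        if now - seen > HEARTBEAT_TIMEOUT_S:
            log_event("heartbeat_missed", module=module,
                      stale_for=round(now - seen, 1))
            restart_module(module)
            last_heartbeat[module] = now   # reset to avoid restart storms
```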
ROSE SIM provides a simulation environment calibrated to real deployment conditions. Its transfer pipeline is designed to close the gap between simulated and real-world performance.
Standard simulation typically achieves only 20 to 30% real-world policy transfer. That gap exists because generic physics environments do not reflect how real spaces behave. ROSE SIM grounds its simulation in real environment data: actual floor plans, observed visitor movement patterns, and interaction sequences captured from live deployments.
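A simplified picture of what that grounding can look like: a simulator seeded with the deployed floor plan that replays logged visitor trajectories instead of sampling synthetic crowds. The file formats and names here are assumptions for the example, not ROSE SIM's data pipeline.

```python
# Illustrative data-grounded simulator; file formats and names are assumptions.
import json

class GroundedSim:
    def __init__(self, floorplan_path: str, traces_path: str) -> None:
        # Occupancy grid exported from the real space: 1 = wall, 0 = walkable.
        with open(floorplan_path) as f:
            self.grid: list[list[int]] = json.load(f)
        # Visitor trajectories logged on site: each trace is a list of [t, x, y].
        with open(traces_path) as f:
            self.traces: list[list[list[float]]] = json.load(f)

    def visitors_at(self, t: float) -> list[tuple]:
        """Positions of replayed visitors at simulation time t."""
        out = []
        for trace in self.traces:
            samples = [p for p in trace if p[0] <= t]
            if samples:
                out.append(tuple(samples[-1][1:]))   # latest logged (x, y)
        return out
```

Because the crowd in the simulator is the crowd that was actually observed, a policy that works there has already seen the movement patterns it will face on deployment day.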
Fork the stack, access the dataset, or bring your space into the network.