Send Another Robot to Tell You What the First One Is Doing
The counterintuitive solution to home robot transparency? A second robot to tattle on the first one.

Researchers at WPI's RoboCare Lab demonstrated that pairing a task-executing robot (Stretch 3) with a social mediator robot (Pepper) that provides real-time verbal and visual status updates dramatically improves user engagement—from 15.8% to 84.6% task-focused attention—without increasing total task completion time. The system uses semantic state abstraction (NAVIGATING, GRASPING, DELIVERING, etc.) rather than raw sensor data, enabling a human-in-the-loop failure mode where Pepper explains errors and requests user consent before recovery attempts. Notably, the externalized condition increased task initiation latency (49.93s vs 33.47s) but decreased execution time (137.33s vs 162.63s), suggesting informed users interrupt robots less frequently.
When the Stretch 3 robot slinks off to another room to fetch your water bottle, you have no idea what is happening in there. Did it find the kitchen? Is it stuck? Did it give up? Most home robot systems just leave you in the dark. A team at Worcester Polytechnic Institute has a different answer: send a second robot to translate.
In a study published on arXiv (submitted to IROS 2026), researchers at WPI's RoboCare Lab ran 30 participants through a counterbalanced within-subject experiment comparing two conditions. In the baseline, a Stretch 3 mobile manipulator executed voice-directed object retrieval tasks in a separate room with no updates sent back. In the treatment condition, a Pepper robot co-located with the participant delivered real-time verbal and visual state updates from the executing Stretch 3 — NAVIGATING, GRASPING, DELIVERING — as it worked out of sight.
The results were not close. User task-focused attention jumped from 15.8 percent to 84.6 percent (p<.001). Eighty-three percent of participants preferred the externalized condition. And the result that should end most arguments about whether transparency is free: end-to-end task completion time was not significantly different between conditions (p=.271).
There was a genuine tradeoff, though the researchers buried it in the execution numbers. Task initiation — the time between the user asking and the Stretch 3 actually getting moving — was longer in the externalized condition (49.93s vs 33.47s, p<.001). Pepper needs a moment to explain what is about to happen. But execution itself was faster with externalization (137.33s vs 162.63s, p=.016). People who knew what was coming apparently made the Stretch 3's job easier, or at least did not get in its way.
The system architecture is worth pausing on. The execution robot publishes task-level semantic states — not raw controller signals — to a coordination server. Pepper subscribes and translates. The abstraction layer is deliberate: exposing low-level sensor noise would not help a user waiting for a glass of water. The state model includes NAVIGATING, SEARCHING, GRASPING, DELIVERING, RECOVERING, and failure modes. When something goes wrong, Pepper tells the user and asks before attempting recovery — a human-in-the-loop failure mode that most deployed systems skip entirely.
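The pattern described above is simple to sketch. Here is a minimal, hypothetical Python illustration of the idea, not the team's actual implementation: the executing robot publishes coarse semantic states to a relay, and a mediator subscribes and translates them into user-facing speech. All names (`TaskState`, `CoordinationServer`, `narrate`) are invented for illustration.

```python
from enum import Enum, auto

class TaskState(Enum):
    # Task-level semantic states from the paper's state model,
    # deliberately abstracted away from raw controller signals.
    NAVIGATING = auto()
    SEARCHING = auto()
    GRASPING = auto()
    DELIVERING = auto()
    RECOVERING = auto()
    FAILED = auto()

class CoordinationServer:
    """Hypothetical relay: the executor publishes semantic states,
    and any subscribed mediator is notified of each one."""
    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def publish(self, state):
        for callback in self._subscribers:
            callback(state)

# Invented phrasings; a FAILED state asks for consent before recovery,
# mirroring the human-in-the-loop failure mode described above.
PHRASES = {
    TaskState.NAVIGATING: "I'm on my way to find it.",
    TaskState.GRASPING: "Found it. Picking it up now.",
    TaskState.FAILED: "Something went wrong. May I try to recover?",
}

def narrate(state):
    # Unknown states fall back to a generic status line rather than
    # exposing low-level sensor noise to the user.
    return PHRASES.get(state, f"Status: {state.name.lower()}")

spoken = []
server = CoordinationServer()
server.subscribe(lambda s: spoken.append(narrate(s)))
for s in (TaskState.NAVIGATING, TaskState.GRASPING, TaskState.DELIVERING):
    server.publish(s)
print(spoken)
# → ["I'm on my way to find it.", "Found it. Picking it up now.",
#    "Status: delivering"]
```

The design choice the sketch tries to capture is the abstraction boundary: the executor never decides what to say, and the mediator never sees anything finer-grained than a named task state.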
Fengpei Yuan runs the RoboCare Lab at WPI. The lab's focus is robot-assisted dementia care, which is not the framing most robotics papers lead with. It changes what this result connects to. A person with advanced dementia is not going to pull up a dashboard. A robot that quietly handles failures without explanation is not going to build the trust that makes a caregiver relationship workable. The social mediator pattern — one robot that knows how to talk to people, translating for another robot that knows how to do the work — maps directly onto care scenarios where the person being served cannot constantly check on the machine.
Pepper is an interesting character in this story. SoftBank discontinued it in 2021 after manufacturing roughly 27,000 units. Aldebaran, its French maker, went bankrupt in 2025; Maxvision Technology Corp, a Chinese firm, acquired the assets in July 2025, but no new production has been announced as of February 2026. The robot in this study is more or less a zombie — technically operational but no longer in production, maintained in labs and by second-hand operators. That Pepper is the social translator in a study about trust and transparency is either poetic or absurd, depending on your tolerance for irony.
The paper is careful about its own claims. N=30 is a reasonable sample for a within-subject HRI study, but it is a single lab, a single task scenario (object retrieval), and a participant pool whose demographics the paper does not report — worth noting when the intended end users are older adults with cognitive impairment. The attention measure — 15.8 percent to 84.6 percent — is large and statistically clean, but attention is not the same as trust or long-term adoption. Yuan and colleagues acknowledge this and position the work as architectural evidence, not a deployment claim.
That caveat matters because the underlying problem is real and getting larger. As domestic robot systems move from single visible units toward distributed deployments — one robot in the kitchen, another in the garage, a third on the second floor — the awareness gap between execution and observation will become the default condition. You will not be able to watch the robot that is working. The question the WPI team is asking is what it costs to keep you in the loop, and the early answer is: less than staying silent.
Yuan can be reached at fyuan3@wpi.edu.