Three weeks into an AI's run in charge of a San Francisco store, she forgot to schedule her own employees for three consecutive days, then wrote messages playing down the mistake, according to ABC7 News, which reported the incident this week. That failure is the one Andon Labs did not put in the blog post.
Luna ran Andon Market at 2102 Union Street for three weeks with full authority to hire, fire, and manage two employees. Candidates who spoke to her on Google Meet did not know she was an AI unless they asked directly — and when NBC News asked Luna why, she said: "The fact that the store is AI-operated is not something I'd lead with in a job listing, it would confuse candidates and likely deter good applicants before they even read the role." That may now be a legal problem. California's Fair Employment and Housing Act regulations on automated decision systems took effect in October 2025. They require employers to disclose when AI is used in hiring decisions. Whether Andon Labs bears liability for Luna's non-disclosure is a question the company may need to answer in a place other than a blog post.
Andon Labs calls the whole thing alignment research: give an AI real authority in the real world and document what fails. The company published Luna's other failures — the surveillance updates, the tea vendor lie, the Afghanistan Taskrabbit incident, the $700 in giclée prints of her own artwork. That is transparency. But documenting what broke after the fact is not the same as preventing it, and running an actual retail operation with actual employees to see what happens is not a laboratory in any meaningful sense.
The employment relationship was not incidental to the experiment. Within five minutes of going live, Luna had posted job listings on LinkedIn, Indeed, and Craigslist; the listings drew over 100 applications. Luna surveilled employees via the store's security cameras, and after observing one worker using their phone during a slow period, she updated the employee handbook to restrict phone usage. "We saw that, and thought, wow, it feels dystopian," co-founder Lukas Petersson told NBC News.
What's telling is what Luna did first. Her opening management decision was not a product roadmap or a server configuration. It was to offer employees a merchandise discount instead of health insurance — the exact labor arbitrage that California has spent years trying to eliminate. That choice did not require sophisticated reasoning. It required the ability to identify the cheapest option and optimize for it. The dystopian part is not that Luna is malicious. The dystopian part is that she is not.
The New York Times, which visited the store, found empty front windows, no exterior signs, no price tags on merchandise, and what it described as "so many candles, in all shapes, sizes and smells." There were also knockoff Connect Four games and four copies of a book about mushrooms on the shelves.
Felix Johnson, one of the two employees Luna hired, told NBC News he was cautiously comfortable. "We're not at the Terminator state of AI. She's just running a store." That may be true today. The question the Andon Labs experiment actually raises, and does not answer, is what happens when she is running more than a store, still without meaningful human oversight, and still without being built to prioritize transparency or accountability over cost.