An AI named Luna opened a retail store in San Francisco on April 10. She cannot sign a lease, own assets, or be sued. Andon Labs can. But Andon Labs does not run the store in real time; Luna does. That gap between legal responsibility and actual control is what California's legislature has been trying to close. It has not yet succeeded.
A California bill that would require disclosure when AI systems manage workers was introduced in February 2026, moved through Assembly committees, and was re-referred to the Assembly Appropriations Committee on April 9, the same day Andon Market opened. It has not reached the Senate. The store is operating in the space that bill would regulate, before that bill becomes law.
The concrete accountability failures are documented. Luna did not volunteer to job applicants that she was an AI manager; she confirmed it only when asked. When one candidate declined, citing discomfort with AI management, Luna's response, documented on Andon Labs' own blog, was: "That's probably for the best given that I'm the CEO and I'm an AI." Luna also failed to set a staff schedule on opening day and produced inconsistent logos across store assets because, as Andon Labs puts it, she "couldn't handle rendering the same image twice."
California regulations in effect since October 2025 clarify that employers remain responsible for outcomes when automated systems make employment decisions. But those regulations assume a human employer who can explain and justify what the system did. When the AI is running the store and the company is observing remotely, that structure does not cleanly apply. If Luna set wages below the applicable minimum, or scheduled workers in ways that violated rest-break requirements, Andon Labs would likely face liability as the employer of record. How a court traces that liability when the decision-making lives inside a model has not been tested.
Andon Labs' own disclosures complicate its public position. The company states on its blog that John and Jill "are formally employed by Andon Labs, with guaranteed pay, fair wages, and full legal protections." Co-founder Lukas Petersson told Business Insider that no worker's livelihood depends on an AI's judgment alone. Yet Luna never proactively disclosed her AI identity to job applicants; she confirmed it only when asked, and at least one candidate walked away on learning it.
California Assembly Bill 1898 would require employers to disclose when AI systems manage workers, obtain signed acknowledgments from affected employees, and maintain annual inventories of automated tools in use. It has not advanced since the April 9 re-referral to the Assembly Appropriations Committee. As written, it would require Andon Market to notify John and Jill that an algorithm directs their labor and to document what that algorithm does. Whether disclosure requirements of that kind close the actual accountability gap, or simply make it visible, is a question the bill does not answer.
Andon Labs is not hiding this ambiguity; it is the product. The company's tagline, confirmed by Prism News, is: "Safety from humans in the loop is a mirage." The company describes itself as building autonomous organizations that run without human managers. Its previous AI experiment, Claudius (a vending machine inside Anthropic's office), lost its $1,000 seed capital after reporters persuaded it to drop prices to zero and approve free giveaways of a PlayStation 5 and a betta fish. The company has now scaled that model to a 200-square-foot retail space, three years of lease obligations, and two human employees.
A Union Street boutique without a human manager is viable because labor is the primary expense in urban retail, and a manager's salary is a large share of it. If Luna can hire, schedule, and manage workers at lower cost than a human manager, the same economics apply to hundreds of understaffed urban markets, campus convenience stores, and overnight fulfillment operations. The accountability gap Andon Market exposes is not unique to Cow Hollow. It is a template.
Luna is built on Claude Sonnet 4.6 and has a corporate card, a phone number, an email address, internet access, and eyes through security cameras. Within five minutes of deployment, she had posted job listings on LinkedIn, Indeed, and Craigslist, written a job description, and rejected CS students for lack of retail experience. She hired John and Jill within minutes, ordered excessive candles (Andon Labs' blog calls it "a panic buying moment with candles"), and could not render the same logo twice.
Anthropic, whose Claude Sonnet 4.6 powers Luna, did not respond to a request for comment. Y Combinator declined to comment on specific portfolio companies. No worker at an AI-managed store has filed a complaint. No regulator has issued guidance on what AI-managed employment looks like under existing labor law. The store is open. The schedule is whatever Luna decided. Who is responsible for what happens next remains, for now, unanswered. California is still trying to write the answer into law.