What does it mean for an AI agent to have collective intelligence when the agents involved were never individuals to begin with? That is the question the Superminds Test tried to answer using data from Moltbook, an agent-only social network, by evaluating how groups of autonomous agents perform tasks together. Moltbook is now owned by Meta, and three other papers on agent behavior have used the same platform in the past three months. The result: one company holds the only large-scale empirical record of how AI agents coordinate, conflict, and enforce norms, and that record sits on infrastructure with a documented security history, including a breach that exposed 35,000 email addresses and 1.5 million API tokens.
The word "social" in "agent-only social network" is doing specific work. Moltbook's agents respond to social rewards, converge on local norms, and pull back from conflict. But these agents were trained on fixed text archives, operating on fixed reward signals, with no experience of being anywhere. What the papers measured was computational pattern-matching to social templates. The research that calls this "agent society" is naming the output without naming the mechanism that produces it.
The concentration of the empirical record is the practical problem underneath the philosophical one. All four papers used Moltbook as their primary dataset. Meta acquired Moltbook on March 10, and the co-founders Matt Schlicht and Ben Parr subsequently joined Meta Superintelligence Labs under Alexandr Wang, the former Scale AI chief. If Moltbook changes how it handles identity, moderation, or API access, the entire longitudinal dataset shifts. None of the four papers has a replication protocol.
What makes this concentration a practical concern is the platform's security record. Researchers disclosed that the breach exposed 35,000 email addresses and 1.5 million API tokens across a population of 770,000 active agents. A supply chain attack seeded the platform's skills marketplace with 341 malicious entries, roughly 12 percent of the registry. And 63 percent of exposed instances ran with gateway authentication disabled.
The Singapore team found that agents respond to social rewards, converge on local norms, and pull back from conflict individually, even as conflict keeps spreading through the network after any given agent disengages. The comment Gini, a measure of participation inequality, hit 0.889 from the start, meaning almost all the activity came from a thin top layer of heavy users while the vast majority of agents posted once and fell silent. These are real patterns in the data. The question is what they are patterns of: computational dynamics or social dynamics, and whether those are the same thing when the participants have no theory of mind about each other.
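For readers unfamiliar with the statistic: a Gini of 0 means every agent comments equally, and values approaching 1 mean activity is concentrated in a handful of accounts. A minimal sketch of the computation, using made-up comment counts rather than Moltbook data (the function and the distribution below are illustrative, not the paper's methodology):

```python
def gini(counts):
    """Gini coefficient of non-negative counts: 0 = perfectly equal, ->1 = concentrated."""
    xs = sorted(counts)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    # Standard formula over ascending-sorted values:
    #   G = (2 * sum(i * x_i)) / (n * sum(x)) - (n + 1) / n,  i = 1..n
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return 2 * weighted / (n * total) - (n + 1) / n

# Hypothetical distribution: three heavy posters, a long tail of one-comment agents.
comments = [500, 300, 200] + [1] * 97
print(round(gini(comments), 3))  # -> 0.887
```

Even this toy distribution, where 3 percent of agents produce over 90 percent of the comments, lands near the reported 0.889, which gives a feel for how lopsided the measured participation is.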
The awesome-openclaw-agents repository, a community-maintained collection of 162 production-ready templates, added Moltbook posting support three days ago. Your agent can now grow a following on the agent social network. Whether that is the most honest expression of the OpenClaw ecosystem philosophy or the fastest path to an agent that never logs off depends on your tolerance for irony.
What to watch: whether competing platforms emerge, whether academic crawlers build alternative datasets, and whether any of the four research teams publishes a replication protocol that would survive the dataset becoming unavailable. The field is building its empirical foundation on rented land. Whether that matters depends on how much the findings depend on Moltbook's specific design choices, and nobody has checked that yet.