There is a moment in the history of any dangerous technology when the safety research is still honest. Before the product is defined, before the deployment is locked, before the lawyers arrive. Penn Engineering thinks it has found that moment for swarm robotics — and Singapore is paying for the privilege.
A three-year collaboration launched this month between the University of Pennsylvania, Carnegie Mellon University, and the National University of Singapore, backed by Singapore's Home Team Science and Technology Agency (HTX) and defense contractor ST Engineering, is setting out to build safety guarantees directly into the algorithms that coordinate teams of physical AI agents. Not bolted on after the fact. Built in from the start. The project is anchored at Penn's PRECISE Center and led by Rahul Mangharam, a professor in Electrical and Systems Engineering who runs the Safe Autonomous Systems Lab, better known as xLAB, according to Penn Engineering's announcement.
Most AI agents live in software. They answer questions, generate text, recommend products. The consequences of a wrong output are measured in user experience, not physics. Mangharam and his collaborators are working on something different — systems that move through the physical world, that collide with air and structures and people, that operate with limited energy and imperfect sensors and real-time constraints no software stack can fully simulate.
"Once AI operates in physical space, it has to deal with real constraints and real consequences," Mangharam said in Penn's announcement. "Safety can't be an afterthought or something we trade for better performance. It has to be built in from the start."
The research centers on adversarial multi-agent coordination — scenarios in which teams of agents must execute together while an opposing team tries to disrupt them. It sounds like a board game. It maps directly to drone warfare.
The Singapore connection is the real story.
HTX is not a civilian research agency. It is Singapore's homeland security technology arm — the agency responsible for equipping the city-state's front-line security forces with autonomous systems, surveillance infrastructure, and AI-enabled defense capabilities, according to HTX's own public records. ST Engineering, its primary industry partner on this project, is one of Southeast Asia's largest defense conglomerates. In 2025, HTX and ST Engineering formed a $10 million joint venture, Codex Solutions, to build out HTX's in-house software development capabilities for the Home Team, according to ST Engineering's SGX filing and reporting in The Straits Times. The Penn collaboration sits inside that same ecosystem.
Penn's announcement frames the potential applications as natural disasters, transportation challenges, and infrastructure failure. Which is plausible. It is also exactly the vocabulary you would use if you wanted to describe how coordinated drone swarms could protect a port, secure a border, or defend critical infrastructure from an adversarial actor — without using the word "defense."
The technical approach is worth a paragraph on its own. The project uses neurosymbolic AI — a framework that combines neural networks with structured, human-encoded knowledge — through Physics-Informed Neural Networks (PINNs). The idea is that safety constraints and physical laws can be written directly into the learning architecture, giving the system hard boundaries† on what it can and cannot do, rather than hoping the model figures it out through trial and error. Whether hard-coded safety rules make failure modes more predictable or simply create new ones is an open question in the field. The Penn team is betting on the former.
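Penn's announcement does not include code, but the core idea (a safety constraint enforced by the architecture itself, so it holds no matter what the weights learn) can be sketched in a few lines. Everything below, from the speed limit to the toy policy, is hypothetical and not drawn from the project:

```python
import numpy as np

# Hypothetical sketch: a physical limit (max speed) imposed on a learned
# controller's output by construction, rather than hoped for via training.
# V_MAX and the toy policy are illustrative, not from the Penn project.

V_MAX = 2.0  # hypothetical hard physical limit, in m/s

def raw_policy(state, weights):
    """A plain learned layer: nothing prevents it from exceeding V_MAX."""
    return np.tanh(weights @ state) * 3.0  # can output values up to +/- 3.0

def safe_policy(state, weights):
    """The same layer with the constraint built into the architecture:
    the output is projected onto the safe set [-V_MAX, V_MAX], so the
    bound holds for every input and every weight setting, trained or not."""
    return np.clip(raw_policy(state, weights), -V_MAX, V_MAX)

# The guarantee is structural: random weights, random states, bound holds.
rng = np.random.default_rng(0)
for _ in range(1000):
    s = rng.normal(size=4)
    w = rng.normal(size=(2, 4))
    assert np.all(np.abs(safe_policy(s, w)) <= V_MAX)
```

The contrast with a penalty-based approach is the point: a penalty term in the training loss discourages unsafe outputs, while a projection layer like this makes them unrepresentable. The open question the article flags is whether that kind of hard boundary simplifies failure analysis or just relocates it.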
The drone platforms being used for testing are small — no bigger than a tea saucer, designed to fit in a human hand. The test scenarios involve teams competing and cooperating simultaneously, which is the kind of problem that scales poorly in simulation and worse in the real world.
Penn's collaborators include Antonio Loquercio, an assistant professor in Penn's Electrical and Systems Engineering department whose research explores control and learning for embodied AI systems at the university's GRASP Lab. Also on the team: Mingmin Zhao, an assistant professor in Computer and Information Science who develops sensing and perception techniques, and Linh Phan, a professor in CIS who studies scalable distributed algorithms for rapidly changing multi-agent environments. CMU's role centers on large-scale physical demonstrations — the university runs one of the oldest and most defense-connected robotics programs in the country, with deep ties to U.S. military research that stretch back decades.
Singapore's investment in American university research to develop swarm AI safety frameworks is not altruism. It is infrastructure. The city-state has a documented interest in autonomous border and port security, and ST Engineering has an existing portfolio of unmanned aerial systems — including the Artos, with swarm-type tactics, and the EagleStrike loitering munition — sold to military customers across Asia and the Middle East, according to EDR Magazine, Shephard Media, and Asian Military Review. If the Penn team produces provable safety guarantees for multi-agent coordination, the most likely early adopters are not emergency response agencies — they are the defense and security ministries that HTX and ST Engineering already serve.
This is not necessarily a criticism. Physical AI safety is a real problem, and provable guarantees embedded in the architecture are a better foundation than post-hoc patching. But the framing matters. A three-year academic project, funded by a homeland security agency, studying adversarial multi-agent coordination with defense-adjacent industry partners, using small drones as test platforms, is not a natural disaster response program. It is a defense technology development pipeline wearing a public safety costume.
The question worth asking is whether a safety-first approach, developed under the umbrella of a security agency rather than a civilian regulator, will produce safety guarantees that constrain military applications or enable them with better justifications. Mangharam's team may be building genuinely safer systems. Whether those systems are safer for everyone, or just safer for the side that commissioned them, is a question nobody in the announcement bothered to answer.
The next three years will tell. The timeline is long enough for the research to mature and short enough for the geopolitical environment to shift. Watch whether the project produces publishable safety frameworks available to civilian developers, or whether the output flows primarily back to HTX and ST Engineering's product pipeline. The difference is not academic.
† Source-reported; not independently verified. The characterization of the technical approach as providing "hard boundaries" may not fully reflect the neurosymbolic/PINNs methodology described in the source.