Venture capitalist says making sure 'killer robots' aren't running around is the 'cost of doing business' in defense AI
- Military leaders argue AI has an important role in future warfare.
- There's been a shift in industry collaboration with the Department of Defense on AI and autonomy.
- AI in military tech must adhere to ethical frameworks, Snowpoint Ventures' Doug Philippone said.
Nobody wants "killer robots," so making sure artificial intelligence systems don't go rogue is the "cost of doing business" in military tech, the founder of a venture capital firm said during a Wednesday discussion of AI technology on the battlefield.
"You have to be able to make AI that can work within an ethical framework, period," Doug Philippone, co-founder of Snowpoint Ventures, a venture capital firm that merges tech talent with defense issues, said during the Reagan Institute's National Security Innovation Base Summit.
"I don't think anybody is, you know, trying to have killer robots that are just running around by themselves," he said.
Philippone explained that companies in the military technology space worth investing in must have "thought through those problems and work in that ethical environment." These aren't limitations on development, he said. They're requirements.
Autonomous machines tend to cause a certain degree of apprehension, especially when such tech is applied to the DoD's "kill chain." While military leaders maintain that the systems are critical for future warfare, the systems also raise ethical concerns about what machine autonomy might ultimately mean.
Times are changing
The defense-technology space appears to be experiencing a major shift in perspective. Last month, Google reversed course on a previous pledge against developing AI weapons, prompting criticism from some employees. The move seemed to reflect a greater willingness among more tech companies to work with the Defense Department on these technologies.
Throughout Silicon Valley, "there's been a massive cultural shift from 'no way we're thinking about defending America' to 'let's get in the fight,'" said Thomas Robinson, the chief operating officer of Domino Data Lab, a London-based AI solutions company.
He said at Wednesday's event that "it is just a palpable difference between even a few years ago."
There has been a sharp rise in smaller, more agile defense technology firms, such as Anduril, breaking into areas like uncrewed systems and autonomy. That growth has spurred a view among some defense tech leaders that the new Trump administration could create DoD contract opportunities potentially worth hundreds of millions, if not billions, of dollars.
That cultural shift has also fueled concerns about a "revolving door" of military officials heading into the venture capital tech realm after retirement, creating possible conflicts of interest.
US military leaders have increasingly prioritized the development of AI capabilities in recent years, with some arguing that whichever side dominates this tech space will be the winner in future conflicts.
Last year, then-Air Force Secretary Frank Kendall said the US is locked in a technological arms race with China. AI is crucial, he said, and "China is moving forward aggressively."
The Air Force has been experimenting with AI-piloted fighter aircraft, among other AI-enabled tools, as have other elements of the US military and American allies. "We're going to be in a world where decisions will not be made at human speed," Kendall said in January. "They're going to be made at machine speed."
Certain areas of armed conflict, including cyber warfare and electronic warfare, are likely to be dominated by AI technologies that assess events unfolding at speeds and scales far beyond human perception.
AI with guardrails
That makes AI a top investment. During Wednesday's discussion, US Rep. Ro Khanna of California expressed support for a proposal from 2020 Democratic presidential candidate Michael Bloomberg, which called for shifting 15% of the Pentagon's massive budget to advanced and emerging tech.
As the nominee for defense secretary, Pete Hegseth committed to prioritizing new technology, writing that "the Department of Defense budget must focus on lethality and innovation." He said that "technology is changing the battlefield."
But ethical considerations remain key. Last year, for instance, senior Pentagon officials discussed guardrails put in place to calm fears that the department was "building killer robots in the basement."
Understanding exactly how an AI tool's algorithms work will be important for ethical battlefield implementation, Philippone noted, and so will understanding the quality of data being absorbed; otherwise, it's "garbage in, garbage out."
"Whether it's Tyson's Chicken or it's the Department of the Navy, you want to be able to say 'this problem is important,'" he explained. "What is the data going in?"
"You understand how it flows through the algorithms, and then you understand the output in a way that is auditable, so you can understand how we got there," he said. "And then you codify those rules."
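The process Philippone describes, checking the data going in, tracing how it flows through the rules, and keeping an auditable record of how each decision was reached, can be sketched in a few lines of Python. This is purely illustrative; the class, field names, and thresholds are invented for the example and are not drawn from any real defense system:

```python
from dataclasses import dataclass, field

@dataclass
class GlassBoxClassifier:
    """Toy 'glass box' pipeline: every step is logged so the output is auditable."""
    audit_log: list = field(default_factory=list)

    def validate(self, record: dict) -> bool:
        # "Garbage in, garbage out": reject records with missing fields.
        ok = all(k in record for k in ("sensor_id", "confidence"))
        self.audit_log.append(("validate", record, ok))
        return ok

    def decide(self, record: dict) -> str:
        # A transparent, codified rule rather than an opaque model.
        if not self.validate(record):
            return "reject: bad data"
        decision = "flag for human review" if record["confidence"] < 0.9 else "auto-process"
        self.audit_log.append(("decide", record, decision))
        return decision

clf = GlassBoxClassifier()
print(clf.decide({"sensor_id": "A1", "confidence": 0.5}))  # flag for human review
print(clf.decide({"sensor_id": "A1"}))                     # reject: bad data
print(clf.audit_log)  # every validation and decision is recorded for later audit
```

The point of the sketch is the audit log: anyone reviewing the system can replay exactly which rule produced which output, which is what makes the "glass box" inspectable in a way a proprietary black box is not.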
Philippone dismissed the opacity of some AI companies' proprietary systems as "BS" and a "black box approach" to technology. Companies, he said, should instead aim for a more transparent approach to artificial intelligence.
"I call it the glass box," he said. Understanding the inner workings of a system can also help avoid hacks, he said, adding that "this is really important from an ethics perspective and really understanding the process of your decision in your organization."
"If you can't audit it," he said, "that leaves you susceptible."