Last week, X’s AI-powered chatbot Grok publicly melted down. This week, xAI’s newly announced “Grok for Government,” which repurposes Grok to serve a range of federal agencies, secured a $200 million Department of Defense contract.
On Monday, the Department of Defense Chief Digital and Artificial Intelligence Office announced $200 million contracts for xAI, the company operating Grok on the social media platform X, and other prominent AI-focused companies, including Anthropic, OpenAI, and Google, to “leverage the technology…of U.S. frontier AI companies to develop agentic AI workflows across a variety of mission areas” — including within the “warfighting domain.”
DoD Chief Digital and Artificial Intelligence Officer Dr. Doug Matty explained the department’s rationale in the press release announcing the contracts. But exactly how the initiative would bolster the DoD’s warfighting efforts, and why it requires hundreds of millions of dollars, remain unclear.
“The adoption of AI is transforming the Department’s ability to support our warfighters and maintain strategic advantage over our adversaries,” Matty said. “Leveraging commercially available solutions into an integrated capabilities approach will accelerate the use of advanced AI as part of our Joint mission essential tasks in our warfighting domain as well as intelligence, business, and enterprise information systems.”
Asked what the contracts might entail within the “warfighting domain,” a DoD official told RS: “DoD plans to leverage the talent and technology of U.S. frontier AI companies to develop agentic AI workflows across a variety of mission areas. The Department will not further elaborate on mission-specific use cases at this time.”
AI tools like large language models (LLMs) have surged in popularity since ChatGPT’s public debut in late 2022, with governments and federal agencies increasingly looking to incorporate AI into their operations.
But when it comes to war, practical and ethical concerns abound. First, the widespread application of AI in wartime, by outsourcing warfighting decisions to AI-powered tools, may depersonalize combat and make it easier to enter into conflict.
Meanwhile, AI-powered tools can be unpredictable: Grok itself spiraled last week after changes to its code triggered a public meltdown. Such tools are also known to pass off false information as true, raising concerns that their use in military contexts could lead to significant battlefield errors and even loss of life.
Tech companies have increasingly collaborated with the defense sector in recent years, especially on AI. OpenAI, for example, dropped its ban on military applications of its AI in early 2024. AI has also been used in military operations during Israel’s war on Gaza and the war in Ukraine, especially as a targeting tool.