The Pentagon’s Defense Counterintelligence and Security Agency (DCSA) is using AI tools to help manage the vast amount of data it processes for security clearance investigations.
This includes organizing and interpreting the private data of millions of employees across federal agencies. DCSA handles 95% of U.S. government security clearances, necessitating millions of investigations every year.
Instead of relying on advanced generative AI models like ChatGPT, DCSA uses AI for practical tasks such as prioritizing existing threats.
DCSA Director David Cattler emphasized the need for these AI systems to be transparent, with clear reasoning behind their recommendations, to avoid the "black box" risks often seen in more complex AI models.
The use of AI at DCSA aims to support decision-making, such as creating real-time heat maps of facilities that the agency secures. These maps would highlight areas of potential risk, helping the agency allocate resources more effectively.
While AI’s ability to organize data can be valuable, experts like Matthew Scherer from the Center for Democracy and Technology caution that AI should not make critical decisions, such as flagging issues in background checks, without human oversight.
AI systems may also misidentify individuals, creating the potential for dangerous errors.
Cattler stressed that the agency avoids using AI to identify new risks, focusing instead on its ability to organize and prioritize existing information.
However, privacy concerns remain, as any data fed into AI systems must be handled carefully to avoid breaches. AI can also unintentionally introduce biases, which could undermine trust in the security clearance process.
The Pentagon is working under oversight from the White House and Congress to ensure that its AI systems are fair, unbiased, and consistent with evolving societal values.