
AI and Threat Assessment

  • Writer: Dr. Brian Van Brunt
  • Feb 25
  • 4 min read
[Header image: figures of Luke Skywalker and C-3PO at the front of a Mardi Gras float.]

I remember listening to a keynote speaker at an ATAP conference about ten years ago who talked about conducting threat assessments without meeting the subject, instead relying on third-party information, police reports, and sometimes interview transcripts, or watching the interview from afar. This was a nuanced concept for me at the time. I had always seen the benefit of asking questions directly to the subject, reading their responses, and moving the interview forward based on what emerged in a more exploratory and naturalistic manner.


With the advent of AI, we are seeing the use of ChatGPT and other platforms to conduct threat assessments and develop mitigation plans. As someone who has conducted hundreds of violence risk and threat assessment interviews, I have found AI helpful and timesaving across various threat areas.


The caveat here (you knew this was coming, especially if you’ve seen the 2025 South Park episode “Sickofancy”) is that AI is a gold-in/gold-out process (better known as garbage in/garbage out). An important part of the violence risk and threat assessment process is maintaining a skeptical, inquisitive mindset, a phrase that appears in the after-action report on the 2013 Arapahoe High School shooting. AI connects facts and processes information, but it currently lacks nuance in applying them. Simply creating a custom ChatGPT to take a fact pattern from a case will not yield results that withstand legal challenge.


As support for a violence risk and threat assessment process, I find AI invaluable. It saves time, connects dots I might miss, and allows for more efficient analysis of writing samples and other documents. What it lacks, however, is the scrutiny and skeptical, inquisitive mindset critical to those doing this work. It does not ask questions like “What did I get wrong?” or “What does the evidence support, and what rival, plausible hypotheses might be equally valid?”


Although the field of threat assessment began with data and statistics from actuarial tables and predictive models, that approach had problems: poor transportability across populations, calibration-versus-discrimination trade-offs that led to overconfidence, and weakness at individual-level prediction. It was replaced by a structured professional judgment model that relies on a trained assessor.


Ask yourself: is all of this confusing or new to you? If so, that is a good indication that you should be cautious when asking AI platforms for an opinion on student risk. Without this context, there is a greater risk of harming a student by forming opinions based on AI assessments.


Below are some areas that I have seen in recent practice where law enforcement and threat assessment professionals have overused AI in developing their conclusions.


  • Source ambiguity: who said what, and how do we know?

    We need to avoid vague phrases like “available documentation indicates…” without clarifying what the documentation is, who authored it, when it was created, and whether it was corroborated.

 

  • Clinical claims that look inferred rather than established

    For example, a mention of hallucinations or paranoia in an incident report from a non-clinical staff member is not clinically confirmed; it could reflect misinterpretation, exaggeration, or missing context (sleep deprivation, substances, stress, metaphorical language, etc.).

 

  • High-impact assertions with missing incident details

    If third-party documentation (including police or faculty incident reports) describes potentially serious issues only in headline form, with no dates, context, severity, precipitating events, or outcomes, the reader is forced to “fill in the blanks,” and blanks tend to get filled with worst-case imagery.

 

  • Mental illness and threat

    It has long been documented that people with mental illness are more likely to be victims of violent crime than to commit it. However, some diagnoses (e.g., schizophrenia and bipolar disorder) have been used in popular movies and media as shorthand for violent tendencies. This can lead AI to conflate a mental illness diagnosis with misconduct and violence risk based on a loose inference of intent.

 

  • Assumptions without supporting evidence

    The computer isn’t going to tell on itself. It will take the facts it is given and apply them. For example, if you include a phrase such as “no confirmed access to weapons,” the AI isn’t going to flag that this statement is not the same as the person having no access to weapons.


So, how do we fix this? Can we use AI in threat assessment at all?


We need to be more aware that AI can do some amazing things as well as some not-so-amazing things.


  • Make sure the evidence supports statements that you feed into an AI threat engine.

  • Ask the AI to critique itself with a prompt such as “Read the following threat report and identify limitations and weaknesses present and include a plan to correct and improve these potential errors.”

  • Separate domains of the report that may become blended. For example, mental illness and emotional stability, targeted violence indicators, and fitness for duty/practice.

  • Include a direct evidence table that allows for clear citations of material used.
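The direct evidence table in the last point can be sketched in code. This is a minimal illustration of one way to structure it, not an established standard; the field names, the sample entries, and the corroboration check are all hypothetical:

```python
# Sketch of a "direct evidence table" for an AI-assisted threat assessment
# report. Every claim carries its source, document, date, and whether it
# was corroborated -- addressing the "source ambiguity" problem above.
# All names, dates, and field choices here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class EvidenceItem:
    claim: str          # the assertion made in the report
    source: str         # who said it (author of the documentation)
    document: str       # what the documentation actually is
    date: str           # when it was created
    corroborated: bool  # independently confirmed, or single-source?

def flag_uncorroborated(items):
    """Return claims that rest on a single, unconfirmed source, so the
    report can label them as such rather than state them as fact."""
    return [i.claim for i in items if not i.corroborated]

evidence = [
    EvidenceItem("Subject made a threatening remark in class",
                 "Faculty incident report (J. Doe)",
                 "Incident report #4821", "2025-01-14", corroborated=True),
    EvidenceItem("Subject experiences paranoia",
                 "Non-clinical staff member",
                 "Email to care team", "2025-01-20", corroborated=False),
]

print(flag_uncorroborated(evidence))
```

Any claim this surfaces should be hedged in the final report (“a single uncorroborated report states…”) rather than presented as an established finding.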

 
 
 



Behavioral Threat Assessment and Management Institute

Brian Van Brunt | brian@dprep.com

Bethany Smith | bethany@dprep.com


© 2025 BTAM Institute. Website by Looking Glass Consulting and Design
