Experimentally testing the effectiveness of webforms versus chatbots for suspicious activity reporting

  • Joel Elson, Callie Vitro, Erin Kearns, Ryan Schuetzler
  • Threats, Terrorism, Theory
  • March 2026

Abstract

Purpose This paper examines how reporting interfaces (webforms vs. chatbots) shape user inputs and explores the potential of generative language models (GLMs) to enhance the reporting of suspicious behavior. Study 1 compared a rule-based chatbot with a standard webform modeled on a local reporting system. Study 2 optimized the webform, tested a corresponding rule-based chatbot, and introduced a ChatGPT-3.5-based system that used adaptive probing rather than fixed dialogue.

Design/methodology/approach In two lab experiments, participants viewed a suspicious scenario video and were randomly assigned to report via webform or chatbot. Reports were evaluated on accuracy, anonymity, trust and usability.

Findings Chatbots performed as well as, or better than, webforms across both studies on measures of report accuracy, anonymity, trust and usability.

Practical implications Findings suggest chatbots are a viable interface for reporting suspicious behavior, maintaining accuracy, user trust and usability relative to traditional webforms, and offer guidance for the design of future reporting technologies.

Originality/value This study contributes to the literature on reporting suspicious behavior through the use of emerging chatbot technology.

Full Paper

Follow link below for the paper. Questions? Contact the authors.
Full Paper