In behavioral health, artificial intelligence is showing up everywhere. But how do we use AI ethically while protecting clients, data, and professional judgment?
In our webinar, “Ethical Behavior Analysis in the Age of AI,” host Amy Cook, BCBA, is joined by Ryan O’Donnell, BCBA, and Dr. David Cox, BCBA-D, to explore what AI literacy really means for behavior analysts, supervisors, and organizations.
AI Is Math, Not Magic
A central theme of the discussion: models are just functions, producing outputs from inputs under a set of assumptions (a point the sketch after this list makes concrete). Once behavior analysts see AI this way, they can start asking better questions:
- What was this model built to predict?
- What data were used to train it?
- How similar are those cases to my clients?
- What assumptions and limitations does the vendor acknowledge?
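A minimal sketch in Python makes the point. The feature names and weights below are invented purely for illustration; they show that a model's output is fully determined by its inputs and the assumptions encoded in its parameters.

```python
# A "model" is just a function: inputs + baked-in assumptions -> output.
# The features and weights here are invented for illustration only.

def predict_session_progress(trials_run: int, percent_correct: float) -> float:
    """Toy linear model scoring likely progress from two session features.

    The weights (0.02 and 0.5) encode the modeler's assumptions about
    which inputs matter and how much; they are not clinical truths.
    """
    return 0.02 * trials_run + 0.5 * percent_correct

# Identical inputs always yield identical outputs; change the assumptions
# (the weights) and the "prediction" changes, even for the same client.
print(predict_session_progress(trials_run=40, percent_correct=0.85))  # 1.225
```

Asking “what was this model built to predict, and from what data?” is really asking which function was fit, and to what.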
This is the same functional thinking BCBAs already use, now applied to algorithms instead of behavior.
Bias, Context, and Human Oversight
Because AI models are trained on specific datasets, some bias is inevitable. Rather than assuming a tool is “objective,” clinicians should:
- Review model documentation (e.g., “model cards”)
- Look at how performance varies across different populations (illustrated in the sketch after this list)
- Treat outputs as one data point in context, not ground truth
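To see why subgroup checks matter, consider this minimal sketch with entirely made-up data: the same model can look strong overall while performing much worse for one group.

```python
from collections import defaultdict

# Hypothetical records, invented for illustration:
# (subgroup, model_prediction, actual_outcome)
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 1), ("group_b", 0, 1),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, predicted, actual in records:
    total[group] += 1
    correct[group] += int(predicted == actual)

for group in sorted(total):
    print(f"{group}: accuracy {correct[group] / total[group]:.0%}")
# group_a: accuracy 100%
# group_b: accuracy 50%
```

Overall accuracy here is 75%, which sounds acceptable until you notice that every error falls on one subgroup.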
The speakers emphasize a contextualist stance:
No AI prediction is “the truth”; each output is simply the behavior of a model under particular conditions. Human experts still provide the nuance, ethics, and clinical judgment.
Ethical Use Inside Organizations
The webinar highlights that education alone isn’t enough. Organizations must arrange contingencies that support ethical AI use, including:
- Open communication about when and how staff use AI
- Guardrails around client privacy and data security
- Policies that prioritize quality of care, not just efficiency
- Clear human checkpoints where clinicians review, edit, and sign off on AI-assisted work
Every click of “accept” on an AI-generated draft is still a human clinical decision.
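What might such a checkpoint look like in software? Here is a minimal sketch; the class and method names are our own invention, not any particular vendor's API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIAssistedNote:
    """A draft note that stays a draft until a named clinician signs off."""
    draft_text: str                     # AI-generated starting point
    final_text: Optional[str] = None    # exists only after human review
    reviewed_by: Optional[str] = None   # credentialed clinician of record

    def sign_off(self, clinician: str, edited_text: str) -> None:
        """Finalizing requires a named human and their (possibly edited) text."""
        if not clinician:
            raise ValueError("A clinician must be identified before sign-off.")
        self.reviewed_by = clinician
        self.final_text = edited_text   # "accept" is itself a clinical decision

note = AIAssistedNote(draft_text="Client completed 8/10 trials independently.")
note.sign_off(clinician="A. Cook, BCBA", edited_text=note.draft_text)
```

The point is structural: the system has no path from draft to final record that bypasses a human.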
Safeguarding Client Data
Public tools like ChatGPT and Gemini are not designed to handle protected health information (PHI) by default. The speakers recommend:
- Never pasting identifiable client information into public AI tools
- Using closed, secure, or locally hosted systems where possible
- Redacting identifiable details if external tools must be used (see the sketch after this list)
- Being transparent with parents and caregivers about how AI supports services
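As one illustration of the redaction step, the sketch below uses a few simple patterns we wrote for this post. Pattern-based redaction alone is not sufficient for real PHI de-identification; treat it as a picture of the idea, not a compliance tool.

```python
import re

# Illustrative patterns only -- real de-identification needs vetted tooling.
REDACTIONS = [
    (re.compile(r"\b\d{2}/\d{2}/\d{4}\b"), "[DATE]"),        # dates like 03/14/2025
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),         # SSN-shaped numbers
    (re.compile(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b"), "[NAME]"),  # naive two-word names
]

def redact(text: str) -> str:
    """Replace obvious identifiers before text leaves a secure system."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(redact("Jamie Smith attended on 03/14/2025 and met 4 of 5 goals."))
# [NAME] attended on [DATE] and met 4 of 5 goals.
```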
Ethical AI use is inseparable from ethical data stewardship.
How Hi Rasmus Fits In
Hi Rasmus is a mission-driven platform built to empower the people who care for children with autism, inspired by a father’s journey with his son Rasmus.
Our platform helps:
- Capture and analyze real-world behavioral health data
- Simplify documentation so clinicians can focus on the child, not the paperwork
- Support school teams, clinics, and organizations of all sizes
- Explore responsible, transparent AI features that keep clinicians in control
We’re on a mission to improve the lives of 1 million children with autism by empowering the professionals and caregivers who support them.