Faculty: Carolyn Troiano | Code: FDB2848


  • Date: 05/28/2026 11:00 AM - 05/28/2026 12:30 PM
  • Location: Online Event


Description

As artificial intelligence tools are increasingly introduced into GxP quality systems, organizations are facing a fundamental challenge: how to establish meaningful validation and control for systems that do not behave in a deterministic manner. Traditional Computer System Validation (CSV) approaches were built for predictable systems with fixed outputs, while AI tools generate variable, context-dependent results. This mismatch often leads to either excessive, low-value documentation or insufficient control.

This webinar provides a structured, practical approach to addressing that gap by combining established CSV thinking with modern Computer Software Assurance (CSA) principles. The session begins by clarifying where traditional validation approaches fall short when applied to AI and why simply extending legacy methods is ineffective. Participants will understand how to reposition validation efforts toward what actually matters in an AI-assisted environment.

The session then focuses on defining intended use as the foundation for control. Participants will learn how to clearly establish how AI tools are used within quality systems, including activities such as document drafting, summarization, and analytical support. Building on this, the session introduces a risk-based classification model to evaluate AI use based on its impact on product quality, patient safety, and data integrity.

Finally, the webinar outlines practical, inspection-ready approaches for implementing appropriate controls. This includes defining user responsibilities, establishing verification expectations, maintaining human accountability, and documenting decisions in alignment with global regulatory expectations. The emphasis remains on creating defensible, efficient systems that are both compliant and operationally practical.

| WHY YOU SHOULD ATTEND

Organizations today are actively using AI within quality systems, but many have not yet established a clear, defensible approach to validation and control. In practice, this results in two extremes: applying traditional CSV methods that do not fit AI behavior, or using AI informally without defined use cases, risk assessment, or verification expectations. Neither approach supports inspection readiness or consistent decision-making.

This session addresses that gap by focusing on what regulators actually expect: demonstrable control over how AI is used, how risks are evaluated, and how outputs are verified.

  • Understand why traditional CSV approaches break down when applied to AI-driven, non-deterministic systems
  • Learn how to apply CSA principles to define intended use and establish risk-based control strategies
  • Identify how to differentiate low-risk and high-risk AI use cases within GxP processes
  • Establish clear accountability, verification, and documentation practices aligned with inspection expectations

Without a structured approach, it becomes difficult to explain how AI fits into your processes, what decisions it influences, and how those decisions are controlled. This webinar provides a practical, implementation-focused model that helps organizations move from uncertainty to clarity, ensuring AI remains a controlled support tool within quality systems rather than a source of regulatory risk.

| AREAS COVERED

  • Limitations of traditional CSV when applied to AI and non-deterministic systems
  • Applying CSA principles to AI use within GxP quality systems
  • Defining intended use and establishing clear boundaries for AI-assisted activities
  • Risk-based classification of AI use based on impact to quality, safety, and data integrity
  • Differentiating low-risk vs. high-risk AI use cases in regulated processes
  • Designing appropriate, risk-based controls for AI-assisted workflows
  • Verification expectations for AI-generated outputs and supporting evidence
  • Maintaining accountability, authorship, and human oversight in AI-assisted decisions
  • Documentation and traceability expectations aligned with ALCOA++ principles
  • How AI use is evaluated during FDA inspections and system audits
  • Common validation and control gaps observed in current AI use
  • Practical steps to implement defensible, inspection-ready AI controls

| WHO SHOULD ATTEND

  • Quality Assurance Departments
  • Quality Control Departments 
  • QA/IT and System Owners 
  • Validation Specialists 
  • Compliance Managers 
  • Regulatory Affairs Departments 
  • Quality System Leads
  • Pharmaceutical Manufacturers 
  • Biotechnology Firms 
  • Medical Device Companies 
  • Contract Manufacturing Organizations (CMOs) 
  • Contract Research Organizations (CROs) 
  • Organizations implementing digital quality systems
  • Computer System Validation, 21 CFR Part 11 & Data Integrity Compliance Specialists

Course Director: Carolyn Troiano

Carolyn Troiano has more than 30 years of experience in computer system validation in the pharmaceutical, medical device, animal health, tobacco, and other FDA-regulated industries. She is currently an independent consultant, advising companies on computer system validation and large-scale IT system implementation projects. During her career, Carolyn worked directly, or on a consulting basis, for many of the larger pharmaceutical companies in the US and Europe. She developed validation programs and strategies in the mid-1980s, when the first FDA guidance on the subject was published, and collaborated with FDA and other industry representatives on 21 CFR Part 11, the FDA’s electronic record/electronic signature regulation. Carolyn has participated in industry conferences. She is active in PMI, AITP, and RichTech, and volunteers for the PMI’s Educational Fund as a project management instructor for non-profit organizations.