Everything life sciences organizations need to know about making AI systems compliant with FDA's electronic records and electronic signatures regulation — from audit trails and validation to sovereign architecture.
FDA 21 CFR Part 11 establishes the criteria under which electronic records and electronic signatures are considered trustworthy, reliable, and equivalent to paper records and handwritten signatures. Published in 1997, it applies to any electronic record created, modified, maintained, archived, retrieved, or transmitted under FDA-regulated activities.
For life sciences AI, this means every electronic record your AI system generates — model outputs, predictions, analytical results, quality assessments — must meet Part 11 requirements if that record supports regulatory decision-making.
The regulation covers three core areas: electronic records (Subpart B), electronic signatures (Subpart C), and the controls necessary to ensure the authenticity, integrity, and confidentiality of both.
Part 11 mandates specific controls for systems that create or maintain electronic records:
1. Validation — Systems must be validated to ensure accuracy, reliability, consistent intended performance, and the ability to discern invalid or altered records. For AI systems, this extends to model validation: demonstrating that your AI produces accurate, reproducible results within defined parameters.
2. Audit Trails — Computer-generated, time-stamped audit trails must independently record the date, time, and identity of operators who create, modify, or delete electronic records. Previously recorded information must not be obscured. For AI, every inference, model update, and data transformation constitutes an auditable event.
3. Access Controls — Systems must limit access to authorized individuals. In practice this means role-based permissions, session timeouts, and device/location restrictions.
4. Operational System Checks — Enforce permitted sequencing of steps and events. For AI pipelines, this means enforcing that data preprocessing occurs before model inference, that validation checks pass before results are released, and that human review occurs where required.
5. Device Checks — Determine the validity of data input or operational instructions. AI systems must validate incoming data against expected formats, ranges, and integrity checks before processing.
6. Authority Checks — Ensure that only authorized individuals can sign electronic records, access the system, or alter records. This goes beyond basic authentication to include signature-level permissions and signing authority verification.
7. Record Retention — Electronic records must be readily retrievable throughout the record retention period. For AI systems generating regulatory evidence, this means maintaining not just the outputs but the complete computational context that produced them.
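The device checks in item 5 can be made concrete with a small input-validation gate. The sketch below is illustrative only: the schema, field names, and ranges are hypothetical stand-ins for whatever your data pipeline actually ingests.

```python
import math

# Hypothetical schema for one incoming assay record: field -> (type, allowed range)
ASSAY_SCHEMA = {
    "sample_id": (str, None),
    "concentration_ng_ml": (float, (0.0, 10_000.0)),
    "ph": (float, (0.0, 14.0)),
}

def check_record(record: dict) -> list[str]:
    """Return a list of integrity violations; an empty list means the record may proceed."""
    errors = []
    for field, (ftype, rng) in ASSAY_SCHEMA.items():
        if field not in record:
            errors.append(f"missing field: {field}")
            continue
        value = record[field]
        if not isinstance(value, ftype):
            errors.append(f"{field}: expected {ftype.__name__}, got {type(value).__name__}")
            continue
        if ftype is float and not math.isfinite(value):
            errors.append(f"{field}: non-finite value")
            continue
        if rng is not None and not (rng[0] <= value <= rng[1]):
            errors.append(f"{field}: {value} outside {rng}")
    return errors
```

A record that fails any check should be quarantined and the rejection itself logged as an auditable event, never silently dropped.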
AI introduces unique compliance challenges that traditional Part 11 implementations don't address:
Model Versioning & Reproducibility — When an AI model is updated (retrained, fine-tuned, or replaced), the new model may produce different outputs for the same inputs. Part 11 requires that you can reproduce any historical result. This means maintaining a complete version history of every model, including weights, hyperparameters, training data references, and the exact inference configuration used for each output.
Training Data as Electronic Records — The datasets used to train AI models may themselves constitute electronic records if they contain or derive from regulated data. Part 11's record integrity requirements extend to training data: you need audit trails for data curation, preprocessing, augmentation, and any modifications.
Black Box Problem — Part 11 requires the ability to "discern invalid or altered records." For opaque AI models, this means implementing explainability measures, confidence scoring, and anomaly detection that can flag when outputs may be unreliable — even if you can't fully explain the model's internal reasoning.
Continuous Learning Systems — AI systems that update automatically (online learning, reinforcement learning) pose special challenges. Each model update is essentially a system change that may require revalidation. FDA expects a risk-based framework for managing these updates.
Multi-Model Pipelines — Modern AI architectures often chain multiple models together. Part 11's operational system checks requirement means you need to validate and audit the entire pipeline, not just individual models. Every handoff between models is an auditable event.
The audit trail is the backbone of Part 11 compliance. For AI systems, effective audit trails must capture:
Data Ingestion Events — When data enters the system: source, format, integrity hash, timestamp, operator identity, and any transformations applied during ingestion.
Model Inference Events — Every time an AI model processes data: input data reference, model version, configuration parameters, output, confidence score, timestamp, and requesting user.
Model Lifecycle Events — Training runs, validation results, deployment decisions, rollbacks, and decommissioning. Each must include the rationale, approver identity, and supporting evidence.
Access Events — System logins, permission changes, failed access attempts, and session activities. These must be immutable and timestamped by the system (not user-adjustable clocks).
Change Events — Any modification to records, configurations, or system parameters must capture the previous value, new value, reason for change, and approver identity.
The key architectural principle: audit trails must be **system-generated** (not relying on users to log their actions), **immutable** (cannot be altered after creation), and **independently verifiable** (the audit system must be separate from the systems it audits).
BioCompute's Evidence Engine implements this as a cryptographically chained audit log where each record includes a hash of the previous record, creating a tamper-evident chain that satisfies the strictest interpretation of Part 11 requirements.
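The hash-chaining idea itself is simple to illustrate. The sketch below is a minimal in-memory version of the technique (not BioCompute's actual implementation): each entry embeds the hash of its predecessor, so altering any historical entry breaks verification of everything after it.

```python
import hashlib
import json
from datetime import datetime, timezone

class ChainedAuditLog:
    """Append-only log where each entry commits to its predecessor's hash."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries: list[dict] = []

    def append(self, event: dict) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else self.GENESIS
        body = {
            "timestamp": datetime.now(timezone.utc).isoformat(),  # system clock, not user-supplied
            "event": event,
            "prev_hash": prev_hash,
        }
        # Hash the canonical JSON of the body, then store the hash alongside it
        body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)
        return body

    def verify(self) -> bool:
        """Recompute every hash; any tampering breaks the chain."""
        prev = self.GENESIS
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev_hash"] != prev:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

A production system would additionally persist the log outside the audited application's write path, so the "independently verifiable" property holds even against a compromised host.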
Part 11 Subpart C establishes that electronic signatures are legally equivalent to handwritten signatures. For AI systems, electronic signatures apply when:
- A scientist approves AI-generated analytical results
- A quality manager signs off on AI-assisted batch release decisions
- A regulatory affairs specialist certifies AI-compiled submission documents
- A clinical researcher validates AI-processed trial data
Requirements for electronic signatures:
- Each signature must be unique to one individual and must not be reused by, or reassigned to, anyone else
- The individual's identity must be verified before an electronic signature is assigned
- Non-biometric signatures must employ at least two distinct identification components (e.g., username + password)
- During a single continuous session, the first signing must use all identification components; subsequent signings must use at least one
- Signatures must be linked to their respective electronic records so they cannot be excised, copied, or transferred to falsify another record
For AI workflows specifically: When an AI system generates a regulatory document (like an Evidence Book), the electronic signature on that document must be linked not just to the document but to the specific AI output and computational state that produced it. This prevents someone from signing a document, then modifying the underlying AI output while the signature remains attached.
FDA expects computer system validation (CSV) following GAMP 5 principles. For AI systems, validation typically follows three phases:
Installation Qualification (IQ) — Verify that the AI system is installed correctly: hardware configuration matches specifications, software versions are documented, network connectivity is confirmed, and security controls are operational. For AI specifically, this includes verifying GPU configurations, model serving infrastructure, and data pipeline components.
Operational Qualification (OQ) — Verify that the system operates as intended within specified ranges: test AI model accuracy against known datasets, verify audit trail generation, confirm access controls enforce permissions correctly, test electronic signature workflows, and validate data integrity across the pipeline. OQ for AI should include boundary testing (extreme inputs), adversarial testing (deliberately malformed data), and reproducibility testing (same input produces same output).
Performance Qualification (PQ) — Verify that the system performs reliably under real-world conditions: run the AI system with production-representative data, verify performance meets acceptance criteria, confirm audit trails capture all expected events, and test failure modes and recovery procedures. PQ should simulate peak loads, concurrent users, and edge cases specific to your therapeutic area.
Ongoing Validation: Part 11 compliance isn't a one-time event. You need a change control process that triggers revalidation when models are updated, data sources change, infrastructure is modified, or regulatory requirements evolve. BioCompute's automated validation framework generates IQ/OQ/PQ documentation continuously, reducing revalidation from weeks to hours.
Part 11 distinguishes between "closed systems" (controlled by the record-responsible entity) and "open systems" (controlled by others). This distinction is critical for AI architecture decisions:
Cloud AI = Open System — When your AI runs on infrastructure controlled by a third party (cloud providers, SaaS platforms), it's likely an open system. This triggers additional Part 11 requirements: document encryption, digital signatures (not just electronic), and additional measures to ensure record integrity across organizational boundaries.
Sovereign AI = Closed System — When your AI runs entirely within your controlled infrastructure — your servers, your network, your access controls — it operates as a closed system under Part 11. This significantly simplifies compliance because you need fewer additional controls and you have complete authority over the system.
The sovereignty advantage for AI:
- Your data never traverses external networks, eliminating transmission security concerns.
- Your models run on infrastructure you control, simplifying validation.
- Your audit trails are generated and stored within your boundary, ensuring integrity.
- Your access controls are managed by your organization, not delegated to a provider.
BioCompute's sovereign architecture is designed specifically for this: a closed-system AI platform where the entire computational pipeline — data ingestion through evidence generation — runs within your controlled enclave. This reduces the Part 11 compliance surface area by eliminating the open-system requirements entirely.