AI Governance Glossary
Plain-language definitions of AI governance, regulatory compliance, and life sciences AI concepts — with context for pharma, biotech, and diagnostics teams.
What Is AI Governance?
AI governance is the set of policies, processes, and controls that ensure artificial intelligence systems operate safely, transparently, and in compliance with regulations. For life sciences, it covers model validation, access control, audit trails, bias monitoring, and regulatory evidence collection across every AI interaction.
What Is Sovereign AI?
Sovereign AI is an approach to artificial intelligence where organizations maintain complete control over their AI infrastructure, data, and model interactions — with no data leaving their defined boundary. This includes on-premises deployment, private cloud isolation, customer-managed encryption keys, and, in air-gapped configurations, zero internet connectivity.
What Is 21 CFR Part 11?
21 CFR Part 11 is the FDA regulation that establishes requirements for electronic records and electronic signatures in FDA-regulated industries. It requires validated software systems, secure user authentication, complete audit trails, and tamper-evident electronic signatures that are legally equivalent to handwritten signatures.
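To make the "tamper-evident" idea concrete, here is a minimal sketch of a hash-chained audit trail, where each record's hash incorporates the previous record's hash so any later edit is detectable. The field names and record contents are hypothetical; a validated system would add authentication, timestamps, and secure storage.

```python
import hashlib
import json

def append_record(trail, user, action):
    """Append an audit record whose hash chains to the previous entry,
    making any later modification of an earlier record detectable."""
    prev_hash = trail[-1]["hash"] if trail else "0" * 64
    record = {"user": user, "action": action, "prev_hash": prev_hash}
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    trail.append(record)
    return record

def verify_trail(trail):
    """Recompute every hash in order; return False if any record was altered."""
    prev_hash = "0" * 64
    for record in trail:
        body = {k: v for k, v in record.items() if k != "hash"}
        if body["prev_hash"] != prev_hash:
            return False
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != record["hash"]:
            return False
        prev_hash = record["hash"]
    return True

trail = []
append_record(trail, "analyst1", "signed batch record 42")
append_record(trail, "qa_lead", "approved batch record 42")
print(verify_trail(trail))   # intact chain verifies
trail[0]["action"] = "edited after the fact"
print(verify_trail(trail))   # tampering breaks verification
```

The same chaining principle underlies the audit-trail integrity checks Part 11 systems are validated against.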
What Is an AI Gateway?
An AI Gateway is an infrastructure layer that sits between users and AI models, governing every interaction before it reaches a language model. It provides prompt guardrails, PII detection and redaction, model routing, cost tracking, and policy enforcement — ensuring AI usage complies with organizational and regulatory requirements.
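As a rough illustration of the guardrail step, the sketch below redacts detected PII and applies a simple policy check before a prompt would be forwarded to a model. The regex patterns and blocked-topic list are hypothetical stand-ins; a production gateway would use dedicated PII/PHI detectors and a real policy engine.

```python
import re

# Hypothetical detectors; real gateways use purpose-built PII/PHI models.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}
BLOCKED_TOPICS = ("patient identity",)  # illustrative policy list

def gateway_check(prompt):
    """Redact detected PII and flag policy violations, returning
    (allowed, sanitized_prompt, findings) before any model sees the text."""
    findings = []
    sanitized = prompt
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(sanitized):
            findings.append(label)
            sanitized = pattern.sub(f"[REDACTED:{label}]", sanitized)
    allowed = not any(topic in sanitized.lower() for topic in BLOCKED_TOPICS)
    return allowed, sanitized, findings

allowed, text, findings = gateway_check("Email results to jane@example.com")
print(findings)  # detected PII categories
print(text)      # sanitized prompt that would be forwarded
```

Only the sanitized prompt leaves the boundary; the findings feed the gateway's audit log and cost/policy reporting.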
What Is Agentic AI?
Agentic AI refers to AI systems that can autonomously plan, execute multi-step tasks, use external tools, and make decisions with minimal human intervention. Unlike simple chatbots, AI agents can chain together actions — searching databases, calling APIs, generating documents, and triggering workflows — to accomplish complex goals.
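The "chaining actions" idea can be sketched as a minimal agent loop: a planner produces tool-call steps, and the runtime executes each one and logs the result. Here the planner is a hard-coded stub (real agents get plans from an LLM), and the tool names are hypothetical.

```python
# Hypothetical tool registry; real agents would call databases and APIs.
TOOLS = {
    "search_db": lambda q: f"3 records matching '{q}'",
    "draft_doc": lambda s: f"Draft summarizing: {s}",
}

def plan(goal):
    """Stub standing in for an LLM planner that emits (tool, argument) steps."""
    return [("search_db", goal), ("draft_doc", "search results")]

def run_agent(goal):
    """Execute each planned step and keep an action log for auditing."""
    log = []
    for tool_name, arg in plan(goal):
        result = TOOLS[tool_name](arg)
        log.append((tool_name, result))
    return log

log = run_agent("stability data for lot 7")
for step in log:
    print(step)
```

The action log is exactly the surface a governance layer audits: every tool invocation an agent makes, in order, with its inputs and outputs.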
What Is MCP Governance?
MCP (Model Context Protocol) governance is the practice of controlling and auditing how AI agents access external tools, data sources, and APIs through the Model Context Protocol. MCP has become the standard interface for connecting AI agents to external systems, with 97M+ monthly SDK downloads — making MCP governance essential for regulated environments.
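In practice, MCP governance often starts with a policy layer in front of tool calls: an allowlist per role, plus an audit record of every attempt. The sketch below is illustrative; the roles, tool names, and policy table are hypothetical, not part of the MCP specification.

```python
# Hypothetical per-role tool allowlists; a real deployment would load
# these from a managed policy store.
POLICY = {
    "clinical_analyst": {"query_trial_data"},
    "intern": set(),
}
audit_log = []

def authorize_tool_call(role, tool, args):
    """Allow the call only if the role's allowlist includes the tool;
    record every attempt, allowed or denied, for audit."""
    allowed = tool in POLICY.get(role, set())
    audit_log.append({"role": role, "tool": tool, "allowed": allowed})
    return allowed

print(authorize_tool_call("clinical_analyst", "query_trial_data", {}))  # permitted
print(authorize_tool_call("intern", "query_trial_data", {}))            # denied
```

Denied attempts are logged alongside permitted ones, which is what makes the record useful as regulatory evidence rather than just access control.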
What Are Evidence Books?
Evidence Books are per-project, per-regulation compliance packages that contain all the documentation, audit trails, and evidence artifacts required for a specific regulatory framework. They map every AI interaction, validation record, and approval workflow to frameworks like FDA 21 CFR Part 11, EU AI Act, GxP/GMP, HIPAA, ISO 42001, or NIST AI RMF.
What Is the EU AI Act?
The EU AI Act is the European Union's comprehensive regulation for artificial intelligence, establishing risk-based requirements for AI systems operating in or affecting EU markets. It classifies AI systems into risk categories (unacceptable, high, limited, minimal) and imposes corresponding obligations — from outright bans to transparency requirements to full conformity assessments.
What Is GxP Compliance?
GxP is a collective term for 'Good Practice' regulations and quality guidelines applicable to the life sciences industry. The 'x' represents the specific field: GMP (Good Manufacturing Practice), GLP (Good Laboratory Practice), GCP (Good Clinical Practice), GDP (Good Distribution Practice), and others.
What Is the FDA AI Credibility Framework?
The FDA AI Credibility Framework is a 7-step process published in the FDA's January 2025 draft guidance for evaluating AI models used in drug and biological product development. It provides a structured approach for sponsors to demonstrate that their AI models are fit for purpose and their outputs can be trusted for regulatory decision-making.
What Is a Forward Deployed Engineer?
A Forward Deployed Engineer is a dedicated technical resource who embeds directly with a customer's team to own implementation outcomes — not just support tickets. FDEs work alongside R&D, IT, and compliance teams to configure, optimize, and scale platform deployments in the customer's specific regulatory and technical environment.
What Is HIPAA Compliance for AI Systems?
HIPAA (Health Insurance Portability and Accountability Act) compliance for AI systems requires that any AI processing protected health information (PHI) meets strict standards for data privacy, security, and access controls. This includes encryption at rest and in transit, access audit logging, minimum necessary data exposure, and Business Associate Agreements.
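The "minimum necessary" principle can be sketched as a role-based field filter with access logging: an AI pipeline only ever receives the PHI fields its role is permitted to see, and every access is recorded. The roles, field names, and record are hypothetical stand-ins.

```python
import datetime

# Hypothetical "minimum necessary" policy: which PHI fields each
# downstream role may receive.
ROLE_FIELDS = {
    "billing": {"patient_id", "insurance_id"},
    "research": {"age", "diagnosis_code"},
}
access_log = []

def minimum_necessary(record, role):
    """Return only the fields the role may access, and log the access."""
    permitted = ROLE_FIELDS.get(role, set())
    released = {k: v for k, v in record.items() if k in permitted}
    access_log.append({
        "role": role,
        "fields": sorted(released),
        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
    return released

record = {"patient_id": "P-001", "name": "Jane Doe",
          "age": 54, "diagnosis_code": "E11.9"}
view = minimum_necessary(record, "research")
print(view)  # only the research-permitted fields are released
```

Encryption at rest and in transit, and the Business Associate Agreement, sit around this filter; the point of the sketch is that exposure is scoped before any AI system touches the data.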
Ready to govern your AI?
See how BioCompute provides full-stack AI governance for regulated life sciences.