AI Governance & Technological Independence | Lexiane
AI governance by architecture: 25 abstraction interfaces, SHA-256 audit trail, RAGAS quality metrics, RBAC/ABAC access control. EU AI Act and ISO 42001 ready.
AI governance is not reducible to a set of internal policies and compliance declarations. It rests on an organization’s ability to demonstrate — to its executives, its auditors, its regulators — that its artificial intelligence systems are under control: control over the decisions they make, the data they process, the vendors they depend on, and the errors they might make.
This capacity for demonstration cannot be improvised. It is built into the system architecture from the very first design decisions — or it is not built at all.
Lexiane is designed so that every governance requirement finds its answer in the architecture rather than in process alone. This document details how.
The regulatory context: three frameworks that converge
The European AI Act — Regulation (EU) 2024/1689
Published in the Official Journal of the European Union on 12 July 2024 and entering into force on 1 August 2024, the Artificial Intelligence Regulation establishes a binding legal framework for AI systems placed on the market or put into service in the European Union.
For high-risk AI systems — whose obligations become fully applicable no later than 2 August 2026 — the AI Act imposes in particular:
- Article 9 — A documented risk management system, maintained throughout the system’s lifecycle
- Article 10 — Data governance requirements: quality of training and operational data, relevance, representativeness, absence of identified biases
- Article 12 — Retention of event logs enabling the tracing of system operation over a defined period, particularly for autonomous systems
- Article 13 — System transparency toward users and competent authorities
- Article 17 — A quality management system covering design, development, validation, and post-market surveillance
For organizations deploying AI-based documentary systems in sectors classified as high-risk by Annex III of the regulation — critical infrastructure, healthcare, education, employment, administration of justice — these obligations are not optional.
ISO/IEC 42001:2023 — AI Management System
Published in December 2023, ISO/IEC 42001 is the first international standard defining the requirements of an Artificial Intelligence Management System (AIMS). It applies to organizations that develop or use AI systems, and is structured around the same principles as the ISO 9001 and ISO 27001 management standards: policy, objectives, planning, support, operations, performance evaluation, improvement.
The standard requires in particular the documentation of AI usage policies, identification and treatment of risks specific to AI systems, traceability of system decisions, and demonstration of control over organizational and social impacts.
NIST AI RMF 1.0 — AI Risk Management Framework
Published in January 2023 by the National Institute of Standards and Technology, the NIST AI Risk Management Framework organizes AI risk management around four functions: Govern, Map, Measure, Manage. The Govern function is transversal: it conditions the effectiveness of the other three by establishing the necessary policies, roles, responsibilities, and governance culture.
These three frameworks converge on the same fundamental requirements: decision traceability, data control, vendor independence, human oversight mechanisms, and continuous demonstration capacity. This is precisely what Lexiane’s architecture makes possible.
The five governance pillars Lexiane addresses
1. Data governance
Data governance for a RAG system covers three distinct dimensions: the quality of ingested data, the protection of sensitive data, and data residency.
Quality and traceability of ingested data
Every document that enters Lexiane is recorded in the persistent MetadataStore — a SQLite database with versioned migrations. The document identifier, its ingestion date, format, processing parameters, and collection membership are preserved. At any time, it is possible to reconstruct the exact state of the document base at a given date, identify which documents contributed to a response, and trace every modification made to the corpus.
The SHA-256 audit trail covers every processing step: document ingested, fragments created, embeddings computed, entities extracted, responses generated. Each entry is signed by the hash of the previous one. The chain is tamper-evident: any retrospective modification is mathematically detectable. This register constitutes technical proof that data was processed in compliance with defined policies — without relying on a declaration.
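The chaining principle can be sketched in a few lines of Rust. This is an illustrative model, not Lexiane's implementation: std's DefaultHasher stands in for SHA-256 so the sketch needs no external crate, and all names are hypothetical.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Each entry's hash covers the previous entry's hash, so any
// retrospective edit breaks every subsequent link in the chain.
#[derive(Debug, Clone)]
struct AuditEntry {
    event: String,
    prev_hash: u64,
    hash: u64,
}

fn entry_hash(event: &str, prev_hash: u64) -> u64 {
    let mut h = DefaultHasher::new();
    event.hash(&mut h);
    prev_hash.hash(&mut h);
    h.finish()
}

fn append(trail: &mut Vec<AuditEntry>, event: &str) {
    let prev_hash = trail.last().map_or(0, |e| e.hash);
    let hash = entry_hash(event, prev_hash);
    trail.push(AuditEntry { event: event.to_string(), prev_hash, hash });
}

/// Recompute every link; returns false if any entry was altered after the fact.
fn verify(trail: &[AuditEntry]) -> bool {
    let mut prev = 0u64;
    for e in trail {
        if e.prev_hash != prev || e.hash != entry_hash(&e.event, prev) {
            return false;
        }
        prev = e.hash;
    }
    true
}
```

Verification needs nothing but the trail itself, which is what makes the register independently checkable by an auditor.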
Personal data protection
The PII filter operates upstream of the document ingestion pipeline, before any vectorization. Personal data detected — email addresses, phone numbers, IBAN, social security numbers, IP addresses — is processed according to configurable policies by category: typed masking, deletion, hashing. The applied policy is recorded in the audit trail.
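A minimal sketch of per-category policies, under loud assumptions: detection here is deliberately naive (any token containing '@' counts as an email), only two of the policies are shown, and every name is illustrative rather than Lexiane's API.

```rust
// Hypothetical per-category PII policies applied before vectorization.
enum PiiPolicy {
    Mask,   // replace with a typed placeholder
    Delete, // drop the token entirely
}

fn apply_policy(_token: &str, policy: &PiiPolicy) -> String {
    match policy {
        PiiPolicy::Mask => "[EMAIL]".to_string(),
        PiiPolicy::Delete => String::new(),
    }
}

// Toy detector: a real filter would use proper recognizers per category.
fn filter_text(text: &str, policy: &PiiPolicy) -> String {
    text.split_whitespace()
        .map(|tok| {
            if tok.contains('@') {
                apply_policy(tok, policy)
            } else {
                tok.to_string()
            }
        })
        .filter(|s| !s.is_empty())
        .collect::<Vec<_>>()
        .join(" ")
}
```

The key property the sketch preserves is ordering: filtering happens on the raw text, before any embedding is computed.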
Data residency and localization
In air-gapped configuration, Lexiane has no active network interface toward the outside. Data localization is an architectural property, not a contractual one. In hybrid or cloud configuration, each external adapter is an identified, documented, and replaceable component — the mapping of outbound data flows is precise and auditable.
2. Model governance and vendor independence
One of the most underestimated governance risks in AI deployments is undocumented vendor dependency: a system built around a single language model or a single cloud provider, whose terms of use, pricing, and deprecation policies evolve unilaterally.
Lexiane addresses this risk by architecture. Twenty-five typed abstraction interfaces define all contact points between the kernel and external components. Each component — language model, embeddings engine, vector database, reranker, document parser — is connected via one of these interfaces. Currently supported providers include:
| Component | Supported providers |
|---|---|
| LLM | OpenAI · Anthropic · Ollama · Mistral.rs (local) |
| Embeddings | OpenAI · Candle (local) · Ollama |
| Vector store | Qdrant · pgvector · SQLite (in-memory) |
| Reranker | Cohere |
| Sparse search | Tantivy (BM25) |
| Document parser | Native Rust · Docling |
Substituting a component does not touch the pipeline, the business logic, or the already-indexed data. It translates into a configuration change. This is not a flexibility promise — it is a verifiable property in the system architecture.
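The mechanism behind this property can be sketched with a trait and a factory. The trait, the two implementations, and the provider names below are hypothetical, not Lexiane's actual interface definitions; they only show why a swap is a configuration change.

```rust
// The kernel depends only on this typed interface, never on a provider.
trait EmbeddingsPort {
    fn name(&self) -> &'static str;
    fn embed(&self, text: &str) -> Vec<f32>;
}

struct LocalCandle;
struct RemoteApi;

impl EmbeddingsPort for LocalCandle {
    fn name(&self) -> &'static str { "candle-local" }
    fn embed(&self, text: &str) -> Vec<f32> {
        vec![text.len() as f32] // placeholder vector
    }
}

impl EmbeddingsPort for RemoteApi {
    fn name(&self) -> &'static str { "remote-api" }
    fn embed(&self, text: &str) -> Vec<f32> {
        vec![text.len() as f32] // placeholder vector
    }
}

// Which implementation runs is decided by configuration alone.
fn from_config(provider: &str) -> Box<dyn EmbeddingsPort> {
    match provider {
        "candle" => Box::new(LocalCandle),
        _ => Box::new(RemoteApi),
    }
}
```

Because the pipeline holds only a `Box<dyn EmbeddingsPort>`, replacing the provider requires no change to pipeline code.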
Model lifecycle governance
Each response produced by Lexiane can include metadata about the model used for generation — version, provider, parameters. Token consumption statistics per stage are accumulated in the pipeline context (UsageStats) and accessible after execution. This data enables tracking of performance and cost evolution over time, and detecting behavioral drift related to a model update.
3. Operational pipeline governance
A production AI system must be observable, instrumented, and controlled. Lexiane exposes three distinct mechanisms for this purpose.
Lifecycle hooks and observability
The lifecycle hook system (PipelineHooks) enables instrumenting each pipeline stage without modifying its code: on_stage_start, on_stage_complete, on_stage_error, on_pipeline_complete. These callbacks receive the stage name, its status, and structured metadata. They enable feeding an external monitoring system in real time — Prometheus, Datadog, OpenTelemetry, or any internal supervision system — without coupling between the pipeline and the observation infrastructure.
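The decoupling described above can be sketched as follows. The callback names follow those cited in the text; the signatures, the `EventLog` collector, and the stage runner are assumptions made for the example.

```rust
// Hypothetical lifecycle-hook interface: callbacks observe each stage
// without the pipeline knowing anything about the monitoring backend.
trait PipelineHooks {
    fn on_stage_start(&mut self, stage: &str);
    fn on_stage_complete(&mut self, stage: &str, duration_ms: u64);
    fn on_stage_error(&mut self, stage: &str, error: &str);
    fn on_pipeline_complete(&mut self, total_ms: u64);
}

// A trivial collector standing in for a Prometheus or OpenTelemetry exporter.
#[derive(Default)]
struct EventLog {
    events: Vec<String>,
}

impl PipelineHooks for EventLog {
    fn on_stage_start(&mut self, stage: &str) {
        self.events.push(format!("start:{stage}"));
    }
    fn on_stage_complete(&mut self, stage: &str, duration_ms: u64) {
        self.events.push(format!("done:{stage}:{duration_ms}ms"));
    }
    fn on_stage_error(&mut self, stage: &str, error: &str) {
        self.events.push(format!("error:{stage}:{error}"));
    }
    fn on_pipeline_complete(&mut self, total_ms: u64) {
        self.events.push(format!("pipeline:{total_ms}ms"));
    }
}

// The pipeline calls the trait; it never imports the observer's backend.
fn run_stage(hooks: &mut dyn PipelineHooks, stage: &str) {
    hooks.on_stage_start(stage);
    // ... stage work would happen here ...
    hooks.on_stage_complete(stage, 12);
}
```

Swapping `EventLog` for a real exporter changes nothing in `run_stage`, which is the "instrumentation without coupling" claim in miniature.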
Execution metrics
PipelineMetrics and StageMetrics provide aggregated timing data after each execution: duration of each stage, total pipeline duration, stages in error. These metrics enable detecting performance regressions, identifying bottlenecks, and tracing the evolution of system behavior over time.
Document access control
The AccessControl port implements a mechanism for filtering retrieval results based on the requesting user’s rights. It supports RBAC (role-based access control) and ABAC (attribute-based access control) models. Retrieved documents are filtered before generation: a user cannot obtain a response built from documents they do not have access to, even if those documents exist in the vector store.
This mechanism is particularly critical in multi-user environments where data of different sensitivity coexist in the same corpus: HR data, financial data, project data, data classified by level.
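The essential property, filtering before generation, can be shown with a small RBAC sketch. The `Doc` type, its fields, and the role model are illustrative assumptions, not the AccessControl port's actual types.

```rust
// A retrieved document carries the roles allowed to read it.
struct Doc {
    id: u32,
    allowed_roles: Vec<&'static str>,
}

// RBAC-style filtering: documents the user cannot read never reach the
// prompt, even though they exist in the vector store.
fn filter_by_role<'a>(docs: &'a [Doc], user_roles: &[&str]) -> Vec<&'a Doc> {
    docs.iter()
        .filter(|d| d.allowed_roles.iter().any(|r| user_roles.contains(r)))
        .collect()
}
```

An ABAC variant would match arbitrary attributes rather than role names, but the placement in the pipeline, between retrieval and generation, is the same.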
Query routing by complexity
The QueryRouter port classifies each incoming query according to its complexity and routes it to the appropriate pipeline mode: linear pipeline for direct questions, GraphRAG pipeline for relational questions, simple search for direct lookups, or agentic mode for complex analyses requiring multiple retrieval iterations. This query governance mechanism optimizes computational resources and ensures that complex queries receive appropriate processing — without leaving this choice to the end user.
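The shape of such a router can be sketched with crude heuristics. Real classification would rely on a trained classifier or an LLM; the word count and keyword checks below are placeholder assumptions, as are the enum names.

```rust
// The four pipeline modes named in the text.
#[derive(Debug, PartialEq)]
enum Route {
    SimpleSearch,
    Linear,
    GraphRag,
    Agentic,
}

// Toy classifier: relational wording routes to GraphRAG, otherwise
// query length decides between lookup, linear, and agentic modes.
fn route(query: &str) -> Route {
    let words = query.split_whitespace().count();
    let relational = query.contains("relationship") || query.contains("between");
    match (words, relational) {
        (_, true) => Route::GraphRag,
        (0..=4, _) => Route::SimpleSearch,
        (5..=15, _) => Route::Linear,
        _ => Route::Agentic,
    }
}
```

Whatever the classifier, the governance point is that the routing decision is made by the system and is loggable, not left to the end user.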
4. Quality governance and human oversight
The AI Act, in Articles 14 and 26, insists on the need for effective human oversight over high-risk AI systems. This oversight is only effective if it is informed — which presupposes mechanisms for measuring the quality of system outputs, and feedback loops enabling users to influence system behavior.
Automated quality evaluation (RAGAS)
The QualityEvaluator port implements RAGAS-type evaluation metrics on each produced response:
- Faithfulness — is the response supported by the retrieved sources, or did the system extrapolate beyond the provided context?
- Answer relevance — does the response actually address the question asked?
- Context precision — are the retrieved passages specifically relevant to the question?
- Context recall — did the system retrieve all available information in the corpus?
These metrics are calculated continuously on production exchanges. They constitute the system’s quality dashboard — without which human oversight can only be exercised on impressions, not measurements.
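To make the last two definitions concrete, here is a toy computation of context precision and recall over passages with binary relevance labels. Production RAGAS scoring is LLM-assisted and works without explicit labels; this sketch only fixes the arithmetic, and the function names are illustrative.

```rust
// Fraction of retrieved passages that are actually relevant.
fn context_precision(retrieved: &[u32], relevant: &[u32]) -> f64 {
    if retrieved.is_empty() {
        return 0.0;
    }
    let hits = retrieved.iter().filter(|&id| relevant.contains(id)).count();
    hits as f64 / retrieved.len() as f64
}

// Fraction of the relevant passages that were actually retrieved.
fn context_recall(retrieved: &[u32], relevant: &[u32]) -> f64 {
    if relevant.is_empty() {
        return 1.0;
    }
    let hits = relevant.iter().filter(|&id| retrieved.contains(id)).count();
    hits as f64 / relevant.len() as f64
}
```

High recall with low precision means the retriever drowns the answer in noise; the reverse means it misses evidence. The dashboard needs both.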
Input and output guardrails
Input guardrails (InputGuardrail) detect and block prompt injection attempts, out-of-scope queries, and content likely to violate usage policies. Output guardrails (OutputGuardrail) verify the produced response before transmission: detection of toxic content, sensitive data leakage, out-of-scope responses.
These mechanisms are the system’s automated control points — the barriers that prevent it from operating outside the limits the organization has defined for it.
Relevance gate and abstention
Before generation, RelevanceGateStage evaluates the overall confidence score of the retrieved context. If this score is below the configured threshold, the system refrains from generating a response and explicitly signals the context insufficiency. This behavior — preferring abstention over an ungrounded response — is a fundamental governance requirement for systems deployed in contexts where an incorrect response has measurable consequences.
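The abstention behavior reduces to a threshold check before generation. The decision type, the averaging scheme, and the names below are assumptions for illustration, not RelevanceGateStage's actual code.

```rust
// Either generate, or abstain with an explicit, loggable reason.
enum GateDecision {
    Generate,
    Abstain { reason: String },
}

// Average the retrieval confidence scores; below the configured
// threshold, skip generation entirely rather than answer ungrounded.
fn relevance_gate(context_scores: &[f32], threshold: f32) -> GateDecision {
    let avg = if context_scores.is_empty() {
        0.0
    } else {
        context_scores.iter().sum::<f32>() / context_scores.len() as f32
    };
    if avg < threshold {
        GateDecision::Abstain {
            reason: format!("context confidence {avg:.2} below threshold {threshold:.2}"),
        }
    } else {
        GateDecision::Generate
    }
}
```

The explicit reason matters as much as the refusal: an abstention rate and its causes are themselves governance metrics.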
Human feedback loop
The FeedbackStore port records user feedback on produced responses: validation, correction, satisfaction score. This data feeds a feedback register exploitable for continuous system improvement — identification of domains where retrieval quality is insufficient, detection of poorly handled query types, measurement of perceived system evolution over time.
This feedback loop is the operational mechanism of the continuous human oversight that governance frameworks require. It does not replace supervision — it makes supervision actionable.
5. Organizational governance and accountability
Governance of an AI system is not only a technical matter. It presupposes identified roles, documented responsibilities, and incident response capability.
Separation of responsibilities by architecture
Lexiane’s hexagonal architecture materializes the separation of responsibilities at the code level. The certified kernel (vectrant-core) is under the responsibility of the product team — its properties are mechanically verifiable. Each external adapter is under the responsibility of the team integrating it. The interfaces between the two are explicit, typed, and tested contracts.
This separation facilitates the distribution of responsibilities among development teams, security teams, and business teams — without grey areas or implicit couplings.
Immutability of the audit register
The SHA-256 audit trail is append-only and tamper-evident by design. In the event of an incident — incorrect response, unauthorized access, non-compliant data processing — the sequence of events can be reconstructed with certainty and independently. The cryptographic chain constitutes an event register that can be presented to a regulator, an internal auditor, or a court, with the guarantee that any alteration of its content is detectable.
Schema versioning and migration
The SQLite and pgvector adapters maintain a versioned migration register (_vectrant_migrations). Migrations are applied sequentially at startup — only versions not yet applied execute. This approach guarantees that the database state is always consistent with the version of the deployed software, and that the history of schema evolutions is documented and reproducible.
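The selection logic is simple enough to sketch. The in-memory register below stands in for the `_vectrant_migrations` table, and the `Migration` type is an assumption; only the rule matters: versions at or below the recorded one never run again, and the rest run in order.

```rust
// A schema migration identified by a monotonically increasing version.
struct Migration {
    version: u32,
    sql: &'static str,
}

// Return only the not-yet-applied migrations, in version order,
// so startup is idempotent and the schema matches the deployed code.
fn pending<'a>(migrations: &'a [Migration], applied_up_to: u32) -> Vec<&'a Migration> {
    let mut todo: Vec<&Migration> = migrations
        .iter()
        .filter(|m| m.version > applied_up_to)
        .collect();
    todo.sort_by_key(|m| m.version);
    todo
}
```

Recording each applied version back into the register after execution is what makes the schema history documented and reproducible.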
What Lexiane produces as governance evidence
An AI governance framework — whether the AI Act, ISO/IEC 42001, or an internal framework — requires evidence, not declarations. Here is what Lexiane makes available:
| Governance requirement | Evidence provided by Lexiane |
|---|---|
| System decision traceability | Chained SHA-256 audit trail, immutable, exportable |
| Identification of processed data | Persistent MetadataStore with complete ingestion history |
| Personal data protection | PII filtering documented in the audit trail, policy by category |
| Control over third-party vendors | 25 abstraction interfaces — exhaustive dependency mapping |
| Data access control | AccessControl port (RBAC/ABAC) — filtering before generation |
| Output quality measurement | RAGAS metrics continuously — faithfulness, relevance, precision, recall |
| Human oversight mechanisms | Input/output guardrails, relevance gate, feedback loop |
| Operational observability | PipelineHooks + PipelineMetrics — instrumentation without coupling |
| Resilience against vendor failures | À la carte architecture — substitution without pipeline modification |
| No undefined behavior | #![forbid(unsafe_code)] enforced by the compiler — zero unwrap() in production |
What AI governance requires from the systems you deploy — and what Lexiane makes possible
AI governance is not a state to achieve. It is a continuous process: observe, measure, correct, document. The frameworks — AI Act, ISO/IEC 42001, NIST AI RMF — converge on this point: governance is exercised over time, on systems that must have been designed to make it possible.
A system built on an opaque framework, coupled to a single vendor, without decision traceability and without quality measurement mechanisms, can satisfy documentary requirements. It cannot satisfy real governance requirements — because the evidence that real governance demands is structurally absent.
Lexiane does not claim to solve all of an organization’s governance problems. It provides the technical mechanisms that make governance possible: traceability, measurement, control, independence. The policy that relies on these mechanisms remains the responsibility of the organization deploying the system.
Frequently asked questions from AI governance and compliance officers
Our organization must comply with the AI Act. Does Lexiane fall within the scope of high-risk systems? The applicability of the AI Act to a given system depends on its use and deployment context — not on the technology used. A RAG system deployed in a medical, judicial, or recruitment context may fall within the scope of Annex III. Evaluating this applicability is a matter for specialized legal counsel. What we can state: if your system falls within this scope, Lexiane’s architecture provides the tools to respond to the obligations of Articles 9, 10, 12, 13, and 17.
Is ISO/IEC 42001 applicable to our organization if we do not develop AI ourselves? ISO/IEC 42001 applies both to organizations that develop AI systems and to those that use them. Its scope covers the entire lifecycle — development, deployment, operation, retirement. If you deploy Lexiane in your operations, the standard’s requirements on governance of AI systems used apply to you.
How do we demonstrate to our board of directors that our AI system is under control? The governance dashboard that Lexiane enables is based on four measurable indicators: response faithfulness rate (RAGAS), abstention rate (queries rejected by the relevance gate), guardrail incidents (blocked injections, filtered content), and evolution of user satisfaction (FeedbackStore). These metrics enable factual, real-time AI governance reporting, without depending on subjective assessment.
Can Lexiane’s audit logs be integrated into our SIEM?
The lifecycle hook system (PipelineHooks) enables exporting each pipeline event to any external collection system — Splunk, Elastic, or any SIEM consuming structured events. The SHA-256 audit trail can also be exported independently for regulatory archiving.
Start the conversation about your governance framework.
AI system governance is a tailor-made subject, shaped by your sector, your framework, your organization, and the systems you deploy. We do not offer a generic response to a question that is not generic.
We offer a structured exchange on your existing governance framework, your regulatory obligations, and what Lexiane’s architecture can concretely bring to them.
What you can expect from this exchange:
- A response within 48 business hours
- A contact who knows the AI Act, ISO/IEC 42001, and the operational challenges of AI governance
- An honest mapping of what Lexiane covers — and what it does not
→ Contact us
No commercial commitment. A substantive conversation.
Regulatory and normative references cited in this document:
- Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 (AI Act)
- ISO/IEC 42001:2023, Information technology — Artificial intelligence — Management system, December 2023
- NIST AI Risk Management Framework 1.0, National Institute of Standards and Technology, January 2023
- Regulation (EU) 2016/679 (GDPR)
- Regulation (EU) 2022/2554 (DORA)
Request access to the Auditable Core
Sign up to be notified when our Core audit programme opens. In accordance with our privacy policy, your professional email address will be used exclusively for this technical communication, with no subsequent marketing use. Access distributed via secure private registry.
Contact us