Scans agentic AI infrastructure, evaluates against multiple compliance frameworks, and produces defensible reports.
4 active modules covering EU and US frameworks (648 criteria in total), with 12 more in the pipeline. Each is self-contained: its own knowledge base, classification engine, assessment criteria, and report configuration.
Regulation (EU) 2024/1689 (EU AI Act). Classification against Annex III high-risk domains, obligation mapping per role (provider/deployer), 15 agentic risk flags, penalty exposure calculation.
348 criteria • 311 questions • 13 doc types
Live — Regulation (EU) 2022/2554 (DORA). Digital operational resilience for financial entities. ICT risk management, incident reporting, third-party oversight, penetration testing.
131 criteria • 14 doc types • 3 tiers
Beta — Regulation (EU) 2016/679 (GDPR). AI-scoped data protection assessment. Automated decision-making (Art 22), DPIAs, lawful basis, international transfers, data subject rights.
97 criteria • 12 doc types • 4 tiers
Beta — NIST AI 100-1. US voluntary framework. Govern, Map, Measure, Manage functions. GAI companion profile. Cross-references to EU AI Act controls.
72 criteria • 15 doc types • 4 tiers
Beta — 12 additional modules in draft: NIS2, PRA SS1/23, NCSC, HIPAA, CDDO/GDS, ISO 42001, ISO 27001, SOC 2, FCA AI, UK AISI, IL3, CE Plus. Each follows the same build pattern — authoritative source, structured knowledge base, classification engine, assessment questions, report configuration.
How data moves through the system, from your infrastructure to a compliance report.
HEX 165 Agent runs in your environment and scans both code and documentation in one pass. Connector plugins auto-detect and inspect agentic frameworks. The documentation scanner matches files against required document types using keyword matching. Documents never leave your environment. Output: a pre-upload summary for your confirmation before anything is sent.
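As a sketch, the documentation scanner's keyword matching might look like the following — the document-type names and keyword sets here are illustrative assumptions, not the shipped rules:

```python
from pathlib import Path

# Hypothetical keyword sets per required document type (illustrative only).
DOC_TYPE_KEYWORDS = {
    "risk_assessment": {"risk assessment", "risk register", "hazard analysis"},
    "dpia": {"dpia", "data protection impact"},
    "technical_documentation": {"architecture", "system design", "model card"},
}

def match_doc_types(path: Path) -> set[str]:
    """Return the document types whose keywords appear in the file name or text.

    Matching runs locally; only the resulting type names would be reported,
    never the document content.
    """
    text = (path.name + " " + path.read_text(errors="ignore")).lower()
    return {
        doc_type
        for doc_type, keywords in DOC_TYPE_KEYWORDS.items()
        if any(kw in text for kw in keywords)
    }
```

In practice each module would ship its own doc-type list (13 for the EU AI Act, 14 for DORA, and so on); the mechanism stays the same.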
System model exported to the platform over TLS. Stored in the Engagement Database with timestamp, connector ID, and knowledge base version.
Rules engine loads the active module's knowledge base, classifies each agent, maps obligations per role, evaluates criteria, produces coverage summary by type, generates documentation gap report, detects agentic flags. Where multiple modules are active, each is evaluated independently with cross-references mapped. Produces: findings + targeted questions per module.
You answer targeted questions through the platform. Each answer triggers re-evaluation — findings update, outstanding questions reduce. Loop until all questions are answered or marked N/A.
Both teams (ours and yours) review findings in the platform. Each finding confirmed or disputed. Disputed findings flagged for resolution. Nothing finalised without mutual agreement.
Report generated from confirmed findings. Per-module sections with classification, gaps, remediation, architecture flags, and penalty exposure (where applicable). Cross-framework summary shows overlapping compliance. Exportable as HTML, JSON, PDF.
The rules engine is deterministic. No AI interprets law. The same system model always produces the same result. Five steps: classify, map obligations, evaluate, flag risks, cross-reference.
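The five steps can be sketched as a small deterministic pipeline. All hooks and field names below are illustrative stand-ins, not the real module API; cross-referencing (step five) runs across modules and is omitted here:

```python
from dataclasses import dataclass

@dataclass
class Module:
    """Illustrative module shape: each hook is a plain deterministic function."""
    name: str
    classify: callable   # system_model -> tier
    obligations: dict    # role -> [criterion_id]
    criteria: dict       # criterion_id -> predicate(system_model) -> bool
    flags: dict          # flag_id -> predicate(system_model) -> bool

def evaluate(module: Module, system_model: dict, role: str) -> dict:
    tier = module.classify(system_model)                   # 1. classify
    applicable = module.obligations.get(role, [])          # 2. map obligations per role
    findings = {cid: module.criteria[cid](system_model)    # 3. evaluate criteria
                for cid in applicable}
    raised = [fid for fid, pred in module.flags.items()    # 4. detect agentic flags
              if pred(system_model)]
    return {"tier": tier, "findings": findings, "flags": raised}
```

Because every hook is a pure function of the system model, re-running `evaluate` on the same model yields the same result.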
The engine delegates to the active module's classification engine. Each framework has its own tiers and rules:
Classification + role determines which assessment sections apply. The module defines which roles exist (provider, deployer, controller, processor, financial entity, etc.) and which obligations each role carries. Only applicable sections are evaluated.
Assessment questions are checked against the system model. For each question:
Every non-compliant finding includes a remediation action derived from the legal text.
Architecture-specific risk indicators are detected from the system model — risks unique to agentic and multi-agent systems that go beyond standard compliance checks. Each module defines its own flags with framework-specific legal references.
Where modules overlap (e.g., EU AI Act risk management and NIST AI RMF Govern/Map functions, or GDPR data protection and EU AI Act Article 10), the cross-reference engine maps equivalent controls. One compliant finding can satisfy requirements across multiple frameworks.
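A minimal sketch of such an equivalence map, with illustrative control identifiers drawn from the overlaps named above:

```python
# Hypothetical equivalence map between framework controls. One compliant
# (framework, control) pair propagates to its mapped equivalents.
CROSS_REFS = {
    ("eu_ai_act", "risk_management"): [("nist_ai_rmf", "govern"),
                                       ("nist_ai_rmf", "map")],
    ("gdpr", "data_protection"):      [("eu_ai_act", "art_10")],
}

def satisfied_controls(compliant: set) -> set:
    """Expand compliant controls via cross-references (one hop)."""
    result = set(compliant)
    for pair in compliant:
        result.update(CROSS_REFS.get(pair, []))
    return result
```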
15 architecture risks specific to agentic AI that standard compliance checklists miss. Detected automatically from the system model. Each module maps these to its own legal basis.
The system has no way to halt all agents in a safe state. In a multi-agent workflow, this means no controlled shutdown.
EU AI Act: Art 14(4)(e) • DORA: Art 11 • NIST: Manage 2.4 — Critical
Events are not automatically recorded. Compliance across all frameworks requires audit trails of system behaviour.
EU AI Act: Art 12(1) • DORA: Art 10 • GDPR: Art 30 • NIST: Measure 2.6 — Critical
An agent creates other agents at runtime with LLM-generated instructions and LLM-chosen tools. Agent count, behaviour, and tool access are not human-designed.
EU AI Act: Arts 9, 12, 14, 15 • NIST: Map 1.5 — Critical
Agent instructions direct it to conceal its AI nature, or it uses a human persona with no AI disclosure while interacting with users.
EU AI Act: Art 50(1) • NIST: Govern 4.1 — Critical
Agent is instructed to make decisions affecting people without human oversight. All frameworks require proportionate human control.
EU AI Act: Art 14(1) • GDPR: Art 22 • NIST: Govern 1.3 — Critical
Agents operate autonomously with no defined human review points in the workflow.
EU AI Act: Art 14(1) • NIST: Govern 1.3 — High
An agent operates without human involvement. Oversight must be commensurate with the level of autonomy.
EU AI Act: Art 14(2) • NIST: Manage 2.2 — High
Agent behaviour changes after deployment. May constitute a substantial modification requiring reassessment.
EU AI Act: Art 3(23) • NIST: Manage 4.2 — High
Stop mechanism exists but only halts one agent, not the entire workflow.
EU AI Act: Art 14(4)(e) — High
A fully autonomous agent has tools that modify data or execute actions, with no human approval gate.
EU AI Act: Art 14(1), 14(4)(d) • NIST: Manage 2.4 — High
Agent accesses personal data via tools but no documented data governance is in place.
EU AI Act: Art 10(2) • GDPR: Arts 5, 6, 35 — High
Agents pass data between each other. Post-market monitoring must cover inter-agent behaviour.
EU AI Act: Art 72(2) • NIST: Measure 2.5 — Medium
Logs retained for less than the minimum required period (e.g., six months under the EU AI Act).
EU AI Act: Art 26(6) — Medium
Agent uses a general-purpose AI model but upstream provider documentation may be missing.
EU AI Act: Art 53(1)(b) — Medium
Agent interacts with users and generates content but its instructions contain no reference to AI disclosure.
EU AI Act: Art 50(1) • NIST: Govern 4.1 — Medium
6 built connectors (4 supported, 2 coming soon) plus 7 planned. Auto-detected from project dependencies — no manual configuration. Shared across all compliance modules.
API connectors: call cloud platform APIs to retrieve agent metadata. Structured responses, richest data.
Code connectors: parse configuration files and Python code to extract agent definitions. Read access to the codebase only.
Compliance documentation often lives outside the codebase. Document source connectors search external platforms for required documentation (risk assessments, DPIAs, technical docs, policies) using the same keyword matching as the local scanner. Only titles, URLs, and match status are returned — document content is not transmitted.
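A sketch of that metadata-only contract — field names are assumptions; the point is that there is no content field to transmit:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DocMatch:
    """What a document source connector returns: metadata only."""
    title: str
    url: str
    doc_type: str
    matched: bool   # keyword match status — never the document body

def to_upload_payload(matches: list) -> list:
    """Serialise matches for upload; no content field exists to leak."""
    return [
        {"title": m.title, "url": m.url,
         "doc_type": m.doc_type, "matched": m.matched}
        for m in matches
    ]
```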
What each connector extracts automatically vs what generates a targeted question.
| Data Point | LangGraph | Anthropic | Pydantic AI | CrewAI | Bedrock | OpenAI | Relevance |
|---|---|---|---|---|---|---|---|
| Agent name / description | Auto | Auto | Auto | Auto | Auto | Auto | Technical documentation |
| Model used | Auto | Auto | Auto | Auto | Auto | Auto | Model governance |
| Tools / functions | Auto | Auto | Auto | Auto | Auto | Auto | Risk management |
| Instructions / prompt | Auto | Auto | Auto | Auto | Auto | Auto | Technical documentation |
| Guardrails | — | — | — | — | Auto | — | Risk management |
| Stop mechanism | Auto | Auto | Auto | Partial | Question | Question | Human oversight |
| Human checkpoints | Auto | Auto | Auto | Question | Question | Question | Human oversight |
| Logging enabled | Auto | Auto | Auto | Question | Question | Question | Record-keeping |
| Orchestration / flow | Auto | Auto | Auto | Auto | Question | Question | Human oversight |
| Decision domains | Question | Question | Question | Question | Question | Question | Risk classification |
| Personal data processing | Question | Question | Question | Question | Question | Question | Data governance |
Auto = extracted automatically. Question = generates a targeted question. As connectors improve, more fields move from Question to Auto.
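The Auto/Question split can be sketched simply: any field the connector could not extract yields a question from a template. The templates and field names below are illustrative:

```python
# Fields the connector failed to extract come back as None and generate
# a targeted question; extracted fields do not.
QUESTION_TEMPLATES = {
    "human_checkpoints": "Where in the workflow does a human review agent decisions?",
    "decision_domains": "Which decision domains does this agent operate in?",
    "personal_data": "Does this agent process personal data via its tools?",
}

def targeted_questions(extracted: dict) -> list[str]:
    """Return one question per template field the connector left empty."""
    return [QUESTION_TEMPLATES[k] for k, v in extracted.items()
            if v is None and k in QUESTION_TEMPLATES]
```

Answering a question fills the field in the system model, so re-evaluation asks it no further.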
648 compliance criteria across 4 modules, each with its own structured knowledge base. Every entry traces to an authoritative source — EUR-Lex for EU regulations, NIST.gov for US frameworks.
Each module at a glance:
| Module | Criteria | Doc Types | Classification Tiers | Source | Penalties |
|---|---|---|---|---|---|
| EU AI Act | 348 | 13 | Prohibited / High-Risk / Limited / Minimal | EUR-Lex | Up to €35M / 7% |
| DORA | 131 | 14 | Critical Functions / Standard / Microenterprise | EUR-Lex | Up to 2% turnover |
| GDPR | 97 | 12 | High-Risk / Special Category / Standard / Minimal | EUR-Lex | Up to €20M / 4% |
| NIST AI RMF | 72 | 15 | High / Moderate / Low / Minimal Impact | NIST.gov | Voluntary |
Every data entry must have a verifiable citation (article, paragraph, source URL). No entry exists without a source. Where legislation is ambiguous, it is flagged as "awaiting clarification" — never filled with interpretation. Knowledge bases are updated only when official legal text changes.
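An illustrative entry shape enforcing that citation rule — the field names are assumptions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class KBEntry:
    """Illustrative knowledge-base entry: no entry without a citation."""
    criterion_id: str
    article: str        # e.g. "Art 12(1)"
    source_url: str     # EUR-Lex or NIST.gov
    text: str
    status: str = "active"   # or "awaiting clarification"

def validate(entry: KBEntry) -> None:
    """Reject any entry lacking a verifiable citation."""
    if not (entry.article and entry.source_url):
        raise ValueError(f"{entry.criterion_id}: entry lacks a verifiable citation")
```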
Your data stays with you. The agent is read-only. No AI interprets law. Everything is inspectable before you run it.
The HEX 165 Agent runs in your environment. Raw source code and documents never leave. Only normalised metadata — agent names, tools, architecture patterns — is transmitted to the platform.
Our agents never modify your files, systems, or infrastructure. They scan and report. No writes, no installs, no background processes.
The compliance engine is deterministic — static rules derived from the legislation. Same input always produces the same output. No ML models, no probabilistic reasoning.
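Determinism can be checked mechanically: hash a canonical serialisation of the engine's output and confirm repeated runs agree. `run_engine` below is a stand-in for the real engine, not its API:

```python
import hashlib
import json

def run_engine(model: dict) -> dict:
    """Deterministic stand-in for the rules engine (illustrative only)."""
    return {"findings": sorted(model.get("gaps", []))}

def result_digest(system_model: dict) -> str:
    """Stable digest of the engine output: sort keys so that serialisation
    is canonical, then hash. Equal models must give equal digests."""
    canonical = json.dumps(run_engine(system_model), sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()
```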
Mount your codebase read-only (`:ro`) and run with `--network=none`; upload separately.
Run `hex165 manifest` to discover agents and files without reading their content. Edit the manifest to exclude items.
Run with `--network=none` to scan with no network access. Review the output before uploading.
Deterministic rules engine — not an AI system under Article 3(1). EU data residency. GDPR self-service built in.
The HEX 165 platform uses a deterministic rules engine — no AI, no ML, no inference. It does not meet the definition of an AI system under Article 3(1) and is not subject to the Act's obligations.
If AI-assisted features are added in future, they will be clearly labelled, advisory only, and compliant with Article 50 transparency requirements.
Data residency: Germany (EU). No international transfers. No third-party data sharing. Minimal data collection. Full GDPR self-service: data export (Art 15/20), account deletion (Art 17), data inventory summary — all available in the platform UI.
ISO 27001 and ISO 42001 certification programme underway.