System Overview

The system overview diagram, summarised:

  • Customer environment: the HEX 165 Agent scans code and documentation locally. It is read-only, auto-detects connectors, and exports over TLS. Supported connectors: LangChain/LangGraph, Anthropic SDK (Swarm patterns), Pydantic AI (typed agents), CrewAI (YAML + Python); coming soon: AWS Bedrock Agents API, OpenAI Assistants API. Document sources: local files (supported), Confluence (coming soon), SharePoint, Notion, and Google Drive (planned). Raw data stays in your environment; only the normalised system model leaves, as JSON over TLS.
  • Platform (EU-hosted): Module Loader (discovers modules, validates manifests, loads knowledge bases); Compliance Modules (EU AI Act 348, DORA 131, GDPR 97, NIST AI RMF 72 criteria, each with its own classification, rules, and knowledge base); Rules Engine (classifies by module-specific tiers, maps obligations per role, evaluates criteria, detects agentic risk flags, generates targeted questions; deterministic, no AI inference); Cross-Reference Engine (maps overlapping controls between modules, so one scan gives multi-framework coverage); Report Generator (classification and legal basis, gaps and remediation, architecture flags, penalty exposure, cross-framework summary; HTML, JSON, PDF); Engagement DB (system models, findings per module, customer answers, confirmations, and reports; PostgreSQL with audit trail).
  • Web application: multi-module dashboard, engagement workspace, targeted questions, findings review, confirmation flow, report export. Our team operates, reviews, and confirms; your team answers, reviews, and confirms. The output is a report of gaps, remediation, and deadlines.
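To make "only the normalised system model leaves" concrete, here is a minimal sketch of what that exported model might contain. The field names and shape are illustrative assumptions, not the actual export schema; the "What is transmitted" list further down this page defines the real scope.

```typescript
// Hypothetical shape of the normalised system model the agent exports.
// Field names are illustrative; the actual schema may differ.
interface AgentRecord {
  name: string;
  description: string;
  model: { provider: string; id: string };
  tools: { name: string; description: string }[];
  instructions: string;         // system prompt / instruction text
  stopMechanism: boolean;       // presence only, not implementation
  loggingFramework?: string;    // framework detected, never log contents
}

interface SystemModel {
  connectorId: string;          // e.g. "langgraph", "crewai"
  knowledgeBaseVersion: string;
  agents: AgentRecord[];
  orchestration: { from: string; to: string; checkpoint: boolean }[];
  documents: { filename: string; matchedDocType?: string }[]; // names + match status only
}
```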

Compliance Modules

4 active modules covering EU and US frameworks (648 total criteria), with 12 more in the pipeline. Each module is self-contained, with its own knowledge base, classification engine, assessment criteria, and report configuration.

EU AI Act

Regulation (EU) 2024/1689. Classification against Annex III high-risk domains, obligation mapping per role (provider/deployer), 15 agentic risk flags, penalty exposure calculation.

348 criteria • 311 questions • 13 doc types • Live

DORA

Regulation (EU) 2022/2554. Digital operational resilience for financial entities. ICT risk management, incident reporting, third-party oversight, penetration testing.

131 criteria • 14 doc types • 3 tiers • Beta

GDPR

Regulation (EU) 2016/679. AI-scoped data protection assessment. Automated decision-making (Art 22), DPIAs, lawful basis, international transfers, data subject rights.

97 criteria • 12 doc types • 4 tiers • Beta

NIST AI RMF

NIST AI 100-1. US voluntary framework. Govern, Map, Measure, Manage functions. GAI companion profile. Cross-references to EU AI Act controls.

72 criteria • 15 doc types • 4 tiers • Beta

Module Pipeline

12 additional modules in draft: NIS2, PRA SS1/23, NCSC, HIPAA, CDDO/GDS, ISO 42001, ISO 27001, SOC 2, FCA AI, UK AISI, IL3, CE Plus. Each follows the same build pattern — authoritative source, structured knowledge base, classification engine, assessment questions, report configuration.

Data Flow

How data moves through the system, from your infrastructure to a compliance report.

1. Scan

The HEX 165 Agent runs in your environment and scans both code and documentation in one pass. Connector plugins auto-detect and inspect agentic frameworks. The documentation scanner matches files against required document types using keyword matching. Documents never leave your environment. Output: a pre-upload summary for your confirmation before anything is sent.

2. Upload

The system model is exported to the platform over TLS and stored in the Engagement Database with a timestamp, connector ID, and knowledge base version.

3. Evaluate

The rules engine loads the active module's knowledge base, classifies each agent, maps obligations per role, evaluates criteria, produces a coverage summary by document type, generates a documentation gap report, and detects agentic risk flags. Where multiple modules are active, each is evaluated independently and cross-references are mapped between them. Output: findings and targeted questions per module.
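As a rough illustration of that per-module pipeline, the sketch below walks one system model through a single module. All names (evaluateModule, Module, and so on) are assumptions for illustration, not the platform's actual code.

```typescript
// Illustrative module evaluation pipeline; all names and types are assumptions.
type SystemModel = { agents: unknown[]; documents: unknown[] }; // see the system-model sketch above

interface Finding { criterionId: string; status: "compliant" | "non_compliant" | "undetermined"; }

interface Module {
  id: string;
  classify(model: SystemModel): string;                        // module-specific tier
  applicableSections(tier: string, role: string): string[];
  evaluate(sections: string[], model: SystemModel): Finding[];
  detectFlags(model: SystemModel): string[];                   // agentic risk flag IDs
}

function evaluateModule(module: Module, model: SystemModel, role: string) {
  const tier = module.classify(model);                         // 1. classify
  const sections = module.applicableSections(tier, role);      // 2. map obligations
  const findings = module.evaluate(sections, model);           // 3. evaluate criteria
  const flags = module.detectFlags(model);                     // 4. agentic risk flags
  const questions = findings                                   // 5. targeted questions
    .filter(f => f.status === "undetermined")
    .map(f => `Please provide evidence for criterion ${f.criterionId}.`);
  return { tier, findings, flags, questions };
}
```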

4. Supplement

You answer targeted questions through the platform. Each answer triggers re-evaluation: findings update and the list of outstanding questions shrinks. The loop continues until every question is answered or marked N/A.

5. Confirm

Both teams (ours and yours) review findings in the platform. Each finding is confirmed or disputed; disputed findings are flagged for resolution. Nothing is finalised without mutual agreement.

6. Report

Report generated from confirmed findings. Per-module sections with classification, gaps, remediation, architecture flags, and penalty exposure (where applicable). Cross-framework summary shows overlapping compliance. Exportable as HTML, JSON, PDF.

How Rules Are Applied

The rules engine is deterministic. No AI interprets law. The same system model always produces the same result. Five steps: classify, map obligations, evaluate, flag risks, cross-reference.

1. Classification

The engine delegates to the active module's classification engine. Each framework has its own tiers and rules:

  • EU AI Act — maps agent purpose against Annex III high-risk domains (8 areas, 25 use cases). Systems profiling natural persons are always high-risk. GPAI model usage is detected from the provider field (a classification sketch follows this list).
  • DORA — determines if the entity performs critical/important functions, is a microenterprise (simplified regime), or falls under standard obligations.
  • GDPR — assesses whether AI processing involves automated decision-making, special category data, or triggers DPIA requirements.
  • NIST AI RMF — classifies by impact level (high/moderate/low/minimal) based on the AI system's context and consequences.
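A minimal sketch of how the EU AI Act branch of that delegation might look, assuming invented function and domain names and an abridged Annex III list. The point is that classification is a lookup over the system model, not an inference step.

```typescript
// Illustrative EU AI Act classification; domain list abridged, names assumed.
type EuAiActTier = "prohibited" | "high_risk" | "limited" | "minimal";

// A few of the Annex III high-risk domains, for illustration only.
const ANNEX_III_DOMAINS = ["employment", "education", "essential_services", "law_enforcement"];

function classifyEuAiAct(agent: {
  purposeDomain: string;             // derived from the agent's description and instructions
  profilesNaturalPersons: boolean;
}): EuAiActTier {
  // Systems profiling natural persons are always high-risk.
  if (agent.profilesNaturalPersons) return "high_risk";
  if (ANNEX_III_DOMAINS.includes(agent.purposeDomain)) return "high_risk";
  return "minimal";
}
```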
2. Obligation Mapping

Classification + role determines which assessment sections apply. The module defines which roles exist (provider, deployer, controller, processor, financial entity, etc.) and which obligations each role carries. Only applicable sections are evaluated.
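Obligation mapping can be pictured as a static lookup from (tier, role) to the assessment sections that apply. The table contents below are invented for illustration; each module ships its own.

```typescript
// Illustrative (tier, role) -> applicable assessment sections lookup.
// Section names are placeholders, not the modules' real section IDs.
const OBLIGATIONS: Record<string, Record<string, string[]>> = {
  high_risk: {
    provider: ["risk_management", "data_governance", "technical_documentation", "logging"],
    deployer: ["human_oversight", "log_retention", "instructions_for_use"],
  },
  limited: {
    provider: ["transparency"],
    deployer: ["transparency"],
  },
};

function applicableSections(tier: string, role: string): string[] {
  // Only the sections returned here are evaluated for this engagement.
  return OBLIGATIONS[tier]?.[role] ?? [];
}
```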

3. Evaluation

Assessment questions are checked against the system model. For each question:

  • Auto-evaluate — if HEX 165 captured the evidence (e.g., logging enabled, stop mechanism present), the finding is determined automatically
  • Customer answer — if you have answered this question, your answer is used
  • Undetermined — if neither is available, a targeted question is generated

Every non-compliant finding includes a remediation action derived from the legal text.
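The three-way outcome described above amounts to a small decision function per criterion. A sketch with assumed names and types:

```typescript
// Illustrative per-criterion resolution: auto-evidence first, then customer answer, then question.
interface Criterion {
  id: string;
  autoEvidence?: (model: Record<string, unknown>) => boolean;  // present if HEX 165 can capture it
  remediation: string;                                         // derived from the legal text
}

type Outcome =
  | { status: "compliant" | "non_compliant"; remediation?: string }
  | { status: "undetermined"; question: string };

function resolve(criterion: Criterion, model: Record<string, unknown>,
                 answers: Map<string, boolean>): Outcome {
  const evidence = criterion.autoEvidence?.(model) ?? answers.get(criterion.id);
  if (evidence === undefined) {
    // Neither scan evidence nor a customer answer: generate a targeted question.
    return { status: "undetermined", question: `Please confirm: ${criterion.id}` };
  }
  return evidence
    ? { status: "compliant" }
    : { status: "non_compliant", remediation: criterion.remediation };
}
```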

4. Agentic Risk Flags

Architecture-specific risk indicators are detected from the system model — risks unique to agentic and multi-agent systems that go beyond standard compliance checks. Each module defines its own flags with framework-specific legal references.
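For example, the "no stop mechanism" flag listed further down is, in principle, a structural check over the system model. A hedged sketch, with assumed field names:

```typescript
// Illustrative detection of the "no stop mechanism" agentic risk flag.
interface AgentSummary { name: string; hasStopMechanism: boolean; }

interface RiskFlag {
  id: string;
  severity: "critical" | "high" | "medium";
  legalRefs: Record<string, string>;   // each module maps the flag to its own legal basis
}

function detectNoStopMechanism(agents: AgentSummary[]): RiskFlag | null {
  // Fires only when no agent in the workflow can halt it in a safe state.
  if (agents.length > 0 && agents.every(a => !a.hasStopMechanism)) {
    return {
      id: "no_stop_mechanism",
      severity: "critical",
      legalRefs: { eu_ai_act: "Art 14(4)(e)", dora: "Art 11", nist_ai_rmf: "Manage 2.4" },
    };
  }
  return null;
}
```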

5. Cross-References

Where modules overlap (e.g., EU AI Act risk management and NIST AI RMF Govern/Map functions, or GDPR data protection and EU AI Act Article 10), the cross-reference engine maps equivalent controls. One compliant finding can satisfy requirements across multiple frameworks.
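One way to picture the cross-reference data is as a list of control equivalences between modules, so a finding confirmed under one framework can be reused for another. The structure and entries below are illustrative assumptions, not the platform's actual mappings:

```typescript
// Illustrative cross-reference entries between module controls.
interface CrossReference {
  from: { module: string; control: string };
  to: { module: string; control: string };
  relation: "equivalent" | "partial";
}

const CROSS_REFERENCES: CrossReference[] = [
  { from: { module: "eu_ai_act", control: "Art 9 risk management" },
    to:   { module: "nist_ai_rmf", control: "Govern / Map" },          relation: "partial" },
  { from: { module: "gdpr", control: "Art 5 data protection principles" },
    to:   { module: "eu_ai_act", control: "Art 10 data governance" },  relation: "partial" },
];

// A compliant finding on `from` can be carried over to `to` in the cross-framework summary.
function carryOver(confirmedControl: string) {
  return CROSS_REFERENCES.filter(x => x.from.control === confirmedControl).map(x => x.to);
}
```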

Agentic Risk Flags

15 architecture risks specific to agentic AI that standard compliance checklists miss. Detected automatically from the system model. Each module maps these to its own legal basis.

No stop mechanism

The system has no way to halt all agents in a safe state. In a multi-agent workflow, this means no controlled shutdown.

EU AI Act: Art 14(4)(e) • DORA: Art 11 • NIST: Manage 2.4 — Critical

No automatic logging

Events are not automatically recorded. Compliance across all frameworks requires audit trails of system behaviour.

EU AI Act: Art 12(1) • DORA: Art 10 • GDPR: Art 30 • NIST: Measure 2.6 — Critical

Dynamic agent spawner

An agent creates other agents at runtime with LLM-generated instructions and LLM-chosen tools. Agent count, behaviour, and tool access are not human-designed.

EU AI Act: Arts 9, 12, 14, 15 • NIST: Map 1.5 — Critical

Deceptive instructions

Agent instructions direct it to conceal its AI nature, or it uses a human persona with no AI disclosure while interacting with users.

EU AI Act: Art 50(1) • NIST: Govern 4.1 — Critical

Autonomous decisions affecting persons

Agent is instructed to make decisions affecting people without human oversight. All frameworks require proportionate human control.

EU AI Act: Art 14(1) • GDPR: Art 22 • NIST: Govern 1.3 — Critical

No human checkpoints + autonomous agents

Agents operate autonomously with no defined human review points in the workflow.

EU AI Act: Art 14(1) • NIST: Govern 1.3 — High

Fully autonomous agent

An agent operates without human involvement. Oversight must be commensurate with the level of autonomy.

EU AI Act: Art 14(2) • NIST: Manage 2.2 — High

Adapts post-deployment

Agent behaviour changes after deployment. May constitute a substantial modification requiring reassessment.

EU AI Act: Art 3(23) • NIST: Manage 4.2 — High

Partial stop (single agent only)

Stop mechanism exists but only halts one agent, not the entire workflow.

EU AI Act: Art 14(4)(e) — High

Autonomous write/execute without approval

A fully autonomous agent has tools that modify data or execute actions, with no human approval gate.

EU AI Act: Art 14(1), 14(4)(d) • NIST: Manage 2.4 — High

Personal data access without governance

Agent accesses personal data via tools but no documented data governance is in place.

EU AI Act: Art 10(2) • GDPR: Arts 5, 6, 35 — High

Multi-agent interaction

Agents pass data between each other. Post-market monitoring must cover inter-agent behaviour.

EU AI Act: Art 72(2) • NIST: Measure 2.5 — Medium

Short log retention

Logs retained for less than the minimum required period (e.g., 6 months under EU AI Act).

EU AI Act: Art 26(6) — Medium

GPAI model documentation gap

Agent uses a general-purpose AI model but upstream provider documentation may be missing.

EU AI Act: Art 53(1)(b) — Medium

User-facing agent without AI disclosure

Agent interacts with users and generates content but its instructions contain no reference to AI disclosure.

EU AI Act: Art 50(1) • NIST: Govern 4.1 — Medium

Connectors

6 built connectors (4 supported, 2 coming soon) plus 7 planned. Auto-detected from project dependencies — no manual configuration. Shared across all compliance modules.
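"Auto-detected from project dependencies" can be read as a dependency-manifest lookup: the scanner checks files such as requirements.txt or package.json for known framework packages and enables the matching connector. A hedged sketch; the mapping table is an assumption, not the agent's real detection logic:

```typescript
import { existsSync, readFileSync } from "node:fs";

// Illustrative dependency -> connector mapping (abridged; not the real table).
const CONNECTOR_HINTS: Record<string, string> = {
  langgraph: "langgraph",
  langchain: "langgraph",
  crewai: "crewai",
  "pydantic-ai": "pydantic_ai",
  anthropic: "anthropic_sdk",
};

function detectConnectors(projectDir: string): string[] {
  const found = new Set<string>();
  // Only requirements.txt is shown here; other manifests could be checked the same way.
  const reqs = `${projectDir}/requirements.txt`;
  if (existsSync(reqs)) {
    const deps = readFileSync(reqs, "utf8").toLowerCase();   // read-only: the file is only inspected
    for (const [dep, connector] of Object.entries(CONNECTOR_HINTS)) {
      if (deps.includes(dep)) found.add(connector);
    }
  }
  return [...found];
}
```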

API Connectors

Call cloud platform APIs to retrieve agent metadata. Structured responses, richest data.

  • AWS Bedrock — ListAgents, GetAgent, action groups, guardrails, IAM roles
  • OpenAI Assistants — List/Get assistants, tools, instructions, model, metadata
  • Azure AI Foundry — Planned.
  • LangGraph Platform — Planned.

Code Scanners

Parse configuration files and Python code to extract agent definitions. Read access to the codebase only.

  • LangChain / LangGraph — graph orchestration, tool bindings, stop mechanisms. Supported.
  • Anthropic SDK — Swarm patterns, handoff functions, dynamic spawner detection. Supported.
  • Pydantic AI — typed agents, structured outputs, logfire integration. Supported.
  • CrewAI — YAML config or Python code scan, delegation, LLM resolution. Supported.
  • AutoGen, Semantic Kernel, OpenAI Agents SDK, Vercel AI SDK, LlamaIndex, Haystack, DSPy — Planned.

Document Sources

Compliance documentation often lives outside the codebase. Document source connectors search external platforms for required documentation (risk assessments, DPIAs, technical docs, policies) using the same keyword matching as the local scanner. Only titles, URLs, and match status are returned — document content is not transmitted.

  • Local files — scans project directory for .md, .pdf, .docx, .txt, .yaml, .html. Supported.
  • Confluence — search via CQL, match pages against documentation requirements. Coming soon.
  • SharePoint — Planned.
  • Google Drive — Planned.
  • Notion — Planned.
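The "only titles, URLs, and match status" rule can be illustrated with a small matcher: each required document type carries a keyword list, and a source connector returns only whether a title matched. Names and keyword lists here are assumptions:

```typescript
// Illustrative document-requirement matcher: only titles and match status are kept.
interface DocRequirement { docType: string; keywords: string[]; }

interface RemoteDoc { title: string; url: string; }            // content is never fetched

const REQUIREMENTS: DocRequirement[] = [
  { docType: "risk_assessment", keywords: ["risk assessment", "risk register"] },
  { docType: "dpia",            keywords: ["dpia", "data protection impact"] },
];

function matchDocuments(docs: RemoteDoc[]) {
  return docs.map(doc => {
    const title = doc.title.toLowerCase();
    const matched = REQUIREMENTS.find(r => r.keywords.some(k => title.includes(k)));
    // Only title, URL, and matched doc type leave the source platform.
    return { title: doc.title, url: doc.url, matchedDocType: matched?.docType ?? null };
  });
}
```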

Extraction Matrix

What each connector extracts automatically vs what generates a targeted question.

| Data Point | LangGraph | Anthropic | Pydantic AI | CrewAI | Bedrock | OpenAI | Relevance |
|---|---|---|---|---|---|---|---|
| Agent name / description | Auto | Auto | Auto | Auto | Auto | Auto | Technical documentation |
| Model used | Auto | Auto | Auto | Auto | Auto | Auto | Model governance |
| Tools / functions | Auto | Auto | Auto | Auto | Auto | Auto | Risk management |
| Instructions / prompt | Auto | Auto | Auto | Auto | Auto | Auto | Technical documentation |
| Guardrails | — | — | — | — | Auto | — | Risk management |
| Stop mechanism | Auto | Auto | Auto | Partial | Question | Question | Human oversight |
| Human checkpoints | Auto | Auto | Auto | Question | Question | Question | Human oversight |
| Logging enabled | Auto | Auto | Auto | Question | Question | Question | Record-keeping |
| Orchestration / flow | Auto | Auto | Auto | Auto | Question | Question | Human oversight |
| Decision domains | Question | Question | Question | Question | Question | Question | Risk classification |
| Personal data processing | Question | Question | Question | Question | Question | Question | Data governance |

Auto = extracted automatically. Question = generates a targeted question. As connectors improve, more fields move from Question to Auto.
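To connect the matrix to the Supplement step above: any field a connector cannot extract automatically becomes a targeted question in the engagement workspace. A minimal sketch, with invented question text:

```typescript
// Illustrative generation of targeted questions from non-auto extraction fields.
type Extraction = "auto" | "partial" | "question";

const QUESTION_TEXT: Record<string, string> = {
  decision_domains: "Which decision domains does this agent operate in?",
  personal_data:    "Does this agent process personal data, and under what lawful basis?",
};

function targetedQuestions(agentName: string, fields: Record<string, Extraction>): string[] {
  return Object.entries(fields)
    .filter(([, status]) => status !== "auto")             // partial and unknown fields both need input
    .map(([field]) => `${agentName}: ${QUESTION_TEXT[field] ?? `Please describe "${field}".`}`);
}

// Example: a CrewAI agent where the stop mechanism was only partially detected.
targetedQuestions("support-triage-crew", {
  stop_mechanism: "partial",
  decision_domains: "question",
  personal_data: "question",
});
```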

Knowledge Base

648 compliance criteria across 4 modules, each with its own structured knowledge base. Every entry traces to an authoritative source — EUR-Lex for EU regulations, NIST.gov for US frameworks.

Module Architecture

Each module contains:

  • Assessment questions — criteria with expected answers, remediation, and source citations
  • Criteria register — source of truth for criteria counts and types
  • Documentation requirements — required document types with keyword matching for the scanner
  • Classification rules — framework-specific tier logic
  • Cross-references — mappings to equivalent controls in other modules

Platform Totals

  • 648 compliance criteria across 4 modules
  • 54 required document types
  • 2 jurisdictions (3 EU modules, 1 US module)
  • 15 agentic risk flags
  • 42 cross-reference mappings between modules

| Module | Criteria | Doc Types | Classification Tiers | Source | Penalties |
|---|---|---|---|---|---|
| EU AI Act | 348 | 13 | Prohibited / High-Risk / Limited / Minimal | EUR-Lex | Up to €35M / 7% |
| DORA | 131 | 14 | Critical Functions / Standard / Microenterprise | EUR-Lex | Up to 2% turnover |
| GDPR | 97 | 12 | High-Risk / Special Category / Standard / Minimal | EUR-Lex | Up to €20M / 4% |
| NIST AI RMF | 72 | 15 | High / Moderate / Low / Minimal Impact | NIST.gov | Voluntary |

Accuracy Standard

Every data entry must have a verifiable citation (article, paragraph, source URL). No entry exists without a source. Where legislation is ambiguous, it is flagged as "awaiting clarification" — never filled with interpretation. Knowledge bases are updated only when official legal text changes.

Trust & Security

Your data stays with you. The agent is read-only. No AI interprets law. Everything is inspectable before you run it.

Your data stays with you

The HEX 165 Agent runs in your environment. Raw source code and documents never leave. Only normalised metadata — agent names, tools, architecture patterns — is transmitted to the platform.

Read-only. Always.

Our agents never modify your files, systems, or infrastructure. They scan and report. No writes, no installs, no background processes.

No AI interprets law

The compliance engine is deterministic — static rules derived from the legislation. Same input always produces the same output. No ML models, no probabilistic reasoning.

Agent Hardening

  • Docker container — non-root user, all capabilities dropped
  • Read-only filesystem — container cannot write to itself
  • Read-only project mount — your directory mounted with :ro
  • Offline scanning — run with --network=none, upload separately
  • Zero dependencies — no runtime npm dependencies beyond Node.js built-ins
  • Small footprint — 35KB npm package, 53MB Docker image

Platform Hardening

  • EU data residency — hosted in Nuremberg, Germany
  • TLS everywhere — all connections encrypted via HTTPS
  • JWT authentication — bcrypt-hashed credentials
  • Engagement isolation — ownership checks prevent cross-tenant access
  • CORS restricted — API only accepts platform-origin requests
  • Localhost-bound services — only nginx is publicly accessible
  • Parameterised queries — preventing SQL injection
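As a concrete illustration of the last point, this is the general parameterised-query pattern with node-postgres; the query and table names are invented examples, not the platform's schema.

```typescript
import { Pool } from "pg";   // node-postgres

const pool = new Pool();     // connection settings come from environment variables

// User-supplied values are passed as parameters ($1), never interpolated into the SQL string,
// so a value like "'; DROP TABLE findings; --" is treated as data, not as SQL.
async function findingsForEngagement(engagementId: string) {
  const result = await pool.query(
    "SELECT id, criterion_id, status FROM findings WHERE engagement_id = $1",
    [engagementId],
  );
  return result.rows;
}
```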

What never leaves your environment

  • Source code
  • Environment variables or API keys
  • Log file contents
  • Customer data processed by your agents
  • Document contents (only filenames and keyword match results)
  • Database contents
  • Git history

What is transmitted

  • Agent names and descriptions
  • System prompt / instruction text
  • Tool names and descriptions
  • Model provider and model ID
  • Orchestration structure (connections, checkpoints)
  • Logging framework detected (not log contents)
  • Stop mechanism presence
  • Document inventory (filenames and match status only)

What you can verify before running anything

  • Read the source code — the agent is 35KB of readable JavaScript. No obfuscation, no compiled binaries.
  • Manifest first — run hex165 manifest to discover agents and files without reading their content. Edit the manifest to exclude items.
  • Review before sending — every upload shows a summary of what was found and asks for confirmation.
  • Run offline first — use --network=none to scan with no network access. Review the output before uploading.
  • Build it yourself — the Dockerfile is open. Build from source rather than downloading ours.

Our Own Compliance

Deterministic rules engine — not an AI system under Article 3(1). EU data residency. GDPR self-service built in.

EU AI Act

The HEX 165 platform uses a deterministic rules engine — no AI, no ML, no inference. It does not meet the definition of an AI system under Article 3(1) and is not subject to the Act's obligations.

If AI-assisted features are added in future, they will be clearly labelled, advisory only, and compliant with Article 50 transparency requirements.

GDPR

Data residency: Germany (EU). No international transfers. No third-party data sharing. Minimal data collection. Full GDPR self-service: data export (Art 15/20), account deletion (Art 17), data inventory summary — all available in the platform UI.

ISO 27001 and ISO 42001 certification programme underway.

See HEX 165 in action

Book a demo and see how the platform handles your specific regulatory requirements.
