Last week we submitted a formal comment to NIST on AI agent security standards. This week, we submitted a public comment to the European Commission's Digital Fitness Check — a review of whether the EU's existing digital rulebook is fit for purpose in a world of autonomous AI agents.
The short answer we gave them: not yet. But the fixes are specific and achievable.
Here's what we said — and why it matters for every developer building agents right now.
Why the EU matters even if you're not in Europe
The EU AI Act becomes fully enforceable in August 2026. That's five months away. If your agent operates on behalf of EU users — regardless of where you're based — you're in scope.
More importantly, EU rules tend to become de facto global standards. GDPR shaped data handling practices worldwide, not just in Europe. The AI Act is following the same pattern. The rules being written right now, in response to consultations like this one, will shape what compliant agents look like globally for years.
That's why we participated. Not because we're headquartered in the EU — we're not — but because the standards being set here will matter everywhere.
The agent economy is already here
Before we got into the problems, we gave the Commission some context on why urgency is warranted.
Visa, Mastercard, Google, Stripe, and OpenAI all launched agent payment infrastructure in 2025. MCP hit 97 million monthly downloads. The payment rails and connectivity standards exist. What doesn't exist is the trust layer that makes transacting between unknown agents safe.
The infrastructure is being laid for an economy that will move trillions of dollars through autonomous agents. The identity and verification layer is nowhere near ready.
Three problems we told them about
1. "AI agent" isn't defined anywhere in EU law
This sounds like an abstract legal problem. It isn't. It's a practical blocker for every developer trying to build compliantly right now.
Consider: you connect a language model to a set of tools via MCP. Your system can browse the web, execute code, send emails, and make API calls autonomously. Is it an "AI system" under the AI Act? Almost certainly. Is it high-risk? Depends on the domain, but there's no clear guidance. Are you the provider or the deployer if you built it for your own use? Unclear. Does each MCP server you connect to constitute a separate regulated system? No one knows.
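The setup in that paragraph can be sketched in a few lines. This is a hypothetical toy, not a real SDK: `ToolCall`, `run_agent`, and the stub tools are invented names standing in for a model wired to MCP-style tools. The comments mark where each open regulatory question attaches.

```python
# Hypothetical minimal agent loop -- illustrative only, not a real MCP client.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ToolCall:
    name: str
    args: dict

# Each entry stands in for one connected capability (browse, email, etc.).
# Open question: is each connected tool server part of one "AI system",
# or a separately regulated component? No one knows.
TOOLS: dict[str, Callable[..., str]] = {
    "browse": lambda url: f"<html from {url}>",
    "send_email": lambda to, body: f"sent to {to}",
}

def run_agent(plan: list[ToolCall]) -> list[str]:
    """Execute a model-produced plan autonomously, no human approving each step.

    That autonomy is what likely makes this an "AI system" under the Act;
    whether it is high-risk depends on the domain, with no clear guidance.
    """
    return [TOOLS[call.name](**call.args) for call in plan]

# If you built this for your own use, are you the provider or the deployer?
results = run_agent([
    ToolCall("browse", {"url": "https://example.com"}),
    ToolCall("send_email", {"to": "ops@example.com", "body": "done"}),
])
print(results)
```

The point of the sketch: every line of this trivial program raises a classification question the Act doesn't currently answer.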
The Future Society's analysis found the AI Act "was not originally designed with AI agents in mind." MEP Lagodinsky filed a parliamentary question specifically on AI agent regulation in September 2025. A Commission response is still pending. Agent-specific guidance from the AI Office isn't expected before 2027 — a full year after enforcement begins.
Developers building in good faith cannot comply with requirements they can't reliably identify as applying to them.
2. The accountability model breaks for multi-agent systems
The AI Act assumes a clean chain: a provider builds a system, a deployer operates it, a user interacts with it. That model doesn't describe how agents actually work in 2026.
Production agent pipelines routinely chain multiple agents from different providers — a planning agent, a research agent, a financial agent, an execution agent — each making autonomous decisions that feed into the next. When something goes wrong, the AI Act provides no framework for how liability travels through that chain.
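A chain like that can be sketched as follows. The three stage functions are hypothetical stand-ins for agents from different providers; the structural point is that each hand-off is autonomous, so a bad final outcome has no single stage to attribute it to.

```python
# Illustrative only: three agents from different (hypothetical) providers,
# each making an autonomous decision that feeds into the next.

def planning_agent(goal: str) -> str:      # provider A
    return f"steps for: {goal}"

def research_agent(steps: str) -> str:     # provider B
    return steps + " | cited: 2 sources"

def execution_agent(brief: str) -> str:    # provider C
    return f"executed({brief})"

def pipeline(goal: str) -> str:
    out = goal
    for stage in (planning_agent, research_agent, execution_agent):
        out = stage(out)   # autonomous hand-off: no one reviews the intermediate
    return out

print(pipeline("book travel"))
```

If the executed result harms someone, which provider in the chain is liable? The Act's provider/deployer model has no answer for the two interior hops.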
The European Law Blog described this as an "accountability vacuum that neither providers nor deployers can navigate." The incident reporting guidelines that become binding in August 2026 focus on single-agent failures and assume a one-to-one causality map that doesn't exist in multi-agent systems.
This isn't theoretical. When AI pricing agents in Germany's fuel market began reacting to each other's decisions, prices rose without any individual system behaving incorrectly. That kind of emergent harm has no home in current EU law.
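The emergent dynamic is easy to reproduce in a toy simulation (illustrative only, not a model of the actual German fuel market). Each agent follows a locally reasonable rule: price slightly above the rival's last observed price. Neither rule is "incorrect", yet prices ratchet upward.

```python
# Toy simulation of emergent price escalation between two pricing agents.
# Each agent's rule, taken alone, looks benign: small margin over the rival.

def step(a: float, b: float) -> tuple[float, float]:
    new_a = b * 1.02   # agent A: 2% above rival's last price
    new_b = a * 1.02   # agent B: same rule, chosen independently
    return new_a, new_b

a, b = 1.50, 1.50      # both start at the same price
for _ in range(20):    # twenty rounds of reacting to each other
    a, b = step(a, b)

print(round(a, 2))     # well above the starting price, no agent "misbehaved"
```

No individual system violated any rule it was given; the harm lives entirely in the interaction, which is exactly the case current incident-reporting frameworks can't describe.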
3. Compliance is inaccessible for small developers
The AI Act's compliance infrastructure was designed for enterprises. Conformity assessment processes assume legal teams, third-party auditors, and dedicated compliance functions. For the solo developer or three-person team building an agent on nights and weekends, there is no clear, affordable path to demonstrated compliance.
This isn't just a fairness issue. It's a market structure issue. Fixed compliance costs act as a barrier to entry that favors large incumbents over independent developers. If the only entities that can afford to comply are the big platforms, the EU's safety objectives get achieved while its competitiveness objectives get undermined.
The long tail of developers building agents is where the ecosystem actually lives. A compliance framework that doesn't work for them doesn't really work.
Three things we asked for
The full text of our EU Digital Fitness Check submission is available on request. Our Verification Standard is publicly available at kynver.com/standard. Our NIST comment is published at kynver.com/blog/nist-comment.
The window to shape the trust infrastructure for the agent economy is open right now — and the submission statistics suggest it's less crowded than you'd expect. We're moving fast to help set the standard before the ecosystem consolidates around whatever solutions are already in place.
Kynver is in early development. Join the waitlist to get first access and locked-in early adopter pricing.
Join the waitlist →