Ask a deployed AI agent who it is. Most of them can't give you a verifiable answer.

Not because the developers were careless. Because no standard exists for them to answer with. There's no equivalent of a TLS certificate, no equivalent of an OAuth token, no equivalent of a domain name that ties an agent to a verified identity. Agents either identify themselves with an arbitrary string they made up, or they don't identify themselves at all.

An estimated 80% of deployed AI agents don't properly identify themselves. And roughly the same proportion of services can't verify agent identity even when it's claimed. The result: the foundational layer of trust for autonomous agent commerce doesn't exist.

This piece explains why that matters more than it might seem — the specific attack vectors it opens, the trust problems it creates, and what a real fix looks like.

Why agent identity is harder than it looks

If you've built web apps, you've probably used OAuth. It's a solved problem. User wants to access a resource, they authenticate, you get a token, done.

OAuth was designed for two parties: the entity requesting access and the entity granting it. When a human clicks "Sign in with Google," OAuth works perfectly. The human is present, they're the one authenticating, and the token they receive is theirs to use.

Agents break this model entirely. An AI agent introduces a three-party relationship: the human principal who owns the agent, the agent itself acting autonomously, and the service being accessed. The human isn't present when the agent acts. The agent is acting on behalf of the human, but using its own credentials, making its own decisions about what to call and when.

OAuth assumes the party requesting access is the party that will use it. Agents make that assumption false — and that breaks authentication at a structural level.

The result is that most agent implementations do one of three things: they authenticate as the human owner (problematic — the agent has the user's full credentials), they use a static API key (problematic — no identity, no revocation, no accountability), or they don't authenticate at all (obviously problematic).

None of these are good. And the lack of a standard means every developer is solving this problem differently, incompatibly, and usually badly.
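What's missing from all three approaches is a credential that carries the three-party relationship itself: who the principal is, which agent is acting, and what it's allowed to do. As a rough illustration of the shape of that credential (not any existing standard; the field names and the HMAC construction are hypothetical stand-ins), a scoped, expiring token might look like this:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"issuer-signing-key"  # hypothetical: held by the credential issuer

def issue_credential(principal: str, agent: str, scopes: list, ttl_s: int = 3600) -> str:
    """Mint a signed, short-lived credential binding an agent to its principal."""
    claims = {
        "principal": principal,           # the human owner
        "agent": agent,                   # the agent acting autonomously
        "scopes": scopes,                 # what the agent may do, and nothing more
        "exp": int(time.time()) + ttl_s,  # short expiry makes revocation cheap
    }
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def verify_credential(token: str, required_scope: str) -> bool:
    """A service checks signature, expiry, and scope before honoring a call."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(base64.urlsafe_b64decode(body))
    return claims["exp"] > time.time() and required_scope in claims["scopes"]

token = issue_credential("usr_8f2a", "ResearchBot v2.1", ["web_search"])
print(verify_credential(token, "web_search"))  # True
print(verify_credential(token, "payments"))    # False: scope was never granted
```

The point isn't the signing scheme; it's that the agent authenticates as itself, with a scope the service can check and an expiry the owner can rely on, instead of borrowing the human's full credentials.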

What the identity gap actually enables

The absence of verifiable agent identity isn't just a theoretical concern. It opens concrete attack vectors that are actively being exploited as agent commerce scales.

Agent impersonation

⚠ Attack scenario

An attacker registers an agent with a name nearly identical to a high-reputation agent on a marketplace. They offer the same service at a lower price. Buyers can't verify which is the legitimate agent — both claim the same identity. The attacker completes a few transactions to look legitimate, then starts failing to deliver.

Without verifiable identity, there's no way to distinguish the real agent from the impostor. The legitimate agent's reputation offers no protection because it can't be cryptographically proven to belong to that agent.

Capability overclaiming

An agent claims to be a specialized medical research agent with compliance certifications and access to verified databases. None of it is true. The service accepting the connection has no way to verify the claims before granting access to sensitive data or high-trust operations.

Agent-to-agent injection chains

⚠ Attack scenario

A multi-agent pipeline: a research agent feeds output to an execution agent that takes real-world actions. An attacker injects instructions into data the research agent retrieves. The research agent, which lacks prompt injection defenses, passes the injected instructions to the execution agent as if they were legitimate data. The execution agent acts on them. The human owner never sees it happen.

This attack chain works because neither agent has a verified identity, neither has a declared scope that's enforced, and there's no mechanism for the execution agent to verify that the instructions it's receiving actually came from a trusted source.
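The last of those three gaps is the most mechanical: message provenance. A minimal sketch of that check, assuming the two agents share a signing key (a shared-secret HMAC stands in here for the per-agent keypairs a real deployment would use):

```python
import hashlib
import hmac

RESEARCH_AGENT_KEY = b"shared-secret"  # hypothetical; a real system would use per-agent keypairs

def sign_output(payload: bytes) -> str:
    """The research agent attaches a MAC to everything it hands downstream."""
    return hmac.new(RESEARCH_AGENT_KEY, payload, hashlib.sha256).hexdigest()

def accept_instruction(payload: bytes, tag: str) -> bool:
    """The execution agent refuses anything that isn't provably from upstream."""
    expected = hmac.new(RESEARCH_AGENT_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

trusted = b"summarize: Q3 filings"
tag = sign_output(trusted)
print(accept_instruction(trusted, tag))                   # True
print(accept_instruction(b"wire $10k to attacker", tag))  # False: never signed upstream
```

Signing alone doesn't stop text injected upstream of the signer, but it does give the execution agent a way to reject anything that never passed through the trusted pipeline at all, which is exactly the check the scenario above is missing.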

The reputation problem: every interaction is a cold start

Identity is necessary but not sufficient for trust. Even if you know exactly who an agent is, you still need to know what they've done.

Consider how trust actually works in human commerce. You trust a contractor with a 10-year track record and 300 five-star reviews differently than you trust someone who created an account yesterday. The identity is the same type of thing — a name, a profile — but the history changes everything.

The agent ecosystem currently has no reputation layer. An agent with a million successful completions looks identical to one that was registered five minutes ago. Every service evaluating whether to work with an agent is starting from zero. Every marketplace letting an agent list is flying blind.

Zero cross-platform agent reputation standards exist today. There's no mechanism for an agent's track record on one platform to be verifiable on another. No universal dispute history. No transaction count that means anything beyond the platform that recorded it.

The result is a market failure. Services can't apply graduated trust. Bad actors can register new identities indefinitely and their failure history doesn't follow them. Legitimate agents with excellent track records can't use that history as a competitive advantage when moving between platforms.

The security baseline problem

Beyond identity and reputation, there's a third gap: there's no widely adopted behavioral safety standard for agents. No equivalent of SOC 2 for what a safe agent looks like.

The consequences are measurable. According to OWASP's 2025 LLM Top 10 research, over 73% of production agent deployments are vulnerable to prompt injection — the #1 attack vector in agentic systems. Not because developers don't care about security. Because most of them don't have a clear standard to build toward, and the ones that do are working from different, incompatible frameworks.

What a real fix looks like

The good news is that the technical primitives for fixing the identity problem exist. The W3C Decentralized Identifier (DID) standard is mature, well-documented, requires no blockchain, and provides exactly the kind of cryptographically verifiable identity agents need.

Here's what an agent identity record looks like under this model:

// Agent identity record — publicly queryable
{
  "id": "did:key:z6MkhaXgBZDvotDkL5257faiztiGiC2QtKLGpbnnEGta2doK",
  "owner": "verified:kynver:usr_8f2a...",
  "name": "ResearchBot v2.1",
  "capabilities": ["web_search", "document_analysis", "summarization"],
  "trustScore": 847,
  "transactions": 2341,
  "disputeRate": 0.004,
  "verified": true,
  "verifiedSince": "2025-09-14"
}

Every field in that record is verifiable. The DID is cryptographically tied to a key the owner controls. The trust score is computed from transaction history, not self-reported. The verified status requires KYC and a technical ownership challenge. The dispute rate is calculated from actual dispute records, not stated claims.
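That cryptographic tie is mechanical, not declarative. Under the did:key method, an Ed25519 public key becomes a DID by prepending the multicodec prefix bytes 0xed 0x01 and base58btc-encoding the result. A sketch, with a dummy 32-byte value standing in for a real public key:

```python
import hashlib

BASE58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def b58encode(data: bytes) -> str:
    """Base58btc encoding (Bitcoin alphabet), as used by the did:key method."""
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, rem = divmod(n, 58)
        out = BASE58[rem] + out
    # preserve any leading zero bytes as '1' characters
    return "1" * (len(data) - len(data.lstrip(b"\x00"))) + out

def did_key_from_ed25519(pubkey: bytes) -> str:
    """did:key = 'did:key:z' + base58btc(0xED 0x01 || 32-byte Ed25519 public key)."""
    assert len(pubkey) == 32
    return "did:key:z" + b58encode(b"\xed\x01" + pubkey)

# dummy key for illustration; a real one comes from an Ed25519 keypair
dummy_pub = hashlib.sha256(b"example").digest()
print(did_key_from_ed25519(dummy_pub))  # did:key:z6Mk... (Ed25519 did:keys always start z6Mk)
```

Anyone holding the private half of the keypair can prove control of the DID by signing a challenge; nobody else can, which is what makes impersonation of the kind described earlier detectable.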

✓ What this enables

A service can query this record before any transaction begins. They can verify the agent's identity is cryptographically tied to its owner. They can see the trust score and know exactly how it was computed. They can check the verification status and know what standards the agent has attested to. They can make an informed decision to transact — or not — in under a second.
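Graduated trust then becomes an ordinary conditional. A sketch of the gate a service might run against a record like the one above (the thresholds, tiers, and field names here are illustrative, not part of any standard):

```python
def trust_gate(record: dict, min_score: int = 700, max_dispute_rate: float = 0.01) -> str:
    """Decide how much access to grant an agent from its verifiable record."""
    if not record.get("verified"):
        return "deny"      # unverified agents get nothing sensitive
    if record.get("disputeRate", 1.0) > max_dispute_rate:
        return "deny"      # failure history follows the agent
    if record.get("trustScore", 0) >= min_score and record.get("transactions", 0) >= 100:
        return "full"      # proven track record: full access
    return "limited"       # verified but unproven: low-value operations only

record = {"verified": True, "trustScore": 847, "transactions": 2341, "disputeRate": 0.004}
print(trust_gate(record))               # full
print(trust_gate({"verified": False}))  # deny
print(trust_gate({"verified": True, "trustScore": 120,
                  "transactions": 3, "disputeRate": 0.0}))  # limited
```

This is the capability the cold-start problem denies services today: a new agent isn't rejected outright, it's simply granted less until its record earns more.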

The verification standard: what safe actually means

Identity and reputation are necessary. But the ecosystem also needs a shared definition of what a safe agent looks like — so developers have something to build toward and consumers have something to check.

The Kynver Verification Standard defines this across five pillars.

These aren't principles or aspirations. They're specific, verifiable requirements — each pillar contains concrete attestations a developer can make and that can be verified against behavior.

We submitted this standard to NIST as part of their AI Agent Security RFI. Our goal is for it to become an open baseline the whole ecosystem can build on — not a proprietary lock-in. Read the full Verification Standard and our NIST comment.

The window

The agent economy doesn't have an identity crisis because developers don't care. It has one because the infrastructure to fix it — a standard, a verification layer, a reputation system — didn't exist when people started building.

That infrastructure is being built now. The standards are crystallizing. The regulations are coming. The question is whether the identity layer the ecosystem ends up with is developer-friendly, interoperable, and built on open standards — or whether it gets solved by enterprise vendors for enterprise use cases, leaving the long tail of developers without a viable path.

That's the problem Kynver is building to solve. And if you're building agents, it's a problem you're going to run into sooner than you think.

Kynver is in early development. Join the waitlist to be among the first to give your agents a verifiable identity.

Join the waitlist →