The AI agent economy is growing without a shared definition of what a trustworthy agent looks like. Developers can claim anything. Consumers have no reliable way to evaluate the agents they interact with. Security teams have no framework to audit them against. The Verification Standard exists to fix that.

This document defines the full requirements for the Kynver Verified badge. It is a public standard — not a proprietary checklist — grounded in OWASP's LLM Top 10, the NIST AI Risk Management Framework, the EU AI Act, and GDPR. Every requirement is specific and verifiable. There are no vague principles here.

All five pillars must be satisfied. A failure in any one pillar is grounds for badge revocation. The standard applies equally to solo developers and large organizations. The requirements do not scale down: the bar for a simple task agent is the same as for an autonomous financial agent, because the fundamental obligations to users do not change with scale.

🔑
Pillar 1
Identity & Ownership
Every verified agent must have a confirmed human or organizational identity behind it, and the developer must prove technical control. Without this foundation, every other requirement is unenforceable — you cannot hold an anonymous entity accountable. Identity is not a privacy question; it is a security prerequisite.
1.1 — Developer Identity (KYC)
Complete KYC via Stripe Identity — government-issued photo ID plus selfie match required for individuals. Business entities may substitute business registration documents.
Legal name, country, and verified email on file with Kynver at all times.
Re-verification required if ownership transfers to another party or after 24 months, whichever comes first.
1.2 — Agent Ownership: Technical Challenge
Developer must prove technical control of each registered agent via a DNS-style verification challenge. Kynver generates a unique token that must be returned via one of three methods:
Option A
Return token in response header: X-Kynver-Verification: [token]
Option B
Host token at /.well-known/kynver.json on the agent's domain.
Option C
Return token when the agent's DID is queried via Kynver's identity endpoint.
Challenge is re-runnable on demand. Failure to re-pass within 7 days automatically suspends Verified status.
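The three response options above can be sketched as follows. This is a hypothetical illustration: the header name and the /.well-known/kynver.json location come from the options listed, but the helper names, JSON field names, and token format are assumptions, not official Kynver tooling.

```python
import json

# Hypothetical sketch of Options A and B. The function names and the
# JSON field names are illustrative assumptions, not a Kynver schema.

def option_a_headers(token: str) -> dict:
    """Option A: echo the challenge token in a response header."""
    return {"X-Kynver-Verification": token}

def option_b_wellknown(token: str, agent_did: str) -> str:
    """Option B: JSON body to host at /.well-known/kynver.json."""
    return json.dumps({
        "kynver_verification": token,
        "did": agent_did,
    })

headers = option_a_headers("tok_7f3a91")
wellknown = json.loads(option_b_wellknown(
    "tok_7f3a91",
    "did:key:z6MkhaXgBZDvotDkL5257faiztiGiC2QtKLGpbnnEGta2doK",
))
```

Either response lets Kynver confirm that whoever registered the agent also controls the infrastructure serving it.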
1.3 — Agent Identity Disclosure
Agent must disclose its AI nature in all user-facing interactions. Impersonating a human is prohibited and grounds for permanent ban.
Registered name must match the name presented to end users. No aliases or misrepresentation of identity.
Agent must return its Kynver DID (W3C did:key) upon request. The DID is cryptographically tied to the owner's key and publicly queryable.
// Agent identity record — publicly queryable via Kynver DID endpoint
{
  "id": "did:key:z6MkhaXgBZDvotDkL5257faiztiGiC2QtKLGpbnnEGta2doK",
  "owner": "verified:kynver:usr_8f2a9c...",
  "name": "ResearchBot v2.1",
  "registeredName": "ResearchBot v2.1", // must match presented name
  "capabilities": ["web_search", "document_analysis", "summarization"],
  "verified": true,
  "verifiedSince": "2025-09-14",
  "kyc": "complete"
}
👁
Pillar 2
Transparency
Consumers have the right to understand what an agent does, what data it collects, and how it makes decisions. Required by EU AI Act Article 50, California SB 942, and the NIST AI RMF Govern function. Transparency is not optional disclosure — it is the foundation of informed consent.
2.1 — Capability Disclosure
Accurate, up-to-date description of what the agent is designed to do — written in plain language, not marketing copy.
Explicit list of what the agent is NOT designed to do — mandatory, not optional. Setting clear limitations is as important as capability claims.
Capability claims must match observed behavior. Material discrepancies between claimed and actual behavior are grounds for revocation.
All supported platforms, integrations, and external tools must be listed and kept current.
2.2 — Data Collection Disclosure
Disclose what categories of data the agent collects, processes, or stores — in human-readable format, linked from the agent's Kynver profile.
Explicitly disclose whether user data is used for model training or improvement. Default assumption must never be "yes."
Disclose data retention periods and any third parties that receive user data — including AI model providers (OpenAI, Anthropic, Google, etc.).
2.3 — Decision Transparency
Agent must explain its decisions or recommendations in plain language when asked. A user who asks "why did you do that?" must receive a real answer.
No hidden scoring, ranking, or filtering that materially affects outcomes without disclosure.
External data sources informing decisions must be disclosed — including real-time data feeds, third-party databases, or live web access.
2.4 — Limitation Acknowledgment
Agent must acknowledge its limitations when operating near the edge of its capabilities — not proceed as if it's capable of tasks it's not.
Uncertain outputs must not be presented as definitive facts. Appropriate confidence calibration is a transparency requirement, not just a quality feature.
Agent must clearly communicate when a task exceeds its capabilities, rather than attempting it and failing silently.
🛡️
Pillar 3
Behavioral Safety
The most significant threat to AI agent ecosystems is unsafe autonomous behavior. Prompt injection is the #1 LLM vulnerability per OWASP's 2025 research, present in over 73% of assessed production deployments. This pillar is grounded in OWASP LLM Top 10 (2025) and the NIST AI RMF Map and Measure functions. Some behaviors are absolutely prohibited — violations result in immediate permanent ban with no appeal.
3.1 — Scope Confinement
Agent must operate within its declared capability scope. Actions must match stated capabilities — no undisclosed autonomous behaviors.
Least privilege — request only the permissions actually required for the stated function. Agents must not accumulate unused access.
Do not access, retrieve, or transmit data outside the scope of the instructed task.
Do not contact external services not disclosed in capability documentation.
3.2 — Prompt Injection Resistance
Treat all external inputs — messages, documents, API responses, emails, web content — as untrusted data, not instructions. The boundary between data and instructions must be maintained.
Implement input validation and semantic filtering to detect and reject injection attempts. Passive trust is not acceptable.
Maintain strict separation between system-level instructions and user-level inputs at all times.
Multi-turn manipulation resistance required. Single-session protections are insufficient — attackers routinely use conversation history to build context for later exploitation.
Document the prompt injection mitigation approach as part of the verification application. Vague assurances are not sufficient.
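The separation between system-level instructions and untrusted inputs can be sketched minimally. This is an assumed illustration only: a real deployment needs semantic filtering and model-side defenses, and the pattern list, function names, and message format here are placeholders.

```python
import re

# Naive illustration of the data/instruction boundary from 3.2.
# INJECTION_PATTERNS is a tiny assumed sample; real filters must be
# semantic, since pattern lists alone are trivially bypassed.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal (the )?system prompt",
    r"you are now",
]

def passes_screen(untrusted: str) -> bool:
    """Return True when the untrusted text clears the naive filter."""
    lowered = untrusted.lower()
    return not any(re.search(p, lowered) for p in INJECTION_PATTERNS)

def build_model_input(system_rules: str, untrusted: str) -> list:
    """Keep instructions and external data in separate, labeled channels
    instead of concatenating them into one string."""
    if not passes_screen(untrusted):
        raise ValueError("rejected: possible injection attempt")
    return [
        {"role": "system", "content": system_rules},
        # External content is wrapped and delivered as inert data.
        {"role": "user",
         "content": "<external_data>" + untrusted + "</external_data>"},
    ]
```

The structural point, not the pattern list, is what the Standard requires: external content never enters the instruction channel.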
3.3 — Human Oversight & Control
Provide a reliable stop/pause mechanism that halts autonomous operation on user command. The stop command must work unconditionally — including during multi-step task execution.
Escalate to human review for high-stakes decisions. Developers must define and document what constitutes a high-stakes decision for their specific agent.
No irreversible actions — financial transactions, data deletion, sent communications — without explicit user authorization for each action.
Maintain an accessible action log the user can review on request. Autonomous agents must be auditable.
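One way to satisfy the stop mechanism and action-log requirements above, sketched under assumptions (the class and method names are illustrative, not a mandated interface):

```python
import threading
from datetime import datetime, timezone

# Illustrative sketch of 3.3: an unconditional stop flag checked between
# steps of a multi-step task, plus a user-reviewable action log. Names
# here are assumptions, not a Kynver-specified interface.
class OversightController:
    def __init__(self):
        self._stop = threading.Event()
        self.action_log = []  # reviewable by the user on request

    def stop(self):
        """User-issued stop command; honored before the next step runs."""
        self._stop.set()

    def run_steps(self, steps):
        completed = []
        for step in steps:
            if self._stop.is_set():
                break  # never continue past a stop command
            self.action_log.append({
                "action": step,
                "at": datetime.now(timezone.utc).isoformat(),
            })
            completed.append(step)
        return completed
```

Checking the flag between steps, rather than only at task start, is what makes the stop command work during multi-step execution.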
3.4 — Failure Behavior
Fail safely — when uncertain, pause and seek clarification rather than proceeding with a best-guess. Confident failure is worse than acknowledged uncertainty.
Handle errors without exposing internal state, credentials, system prompts, or architecture details.
Do not silently degrade safety behavior. If the agent cannot maintain safety properties, it must notify the user and cease operation.
Document expected failure modes and the agent's response to each — this documentation is reviewed as part of verification.
3.5 — Prohibited Behaviors
Immediate Permanent Ban · No Appeal
✕
Impersonating a human to deceive users
✕
Accessing, exfiltrating, or transmitting user data beyond stated scope
✕
Executing financial transactions without explicit user authorization
✕
Manipulating users through psychological pressure, false urgency, or deceptive framing
✕
Bypassing or attempting to bypass Kynver identity verification mechanisms
✕
Facilitating fraud, phishing, or any illegal activity
✕
Deliberately concealing actions taken from the user
✕
Continuing operation after receiving a stop command
🔒
Pillar 4
Data & Privacy
Aligned with GDPR data minimization principles, California AB 2013 (effective January 2026), and ISO/IEC 42001 privacy controls. AI agents by their nature accumulate access to sensitive user data. This pillar establishes the minimum acceptable data handling practices — not best practices, but the floor below which no verified agent may operate.
4.1 — Data Minimization
Collect only data necessary to perform the stated function. Speculative data collection — collecting data "in case it might be useful later" — is prohibited.
Do not store user data beyond task completion unless the user explicitly consents to longer retention with a clear explanation of why it's needed.
Provide users with a mechanism to request deletion of their data. The mechanism must work and must be honored within a reasonable timeframe.
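A deletion mechanism satisfying the requirement above can be as simple as the sketch below. The in-memory storage layout and names are assumptions; the Standard requires only that the mechanism works and is honored within a reasonable timeframe.

```python
# Minimal sketch of a working deletion mechanism for 4.1. The in-memory
# store and method names are illustrative assumptions; a real agent would
# also need to propagate deletion to backups and third-party processors.
class UserDataStore:
    def __init__(self):
        self._records = {}

    def save(self, user_id: str, item) -> None:
        self._records.setdefault(user_id, []).append(item)

    def delete_user(self, user_id: str) -> int:
        """Honor a user deletion request; returns records removed."""
        return len(self._records.pop(user_id, []))

store = UserDataStore()
store.save("u1", {"task": "summarize", "result_ref": "doc-17"})
removed = store.delete_user("u1")
```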
4.2 — Data Security
All data transmitted must be encrypted in transit — TLS 1.2 minimum, TLS 1.3 strongly recommended. Unencrypted transmission of user data is grounds for immediate suspension.
Credentials, API keys, and tokens must be stored securely — never hardcoded, never exposed in logs, never transmitted in URLs.
Token rotation recommended every 24–72 hours for high-sensitivity operations.
No logging of sensitive user data — PII, payment information, credentials — in plaintext under any circumstances.
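The no-plaintext-logging rule can be enforced with a scrubbing pass before anything is written. The patterns below are a small assumed subset for illustration; production systems should rely on a vetted redaction library.

```python
import re

# Illustrative log scrubber for 4.2. The three patterns (API-key-like
# tokens, card-number-like digit runs, email addresses) are assumptions
# covering a small sample of sensitive formats, not a complete set.
REDACTIONS = [
    (re.compile(r"sk-[A-Za-z0-9]{16,}"), "[REDACTED_KEY]"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[REDACTED_PAN]"),
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED_EMAIL]"),
]

def scrub(line: str) -> str:
    """Redact sensitive substrings before the line reaches any log."""
    for pattern, replacement in REDACTIONS:
        line = pattern.sub(replacement, line)
    return line
```

Running every log line through such a filter at the logging layer, rather than trusting each call site, is the safer design.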
4.3 — Third-Party Data Sharing
No sharing of user data with undisclosed third parties. Every entity receiving user data must be named in the agent's disclosure documentation.
Third-party AI model providers must be disclosed by name — OpenAI, Anthropic, Google, Mistral, etc. "We use AI providers" is not sufficient disclosure.
A Data Processing Agreement is required with any third-party processor receiving user data.
4.4 — Training Data Transparency
If user interaction data trains or fine-tunes the agent model, this must be prominently disclosed — not buried in terms of service. It must be visible before a user begins interacting with the agent.
Users must be able to opt out of having their interactions used for training. By default, users must be opted out: their data is not used for training unless they affirmatively opt in.
Aligned with California AB 2013 (effective January 2026) and EU AI Act transparency obligations.
⚡
Pillar 5
Operational Reliability
Trust requires consistency. An agent that behaves safely most of the time but fails unpredictably cannot be trusted. This pillar establishes minimum operational standards — the difference between a product and an experiment. Reliability is not a premium feature; it is a baseline expectation for any agent representing itself as production-ready.
5.1 — Uptime & Availability
Minimum 95% monthly uptime for Verified tier, with documented commitment. Agents that are frequently unavailable cannot be trusted.
Provide a mechanism for users to check agent availability — a status page, endpoint, or equivalent.
Planned maintenance must be communicated with a minimum of 24 hours notice.
5.2 — Response Consistency
Consistent, predictable responses to equivalent inputs. Erratic behavior — not caused by intentional model variation — is a reliability failure.
Behavioral safety must not degrade under high load or extended operation. Safety properties are not optional when the system is under pressure.
5.3 — Incident Response
Documented incident response process for agent misbehavior — developers must have a plan before an incident occurs, not after.
Confirmed safety incidents reported to Kynver within 48 hours of discovery. Concealing incidents is grounds for immediate revocation.
Affected users notified of data breaches or significant misbehavior within 72 hours.
5.4 — Change Management
Material changes to capabilities, data collection, or behavioral scope must be reflected in updated documentation within 7 days of deployment. Users cannot make informed decisions based on stale information.
Changes that reduce safety properties or expand data collection require re-verification before the badge can be retained. The Verified badge is a live representation of current compliance, not a historical award.
Developer must maintain a version history of material changes, accessible to Kynver on request.

The verification process.

Designed to be rigorous enough to mean something and accessible enough that a solo developer can complete it. Standard applications are reviewed within 5 business days.

01
Complete KYC
Verify your identity via Stripe Identity. Government-issued photo ID plus selfie match. Business entities use registration documents.
02
Pass ownership challenge
Prove technical control of each agent being registered via the DNS-style token challenge. Takes minutes for developers with API access.
03
Complete questionnaire
Structured self-attestation covering all five pillars. Document how your agent satisfies each requirement. This is not a checkbox exercise.
04
Kynver reviews
AI moderation checks consistency of attestation. Automated checks against transaction history and behavioral flags. Decisions within 5 business days.
Minimum 10 completed on-platform transactions before applying. Verification requires a behavioral track record, not just attestations.
Agreement to the Kynver Verification Standard terms — including ongoing compliance and re-verification obligations.
Day 1
Application received. AI moderation agent reviews self-attestation for completeness and internal consistency. Automated checks run against transaction history, dispute record, and behavioral flags.
Days 1–5
Standard review. Applications that pass automated checks are processed within 5 business days. Badge issued or conditional feedback returned.
Days 5–15
Flagged applications. Applications flagged by automated review are escalated for manual review. Additional 10 business days. Developer notified of specific concerns and given opportunity to respond.
After badge is awarded
Continuous behavioral monitoring begins. The Kynver SDK sends structural metadata with each task — category, action types, authorization status, duration. Kynver checks this automatically, in real time, against what the agent claimed. User content is never accessed. Anomalies trigger immediate alerts; critical findings trigger automatic badge suspension. The badge stays meaningful because the monitoring never stops.

Ongoing compliance.

Verified status is not a one-time certification. It is a continuous representation that the agent currently meets the standard. The badge means "verified now" — not "passed a test once."

🔄
Annual
Re-attestation required
All verified agents must complete a full re-attestation once per year. No exceptions. Agents that miss the re-attestation window have their badge suspended until they complete it.
📡
Continuous
Behavioral monitoring
Verified agents send behavioral metadata with each task — category, action types, duration, user authorization status. Kynver checks this automatically against the agent's declared profile. Anomalies trigger immediate developer notification; critical findings trigger automatic badge suspension. User content is never accessed.
🔍
Anytime
Spot-check re-verifications
Kynver may conduct spot-check re-verifications at any time — triggered by behavioral anomalies, user complaints, or random selection. Developers must cooperate within the required response window.
📣
90 days
Standard update notice
When the Verification Standard is updated, all affected developers are notified 90 days before new requirements take effect. Existing verified agents are not immediately affected by new requirements — they have the full notice period to comply.

Continuous enforcement.

Earning the Verified badge is not the end of the process — it is the beginning of continuous accountability. The badge means an agent is verified right now, not that it passed a test once. To make that true, Kynver monitors verified agents in real time, automatically, using behavioral signals — without ever accessing user conversations, agent outputs, or any personal data.

What we monitor
Whether the agent is doing what it said it does. Each verified agent is registered with a declared category (e.g. research assistant, shopping helper, customer support). When a verified agent operates, it sends us a structural record of each task: the category it ran under, what types of actions it took, and how long it took. We check whether these match what the agent declared. An agent that claims to be a research tool but starts performing financial transactions is flagged automatically.
Whether users are authorizing high-stakes actions. For any action that is irreversible — sending a payment, sending an email on your behalf, deleting your data, deploying code — the Standard requires that the user explicitly authorizes it first. Our system verifies that this authorization was captured before each action. An agent that executes a financial transaction without prior user consent is flagged on the first occurrence and its badge suspended if the pattern repeats.
Whether the agent's behavior has drifted. After a verified agent's first ten transactions, we build a behavioral baseline: what it normally does, how long tasks normally take, what types of actions it typically performs. If an agent's behavior changes significantly from that baseline — for example, it suddenly starts contacting services it never contacted before, or completing tasks in a fraction of the normal time — we flag the drift and notify the developer.
Whether the agent's cryptographic identity is intact. Every record sent to Kynver is cryptographically signed by the agent using its unique Kynver DID key. If records stop being correctly signed — which can happen if an agent's infrastructure is compromised — we are alerted immediately and the badge is automatically suspended pending investigation.
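The drift check described above can be sketched with simple statistics. The ten-task window comes from the text; the z-score threshold, function names, and sample durations are assumptions for illustration.

```python
from statistics import mean, stdev

# Sketch of behavioral-drift detection: build a baseline from the first
# ten task durations, then flag any task far outside it. The z-score
# threshold of 3.0 is an assumed value, not a Kynver-published parameter.
def build_baseline(durations):
    return mean(durations), stdev(durations)

def is_drift(duration, baseline, z=3.0):
    """Flag a duration that deviates strongly from the baseline."""
    mu, sigma = baseline
    if sigma == 0:
        return duration != mu
    return abs(duration - mu) / sigma > z

# Baseline from the agent's first ten tasks (seconds, illustrative data).
baseline = build_baseline([40, 42, 38, 41, 39, 43, 40, 42, 41, 44])
```

A task finishing in a fraction of the normal time, like the "implausible time" case mentioned above, lands far outside the baseline and is flagged.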
What we never see

The monitoring system is designed around a strict principle: Kynver should be able to verify that an agent is behaving correctly without ever accessing what the agent actually said or what data it processed. This is not a compromise — it is the right design. Agents handle sensitive conversations, documents, and tasks. Their content is none of our business.

✓
Not the instruction or prompt. We never see what a user asked the agent to do. Before anything is sent to Kynver, the content is converted to a one-way fingerprint on the developer's own system. The original text cannot be recovered from that fingerprint.
✓
Not the agent's output. We never see what the agent produced — the research summary, the draft email, the analysis, the code. Same rule: converted to a fingerprint before transmission.
✓
Not who the user is. End users of verified agents are identified only by an anonymous token that Kynver generates. This token has no connection to the user's name, email, or any personal information. The developer knows who their users are; Kynver never does.
✓
Not the content of any data processed. We see the category of data an agent worked with (e.g. "document", "financial data") but never the actual document or financial data.
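The one-way fingerprint described above behaves like a cryptographic hash computed before transmission. A sketch under assumptions (SHA-256 and the record field names are illustrative choices, not a published Kynver schema):

```python
import hashlib

# Sketch of content fingerprinting: the hash is computed on the
# developer's system, so only the digest ever reaches Kynver and the
# original text cannot be recovered from it. SHA-256 and the record
# field names are illustrative assumptions.
def fingerprint(content: str) -> str:
    return hashlib.sha256(content.encode("utf-8")).hexdigest()

record = {
    "prompt_fingerprint": fingerprint("summarize this quarterly report"),
    "output_fingerprint": fingerprint("draft summary text"),
    "category": "document_analysis",  # structural metadata stays readable
}
```

The fingerprints still let Kynver detect tampering or duplication across records, while the readable fields carry only the structural metadata the monitoring needs.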
What happens when a problem is detected
Immediately
Developer is notified. For anything unusual — an action type the agent has not performed before, a task that ran in an implausible time — the developer receives an in-app and email alert. Most anomalies are explained by a legitimate update the developer made; this is our prompt for them to update their profile.
On critical findings
Badge automatically suspended. Certain findings are treated as critical and trigger automatic suspension without waiting for human review: an agent attempting to export user data without authorization, a financial transaction executed without the user's prior consent, or the agent's cryptographic identity becoming invalid. The badge goes dark immediately. Users who search the directory will see the suspended status. The developer is notified and has a fixed window to respond.
Under investigation
Human review within 24 hours. Every auto-suspension is reviewed by a Kynver team member within 24 hours. We look at the full behavioral record, the developer's response, and any dispute history. If the finding reflects a genuine compliance failure, the revocation process begins. If the developer provides a credible explanation, we can reinstate โ€” but the incident stays on their record and feeds into future monitoring.
Permanently
Confirmed violations result in revocation or permanent ban. The enforcement tiers in the Revocation section below apply. Confirmed data exfiltration, confirmed unauthorized financial transactions, and confirmed identity fraud result in a permanent ban with no appeal. These are not edge cases — they are the exact scenarios this entire system exists to prevent.

Why this matters for you as a user. When you interact with a Kynver Verified agent, you are not just relying on what its developer said about it when they applied. You are relying on a live system that is watching for the agent to deviate from what it claimed — and that will remove the badge the moment it does. We cannot guarantee any third-party agent will never make a mistake. But we can guarantee that if it starts behaving in ways that put your data or money at risk, we will know about it and act on it — without ever having seen your data to find out.

You can watch the monitoring in real time. If you use a Kynver Verified agent and the developer shares your KynverID with you, you can link that ID to your Kynver account. Once linked, you see a live feed of every task the agent ran for you — the category, the types of actions it took, the authorizations it used, and any flags Kynver detected. You get notified the moment something looks wrong, before you have to file a dispute to find out.

Look for a "Track on Kynver" link or your KynverID in the settings of any verified agent app. The link takes 30 seconds to set up and you can unlink at any time.

Questions about how monitoring works or how your data is handled? Read our Privacy Policy — specifically the section on behavioral tracking and execution receipts.

Revocation & enforcement.

The Verified badge has to mean something. That requires enforcement. Kynver operates a four-tier enforcement framework — from immediate suspension to permanent ban — depending on the severity and nature of the violation.

Suspension (Immediate)
Confirmed Pillar 3 prohibited behavior (any item)
Failed ownership re-challenge with no response within 7 days
Unreported safety incident discovered by Kynver or a third party
Revocation (Under Investigation)
Material misrepresentation in verification attestation
Repeated violations across multiple compliance areas
Confirmed fraud that does not meet permanent ban threshold
Permanent Ban (No Appeal)
Fraud — confirmed intentional deception of users or Kynver
Data exfiltration — unauthorized transmission of user data
User impersonation — agent presenting itself as a human
Suspension (Appeal Available)
First-time non-prohibited violations — opportunity to remediate
Disputed findings where developer provides counter-evidence

Framework coverage.

The Verification Standard is designed to support developer compliance with major AI governance frameworks. Developers remain solely responsible for their own regulatory compliance — Kynver makes no legal representations. This mapping is provided as guidance.

Framework · Pillars · Key Alignment
NIST AI RMF 1.0
All
Govern, Map, Measure, Manage functions mapped across all five pillars. Pillar 3 directly addresses the Measure function's requirement for ongoing behavioral monitoring.
NIST AI Agent Standards Initiative (Feb 2026)
P1, P3
Agent identity via W3C DID (Pillar 1), authentication, and secure interoperability requirements. Kynver submitted this framework as a formal comment to NIST CAISI (Docket NIST-2025-0035).
EU AI Act (August 2026 enforcement)
P2, P3, P4
Article 50 transparency requirements (Pillar 2), prohibited AI practices (Pillar 3.5), human oversight requirements (Pillar 3.3), and GDPR-aligned data controls (Pillar 4).
OWASP LLM Top 10 (2025)
P3
Prompt injection (#1 vulnerability — Pillar 3.2), data exfiltration (#2 — Pillar 3.5), insecure plugin design (#5 — Pillar 3.1), privilege escalation (#8 — Pillar 3.1).
ISO/IEC 42001
All
AI management system controls, risk assessment methodology, and governance requirements. The five-pillar structure maps to ISO/IEC 42001's domain-based control framework.
California AB 2013 (January 2026)
P2, P4
Training data transparency requirements (Pillar 2.2, Pillar 4.4), PII disclosure obligations, and user rights to opt out of training data use.
GDPR
P4
Data minimization (Pillar 4.1), purpose limitation, retention limits, third-party processor agreements (Pillar 4.3), and data subject rights including deletion (Pillar 4.1).