RESPONSE TO REQUEST FOR INFORMATION
Docket No. NIST-2025-0035 · March 6, 2026

Submitted by: Kynver
Organization type: Company – AI Agent Identity and Verification Infrastructure
Website: kynver.com
Submitted to: Center for AI Standards and Innovation (CAISI), National Institute of Standards and Technology

Executive Summary

Kynver is an AI agent identity and verification infrastructure company building the trust layer for autonomous AI agent commerce. We are developing the Kynver Verification Standard, a five-pillar certification framework for AI agents, and a developer-accessible platform that gives agents verifiable decentralized identities, reputation scores based on transaction history, and a public verification badge that signals safety compliance to consumers and counterparty agents alike.

We submit these comments because the security challenges NIST has identified in this RFI (identity spoofing, prompt injection, unauthorized actions, lack of verification standards) are precisely the problems we are building infrastructure to address. We believe the agent security gap is fundamentally a market infrastructure problem, not just a technical one. The ecosystem currently lacks the verification layer that would allow agents, services, and consumers to establish trust before transacting. This RFI represents an important opportunity to shape standards that could accelerate the development of that infrastructure.

Our core recommendation: NIST should actively encourage the development of open, interoperable agent identity and verification standards that enable market-based trust mechanisms to complement technical security controls.

Security frameworks focused solely on preventing bad behavior miss an equally important lever: making trustworthy behavior visible, verifiable, and rewarded.


1. Security Threats, Risks, and Vulnerabilities

RFI Question 1(a): What security threats are unique to AI agent systems, as distinct from traditional software systems?

From our research and development work building agent verification infrastructure, we have identified five categories of threat that are structurally distinct from traditional software security risks:

1. The Identity Vacuum

The most foundational security problem in AI agent systems is the near-complete absence of verifiable agent identity. Unlike traditional software systems, where authentication is a solved problem with mature tooling (OAuth, SAML, certificates), AI agents currently operate with no standardized identity layer. Research by DataDome's Galileo threat research team, analyzing over 698,000 websites, found that 80% of AI agents do not properly identify themselves using verifiable methods, relying instead on spoofable user-agent strings rather than cryptographic proof of identity.1 A companion finding showed that 79.7% of websites cannot verify agent identity even when it is claimed, leaving services unable to distinguish a trusted agent from an impersonator before granting access.1

This identity vacuum enables multiple downstream threats: malicious actors can trivially impersonate legitimate agents, legitimate agents can make unauthorized claims about their capabilities, and there is no mechanism for services or consumers to distinguish a trusted agent from a fraudulent one before granting access or initiating transactions. Researchers at DataDome documented real-world cases in which attackers used spoofed AI agent identities to conduct SQL injection, reflected XSS, and large-scale credential testing attacks.1

Traditional software security assumes authentication can be delegated to the application layer. AI agents break this assumption because they introduce a three-party relationship (the human principal, the agent, and the service being accessed) for which existing authentication protocols, OAuth 2.0 in particular, were not designed. A Cloud Security Alliance survey of 285 IT and security professionals found that 44% of organizations authenticate agents using static API keys and 43% use username/password combinations: credentials designed for human users, not autonomous agents.2
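The gap between spoofable headers and verifiable identity can be shown with a minimal sketch. Everything here is hypothetical (the agent IDs, the per-agent key scheme, and the signature layout are our illustration, not an existing protocol): the service holds a per-agent secret issued at registration and verifies an HMAC over the request, with a timestamp for replay resistance, which is something neither a User-Agent string nor a shared static API key can provide.

```python
import hashlib
import hmac
import time

# Hypothetical registry: one signing secret per registered agent,
# issued at registration time (illustrative, not a real protocol).
AGENT_KEYS = {"agent-123": b"per-agent-secret-issued-at-registration"}

def sign_request(agent_id: str, key: bytes, method: str, path: str,
                 body: str, ts: int) -> str:
    """Agent side: HMAC over the identity, the request, and a timestamp."""
    msg = f"{agent_id}\n{method}\n{path}\n{ts}\n{body}".encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify_request(agent_id: str, method: str, path: str, body: str,
                   ts: int, signature: str, max_skew: int = 300) -> bool:
    """Service side: unknown agents and stale or forged signatures fail."""
    key = AGENT_KEYS.get(agent_id)
    if key is None:
        return False  # unregistered identity: nothing to verify against
    if abs(time.time() - ts) > max_skew:
        return False  # stale timestamp: basic replay protection
    expected = sign_request(agent_id, key, method, path, body, ts)
    return hmac.compare_digest(expected, signature)
```

A forged signature or an unregistered agent ID fails verification regardless of what the request claims about itself, which is exactly the property user-agent strings lack.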

2. Prompt Injection as a Systemic Vulnerability Class

Prompt injection represents a fundamentally new class of vulnerability with no direct equivalent in traditional software security. Unlike SQL injection or XSS, which exploit specific parsers or rendering contexts, prompt injection exploits the core design of language models: the inability to reliably distinguish between trusted instructions and untrusted data. The OWASP Top 10 for LLM Applications has ranked prompt injection as the number-one threat to LLM-based systems since the list was first published, maintaining that position in the 2025 update.3

What makes prompt injection uniquely dangerous in agentic systems is its combination with autonomous action. A successful injection in a chatbot produces a harmful text output. A successful injection in an autonomous agent can produce a harmful action (a financial transaction, a data exfiltration, a sent communication) that may be irreversible. The InjecAgent benchmark (Zhan et al., ACL 2024 Findings), evaluating 30 LLM agents across 1,054 test cases, found that top production models were vulnerable to indirect prompt injection attacks 24–47% of the time under varying attack conditions.4 The International AI Safety Report 2026 found that sophisticated attackers can bypass the best-defended production models approximately 50% of the time with 10 attempts.5 Current single-session defenses provide inadequate protection for agents operating over extended sessions with memory and tool access.

A 2025 arXiv study demonstrated that multi-layer defenses can reduce attack success rates from 73.2% to 8.7%, but such defenses require deliberate architectural investment that most agent developers are not currently making.6
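As a hedged illustration of the layering idea (not the defenses evaluated in the cited study; the pattern list, scope model, and function names are invented for this example), a minimal sketch combines a cheap heuristic screen on untrusted content with an action-space allowlist and human escalation for irreversible acts:

```python
import re

# Illustrative heuristic patterns only; real screens use classifiers and
# structural separation of instructions from data, not regexes alone.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"you are now",
    r"system prompt",
]

def screen_untrusted(text: str) -> bool:
    """Layer 1: cheap screen on tool outputs / retrieved content."""
    return not any(re.search(p, text, re.IGNORECASE) for p in INJECTION_PATTERNS)

def gate_action(action: dict, task_scope: set[str]) -> str:
    """Layer 2: allowlist the action space so a compromised agent cannot
    reach tools outside its declared task. Layer 3: escalate irreversible
    actions (payments, deletions, outbound email) to a human."""
    if action["tool"] not in task_scope:
        return "block"
    if action.get("irreversible"):
        return "needs_human"
    return "allow"
```

The point of layering is that the outer gates still hold when the inner screen is bypassed: even if injected text slips past the heuristic, the action gate confines what the agent can actually do.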

3. Reputation and Behavioral Track Record Gaps

Traditional software security relies heavily on supply chain trust: known vendors, code signing, CVE databases. The AI agent ecosystem has no equivalent. There is currently no mechanism to distinguish an agent that has completed 10,000 successful transactions from one that was registered yesterday, no record of dispute history, no behavioral track record that services can query before granting access.

This creates a specific vulnerability: the absence of reputation infrastructure means every agent interaction is effectively a cold-start trust problem. Bad actors can register new agent identities indefinitely with no accumulated history that would reveal their behavior patterns. This structural gap is reflected in enterprise deployment data: the Cloud Security Alliance survey found that only 21% of organizations maintain a real-time inventory of active agents, and nearly 80% cannot trace agent actions to a human sponsor across all environments.2

4. Scope Creep and Least Privilege Failures

AI agents are typically granted permissions appropriate for their most demanding task, not calibrated to each individual action. The principle of least privilege, fundamental to traditional security architecture, is extremely difficult to implement for agents whose action space is dynamic and whose resource requirements vary with each task. The 2025 AI Agent Index (MIT), analyzing 30 high-impact agentic systems, found that 25 of 30 agents disclose no internal safety results and 23 of 30 have no third-party testing information, indicating that permission scoping and safety evaluation are not yet standard practice.7

5. The Verification and Standards Gap

Perhaps the most significant structural vulnerability is the complete absence of any widely adopted agent verification standard. There is no equivalent to SOC 2, ISO 27001, or even a basic behavioral compliance checklist that agent developers can attest to. This creates a market failure: even developers who want to build safe agents have no clear standard to build toward, and consumers who want to use only verified agents have no reliable signal to distinguish them. The Cloud Security Alliance survey found that 45% of security professionals cite "lack of identity standards" as a top concern in agentic AI deployment.2

RFI Question 1(d): How have these threats evolved and how are they likely to change?

The threat landscape has evolved along a consistent trajectory: from theoretical risk to active exploitation, and from chatbot-targeted attacks to agent-targeted attacks with real-world consequences. AI bot traffic grew 5.4x across retail and e-commerce websites in 2025 alone.8 We anticipate three specific near-term evolutions:


2. Security Practices for AI Agent Systems

RFI Question 2(a): What technical controls and practices could improve agent security?

We organize our response around three complementary layers of control, drawn from our work developing the Kynver Verification Standard:

Layer 1: Identity Infrastructure (Prerequisite)

Security controls cannot be meaningfully applied to agents whose identity is unverifiable. We recommend NIST treat agent identity infrastructure as foundational: a prerequisite layer without which downstream controls are substantially less effective.

Specifically, we recommend the adoption of W3C Decentralized Identifier (DID) standards for agent identity, combined with a DNS-style technical ownership challenge that allows developers to prove control of the agents they register. The DID standard is mature, requires no blockchain, and enables cryptographic signing of agent actions โ€” creating non-repudiable audit trails essential for both security and regulatory compliance. The OpenID Foundation's 2025 whitepaper on AI agent identity management reached similar conclusions, recommending evolution of existing identity frameworks to support agent-specific credential patterns.11
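The DNS-style ownership challenge described above can be sketched as follows. This is an illustrative flow under our own naming assumptions (the `_agent-verify` TXT label and the function names are hypothetical), with DNS resolution stubbed out via a callable so the logic is testable without network access:

```python
import secrets

def issue_challenge() -> str:
    """Registry side: mint a one-time token the developer must publish
    in a TXT record on the domain they claim to control."""
    return secrets.token_urlsafe(32)

def verify_ownership(domain: str, expected_token: str, resolve_txt) -> bool:
    """resolve_txt(name) -> list[str]. In production this would be a real
    TXT lookup (e.g. via dnspython); here it is injected for testability.
    The `_agent-verify` label is our illustrative convention."""
    records = resolve_txt(f"_agent-verify.{domain}")
    return any(secrets.compare_digest(r, expected_token) for r in records)
```

In production the resolver callback would perform a real TXT lookup, ideally through a validating (DNSSEC-aware) resolver, and the registry would bind the verified domain into the agent's DID document.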

Layer 2: Behavioral Safety Standards (Developer-Attested)

We have developed a five-pillar Verification Standard that we believe could serve as a practical foundation for NIST guidance. The full standard is submitted as Appendix A; the pillars are:

1. Identity & Ownership: Developer KYC via government-issued ID, DNS-style technical ownership proof, mandatory AI nature disclosure, W3C DID generation.

2. Transparency: Accurate capability disclosure including explicit limitations, data collection and retention practices disclosed in human-readable format, decision transparency on request, training data disclosure per California AB 2013 and EU AI Act requirements.

3. Behavioral Safety: Scope confinement, documented prompt injection resistance with multi-turn attack consideration, human oversight mechanisms including stop command and escalation paths, safe failure behavior. Zero tolerance for impersonation, unauthorized data access, unauthorized financial transactions, user manipulation, and fraud.

4. Data & Privacy: Data minimization, TLS 1.2+ encryption in transit, no undisclosed third-party sharing, user opt-out from training data use. Aligned with GDPR and California AB 2013.

5. Operational Reliability: Minimum 95% monthly uptime, consistent responses under load, 48-hour incident reporting to the platform, 72-hour user notification for breaches, 7-day documentation updates for material capability changes.
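A developer attestation covering these pillars could be represented in machine-readable form so platforms can check completeness automatically. The sketch below is purely illustrative: the field names are our own shorthand, not a published Kynver or NIST schema.

```python
# Illustrative required fields per pillar; names are invented shorthand,
# not a normative schema.
REQUIRED_FIELDS = {
    "identity": ["developer_kyc", "ownership_proof", "did", "ai_disclosure"],
    "transparency": ["capabilities", "limitations", "data_practices"],
    "behavioral_safety": ["scope", "injection_testing", "stop_command"],
    "data_privacy": ["minimization", "tls_min_version", "third_party_sharing"],
    "reliability": ["uptime_target", "incident_reporting_hours"],
}

def validate_attestation(attestation: dict) -> list[str]:
    """Return the missing 'pillar.field' entries; empty list means the
    attestation is structurally complete (not that its claims are true)."""
    missing = []
    for pillar, fields in REQUIRED_FIELDS.items():
        section = attestation.get(pillar, {})
        missing += [f"{pillar}.{f}" for f in fields if f not in section]
    return missing
```

Structural completeness checks like this are the cheap first gate; the substantive checks (consistency against behavior, technical challenges) come afterward.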

Layer 3: Reputation and Track Record Mechanisms

Technical controls address what an agent can do. Reputation systems address what an agent has done. We believe NIST's guidance should explicitly recognize reputation infrastructure as a security mechanism, not merely a commercial feature.

A reputation system that tracks transaction outcomes, dispute rates, and behavioral history creates security properties that pure technical controls cannot: detection of gradual behavioral drift below technical violation thresholds, market-based enforcement through reputation-gated access, cold-start risk mitigation, and audit trails for regulatory compliance. This approach is consistent with the ACM Computing Surveys 2025 paper "AI Agents Under Threat," which identified behavioral track record gaps as one of four critical knowledge gaps in current agent security practice.12
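One simple way such a system can mitigate the cold-start problem described above is Beta-style smoothing: a new agent starts from a conservative prior rather than from an empty (and easily gamed) perfect record. The function and its prior parameters below are illustrative assumptions, not a specification of any deployed scoring rule.

```python
def reputation_score(successes: int, disputes: int,
                     prior_successes: float = 1.0,
                     prior_disputes: float = 4.0) -> float:
    """Beta-smoothed success rate in [0, 1]. The pessimistic prior means a
    freshly registered agent cannot claim a high score, so re-registering
    under a new identity does not reset trust upward. Priors are
    illustrative choices, not calibrated values."""
    total = successes + disputes + prior_successes + prior_disputes
    return (successes + prior_successes) / total
```

With these priors, a brand-new agent scores 0.2, while an agent with 10,000 successful transactions and 20 disputes scores above 0.99: history, not registration, is what earns trust.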


3. Assessment, Monitoring, and Deployment Constraints

RFI Questions 3(a), 3(b), 4(a), 4(b), 4(d)

Our Verification Standard uses a structured self-attestation model, the Kynver Verification Questionnaire, that we believe has broader applicability as an assessment methodology. Developers complete a structured questionnaire covering all five security pillars with specific, verifiable claims. An AI moderation agent performs consistency checks against the attestation and transaction history. Technical challenges confirm specific claims. Ongoing behavioral monitoring surfaces post-certification compliance failures.

The most useful information for agent security assessment: capability scope documentation (including explicit limitations), tool and permission inventory, failure mode documentation, transaction and dispute history, and confirmed technical ownership by an identity-verified developer. The 2025 AI Agent Index found a significant transparency gap: developers share far more about product features than safety practices, with 25 of 30 frontier agents disclosing no internal safety results.7

On reversibility: the state of practice for rollback in agent actions is poor. Most agent frameworks have no native rollback mechanism. The most practical mitigation is pre-execution escrow combined with human review windows for high-stakes actions. We recommend NIST guidance specifically address the reversibility problem and encourage development of agent action escrow patterns as a security primitive.

The most scalable monitoring approach we have identified is reputation-based anomaly detection: using statistical deviation from an agent's established behavioral baseline as a security signal. Transaction success rate anomalies, dispute rate spikes, scope deviation detection, and identity challenge failures all provide security signals that point-in-time assessment cannot.
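A minimal version of this baseline-deviation signal is a z-score on an agent's recent failure rate against its own history (a simplified one-sided proportion test; the threshold and function name are our illustrative choices):

```python
import math

def is_anomalous(hist_failures: int, hist_total: int,
                 recent_failures: int, recent_total: int,
                 z_threshold: float = 3.0) -> bool:
    """Flag when the recent window's failure count sits far above what the
    agent's own historical failure rate predicts (z-score under a binomial
    approximation). Threshold of 3.0 is an illustrative default."""
    p = hist_failures / hist_total              # established baseline rate
    expected = p * recent_total                 # failures the baseline predicts
    std = math.sqrt(recent_total * p * (1 - p)) or 1.0  # guard p == 0
    z = (recent_failures - expected) / std
    return z > z_threshold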


4. Policy Recommendations and Research Priorities

RFI Questions 5(a), 5(b), 5(c)

The single highest-impact action NIST could take to accelerate developer adoption of agent security practices is publishing a practical, developer-accessible Agent Security Profile, analogous to the OWASP Top 10 in its accessibility, that translates the AI RMF into specific, implementable requirements for individual developers and small teams. A Cisco survey found that only 29% of organizations consider themselves prepared to secure agentic AI deployments, underscoring the urgent need for accessible, actionable guidance.13

We identify three areas where government-ecosystem collaboration is most urgent:

Priority research areas: multi-agent trust propagation, standardized prompt injection detection metrics, behavioral equivalence and identity continuity across model updates, and reputation gaming resistance. These areas are identified in the ACM Computing Surveys 2025 survey of AI agent security as among the most critical open research questions.12

Core recommendation: Recognize the complementary relationship between technical security controls and market-based trust mechanisms. Security frameworks that focus solely on preventing bad behavior are necessary but insufficient. The ecosystem also needs infrastructure that makes trustworthy behavior visible, verifiable, and economically rewarded. Reputation systems, verification badges, and behavioral track records are not optional commercial features โ€” they are security infrastructure.


Conclusion

Kynver welcomes NIST's active engagement in AI agent security standards. The challenges this RFI identifies (identity gaps, prompt injection, behavioral safety, assessment methodologies) are precisely the problems we are building infrastructure to address in the commercial market.

Our core recommendation, restated: NIST should recognize technical security controls and market-based trust mechanisms as complementary, and treat reputation systems, verification badges, and behavioral track records not as optional commercial features but as security infrastructure in their own right.

We believe the agent economy will be significantly safer and more broadly beneficial if the foundational identity and trust infrastructure is built on open standards, accessible to developers of all sizes, and governed with appropriate public oversight. Kynver is committed to building on open standards and contributing to the standard-setting process that NIST is leading.

We are available to discuss any aspect of these comments with NIST staff and would welcome the opportunity to participate in any convenings, listening sessions, or working groups CAISI organizes as part of this initiative.

Respectfully submitted,
Kynver
kynver.com · March 6, 2026


Appendix A: Kynver Verification Standard v1.0 – Summary

The full Verification Standard is included in Kynver's product specification. This appendix provides a summary of the five pillars and their core requirements for NIST's reference.

Pillar 1: Identity & Ownership

Pillar 2: Transparency

Pillar 3: Behavioral Safety

Pillar 4: Data & Privacy

Pillar 5: Operational Reliability

Regulatory Alignment

The standard is designed to support developer compliance with: NIST AI RMF 1.0, NIST AI Agent Standards Initiative (Feb 2026), EU AI Act (August 2026 enforcement), OWASP LLM Top 10 (2025), ISO/IEC 42001, California AB 2013, and GDPR.

References

  1. The AI Agent Identity Crisis: 80% of Agents Don't Properly Identify Themselves, 80% of Sites Don't Verify. DataDome Galileo Threat Research Team · February 26, 2026 · Future of Search & Discovery Report (with AWS, Botify, Retail Economics). datadome.co/threat-research/ai-agent-identity-crisis/
  2. Securing Autonomous AI Agents: Survey of 285 IT and Security Professionals. Cloud Security Alliance / Strata Identity · February 5, 2026. strata.io, "The AI Agent Identity Crisis: New Research Reveals a Governance Gap"
  3. OWASP Top 10 for LLM Applications 2025. Open Web Application Security Project · 2025. owasp.org/www-project-top-10-for-large-language-model-applications/
  4. InjecAgent: Benchmarking Indirect Prompt Injections in Tool-Integrated Large Language Model Agents. Zhan, Q., Liang, Z., Ying, Z., and Kang, D. · ACL 2024 Findings · DOI: 10.18653/v1/2024.findings-acl.624. aclanthology.org/2024.findings-acl.624/
  5. International AI Safety Report 2026: Prompt Injection Attack Rates Against Best-Defended Models. International AI Safety Report · 2026. blog.cyberdesserts.com, "Prompt Injection Attacks: Examples and Defences" (citing IASR 2026)
  6. Securing AI Agents Against Prompt Injection Attacks: A Comprehensive Benchmark and Defense Framework. arXiv · 2025 (multi-layer defenses reduce attack success from 73.2% to 8.7%). arxiv.org/abs/2510.05244
  7. 2025 AI Agent Index: Further Details. MIT AI Agent Index · December 31, 2025. aiagentindex.mit.edu/2025/further-details/
  8. DataDome and Botify Partner: Future of Search and Discovery Report. DataDome / Botify / Retail Economics · March 5, 2026 (AI bot traffic grew 5.4× in 2025; Visa reported a 25% rise in bot-initiated transactions). datadome.co/press/datadome-and-botify-partner
  9. Stanford AI Index Report 2025. Stanford University HAI · 2025 (infectious jailbreak reached near-total propagation across a 1M-agent network in 27–31 rounds). aiindex.stanford.edu
  10. Non-Human Identities: Agentic AI's New Frontier of Cybersecurity Risk. World Economic Forum · October 2025 (citing Gartner: 40%+ of agentic AI projects canceled by 2027 without risk controls). weforum.org/stories/2025/10/non-human-identities-ai-cybersecurity/
  11. New Whitepaper Tackles AI Agent Identity Challenges. OpenID Foundation · October 7, 2025. openid.net/new-whitepaper-tackles-ai-agent-identity-challenges/
  12. AI Agents Under Threat: A Survey of Key Security Challenges and Future Pathways. ACM Computing Surveys · February 2025. dl.acm.org/journal/csur
  13. State of AI Security 2026. Cisco · 2026 (only 29% of organizations prepared to secure agentic AI deployments). cisco.com, AI Security Research

The full Kynver Verification Standard v1.0, including detailed requirements for all five pillars, the verification process, revocation criteria, and the regulatory alignment mapping, is available on request and published in full on this site. For questions, contact hello@kynver.com.