What Is Know Your Agent (KYA)?
Know Your Agent (KYA) is the process of verifying the identity, ownership, capabilities, and behavioural constraints of an AI agent before allowing it to act on behalf of a business or individual.
Definition
Know Your Agent (KYA) is a due-diligence framework for verifying the identity, provenance, ownership, and operational constraints of an AI agent before that agent is trusted to act autonomously on behalf of a person, business, or system. The term is a deliberate parallel to Know Your Customer (KYC) — the regulatory identity-verification process that banks and financial institutions apply to human clients — but KYA addresses a fundamentally different problem: not who is paying, but what is acting, who built it, what it is authorised to do, and whether those claims can be independently verified.
An AI agent, in this context, is any software system that perceives inputs, makes decisions, and takes actions — including calling external APIs, executing financial transactions, sending communications, or modifying data — with some degree of autonomy. As AI agents become embedded in enterprise workflows, customer-facing services, and inter-business transactions, the question of whether a given agent is who it claims to be, and whether it will behave within agreed boundaries, becomes a commercial and legal necessity rather than a technical curiosity.
KYA encompasses four verification dimensions. First, identity verification: confirming that the agent has a stable, cryptographically anchored identifier that cannot be silently changed. Second, ownership verification: confirming that the agent was built and is operated by a specific, registered legal entity. Third, capability declaration: confirming that the agent's permitted actions, prohibited actions, and data access scope are publicly declared and machine-readable. Fourth, behavioural constraint verification: confirming that the agent operates within declared limits — maximum transaction values, human approval requirements, authorised API domains — and that those limits are sealed against post-deployment modification.
The concept of KYA emerged from the convergence of two trends: the rapid proliferation of autonomous AI agents in commercial settings, and the growing recognition that existing identity frameworks — designed for humans and organisations — do not map cleanly onto software agents that can be cloned, modified, and redeployed without any visible change to their external presentation.
How Know Your Agent works
A KYA process begins with the agent builder — the individual or organisation that created the AI agent — submitting a structured identity declaration for the agent. This declaration covers the agent's name, version, primary purpose, authorised and prohibited actions, knowledge sources, data retention policy, model configuration, maximum transaction authority, and human approval requirements. The declaration is not a marketing description; it is a formal, machine-readable specification of what the agent is and what it is allowed to do.
Once the declaration is submitted, it is canonicalised — serialised into a deterministic JSON-LD document with keys sorted alphabetically — and then SHA-256 hashed. The resulting 64-character hexadecimal string becomes the agent's system prompt hash: a cryptographic fingerprint of the agent's identity at the moment of registration. Any subsequent change to the agent's system prompt, capability set, or constraint declarations will produce a different hash, making silent modification detectable. This is the same sealing mechanism that the SHA-256 hashing standard defines for business passports.
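The canonicalise-then-hash step can be sketched in a few lines of Python. The article specifies only "keys sorted alphabetically"; the compact separators and UTF-8 encoding below are assumptions about the canonicalisation rules, and the declaration fields shown are illustrative:

```python
import hashlib
import json

def seal_declaration(declaration: dict) -> str:
    """Canonicalise a declaration (alphabetically sorted keys, compact
    separators — assumed details) and return its SHA-256 hex digest."""
    canonical = json.dumps(declaration, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Illustrative declaration, not the full AJP schema
declaration = {
    "name": "Aria",
    "version": "1.0",
    "maxTransactionUsd": 50,
}

digest = seal_declaration(declaration)
print(len(digest))  # 64 hex characters
```

Because the serialisation is deterministic, the same declaration always produces the same hash regardless of the order in which its keys were originally written.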
The sealed identity document is then published at a permanent, publicly accessible URL — the agent's verification endpoint. This endpoint serves a JSON-LD document conforming to the AJP AI Agent Schema Standard, which defines the vocabulary for expressing agent identity in machine-readable form. The endpoint is accessible to any AI system, automated procurement tool, or human reviewer without authentication, enabling real-time verification of agent identity at the point of interaction.
To illustrate with a concrete example: a company deploying a customer service agent called "Aria" would submit Aria's full specification — including the fact that Aria is authorised to process refunds up to $50, is prohibited from accessing payment card data, and requires human approval for any transaction above $50. This specification is hashed and published at a URL such as https://aiverified.io/agents/[hash]/. When a downstream system — a payment processor, a CRM platform, or another AI agent — needs to verify that it is interacting with the genuine Aria and not a spoofed or modified version, it queries that URL and compares the returned hash against the hash it was given at onboarding. If the hashes match, the agent is verified. If they do not, the interaction is flagged.
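The verification step described above is a straightforward comparison. A minimal sketch, assuming the passport document carries its hash in a `systemPromptHash` field (the field name used elsewhere in this article) and that the document has already been fetched from the verification endpoint:

```python
import json

def check_passport(passport_json: str, onboarding_hash: str) -> str:
    """Compare the hash published in a fetched passport document against
    the hash a counterparty stored at onboarding. Returns 'verified' on
    a match, 'flagged' otherwise."""
    passport = json.loads(passport_json)
    if passport.get("systemPromptHash") == onboarding_hash:
        return "verified"
    return "flagged"

# Simulated response from the verification endpoint
fetched = json.dumps({"name": "Aria", "systemPromptHash": "abc123"})
print(check_passport(fetched, "abc123"))  # verified
print(check_passport(fetched, "def456"))  # flagged
```

In production the JSON would come from an HTTP GET against the agent's public endpoint; the comparison itself is the same either way.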
KYA also addresses the builder verification problem. Knowing that an agent has a stable hash is useful, but it is more useful to know that the agent was built by a specific, verifiable legal entity. The AJP schema includes an ajp:agentBuilder property that links the agent's passport to the builder's business passport — a separate, independently verifiable record anchored to a national business registry. This creates a two-layer trust chain: the agent's identity is anchored to its builder's identity, and the builder's identity is anchored to a government registry.
The final component of a complete KYA process is version chain integrity. When an agent is updated — new capabilities added, constraints modified, model configuration changed — a new version of the passport is issued. The new version references the previous version's hash, creating an auditable chain of custody. Any party that previously trusted version 1.0 of an agent can verify that version 2.0 is a legitimate update from the same builder, rather than a replacement by a different actor.
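Walking the version chain back to the original registration can be sketched as a simple linked-list traversal. The in-memory `history` dict stands in for fetching each passport from its public URL, and the placeholder hashes are illustrative:

```python
def trace_version_chain(passports: dict, current_hash: str) -> list:
    """Follow ajp:previousVersion links from the current passport back
    to the original registration. `passports` maps hash -> passport
    document (each would be fetched from its URL in practice)."""
    chain = []
    h = current_hash
    while h is not None:
        chain.append(h)
        h = passports[h].get("ajp:previousVersion")
    return chain

# Two-version history: v2 references v1; v1 has no predecessor
history = {
    "hash-v2": {"ajp:previousVersion": "hash-v1"},
    "hash-v1": {"ajp:previousVersion": None},
}
print(trace_version_chain(history, "hash-v2"))  # ['hash-v2', 'hash-v1']
```

A counterparty that trusted version 1.0 accepts version 2.0 only if this traversal terminates at the hash it originally stored.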
Why Know Your Agent matters for businesses
The commercial case for KYA rests on a simple observation: AI agents are increasingly being granted access to systems, data, and financial authority that would previously have required a human employee to be vetted, contracted, and supervised. A human employee goes through background checks, signs employment contracts, and operates within a legal framework that assigns liability for their actions. An AI agent, in the absence of a KYA framework, can be deployed with none of these safeguards — and the business that deploys it, or the business that accepts it as a counterparty, may have no reliable way to verify what it is or what it is authorised to do.
The risk is not hypothetical. As multi-agent systems become common — where one AI agent delegates tasks to other agents, which in turn call external services — the attack surface for agent impersonation, capability spoofing, and constraint bypass grows significantly. A KYA framework does not eliminate these risks, but it creates the infrastructure for detecting and responding to them.
| Without KYA | With KYA |
|---|---|
| No way to verify an agent's identity at the point of interaction — any system can claim to be any agent. | Every agent has a cryptographic hash that uniquely identifies its current specification; impersonation is detectable. |
| Agent capabilities and constraints are undeclared or exist only in internal documentation inaccessible to counterparties. | Capabilities, permitted actions, and prohibited actions are publicly declared in machine-readable JSON-LD, queryable by any system. |
| No link between an agent and the legal entity responsible for its behaviour — liability is unclear when an agent causes harm. | Every agent passport links to a verified builder business passport anchored to a national registry, establishing a clear chain of legal accountability. |
| Silent modification of an agent's behaviour is undetectable — a builder can change what an agent does without notifying counterparties. | Any change to an agent's specification produces a different hash; counterparties can detect modifications by comparing hashes at each interaction. |
| Procurement teams evaluating third-party agents have no standardised format for comparing capability declarations across vendors. | Standardised AJP schema enables automated procurement tools to compare agent specifications, constraints, and compliance flags in a consistent format. |
AI Verified handles KYA automatically. Every AI Agent Passport issued through aiverified.io includes a complete KYA record — system prompt hash, capability declaration, builder verification, and version chain — published as machine-readable JSON-LD at a permanent public URL. No developer required. Get your free passport →
How KYA differs from KYC
Know Your Customer (KYC) is a regulatory requirement, originating in anti-money-laundering legislation, that obligates financial institutions to verify the identity of their clients before providing services. KYC focuses on a human or legal entity: name, date of birth, address, government-issued identification, and — for businesses — registration documents and beneficial ownership. The purpose of KYC is to prevent financial crime by ensuring that institutions know who they are dealing with.
KYA shares the same underlying logic — verify before you trust — but differs in three fundamental ways. First, the subject of verification is software, not a person or legal entity. An AI agent does not have a passport, a tax number, or a registered address. Its identity is defined by its specification: what it does, what it is allowed to do, and who built it. Second, the threat model is different. KYC guards against fraud, money laundering, and sanctions evasion by humans. KYA guards against agent impersonation, capability spoofing, constraint bypass, and unauthorised delegation in autonomous systems. Third, the verification mechanism is different. KYC relies on document submission and human review. KYA relies on cryptographic hashing and machine-readable declarations that can be verified programmatically in milliseconds.
There is also a temporal difference. KYC is typically a one-time or periodic process: a customer is verified at onboarding and re-verified on a schedule. KYA is designed to be continuous: an agent's hash can be checked at every interaction, not just at onboarding, because the verification endpoint is always available and the check is computationally trivial. This continuous verification model is essential in multi-agent systems where an agent may interact with hundreds of counterparties per day.
Despite these differences, KYA and KYC are complementary rather than competing. A complete enterprise AI governance framework will apply KYC to the humans and organisations that build and operate AI agents, and KYA to the agents themselves. The builder verification layer of the AJP schema — which links every agent passport to a verified business passport — is precisely this bridge: it applies KYC-style business identity verification to the builder, and KYA-style agent identity verification to the agent.
Why most businesses don't have this
The first barrier is the absence of a recognised standard. Until the publication of the AJP AI Agent Schema Standard, there was no agreed vocabulary for expressing agent identity in machine-readable form. Businesses that wanted to implement KYA had to invent their own formats — typically proprietary JSON schemas or internal documentation — which were not interoperable with other systems and could not be queried by external counterparties. Without a standard, there is no network effect: a KYA record is only useful if the systems that need to verify agents know how to read it.
The second barrier is the cryptographic sealing problem. A capability declaration that can be silently modified after publication provides no security guarantee. To be useful, a KYA record must be sealed against modification in a way that is detectable by third parties. Implementing this correctly requires understanding of canonical JSON serialisation — the process of producing a deterministic string representation of a JSON document before hashing — which is a non-trivial technical requirement. Most development teams building AI agents are focused on the agent's capabilities, not on the identity infrastructure, and the cryptographic sealing step is frequently omitted or implemented incorrectly.
The third barrier is the publication and hosting problem. A KYA record is only useful if it is published at a stable, publicly accessible URL that is available to any system that needs to verify the agent. This requires infrastructure: a server, a domain, a content delivery mechanism, and a commitment to maintaining the URL permanently. For a startup building its first AI agent, the overhead of setting up and maintaining this infrastructure — separate from the agent itself — is a significant deterrent. The result is that KYA records, even when they exist, are often stored in internal systems rather than published at accessible endpoints.
How aiverified.io provides KYA infrastructure
aiverified.io addresses all three barriers through its AI Agent Passport system. When a builder registers an agent, the platform constructs a canonical JSON-LD document using the AJP AI Agent Schema Standard vocabulary, serialises it with alphabetically sorted keys, and computes a SHA-256 hash of the result. This hash becomes the agent's systemPromptHash — the cryptographic fingerprint of the agent's identity at registration. The hash is stored immutably; any subsequent change to the agent's specification requires issuing a new passport version, which references the previous version's hash in the ajp:previousVersion field.
The sealed passport is published at two permanent endpoints. The human-readable verification page is served at https://aiverified.io/agents/[hash]/, where any visitor can review the agent's full specification, capability declaration, and builder information. The machine-readable JSON-LD document is served at https://aiverified.io/agents/[hash].json with Content-Type: application/ld+json and Access-Control-Allow-Origin: *, making it queryable by any automated system without authentication. Both endpoints are served with a 1-hour cache and stale-while-revalidate headers, ensuring availability without sacrificing freshness.
The builder verification layer is implemented through the ajp:agentBuilder property in the agent passport, which contains a reference to the builder's business passport at https://aiverified.io/v/[businessHash]/. The business passport is itself a verified record anchored to a national business registry — for example, the Florida Division of Corporations for a US LLC, or Companies House for a UK limited company. This creates the two-layer trust chain: an automated system querying an agent passport can follow the agentBuilder reference to verify the builder's legal identity without any human intervention.
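Following the two-layer trust chain programmatically amounts to reading the `ajp:agentBuilder` reference and fetching the document it points to. This sketch assumes the reference is a JSON-LD node with an `@id`, and the `registry` field on the business passport is a hypothetical name for illustration; the stub `fake_fetch` stands in for an HTTP GET of the public endpoint:

```python
def resolve_trust_chain(agent_passport: dict, fetch) -> dict:
    """Follow ajp:agentBuilder from an agent passport to the builder's
    business passport. `fetch` abstracts retrieval (an HTTP GET in
    production, a stub here)."""
    builder_url = agent_passport["ajp:agentBuilder"]["@id"]
    return {"builder_url": builder_url, "business_passport": fetch(builder_url)}

# Stub fetch with a hypothetical registry field
def fake_fetch(url: str) -> dict:
    return {"@id": url, "registry": "Companies House"}

agent = {"ajp:agentBuilder": {"@id": "https://aiverified.io/v/abc123/"}}
result = resolve_trust_chain(agent, fake_fetch)
print(result["builder_url"])  # https://aiverified.io/v/abc123/
```

The point of the abstraction is that the whole chain — agent to builder to registry record — can be resolved by machines without human review.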
The capability declaration is expressed using the ajp:authorisedActions, ajp:prohibitedActions, ajp:maxTransactionUsd, ajp:requiresHumanApproval, and ajp:authorisedApiDomains properties. These properties are not free-text descriptions; they are structured fields with defined data types, enabling automated compliance checking. A procurement system can query an agent passport and programmatically verify that the agent's declared maximum transaction authority does not exceed the buyer's policy limit, without any human review of the declaration.
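Because these fields are typed rather than free text, the procurement check described above reduces to a numeric comparison. A minimal sketch, failing closed when the declaration is missing:

```python
def within_policy(agent_passport: dict, policy_limit_usd: float) -> bool:
    """Automated compliance check: the agent's declared maximum
    transaction authority must not exceed the buyer's policy limit.
    Fail closed if the field is absent."""
    declared = agent_passport.get("ajp:maxTransactionUsd")
    if declared is None:
        return False
    return float(declared) <= policy_limit_usd

aria = {"ajp:maxTransactionUsd": 50}
print(within_policy(aria, 100.0))  # True  — within the buyer's limit
print(within_policy(aria, 25.0))   # False — declared authority too high
```

The same pattern extends to the other structured fields, such as checking `ajp:authorisedApiDomains` against an allow-list.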
Frequently asked questions
Is KYA a regulatory requirement?
As of April 2026, KYA is not a statutory regulatory requirement in any jurisdiction. However, the EU AI Act — which came into force in August 2024 and applies to AI systems deployed in the European Union — imposes transparency and documentation obligations on high-risk AI systems that are substantively similar to KYA requirements. Specifically, Article 13 of the EU AI Act requires that high-risk AI systems be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable deployers to interpret the system's output and use it appropriately. The AJP AI Agent Schema Standard's capability declaration and constraint fields directly address this transparency requirement. While KYA is not yet mandated by name, the regulatory direction of travel in the EU, UK, and US strongly suggests that structured agent identity disclosure will become a compliance requirement for enterprise AI deployments within the next two to three years.
What is the difference between an agent's system prompt hash and its passport hash?
The system prompt hash is a SHA-256 hash of the agent's system prompt text alone — it is a fingerprint of the instructions that govern the agent's behaviour. The passport hash is a SHA-256 hash of the agent's complete identity document, which includes the system prompt hash as one of many fields alongside the agent's name, version, capability declaration, builder reference, and constraint set. The passport hash therefore seals the entire identity record, not just the system prompt. If the system prompt changes, the system prompt hash changes, which changes the passport hash — so both are invalidated simultaneously. This means that a counterparty who stores the passport hash at onboarding can detect any change to any part of the agent's specification, not just changes to the system prompt.
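The nesting relationship can be demonstrated directly: because the system prompt hash is a field inside the document that the passport hash seals, changing the prompt invalidates both. The passport structure below is a deliberately minimal stand-in for the full schema:

```python
import hashlib
import json

def sha256_hex(s: str) -> str:
    return hashlib.sha256(s.encode("utf-8")).hexdigest()

def build_passport(system_prompt: str, name: str) -> dict:
    """Minimal passport: the system prompt hash is one field among many."""
    return {"name": name, "systemPromptHash": sha256_hex(system_prompt)}

def passport_hash(passport: dict) -> str:
    """Hash of the whole identity record, sealing every field."""
    canonical = json.dumps(passport, sort_keys=True, separators=(",", ":"))
    return sha256_hex(canonical)

p1 = build_passport("You are Aria. Max refund $50.", "Aria")
p2 = build_passport("You are Aria. Max refund $500.", "Aria")

# Changing the prompt changes the inner hash, which changes the outer hash
assert p1["systemPromptHash"] != p2["systemPromptHash"]
assert passport_hash(p1) != passport_hash(p2)
```

A change to any other field (name, constraints, builder reference) would likewise change the passport hash while leaving the system prompt hash intact, which is why counterparties store the outer hash.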
Can an agent have multiple passports?
An agent can have multiple passport versions — one for each time its specification is updated — but it has only one active passport at any given time. When a new version is issued, the previous version is marked as superseded and the new version's passport includes a reference to the previous version's hash in the ajp:previousVersion field. This creates an auditable version chain: any party can trace the full history of an agent's specification from its initial registration to its current version. A counterparty that was onboarded with version 1.0 of an agent can verify that the version 2.0 they are now interacting with is a legitimate update from the same builder, rather than a replacement by a different actor, by following the version chain back to the original registration.
How does KYA handle agents that are updated frequently?
The KYA framework distinguishes between two types of updates. A material update — one that changes the agent's capability declaration, constraint set, or system prompt — requires a new passport version to be issued, which produces a new hash. Counterparties who have stored the previous hash will detect the change at the next verification check and can decide whether to accept the updated version. A non-material update — such as a bug fix that does not change the agent's declared behaviour — does not require a new passport version, because the agent's specification has not changed in any way that affects the trust relationship with counterparties. The builder is responsible for determining which updates are material, and the AJP schema provides guidance on this distinction through its versioning fields.
What happens if an agent passport is revoked?
If an agent is found to have violated its declared constraints — for example, by executing transactions above its declared maximum authority, or by accessing API domains not listed in its authorised domains — its passport can be revoked. A revoked passport is not deleted; it remains accessible at its permanent URL with a status: "revoked" field and a revokedReason field explaining the grounds for revocation. This means that any system querying the passport will immediately see the revocation status and can refuse to interact with the agent. The revocation record is permanent and auditable, creating a public record of the agent's compliance history. On aiverified.io, revocation is triggered either by the builder (voluntary revocation, for example when an agent is decommissioned) or by the platform following a dispute resolution process that reaches an Outcome B or C finding.
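Because a revoked passport stays published with its status fields, the counterparty-side check is a simple field read. This sketch uses the `status` and `revokedReason` field names as the article describes them; treat the exact shape as illustrative:

```python
def check_revocation(passport: dict):
    """Return (allowed, reason). A revoked passport remains published
    with status and revokedReason fields, so no special lookup is
    needed beyond fetching the passport itself."""
    if passport.get("status") == "revoked":
        return False, passport.get("revokedReason", "unspecified")
    return True, None

active = {"status": "active"}
revoked = {
    "status": "revoked",
    "revokedReason": "exceeded declared transaction authority",
}

print(check_revocation(active))   # (True, None)
print(check_revocation(revoked))  # (False, 'exceeded declared transaction authority')
```

A querying system would run this check before every interaction, alongside the hash comparison, and refuse to proceed on a revoked status.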
How does KYA relate to AI agent security more broadly?
KYA is an identity and transparency layer, not a security layer in the traditional sense. It does not prevent a malicious actor from building an agent that claims false capabilities, nor does it prevent a legitimate agent from being compromised after deployment. What KYA provides is the infrastructure for detecting these events: a malicious agent that claims false capabilities will have a different hash from any legitimate agent with those capabilities, making the false claim detectable by any system that checks the hash. A compromised agent — one whose system prompt has been modified after deployment — will produce a different hash from its registered passport, making the compromise detectable. KYA is therefore a necessary but not sufficient component of a complete AI agent security posture, complementing technical controls such as sandboxing, rate limiting, and anomaly detection.
Sources and further reading
- Regulation (EU) 2024/1689 — Artificial Intelligence Act — Official Journal of the European Union
- AJP AI Agent Schema Standard v0.2.0 — Anthony James Peacock
- Know Your Customer — Wikipedia
- FATF Recommendations on Customer Due Diligence — Financial Action Task Force
- Organization — Schema.org