| Internet-Draft | API Index | April 2026 |
| Rehfeld | Expires 26 October 2026 | [Page] |
The internet was designed for human actors. Its discovery infrastructure — search engines, directories, and hyperlinked documents — assumes a human reading and navigating. Autonomous agents (bots) operating on the internet today face a structural gap: there is no machine-native, globally accessible index of services they can consume.¶
This document proposes the API Index (APIX): a HATEOAS-based, globally accessible, commercially sustainable service discovery infrastructure designed for autonomous agents as its primary consumers. The APIX provides a central, always-up-to-date, searchable index of machine-consumable API services, together with a structured three-dimensional trust model that allows consuming agents to apply their own trust policies against verifiable metadata.¶
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.¶
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.¶
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."¶
This Internet-Draft will expire on 26 October 2026.¶
Copyright (c) 2026 IETF Trust and the persons identified as the document authors. All rights reserved.¶
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document.¶
The internet's foundational infrastructure — HTTP, HTML, DNS, and search engines — was designed with human actors as the primary consumers. Web pages render visual layouts for human eyes. CAPTCHA systems explicitly discriminate against non-human access. Discovery mechanisms such as search engines index content for human-readable navigation.¶
Autonomous agents — software programs that independently execute tasks, consume APIs, and interact with external services without per-action human instruction — are not recognized as legitimate, first-class internet participants in this architecture. They are systematically treated as threats to be filtered, blocked, or rate-limited.¶
This situation is changing. The rapid growth of large language model-based agents, robotic process automation, and programmatic service consumers means that non-human actors now represent a significant and growing proportion of internet traffic. As this proportion increases, internet service providers will increasingly need to serve autonomous agents as a recognized user class alongside humans.¶
The API Index is premised on this trajectory: bots are becoming first-class internet participants, and the infrastructure to support them — starting with service discovery — does not yet exist.¶
The API Index was not conceived in the abstract. It emerged from a concrete practical failure.¶
A buying bot was built for a private use case: monitoring online shops for a specific product and purchasing it automatically the moment it became available. This is a straightforward task for an autonomous agent — exactly the kind of task agents are well-suited for.¶
The bot failed, not because the task was technically complex, but because the internet's infrastructure is actively hostile to it:¶
HTML-only product pages. Product availability, price, and purchase state were encoded in HTML rendered for a human eye. No machine-readable API existed. The bot had to parse HTML — fragile, maintenance-intensive, and broken by every redesign.¶
Cloudflare Bot Management and equivalent shields. The majority of commercial web services now sit behind bot mitigation infrastructure. Cloudflare Bot Management, and equivalent products from Akamai, Imperva, and others, are deployed specifically to detect and block non-human request patterns. Repeated automated requests — even at modest frequency — trigger rate limiting, CAPTCHA challenges, or silent blocking. A buying bot that polls a product page to detect availability is treated identically to a malicious scraper or a DDoS participant.¶
CAPTCHA payment barriers. Even when product pages were reachable, payment flows required solving CAPTCHAs that explicitly excluded non-human actors. The purchasing step — the final, necessary action — was deliberately made inaccessible to the bot.¶
Proxy network pollution. To work around rate limits and bot detection, the bot required a rotating proxy network — different IP addresses on each request to disguise its automated origin. This is not a solution: it pollutes internet traffic with avoidable requests, raises the cost of operation, and contributes directly to the adversarial dynamic between bots and infrastructure operators. Every proxy request is a wasted roundtrip that a machine-readable API endpoint would have made unnecessary.¶
Polling as the only state-change mechanism. Because the bot had no way to subscribe to product availability events, it had to poll the product page continuously. This is architecturally wasteful: the bot consumes server resources and network bandwidth to repeatedly ask a question whose answer has not changed. If the service had provided a notification registration endpoint — a webhook, a server-sent event stream, or a WebSocket channel — the bot could subscribe once and receive a push notification when the product became available. No polling. No proxy network. No CAPTCHA exposure.¶
These are not edge cases. They are the standard experience for any autonomous agent attempting to consume a commercial internet service today. The buying bot illustrates why the API Index is necessary: not as an academic exercise, but as the infrastructure layer that makes autonomous agents functional participants in the commercial internet.¶
When an autonomous agent must fulfill a task that requires an external service, it faces a fundamental discovery problem: how does it find services that can fulfill its requirement?¶
Current approaches are inadequate:¶
Hardcoded URLs: brittle, require human maintenance, do not adapt to new or changed services.¶
LLM training data: stale, non-authoritative, not machine-verifiable.¶
Human-curated lists: do not scale, not machine-navigable, lack structured metadata.¶
Web search: returns HTML documents designed for humans, not structured service descriptions for agents.¶
What is needed is a machine-native equivalent of a search engine: a global, always-current, structured index of services that autonomous agents can query by capability, trust level, liveness, and other machine-relevant criteria.¶
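As a non-normative illustration of what machine-native discovery looks like from the consuming agent's side, the sketch below resolves named link relations from a single entry-point resource. The `_links` layout, the relation names, and the URLs are assumptions for illustration only; the actual hypermedia format is defined by the Index API.

```python
def follow(resource: dict, rel: str) -> str:
    """Return the href of a named link relation from a HATEOAS resource.

    The "_links" shape shown here is an assumed, illustrative layout,
    not the normative Index API format.
    """
    links = resource.get("_links", {})
    if rel not in links:
        raise KeyError(f"no link relation {rel!r} in resource")
    return links[rel]["href"]

# An agent starts from the single entry point and navigates by links,
# never hardcoding any URL other than the root.
root = {
    "_links": {
        "self": {"href": "https://api-index.org/"},
        "search": {"href": "https://api-index.org/search"},
        "registries": {"href": "https://api-index.org/registry"},
    }
}
```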
The APIX is not the first attempt at a global service registry. Prior efforts must be understood explicitly so that their failure modes are not repeated.¶
UDDI (Universal Description, Discovery and Integration)
UDDI was a SOAP-era standard for a global service registry with the same
conceptual goal as APIX, published as an OASIS Committee Draft in October
2004 (editors: Clement, Hately, von Riegen, Rogers). It failed for three
reasons: (1) extreme complexity of the XML-based data model; (2) no
automatic verification — all data was self-asserted with no crawling or
validation; (3) no adoption incentive — there was no commercial model to
sustain registration or discovery. APIX addresses all three directly: a
simple JSON manifest, automated spider verification, and a commercial tier
model.¶
robots.txt (Robots Exclusion Protocol)
Machine-readable, but concerned with exclusion — telling crawlers what not
to access — not with discovery of capabilities. Per-domain only. Not a
registry.¶
MCP (Model Context Protocol)
Defines tool and capability descriptions for LLM-based agents. Excellent
for consumption once a server URL is known. Does not address the discovery
problem: there is no index of MCP servers. APIX is complementary to MCP —
it can index MCP servers as one supported spec type.¶
Well-Known URIs (RFC 8615)
Per-domain machine-readable metadata at /.well-known/. Useful for
per-service metadata but requires the consumer to already know the domain.
No cross-service search or global index.¶
DNS
DNS resolves names to addresses but carries no capability semantics. It is
an architectural analogy for APIX's federation model, not a comparable system.¶
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119].¶
Agent / Bot
An autonomous software program that independently executes tasks by consuming
external services, without per-action human instruction. The terms are used
interchangeably in this document.¶
Service
A machine-consumable API offered by an organisation, registered in the APIX,
and described by an APIX Manifest.¶
Service Owner
The organisation responsible for registering, maintaining, and operating a
Service in the APIX.¶
APIX Manifest (APM)
The structured metadata document that describes a Service to the APIX,
including its technical specification reference, capability taxonomy,
trust metadata, and commercial terms.¶
API Index (APIX)
The global, centralised index of registered Services, operated by the Bot
Standards Foundation, queryable by autonomous agents via the Index API.¶
Index API
The HATEOAS-compliant HTTP API exposed by the APIX for agent discovery and
navigation.¶
Spider
The automated crawler operated by the APIX that verifies registered Services
by reading their technical specifications and performing liveness checks.¶
Accredited Verifier
A trusted third-party organisation, accredited by the Bot Standards Foundation, that
performs human-intensive trust verification at Organisation levels O-3 and
O-4.¶
Accredited Regional Representative
An organisation accredited by the Bot Standards Foundation to operate commercial
onboarding, contracting, and customer relationships within a defined
geographic jurisdiction, under the Bot Standards Foundation's standards and governance.¶
Trust Policy
A set of minimum trust requirements expressed by a consuming agent that a
Service must satisfy before the agent will use it.¶
Liveness
The confirmed operational status and response availability of a Service,
as measured by the APIX Spider at a frequency determined by the Service's
commercial tier.¶
Tier
A commercial subscription level that determines a Service's visibility in
the APIX, liveness check frequency, and API query rate allocation.¶
The APIX MUST be queryable by autonomous agents via a stable, globally accessible URL without prior knowledge of any specific service.¶
The Index API MUST follow HATEOAS principles: agents MUST be able to navigate the full index starting from a single entry-point URL.¶
Every Service record MUST expose machine-readable trust metadata across all three trust dimensions (Organisation, Service, Liveness).¶
Service registration MUST involve a human-initiated B2B onboarding process with a contractual relationship between the Service Owner and the Bot Standards Foundation or its Accredited Regional Representative.¶
The APIX Spider MUST verify Service technical specifications automatically after registration and on a schedule determined by the Service's tier.¶
The APIX MUST expose trust metadata as verifiable facts, not as recommendations. Trust decisions MUST remain with the consuming agent.¶
The APIX Manifest (APM) MUST be format-agnostic: it MUST support referencing OpenAPI, MCP, AsyncAPI, and GraphQL specifications, with additional spec types addable via extension.¶
The APIX MUST be operated as a neutral, non-profit infrastructure under the governance of the Bot Standards Foundation.¶
The Index API SHOULD support full-text and structured search by capability, category, organisation trust level, service verification level, liveness freshness, and protocol type.¶
The APIX SHOULD provide SDKs in common agent development languages to lower the integration barrier for consuming agents.¶
The APIX SHOULD support a federated accredited verifier model so that Organisation trust levels O-3 and O-4 can be verified at scale without centralising all human review in the Bot Standards Foundation.¶
Accredited Regional Representatives SHOULD be established in major jurisdictions to allow Service Owners to contract in their local language and legal framework.¶
The APIX SHOULD publish a public transparency report at least annually, disclosing the number of registered services by tier and trust level, spider coverage statistics, and verifier accreditation status.¶
The following are explicitly not addressed by this document. Items marked MUST NOT are normative constraints on conforming implementations; remaining items are editorial scope boundaries.¶
Bot identity and authentication: how a bot proves its own identity to a service it consumes. This is a separate standards problem addressed by complementary work such as draft-meunier-webbotauth-registry. This document takes no position on bot identity mechanisms.¶
Bot rights and legal personhood: the political and legal recognition of autonomous agents as rights-bearing entities. This is outside the scope of a technical infrastructure standard.¶
Service execution: a conforming APIX implementation MUST NOT proxy, mediate, or execute service calls on behalf of consuming agents. The APIX is a discovery layer only; all service interactions occur directly between the consuming agent and the Service Owner's infrastructure.¶
Content indexing: a conforming APIX implementation MUST NOT index service response content. The APIX indexes service metadata — capability declarations, trust levels, liveness signals — not the data that services return when called.¶
Mandatory consumer registration: a conforming APIX implementation MUST NOT require consuming agents to register or identify themselves as a condition of performing discovery queries (see Section 8.3).¶
+----------------------------------------------------------+
| Bot Standards Foundation |
| (Swiss Stiftung -- neutral, non-profit) |
| Owns: APIX standard, Index infrastructure, APM format |
| Accredits: Regional Representatives, Verifiers |
+---------------------+------------------------------------+
|
+---------------+---------------+
| | |
+-----+------+ +-----+------+ +-----+-----------+
| Index | | Spider | | Registration |
| API | | (Crawler) | | Portal |
| (HATEOAS) | | | | (B2B / human) |
+-----+------+ +-----+------+ +-----+-----------+
| | |
| +-----+------+ |
| | Service | |
+-------->| Record |<-------+
| Store |
+------------+
^ ^
| |
+-----+------+ +--------+-----------+
| Consuming | | Service Owner |
| Agent | | (+ Accredited |
| (Bot) | | Regional Rep) |
+------------+ +--------------------+
¶
Flow:¶
A Service Owner initiates registration via the Registration Portal, providing company details, service metadata, and agreeing to a commercial contract (directly with the Bot Standards Foundation or via an Accredited Regional Representative).¶
The Registration Portal creates a draft Service Record and triggers the APIX Spider.¶
The Spider crawls the registered service endpoint, reads and validates the referenced technical specification, performs a liveness check, and updates the Service Record with verified technical metadata.¶
The Service Record becomes queryable via the Index API.¶
A consuming agent queries the Index API from the single entry-point URL, navigates by HATEOAS links, applies its Trust Policy, and selects services that satisfy its requirements.¶
The Spider continues to recheck services on the schedule defined by each service's liveness monitoring configuration.¶
The APIX MUST be operated by a neutral governing body that satisfies the following normative requirements. These requirements apply to any conforming APIX implementation; the specific legal form of the governing body is an implementation choice.¶
Neutrality requirements:¶
The governing body MUST have no commercial interest in preferring any registrant's services over another in index results or liveness scheduling.¶
The governing body MUST NOT offer exclusive discovery advantages, ranking preferences, or prioritised Spider treatment to any registrant regardless of commercial relationship.¶
Governance of the APIX standard and APM specification MUST be separated from operation of the commercial index. A single entity MUST NOT simultaneously control standard evolution and derive commercial benefit from preferential application of that standard.¶
Operational requirements:¶
The governing body MUST accredit Regional Representatives who may handle service onboarding in specific jurisdictions. Regional Representatives operate under licence from the governing body; the index itself remains a single global store.¶
The governing body MUST accredit Verifiers who perform Organisation trust assessments at O-3 and O-4. Accredited Verifiers are structurally analogous to Certificate Authorities in the TLS ecosystem: trusted third parties whose assessments the index relies upon without independently replicating.¶
The governing body MUST maintain the capability taxonomy and publish all versions of the APM specification and Index API specification as open standards under a permissive licence.¶
The governing body MUST perform sanctions screening on service registrants (see Section 7.3).¶
Openness requirements:¶
The full index MUST be made available as a freely downloadable bulk dataset at regular intervals, under an open data licence. No entity, including the governing body, may hold an exclusive lock on the index data.¶
Discovery queries to the Index API MUST be available without authentication or payment (subject to rate limits; see Section 8.3).¶
A conforming APIX implementation SHOULD establish mechanisms to ensure global representation in the capability taxonomy, including service categories relevant to underrepresented regions. Where regional institutional partners (intergovernmental bodies, standards organisations, or civil society organisations) are willing to co-sponsor regional participation, the governing body SHOULD establish formal co-sponsorship relationships and associated governance representation for those regions.¶
Regional Spider nodes are RECOMMENDED in regions with significant service registrant populations to eliminate intercontinental latency in liveness verification.¶
The APIX standard maintains normative registries of enumerated values. Registries are authoritative lists of valid values for specific APM and Index API fields. Using values not present in the relevant registry is a protocol violation.¶
Registry location: registries are published as live endpoints at the canonical APIX domain and are updated independently of the RFC revision cycle. The RFC defines the registry structure and lifecycle rules; the live endpoints are the authoritative source of current valid values.¶
| Registry | Live endpoint | Used in |
|---|---|---|
| Protocol types | api-index.org/registry/protocols | APM spec.type |
| Capability taxonomy | api-index.org/registry/capabilities | APM capabilities[] |
| Notification channel types | api-index.org/registry/notification-channels | APM notifications.channels[].type |
Registry entry lifecycle:¶
Each registry entry transitions through three phases. The public
standard_warnings flag in a Service Record does not appear until the
grace period has elapsed — service operators have a silent window to
update their APM before any public signal is issued.¶
active -> deprecated (announced)
|
+-- [grace period: 90 days min]
| silent: operator notified, no public flag
|
+-- [warning period: remainder of deprecation window]
| standard_warnings visible in Service Record
|
+-- sunset
new registrations rejected; existing use flagged non-compliant
¶
| Phase | Registry status | standard_warnings visible | New registrations |
|---|---|---|---|
| Normal use | active | No | Accepted |
| Grace period | deprecated | No | Accepted |
| Warning period | deprecated | Yes | Accepted |
| Sunset | sunset | Yes (non-compliant) | Rejected |
Deprecation rules:¶
The governing body MUST publish a deprecated_in_version, sunset_date,
grace_period_ends, and replacement value when deprecating any
registry entry.¶
The minimum total deprecation window (announcement to sunset) is 12 months. The governing body MAY extend this window but MUST NOT shorten it.¶
The minimum grace period is 90 days from the deprecation announcement.
During the grace period, standard_warnings MUST NOT be set on any
Service Record, regardless of whether the service uses the deprecated value.¶
The governing body MUST notify all registered Service Owners whose
services use the deprecated value before the grace period begins.
Notification MUST include the grace_period_ends date, the sunset_date,
and the replacement value.¶
After the grace period, the index operator MUST set standard_warnings
on Service Records that still use the deprecated value.¶
At sunset, the index operator MUST reject new APM submissions using
the sunsetted value and MUST escalate existing Service Records from
standard_warnings to a non_compliant status flag.¶
Registry versioning: each registry is independently versioned. The Index root resource (Section 9.2) exposes the current version of each registry so consuming agents may detect changes.¶
The APIX Manifest is the structured document that a Service Owner provides at registration and that the APIX Spider validates and enriches. It is the index-facing contract for a Service: format-agnostic, extensible, and designed for machine consumption.¶
{
"bsm_version": "1.0",
"service_id": "globally unique stable identifier -- UUID v4 or APIX-issued",
"name": "human-readable service name",
"description": "machine-parseable capability summary",
"api_version": "semantic version string -- e.g. 2.1.0",
"lifecycle_stage": "experimental | beta | stable | deprecated | sunset",
"supersedes": "service_id of the version this entry supersedes -- OPTIONAL",
"owner": {
"organisation_name": "legal entity name",
"jurisdiction": "ISO 3166-1 alpha-2 country code",
"registration_number": "company registration number -- required for O-2+",
"contacts": {
"operations": "email -- receives incident and spec fetch failure notifications",
"escalation": "email -- Cluster 3 escalation; OPTIONAL but RECOMMENDED"
}
},
"spec": {
"type": "value from api-index.org/registry/protocols",
"url": "URL to the machine-readable specification document",
"version": "spec version string"
},
"capabilities": [
"term from api-index.org/registry/capabilities",
"term from api-index.org/registry/capabilities"
],
"entry_point": "base URL of the service",
"trust": {
"organisation_level": "O-0 through O-4 -- set by index, not service owner",
"service_level": "S-0 through S-4 -- set by index, not service owner",
"spec_consistency": "null | consistent | mismatch | unreachable -- set by Spider",
"spec_fetch_consecutive_failures": 0,
"next_spider_run_at": "ISO 8601 timestamp -- next scheduled Spider run",
"liveness": {
"last_ping_at": "ISO 8601 timestamp",
"ping_interval_seconds": 300,
"uptime_30d_percent": 99.9,
"avg_response_ms": 142.0
}
},
"notifications": {
"supported": true,
"channels": [
{
"type": "value from api-index.org/registry/notification-channels",
"registration_url": "URL to register a subscription",
"events": ["event-type"],
"spec_url": "URL to event schema -- OPTIONAL"
}
]
},
"legal": {
"terms_of_service_url": "URL",
"privacy_policy_url": "URL",
"data_processing_agreement_url": "URL -- required for O-3+",
"gdpr_applicable": true,
"jurisdiction_flags": ["ISO 3166-1 alpha-2 country code"]
},
"standard_warnings": [
{
"field": "APM field path",
"value": "the deprecated value in use",
"registry_status": "deprecated",
"deprecated_in_apix_version": "version string",
"sunset_date": "ISO 8601 date",
"replacement": "replacement value",
"message": "human-readable warning"
}
]
}
¶
Field notes:¶
owner.contacts.operations MUST be provided. It is the primary notification
address for all automated Spider alerts: spec fetch failures at Cluster 2
entry, liveness degradation, and recovery confirmations. This address
SHOULD reach the team responsible for keeping the service registration
current.¶
owner.contacts.escalation is OPTIONAL but RECOMMENDED. It is the address
to which escalation notifications are sent when failures reach Cluster 3 —
indicating a persistent problem that has not been resolved through the
Cluster 1 and Cluster 2 retry windows and likely requires management
attention or a deliberate APM configuration update. This address SHOULD
reach a team lead, service owner, or on-call manager. It MUST NOT be
identical to operations — if the same person handles both, the escalation
path provides no additional coverage.¶
api_version MUST follow semantic versioning (semver.org). It describes
the version of the service's own API, not the APM format version.¶
Each api_version value is bound to exactly one registered spec snapshot.
A Service Owner who modifies the live spec document at spec.url without
submitting an APM update with a new api_version value will produce a
structural mismatch between the live document and the stored snapshot. The
Spider MUST record this as an S-2 consistency failure and MUST surface it
in the Service Record as a standard_warnings entry.¶
This is intentional. The APIX enforces spec immutability per version as a
structural consequence of the snapshot model: a version string identifies
a contract, and that contract MUST NOT change after it has been registered.
Operators who need to change their API MUST register a new api_version.
This protects consuming agents from silent contract breakage.¶
lifecycle_stage MUST be one of the values defined in the APIX lifecycle
registry. Default if omitted is stable. Services at experimental or
beta are excluded from default search results (see Section 9.3).¶
supersedes is OPTIONAL. When set, the index MUST automatically set
superseded_by on the referenced entry. The referencing service MUST be
registered under the same organisation account.¶
trust fields are set exclusively by the index operator based on
verification outcomes. APM submissions that include trust field values
MUST have those values overwritten by the index upon processing.¶
standard_warnings is set exclusively by the index operator. It is
populated only after the grace period for the relevant deprecation has
elapsed (see Section 4.3). During the grace period the field MUST be
empty even if the service uses a deprecated value. Service Owners MUST
NOT submit this field; submitted values MUST be ignored.¶
notifications is OPTIONAL for experimental and beta lifecycle stages
and RECOMMENDED for stable. If notifications.supported is true,
notifications.channels MUST contain at least one entry.¶
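A few of the field notes above are mechanically checkable at intake. The following non-normative sketch applies some of them to an incoming APM submission; the function name, return shape, and the choice to strip (rather than overwrite in place) the index-owned fields are illustrative assumptions.

```python
def preprocess_apm(apm: dict) -> list[str]:
    """Validate and normalise an incoming APM submission (a sketch of a
    subset of the field-note rules; field names follow the manifest
    skeleton above).

    Returns a list of problems and removes index-owned fields, which
    the index repopulates from its own verification outcomes.
    """
    problems = []
    contacts = apm.get("owner", {}).get("contacts", {})
    if not contacts.get("operations"):
        problems.append("owner.contacts.operations is REQUIRED")
    escalation = contacts.get("escalation")
    if escalation and escalation == contacts.get("operations"):
        problems.append("escalation MUST NOT be identical to operations")
    # trust and standard_warnings are set exclusively by the index
    # operator; submitted values are discarded here and repopulated
    # from verification outcomes.
    apm.pop("trust", None)
    apm.pop("standard_warnings", None)
    notifications = apm.get("notifications", {})
    if notifications.get("supported") and not notifications.get("channels"):
        problems.append("notifications.channels MUST contain at least one entry")
    return problems
```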
entry_point is the base HTTPS URL of the service, used by consuming agents
to construct API calls. The following normative requirements apply:¶
entry_point MUST use the https scheme. HTTP entry points MUST be
rejected at registration.¶
entry_point MUST remain stable for the lifetime of the service
registration. A change to entry_point MUST be submitted as an APM
update and MUST trigger immediate Spider re-verification.¶
The Spider MUST NOT request entry_point directly for liveness checks.
Instead, the Spider checks entry_point + /health (see Section 10.2).¶
HTTP redirects from entry_point are permitted for consuming agents
but MUST NOT be present at entry_point/health (the health endpoint
MUST respond directly without redirect).¶
entry_point/health is the mandatory liveness endpoint. Every registered
service MUST expose a health endpoint at the path /health relative to
entry_point. This endpoint:¶
MUST return HTTP 2xx when the service is operational.¶
MUST return without requiring authentication.¶
MUST respond within a reasonable timeout (RECOMMENDED: 5 seconds).¶
SHOULD return a JSON body of the form {"status": "ok", "api_version":
"<semver>"}. If api_version is present, the Spider SHOULD cross-check
it against the APM api_version field; a mismatch MUST be recorded as
a warning in the Service Record.¶
MUST NOT be used by consuming agents for API calls — it is a Spider-facing infrastructure endpoint only.¶
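The response-handling side of these rules can be sketched as a pure function, shown below as a non-normative illustration. The function interprets a health response already received; the surrounding Spider is assumed to issue the HTTPS request with the recommended 5-second timeout and to treat timeouts and redirects as failures. The return shape is an assumption for illustration.

```python
import json

def evaluate_health(status_code: int, body: str, apm_api_version: str) -> dict:
    """Interpret a response from entry_point/health per the rules above.

    Returns a liveness verdict plus any api_version cross-check warning.
    """
    result = {"alive": 200 <= status_code < 300, "warnings": []}
    if not result["alive"]:
        return result
    try:
        payload = json.loads(body)
    except ValueError:
        return result   # a JSON body is SHOULD, not MUST
    reported = payload.get("api_version")
    if reported is not None and reported != apm_api_version:
        # Mismatch MUST be recorded as a warning in the Service Record.
        result["warnings"].append(
            f"health api_version {reported!r} != APM api_version {apm_api_version!r}")
    return result
```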
spec.url is the URL to the machine-readable API specification document
(OpenAPI JSON/YAML, MCP manifest, AsyncAPI document, or GraphQL SDL).¶
spec.url MUST use the https scheme.¶
spec.url MUST be publicly accessible without authentication. A spec
behind authentication cannot be fetched by the Spider and permanently
prevents the service from reaching S-2.¶
On the initial Spider run following registration, the Spider fetches
the spec document and stores it as the registered spec snapshot.
All subsequent Spider runs compare the live document at spec.url
against this snapshot to detect breaking changes (S-3 assessment).
The snapshot is updated when the Service Owner submits an APM update
that increments api_version.¶
An APM update that changes spec.url MUST trigger immediate Spider
re-verification and snapshot replacement (see Section 10.1).¶
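One simple, non-normative way to implement the snapshot comparison is a content digest, sketched below. This is an assumption about implementation strategy: a real Spider would more likely compare parsed structure, so that insignificant formatting changes in the live document do not count as mismatches.

```python
import hashlib

def spec_digest(document: bytes) -> str:
    """Stable content digest of a spec document."""
    return hashlib.sha256(document).hexdigest()

def check_consistency(live_document: bytes, snapshot_digest: str) -> str:
    """Compare the live document at spec.url against the stored snapshot.

    "mismatch" is what the Spider surfaces as a standard_warnings entry
    when the live spec changed without a new api_version.
    """
    if spec_digest(live_document) == snapshot_digest:
        return "consistent"
    return "mismatch"
```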
The service_id MUST be stable across re-registrations and version updates.
It is the canonical identity of the service in the APIX and MUST be a UUID v4
or an APIX-issued deterministic identifier.¶
The spec.type field MUST contain a value from the Protocol Type Registry
at api-index.org/registry/protocols. The registry is the authoritative
and always-current list of valid values. The entries below are the v1.0
starter set; the governing body extends the registry as additional protocol
types reach sufficient adoption. Registry entries follow the lifecycle
defined in Section 4.3.¶
| Registry value | Standard | Spider behaviour | Status |
|---|---|---|---|
| openapi | OpenAPI 3.x | Parses paths, schemas, auth requirements | active |
| mcp | Model Context Protocol | Parses tool definitions and capability list | active |
| asyncapi | AsyncAPI 2.x / 3.x | Parses channels, message schemas | active |
| graphql | GraphQL SDL | Introspects schema via introspection query | active |
Services whose specification type is not yet in the registry SHOULD request addition via the governing body's registry extension process before registering. Until the type is added, such services cannot achieve S-2 or above, as the Spider has no parser for unregistered types.¶
The capabilities field MUST contain terms from the Capability Taxonomy
Registry at api-index.org/registry/capabilities. The registry is the
authoritative and always-current list of valid terms. Terms are
hierarchical, dot-separated (e.g., commerce.marketplace), and follow
the lifecycle defined in Section 4.3.¶
The governing body extends the taxonomy based on observed service registrations and regional input (including the Africa Regional Development Track). Requests for new taxonomy terms are submitted via the governing body's registry extension process.¶
The following are the v1.0 starter set. The live registry is the authoritative source; this list is illustrative only.¶
| Term | Description | Status |
|---|---|---|
| commerce | E-commerce and marketplace services | active |
| commerce.marketplace | Multi-vendor marketplace | active |
| commerce.retail | Single-vendor retail | active |
| payments | Payment processing | active |
| payments.card | Card payment processing | active |
| payments.crypto | Cryptocurrency payments | active |
| data.financial | Financial data and market information | active |
| data.legal | Legal documents and information | active |
| nlp | Natural language processing | active |
| nlp.translation | Language translation | active |
| identity | Authentication and identity verification | active |
| communication | Messaging, email, and notification delivery | active |
| storage | File and object storage | active |
| compute | Code execution and computation | active |
| media | Image, audio, video generation or processing | active |
| iot | Sensor and device data | active |
| search | Information retrieval | active |
A Service MUST declare at least one capability term. Declared capabilities
are validated by the Spider against the parsed specification where the spec
type supports it. Services using deprecated taxonomy terms receive a
standard_warnings entry in their Service Record.¶
The APIX Trust Model has three independent dimensions. Each dimension produces a machine-readable value in the Service Record. Consuming agents combine these values according to their own Trust Policy.¶
The APIX provides trust metadata. It does not make trust decisions.¶
Describes the verified identity and compliance posture of the organisation that owns the service.¶
| Level | Label | Requirements |
|---|---|---|
| O-0 | Unverified | Self-registered. No checks performed. |
| O-1 | Identity Verified | Valid business email confirmed. Domain ownership verified via DNS TXT record. |
| O-2 | Legal Entity Verified | Company registration number confirmed against official registry of declared jurisdiction. |
| O-3 | Legally Compliant | Terms of Service, Privacy Policy, and Data Processing Agreement reviewed and confirmed present and accessible. GDPR applicability assessed. Verified by Accredited Verifier. |
| O-4 | Audited | Third-party compliance audit completed (SOC 2 Type II, ISO 27001, or equivalent). Audit certificate on file with Bot Standards Foundation. Verified by Accredited Verifier. |
Organisation levels are assessed against the organisation as a whole, not per service. An organisation that achieves O-3 applies that level to all its registered services.¶
Describes what has been automatically verified about the service itself by the APIX Spider.¶
| Level | Label | How achieved |
|---|---|---|
| S-0 | Unchecked | Registered. Spider has not yet run. |
| S-1 | Reachable | Spider confirmed entry_point/health returns HTTP 2xx without authentication. |
| S-2 | Spec Verified | Specification document at spec.url is publicly fetchable, parseable according to the declared spec.type, and structurally consistent with the APM registration snapshot taken at initial registration. |
| S-3 | Schema Stable | No breaking changes detected between the registered spec snapshot and the live spec document across at least three consecutive Spider runs. |
| S-4 | Security Reviewed | Automated vulnerability scan completed with no critical findings, OR third-party penetration test certificate provided and validated by Accredited Verifier. |
Describes the confirmed operational availability of the service, including how recent and how frequent the availability data is. Liveness data is expressed as a set of metrics, not a single level.¶
| Metric | Type | Description |
|---|---|---|
| last_ping_at | ISO 8601 timestamp | Time of the most recent successful Spider ping |
| ping_interval_seconds | integer | Configured interval between Spider pings |
| uptime_30d_percent | float | Percentage of pings successful over the last 30 days |
| avg_response_ms | float | Mean response time in milliseconds over the last 30 days |
| consecutive_failures | integer | Number of consecutive failed pings at last check |
The ping interval is determined by the service's liveness monitoring
configuration (see Section 8.2). A service configured at initial-only
frequency receives no recurring pings; its last_ping_at reflects
only the initial Spider verification run.¶
A consuming agent expresses its Trust Policy as a set of minimum thresholds across all three dimensions. Example policy expressed in pseudo-notation:¶
require:
  organisation_level >= O-2
  service_level >= S-2
  last_ping_age < 3600          # seconds since last_ping_at
  uptime_30d_percent >= 99.0
  consecutive_failures == 0¶
The Index API SHOULD support filtering by trust dimension thresholds so that agents can retrieve only records that satisfy their policy without downloading the full index.¶
Trust Policies are defined and enforced by consuming agents. The APIX does not validate or enforce Trust Policies.¶
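The example policy above might be evaluated agent-side roughly as follows. Field names follow the Service Record examples in this document; the level-ordering lists and the function name are assumptions of this sketch, not part of the specification:

```python
from datetime import datetime

# Illustrative agent-side Trust Policy check against the "trust" object of
# a Service Record. The ordinal level lists are an assumption of this
# sketch; the field names follow the Service Record examples.
ORG_LEVELS = ["O-0", "O-1", "O-2", "O-3", "O-4"]
SVC_LEVELS = ["S-0", "S-1", "S-2", "S-3", "S-4"]

def satisfies_policy(trust: dict, now: datetime) -> bool:
    liveness = trust["liveness"]
    last_ping = datetime.fromisoformat(
        liveness["last_ping_at"].replace("Z", "+00:00"))
    last_ping_age = (now - last_ping).total_seconds()
    return (
        ORG_LEVELS.index(trust["organisation_level"]) >= ORG_LEVELS.index("O-2")
        and SVC_LEVELS.index(trust["service_level"]) >= SVC_LEVELS.index("S-2")
        and last_ping_age < 3600
        and liveness["uptime_30d_percent"] >= 99.0
        and liveness["consecutive_failures"] == 0
    )
```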
Organisation levels O-3 and O-4 require human review that cannot be automated at scale. The APIX uses a federated Accredited Verifier model, analogous to the Certificate Authority model in TLS:¶
The Bot Standards Foundation defines the verification criteria for each level.¶
Organisations apply to the Bot Standards Foundation for Verifier accreditation.¶
Accredited Verifiers perform O-3 and O-4 assessments and sign verification attestations.¶
The Bot Standards Foundation maintains a public registry of Accredited Verifiers.¶
A Service Record at O-3 or O-4 MUST include the identifier of the Accredited Verifier that performed the assessment and the date of assessment.¶
Accreditation of Verifiers is reviewed annually by the Bot Standards Foundation.¶
Service registration is a human-initiated B2B process. Autonomous self-registration without human involvement is explicitly not supported, as the commercial contract and legal accountability require a human counterparty.¶
Registration MUST be scoped at the organisation level. An organisation registers once and undergoes identity verification once; multiple services may then be registered under that organisational identity. This requirement ensures:¶
Identity verification and sanctions screening are performed once per legal entity, not repeated per service.¶
Organisation trust (O-level) established at registration propagates to all services registered under that organisation without re-verification of the organisation's identity.¶
Definition: one service equals one APIX Manifest (APM) document
with one distinct entry_point. Logical bundling of API paths under a
single entry point is the registrant's responsibility and is permitted.¶
The registration process:¶
The Service Owner (or their Accredited Regional Representative) creates an Organisation Account in the APIX Registration Portal. The index operator MUST screen the Service Owner against applicable sanctions lists before account activation. Entities subject to applicable sanctions MUST be refused registration (see Section 7.3).¶
The Service Owner provides organisation details sufficient for the target Organisation trust level. This step is performed once per organisation.¶
The Service Owner submits an APIX Manifest for each service to be registered, including the spec URL and entry point. Each service is associated with a liveness monitoring configuration that determines Spider check frequency (see Section 8.2).¶
For O-1: email and domain ownership verification is completed automatically.¶
For O-2: the index operator or Regional Representative verifies the declared company registration number.¶
For O-3 and O-4: the Service Owner engages an Accredited Verifier.¶
Upon completion of applicable checks, the service is activated in the index and the Spider is triggered.¶
The APIX Spider is triggered automatically at two points:¶
At registration: once a service is activated, the Spider performs an initial verification run to establish the baseline Service Verification Level.¶
On schedule: thereafter, the Spider rechecks the service at the interval defined by the service's commercial tier (see Section 8).¶
During a Spider run, the Spider:¶
Performs an HTTPS request to {entry_point}/health and records the
response code, response time, and timestamp (Liveness: S-1).¶
Fetches the spec document from spec.url (HTTPS, no authentication).¶
Parses the fetched document and compares it structurally against the registered spec snapshot (S-2 if fetchable and consistent; S-3 assessed if no breaking changes detected across three or more consecutive runs).¶
Updates all Liveness metrics in the Service Record.¶
Records any failures and increments consecutive_failures.¶
The Spider MUST NOT call any API endpoint beyond {entry_point}/health
and spec.url. The Spider MUST NOT submit data to, create resources in,
or otherwise interact with the production API of a registered service.¶
The Spider MUST respect HTTP rate limits declared by the service. Spider
requests MUST include a User-Agent header identifying the APIX Spider
and version.¶
Every registered service MUST be covered by a commercial agreement between the Service Owner and the index operator (or its Accredited Regional Representative). The agreement MUST define:¶
The liveness monitoring configuration and its obligations.¶
The index operator's obligations regarding Spider frequency and Index API availability.¶
Acceptable use terms.¶
Data processing terms in accordance with applicable law.¶
Sanctions compliance: the index operator MUST screen all service registrants against applicable sanctions lists prior to account activation. At minimum, screening MUST cover the UN Security Council consolidated sanctions list. Operators subject to additional jurisdictional sanctions regimes (e.g., EU, US OFAC, Swiss SECO) MUST additionally screen against those lists as applicable to their jurisdiction of incorporation. Entities subject to applicable sanctions MUST be refused registration regardless of commercial tier.¶
Registrants MUST represent and warrant in the commercial agreement that they are not subject to applicable sanctions, and MUST notify the index operator immediately of any change in that status.¶
Unauthenticated discovery queries to the Index API are not subject to registration screening and MUST remain available without restriction, consistent with the APIX's mission as open global infrastructure (see Section 8.3).¶
A conforming APIX implementation MUST be funded primarily by service registration fees paid by Service Owners (supply side). Discovery queries by consuming agents MUST NOT be the primary revenue mechanism. This principle is normative: an implementation that charges consuming agents for standard discovery queries is not conformant with the APIX model, as doing so contradicts the open infrastructure mission and undermines the network effect that makes the supply side valuable.¶
The APIX model is structurally analogous to the DNS model: registrants pay to be listed; queries are free.¶
Each registered service MUST have a liveness monitoring configuration that determines Spider check frequency. This configuration:¶
Is set per service, not per organisation account. An organisation MAY configure different check frequencies for different services registered under the same account.¶
MUST be agreed in the commercial contract between the Service Owner and the index operator.¶
Determines the maximum age of last_ping_at data available to
consuming agents for that service.¶
Implementations MUST support at minimum the following frequency
classes, identified by their normative spider_interval value in
the Service Record:¶
| Frequency class | Maximum spider_interval | Normative label |
|---|---|---|
| Initial only | N/A (one run at activation) | "initial" |
| Daily | 86,400 seconds | "daily" |
| Hourly | 3,600 seconds | "hourly" |
| High-frequency | 300 seconds | "high" |
Implementations MAY define additional frequency classes. The
spider_interval field in the Service Record MUST reflect the
actual configured interval in seconds.¶
Effect on trust signal quality: A consuming agent applying a
last_ping_age < N filter will structurally exclude services whose
check frequency cannot produce sufficiently fresh liveness data.
Liveness monitoring configuration is therefore a market signal:
services requiring discovery by latency-sensitive agents must invest
in check frequency sufficient to satisfy those agents' trust policies.¶
Services configured at initial-only frequency MUST be excluded from standard discovery query results by default. Consuming agents MUST explicitly opt in to include initial-only services in result sets.¶
Discovery queries to the Index API MUST be available without authentication or payment. Rate limits MAY be applied to protect infrastructure integrity but MUST NOT be set at levels that prevent reasonable agent operation. Implementations MUST support at minimum three consumer access layers:¶
Layer 1 — Unauthenticated access¶
Any agent MUST be able to query the Index API without authentication or registration, subject to a per-IP rate limit. This layer is sufficient for individual agents and proof-of-concept deployments.¶
Layer 2 — Authenticated access (free)¶
Any agent MAY register a consumer identity token at no cost. Token registration requires a valid email address. Authenticated access MUST provide a higher rate limit than unauthenticated access and MAY additionally provide result caching hints and webhook subscriptions for service record changes.¶
Consumer tokens SHOULD be compatible with the webbotauth identity model (draft-meunier-webbotauth-registry) to enable interoperability with bot authentication infrastructure.¶
Layer 3 — High-volume access (paid, optional)¶
Implementations MAY offer a paid high-volume access tier for platforms operating agents at scale that require guaranteed query capacity and operational SLAs. This tier is supplementary; the index's operational sustainability MUST NOT depend on it.¶
Public bulk download (REQUIRED)¶
Implementations MUST provide the full index as a freely downloadable bulk dataset at intervals not exceeding 24 hours, without authentication, under an open data licence. This requirement implements the openness requirement of Section 4.2: no entity, including the index operator, may hold an exclusive lock on the index data.¶
The APIX exposes a single globally stable entry-point URL:¶
https://api-index.org/¶
A GET request to this URL returns the Index root resource, which includes:¶
{
"apix_version": "1.0",
"total_services": 12483,
"last_updated": "2026-04-24T00:00:00Z",
"_links": {
"self": {
"href": "https://api-index.org/"
},
"search": {
"href": "https://api-index.org/search{?q,capability,protocol,org_level_min,service_level_min,spec_consistency,max_ping_age,uptime_30d_min,lifecycle_stage,include_superseded,page,page_size}",
"templated": true
},
"browse": {
"href": "https://api-index.org/browse"
},
"capabilities": {
"href": "https://api-index.org/capabilities"
},
"docs": {
"href": "https://api-index.org/docs"
}
}
}
¶
The search endpoint applies server-side filters to reduce result sets before transmission. Only filters on indexed scalar values are server-side; filters requiring deep metadata evaluation are applied client-side after fetching the Level 2 Service Record (Section 9.5).¶
Example — buying bot querying for marketplace services:¶
GET /search?capability=commerce.marketplace
&protocol=mcp,openapi
&org_level_min=O-2
&service_level_min=S-2
&max_ping_age=3600
&uptime_30d_min=95.0
&lifecycle_stage=stable
&page=1
&page_size=20
¶
Normative server-side filter parameters:¶
| Parameter | Type | Default | Description |
|---|---|---|---|
| q | string | — | Free-text search across name and description |
| capability | string | — | Capability taxonomy term (exact or prefix match). MUST be an active or deprecated registry value |
| protocol | string | — | Comma-separated protocol type values. MUST be values from the Protocol Type Registry |
| org_level_min | enum | O-0 | Minimum Organisation trust level. Excludes services below threshold |
| service_level_min | enum | S-0 | Minimum Service verification level |
| max_ping_age | integer | — | Maximum seconds since last_ping_at. Excludes services with older liveness data |
| uptime_30d_min | float | — | Minimum 30-day uptime percentage |
| lifecycle_stage | enum | stable | Filter by lifecycle stage. Default excludes experimental, beta, deprecated, and sunset |
| include_superseded | boolean | false | When false, excludes services for which superseded_by is set. When true, all matching versions are returned |
| spec_consistency | enum | — | Filter by spec consistency status. Values: consistent, mismatch, unreachable. null (Spider not yet run) is excluded when any value is specified. When absent, no constraint is applied. Agents performing consequential tasks SHOULD explicitly pass consistent |
| page | integer | 1 | Result page number |
| page_size | integer | 20 | Results per page. Maximum: 100 |
All filter parameters are OPTIONAL. When absent, the parameter imposes no
constraint except lifecycle_stage (default stable) and
include_superseded (default false).¶
Results are returned as paginated Level 1 Search Result records (Section 9.4) with HATEOAS links to Level 2 Service Records. Pagination is REQUIRED.¶
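An agent might walk paginated results by following links, sketched here with an abstract fetch callable standing in for the HTTP layer. The "next" link relation is an assumption of this sketch; this section mandates only pagination and HATEOAS links, not a specific relation name:

```python
from typing import Callable, Iterator

# Sketch of agent-side pagination over Level 1 search results. The fetch
# callable abstracts the HTTPS GET; a hypothetical "next" link relation
# is assumed for page traversal.
def iter_search_results(first_url: str,
                        fetch: Callable[[str], dict]) -> Iterator[dict]:
    url = first_url
    while url:
        page = fetch(url)
        yield from page.get("results", [])
        url = page.get("_links", {}).get("next", {}).get("href")
```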
Search results return lightweight summary records. These contain only the
fields needed to evaluate candidates and decide which detail pages to fetch.
Complex metadata (auth requirements, version history, notifications, legal,
standard_warnings) is available only at Level 2 and is evaluated
client-side after fetching the detail resource.¶
{
"service_id": "svc-payment-stripe-v2",
"name": "Stripe Payment API",
"description": "Card and subscription payment processing",
"api_version": "2.4.1",
"lifecycle_stage": "stable",
"capabilities": ["payments.card", "payments.subscription"],
"protocol": "openapi",
"trust": {
"organisation_level": "O-3",
"service_level": "S-2",
"spec_consistency": "consistent",
"spec_fetch_consecutive_failures": 0,
"next_spider_run_at": "2026-04-20T14:55:00Z",
"liveness": {
"last_ping_at": "2026-04-20T13:55:00Z",
"ping_interval_seconds": 3600,
"uptime_30d_percent": 99.87,
"consecutive_failures": 0
}
},
"_links": {
"self": { "href": "https://api-index.org/services/svc-payment-stripe-v2" },
"latest_stable": { "href": "https://api-index.org/services/svc-payment-stripe-v2" }
}
}
¶
The latest_stable link points to the leaf version of the service's version
chain. When latest_stable differs from self, a newer stable version
exists and the agent SHOULD follow the link before proceeding.¶
The full Service Record is returned when a consuming agent fetches the
detail resource via the self link. It is the APM plus Spider-enriched
trust metadata, versioning links, and any standard_warnings.¶
{
"service_id": "svc-payment-stripe-v2",
"bsm_version": "1.0",
"name": "Stripe Payment API",
"description": "Card and subscription payment processing",
"api_version": "2.4.1",
"lifecycle_stage": "stable",
"supersedes": "svc-payment-stripe-v1",
"superseded_by": null,
"owner": { "...": "..." },
"spec": {
"type": "openapi",
"url": "https://stripe.com/openapi.json",
"version": "2.4.1"
},
"capabilities": ["payments.card", "payments.subscription"],
"entry_point": "https://api.stripe.com/v2",
"trust": {
"organisation_level": "O-3",
"organisation_verified_at": "2026-03-01T00:00:00Z",
"organisation_verifier_id": "verifier-ch-001",
"service_level": "S-2",
"service_level_updated_at": "2026-04-19T08:00:00Z",
"spec_consistency": "consistent",
"spec_consistency_checked_at": "2026-04-20T13:55:00Z",
"spec_fetch_consecutive_failures": 0,
"next_spider_run_at": "2026-04-20T14:55:00Z",
"liveness": {
"last_ping_at": "2026-04-20T13:55:00Z",
"ping_interval_seconds": 3600,
"uptime_30d_percent": 99.87,
"avg_response_ms": 142.3,
"consecutive_failures": 0
}
},
"notifications": { "...": "..." },
"legal": { "...": "..." },
"standard_warnings": [],
"registered_at": "2026-01-15T00:00:00Z",
"last_updated_at": "2026-04-20T13:00:00Z",
"_links": {
"self": { "href": "https://api-index.org/services/svc-payment-stripe-v2" },
"owner": { "href": "https://api-index.org/organisations/org-stripe" },
"spec": { "href": "https://stripe.com/openapi.json" },
"previous_version": { "href": "https://api-index.org/services/svc-payment-stripe-v1" },
"latest_stable": { "href": "https://api-index.org/services/svc-payment-stripe-v2" }
}
}
¶
Trust metadata is always included in full Service Records (Level 2) and
MUST NOT be omitted or summarised. Consuming agents rely on the full set
of trust fields to evaluate their Trust Policy. Partial trust metadata
MUST be represented with explicit null values, not omitted fields.¶
Trust metadata is included in summary form (Level 1) for server-side filter compatibility. The Level 1 trust object omits verification timestamps and verifier IDs; these are available only at Level 2.¶
The Index API is consumed by autonomous agents at machine speed. Response payloads are structured JSON with highly repetitive field names across result arrays. Transport-layer compression achieves 70–85% size reduction on typical search result payloads with no information loss and no application-layer schema changes.¶
Compression support requirements:¶
The Index API MUST support the following Accept-Encoding values:¶
| Encoding | Requirement | Notes |
|---|---|---|
| gzip | MUST | Universally supported baseline |
| br (Brotli) | SHOULD | Higher ratio than gzip; suitable for large result sets |
| zstd | SHOULD | Comparable ratio to Brotli; significantly faster decompression |
The Index API MUST perform content negotiation via the Accept-Encoding
request header. Responses MUST include a Content-Encoding header
identifying the applied encoding. If a client sends no Accept-Encoding
header, the server MAY respond uncompressed.¶
Consuming agents SHOULD include Accept-Encoding: zstd, br, gzip in
all Index API requests.¶
Binary encoding (optional):¶
The Index API MAY additionally support CBOR (RFC 8949) as a binary
alternative to JSON. A client that prefers CBOR MUST signal this via
Accept: application/cbor. The server MAY respond with
Content-Type: application/cbor. CBOR responses carry identical
information to JSON responses; the encoding difference is transparent
to the data model.¶
Clients MUST NOT assume CBOR support. JSON over compressed transport is the normative interchange format.¶
The Spider is triggered by the following events:¶
| Trigger | Description |
|---|---|
| Registration activation | Immediate first run when a service is activated |
| Scheduled interval | Recurring, per service liveness monitoring configuration (Section 8.2) |
| Manual re-trigger | Service Owner may request a manual re-trigger once per 24 hours |
| Spec URL change | An APM update that changes the spec.url triggers an immediate run |
The Spider performs the following checks in sequence. Each check's result is stored independently; a failure at one level does not prevent checks at other levels from being recorded.¶
The Spider MUST use HTTPS for all outbound requests. The Spider MUST NOT
send authentication credentials to any registered service. Spider requests
to entry_point/health or spec.url MUST NOT include Authorization headers,
API keys, cookies, or client certificates.¶
If a request returns an HTTP redirect (3xx), the Spider MUST follow the redirect only if the redirect target also uses HTTPS. The Spider MUST NOT follow redirects from HTTPS to HTTP.¶
Liveness check: HTTPS GET to {entry_point}/health. Record HTTP
status code, response time, and timestamp. A 2xx response without
authentication constitutes a successful liveness check (S-1). If the
response body is valid JSON containing an api_version field, the Spider
MUST cross-check this value against the api_version declared in the APM.
A mismatch is recorded as a metadata warning, not a liveness failure.¶
Spec fetch: HTTPS GET to spec.url. The Spider MUST NOT send
authentication credentials. A successful fetch (2xx response, non-empty
body) is the prerequisite for steps 3 and 4. Record content type and
document size.¶
Spec parse and consistency check: Parse the fetched document
according to the declared spec.type. Compare it structurally against
the registered spec snapshot stored at initial registration time.
The Spider MUST set spec_consistency to one of three values after
every run:¶
consistent — document is fetchable, parseable, and structurally
matches the registered snapshot. Constitutes S-2 verification.¶
mismatch — document is fetchable and parseable, but structurally
differs from the snapshot (paths removed, required fields changed,
response schemas changed). S-2 is revoked; standard_warnings is
updated. This indicates operator-caused contract breakage.¶
unreachable — spec.url returned a non-2xx response, was not
reachable, or the document could not be parsed. S-2 is not achieved
or is suspended. This indicates an availability problem, not a
contract violation.
spec_consistency MUST be null only before the Spider's first run
on a newly registered service. Once any run completes, the field MUST
carry one of the three values above.
The Spider MUST NOT call any API endpoint declared in the spec. Spec
verification is document comparison only.¶
Breaking change detection: Compare the current parsed spec against the registered spec snapshot. Flag removed paths, changed required fields, or changed response schemas as breaking changes. Three or more consecutive runs with no breaking changes detected are required for S-3 verification.¶
Liveness metrics update: Update last_ping_at, uptime_30d_percent,
avg_response_ms, consecutive_failures, and next_spider_run_at.¶
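The three-valued spec_consistency decision in the sequence above can be sketched as a pure function over the Spider's fetch, parse, and comparison results; the input and function names are illustrative:

```python
# Sketch: derive the normative spec_consistency value from the outcome
# of the Spider's fetch, parse, and structural-comparison steps.
def spec_consistency(fetched_ok: bool, parsed_ok: bool,
                     matches_snapshot: bool) -> str:
    if not fetched_ok or not parsed_ok:
        return "unreachable"  # availability problem, not a contract violation
    if matches_snapshot:
        return "consistent"   # constitutes S-2 verification
    return "mismatch"         # operator-caused contract breakage; S-2 revoked
```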
Liveness failures (entry_point/health unreachable):¶
A single failed ping increments consecutive_failures and updates
last_ping_at with the failure timestamp.¶
After 3 consecutive failures, the Service Record is flagged as
status: degraded in the index.¶
After 10 consecutive failures, the Service Record is flagged as
status: unreachable and is excluded from standard search results.¶
contacts.operations is notified at the 3-failure threshold (incident
warning). Both contacts.operations and contacts.escalation are
notified at the 10-failure threshold (service unreachable escalation).¶
A service that recovers (next ping succeeds) has its status restored
and consecutive_failures reset to 0 automatically.¶
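The liveness thresholds above can be sketched as follows; the "ok" label for the normal state is illustrative, as the text defines only the degraded and unreachable statuses:

```python
# Sketch: map consecutive_failures to the recorded liveness status.
# 3+ failures: degraded; 10+ failures: unreachable (excluded from
# standard search results). Recovery resets the counter to 0.
def liveness_status(consecutive_failures: int) -> str:
    if consecutive_failures >= 10:
        return "unreachable"
    if consecutive_failures >= 3:
        return "degraded"
    return "ok"  # illustrative label for the normal state
```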
Spec fetch failures (spec_consistency: unreachable):¶
Spec fetch failures have distinct probable causes depending on how long
the failure persists. The Spider MUST apply a three-cluster retry model
that targets the likely cause window at each stage. Cluster escalation
is triggered by spec_fetch_consecutive_failures crossing a threshold.¶
| Cluster | Assumed cause | Failure count | Retry interval | Notification |
|---|---|---|---|---|
| 1 — Infrastructure / network | Transient: brief network loss, host restart, CDN hiccup | 1–3 | 5 min -> 15 min -> 30 min | None — transient, operator not yet disturbed |
| 2 — Application | Software instability: crash loop, OOM, application startup failure | 4–6 | 2 h -> 4 h -> 8 h | Email to contacts.operations on Cluster 2 entry (failure 4): incident warning |
| 3 — Configuration | Persistent misconfiguration: wrong spec.url, auth gate added, URL moved | 7+ | 24 h -> 72 h (cap) | Email to contacts.operations AND contacts.escalation on Cluster 3 entry (failure 7): explicit action request — verify and update spec.url |
spec_consistency is set to unreachable on the first failure and
remains unreachable until a successful fetch.¶
next_spider_run_at is set to the next retry timestamp after each
failed run so Service Owners can observe when the retry will occur.¶
A successful spec fetch resets spec_fetch_consecutive_failures to 0
and sets spec_consistency to consistent or mismatch as
appropriate.¶
spec_fetch_consecutive_failures MUST be visible in the Service Record
so Service Owners can monitor retry cluster state without contacting
the Index operator.¶
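The three-cluster schedule above can be sketched as follows; the names are illustrative:

```python
# Sketch: map spec_fetch_consecutive_failures to the next retry delay
# in seconds, per the three-cluster table above.
RETRY_SCHEDULE = {
    1: 5 * 60, 2: 15 * 60, 3: 30 * 60,      # Cluster 1: infrastructure
    4: 2 * 3600, 5: 4 * 3600, 6: 8 * 3600,  # Cluster 2: application
    7: 24 * 3600,                           # Cluster 3: configuration
}

def next_retry_seconds(failures: int) -> int:
    if failures >= 8:
        return 72 * 3600  # Cluster 3 cap
    return RETRY_SCHEDULE[failures]
```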
Manual re-trigger:¶
The Index operator SHOULD provide a mechanism for Service Owners to request an immediate Spider re-run outside of the scheduled interval. This is the primary recovery mechanism when a service has been repaired and the operator does not want to wait for the next scheduled retry.¶
When a manual re-trigger is requested:
- next_spider_run_at MUST be set to the current timestamp.
- spec_fetch_consecutive_failures MUST be reset to 0, returning the
service to Cluster 1 retry behaviour for the next run.
- The Spider MUST execute the run as soon as scheduling permits.¶
The Index MAY rate-limit manual re-triggers to at most once per hour per service to prevent abuse.¶
The mandatory B2B onboarding and contract requirement provide a first barrier against malicious actors listing fake or harmful services. Service Owners must provide verifiable identity information. However, O-0 and O-1 registration provides only minimal verification.¶
Consuming agents SHOULD apply Trust Policies that exclude O-0 services for any task involving sensitive data or consequential actions.¶
The Bot Standards Foundation MUST maintain an abuse reporting mechanism and MUST be
able to suspend or remove a Service Record within 24 hours of confirmed
abuse. Suspended service records MUST remain in the index with a
status: suspended flag and MUST NOT be silently deleted, to provide
transparency to agents that had cached the record.¶
Organisation and Service trust levels in the Service Record are set only
by the APIX itself, not by the Service Owner. APM submissions that include
trust field values MUST have those values overwritten by the APIX upon
processing. The Index API MUST NOT expose self-asserted trust values.¶
All URLs submitted as entry_point or spec.url values in an APM MUST use
the https scheme. The Index MUST reject APM submissions that provide HTTP
(non-TLS) values for these fields.¶
The {entry_point}/health endpoint MUST NOT require authentication of any
kind. Requiring authentication at /health defeats liveness verification and
MUST be treated as a registration defect. The Index MUST record a metadata
warning if the /health endpoint returns a 401 or 407 status.¶
The spec.url endpoint MUST be publicly accessible without authentication.
A spec.url that requires authentication cannot be verified by the Spider and
MUST be treated as an S-2 failure until authentication is removed.¶
The Spider MUST enforce HTTPS for all outbound requests and MUST NOT follow redirects from HTTPS to HTTP.¶
A service that knows when the Spider will visit could serve compliant responses only to Spider requests, presenting a different interface to consuming agents. Mitigations:¶
The APIX provides discovery and trust metadata. It does not guarantee the safety, correctness, or availability of listed services. Consuming agents MUST NOT assume that a service listed in the APIX is safe to use without applying their own Trust Policy.¶
Consuming agents SHOULD treat Index API responses as untrusted input and validate the structure of Service Records before acting on them.¶
The Index API MUST be served exclusively over TLS. Certificate validity MUST be verified by consuming agents. Agents MUST NOT bypass TLS certificate verification when querying the Index API.¶
The following questions are unresolved and explicitly invited for community input:¶
Capability Taxonomy governance: Who contributes new taxonomy terms? What is the process for deprecating terms? Should the taxonomy be versioned independently of the APM specification?¶
APM spec type extensions: What is the formal process for registering
new spec.type values? Should this be an IANA registry?¶
Trust Policy standardisation: Should the APIX define a standard Trust Policy expression language, or leave this entirely to consuming agents? A standard language would enable Index API server-side filtering but risks constraining agent-side policy flexibility.¶
Verifier accreditation criteria: What are the full requirements for an organisation to become an Accredited Verifier? What ongoing obligations apply? What is the revocation process?¶
Regional Representative model: How are jurisdictional boundaries defined for Regional Representatives? What happens in jurisdictions with no appointed Representative?¶
Free tier abuse: Is the current Free tier visibility restriction sufficient to prevent abuse? Should Free tier require payment information on file even if no charge is made?¶
Bot identity: This document explicitly excludes bot identity from scope. Should a future version of the APIX include optional bot consumer registration to enable personalised discovery, rate limit management, or abuse tracking?¶
Centralised index vs. decentralised discovery: The APIX architecture takes a deliberate position: a single authoritative global index, governed by a neutral non-profit, with a commercial sustainability model. An alternative approach — represented by proposals such as draft-vandemeent-ains-discovery (AINS — AInternet Name Service), which uses signed, append-only replication logs with no central authority — takes the opposite position: cryptographic verifiability and censorship resistance over governed accountability.¶
The two approaches represent a genuine design tension. Arguments for the APIX model:¶
Business adoption: Enterprise service providers, regulated industries, and government bodies require a contractual counterparty, an accountable governance structure, and an enforceable compliance audit trail. A leaderless federated registry cannot provide these. The stakeholders with the largest service catalogues and the greatest need for agent-consumable APIs operate in environments where "no central authority" is not a feature — it is a disqualification.¶
Institutional co-sponsorship as an adoption flywheel: The APIX's regional co-sponsorship model is designed to recruit institutional champions — such as regional telecommunications bodies and internet governance organisations — who have reputational and financial incentives to promote APIX registration in their region. A decentralised system cannot offer institutional co-sponsorship because there is no accountable entity to co-sponsor. The announcement credibility that comes from an institution saying "we endorse this infrastructure" is only available to a governed model.¶
Regional financial backflow as a registration incentive: Ten percent of registration fees collected from a region are reinvested into that region's bot ecosystem via the Regional Development Pool. This creates a direct financial incentive for regional institutions to actively promote service registration — more local registrations means more capital returning to local infrastructure. A decentralised model with no registration fees cannot replicate this structural alignment. The result is that the APIX's commercial model is not merely a sustainability mechanism; it is an adoption flywheel whose velocity compounds with regional institutional support.¶
Single entry point: A consuming agent needs zero prior knowledge of any registry to begin discovery. Federated models require the agent to either know a registry endpoint or solve the bootstrapping problem of finding one. The simpler the agent-side integration, the faster adoption.¶
Arguments for the decentralised model:¶
Censorship resistance: The APIX can delist a service. A signed append-only log cannot. For agents and service owners in jurisdictions with adversarial regulatory environments, a governed central index is a liability.¶
No single point of failure or control: The BSF, however well governed, is an organisational risk. A decentralised model survives the failure or capture of any single operator.¶
Cryptographic verifiability: Trust in a governed index ultimately depends on trusting the governor. Signed logs allow any party to verify the full history of a service record independently.¶
Community input is explicitly invited on whether the APIX and AINS-style approaches are mutually exclusive or whether a future APIX version could expose a verifiable, signed export of index records that enables third-party verification without requiring a federated registry.¶
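One possible middle ground between the two approaches is sketched below: a hash-chained, append-only export of index records that any third party can verify without a federated registry. The format is purely illustrative (this document defines no such export); it only demonstrates that tamper-evidence does not require abandoning a single governed index.

```python
import hashlib
import json

# Minimal sketch of a hash-chained, append-only export of index records.
# The record format and chaining scheme are illustrative assumptions,
# not defined by this draft.
def entry_hash(prev_hash: str, record: dict) -> str:
    # Each entry commits to its record and to the previous entry's hash.
    payload = prev_hash + json.dumps(record, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def verify_log(entries: list) -> bool:
    prev = "0" * 64  # genesis value
    for e in entries:
        if e["hash"] != entry_hash(prev, e["record"]):
            return False
        prev = e["hash"]
    return True

# Build a tiny valid export and verify it.
log = []
prev = "0" * 64
for rec in [{"service": "a"}, {"service": "b"}]:
    h = entry_hash(prev, rec)
    log.append({"record": rec, "hash": h})
    prev = h

assert verify_log(log)

# Tampering with any earlier record invalidates the chain.
log[0]["record"]["service"] = "x"
assert not verify_log(log)
```

A production design would sign the chain head (e.g. per RFC 8949 CBOR encoding of records), but even this unsigned sketch shows that historical integrity of records can be checked independently of the index operator.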
This document has no IANA actions.¶
RFC 2119 — Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997.¶
RFC 8615 — Nottingham, M., "Well-Known Uniform Resource Identifiers (URIs)", RFC 8615, May 2019.¶
RFC 8446 — Rescorla, E., "The Transport Layer Security (TLS) Protocol Version 1.3", RFC 8446, August 2018.¶
RFC 9110 — Fielding, R., Nottingham, M., Reschke, J. (Eds.), "HTTP Semantics", RFC 9110, June 2022.¶
OpenAPI Specification 3.1 — OpenAPI Initiative, https://spec.openapis.org/oas/v3.1.0¶
Model Context Protocol — Anthropic, https://modelcontextprotocol.io¶
AsyncAPI Specification 3.0 — AsyncAPI Initiative, https://www.asyncapi.com/docs/reference/specification/v3.0.0¶
RFC 8949 — Bormann, C., Hoffman, P., "Concise Binary Object Representation (CBOR)", RFC 8949, December 2020.¶
Robots Exclusion Protocol — Koster, M., 1994. https://www.robotstxt.org/¶
draft-cui-ai-agent-discovery-invocation-01 — Cui, Y. (Tsinghua University), Chao, Y., Du, C. (Zhongguancun Laboratory), "AI Agent Discovery and Invocation Protocol", IETF Individual Submission, February 2026. Expires August 2026. https://datatracker.ietf.org/doc/draft-cui-ai-agent-discovery-invocation/¶
draft-am-layered-ai-discovery-architecture-00 — Moussa, H., Akhavain, A. (Huawei Canada), "A Layered Approach to AI discovery", IETF Individual Submission, March 2026. Expires September 2026. https://datatracker.ietf.org/doc/draft-am-layered-ai-discovery-architecture/¶
draft-hood-agtp-discovery-00 — Hood, C. (Nomotic, Inc.), "AGTP Agent Discovery and Name Service", IETF Individual Submission, March 2026. Expires September 2026. https://datatracker.ietf.org/doc/draft-hood-agtp-discovery/¶
draft-mozleywilliams-dnsop-dnsaid-01 — Mozley, J., Williams, N. (Infoblox), Sarikaya, B. (Unaffiliated), Schott, R. (Deutsche Telekom), "DNS for AI Discovery", IETF Individual Submission, March 2026. Expires September 2026. https://datatracker.ietf.org/doc/draft-mozleywilliams-dnsop-dnsaid/¶
draft-batum-aidre-00 — Batum, F. (Istanbul), "AI Discovery and Retrieval Endpoint (AIDRE)", IETF Individual Submission, April 2026. Expires October 2026. https://datatracker.ietf.org/doc/draft-batum-aidre/¶
draft-mozley-aidiscovery-01 — Mozley, J., Williams, N. (Infoblox), Sarikaya, B. (Unaffiliated), Schott, R. (Deutsche Telekom), "AI Agent Discovery (AID) Problem Statement", IETF Individual Submission, April 2026. Expires October 2026. https://datatracker.ietf.org/doc/draft-mozley-aidiscovery/¶
draft-pioli-agent-discovery-01 — Pioli, R. (Independent), "Agent Registration and Discovery Protocol (ARDP)", IETF Individual Submission, February 2026. Expires August 2026. https://datatracker.ietf.org/doc/draft-pioli-agent-discovery/¶
draft-narajala-courtney-ansv2-01 — Courtney, S., Narajala, V.S., Huang, K., Habler, I., Sheriff, A., "Agent Name Service v2 (ANS): A Domain-Anchored Trust Layer for Autonomous AI Agent Identity", IETF Individual Submission, April 2026. Expires October 2026. Supersedes draft-narajala-ans-00. https://datatracker.ietf.org/doc/draft-narajala-courtney-ansv2/¶
draft-vandemeent-ains-discovery-01 — van de Meent, J., Root AI (Humotica), "AINS: AInternet Name Service - Agent Discovery and Trust Resolution Protocol", IETF Individual Submission, March 2026. Expires September 2026. https://datatracker.ietf.org/doc/draft-vandemeent-ains-discovery/¶
draft-aiendpoint-ai-discovery-00 — Choi, Y. (AIEndpoint), "The AI Discovery Endpoint: A Structured Mechanism for AI Agent Service Discovery and Capability Exposure", IETF Individual Submission, March 2026. Expires September 2026. https://datatracker.ietf.org/doc/draft-aiendpoint-ai-discovery/¶
draft-meunier-webbotauth-registry-01 — Guerreiro, M. (Cloudflare), Kirazci, U. (Amazon), Meunier, T. (Cloudflare), "Registry and Signature Agent card for Web bot auth", IETF Individual Submission, October 2025. Expired April 2026; renewal expected. https://datatracker.ietf.org/doc/draft-meunier-webbotauth-registry/¶
webbotauth IETF Working Group — Chairs: Schinazi, D., Shekh-Yusef, R. AD: Bishop, M. Active WG. https://datatracker.ietf.org/wg/webbotauth/¶
W3C AI Agent Protocol Community Group — Chairs: Chang, G., Xu, S. Established May 8, 2025. 216 participants as of April 2026. https://www.w3.org/community/agentprotocol/¶
UDDI Version 3.0.2 — Clement, L., Hately, A., von Riegen, C., Rogers, T. OASIS Committee Draft, October 19, 2004. (Historical reference; see Section 1.3 for analysis of failure modes.) https://www.oasis-open.org/committees/uddi-spec/doc/spec/v3/uddi-v3.0.2-20041019.htm¶