Verifiable credentials provide a mechanism to express credentials on the Web in a way that is cryptographically secure, privacy respecting, and machine-verifiable. This specification provides a data model and HTTP protocols to issue, verify, present, and manage data used in such an ecosystem.
This specification is highly experimental and changing rapidly. Implementation in non-experimental systems is discouraged unless you are participating in the weekly meetings that coordinate activity around this specification.
Comments regarding this document are welcome. Please file issues directly on GitHub, or send them to public-credentials@w3.org (subscribe, archives).
The Verifiable Credentials specification [[VC-DATA-MODEL-2.0]] provides a data model and serialization to express digital credentials in a way that is cryptographically secure, privacy respecting, and machine-verifiable. This specification provides a set of HTTP Application Programming Interfaces (HTTP APIs) and protocols for issuing, verifying, presenting, and managing Verifiable Credentials.
When managing verifiable credentials, two general types of APIs are contemplated. The first type is designed to be used within a single security domain. The second type is used to communicate across different security domains. This specification defines both types of APIs.
The APIs that are designed to be used within a single security domain are used by systems that are operating on behalf of a single role such as an Issuer, Verifier, or Holder. One benefit of these APIs for the Verifiable Credentials ecosystem is that they define a useful, common, and vetted modular architecture for managing Verifiable Credentials. For example, this approach helps software architects integrate with common components and speak a common language when implementing systems that issue verifiable credentials. Knowing that a particular architecture has been vetted is also beneficial for architects that do not specialize in verifiable credentials. Documented architectures and APIs increase market competition and reduce vendor lock-in and switching costs.
The APIs that are designed to operate across multiple security domains are used by systems that are communicating between two different roles in a verifiable credential interaction, such as an API that is used to communicate presentations between a Holder and a Verifier. In order to achieve protocol interoperability in verifiable credential interactions, it is vital that these APIs be standardized. The additional benefits of documenting these APIs are the same as those for the single-security-domain APIs: a common, vetted architecture and APIs, increased market competition, and reduced vendor lock-in and switching costs.
This specification contains the following sections that software architects and implementers might find useful:
The Verifiable Credentials API is optimized towards the following design goals:
| Goal | Description |
|---|---|
| Modularity | Implementers need only implement the APIs that are required for their use case, enabling modularity between Issuing, Verifying, and Presenting. |
| Simplicity | The number of APIs and the amount of optionality are kept to a minimum to ensure that they are easy to implement and audit from a security standpoint. |
| Composability | The APIs are designed to be composable such that complex flows are possible using a small number of simple API primitives. |
| Extensibility | Extensions to API endpoints are expected and catered to in the API design, enabling experimentation and the addition of value-added services on top of the base API platform. |
The Verifiable Credentials Data Model defines three fundamental roles: the Issuer, the Verifier, and the Holder.
Actors fulfilling each of these roles may use a number of software or service components to realize the VC API for exchanging Verifiable Credentials.
Each role is associated with a role-specific Coordinator, Service, and Admin, as well as its own dedicated Storage Service. In addition, the Issuer may also manage a Status Service for revocable credentials issued by the Issuer.
Any given VC API implementation may choose to combine any or all of these components into a single functional application. The boundaries and interfaces between these components are defined in this specification to ensure interoperability and substitutability across the conformant Verifiable Credentials ecosystem.
In addition to aggregating components into a single app, implementers may choose to operationalize any given role over any number of active instances of deployed software. For example, a browser-based Holder Coordinator should be considered an amalgam of a web browser, various code running in that browser, one or more web servers (in the case of cross-origin AJAX or remote embedded content), and the code running on those servers. Each of those elements runs as different software packages in different configurations, each executing just part of the overall functionality of the component. For the sake of the VC API, each component satisfies all of its required functionality as a whole, regardless of deployment architecture.
We define these components as follows:
Issuer Coordinator • Verifier Coordinator • Holder Coordinator
Coordinators execute the business rules and policies set by the associated role. Often this is a custom or proprietary Coordinator developed specifically for a single party acting in that role; it is the integration glue that connects the controlling party to the VC ecosystem.
Coordinators may or may not provide a visual user interface, depending on the implementation. Pure command-line or continuously running services may also be able to realize this component.
With the exception of the Status Service, all role-to-role communication is between Coordinators, each acting on behalf of its particular actor to fulfill its role.
The Issuer Coordinator executes the rules about who gets what credentials, including how the parties creating or receiving those credentials are authenticated and authorized. Typically the Issuer Coordinator integrates the Issuer's back-end system with the Issuer service. This integration uses whatever technologies are appropriate; the interfaces between the Issuer Coordinator and back-end services are out of scope for the VC API. The Issuer Coordinator drives the Issuer service.
The Verifier Coordinator communicates with a Verifier service to first check the authenticity and timeliness of a given VC or VP, then applies the Verifier's business rules before ultimately accepting or rejecting that VC or VP. Such business rules may include evaluating the Issuer of a particular claim or simply checking a configured allow-list. The Verifier Coordinator exposes an API for submitting VCs to the Verifier per the Verifier's policies. For example, the Verifier Coordinator may only accept VCs from current users of the Verifier's other services. These rules typically require bespoke integration with the Verifier's existing back-end.
The Holder Coordinator executes the business rules for approving the flow of credentials under the control of the Holder, from Issuers to Verifiers. In several deployments this means exposing a user interface that gives individual Holders a visual way to authorize or approve VC storage or transfer. Some functionality of the Holder Coordinator is commonly referred to as a wallet. In the VC API, the Holder Coordinator initiates all flows. They request VCs from Issuers. They decide if, and when, to share those VCs with Verifiers. Within the VC API, there is no way for either the Issuer or the Verifier to initiate a VC transfer. In many scenarios, the Holder Coordinator is expected to be under the control of an individual human, ensuring a person is directly involved in the communication of VCs, even if only at the step of authorizing the transfer. However, many VCs are about organizations, not individuals. How individuals using Holder Coordinators relate to organizations, and in particular, how organizational credentials are securely shared with, and presented by, (legal) agents of those organizations, is not yet specified as being in scope for the VC API.
Issuer Service • Verifier Service • Holder Service
Services provide generic VC API functionality, driven by their associated Coordinators. They are designed to enable infrastructure providers to offer VC capabilities through Software-as-a-Service. All services expose network endpoints to their authorized Coordinators, which are themselves operating on behalf of the associated role. Although deployed services MAY provide their own HTML interfaces, such interfaces are out of scope for the VC API. Only the network endpoints of services are defined herein.
The Issuer Service takes requests to issue VCs from authorized Issuer Coordinators and returns well-formed, signed Verifiable Credentials. This service MUST have access to private keys (or key services which utilize private keys) in order to create the proofs for those VCs. The API between the Issuer service and its associated key service is believed to be out of scope for the VC API, but may be addressed by WebKMS or similar specifications.
The Verifier service takes requests to verify Verifiable Credentials and Verifiable Presentations and returns the result of checking their proofs and status (if present). The service only checks the authenticity and timeliness of the VC, leaving the Verifier Coordinator to apply any remaining business rules.
The Holder service takes requests to create Verifiable Presentations from an optional set of VCs and returns well-formed, signed Verifiable Presentations containing those VCs. These VPs are used with Issuers to demonstrate control over DIDs prior to issuance and with Verifiers to present specific VCs.
The Status Service provides a privacy-preserving means for publishing and checking the status of any Verifiable Credentials issued by the Issuer. Verifier services use the Issuer's status endpoint (as specified in each revocable verifiable credential) to check the timeliness of a given VC as part of verification.
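For example, a revocable credential issued through this API might include a `credentialStatus` entry similar to the following non-normative sketch, which a Verifier service dereferences to check timeliness. The identifiers, purpose, and index values are illustrative and assume the mechanism described in [[VC-BITSTRING-STATUS-LIST]]:

{
  "credentialStatus": {
    "id": "https://issuer.example/status/3#94567",
    "type": "BitstringStatusListEntry",
    "statusPurpose": "revocation",
    "statusListIndex": "94567",
    "statusListCredential": "https://issuer.example/status/3"
  }
}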
Storage Service (Issuer) • Storage Service (Verifier) • Storage Service (Holder)
Each actor in the system is expected to store their own verifiable credentials, as needed. Several known implementations use secure data storage such as encrypted data vaults for storing the Holder's VCs and use cryptographic authorizations to grant access to those VCs to Verifier Coordinators, as directed by the Holder. In-browser retrieval of such stored credentials can enable web-based Verifier Coordinators to integrate data from the Holder without sharing that data with the Verifier—the data is only ever present in the browser. Authorizing third-party remote access to Holder storage is likely in-scope for the VC API, although we expect this to be defined using extensible mechanisms to support a variety of storage and authorization approaches.
The Issuer and Verifier storage solutions may or may not use secure data storage. Since all such storage interaction is moderated by the bespoke Issuer and Verifier Coordinators, any necessary integrations can simply be part of that bespoke customization. We expect different implementations to compete on the ease of integration into various back-end storage platforms.
The Workflow Service provides a way for coordinators to automate specific interactions for specific users. Each role (Holder, Issuer, and Verifier) can run their own Workflow Service to create and manage exchanges that realize particular workflows. Administrators configure the workflow system to support particular flows. Then, when the business rules justify it, coordinators create exchanges at their Workflow Service and give authorized access to those exchanges to any party.
Issuer Admin • Holder Admin • Verifier Admin
The Admin component is an acknowledgement that each of the other components needs a way to be configured and managed, without prescribing the interfaces or means of that configuration. Some components may use JSON files to drive a semi-automated Issuer. Others might expose HTML pages. We expect different Coordinators and Services to compete on the power, ease, and flexibility of their administration and therefore, as of this writing, we anticipate Admin functionality to be out of scope for the VC API. However, to the extent that configuration settings can be standardized across implementations, each component becomes more substitutable.
Based on this architectural thinking, we may want to frame the VC API as a roadmap of related specifications, integrated in an extensible way for maximum substitutability. Several technologies, such as EDVs and WebKMSs, would likely benefit from the crypto suite approach taken for VC proofs. Defining a generic mechanism that can be realized by any functionally conformant technology enables flexibility while laying the groundwork with currently existing functionality. In this way, we may be able to acknowledge that elements like Key Services, Storage, and Status are necessary parts of the VC API while deferring the definition of how those elements work to specifications already in development as well as those yet to be written.
A conforming VC API client is ...
A conforming VC API server is ...
There are no restrictions put on the base URL location of the implementation. The URL paths used throughout this specification are shown as absolute paths and their base URL MAY be the host name of the server (e.g., `example.com`), a subdomain (e.g., `api.example.com`), or a path within that host (e.g., `example.com/api`).
The VC API can be deployed in a variety of networking environments which might contain hostile actors. As a result, conforming VC API servers require conforming VC API clients to utilize secure authorization technologies when performing certain types of requests. Each HTTP endpoint defined in this document specifies whether or not authorization is required when performing a request. With the exception of the class of forbidden authorization protocols discussed later in this section, the VC API is agnostic regarding authorization mechanism.
The VC API is meant to be generic and useful in many scenarios that require the issuance, possession, presentation, and/or verification of Verifiable Credentials. To this end, implementers are advised to consider the following classifications of use cases:
The rest of this section gives examples of the authorization technologies that have been contemplated for use by conforming implementations. Other equivalent authorization technologies can be used. Implementers are cautioned against using non-standard or legacy authorization technologies.
Requests to the VC API MUST NOT utilize any authorization protocol that includes long-lived static credentials such as usernames and passwords or similar values in those requests. An example of such a forbidden protocol is HTTP Basic Authentication [[RFC7617]].
If the OAuth 2.0 Authorization Framework [[RFC6749]] is utilized for authorization, the access tokens utilized by clients MAY be OAuth 2.0 Bearer Tokens [[RFC6750]] or any other valid OAuth 2.0 token type. Any valid OAuth 2.0 grant type MAY be used to request the access tokens. However, OAuth 2.0 MUST be implemented in the following way:
OAuth2 tokens for this purpose have an audience of the particular issuer instance, e.g., `origin/issuers/zc612332f3`.
The scopes are generalized to read/write actions on particular endpoints:
`write:/credentials/issue` would only allow writing to that particular API.
Other authorization mechanisms that support delegation might be defined in the future.
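As a non-normative illustration, if the access token is a JWT-formatted OAuth 2.0 Bearer Token, its decoded claims for a client authorized to issue credentials on a single issuer instance might look like the following sketch. The token issuer, instance identifier, and expiration are illustrative:

{
  "iss": "https://authorization-server.example",
  "aud": "https://issuer.example/issuers/zc612332f3",
  "scope": "write:/credentials/issue",
  "exp": 1735689600
}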
Service configuration is determined by administrators on a per-instance basis. Instance configuration can include, but is not limited to, credential format, key options, credential status mechanisms, and/or credential templates. Administrators are expected to configure service instances such that `options` included in client requests cannot result in incorrect action or problematic responses by the service.
A coordinator instance can have access to multiple service instances in order to support different use cases. Runtime discovery of service instance configuration is not supported by the VC API as services are expected to be known by the coordinator at the time of coordinator deployment.
Some of the endpoints defined in the following sections accept an `options` object. All properties of the `options` object are OPTIONAL when configuring each instance, as these properties are intended to meet per-deployment needs that might vary. Thus, any given instance configuration MAY prohibit client use of some `options` properties in order to prevent clients from passing certain data to that instance. Likewise, an instance configuration MAY require that clients include some `options` properties.
Implementations MAY extend an `options` object with additional properties.
As extension properties are implementation specific, they ought not be mandatory. This is to maintain interoperability by avoiding clients needing to be modified to use a specific implementation.
When adding an extension `options` property, consider whether providing optionality to clients is necessary. If not, using instance configuration to vary API functionality might be a preferable approach.
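For example, a request's `options` object might look like the following non-normative sketch, where `challenge` is an option referenced elsewhere in this specification and `exampleFeature` is a hypothetical implementation-specific extension that a given instance configuration could allow, require, or prohibit:

{
  "options": {
    "challenge": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
    "exampleFeature": "enabled"
  }
}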
All entity bodies in requests and responses sent to or received from the API endpoints defined by this specification MUST be serialized as JSON and include the `Content-Type` header with a media type value of `application/json`.
Many of the endpoints defined in the following sections receive data and options in request bodies.
Implementations MUST throw an error if an endpoint receives data, options, or option values that it does not understand or know how to process.
This section gives an overview of all endpoints in the VC-API by the component from which the endpoint is expected to be callable. If a component does not have a listing below, it means the VC-API does not currently specify any endpoints for that component.
Below are all endpoints expected to be exposed by the Issuer Coordinator, along with the component that is expected to call each endpoint.
Below are all endpoints expected to be exposed by the Issuer Service, along with the component that is expected to call each endpoint.
Below are all endpoints expected to be exposed by the Status Service, along with the component that is expected to call each endpoint.
Below are all endpoints expected to be exposed by the Verification Service, along with the component that is expected to call each endpoint.
Below are all endpoints expected to be exposed by the Holder Service, along with the component that is expected to call each endpoint.
Below are all endpoints expected to be exposed by the Workflow Service, along with the component that is expected to call each endpoint.
The following APIs are defined for issuing a Verifiable Credential:
An `EnvelopedVerifiableCredential` can be returned in the `IssueCredentialSuccess` response in order to issue credentials with a media type other than `application/vc`, such as `application/vc+sd-jwt`.
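As a non-normative sketch, an issuance request and an `IssueCredentialSuccess` response carrying an enveloped credential might look like the following, assuming an HTTP POST to `/credentials/issue`. The credential content and the truncated SD-JWT data URL are illustrative:

POST /credentials/issue
{
  "credential": {
    "@context": ["https://www.w3.org/ns/credentials/v2"],
    "type": ["VerifiableCredential"],
    "issuer": "did:example:issuer",
    "credentialSubject": { "id": "did:example:subject" }
  },
  "options": {}
}

Response body:
{
  "verifiableCredential": {
    "@context": "https://www.w3.org/ns/credentials/v2",
    "id": "data:application/vc+sd-jwt,eyJhbGciOi...",
    "type": "EnvelopedVerifiableCredential"
  }
}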
The following APIs are defined for verifying a Verifiable Credential:
The instance should create a challenge for use during verification, and track the number of times the challenge has been passed to verification endpoints as `options.challenge`.
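For example, a presentation verification request carrying an instance-issued challenge might include `options.challenge` as in the following non-normative sketch, where verifiablePresentation is a placeholder for an actual verifiable presentation and the challenge and domain values are illustrative:

{
  "verifiablePresentation": verifiablePresentation,
  "options": {
    "challenge": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
    "domain": "verifier.example"
  }
}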
The following APIs are defined for presenting a Verifiable Credential:
The URL path values `exchange-id` and `transaction-id` are meaningful to the server but are opaque to the client. While some server implementations might use values that happen to be human-readable, clients are strongly advised to not assign semantics to any human-readable values.
An `EnvelopedVerifiablePresentation` can be returned in the response in order to create presentations with a media type other than `application/vp`, such as `application/vp+jwt`.
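A non-normative sketch of such a response follows; the data URL content is illustrative and truncated:

{
  "verifiablePresentation": {
    "@context": "https://www.w3.org/ns/credentials/v2",
    "id": "data:application/vp+jwt,eyJraWQiOi...",
    "type": "EnvelopedVerifiablePresentation"
  }
}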
Discovery is an optional call that allows the Holder Coordinator to ensure it can support the exchange protocol requirements before calling the endpoint. Coordinators SHOULD support the exchange discovery endpoint.
A VC API workflow defines a particular set of steps for exchanging verifiable credentials between two parties across a trust boundary. Each step can involve the issuance, verification, transmission, or presentation of verifiable credentials. Examples of VC API workflows include, but are not limited to:
Workflow instances are expected to be created by administrators, for use with, for example, coordinator websites. A workflow instance is created by performing an HTTP POST to the workflow service's `/workflows` endpoint. The HTTP request body includes the configuration for the workflow instance. This includes, but is not limited to, information about the steps that define the workflow and any credential templates that will be used to issue verifiable credentials. The steps that define the workflow might also be templates, enabling additional flexibility. If a workflow involves the issuance of verifiable credentials, or the verification of presentations or credentials, then the workflow instance configuration can include authorization capabilities to use one or more VC API issuer and/or verification services.
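The exact configuration properties are implementation- and deployment-specific; the following non-normative sketch of a request body POSTed to `/workflows` uses hypothetical `steps` and `credentialTemplates` properties purely for illustration:

{
  "steps": {
    "initialStep": {
      "verifiablePresentationRequest": {
        "query": [{ "type": "DIDAuthentication" }]
      }
    }
  },
  "credentialTemplates": []
}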
Once a workflow instance exists, authorization to create and query particular workflow interactions, called VC API exchanges, can be given to coordinators.
A VC API exchange represents a particular interaction based on a given VC API workflow. The interaction will take place between an exchange client and the workflow service. Exchanges are expected to be transitory, only existing as long as the interaction takes to complete. The workflow service stores state information about each exchange such as whether the exchange is pending, active, or complete, as well as the current step in the workflow, any workflow-specific variables and data, and any verifiable presentations and credentials received while the exchange executes.
An issuer, verifier, or holder coordinator is responsible for creating exchanges. The coordinator creates an exchange by performing an HTTP POST to the `/exchanges` subpath of a chosen workflow, on the workflow service. The HTTP request body includes a time-to-live (TTL) for the exchange and any variables to be used to populate the workflow's templates for the particular exchange. The request body can also include configuration options to enable the exchange to be executed using additional protocols beyond VC API. Once the exchange is created, an exchange URL that identifies the exchange and enables interaction with it is returned to the coordinator.
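As a non-normative sketch, a coordinator might create an exchange by POSTing a body like the following to `/workflows/{localWorkflowId}/exchanges`. The `ttl` value is a lifetime in seconds; the `variables` property name is an assumption based on the description above, and its contents are illustrative:

{
  "ttl": 900,
  "variables": {
    "subjectName": "Jane Doe"
  }
}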
The exchange URL is given to the exchange client so that it can initiate the exchange. Initiating the exchange does not require any authorization beyond the exchange URL. Depending on the workflow service implementation, exchange URLs can also be capability URLs (i.e., the URL is an unguessable secret such that only whoever is given the URL can initiate the exchange). If the workflow that the exchange is based on requires any additional authorization beyond possession of the exchange URL, this is to be obtained during the exchange, not at its initiation.
The exchange URL can also be used by the coordinator to query the current state of the exchange as it progresses and to obtain information associated with the exchange that the workflow service has stored. Querying the exchange in this way requires additional authorization that the coordinator is expected to have and that the exchange client is not.
How the exchange URL is transmitted from a coordinator to an exchange client is out of scope for this specification. Known mechanisms for sharing the exchange URL with the client include the Credential Handler API (aka CHAPI), a QR code, or a universal link.
VC API exchanges are designed to be executable using other protocols in addition to the VC API exchange protocol; for example, an exchange could potentially be executable with any of the OID4VCI, OID4VP, DIDComm, and VC API exchange protocols. The protocols supported depend on the complexity of the workflow the exchange is based on, and the options provided by the coordinator when the exchange was created.
The exchange client is expected to initiate the exchange using a protocol that is compatible with how the client received the exchange URL. For example, the exchange URL could have been provided over CHAPI with a protocol identifier indicating that the VC API protocol ought to be used. Alternatively, the exchange URL could be included as the "credential_issuer" in an OID4VCI credential offer, or as the "client_id" of an OID4VP authorization request, indicating that OID4VCI or OID4VP, respectively, ought to be used. This section focuses on how an exchange client uses VC API to interact with the exchange; see Appendix TBD to see how to combine VC API exchanges with other protocols such as OID4VCI, OID4VP, and DIDComm.
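For example, an OID4VCI credential offer pointing at an exchange URL as its "credential_issuer" might look like the following non-normative sketch. The exchange URL and credential configuration identifier are illustrative, and the exact offer parameters depend on the OID4VCI draft in use:

{
  "credential_issuer": "https://workflow.example/workflows/z19uMC/exchanges/z1A9dF",
  "credential_configuration_ids": ["ExampleDegreeCredential"]
}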
Exchanges that are executed using the VC API protocol involve messages sent as request and response bodies over HTTP. Each message consists of a simple JSON object that includes zero or more of the following properties and values:
Custom properties and values might also be included, but are expected to trigger errors in implementations that do not recognize them.
To initiate an exchange using the VC API protocol, an exchange client performs an HTTP POST sending a JSON object as the request body. In the simplest case, when the client has no constraints of its own on the exchange — i.e., it has nothing to request from the other party — the JSON object is empty (`{}`). The workflow service responds with its own JSON object in the response body.
If that response object is empty, the exchange is complete and nothing is requested from nor offered to the exchange client. If the object includes `verifiablePresentationRequest`, then the exchange is not yet complete and some additional information is requested, as specified by the contents of the associated verifiable presentation request. If the object includes `verifiablePresentation`, then some information is offered, such as verifiable credentials issued to the holder operating the exchange client or verifiable credentials with information about the exchange server's operator based on the exchange client's request. If the object includes `redirectUrl`, the exchange is complete and the workflow service recommends that the client proceed to another place to continue the interaction in another form.
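For example, a workflow service that needs the exchange client to authenticate might respond to the initial empty POST (`{}`) with a message like the following non-normative sketch, after which the client would POST back a message containing a matching `verifiablePresentation`. The query type, challenge, and domain values are illustrative:

{
  "verifiablePresentationRequest": {
    "query": [{ "type": "DIDAuthentication" }],
    "challenge": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
    "domain": "workflow.example"
  }
}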
Many verifiable credential use cases can be implemented using these basic primitives. Either party to an exchange is capable of requesting verifiable presentations and of providing one or more verifiable credentials that might be necessary to establish trust and/or gain authorization capabilities, and either party is capable of presenting credentials that they hold or that they have issued. Specific workflows can be configured to expect specific presentations and credentials and to reject deviations from the expected flow of information. When a workflow service determines that a particular message is not acceptable, it raises an error by responding with a `4xx` HTTP status message and a JSON object that expresses information about the error.
The VC API exchange design approach is layered: it aims to provide a minimal communication message layer and a set of primitives that enable most use cases to be implemented via specific verifiable presentation requests and verifiable credentials at a layer above. See the appendices that follow for examples of workflows and exchanges that use specific verifiable presentation requests and verifiable credentials.
These examples will be added later. The following APIs are defined for using workflows and exchanges for credential use cases that require crossing trust boundaries:
There is a `ttl`, or time-to-live, property associated with exchanges created using the `/workflows/{localWorkflowId}/exchanges` endpoint. This impacts the lifetime of challenges associated with such an exchange: if a challenge is bound to an exchange, the lifetime of that challenge is the `ttl` of the exchange.
The APIs in this specification enable unmediated (automated, machine-to-machine) or mediated (person-in-the-loop) exchanges to be executed. These exchanges are initiated by a Holder Coordinator and responded to by any Coordinator that implements exchanges. The flows consist of the following steps:
The Holder Coordinator MAY call the responding Coordinator's exchange discovery endpoint to determine whether it can support that Coordinator's protocol requirements on a particular endpoint, before actually initiating the exchange.
A diagram of the steps outlined above is presented below:
The general exchange above can be performed in a way that is fully automated, mediated by a person, or in a hybrid fashion where portions are automated but interaction by a person is required at certain stages. The second step above is used to provide guidance on whether the next step is automated or requires an individual to intervene.
Error handling and messaging in the VC-API aligns with Problem Details for HTTP APIs [[RFC9457]]. Implementers SHOULD include a `status` and a `title` in the error response body, relating to the specifics of the endpoint on which the error occurs.
Aligning on error handling and messaging will greatly improve the accuracy of test suites when identifying technical friction impacting interoperability.
Leveraging other fields, such as `detail`, `instance`, and `type`, is encouraged to provide more contextual feedback about the error, while being conscious of security concerns and hence not disclosing sensitive information. Implementers should handle all server errors to the best of their capabilities. Endpoints should avoid returning improperly handled 500 errors in production environments, as these may lead to information disclosure.
It is recommended to avoid raising errors while performing verification, and instead gather ProblemDetails objects to include in the verification results.
The example `type` URLs will work in the future, after VCDM v2.0 becomes a global standard. To ensure the error links to the appropriate location, you can replace the base URL of `https://www.w3.org/TR/vc-data-model` with `https://www.w3.org/TR/vc-data-model-2.0` for the time being.
{ "type": "https://www.w3.org/TR/vc-data-model#CRYPTOGRAPHIC_SECURITY_ERROR", "status": 400, "title": "CRYPTOGRAPHIC_SECURITY_ERROR", "detail": "The cryptographic security mechanism couldn't be verified. This is likely due to a malformed proof or an invalid verificationMethod." }
{ "verified": false, "document": verifiableCredential, "mediaType": "application/vc", "controller": issuer, "controllerDocument": didDocument, "warnings": [ProblemDetails], "errors": [ProblemDetails] }
Verifiable credentials [[VC-DATA-MODEL-2.0]] are a standard data model designed to mitigate risks of misuse and fraud. As a data model, verifiable credentials are protocol-neutral and consider at least two types of entities: issuer and subject. When the subject of a verifiable credential is a natural person or linked to a natural person, privacy and human rights can be impacted by the vastly more efficient processing of standardized verifiable credentials as compared to their analog ancestors.
Technology, in the form of standardized APIs and protocols for issuing verifiable credentials, further enhances the efficiency of processing verifiable credentials and adds to the risks of unforeseen privacy and human rights consequences.
Verifiable credentials issuance has a request phase and a delivery phase. The request might be made by the subject or another role, and delivery can be to a client that might or might not be controlled by the subject. Delegation is highly relevant for both phases. The issuer might delegate processing of the request to a separate entity. The subject, for their part, might also delegate the ability to request a verifiable credential to a separate entity. Note that the subject might not always have the capability or ability to perform delegation. Examples include a newborn baby, a pet, and a person with dementia. In such cases, the request might be performed by a third party who was not delegated by the subject. The ability to delegate is a third dimension in the enhanced efficiency of processing verifiable credentials and has an impact on privacy and human rights.
The architecture described in this specification is designed for market acceptance through a combination of efficiency and respect for privacy and human rights. APIs and protocols for processing verifiable credentials do not favor delegation by the issuer role over delegation by the subject role.
It is considered a bad privacy practice for a verifier to contact an issuer about a specific verifiable credential. This practice is known as "phoning home" and can result in a mismatch in privacy expectations between holders, issuers, verifiers, and other parties expressed in a verifiable credential. Phoning home enables issuers to correlate unsuspecting parties with the use of certain verifiable credentials which can violate privacy expectations that each entity might have regarding the use of those credentials. For example, what is expected by the holder to be a private interaction between them and the verifier becomes one where the issuer is notified of the interaction.
There are some interactions where contacting the issuer in a privacy-preserving manner upholds the privacy expectations of the holder. For example, contacting the issuer to get revocation status information in a privacy-respecting manner, such as through a status list that provides group privacy, can be acceptable as long as the issuer is not able to single out which verifiable credential is being queried based on the retrieval of the status list. For more information on one such mechanism, see the [[[VC-BITSTRING-STATUS-LIST]]] specification.
Verifiers are urged to not "phone home" in ways that will create privacy violations. When retrieving content that is linked from a verifiable credential, using mechanisms such as [[[?OHTTP]]] and aggressively caching results can improve the privacy characteristics of the ecosystem.
The APIs provided by this specification enable the deletion of verifiable credentials and verifiable presentations from storage services. The result of these deletions and the side-effects they might cause are out of scope for this specification. However, implementers are advised to understand the various ways deletion can be implemented. There are at least two types of deletion that are contemplated by this specification.
Partial deletion marks a record for deletion but continues to store some or all of the original information. This mode of operation can be useful if there are audit requirements for all credentials and/or presentations over a particular time period, or if recovering an original credential might be a useful feature to provide.
Complete deletion purges all information related to a given verifiable credential or verifiable presentation in a way that is unrecoverable. This mode of operation can be useful when removing information that is outdated and beyond the needs of any audit or when responding to any sort of "right to be forgotten" request.
When deleting a verifiable credential, handling of its status information needs to be considered. Some use cases might call for deletion of a particular verifiable credential to also set the revocation and suspension bits of that verifiable credential, such that any sort of status check for the deleted credential fails and use of the credential is halted.
Given the scenarios above, implementers are advised to allow the system actions that occur after a delete to be configurable, such that system flexibility is sufficient to address any verifiable credential use case.
The Working Group thanks the following individuals for their contributions to this specification: The final list of acknowledgements will be compiled at the end of the Candidate Recommendation phase.
Portions of the work on this specification have been funded by the United States Department of Homeland Security's Silicon Valley Innovation Program under contracts 70RSAT20T00000003, 70RSAT20T00000010, 70RSAT20T00000029, 70RSAT20T00000031, 70RSAT20T00000033, and 70RSAT20T00000043. The content of this specification does not necessarily reflect the position or the policy of the U.S. Government and no official endorsement should be inferred.
Development of this specification has also been supported by the W3C Credentials Community Group, chaired by Kim Hamilton Duffy, Heather Vescent, and Wayne Chang.