The project aims to enable supply chains to share insights and data across multiple data platforms, enhancing the efficiency of the supply chain network. Specifically, it facilitates onboarding new actors into a network, establishing connections between them, and enabling the actors to interact. Interactions include issuing credentials, accepting issued credentials, requesting proofs of credentials from other actors, and verifying the proofs presented in response.
- Attribution
- Setup
- Getting Started
- Development Mode
- Environment Variables
- Testing
- WebSocket and webhooks
- Verified DRPC
- Schema Definition
- Credentials and Proofs
- Explicit Credential Selection
Thanks to everybody who contributed their hard work to the project. This project is licensed under the Apache-2.0 licence.
If npm i fails on macOS with references to node-gyp, ensure Xcode is installed via the App Store and accept the licence with sudo xcodebuild -license accept. Before re-running npm i, delete your node_modules directory (if it was created) in case it contains stale references: rm -rf ./node_modules.
The RPC client can be configured through envs in the following format:

```
{
  ...,
  "verifiedDrpcOptions": {
    "proofTimeoutMs": <int>, (timeout on proof requests, default: 5000)
    "requestTimeoutMs": <int>, (timeout on DRPC requests, default: 5000)
    "credDefId": <string>, (credential definition ID to add to restrictions, can be set through the "VERIFIED_DRPC_OPTIONS_CRED_DEF_ID" env)
    "proofRequestOptions": <CreateProofRequestOptions<ProofProtocol[]>> (proof options)
  },
  ...
}
```

Note that the proof options must be set through envs because a DRPC request handler is configured during initialisation.
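For reference, a fully-populated options object might look like the following sketch, built from the defaults listed in the Environment Variables table below (the credDefId value is a placeholder):

```typescript
// Sketch of a fully-populated verifiedDrpcOptions block, using the defaults
// from the Environment Variables table. The credDefId is a placeholder and is
// normally set via the VERIFIED_DRPC_OPTIONS_CRED_DEF_ID env.
const verifiedDrpcOptions = {
  proofTimeoutMs: 5000,
  requestTimeoutMs: 5000,
  credDefId: '<your-credential-definition-id>',
  proofRequestOptions: {
    protocolVersion: 'v2',
    proofFormats: {
      anoncreds: {
        name: 'drpc-proof-request',
        version: '1.0',
        requested_attributes: {
          companiesHouseNumberExists: { name: 'companiesHouseNumber' },
        },
      },
    },
  },
}
```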
- postgres:16.3+
- Xcode command line utilities (for node-gyp)
- npm 11.0.0+
- node 24.0.0+
The REST API provides an OpenAPI schema that can be viewed using the Swagger UI (http://localhost:3000/swagger) served by the server. The raw OpenAPI spec is available on the /api-docs endpoint (e.g. http://localhost:3000/api-docs).
Below you will find commands for starting up the containers in Docker (see the Using Docker section).
The OpenAPI spec is generated from the model classes used by Aries Framework JavaScript. Due to limitations in the inspection of these classes, the generated schema does not always exactly match the expected format. Keep this in mind when using this package. If you encounter any issues, feel free to open an issue.
This service includes an optional did:web server (DID_WEB_ENABLED env). Since did:web always resolves to HTTPS, the server runs as HTTPS in dev mode. A local trusted certificate and key must be generated before it can be accessed in your browser.
- Install mkcert.
- Run the following commands:
```
npm run setup:certs
export NODE_EXTRA_CA_CERTS="$(mkcert -CAROOT)/rootCA.pem"
```

This will create certs/dev-cert.pem and certs/dev-key.pem, covered by a single SAN certificate valid for alice, bob, charlie, and localhost. These are mounted into each agent container and referenced by the DID_WEB_DEV_CERT_PATH and DID_WEB_DEV_KEY_PATH envs.
If you also want browser/system trust for the local CA, run mkcert -install once (this may prompt for sudo).
NODE_EXTRA_CA_CERTS is set to the root CA so that the dev certificates are trusted when the agent resolves did:web documents over HTTPS.
In production, the server runs HTTP only (DID_WEB_USE_DEV_CERT=false) and HTTPS is handled by ingress.
When the did:web server is enabled, a matching did:web document is automatically created, added to the did:web server and imported to the agent at startup. The DID of the document matches the server's configured DID_WEB_DOMAIN, e.g. localhost%3A8443 produces a document with "id": "did:web:localhost%3A8443", which correctly resolves to the server at https://localhost:8443/did.json. This ensures the document can be resolved by other parties.
To configure the autogenerated did:web, set the following environment variables:
- DID_WEB_SERVICE_ENDPOINT=https://yourdomain.com - the service endpoint URL
The generated DID:web document follows the W3C DID Core specification and includes:
- @context: Standard DID contexts
- id: The DID identifier
- verificationMethod: Canonical Credo v0.6 methods (Ed25519VerificationKey2020 for #auth-key/#assertion-key, X25519KeyAgreementKey2019 for #agreement-key)
- authentication: References #auth-key
- assertionMethod: References #assertion-key
- keyAgreement: References #agreement-key
- capabilityInvocation: References #auth-key
- service: DIDComm v1-compatible did-communication service with recipientKeys referencing #auth-key
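As an illustration only, a document generated for DID_WEB_DOMAIN=localhost%3A8443 would look roughly like the sketch below; the exact contexts, key encodings and service id are determined by Credo, so treat every value as a placeholder.

```typescript
// Rough sketch of the autogenerated did:web document for
// DID_WEB_DOMAIN=localhost%3A8443. Values are placeholders, not canonical output.
const did = 'did:web:localhost%3A8443'

const didDocument = {
  '@context': ['https://www.w3.org/ns/did/v1' /* plus key/service contexts */],
  id: did,
  verificationMethod: [
    { id: `${did}#auth-key`, type: 'Ed25519VerificationKey2020', controller: did /* plus public key material */ },
    { id: `${did}#assertion-key`, type: 'Ed25519VerificationKey2020', controller: did /* plus public key material */ },
    { id: `${did}#agreement-key`, type: 'X25519KeyAgreementKey2019', controller: did /* plus public key material */ },
  ],
  authentication: [`${did}#auth-key`],
  assertionMethod: [`${did}#assertion-key`],
  keyAgreement: [`${did}#agreement-key`],
  capabilityInvocation: [`${did}#auth-key`],
  service: [
    {
      id: `${did}#did-communication`, // id fragment is an assumption
      type: 'did-communication',
      serviceEndpoint: 'https://yourdomain.com', // from DID_WEB_SERVICE_ENDPOINT
      recipientKeys: [`${did}#auth-key`],
    },
  ],
}
```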
For migration details from the legacy generated shape (#owner/#encryption, multibase/base58 keys), see docs/credo-v0.6-migration-release-notes.md (DID:web generated document shape section).
DID documents are saved to Postgres, keyed by DID (primary key); if a new document is uploaded with the same DID as an existing one, the new document overwrites the old. Apart from checking for an id key, the server doesn't validate the DID, so it's up to the user to load valid DIDs (make sure the domain in the DID matches the domain of the did:web server).
For docker-compose and docker-compose-testnet, volumes are configured to mount the contents of the local ./dids/alice, ./dids/bob and ./dids/charlie directories into the respective container's /dids directory, e.g. ./dids/alice:/dids. A did:web document placed locally at ./dids/alice/did.json will be loaded by Alice in Docker at startup from /dids.
The did:web server maps any GET request ending in did.json to a did:web ID and returns the matching document from the database if found. The domain of the server is set by the DID_WEB_DOMAIN env. For example, if DID_WEB_DOMAIN=example.com:
- GET '/did.json' maps to did:web:example.com. If the database contains a row where DID == did:web:example.com, the document will be returned.
- GET '/.well-known/did.json' also maps to did:web:example.com. .well-known is a special case as part of the did:web spec.
Port numbers are allowed in DID_WEB_DOMAIN but : must be encoded e.g. localhost%3A8443.
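The mapping can be summarised with the following sketch (an illustration of the rules above, not the server's actual implementation):

```typescript
// Illustrative sketch of mapping a GET path ending in did.json to a did:web ID,
// following the rules described above.
function pathToDid(domain: string, path: string): string | undefined {
  if (!path.endsWith('/did.json')) return undefined

  // Strip the trailing '/did.json' and any leading '/'
  const prefix = path.slice(0, -'/did.json'.length).replace(/^\/+/, '')

  // '/.well-known/did.json' is the special root case defined by the did:web spec
  if (prefix === '' || prefix === '.well-known') return `did:web:${domain}`

  // Remaining path segments become colon-separated DID path components
  return `did:web:${domain}:${prefix.split('/').join(':')}`
}

// Examples with DID_WEB_DOMAIN=example.com:
// pathToDid('example.com', '/did.json')             -> 'did:web:example.com'
// pathToDid('example.com', '/.well-known/did.json') -> 'did:web:example.com'
// pathToDid('example.com', '/dir/did.json')         -> 'did:web:example.com:dir'
```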
The following command will spin up the infrastructure (IPFS node, Postgres database, testnet network) for local testing and development purposes:
```
docker compose -f docker-compose.yml up --build -d
```

Next, start the local agent for development with:

```
npm run dev
```

The agent API is now accessible via a Swagger (OpenAPI) interface on port 3000.
If you wish to also start the agent (Alice) within docker, run the following command:
```
docker compose -f docker-compose.yml up --build -d --scale alice=1
```

The following command will create a containerised private network consisting of 3 agents (Alice, Bob and Charlie) and a 3-node private IPFS cluster:

```
docker compose -f docker-compose-testnet.yml up --build -d
```

This private testnet has the following ports available to the user for testing:
| Agent | OpenAPI | HTTP | WS |
|---|---|---|---|
| Alice | 3000 | 5002 | 5003 |
| Bob | 3001 | 5102 | 5103 |
| Charlie | 3002 | 5202 | 5203 |
| IPFS | 8080 | | |
Network name: testnet
The following lifecycle commands can be run using npm
| command | description |
|---|---|
| depcheck | Runs dependency analysis to ensure all package.json dependencies are used and included |
| lint | Static linting check with eslint |
| lint:fix | Static linting check with eslint, fixing issues where possible |
| check | Check types are valid according to the TypeScript language server |
| clean | Remove build artefacts |
| build | Compile build artefacts including tsoa artefacts. Note this does not perform type checking |
| tsoa:build | Build tsoa artefacts routes.ts and swagger.json |
| tsoa:watch | Build tsoa artefacts routes.ts and swagger.json and watch for changes, rebuilding on change |
| dev | Runs tsoa:watch and a development server concurrently in watch mode. Can be used for live debugging. Configure with environment variables |
| start | Start production server from build |
| test:unit | Run unit tests. Configure with environment variables |
| test:integration | Run integration tests |
| test-watch | Run unit tests and re-run on changes |
The envs are defined under src > env.ts and are used to configure a container at startup. Most have defaults; if you wish to override them, provide them under environment in docker compose. For any envs that are an array of strings, provide them comma-separated, like so: - ENDPOINT=http://charlie:5002,ws://charlie:5003.
| variable | required | default | description |
|---|---|---|---|
| LABEL | Y | "Veritable Cloudagent" | A label that is used to identify the owner of the wallet |
| WALLET_ID | Y | "walletId" | An id of the Agent's wallet |
| WALLET_KEY | Y | "walletKey" | A key for the Agent's wallet |
| ENDPOINT | Y | ['http://localhost:5002', 'ws://localhost:5003'] | An array of endpoints for the agent app; if passing as an environment variable in docker, please pass as a comma delimited string |
| LOG_LEVEL | Y | info | Log level for the app. Choices are trace, debug, info, warn, error or silent |
| USE_DID_SOV_PREFIX_WHERE_ALLOWED | N | false | Allows the usage of 'sov' prefix in DIDs where possible |
| USE_DID_KEY_IN_PROTOCOLS | N | true | Allows the use of DID keys in protocols |
| OUTBOUND_TRANSPORT | Y | ['http', 'ws'] | Specifies the type of outbound transport |
| INBOUND_TRANSPORT | Y | "[{"transport": "http", "port": 5002}, {"transport": "ws", "port": 5003}]" | Specifies the inbound transport, needs to be provided as a JSON parseable string |
| AUTO_ACCEPT_CONNECTIONS | N | false | Allows connection requests to be automatically accepted upon being received |
| AUTO_ACCEPT_CREDENTIALS | N | "never" | Allows for credentials to be automatically accepted upon being received |
| AUTO_ACCEPT_MEDIATION_REQUESTS | N | false | Allows mediation requests to be automatically accepted |
| AUTO_ACCEPT_PROOFS | N | "never" | Allows for proofs to be automatically accepted upon being received |
| AUTO_UPDATE_STORAGE_ON_STARTUP | N | true | Updates storage on startup |
| BACKUP_BEFORE_STORAGE_UPDATE | N | false | Creates a backup before the storage update |
| CONNECTION_IMAGE_URL | N | "https://image.com/image.png" | Url for connection image |
| WEBHOOK_URL | Y | ['https://my-webhook-server'] | An array of webhook urls |
| ADMIN_PORT | N | 3000 | The port for the app |
| ADMIN_PING_INTERVAL_MS | N | 10000 | The time interval in ms on which to perform WebSocket ping checks |
| IPFS_ORIGIN | Y | "http://ipfs0:5001" | The IPFS url endpoint |
| IPFS_TIMEOUT_MS | N | 15000 | Universal timeout in ms for IPFS network requests (upload and download) |
| PERSONA_TITLE | N | "Veritable Cloudagent" | Tab name which you can see in your browser |
| PERSONA_COLOR | N | "white" | Defines the background colour of swagger documentation |
| STORAGE_TYPE | Y | "postgres" | The type of storage to be used by the app |
| POSTGRES_HOST | N | "postgres" | If type of storage is set to "postgres" a host for the database needs to be provided |
| POSTGRES_PORT | N | "postgres" | If type of storage is set to "postgres" a port for the database needs to be provided |
| POSTGRES_USERNAME | N | "postgres" | If type of storage is set to "postgres" a username for the database needs to be provided |
| POSTGRES_PASSWORD | N | "postgres" | If type of storage is set to "postgres" a password for the database needs to be provided |
| VERIFIED_DRPC_OPTIONS_PROOF_TIMEOUT_MS | N | 5000 | Timeout in ms on proof requests |
| VERIFIED_DRPC_OPTIONS_REQUEST_TIMEOUT_MS | N | 5000 | Timeout in ms for DRPC requests |
| VERIFIED_DRPC_OPTIONS_PROOF_REQUEST_OPTIONS | Y | {"protocolVersion": "v2", "proofFormats": {"anoncreds": {"name": "drpc-proof-request", "version": "1.0", "requested_attributes": {"companiesHouseNumberExists": {"name": "companiesHouseNumber"}}}}} | Options for the proof request |
| DID_WEB_ID | N | "" | The DID:web identifier to generate (e.g., "did:web:example.com") |
| DID_WEB_SERVICE_ENDPOINT | N | "" | The service endpoint URL for the DID:web document |
| DID_WEB_ENABLED | N | false | Enables the did:web server. |
| DID_WEB_PORT | N | 8443 | Port for the did:web server. |
| DID_WEB_USE_DEV_CERT | N | false | Use dev certificates for did:web server HTTPS. Set to false in production. |
| DID_WEB_DEV_CERT_PATH | N | "" | Path to dev-only HTTPS certificate for did:web server. |
| DID_WEB_DEV_KEY_PATH | N | "" | Path to dev-only HTTPS key for did:web server. |
| DID_WEB_DB_NAME | N | "did-web-server" | Name of the database used by the did:web server |
| DID_WEB_DOMAIN | N | "localhost%3A8443" | Domain for the did:web server. Requests are mapped to DIDs using this domain, e.g. GET '/dir/did.json' -> did:web:{DID_WEB_DOMAIN}:dir |
Unit tests and integration tests are defined in the top-level tests directory.
Unit tests can be run with npm run test:unit.
Integration tests require certificates + setting an env for the path to the root CA:
```
npm run setup:certs
export NODE_EXTRA_CA_CERTS="$(mkcert -CAROOT)/rootCA.pem"
```

Then the testnet orchestration can be deployed.
If the testnet is already running locally (for example via docker compose -f docker-compose-testnet.yml up --build), the integration tests can be run by first building the tests docker image and then running it against the testnet stack:
```
docker build --target test -t veritable-cloudagent-integration-tests . && \
docker run -it \
  --network=testnet \
  -e ALICE_BASE_URL=http://alice:3000 \
  -e BOB_BASE_URL=http://bob:3000 \
  -e CHARLIE_BASE_URL=http://charlie:3000 \
  veritable-cloudagent-integration-tests
```

If the testnet is not already running, the entire stack can be run with integration tests using the following command:
```
docker compose \
  -f docker-compose-testnet.yml \
  -f docker-compose-integration-tests.yml \
  up --build --exit-code-from integration-tests
```

The database is managed by the third-party library credo-ts, more specifically askar.
The REST API provides the option to connect as a client and receive events emitted from your agent using WebSocket and webhooks.
You can hook into the events listener using webhooks, or connect a WebSocket client directly to the default server.
The currently supported events are:
- Basic messages
- TrustPing
- Connections
- Credentials
- Proofs
- DRPC
- Verified DRPC
Webhook urls can be specified using the WEBHOOK_URL env.
When using the REST server as a library, the WebSocket server and webhook urls can be configured in the startServer and setupServer methods.
```typescript
// You can either call startServer() or setupServer() and pass the ServerConfig
// interface with a webhookUrl and/or a WebSocket server.
import { Agent } from '@credo-ts/core'
import { Server } from 'ws'
// startServer is exported by this package

const run = async (agent: Agent) => {
  const config = {
    port: 3000,
    webhookUrl: ['http://test.com'],
    socketServer: new Server({ port: 8080 }),
  }
  await startServer(agent, config)
}

run(agent) // pass your initialised Agent instance
```

The startServer method will create and start a WebSocket server on the default http port if no socketServer is provided, and will use the provided socketServer if available.
However, the setupServer method does not automatically create a socketServer if one is not provided in the config options.
In case of an event, we will send the event to the webhookUrls with the topic of the event added to the url (http://test.com/{topic}).
So in this case when a connection event is triggered, it will be sent to: http://test.com/connections
The payload of the webhook contains the serialized record related to the topic of the event. For the connections topic this will be a ConnectionRecord, for the credentials topic it will be a CredentialRecord, and so on.
For WebSocket clients, the events are sent as JSON stringified objects.
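As a minimal sketch, a client using the ws package could consume these events as follows, assuming the default setup where the WebSocket server shares the admin port (3000):

```typescript
import WebSocket from 'ws'

// Minimal sketch of a WebSocket client consuming agent events.
// Assumes the default admin port (3000) and that no separate socketServer was configured.
const socket = new WebSocket('ws://localhost:3000')

socket.on('open', () => console.log('connected to agent event stream'))

socket.on('message', (data) => {
  // Events arrive as JSON stringified objects (see above)
  const event = JSON.parse(data.toString())
  console.log('received event', event)
})

socket.on('close', () => console.log('disconnected'))
```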
The Verified DRPC module is built on a clone of the credo-ts DRPC package, which supports request-response style messaging according to the JSON-RPC spec.
In addition to RPC messaging, Verified DRPC adds a proof verification step on both the client (requester) and server (responder) peers. This is implemented by executing a proof request before sending an outbound DRPC request and before processing an inbound request. This is represented by the following states:
Verified DRPC request and responses are exposed through the /verified-drpc/ REST endpoints, verified-drpc webhooks and VerifiedDrpc internal events.
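As an illustration, a verified DRPC call is an ordinary JSON-RPC 2.0 request sent to one of those endpoints; the path and field names below are assumptions, so consult the Swagger UI for the exact generated routes:

```typescript
// Hypothetical sketch of sending a verified DRPC request over an existing
// connection. The endpoint path and body field names are assumptions; check
// the Swagger UI (/swagger) for the exact generated routes.
async function sendVerifiedDrpcRequest(connectionId: string) {
  const response = await fetch(`http://localhost:3000/verified-drpc/request/${connectionId}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    // A standard JSON-RPC 2.0 request body
    body: JSON.stringify({ jsonrpc: '2.0', method: 'ping', params: {}, id: 1 }),
  })
  // Proof verification happens on both peers before the request is processed
  return response.json()
}
```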
The repo contains a 'schema' folder with a schema body JSON which can be imported into ts files like so:

```typescript
import _schema from './schema/schemaAttributes.json'
import type { AnonCredsSchema } from '@credo-ts/anoncreds'

const schema = _schema as AnonCredsSchema
```

The JSON file contains the attributes required for a schema to be registered (an illustrative schema body is sketched after the list below). The attributes are:
- checkName => what kind of check this is (e.g. NASDAQ, Rev >500, Production Capacity etc)
- companyName => name of the company that is being checked (max 200 characters)
- companiesHouseNumber => unique identifier for companies provided by Companies House (max 8 characters - alphanumeric)
- issueDate => date when this check was issued (dateInt e.g. 20230101 for 1st Jan 2023)
- expiryDate => date when this check expires (dateInt e.g. 20230101 for 1st Jan 2023)
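For illustration, a schema body containing these attributes would look roughly like the following sketch (the name, version and issuerId values are placeholders):

```typescript
// Illustrative shape of a schema body with the attributes listed above.
// Name, version and issuerId are placeholders, not the repo's actual values.
const schemaBody = {
  issuerId: 'did:key:exampleDidKey',
  name: 'example-check-schema',
  version: '1.0',
  attrNames: ['checkName', 'companyName', 'companiesHouseNumber', 'issueDate', 'expiryDate'],
}
```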
The schema definition can be posted to /schemas to register it. A successful call returns a response with an 'id' property identifying this schema, which can be used to refer to the schema when creating a new credential definition like so:
```
{
  "tag": "myTestDefinition",
  "schemaId": "ipfs://example",
  "issuerId": "did:key:exampleDidKey"
}
```

(Note: Each credential definition is unique, because a different set of cryptographic materials is created each time.)
A credential definition can then be used to issue a credential which contains both information about the issuer of the credential and the check itself. (Note: Because the schema and definition are saved on IPFS, you must have an instance of IPFS running or be connected to the global IPFS network when registering a schema and definition.)
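Putting the pieces together, an end-to-end registration might look roughly like the sketch below; only the /schemas endpoint is documented above, so the credential definition route and exact payload fields are assumptions (check the Swagger UI for the generated API):

```typescript
import _schema from './schema/schemaAttributes.json'

// Hypothetical end-to-end registration sketch. Only the /schemas endpoint is
// documented above; the credential definition route and exact payloads are
// assumptions - check /swagger for the generated API.
const baseUrl = 'http://localhost:3000'

async function registerSchemaAndDefinition(issuerId: string) {
  // 1. Register the schema; the response contains an 'id' (e.g. an ipfs:// URI)
  const schemaRes = await fetch(`${baseUrl}/schemas`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ ..._schema, issuerId }),
  })
  const { id: schemaId } = await schemaRes.json()

  // 2. Use the returned schema id to register a credential definition
  const credDefRes = await fetch(`${baseUrl}/credential-definitions`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ tag: 'myTestDefinition', schemaId, issuerId }),
  })
  return credDefRes.json()
}
```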
For a comprehensive guide on the supported credential formats (AnonCreds and W3C Verifiable Credentials), including detailed API payloads for issuance and verification, see the Credentials and Proofs Guide.
When responding to proof requests, you can explicitly select which credentials to use. This is useful when multiple credentials satisfy the request criteria.
See Explicit Credential Selection Documentation for details on the API and usage.
