
Public Key Infrastructure for Data Sharing: A Technical Case Study

24 min read
Wojciech Kotłowski
Developer Advocate

Public Key Infrastructure (PKI) is the cryptographic foundation that data sharing ecosystems depend on for three guarantees: authenticated participant identities, transport security, and message integrity. Its original design assumed a simpler world: one CA, one trust anchor, a handful of relying parties. Yet whether you are an enterprise sharing data with a few trusted partners or a regulated ecosystem with hundreds of participants, PKI still delivers those guarantees. This case study shows how it evolved, where it remains insufficient on its own, and how federation and modern cryptography fill the gaps.

The classic Public Key Infrastructure trust model

Before looking at modern extensions, it is worth understanding the moving parts of a traditional PKI deployment. The diagram below illustrates how trust flows from a root Certificate Authority through the infrastructure components that issue, validate, and consume certificates in a data sharing ecosystem.

The root CA sits offline and signs intermediate CA certificates. A Registration Authority handles identity vetting before the intermediate CA issues an end-entity certificate. When two participants connect via mTLS, each presents its certificate, and the other side validates the chain and checks revocation through CRLs or OCSP. The sections below break down each layer of this model.

Root CAs, intermediate CAs, and the end-entity certificate

PKI organizes trust into a hierarchy anchored by a root Certificate Authority. In the simplest case, the root CA directly issues end-entity certificates to participants. Larger or more complex deployments add one or more intermediate CAs—the root delegates issuance authority to intermediates, which then issue end-entity certificates. When a relying party receives a certificate, it walks backward through the chain until it reaches a root it already trusts.

RFC 5280 specifies how this validation works: check signatures, verify validity periods, enforce constraints like Basic Constraints and Key Usage, and confirm the chain terminates at a configured trust anchor. The model is elegant because it distributes trust without requiring every party to know every other party in advance.
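To make those RFC 5280 steps concrete, here is a minimal sketch of the core checks using Python's `cryptography` package (version 42 or later). The file names are illustrative, and real validators also handle name constraints, policy processing, and revocation.

```python
# Minimal sketch of RFC 5280-style chain checks with the `cryptography`
# package (>= 42). Real validators also process policies, name constraints,
# and revocation; file names here are illustrative.
from datetime import datetime, timezone

from cryptography import x509

def load(path: str) -> x509.Certificate:
    with open(path, "rb") as f:
        return x509.load_pem_x509_certificate(f.read())

leaf, intermediate, root = load("leaf.pem"), load("intermediate.pem"), load("root.pem")
now = datetime.now(timezone.utc)

for cert, issuer in [(leaf, intermediate), (intermediate, root), (root, root)]:
    # Validity period check
    assert cert.not_valid_before_utc <= now <= cert.not_valid_after_utc
    # Issuer-name match and signature verification in one call
    cert.verify_directly_issued_by(issuer)

# Basic Constraints: only certificates flagged as CAs may issue
for ca_cert in (intermediate, root):
    bc = ca_cert.extensions.get_extension_for_class(x509.BasicConstraints).value
    assert bc.ca
```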

Why identity is not the same as authorization

PKI's elegance comes with a limitation. A certificate proves that this public key belongs to that subject. It does not prove what the subject is allowed to do.

In data sharing, identity is only the starting point. Any organization sharing data needs to know:

  • Is this participant currently eligible to access this API?

  • Which scopes and data types can it request?

  • Has the end user consented?

Certificates cannot express these business semantics. Policy OIDs and name constraints help, but they are static and coarse. That is why OAuth and OpenID Connect authorization servers become essential. They interpret policy, apply consent, issue access tokens with scoped permissions, and enforce constraints that PKI alone cannot model. PKI provides the cryptographic foundation; authorization provides the decision-making layer.
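As a sketch of how that decision-making layer surfaces at the resource server, the snippet below checks the scopes carried in a validated access token before serving data. The claim names, audience, and scope string are illustrative assumptions rather than a specific ecosystem's profile.

```python
# Hedged sketch: the certificate proved *who* the caller is; the access
# token carries *what* it may do. Claim names and scopes are illustrative.
import jwt  # PyJWT

def authorize(access_token: str, issuer_public_key, required_scope: str) -> dict:
    claims = jwt.decode(
        access_token,
        key=issuer_public_key,            # resolved from the issuer's JWKS
        algorithms=["PS256"],
        audience="https://api.example.org",
    )
    granted = set(claims.get("scope", "").split())
    if required_scope not in granted:
        raise PermissionError(f"token lacks scope '{required_scope}'")
    return claims

# e.g. authorize(token, issuer_key, "accounts:read") before returning data
```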

Revocation at scale (CRLs and OCSP)

Certificates can become invalid before they expire—keys get compromised, participants leave the ecosystem, roles change. Revocation is the safety valve.

Certificate Revocation Lists (CRLs) are signed lists of revoked serial numbers. They work, but they are periodic: a relying party might cache a CRL for hours or days, creating a window where a revoked certificate is still accepted. OCSP (Online Certificate Status Protocol) improves freshness by enabling real-time queries, but it adds latency, availability risk, and operational overhead.

At ecosystem scale—thousands of participants, frequent rotations—revocation infrastructure can become a bottleneck. It remains necessary, but it is not sufficient for the dynamic, policy-rich trust that modern data sharing demands.
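A minimal sketch of the CRL path with the `cryptography` package follows. The file paths are illustrative, and a production check would also verify the CRL's own signature and freshness before trusting its contents.

```python
# Hedged sketch: check a partner certificate's serial number against a
# cached CRL. Paths are illustrative; verify the CRL's signature and
# next_update in a real deployment before trusting what it says.
from cryptography import x509

with open("ecosystem-ca.crl", "rb") as f:
    crl = x509.load_der_x509_crl(f.read())

with open("partner-cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

if crl.get_revoked_certificate_by_serial_number(cert.serial_number) is not None:
    raise ValueError("certificate has been revoked")
```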

Cryptographic keys and algorithm choices for data sharing

Certificates are only part of the story. Behind every certificate is a key pair, and the security of your data sharing setup depends on how those keys are generated, stored, used, and rotated. This section covers how certificates and keys relate, how to choose algorithms, and how to align cryptographic choices across PKI and application-layer protocols like OAuth and JOSE.

How certificates and keys work together

A certificate binds a public key to an identity. The corresponding private key never leaves the holder's control—it is used to prove ownership of the certificate by signing challenges or establishing encrypted channels.

In practice, this means:

  • mTLS authentication: The client holds a private key; the certificate (containing the public key) is sent to the server. The client proves possession of the private key by signing the TLS handshake transcript, and the server validates both the certificate chain and that signature.

  • JWS signing: The signer uses a private key to sign a JWT. Relying parties fetch the public key (often from a JWKS endpoint) to verify the signature.

  • JWE encryption: The sender encrypts content using the recipient's public key (or a key derived from it). Only the recipient's private key can decrypt.

The certificate adds trust context—who issued it, what constraints apply, when it expires—but the cryptographic operations always happen at the key level.
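The JWS flow in particular maps onto a few lines of code. The sketch below uses PyJWT; the issuer URL, JWKS location, kid, and claims are illustrative assumptions.

```python
# Hedged sketch of the JWS pattern above: sign with a local private key,
# verify via the public key published at the signer's JWKS endpoint.
import jwt
from jwt import PyJWKClient
from cryptography.hazmat.primitives.asymmetric import ec

# Signer side: the private key stays under the participant's control
# (typically in an HSM/KMS rather than generated in application code).
signing_key = ec.generate_private_key(ec.SECP256R1())
signed = jwt.encode(
    {"iss": "https://participant.example", "request_id": "req-123"},
    signing_key,
    algorithm="ES256",
    headers={"kid": "sig-2025-01"},
)

# Relying party side: resolve the public key by kid from the signer's
# published JWKS (illustrative URL) and verify the signature.
jwks_client = PyJWKClient("https://participant.example/.well-known/jwks.json")
public_key = jwks_client.get_signing_key_from_jwt(signed).key
claims = jwt.decode(signed, public_key, algorithms=["ES256"])
```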

Key types and usage separation

Cryptographic keys serve different purposes:

  • Signing keys create and verify digital signatures (JWS, certificate signatures).

  • Key agreement keys negotiate shared secrets (ECDH for JWE key wrapping).

  • Encryption keys protect data confidentiality (RSA-OAEP key transport, AES payload encryption).

Using the same key for multiple purposes—so-called dual-use—increases risk. If a signing key is also used for decryption, a compromise exposes both authentication and confidentiality. Separation limits blast radius and simplifies rotation.

At the certificate layer, keyUsage and extendedKeyUsage extensions should reflect intent. At the JOSE layer, the JWK use and key_ops parameters express the same discipline.
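A minimal sketch of that separation in Python: one key pair per purpose, with the intended usage recorded alongside each. The naming scheme and metadata shape are illustrative.

```python
# Hedged sketch: distinct key pairs per purpose, with the intended usage
# recorded next to each. Names and structure are illustrative; in a JWKS
# the JOSE keys would carry these values as `use` / `key_ops`.
from cryptography.hazmat.primitives.asymmetric import ec, rsa

key_inventory = {
    "mtls-transport": {
        "key": ec.generate_private_key(ec.SECP256R1()),
        "purpose": "TLS client authentication (certificate keyUsage: digitalSignature)",
    },
    "jws-signing": {
        "key": ec.generate_private_key(ec.SECP256R1()),
        "use": "sig", "key_ops": ["sign"],
    },
    "jwe-encryption": {
        "key": rsa.generate_private_key(public_exponent=65537, key_size=3072),
        "use": "enc", "key_ops": ["unwrapKey", "decrypt"],
    },
}
```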

Raidiam Connect applies this separation by default—issuing distinct key pairs for mTLS transport, JWS signing, and JWE encryption, each scoped to a single purpose.

Algorithms and trade-offs (by use case)

The JOSE algorithm identifiers used in OAuth and OIDC are registered in JWA (RFC 7518). The most common choices in data sharing break down by use case:

Signatures:

  • RSA-PSS (PS256, PS384, PS512): Widely supported, well understood, requires larger keys.

  • ECDSA (ES256, ES384): Strong security with smaller keys and faster operations.

  • EdDSA (Ed25519): Compact signatures, deterministic, excellent performance, growing adoption.

Key agreement:

  • ECDH with P-256 or P-384: Mature, widely deployed.

  • X25519: Modern curve, excellent performance, increasingly supported.

Encryption:

  • RSA-OAEP: Safe key transport; avoid legacy RSA1_5.

  • AES-GCM (A256GCM): AEAD mode providing confidentiality and integrity; requires unique IVs.

Interoperability constraints matter. Some ecosystems mandate conservative suites for legacy compatibility; others require modern curves for performance and agility.
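As a small illustration of the signature trade-offs, the sketch below signs the same payload under PS256 and ES256 with PyJWT; the payload is illustrative.

```python
# Hedged sketch: the same payload signed under two common suites. ES256
# tokens come out noticeably smaller than PS256 ones at comparable strength.
import jwt
from cryptography.hazmat.primitives.asymmetric import ec, rsa

payload = {"iss": "https://participant.example", "scope": "accounts:read"}

rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
ec_key = ec.generate_private_key(ec.SECP256R1())

ps256_token = jwt.encode(payload, rsa_key, algorithm="PS256")
es256_token = jwt.encode(payload, ec_key, algorithm="ES256")

print(len(ps256_token), len(es256_token))  # the PS256 token is much longer
```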

Security levels and key sizes

Security strength should be consistent across the stack. Rough equivalences:

Algorithm      Key size     Approximate symmetric strength
RSA            2048 bits    112 bits
RSA            3072 bits    128 bits
ECDSA P-256    256 bits     128 bits
Ed25519        256 bits     128 bits

NIST SP 800-57 provides guidance on acceptable lifetimes for each strength level. Pairing a strong signature algorithm with weak encryption undermines the overall security posture.

Operational security

High-assurance ecosystems treat key custody as a first-class control:

  • Store signing keys in HSMs or cloud KMS with strict access policies.

  • Define rotation cadences aligned with key strength and risk tolerance.

  • Publish JWKS with overlap windows so relying parties can validate both old and new keys during rollover (sketched after this list). For compliance-driven rotation requirements, see Key Rotation for PCI DSS.

  • Scope keys by environment and role to contain compromise.
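The overlap window mentioned above boils down to publishing both keys at once. A hedged sketch of what the JWKS looks like mid-rollover; the kid values, dates, and coordinates are placeholders.

```python
# Hedged sketch of a JWKS published during rollover: the outgoing and the
# incoming signing keys are listed together so relying parties can verify
# tokens signed with either. Kid values and coordinates are placeholders.
import json

jwks_during_rollover = {
    "keys": [
        {"kty": "EC", "crv": "P-256", "kid": "sig-2024-07", "use": "sig",
         "x": "<base64url-x>", "y": "<base64url-y>"},   # outgoing key
        {"kty": "EC", "crv": "P-256", "kid": "sig-2025-01", "use": "sig",
         "x": "<base64url-x>", "y": "<base64url-y>"},   # incoming key
    ]
}

print(json.dumps(jwks_during_rollover, indent=2))
# New tokens carry kid "sig-2025-01"; tokens signed before the cutover still
# verify against "sig-2024-07" until the overlap window closes.
```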

Protocol alignment

JOSE algorithm constraints in OAuth and OIDC profiles should match the algorithms supported by your certificate profiles. If the ecosystem mandates PS256 for JWT signatures, participants need RSA key pairs and tooling capable of RSA-PSS signing. Similarly, mTLS client authentication requires specific keyUsage (digitalSignature) and extendedKeyUsage (id-kp-clientAuth) values.
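A hedged sketch of requesting those certificate-level values with the `cryptography` package: the subject names are illustrative, and the issuing CA ultimately decides which extensions appear in the issued certificate.

```python
# Hedged sketch: a CSR requesting the keyUsage and extendedKeyUsage values
# that mTLS client authentication expects. Subject values are illustrative.
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import ExtendedKeyUsageOID, NameOID

key = ec.generate_private_key(ec.SECP256R1())

csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Example Participant Ltd"),
        x509.NameAttribute(NameOID.COMMON_NAME, "client.participant.example"),
    ]))
    .add_extension(
        x509.KeyUsage(
            digital_signature=True, content_commitment=False,
            key_encipherment=False, data_encipherment=False,
            key_agreement=False, key_cert_sign=False, crl_sign=False,
            encipher_only=False, decipher_only=False,
        ),
        critical=True,
    )
    .add_extension(
        x509.ExtendedKeyUsage([ExtendedKeyUsageOID.CLIENT_AUTH]),
        critical=False,
    )
    .sign(key, hashes.SHA256())
)
```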

mTLS for data sharing

Mutual TLS (mTLS) is the default transport security mechanism for regulated data sharing. It provides encryption in transit and, critically, client authentication. The diagram below shows how mTLS adds a client certificate exchange to the standard TLS handshake, and how the resulting certificate-bound access token flows through to the resource server.



Channel security vs. client authentication

Standard TLS encrypts the channel and authenticates the server. mTLS goes further: the client also presents a certificate, and the server validates it against a trust anchor. This binds the connection to a known participant before any application-layer exchange occurs.

In ecosystem terms, mTLS is the first gate. A connection is accepted only if the certificate chains to an ecosystem-approved CA and meets identity constraints (subject name patterns, policy OIDs).
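A minimal sketch of that first gate using Python's standard ssl module. The paths are illustrative, and production gateways usually terminate mTLS in dedicated infrastructure rather than application code.

```python
# Hedged sketch: an mTLS-enforcing server context. The handshake is refused
# unless the client presents a certificate that chains to the ecosystem CA
# bundle. Paths are illustrative.
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.load_cert_chain(certfile="server.pem", keyfile="server-key.pem")
ctx.load_verify_locations(cafile="ecosystem-ca-bundle.pem")
ctx.verify_mode = ssl.CERT_REQUIRED   # this is what makes the TLS "mutual"

# Client side (e.g. with requests): present the transport certificate and
# validate the server against the same ecosystem trust anchor.
# requests.get("https://api.example.org/accounts",
#              cert=("client.pem", "client-key.pem"),
#              verify="ecosystem-ca-bundle.pem")
```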

mTLS as a best practice and a FAPI requirement

mTLS client authentication is widely regarded as a best practice for any organization sharing data over APIs. It provides a stronger authentication mechanism than client secrets because the private key never leaves the client and cannot be intercepted in transit.

For organizations adopting the FAPI (Financial-grade API) security profile, mTLS is not optional—it is mandatory. FAPI requires either mTLS or DPoP for sender-constraining access tokens, and mTLS client authentication (tls_client_auth or self_signed_tls_client_auth) is the most widely adopted method. This applies beyond financial services: any sector that adopts FAPI-level security—health, energy, government—inherits the same mTLS requirement.

Even outside FAPI, mTLS provides concrete benefits:

  • Sender-constrained tokens: Access tokens are bound to the client's certificate via the cnf claim, preventing token theft and replay; the binding check is sketched after this list.

  • No shared secrets: Unlike client_secret_post or client_secret_basic, there is no credential to leak or rotate through insecure channels.

  • Defense in depth: mTLS authenticates at the transport layer before the application layer even processes the request, reducing the attack surface.
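A hedged sketch of the binding check from the first bullet: the resource server recomputes the certificate thumbprint defined in RFC 8705 and compares it with the cnf value carried in the access token.

```python
# Hedged sketch of the RFC 8705 binding check: compare the SHA-256
# thumbprint of the presented client certificate with the token's
# cnf/x5t#S256 value before serving the request.
import base64

from cryptography import x509
from cryptography.hazmat.primitives import hashes

def cert_thumbprint(cert: x509.Certificate) -> str:
    digest = cert.fingerprint(hashes.SHA256())   # SHA-256 over the DER encoding
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

def enforce_binding(token_claims: dict, presented_cert: x509.Certificate) -> None:
    expected = token_claims.get("cnf", {}).get("x5t#S256")
    if expected != cert_thumbprint(presented_cert):
        raise PermissionError("access token is not bound to this certificate")
```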

Operational considerations

mTLS requires disciplined operations regardless of scale:

  • Issuance pipelines must be aligned with onboarding and eligibility workflows. A participant cannot connect until it has a valid certificate.

  • Rotation windows must allow overlapping certificates so partners can roll keys without downtime.

  • Gateway enforcement should verify keyUsage, extendedKeyUsage, and policy OIDs—not just chain validity—to prevent mis-issued certificates from being accepted.

  • Certificate-bound token validation must be enforced end-to-end: the authorization server binds the token at issuance, and the resource server verifies the binding at every request.

Raidiam Connect's integrated PKI addresses these concerns in a single platform: a Certificate Authority, Registration Authority, certificate validation service, and public keystore handle issuance, rotation, revocation, and distribution, while participants can manage certificates programmatically through APIs.

Modern trust models: from PKI alone to PKI + federation

Establishing trust with PKI and beyond

Traditionally, organizations establish trust in data sharing through PKI alone. A trust anchor distributes root certificates, participants receive end-entity certificates, and relying parties validate chains. This works, but it carries inherent scaling friction: onboarding new participants means distributing certificates, maintaining revocation lists, and managing bilateral trust configurations.

OpenID Federation offers a more scalable approach to trust establishment. Instead of relying solely on certificate chains, participants publish signed metadata (entity statements) that describe their capabilities, roles, and keys. Trust anchors and intermediates sign subordinate statements, creating verifiable trust chains that carry richer semantics than X.509 certificates alone.

For a deeper look at how federation works, see OpenID Federation — The Missing Link for Scalable Trust and the OpenID Federation Final Release overview.

Federation does not replace PKI — it complements it

Adopting federation does not mean abandoning PKI. On the contrary, the strongest data sharing architectures use both:

  • Federation for trust establishment: Discovering participants, verifying their roles and eligibility, and resolving their metadata and keys at scale.

  • PKI for security enforcement: Using X.509 certificates and keys to enable mTLS client authentication, sender-constrained access tokens, and message-level signing and encryption.

Federation answers the questions "who is this participant?" and "what are they allowed to do?" PKI answers "how do we cryptographically enforce that trust at every layer of the stack?"

In practice, this means a participant's federation entity statement references the same keys and certificates used for mTLS and JOSE operations. A relying party resolves the entity statement to learn what the participant can do, then validates the certificate chain and enforces mTLS, token binding, and message integrity using PKI-backed keys.
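A deliberately simplified sketch of that two-step flow: fetch and inspect a participant's entity statement, then fall through to PKI enforcement. Real resolvers verify the statement's signature against a full trust chain; the endpoint path follows the OpenID Federation spec, while the claim handling here is illustrative only.

```python
# Hedged sketch of the dual-layer flow: resolve a participant's federation
# entity statement for metadata and keys, then enforce transport security
# with the PKI-backed certificate. Trust-chain validation is reduced to a
# single self-issued statement here; real resolvers walk the full chain and
# verify every signature, never skip verification as this sketch does.
import jwt
import requests

entity_id = "https://participant.example"
resp = requests.get(f"{entity_id}/.well-known/openid-federation", timeout=10)

# Entity statements are signed JWTs; a production resolver verifies the
# signature against keys anchored in the trust chain.
statement = jwt.decode(resp.text, options={"verify_signature": False})

metadata = statement.get("metadata", {})
jwks = statement.get("jwks", {})
# ...then: validate the X.509 chain, open the mTLS connection, and check
# that the keys referenced here match the certificates presented there.
```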

Why both layers matter

Without federation, PKI-only trust models struggle with dynamic onboarding, role changes, and rich policy semantics. Without PKI, federation metadata provides context but lacks the cryptographic enforcement for transport security, client authentication, and token binding.

Together, they create a trust model where:

  • Trust discovery and policy are handled by federation metadata and trust registries.

  • Transport security is enforced by mTLS with X.509 certificates.

  • Client authentication uses mTLS client auth to bind identity at the transport layer.

  • Access tokens are sender-constrained via the cnf claim, preventing theft and replay.

  • Request and response integrity is protected through JWS-signed objects and JWE-encrypted payloads.

Raidiam Connect's Trust Anchor is designed around this dual-layer model, providing both an OpenID Federation infrastructure and a PKI with X.509 certificates—either together or independently, depending on what the ecosystem needs.

Operational governance and ecosystem interoperability

Governance and auditability

Cryptography alone does not create trust—governance does. The algorithms and certificates discussed in this article only work if there are clear rules about who can issue certificates, how keys are managed, and what happens when something goes wrong.

At the CA level, governance means defining who is authorized to operate a root or intermediate CA, what controls protect the signing keys, and how issuance decisions are audited. In most regulated ecosystems, CA operators must meet specific compliance standards—whether that is WebTrust for CAs, ETSI EN 319 411, or an ecosystem-specific accreditation. These standards require documented key ceremonies, multi-person controls for root key access, and regular third-party audits.

Incident response is equally important. When a key is compromised or a participant violates policy, the ecosystem needs a defined escalation path: who is notified, how quickly must revocation happen, what evidence must be preserved, and how is the incident communicated to relying parties. Without these playbooks, a single compromise can erode trust across the entire network.

Audit trails tie everything together. Every certificate issuance, revocation, key rotation, and metadata update should be logged in a way that is tamper-evident and accessible to governance bodies. This is not just a compliance checkbox—it is how ecosystems detect anomalies, investigate incidents, and demonstrate trustworthiness to regulators and participants.

Interoperability profiles

One of the most common causes of friction in multi-party data sharing is configuration sprawl. Without agreed profiles, each participant may choose different algorithms, key sizes, certificate formats, and rotation intervals. The result is a combinatorial explosion of interoperability issues that slows onboarding and increases the cost of integration.

Interoperability profiles solve this by publishing a constrained set of choices that all participants must support. A well-designed profile typically includes:

  • Algorithm suites: A mandatory baseline (e.g., PS256 for signatures, A256GCM for content encryption) plus optional algorithms for forward compatibility.

  • Key size minimums: For example, RSA 2048-bit minimum with a recommended migration path to 3072-bit or elliptic curve keys.

  • Certificate profile templates: Standardized Subject and SAN patterns, required policy OIDs, and explicit keyUsage/extendedKeyUsage values.

  • Rotation intervals: Maximum certificate lifetimes and recommended JWKS rollover windows.

These profiles make conformance testing practical. Instead of testing against every possible configuration, participants test against the profile. Ecosystem operators can automate conformance checks during onboarding, catching misconfigurations before they cause runtime failures.
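A hedged sketch of such an automated check follows; the profile contents, field names, and findings are illustrative, not a real ecosystem specification.

```python
# Hedged sketch of an onboarding-time conformance check against a published
# interoperability profile. All values are illustrative.
PROFILE_2025_1 = {
    "signing_algs": {"PS256", "ES256"},
    "content_encryption": {"A256GCM"},
    "min_rsa_bits": 2048,
    "max_cert_lifetime_days": 398,
}

def check_participant(config: dict) -> list[str]:
    findings = []
    if config["signing_alg"] not in PROFILE_2025_1["signing_algs"]:
        findings.append(f"unsupported signing alg {config['signing_alg']}")
    if config["enc"] not in PROFILE_2025_1["content_encryption"]:
        findings.append(f"unsupported content encryption {config['enc']}")
    if config.get("rsa_bits") and config["rsa_bits"] < PROFILE_2025_1["min_rsa_bits"]:
        findings.append("RSA key below profile minimum")
    if config["cert_lifetime_days"] > PROFILE_2025_1["max_cert_lifetime_days"]:
        findings.append("certificate lifetime exceeds profile maximum")
    return findings

print(check_participant({"signing_alg": "RS256", "enc": "A256GCM",
                         "rsa_bits": 2048, "cert_lifetime_days": 365}))
# ['unsupported signing alg RS256']
```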

Profiles also future-proof the ecosystem. By including a version or date in the profile specification, ecosystems can publish updated profiles that introduce stronger algorithms or deprecate weaker ones, giving participants a clear migration timeline rather than ad-hoc, uncoordinated upgrades.

For a practical walkthrough of defining these choices—access model, trust scheme, registration framework—see the Modelling Frameworks guide.

Post-quantum cryptography and the future of PKI

Every asymmetric algorithm discussed in this article—RSA, ECDSA, EdDSA, ECDH, X25519—relies on the computational hardness of integer factorization or discrete logarithm problems. A sufficiently powerful quantum computer running Shor's algorithm could solve these problems in polynomial time, breaking the mathematical assumptions that make today's PKI secure.

For data sharing ecosystems, the threat is not abstract. It manifests in two concrete ways:

Signature forgery. If RSA or ECDSA signatures can be forged, an attacker could create fraudulent certificates that chain to a legitimate root CA, impersonate participants, or tamper with signed JWTs. Every layer of trust discussed in this article—certificate validation, mTLS client authentication, JWS-signed request objects—depends on signature integrity.

Harvest now, decrypt later. Encrypted data exchanged today using ECDH or RSA key transport could be intercepted and stored by an adversary. Once a quantum computer becomes available, that adversary could derive the session keys and decrypt the archived traffic. For ecosystems handling financial records, health data, or government information, the confidentiality window extends far beyond the lifetime of the keys used to protect the data.

The timeline remains uncertain—estimates range from ten to thirty years—but the cryptographic community treats it as a when, not an if. Root CA certificates with twenty-year validity periods issued today may still be in service when quantum-capable hardware arrives.

NIST post-quantum standards

In August 2024, NIST released three finalized PQC standards built on mathematical problems believed to resist both classical and quantum attacks:

  • FIPS 203 — ML-KEM (Module-Lattice-Based Key-Encapsulation Mechanism): Replaces ECDH and RSA key transport for establishing shared secrets. ML-KEM operates on structured lattice problems and provides key encapsulation at three security levels (ML-KEM-512, ML-KEM-768, ML-KEM-1024). It is the primary candidate for protecting TLS key exchanges and JWE key agreement against quantum adversaries.

  • FIPS 204 — ML-DSA (Module-Lattice-Based Digital Signature Algorithm): Replaces RSA-PSS, ECDSA, and EdDSA for digital signatures. ML-DSA is the primary candidate for signing certificates, JWTs, and federation entity statements. It offers three parameter sets (ML-DSA-44, ML-DSA-65, ML-DSA-87) with increasing security strength.

  • FIPS 205 — SLH-DSA (Stateless Hash-Based Digital Signature Algorithm): A conservative, hash-based signature scheme that relies only on the security of hash functions rather than lattice assumptions. SLH-DSA provides a diversification option—if lattice-based schemes are found vulnerable, SLH-DSA signatures remain secure. The trade-off is larger signatures and slower verification compared to ML-DSA.

NIST has also selected the Falcon digital signature algorithm and the HQC key encapsulation mechanism for ongoing standardization as additional options.

Impact on PKI trust chains

Post-quantum migration is not a simple algorithm swap. The new algorithms produce significantly larger keys, signatures, and ciphertexts than their classical counterparts. An ML-DSA-65 public key is roughly 1,952 bytes compared to 32 bytes for Ed25519 or 256 bytes for RSA-2048. ML-KEM-768 ciphertexts are 1,088 bytes. These size increases have real consequences:

  • Certificate size: X.509 certificates carrying PQC public keys and signatures will be substantially larger, increasing TLS handshake sizes and potentially exceeding MTU limits in constrained networks.

  • Trust chain overhead: In a typical chain of root → intermediate → end-entity, every certificate carries a PQC signature and a PQC public key. The cumulative size can reach several kilobytes where classical chains measure in the hundreds of bytes; the back-of-envelope sketch after this list shows the arithmetic.

  • JOSE payload growth: JWS-signed JWTs and JWE-encrypted payloads will grow, affecting token sizes, HTTP header limits, and storage requirements.

  • Performance: Lattice-based operations are generally fast, but hash-based signatures (SLH-DSA) are significantly slower to generate and verify, which matters for high-throughput API environments.
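The back-of-envelope sketch below puts rough numbers on the trust chain overhead bullet, using published ML-DSA-65 (FIPS 204) and Ed25519 sizes. The figures are approximate and ignore ASN.1 and extension overhead.

```python
# Back-of-envelope sketch of the chain-overhead point above. Sizes are
# approximate and ignore ASN.1/extension overhead.
ED25519 = {"public_key": 32, "signature": 64}
ML_DSA_65 = {"public_key": 1952, "signature": 3309}

def chain_bytes(alg: dict, depth: int = 3) -> int:
    # root -> intermediate -> end-entity: each certificate carries one
    # public key and one signature at this security level
    return depth * (alg["public_key"] + alg["signature"])

print(chain_bytes(ED25519))    # ~288 bytes of key/signature material
print(chain_bytes(ML_DSA_65))  # ~15,783 bytes — roughly 15 kB per chain
```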

Migration planning and hybrid approaches

Most ecosystems will not switch to PQC overnight. The transition will be incremental, and the dominant strategy is hybrid cryptography—combining a classical algorithm with a PQC algorithm so that the system remains secure as long as either algorithm holds.

Hybrid certificates embed both a classical and a PQC public key, with both corresponding signatures from the issuing CA. A relying party that supports PQC validates both signatures; a relying party that does not yet support PQC can fall back to the classical signature. This approach provides backward compatibility during the transition and protects against the possibility that one of the two algorithms is broken.

Composite signatures formalize this pattern at the protocol level. A single signature object contains two signatures over the same data, using different algorithms. The signature is valid only if both component signatures verify, preventing downgrade attacks where an adversary strips the PQC signature and presents only the classical one.
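A hedged sketch of that AND-composition rule; the verifier callables are passed in rather than tied to a specific PQC library, since stable Python APIs for ML-DSA are still emerging.

```python
# Hedged sketch of composite verification: one signature object bundles a
# classical and a PQC signature over the same bytes, and verification
# succeeds only if *both* verify.
from dataclasses import dataclass
from typing import Callable

Verifier = Callable[[bytes, bytes], bool]   # (message, signature) -> bool

@dataclass
class CompositeSignature:
    classical_sig: bytes    # e.g. ECDSA P-256
    pqc_sig: bytes          # e.g. ML-DSA-65

def verify_composite(message: bytes, sig: CompositeSignature,
                     verify_classical: Verifier, verify_pqc: Verifier) -> bool:
    # AND-composition: stripping either component invalidates the whole
    # signature, which is what prevents downgrade attacks.
    return verify_classical(message, sig.classical_sig) and verify_pqc(message, sig.pqc_sig)
```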

Crypto agility is the broader architectural principle that makes migration feasible. Systems designed with crypto agility can negotiate algorithms dynamically, accept multiple key types in JWKS endpoints, and upgrade certificate profiles without rewriting application logic. The interoperability profiles discussed earlier in this article are the natural vehicle for managing this transition: publish an updated profile that adds PQC algorithm suites, set a migration timeline, and retire classical-only profiles on schedule.

Under the transition timeline proposed in NIST IR 8547, NIST plans to deprecate quantum-vulnerable algorithms by 2030 and remove them from standards entirely by 2035. Organizations that have not begun planning will face compressed timelines and increased risk.

What ecosystem operators should do now

Post-quantum migration is a governance and architecture problem as much as a cryptographic one. Practical steps for ecosystem operators include:

  • Inventory cryptographic dependencies: Identify every place where RSA, ECDSA, ECDH, or related algorithms are used—certificates, TLS configurations, JOSE signing and encryption, key storage, and HSM firmware.

  • Assess HSM and KMS readiness: Not all hardware supports PQC key types today. Engage vendors to understand firmware upgrade paths and timelines for ML-KEM and ML-DSA support.

  • Prototype hybrid deployments: Test hybrid certificates and composite signatures in non-production environments. Measure the impact of larger keys and signatures on handshake latency, payload sizes, and gateway throughput.

  • Update interoperability profiles: Add PQC algorithm suites as optional in the near term, with a roadmap to make them mandatory. Include version identifiers so participants can signal PQC readiness.

  • Track standards evolution: Beyond the initial FIPS publications, watch for updates to TLS, JOSE, and X.509 standards that integrate PQC algorithms. The IETF is actively developing PQC extensions for TLS 1.3 and hybrid key exchange mechanisms.

PKI and federation with Raidiam Connect

Raidiam Connect is a multi-trust platform that operationalises the architecture described in this case study. Ecosystems choose the trust model that fits their requirements: PKI with X.509 certificates only, OpenID Federation only, or both layers working together.

On the PKI side, Raidiam provides an integrated Public Key Infrastructure with a Certificate Authority, Registration Authority, certificate validation service, and public keystore. Participants receive transport certificates for mTLS, signing keys for JWS, and encryption keys for JWE—each issued with proper usage separation and manageable through the platform's APIs.

When ecosystems adopt federation, Raidiam acts as a Trust Anchor for OpenID Federation, issuing entity statements, enforcing metadata policies, and enabling automated trust chain resolution. The PKI does not disappear—it continues to supply the certificates and keys that enforce security at the transport and message layers, while federation handles trust discovery, participant metadata, and policy propagation.

This flexibility means an ecosystem can start with PKI-only trust and introduce federation later without re-architecting its security infrastructure. For teams ready to put these concepts into practice, the Onboard Your First Partner in 30 Days guide walks through the full lifecycle—from trust framework setup through certificate issuance to a working mTLS connection.

Conclusion

PKI is not legacy infrastructure—it is the enforcement layer that makes secure data sharing work. But as this case study has shown, it reaches its full potential only when combined with the right cryptographic choices, mTLS for transport-level authentication, federation for scalable trust discovery, and governance that turns ad-hoc configurations into auditable, testable profiles.

Security is an architecture, not a single technology decision. Start with a solid PKI foundation, layer federation for discovery and policy, define interoperability profiles, and plan now for the post-quantum transition ahead.