ReplayState Trust Center

Methods, Metrics, and Controls

This page is written for technical diligence and procurement review. It defines what is measured, how claims are verified, where model limits apply, and which controls are in place versus planned.

Validation Date: March 5, 2026 (UTC)
Scope: Artifact Snapshot (Binary + Static Export)
Audience: Helius, Validators, Market Makers, VCs

Trust Metric Framework

| Metric Family | Definition | Control Objective | Snapshot Status |
| --- | --- | --- | --- |
| Access Integrity | Unauthorized request rejection rate and key validation behavior. | Prevent unauthorized simulation and data exfiltration. | Verified |
| Evidence Integrity | Hash-valid packet exports for simulation artifacts. | Reproducible audit chain from request to output. | Verified |
| Model Transparency | Documented assumptions, confidence bounds, and known limits. | Avoid overclaiming and reduce decision misuse. | Verified |
| Operational Reliability | Queue completion behavior, retry semantics, and stall recovery. See Operational Reliability Controls. | Consistent production-grade job completion. | Implemented baseline controls (watchdog + restart + deploy gates) |
| Enterprise Governance | RBAC, release controls, incident runbooks, and SLO dashboards. See Enterprise Governance Controls. | Procurement-ready security and compliance posture. | Implemented baseline governance pack (runbooks + release controls + SLO cadence) |
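The retry semantics named in the Operational Reliability row can be sketched in shell. This is an illustration only: `run_job` is a stand-in that fails twice before succeeding, and the backoff timing is an assumption, not the production watchdog's behavior.

```shell
# Hypothetical retry-until-success loop with bounded attempts and linear backoff.
# run_job is a mock job that fails on its first two invocations.
attempts=0
run_job() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]
}

max_retries=5
i=0
until run_job; do
  i=$((i + 1))
  if [ "$i" -ge "$max_retries" ]; then
    echo "job failed after $max_retries retries" >&2
    exit 1
  fi
  sleep "$i"   # placeholder backoff; real stall-recovery timing is an assumption
done
echo "job completed after $attempts attempts"
```

A production watchdog would additionally detect stalled (not just failed) jobs and restart them, per the deploy-gate controls listed above.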

Verification Commands

# Run the institutional smoke checks
./scripts/institutional_smoke.sh
# Generate an evidence packet (500 trials of the archival_clean scenario)
./scripts/generate_evidence_packet.sh --scenario archival_clean --trials 500
# Verify the packet's hash manifest
cd evidence-packets/<run-id>
sha256sum -c checksums.sha256
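The final checksum step can be exercised end to end on synthetic data. This is a minimal sketch; the file names below are placeholders, not the real evidence-packet layout.

```shell
# Illustrative only: build a throwaway packet directory and verify its manifest,
# mirroring the `sha256sum -c` step used for real evidence packets.
workdir=$(mktemp -d)
cd "$workdir"
printf 'simulated output\n' > results.json
sha256sum results.json > checksums.sha256
check_output=$(sha256sum -c checksums.sha256)
echo "$check_output"
```

`sha256sum -c` exits nonzero and reports `FAILED` for any file whose hash no longer matches the manifest, which is what makes the manifest a tamper-evidence mechanism.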

Method and Decision Limits

Legal and Policy Controls

| Policy | Purpose | Location |
| --- | --- | --- |
| Privacy Policy | Defines data handling, retention, and disclosure boundaries. | /privacy-policy.html |
| Terms of Service | Defines commercial use terms, disclaimers, and liability boundaries. | /terms-of-service.html |
| Security Policy | Defines security commitments, disclosure workflow, and response posture. | /security-policy.html |