Security & Trust
Your data never touches our infrastructure.
Handoff Wizard never sees your customer data. Knowledge handovers run on the departing employee’s laptop, write to your company’s own Google Cloud project, and use your own Anthropic API key. Our infrastructure stores licensing metadata and crash reports — nothing else. That architecture isn’t an afterthought; it’s the product.
How customer data flows
```mermaid
flowchart LR
    EE[Departing Employee Laptop] -->|local file scan| EE
    EE -->|BYO Anthropic API key| ANT[Customer's Anthropic Account]
    EE -->|BYO GCP service account| GCP[Customer's GCP Project]
    GCP -->|stores synth docs| NLM[NotebookLM Enterprise]
    EM[Employer Admin Dashboard] -->|read-only link| NLM
    EE -.->|telemetry only, no content| HW[Handoff Licensing API on Render]
    EE -.->|crash reports, no PII| SE[Sentry handoff-wizard org]
    classDef customerOwned fill:#e8f5e9,stroke:#2e7d32
    classDef handoffOwned fill:#fff3e0,stroke:#ef6c00
    class EE,ANT,GCP,NLM,EM customerOwned
    class HW,SE handoffOwned
```

Green = customer-owned. Customer data, source files, AI outputs, and the published knowledge base all live in customer-controlled infrastructure.
Orange = Handoff-owned. Receives only licensing checkpoints (timestamp + project code + version) and anonymized crash reports.
What’s on Handoff’s infrastructure (and what isn’t)
| Class of data | Lives where | Retention |
|---|---|---|
| Customer source files | Customer laptop | Until employee deletes |
| AI synthesis docs | Customer’s GCP (NotebookLM) | Customer policy |
| AI prompts + responses | Customer’s Anthropic account | Anthropic policy under customer key |
| License records | Render Postgres (Oregon, US) | 7 years |
| Telemetry events | Render Postgres | 90 days |
| Crash reports | Sentry (US) | 90 days, PII scrubbed |
Architecture controls
Bring-Your-Own-Key (BYOK) by design
The departing employee enters their employer’s Anthropic API key during onboarding. That key is stored in the employee’s OS keychain (Windows Credential Manager / macOS Keychain). It never travels to Handoff servers. All LLM calls go customer-laptop ↔ Anthropic-direct. We can’t read your prompts because we never see them.
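The call path above can be sketched as follows. This is an illustrative sketch, not Handoff's actual client code: the endpoint and headers follow Anthropic's public Messages API, the model id is just an example, and `loadKeyFromKeychain` is a hypothetical helper standing in for the OS keychain read.

```typescript
// Sketch of the BYOK call path: the request is assembled on the employee's
// laptop and sent directly to Anthropic's public Messages API. No Handoff
// host appears anywhere in it. `loadKeyFromKeychain` is a hypothetical
// stand-in for the OS keychain read (Windows Credential Manager / macOS
// Keychain); it returns a placeholder here only so the sketch is
// self-contained.
type AnthropicRequest = {
  url: string;
  method: string;
  headers: Record<string, string>;
  body: string;
};

function loadKeyFromKeychain(): string {
  return "sk-ant-example-not-a-real-key";
}

function buildAnthropicRequest(prompt: string): AnthropicRequest {
  return {
    url: "https://api.anthropic.com/v1/messages", // Anthropic-direct, no proxy
    method: "POST",
    headers: {
      "x-api-key": loadKeyFromKeychain(),   // the customer's own key
      "anthropic-version": "2023-06-01",
      "content-type": "application/json",
    },
    body: JSON.stringify({
      model: "claude-3-5-sonnet-20241022",  // example model id
      max_tokens: 1024,
      messages: [{ role: "user", content: prompt }],
    }),
  };
}

const req = buildAnthropicRequest("Summarize the handover notes.");
console.log(req.url); // https://api.anthropic.com/v1/messages
```

Because the request is built and sent entirely on the employee's machine, the only party that ever holds the key besides the OS keychain is Anthropic itself.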
Data minimization at the boundary
Our backend is designed to reject any payload that could carry customer content. The licensing API validates every request against a strict Zod schema that permits only licensing fields; anything else is rejected with HTTP 400. There is no backend route through which customer content could arrive, even by accident.
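The allow-list principle can be shown without the Zod dependency. The sketch below is not Handoff's actual schema — the field names (`timestamp`, `projectCode`, `version`) are illustrative, taken from the checkpoint fields described above — but it demonstrates the same rule: any unexpected key fails validation before the request touches storage.

```typescript
// Minimal sketch of strict boundary validation: only three known licensing
// fields are accepted. Any extra key — i.e., anything that could smuggle in
// customer content — causes a 400-style rejection. Field names are
// illustrative, not the production schema.
const ALLOWED_FIELDS = ["timestamp", "projectCode", "version"] as const;

type ValidationResult =
  | { ok: true }
  | { ok: false; status: number; reason: string };

function validateCheckpoint(payload: Record<string, unknown>): ValidationResult {
  // Reject any key outside the allow-list.
  for (const key of Object.keys(payload)) {
    if (!(ALLOWED_FIELDS as readonly string[]).includes(key)) {
      return { ok: false, status: 400, reason: `unexpected field: ${key}` };
    }
  }
  // Require every allowed field to be present and a string.
  for (const field of ALLOWED_FIELDS) {
    if (typeof payload[field] !== "string") {
      return { ok: false, status: 400, reason: `missing or invalid field: ${field}` };
    }
  }
  return { ok: true };
}

const good = validateCheckpoint({
  timestamp: "2025-01-01T00:00:00Z",
  projectCode: "ACME-7",
  version: "1.4.2",
});
console.log(good); // { ok: true }

const bad = validateCheckpoint({
  timestamp: "2025-01-01T00:00:00Z",
  projectCode: "ACME-7",
  version: "1.4.2",
  notes: "anything resembling customer content",
});
console.log(bad); // { ok: false, status: 400, reason: 'unexpected field: notes' }
```

In production the same shape is enforced by a Zod schema, but the effect is identical: the backend cannot accept a field it was never told about.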
Encryption
TLS 1.2+ in transit on every public endpoint. AES-256 at rest on PostgreSQL. CMEK-encrypted engine backups in GCS.
Vulnerability disclosure
Email security@handoffwiz.com. We acknowledge within 24 hours and follow coordinated disclosure (typically 90 days from initial report or sooner if patched).
DPA
Our Data Processing Agreement is available on request to any customer with a signed NDA. Email legal@handoffwiz.com.
Plain-language FAQ
- Where does my company’s data live?
- On the departing employee’s laptop, in your own Google Cloud project, and in your own Anthropic account. None of it touches Handoff’s infrastructure.
- Who at Handoff has access to it?
- No one. Architecturally, there is no path: we never store it.
- Do you train AI on my data?
- No. AI calls are customer-laptop ↔ Anthropic-direct using your own API key. Handoff has no training pipeline because we have no customer content to train on.