AI Workflows That Pass POPIA, Not Skirt It
Automate the repetitive work your team should not be doing, on infrastructure that keeps personal information inside your control. Built from day one with data minimisation, audit logging, and lawful-basis mapping. Ready for Information Regulator scrutiny.
Indicative scope
- Function: regulated-data deployment
- Volume: tenant-private
- Region: ZA-JNB / ZA-CPT

Indicative scope only. Real engagement values confirmed at proposal.
What We Deliver
Most AI pipelines you will see pitched in South Africa send personal information to US-based LLMs with no data residency guarantees and no POPIA-aligned processor agreements. We build the harder version. Prompts are scrubbed before they leave your environment. Responses are logged with who, what, and why. Models run in-region where possible. Every workflow comes with the paperwork your information officer will want to see.
Everything Included
Data Minimisation by Default
Prompts are scrubbed of personal information before they reach any external model. You decide what leaves, and what stays inside your walls.
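In practice, the pre-flight scrub can be as simple as a typed redaction pass that runs before any prompt leaves your environment. The sketch below is illustrative only: the patterns, labels, and function names are our assumptions here, and a production scrubber would layer regex, NER, and field-level allow-lists rather than rely on three expressions.

```python
import re

# Illustrative patterns only; a production scrubber combines regex,
# NER, and per-field allow-lists agreed with the information officer.
PATTERNS = {
    "SA_ID": re.compile(r"\b\d{13}\b"),               # 13-digit SA identity number
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"(?<!\d)(?:\+27|0)\d{9}\b"), # SA phone formats
}

def scrub(prompt: str) -> str:
    """Replace personal information with typed placeholders
    before the prompt reaches any external model."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt
```

Typed placeholders (rather than blank deletions) keep the prompt intelligible to the model while guaranteeing the underlying values never leave your walls.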
Full Audit Trail
Every AI call is logged with prompt, response, user, time, and purpose. When your information officer asks what the model saw, you have a definitive answer.
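One way to picture such a trail: an append-only structured record per model call, tying user, time, and purpose together. The field names below are illustrative, not a production schema, and this sketch stores content hashes where a real deployment's retention policy would decide whether full prompt text is kept.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    """One append-only entry per model call; fields are illustrative."""
    user: str
    purpose: str          # ties the call back to its documented lawful basis
    lawful_basis: str
    prompt_sha256: str    # content hash; retention policy decides if full text is kept
    response_sha256: str
    timestamp: str        # UTC, ISO 8601

def record_call(user: str, purpose: str, lawful_basis: str,
                prompt: str, response: str) -> str:
    """Serialise one audit entry as a JSON line for the append-only log."""
    entry = AuditRecord(
        user=user,
        purpose=purpose,
        lawful_basis=lawful_basis,
        prompt_sha256=hashlib.sha256(prompt.encode()).hexdigest(),
        response_sha256=hashlib.sha256(response.encode()).hexdigest(),
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(entry))
```

Because every entry carries both a purpose and a lawful basis, the log answers the information officer's question directly instead of requiring reconstruction after the fact.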
Operator Agreements Ready
We supply POPIA-aligned operator agreements covering every third-party model in the pipeline, drafted by SA legal counsel.
In-Region Where Possible
For workloads that justify it, we run open-weight models in SA or EU regions instead of defaulting to US-hosted APIs.
Lawful Basis Mapped
Every workflow is tied to a documented lawful basis: consent, contract, legitimate interest, or legal obligation. No ambiguity.
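The mapping can be as plain as a declarative registry that every workflow must appear in before it is allowed to call a model. The workflow names and grounds below are hypothetical examples, not a real client configuration.

```python
# Hypothetical registry: each workflow declares its ground for
# processing under POPIA before it may run. Names are examples only.
LAWFUL_BASIS = {
    "invoice-triage": "contract",
    "hr-leave-summaries": "consent",
    "fraud-flagging": "legal_obligation",
}

ALLOWED_GROUNDS = {"consent", "contract", "legitimate_interest", "legal_obligation"}

def check_basis(workflow: str) -> str:
    """Refuse to run any workflow without a documented lawful basis."""
    basis = LAWFUL_BASIS.get(workflow)
    if basis not in ALLOWED_GROUNDS:
        raise PermissionError(f"No documented lawful basis for workflow {workflow!r}")
    return basis
```

Making the check a hard gate, rather than a policy document, is what turns "no ambiguity" from a promise into a property of the system.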
Fits Your Existing Stack
We integrate with Microsoft 365, Google Workspace, Salesforce, HubSpot, and any line-of-business system you already run.
What Success Looks Like
Every engagement is defined by the outcomes we commit to. Work output matters only to the extent that those outcomes land.
- An AI workflow handling the work your team currently does manually
- POPIA-aligned operator agreements covering every model in the pipeline
- Full audit logs tying every AI decision back to a lawful basis
- A workflow your information officer can sign off with a clear conscience
- Production monitoring that catches drift before it becomes a breach
Our Process
Function Scoping
Lock the target workflow, the data it touches, and the lawful basis for processing.
Data Flow Mapping
Every field, every system, every hop. Visualised and signed off by your information officer.
Guardrail Design
We spec the scrubbing, logging, retention, and exception-handling rules for the workflow.
Build
Engineering against the guardrail spec, with production-grade observability from commit one.
Audit Pack
Operator agreements, DPIA template, audit log configuration, and runbooks handed over with the system.
Deployment
Go-live with monitoring, plus a thirty-day review where we tune against real usage.
STATUS // RESPONSE WITHIN ONE BUSINESS DAY
Tell us the function.
Share the workflow or cost line you want to address, and we will come back within one business day with a scoped proposal.
