A trust layer for civic AI.
Verify AI is not just a chatbot. It is a browser-based civic AI safety system built to protect community users from harmful or misleading AI-generated information.
Why this exists
AI systems often generate confident-sounding responses that contain fake NGO phone numbers, incorrect hospital details, fabricated government schemes, or dangerously wrong civic guidance. In real community contexts, these errors can cause genuine harm.
Verify AI solves this by introducing a mandatory verification layer between AI generation and user delivery. No response reaches a user without passing an independent safety audit.
The dual-agent pipeline
Verify AI separates the generation and verification responsibilities into two fully independent agents. The agent that creates a response is never the agent that approves it.
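The separation above can be sketched as two independent functions wired in sequence. This is an illustrative sketch, not the project's actual implementation: both agents are stubbed with placeholder logic (in the real system each would be a separate model call), and the function names and the toy digit heuristic are assumptions for demonstration.

```python
def generate_response(prompt: str) -> str:
    """Generator agent: drafts a civic answer.
    Stubbed with a fixed string for illustration."""
    return "Community health centers typically offer free vaccination drives."


def audit_response(response: str) -> dict:
    """Auditor agent: reviews the draft independently and returns a verdict.
    Toy heuristic (an assumption, not the real audit): digits often signal
    specific, unverifiable details such as phone numbers."""
    risky = any(ch.isdigit() for ch in response)
    return {
        "decision": "BLOCK" if risky else "PASS",
        "reason": "contains specific unverifiable figures" if risky else "general guidance",
    }


def pipeline(prompt: str) -> str:
    """Mandatory verification layer: no draft reaches the user unaudited."""
    draft = generate_response(prompt)
    verdict = audit_response(draft)  # the auditor never shares state with the generator
    if verdict["decision"] == "PASS":
        return draft
    return "This response was blocked by the safety auditor."
```

The key property is that `audit_response` receives only the draft text, never the generator's prompt or internal state, so approval is independent of generation.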
PASS or BLOCK
The auditor returns a strictly structured JSON verdict. The decision engine acts on that verdict mechanically, without interpretation.
PASS:
- General civic guidance
- Safe educational content
- No specific unverifiable details
- Low-risk informational answers

BLOCK:
- Hallucinated phone numbers
- Fake NGO or clinic contacts
- Unverifiable local addresses
- Overconfident civic claims
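A decision engine that acts on the verdict "without interpretation" might look like the sketch below. The field names (`decision`, `flags`, `reason`) are assumptions chosen for illustration, not the project's actual schema; the point is that the engine parses the JSON, accepts only the two allowed decisions, and fails closed on anything else.

```python
import json

# Example auditor output; the field names here are illustrative assumptions.
AUDIT_JSON = """
{
  "decision": "BLOCK",
  "flags": ["hallucinated_phone_number"],
  "reason": "Response cites a helpline number that cannot be verified."
}
"""


def decide(raw_audit: str) -> bool:
    """Decision engine: act on the auditor's verdict verbatim.
    No reinterpretation, no overrides; unknown decisions fail closed."""
    verdict = json.loads(raw_audit)
    decision = verdict["decision"]
    if decision not in ("PASS", "BLOCK"):
        raise ValueError(f"auditor emitted unknown decision: {decision!r}")
    return decision == "PASS"
```

Because the engine only ever reads the `decision` field and raises on anything outside the PASS/BLOCK vocabulary, a malformed or ambiguous audit can never be silently treated as approval.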
How it's built
What comes next
- Google Search grounding for real-time civic fact verification
- Google Maps integration for location-based community queries
- Multi-auditor system (Fact, Bias, Logic auditors)
- Confidence scoring and audit transparency logs
- LiveKit realtime voice interaction
- External civic databases for ground-truth verification