Search should not feel like wrestling a web browser. Toru makes finding information feel simple, fast, and private, whether you ask with text, voice, files, or images.
Why build Toru?
- Signal over noise: Results should be crisp and actionable, not ten blue links and clutter.
- Multimodal by default: Users do not only type; they speak, upload, and point.
- Privacy first: Search data should not become an ad profile.
What Toru does today
- Natural-language Q&A: Ask in plain language and get concise answers with sources.
- File and image understanding: Drop in PDFs, docs, or images and query them directly.
- Fast response loop: Low-latency retrieval and reasoning so answers feel instant.
How Toru works (at a high level)
- Ingestion: Crawl or upload content. We tokenize, chunk, and embed it with embedding models.
- Retrieval: Hybrid search, keyword plus vector, to fetch the right passages fast.
- Reasoning: An AI agent synthesizes, cites, and formats answers.
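The ingest-and-retrieve steps above can be sketched in a few dozen lines. Everything below is an illustrative stand-in, not Toru's actual implementation: the greedy chunking rule, the character-trigram "embedding" (a real system would call a learned embedding model), and the 50/50 keyword/vector score fusion are all simplifying assumptions.

```python
import math
import re
from collections import Counter

def chunk(text, size=80):
    """Greedy word-boundary chunking (production systems typically also overlap chunks)."""
    chunks, cur = [], ""
    for w in text.split():
        if cur and len(cur) + len(w) + 1 > size:
            chunks.append(cur)
            cur = w
        else:
            cur = (cur + " " + w).strip()
    return chunks + ([cur] if cur else [])

def embed(text):
    """Stand-in dense embedding: character-trigram counts.
    A learned embedding model would replace this in practice."""
    t = re.sub(r"\s+", " ", text.lower())
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def keyword_score(query, doc):
    """Fraction of query terms that appear in the passage (keyword leg)."""
    q = set(re.findall(r"[a-z]+", query.lower()))
    d = set(re.findall(r"[a-z]+", doc.lower()))
    return len(q & d) / len(q) if q else 0.0

def hybrid_search(query, corpus, k=3, alpha=0.5):
    """Blend keyword overlap and vector similarity; the weight is illustrative."""
    qv = embed(query)
    scored = [
        (alpha * keyword_score(query, c) + (1 - alpha) * cosine(qv, embed(c)), c)
        for c in corpus
    ]
    return [c for _, c in sorted(scored, reverse=True)[:k]]

# Ingest a small document, then retrieve the best-matching passage.
docs = chunk(
    "Supervised learning uses labeled data to train predictors. "
    "Unsupervised learning finds structure in unlabeled data, such as clusters."
)
print(hybrid_search("what is supervised learning", docs, k=1))
```

The returned passages would then be handed to the reasoning step, which synthesizes and cites them.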
Example API call
curl -X POST https://api.chiatech.xyz/v1/toru/search \
  -H "Authorization: Bearer $CHIA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "query": "Summarize the key differences between supervised and unsupervised learning",
    "sources": ["web", "file:docs/ml-intro.pdf"]
  }'

Privacy by design
Toru is built on a strict privacy model: no ads, no third party data brokers, and transparent controls for what is stored.
What is next
- Continuous, source-grounded answers with inline citations
- Team workspaces and shared knowledge bases
- Native mobile apps