Chia Cloud is our AI-native infrastructure layer, built for low latency, privacy, and developer speed. We are reducing the surface area you need to manage while giving you fine-grained control where it matters.
Principles
- AI-first: Inference is a first-class workload, not an afterthought.
- Minimal APIs: Fewer endpoints, clearer contracts, better ergonomics.
- Privacy and control: Regional isolation, encryption, and transparent data paths.
Core building blocks
- Compute: On-demand GPU and CPU instances optimized for inference and batch jobs.
- Storage: Object storage with lifecycle policies and signed URLs.
- Data: Vector indexes and relational stores with managed ingestion pipelines.
- Delivery: Global CDN and multi-cloud failover.
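The signed URLs mentioned under Storage follow the usual pattern: an expiry timestamp plus an HMAC over the object key, verified server-side on access. As a rough illustration of the mechanism only (not the Chia Cloud implementation; the secret and host below are placeholder assumptions), here is a generic HMAC-based signer in TypeScript:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Placeholder signing secret; in practice this lives server-side only.
const SECRET = "demo-signing-secret";

// Build an expiring, HMAC-signed URL for an object key.
function signUrl(host: string, key: string, ttlSeconds: number, now = Date.now()): string {
  const expires = Math.floor(now / 1000) + ttlSeconds;
  const sig = createHmac("sha256", SECRET).update(`${key}:${expires}`).digest("hex");
  return `https://${host}/${key}?expires=${expires}&sig=${sig}`;
}

// Verify a signed URL: recompute the HMAC and check the expiry.
function verifyUrl(url: string): boolean {
  const u = new URL(url);
  const key = u.pathname.slice(1);
  const expires = Number(u.searchParams.get("expires"));
  const sig = u.searchParams.get("sig") ?? "";
  if (!Number.isFinite(expires) || expires < Math.floor(Date.now() / 1000)) return false;
  const expected = createHmac("sha256", SECRET).update(`${key}:${expires}`).digest("hex");
  return sig.length === expected.length &&
    timingSafeEqual(Buffer.from(sig), Buffer.from(expected));
}
```

Any change to the key, expiry, or signature invalidates the URL, which is what makes signed URLs safe to hand to untrusted clients.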
Spin up compute (API)
```shell
curl -X POST https://api.chiatech.com/v1/compute \
  -H "Authorization: Bearer $CHIA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "region": "us-west",
    "size": "gpu-small",
    "image": "pytorch-2.4",
    "labels": { "project": "demo" }
  }'
```

Upload a file (SDK)
```ts
import { ChiaClient } from "chia-sdk";

const client = new ChiaClient({ apiKey: process.env.CHIA_API_KEY });

await client.storage.upload("docs/spec.pdf", { bucket: "public" });
// -> returns { url, checksum, size }
```

Operational posture
- Observability: Metrics, traces, and budget caps out of the box.
- Multi-cloud: Run across providers, fail over gracefully.
- Compliance: Data residency and audit-friendly logging.
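Graceful failover can be thought of as an ordered list of equivalent endpoints, tried in sequence until one responds. The sketch below shows that pattern in TypeScript; the `Fetcher` type and endpoint names are illustrative assumptions, not part of the Chia Cloud API:

```typescript
// A fetcher abstracts "call one provider's copy of the resource".
type Fetcher = (url: string) => Promise<string>;

// Try each endpoint in priority order; return the first success,
// or throw with the last error if every provider fails.
async function withFailover(endpoints: string[], fetchFn: Fetcher): Promise<string> {
  let lastError: unknown;
  for (const endpoint of endpoints) {
    try {
      return await fetchFn(endpoint); // first healthy provider wins
    } catch (err) {
      lastError = err;                // record and fall through to the next
    }
  }
  throw new Error(`all providers failed: ${String(lastError)}`);
}
```

In a real setup the endpoint list would be ordered by health checks and latency rather than hard-coded, and retries would be bounded per provider.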
Roadmap
- Serverless inference endpoints
- Managed embeddings and retrieval
- Private networking between services