OpenAI-Compatible · Zero Data Egress

Sovereign AI Infrastructure
for Regulated Industries

Run open-source LLMs and training jobs on your own hardware, in your own jurisdiction. The hyperscaler alternative for regulated industries, research institutions, and AI teams who can't send their data to AWS or OpenAI — OpenAI-compatible, SOC 2-aligned, zero data egress.

Production deployments

Powering sovereign AI for the largest telecom and bank in Central Asia, NVFP4 research at U.S. universities, and API-first AI teams running 400+ GPUs in-jurisdiction.

Sovereignty

Built for Regulated AI Workloads

Every component of the platform — from GPU orchestration to billing — is designed to keep your data, models, and compute inside your perimeter.

SOC 2 Certified

Independently audited security controls and compliance documentation. Available under NDA for procurement review.

End-to-End Encryption

All data encrypted at rest and in transit across the platform. Zero-trust architecture between tenants and the control plane.

Full Tenant Isolation

Dedicated VMs and containers with IOMMU-enforced GPU isolation. No shared resources, no cross-tenant data leakage.

Data Residency

Workloads run on your hardware, in your jurisdiction. Data never crosses national borders or transits third-party clouds.

Full Audit Logging

Every operation tracked. Full visibility into access patterns, configuration changes, and workload lifecycle.

On-Premise Control

Runs on your hardware, in your facility, under your operational control. No third-party dependency for production workloads.

Drop-In Compatible

Your OpenAI Code, Sovereign Endpoint

CloudRift exposes vLLM-served open-source models through an OpenAI-compatible API. Any client library that talks to OpenAI talks to your sovereign deployment — no rewrites, no new SDK, no proprietary lock-in.

The same code works for streaming, embeddings, function calling, and vision. Drop in any open-weight model: Llama, Qwen, DeepSeek, Mistral, Mixtral.

inference.py
from openai import OpenAI

client = OpenAI(
    base_url="https://inference.your-domain.example/v1",
    api_key="your-api-key",
)

response = client.chat.completions.create(
    model="meta-llama/Llama-3.3-70B-Instruct",
    messages=[
        {"role": "system", "content": "You are an internal compliance assistant."},
        {"role": "user", "content": "Summarize Q3 fraud-detection alerts."},
    ],
)

print(response.choices[0].message.content)
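When `stream=True` is set, an OpenAI-compatible endpoint returns server-sent events in the same wire format OpenAI uses: `data:` lines carrying JSON chunks, terminated by `data: [DONE]`. A minimal stdlib sketch of consuming that format (the sample payloads are illustrative, not captured traffic):

```python
import json

# Illustrative sample of the SSE wire format an OpenAI-compatible
# endpoint emits when stream=True is requested.
sse_lines = [
    'data: {"choices": [{"delta": {"content": "Hello"}}]}',
    'data: {"choices": [{"delta": {"content": ", world"}}]}',
    "data: [DONE]",
]

def collect_stream(lines):
    """Accumulate delta.content fields from an OpenAI-style SSE stream."""
    text = []
    for line in lines:
        payload = line.removeprefix("data: ").strip()
        if payload == "[DONE]":  # sentinel that closes the stream
            break
        delta = json.loads(payload)["choices"][0]["delta"]
        if delta.get("content"):
            text.append(delta["content"])
    return "".join(text)

print(collect_stream(sse_lines))  # Hello, world
```

In practice the `openai` client handles this parsing for you; the point is that the on-the-wire protocol is identical, so existing streaming clients need no changes.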

Use Cases

From Inference to Air-Gapped

Production deployments today spanning inference, fine-tuning, RAG, and air-gapped operations — across telco, banking, government, and research.

Sovereign LLM Inference

Open-weight Llama, Qwen, DeepSeek, and Mistral models served through OpenAI-compatible APIs on your infrastructure. Production deployments today serving Central Asia’s largest banking and government end-customers from in-country H200 capacity.

Research & Fine-Tuning

Train and fine-tune on sensitive data — patient records, financial transactions, classified corpora — under your existing security controls. Used today for NVFP4 research on RTX 5090 at U.S. universities.

Private RAG Systems

Retrieval-augmented AI over regulated document stores — internal policy, customer records, technical archives — without exposing any of it to a third-party cloud.
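The retrieval step of such a system reduces to cosine similarity over embedding vectors that never leave your network. A toy sketch with hand-picked three-dimensional vectors standing in for real embeddings (in production, vectors would come from an embeddings model served on the same OpenAI-compatible endpoint; the document names and values here are illustrative):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Tiny illustrative stand-ins for real embedding vectors.
store = {
    "policy-001: data must stay in-country": [0.9, 0.1, 0.0],
    "policy-002: quarterly access reviews":  [0.1, 0.8, 0.2],
    "runbook-017: GPU node replacement":     [0.0, 0.2, 0.9],
}

def retrieve(query_vec, k=1):
    """Return the k document keys most similar to the query vector."""
    ranked = sorted(store, key=lambda d: cosine(query_vec, store[d]), reverse=True)
    return ranked[:k]

query = [0.85, 0.15, 0.05]  # e.g. "where can customer data be stored?"
print(retrieve(query))  # ['policy-001: data must stay in-country']
```

The retrieved passages are then concatenated into the prompt sent to the in-jurisdiction chat endpoint, so neither the document store nor the query ever transits a third-party cloud.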

Air-Gapped Deployments

Run the full platform offline in classified or disconnected environments. No outbound calls, no telemetry, no model phone-home. Updates ship as signed bundles your team validates before installing.

The Team

Inference Engineers from Apple and Roblox

CloudRift's founding team has shipped 40+ high-profile AI inference systems serving billions of users — Vision Pro hand tracking, Roblox face tracking and voice chat, ARKit, Core ML, iOS Portrait Mode, and many more.

Featured in Tom's Hardware and TechPowerUp

Dmitry Trifonov

CEO & Co-Founder

Led AI initiatives at HP, Apple, and Roblox, delivering features for iPhone, Vision Pro, and Roblox platforms.

Apple, Roblox, HP

Slawomir Strumecki

Systems Lead & Co-Founder

Systems engineer with a background in game engine development, contributing to Roblox and Rainbow Six: Siege.

Roblox, Ubisoft

Dimitrios Verraros

Cloud Lead & Co-Founder

Seasoned infrastructure engineer, previously at Apple, where he built ML orchestration systems for Apple Vision Pro.

Apple

Versus the Alternatives

Why Regulated Buyers Pick CloudRift

Three options if you need AI compute under your control. The tradeoffs aren't where most teams expect.

Why Regulated Buyers Pick CloudRift: comparison across 6 dimensions.
| | CloudRift | Hyperscaler API (OpenAI / Bedrock) | DIY Self-Hosted |
|---|---|---|---|
| Where your data lives | In your jurisdiction | In vendor’s region | In your jurisdiction |
| Provider access to inputs/outputs | None | Per vendor terms | None |
| Time to production | Days | Hours | 6–12 months |
| Engineering effort to operate | Low — platform managed | Low | High — full-stack ownership |
| Air-gap supportable | Yes | No | Yes |
| SOC 2 / data-residency alignment | Built-in | Vendor’s controls only | Build & certify yourself |

FAQ

Common Questions About Sovereign AI

What does "sovereign AI" actually mean?

Three things together: (1) the compute runs on hardware you own or control, (2) all data — model weights, training corpora, inference inputs and outputs — stays inside your jurisdiction, and (3) you retain operational control over upgrades, access, and audit. CloudRift delivers all three without forcing you onto a hyperscaler stack.

Where does my data live?

Data lives only on the GPUs and storage you designate, in the datacenter and country you choose. CloudRift's control plane orchestrates jobs but does not have access to model weights, training data, or inference traffic. For air-gapped deployments, the control plane runs entirely inside your network.

Does any data ever leave my infrastructure?

No. By default, model weights, datasets, and inference payloads never leave your infrastructure. We do not collect telemetry on workload contents. The only outbound traffic is platform health metrics, which can be disabled or fully air-gapped on request.

What certifications and compliance frameworks does CloudRift support?

CloudRift is SOC 2 certified. The platform supports controls aligned with GDPR, HIPAA, and national data-residency frameworks (e.g. BSI in Germany, OSI in Italy). For regulated deployments, we provide documentation and configuration support to meet your specific certification requirements.

Can the platform run fully air-gapped?

Yes. The full platform — control plane, console, billing, and inference endpoints — can run inside a disconnected network with no outbound calls. Updates and model artifacts ship via signed offline bundles that your team validates before installing.

How do platform and model updates work?

You control the update cadence. New CloudRift releases ship as signed bundles you can stage, test, and roll out on your schedule. Open-weight model updates are pulled from your designated registry (internal mirror or public Hugging Face), never from CloudRift.
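Staging a bundle typically involves checking each artifact's digest against a pinned manifest before rollout. A stdlib sketch of that check (the manifest layout and `verify_artifact` helper are hypothetical illustrations, not CloudRift's actual bundle format or tooling):

```python
import hashlib

# Hypothetical manifest for illustration: filename -> expected SHA-256 hex digest.
manifest = {
    "model-weights.bin": hashlib.sha256(b"example weights payload").hexdigest(),
}

def verify_artifact(name: str, payload: bytes, manifest: dict) -> bool:
    """Return True only if the payload's digest matches the pinned manifest entry."""
    expected = manifest.get(name)
    return expected is not None and hashlib.sha256(payload).hexdigest() == expected

print(verify_artifact("model-weights.bin", b"example weights payload", manifest))  # True
print(verify_artifact("model-weights.bin", b"tampered payload", manifest))         # False
```

A real deployment would additionally verify the manifest's own signature (e.g. with your team's signing keys) before trusting any digest in it.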
Get started

Ready to deploy sovereign AI?

Book a compliance review with our deployment team to map your security requirements to a deployment plan, or jump straight to the form below.

Contact Us

Let us know if you're looking to:

  • Find an affordable GPU provider
  • Sell your compute online
  • Manage on-prem infrastructure
  • Build a hybrid cloud solution
  • Optimize your AI deployment
hello@cloudrift.ai
CloudRift Inc., a Delaware corporation
PO Box 1224, Santa Clara, CA 95052, USA
+1 (831) 534-3437
Follow us on X
