One Database. Every Workload.
Documents, vectors, hot/cold tiering, time travel, multi-tenancy, and real-time change streams. All in a single platform that speaks the MongoDB protocol.
Unified Storage
Documents and vectors in one engine. No syncing between MongoDB and Pinecone. One query combines structured filters with semantic similarity search.
// One query: filter + vector search
const results = await db
  .collection('products')
  .aggregate([
    {
      $vectorSearch: {
        queryVector: embedding,
        path: "$vector",
        limit: 10,
        filter: {
          category: "electronics",
          price: { $lt: 500 }
        }
      }
    },
    {
      $project: {
        name: 1, price: 1, score: 1
      }
    }
  ])
  .toArray();
Replace three databases with one
MongoDB Compatible
Full wire protocol support. Change one connection string and your app works. No rewrites, no new query language, no migration headaches.
// Before
const uri = "mongodb+srv://user:pass"
  + "@cluster0.mongodb.net/mydb";

// After - just change the host
const uri = "mongodb+srv://user:pass"
  + "@cluster.thermoclinecloud.com/mydb";

// Everything else stays the same
const client = new MongoClient(uri);
const db = client.db("mydb");
const users = await db
  .collection("users")
  .find({ active: true })
  .toArray();
Works with every driver
ODMs and tools
Intelligent Storage Tiering (Only in Thermocline)
Three tiers: hot data on NVMe SSDs, cold data in columnar Parquet on S3, and archive for long-term retention. Queries transparently span all tiers. No application changes needed.
NVMe SSDs with LSM-tree indexing. Frequently accessed documents stay here for sub-millisecond reads.
Columnar Parquet on S3. Historical data stays queryable at a fraction of the cost. No warehouse needed.
Glacier-class storage for regulatory retention. Queryable on demand with automatic rehydration.
Move to cold after N days of no writes
Tier when collection exceeds a size threshold
System analyzes reads and tiers automatically
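The tiering policies above could be expressed as a per-collection configuration. A hypothetical sketch — the field names here are illustrative only, not a documented Thermocline schema:

```json
{
  "collection": "orders",
  "tiering": {
    "mode": "automatic",
    "hotToCold": { "afterDaysWithoutWrite": 30 },
    "coldWhenSizeExceeds": "50GB",
    "coldToArchive": { "afterDays": 365 }
  }
}
```

Each rule maps to one of the policies listed above: age-based demotion, size-based demotion, or fully automatic access-pattern analysis.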
Native Vector Search (Unique)
HNSW + DiskANN indexes on every collection. Store embeddings alongside your documents. Run hybrid queries that combine filters with billion-scale k-NN search in a single call.
// Create a vector index
await db.collection('articles').createIndex(
  { embedding: "vectorSearch" },
  {
    vectorSearchOptions: {
      dimensions: 1536,
      metric: "cosine"
    }
  }
);

// Semantic search with filters
const results = await db
  .collection('articles')
  .aggregate([{
    $vectorSearch: {
      queryVector: await embed(query),
      path: "embedding",
      limit: 20,
      filter: { published: true }
    }
  }]).toArray();
Distance metrics
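As a refresher on what the `cosine` metric named above actually computes, here is a minimal, runnable sketch in plain JavaScript (no Thermocline APIs involved):

```javascript
// Cosine similarity: (a · b) / (|a| * |b|).
// 1 means the vectors point the same way; 0 means they are orthogonal.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

console.log(cosineSimilarity([1, 0], [1, 0])); // → 1 (identical direction)
console.log(cosineSimilarity([1, 0], [0, 1])); // → 0 (orthogonal)
```

Because cosine similarity ignores vector magnitude, it is the usual choice for normalized text embeddings such as the 1536-dimension vectors in the index example above.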
Time Travel Queries (Only in Thermocline)
Query your data as it existed at any point in time. Thermocline's MVCC engine maintains version chains for every document, enabling instant snapshot reads at any historical timestamp.
// Read data as it existed yesterday
db.orders.find(
  { customerId: "cust_123" },
  {
    readConcern: {
      atClusterTime: Timestamp(1706745600, 0)
    }
  }
)

// Compare current vs historical state
const now = db.accounts.findOne(
  { _id: accountId }
)
const then = db.accounts.findOne(
  { _id: accountId },
  { readConcern: { atClusterTime: yesterday } }
)
Use cases
Prove the exact state of your data at any point for regulatory compliance.
See the state of data when a bug occurred. Compare before and after.
Undo accidental deletes or updates in seconds. No backup restoration needed.
Compare data across time periods. Track trends without maintaining separate snapshots.
Built-in Multi-Tenancy (Only in Thermocline)
Stop building tenant routing in your application layer. Thermocline provides native multi-tenancy with three isolation models, per-tenant resource quotas, and tenant-aware vector search.
Database per Tenant
Full isolation for enterprise customers. Each tenant gets a dedicated database with independent storage, indexes, and configuration.
Collection per Tenant
Balanced isolation and efficiency. Tenants share a database but get their own collections with independent indexes and tiering policies.
Shared Collections
Row-level security with automatic tenant filtering. Most cost-efficient model for high tenant counts with smaller data volumes.
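A per-tenant configuration for these models might look like the following sketch — the field names are hypothetical, not a documented Thermocline schema:

```json
{
  "tenant": "acme-corp",
  "isolation": "collection",
  "quotas": {
    "storageGB": 100,
    "requestsPerSecond": 500
  },
  "vectorSearch": { "tenantScoped": true }
}
```

In this sketch, `isolation` would take one of `database`, `collection`, or `shared`, matching the three models described above, while `quotas` expresses the per-tenant resource limits.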
Change Streams
WAL-powered event streams with guaranteed ordering and resume tokens. Build event-driven architectures without Kafka or RabbitMQ.
const stream = db
  .collection('orders')
  .watch([
    { $match: {
      operationType: { $in: ["insert", "update"] }
    }}
  ]);

stream.on('change', async (event) => {
  // Update search index
  await searchIndex.upsert(event.fullDocument);
  // Invalidate cache
  await cache.del(event.documentKey._id);
  // Trigger webhook
  await notify(event);
});
Event-driven patterns
Push data changes to your UI the moment they happen.
Automatically expire cached entries when source data changes.
Re-embed and update vector indexes when documents change.
Fire downstream API calls on inserts, updates, or deletes.
Built-in Auth & Document-Level Security (Only in Thermocline)
JWT, API keys, OAuth 2.0, and MFA built into the platform. Document-level security (DLS) policies use $auth.* variables evaluated directly in the storage engine for zero-overhead access control.
// Document-level security policy
{
  "collection": "medical_records",
  "rules": [
    {
      "role": "doctor",
      "filter": { "assignedDoctor": "$auth.userId" }
    },
    {
      "role": "patient",
      "filter": { "patientId": "$auth.userId" }
    }
  ]
}
Authentication methods
Verify and extract claims from JWTs issued by your identity provider.
Scoped API keys with per-key permissions and rate limits.
Built-in OAuth flows with Google, GitHub, and custom providers.
Time-based one-time passwords (TOTP) for sensitive operations.
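To make the `$auth.*` variables concrete, here is a minimal sketch in plain JavaScript of how a policy filter might resolve against a request's auth claims. This is illustrative only — not the storage-engine implementation:

```javascript
// Hypothetical resolution of $auth.* placeholders in a DLS policy filter.
// Any string value starting with "$auth." is replaced with the matching
// claim from the authenticated request; nested filters are walked recursively.
function resolveAuthVars(filter, auth) {
  const out = {};
  for (const [key, value] of Object.entries(filter)) {
    if (typeof value === 'string' && value.startsWith('$auth.')) {
      out[key] = auth[value.slice('$auth.'.length)];
    } else if (value && typeof value === 'object' && !Array.isArray(value)) {
      out[key] = resolveAuthVars(value, auth);
    } else {
      out[key] = value;
    }
  }
  return out;
}

const policyFilter = { assignedDoctor: '$auth.userId' };
const claims = { userId: 'user_42', role: 'doctor' };
console.log(resolveAuthVars(policyFilter, claims));
// → { assignedDoctor: 'user_42' }
```

The resolved filter is then ANDed onto every query the user runs, so documents outside the policy never leave the storage layer.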
Auto-Generated REST API
Every collection automatically gets REST endpoints with OpenAPI 3.1 documentation. Query parameters for filtering, time travel via ?at=, and full CRUD out of the box.
# List documents with filtering
curl "https://api.thermoclinecloud.com/v1/db/mydb/collections/orders?status=shipped&limit=10"

# Time travel: read data at a past timestamp
curl "https://api.thermoclinecloud.com/v1/db/mydb/collections/orders?at=2025-01-15T00:00:00Z"

# Insert a document
curl -X POST \
  "https://api.thermoclinecloud.com/v1/db/mydb/collections/orders" \
  -d '{"item": "widget", "qty": 5}'
API features
Auto-generated schema documentation for every collection endpoint.
Add ?at= to any read request to query historical data.
Filter, sort, project, and paginate with URL query parameters.
Document-level security policies applied automatically to all API requests.
Realtime Engine (Only in Thermocline)
WebSocket subscriptions push database changes to clients in real time. Broadcast channels, presence tracking with CRDTs, and DLS-filtered events. No separate pub/sub infrastructure needed.
import { ThermoclineClient } from '@thermocline/client';

const tc = new ThermoclineClient(url);

// Subscribe to database changes
tc.subscribe('orders', {
  filter: { status: 'pending' },
  onChange(event) {
    console.log(event.type, event.doc);
  }
});

// Presence tracking
const room = tc.presence('doc:abc');
room.enter({ cursor: { x: 0, y: 0 } });
room.on('update', (peers) => {
  // CRDT-merged peer state
  renderCursors(peers);
});
Realtime capabilities
WebSocket push of insert, update, and delete events with server-side filtering.
Pub/sub messaging between connected clients without database writes.
Track online users, cursors, and shared state with CRDT conflict resolution.
Clients only receive events for documents they have access to.
Edge Functions & Triggers
WAL-based triggers fire on database events. Edge functions run on the Controller layer. Built-in cron scheduler, auto-embed on insert for RAG pipelines, and webhook delivery.
// WAL-based trigger: auto-embed on insert
export default {
  collection: 'articles',
  events: ['insert', 'update'],
  async handler(event, ctx) {
    const embedding = await ctx.embed(event.doc.content);
    await ctx.db
      .collection('articles')
      .updateOne(
        { _id: event.doc._id },
        { $set: { $vector: embedding } }
      );
  }
};
Compute at the edge
Guaranteed-delivery triggers that fire on insert, update, or delete events.
Schedule functions with cron expressions. Managed execution with retry logic.
Automatically generate vector embeddings on insert for RAG pipelines.
Deliver events to external URLs with configurable retry and backoff.
Built-in Queues (Only in Thermocline)
Durable message queues backed by the same storage engine as your data. Visibility timeouts, dead-letter queues, 50K+ messages per second, and full ACID guarantees. No Kafka or SQS needed.
// Enqueue a message
await tc.queue('email-jobs').send({
  to: 'user@example.com',
  template: 'welcome',
  data: { name: 'Alice' }
});

// Process messages with visibility timeout
tc.queue('email-jobs').process(
  async (msg) => {
    await sendEmail(msg.body);
    msg.ack();
  },
  { visibilityTimeout: 30, batchSize: 10 }
);
Queue capabilities
Messages stored as documents with full ACID guarantees and replication.
Messages re-appear if not acknowledged within the timeout period.
Failed messages automatically routed to DLQ after max retries.
High throughput backed by the same LSM-tree engine as your data.
Database Branching (Only in Thermocline)
O(1) copy-on-write branches via MVCC snapshots. Each branch gets its own WAL and memtable. Merge back with conflict detection. Perfect for preview environments and CI/CD pipelines.
# Create a branch (instant, O(1))
tc branch create feature/new-schema \
  --from main

# Branch gets its own WAL + memtable
# Run migrations, seed test data...
tc branch connect feature/new-schema

# Merge back with conflict detection
tc branch merge feature/new-schema \
  --into main \
  --strategy last-write-wins

# Use in CI: auto-create per PR
tc branch create pr-${PR_NUMBER} \
  --ttl 7d
Branching use cases
Every pull request gets an isolated database branch with production data.
Test migrations on a branch before merging. Roll back instantly if something breaks.
Auto-create branches per PR with TTL-based cleanup. Zero manual teardown.
Merge strategies with automatic conflict detection and resolution policies.
File Storage
S3-backed file storage with metadata stored in your database. Same DLS policies protect files. Upload and download via REST API with image transforms and bucket management.
// Upload a file
const file = await tc.storage('avatars')
  .upload('user-123.jpg', buffer, {
    contentType: 'image/jpeg',
    metadata: { userId: 'user_123' }
  });

// Get a public URL with transforms
const url = tc.storage('avatars')
  .getPublicUrl('user-123.jpg', {
    transform: {
      width: 200,
      height: 200,
      format: 'webp'
    }
  });
Storage features
Files stored on S3 with metadata in your database for unified queries.
Resize, crop, and convert images on the fly via URL parameters.
Same document-level security policies control file access.
Create buckets, set size limits, configure public/private access policies.
MCP Server
AI coding assistant integration via the Model Context Protocol. Database, schema, management, and docs tool groups. Distributed as a single Go binary with stdio and SSE transports.
{
  "mcpServers": {
    "thermocline": {
      "command": "thermocline-mcp",
      "args": ["--transport", "stdio"],
      "env": {
        "TC_API_KEY": "tc_key_..."
      }
    }
  }
}
// Tool groups: database, schema,
// management, docs
MCP tool groups
Query, insert, update, and delete documents directly from your AI assistant.
Inspect collections, indexes, and validation rules. Generate migration scripts.
Monitor cluster health, manage branches, configure tiering policies.
Search Thermocline documentation and get context-aware help inline.
Client SDKs
TypeScript (@thermocline/client) and Python SDKs. Pure HTTP/WebSocket clients with auth helpers, query builder, and realtime subscriptions. No native dependencies.
import { ThermoclineClient } from '@thermocline/client';

const tc = new ThermoclineClient({
  url: 'https://my.thermoclinecloud.com',
  apiKey: 'tc_key_...'
});

// Query builder
const orders = await tc
  .from('orders')
  .where('status', 'eq', 'shipped')
  .limit(25)
  .select('id', 'total', 'createdAt')
  .execute();

// Realtime subscription
tc.subscribe('orders', {
  onChange: (e) => update(e)
});
SDK features
Full type safety with generics. Works in Node.js, Deno, Bun, and browsers.
Async/await support with type hints. Compatible with asyncio and trio.
JWT refresh, API key management, and OAuth flow utilities built in.
Pure HTTP/WebSocket clients. No native modules or binary downloads.
Unmatched Economics at Scale
Intelligent tiering, columnar cold storage, and unified architecture eliminate the cost of running multiple services. The more data you store, the more you save.
Enterprise Security
Encryption at rest and in transit. Role-based access control. Comprehensive audit logging. Built for teams that take security seriously.
AES-256 at Rest
All data encrypted on disk with AES-256-GCM. Customer-managed keys available on Enterprise.
TLS 1.3 in Transit
Every connection encrypted with TLS 1.3. Older protocol versions are not supported.
Audit Logging
Every query, connection, and config change logged. Export to S3, GCS, or webhook.
RBAC
Admin, readWrite, and read roles at database and collection level. Least-privilege by default.
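Given the wire-protocol compatibility described earlier, role grants should follow the standard MongoDB command shape. A sketch, assuming Thermocline accepts the stock createUser command document:

```json
{
  "createUser": "analyst",
  "pwd": "<password>",
  "roles": [
    { "role": "read", "db": "analytics" },
    { "role": "readWrite", "db": "staging" }
  ]
}
```

Scoping each role to a specific database (and collection, where supported) keeps accounts least-privilege by default.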
Build AI Applications on a Database That Gets It
One database for your vectors, your context, your agent memory, and your application data. Start free. Scale without limits.