RedDB Cloud · Open core

Stop running 7 databases.
Run RedDB.

Postgres, Mongo, Redis, Pinecone, Neo4j, Influx, RabbitMQ — replaced by one engine that also answers natural-language questions across your data. For startups that can't afford a 5-person infra team.

AGPL-3.0 self-host · Cloud in private beta · No credit card
7→1
databases consolidated
<1ms
cache-hit latency
30MB
embedded binary
~20
lines for full RAG

red://ask

$ red server --http-bind 127.0.0.1:8080 --path ./data.rdb

red> INSERT INTO hosts (ip, os) VALUES ('10.0.0.1', 'linux');

red> SEARCH SIMILAR TEXT 'suspicious login' COLLECTION logs;

red> ASK 'who owns passport AB1234567 and what services do they use?'

grounded answer

Owner: Alice Costa. Services: billing, admin-console, vpn. Related records were found across table rows, vector matches, graph edges and KV config.

bulk insert 241K/s
cache hit <1ms
providers 11

The stack problem

Your app should not need seven databases to answer one question.

Most startups end up paying five vendors and writing the glue between them. RedDB makes the data model a query capability instead of a separate product to deploy, sync, observe and recover.

fragmented stack

Postgres: rows
Mongo: docs
Neo4j: graphs
Pinecone: vectors
Redis: kv
Influx: metrics
RabbitMQ: queues

RedDB

Collections: one engine
ASK: cross-model context
Drivers: Rust · JS · Python

ASK

RAG inside the database layer.

One keyword retrieves rows, documents, vectors, graph edges and KV at once, then calls the model. No retrieval pipeline. No vector store to sync.

ASK 'what vulnerabilities affect host 10.0.0.1?'

RedDB pulls context from SQL rows, graph expansion, document bodies, vector search and KV metadata before generating a grounded response — with provider, sources and prompt token usage attached.

Compare

DIY stack vs RedDB.

Five managed services and a custom RAG pipeline, or one engine with native ASK. Same workload, very different surface area.

DIY stack

Postgres + pgvector + Redis + Pinecone + Neo4j

Relational tables
yes
Vector search
pgvector + Pinecone
Graph traversal
Neo4j
Time-series + retention
Influx
Queues / consumer groups
RabbitMQ
Natural-language ASK / RAG
glue code
Domain types (IP, GeoPoint, Money, …)
app code
Services to operate
5+
Embedded mode (single binary)
no
Self-host option
yes
Vendor lock-in
low

Supabase

Postgres + extensions

Relational tables
yes
Vector search
pgvector
Graph traversal
no
Time-series + retention
no
Queues / consumer groups
no
Natural-language ASK / RAG
glue code
Domain types (IP, GeoPoint, Money, …)
app code
Services to operate
1
Embedded mode (single binary)
no
Self-host option
yes
Vendor lock-in
medium

RedDB Cloud

one engine, 7 models

Relational tables
yes
Vector search
native
Graph traversal
native
Time-series + retention
native
Queues / consumer groups
native
Natural-language ASK / RAG
one keyword
Domain types (IP, GeoPoint, Money, …)
48 built-in
Services to operate
1
Embedded mode (single binary)
~30MB
Self-host option
yes (AGPL)
Vendor lock-in
low
Methodology

DIY stack assumes Postgres + pgvector for relational and embeddings, Pinecone for production vector search, Redis for KV / queues, Neo4j for graph traversal, and Influx for time-series. Pricing and ops cost are not in this table — they belong in the TCO calculator.

Supabase row reflects the managed Postgres tier with pgvector and pgmq. Graph traversal, native time-series and queues are listed as “no” because they require app-side modelling or external services.

RedDB Cloud row reflects the engine's public capabilities at v0.2.x. Domain types and ASK are documented on the RedDB docs.

One query surface

Insert anything. Select across everything.

RedDB lets collections carry different data semantics without forcing your app to stitch together separate APIs for every model.

Mental model

Collections are where data lives. Models are how you use it.

In RedDB, a collection is a named logical container. A collection can behave like a table, document store, graph, vector index, key-value namespace, time-series series or queue depending on what you write into it.

users · table rows · INSERT INTO users (name, email) VALUES ('Alice', 'alice@co.com')
events · documents · INSERT INTO events DOCUMENT (body) VALUES ('{"level":"warn"}')
identity · graph edges · INSERT INTO identity EDGE (label, from, to) VALUES ('OWNS', 'alice', 'passport:AB1234567')
notes · vectors · INSERT INTO notes (body) VALUES ('suspicious login') WITH AUTO EMBED (body) USING openai
settings · key-value · PUT settings.risk_threshold = 'high'
write examples · 7 models
row · INSERT INTO users (id, name, email) VALUES (42, 'Alice', 'alice@co.com')
document · INSERT INTO logs DOCUMENT (body) VALUES ('{"level":"warn","ip":"10.0.0.1"}')
graph · INSERT INTO identity EDGE (label, from, to) VALUES ('OWNS', 'alice', 'passport:AB1234567')
vector · INSERT INTO notes (body) VALUES ('suspicious vpn login') WITH AUTO EMBED (body) USING openai
kv · PUT config.risk_threshold = 'high'
timeseries · INSERT INTO cpu_metrics (metric, value, tags) VALUES ('cpu.idle', 95.2, '{"host":"srv1"}')
queue · QUEUE PUSH investigations '{"case":"AB1234567","priority":"high"}'
cross-model context · actual primitives

SEARCH CONTEXT 'passport:AB1234567' DEPTH 2;

ASK 'who owns passport AB1234567 and what services do they use?';

SEARCH CONTEXT shape
tables (users) · table · indexed passport match
graph.edges (identity) · graph_edge · OWNS passport edge
vectors (notes) · vector · 0.91 semantic similarity
documents (logs) · document · warning login payload
key_values (config) · kv · risk_threshold=high

compact JSON preview
{
  "query": "passport:AB1234567",
  "tables": [{
    "score": 1,
    "red_entity_type": "table",
    "red_capabilities": ["structured", "table"],
    "discovery": { "type": "indexed", "field": "passport" },
    "collection": "users",
    "entity": { "id": 42, "identity": { "table": "users", "row_id": 42 } }
  }],
  "graph": {
    "nodes": [],
    "edges": [{
      "score": 0.82,
      "red_entity_type": "graph_edge",
      "red_capabilities": ["graph", "graph_edge"],
      "discovery": { "type": "graph_traversal", "source_id": 42, "edge_type": "adjacent", "depth": 1 },
      "collection": "identity",
      "entity": { "id": 77, "identity": { "label": "OWNS", "from_node": "alice", "to_node": "passport:AB1234567" } }
    }]
  },
  "vectors": [{
    "score": 0.91,
    "red_entity_type": "vector",
    "red_capabilities": ["embedding", "similarity", "vector"],
    "discovery": { "type": "vector_query", "similarity": 0.91 },
    "collection": "notes",
    "entity": { "id": 91, "data": { "content": "suspicious vpn login" } }
  }],
  "documents": [{ "collection": "logs", "red_capabilities": ["document", "structured", "table"] }],
  "key_values": [{ "collection": "config", "red_entity_type": "kv" }],
  "connections": [{
    "from_id": 42,
    "to_id": 77,
    "type": "graph_edge",
    "edge_type": "identity",
    "weight": 1
  }],
  "summary": {
    "total_entities": 5,
    "direct_matches": 3,
    "expanded_via_graph": 1,
    "expanded_via_cross_refs": 0,
    "expanded_via_vector_query": 1,
    "collections_searched": 6,
    "execution_time_us": 1432,
    "tiers_used": ["index", "multimodal", "scan"],
    "entities_reindexed": 1
  }
}
ASK response
{
  "ok": true,
  "query": "ASK 'who owns passport AB1234567 and what services do they use?'",
  "mode": "sql",
  "capability": "table",
  "statement": "ask",
  "engine": "runtime-ai",
  "record_count": 1,
  "result": {
    "columns": ["answer", "provider", "model", "prompt_tokens", "completion_tokens", "sources_count"],
    "records": [{
      "values": {
        "answer": "Alice Costa owns passport AB1234567. The answer is based on the users collection, the identity graph edge, the logs document, the notes vector match and config key risk_threshold=high.",
        "provider": "groq",
        "model": "llama-3.3-70b-versatile",
        "prompt_tokens": 1834,
        "completion_tokens": 74,
        "sources_count": 5
      },
      "nodes": {}, "edges": {}, "paths": [], "vector_results": []
    }],
    "stats": { "nodes_scanned": 0, "edges_scanned": 0, "rows_scanned": 0, "exec_time_us": 0 }
  },
  "selection": { "scope": "any" }
}

why this matters

The app asks one question. RedDB builds one context set from user rows, evidence documents, graph relationships, semantic matches, configuration state and pending workflow records.

7 models, 1 engine

Stop shipping a database zoo.

RedDB keeps one storage format and one query surface while letting each collection behave like the model your workload needs.

01

Tables

relational rows and joins

02

Documents

JSON records with schema where useful

03

Graphs

edges, context expansion and traversal

04

Vectors

semantic search without a sidecar store

05

KV

fast configuration and state

06

Time-series

retention, downsampling and telemetry

07

Queues

FIFO, priority and consumer groups
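As a sketch, two of these models can sit side by side in the same engine using only statements shown elsewhere on this page (collection names are illustrative):

```sql
-- table model: a plain relational write
INSERT INTO users (id, name, email) VALUES (42, 'Alice', 'alice@co.com');

-- vector model: embed on write, then search semantically
INSERT INTO notes (body) VALUES ('suspicious vpn login')
  WITH AUTO EMBED (body) USING openai;
SEARCH SIMILAR TEXT 'suspicious login' COLLECTION notes;

-- queue model: push work into the same engine, no broker
QUEUE PUSH investigations '{"case":"AB1234567","priority":"high"}';
```

No separate deployment per model — each collection takes on the behavior its writes imply.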

Types

The database understands your domain.

RedDB ships with 48 built-in types so the app does not parse, normalize and revalidate every IP, coordinate, email, money value, timestamp or cross-model reference.

CREATE TABLE hosts (
  ip IpAddr,
  cidr Cidr,
  owner Email,
  location GeoPoint,
  monthly_cost Money,
  service_ref TableRef
) WITH CONTEXT INDEX ON (ip, owner);

Network

IpAddr · Ipv4 · Ipv6 · MacAddr · Cidr · Subnet · Port

Geo

Latitude · Longitude · GeoPoint

Identity

Uuid · Email · Url · Phone · Semver

Money

Currency · AssetCode · Money · Decimal

Visual

Color · ColorAlpha

Temporal

Timestamp · TimestampMs · Date · Time · Duration

Refs

NodeRef · EdgeRef · VectorRef · RowRef · KeyRef · DocRef

insert typed values · INSERT INTO hosts (ip, cidr, owner, location, monthly_cost, service_ref) VALUES ('10.0.0.1', '10.0.0.0/24', 'alice@co.com', '-23.550520,-46.633308', MONEY('USD', 1299, 2), 'services');
invalid values fail at write · INSERT INTO hosts (ip, owner) VALUES ('not-an-ip', 'alice'); -- rejected before the record is persisted
query output shape
{
  "ok": true,
  "query": "SELECT ip, owner, location, monthly_cost, service_ref FROM hosts",
  "mode": "sql",
  "capability": "table",
  "statement": "select",
  "engine": "runtime-sql",
  "record_count": 1,
  "result": {
    "columns": ["ip", "owner", "location", "monthly_cost", "service_ref"],
    "records": [{
      "values": {
        "ip": "10.0.0.1",
        "owner": "alice@co.com",
        "location": "-23.550520,-46.633308",
        "monthly_cost": { "asset_code": "USD", "minor_units": 1299, "scale": 2 },
        "service_ref": "services"
      },
      "nodes": {},
      "edges": {},
      "paths": [],
      "vector_results": []
    }]
  },
  "selection": { "scope": "any" }
}
Less app code

No custom parsers for IP, email, CIDR, money, geo and refs in every service.

Bad data stops early

Validation happens before persistence, so downstream queries do not inherit malformed values.

Better queries

The engine can route typed values into geo, network, ref and semantic operations.

Cleaner JSON

Clients get predictable JSON shapes for values like Money, vectors and refs.

Embeddings + clustering

Semantic memory, similarity and vector groups in one place.

RedDB can auto-embed content on write, search semantically, expand context for ASK and cluster vector collections with K-Means or DBSCAN.

Auto embed · INSERT INTO articles (body) VALUES ('AI Safety') WITH AUTO EMBED (body) USING openai
Similarity · SEARCH SIMILAR TEXT 'suspicious login' COLLECTION logs USING groq
Context · SEARCH CONTEXT '192.168.1.1' FIELD ip DEPTH 2
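Putting these together, a minimal end-to-end RAG flow can be sketched with only the statements shown on this page — embed on write, retrieve, then ask (collection and provider names are illustrative):

```sql
-- ingest: content is embedded automatically at write time
INSERT INTO articles (body) VALUES ('AI Safety') WITH AUTO EMBED (body) USING openai;
INSERT INTO articles (body) VALUES ('Prompt injection mitigations') WITH AUTO EMBED (body) USING openai;

-- retrieve: semantic search over the same collection
SEARCH SIMILAR TEXT 'LLM security' COLLECTION articles;

-- generate: a grounded answer built from cross-model context
ASK 'what do we know about LLM security?';
```

No external vector store, no sync job: retrieval and generation run against the data you just wrote.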

Deploy modes

Start local. Scale out. Give agents memory.

embedded file://

Use it like SQLite inside a Rust app or local tool.

server http + grpc

Expose query, admin, backup and operational APIs.

agent mcp

Let AI agents query durable state directly.

AI providers

Swap models without rewriting your app.

OpenAI · Anthropic · Groq · OpenRouter · Together · Venice · DeepSeek · Ollama · Local
self-host quick start · 3 paths

$ curl -fsSL https://raw.githubusercontent.com/forattini-dev/reddb/main/install.sh | bash

$ npx reddb-cli@latest server --http --bind 127.0.0.1:8080

$ docker run --rm -p 8080:8080 ghcr.io/forattini-dev/reddb:latest

Design partners wanted

10 startups. 12 months free. Real input on the roadmap.

We are picking ten startups already running four or more datastores in production and feeling the pain. You get free Cloud Starter for a year. We get the founding-customer feedback that shapes RedDB's first stable release.

What we ask of you

  • 30 minutes of feedback every two weeks during private beta.
  • Permission to quote you (with approval) when we exit beta.
  • Willingness to break a few things together. Pre-1.0 is real.

What you get

  • 12 months free on Cloud Starter, no credit card.
  • Direct line to the engine team in Slack.
  • Founding pricing locked when GA lands.

FAQ

The questions you actually have.

Still uncertain? Email us or jump on GitHub Discussions.

Why not just Postgres + pgvector + Redis?

You can — it works until you also need graphs, time-series with retention, queues with consumer groups, geo, and a RAG pipeline. Then you are wiring five services and writing the glue. RedDB collapses that surface area into one engine and keeps SQL for the workload that wants SQL.

Is this production-ready? What does pre-1.0 mean for me?

The engine is in production with design partners but the public surface (HTTP, gRPC, file format) can change at minor versions. Use it where moving fast matters more than locking the API. We document every breaking change in the CHANGELOG and ship migration scripts.

Does AGPL mean I can't ship RedDB inside my SaaS?

AGPL kicks in when you distribute or expose modifications. If you run RedDB as your database behind a SaaS, that is fine — you just cannot fork the engine into a closed-source product. Need a non-AGPL license? Contact sales.

How do I migrate from Postgres / Mongo / Pinecone?

RedDB speaks the Postgres wire protocol, so existing clients (psql, pgx, JDBC, Prisma) connect without changes. Document import is a single CLI command per collection. Vector backfill streams from any OpenAI-compatible embedding API while the new collection is being populated.

What is the difference between self-host and Cloud?

Same engine, same file format. Self-host means you run the binary, manage backups, scale yourself. Cloud is the same binary plus a control plane that handles deploys, backups, metrics and updates. There is no Cloud-only feature in the engine.

How do you handle backups, replication and multi-region?

Self-host: continuous backup hooks for S3, R2, GCS, Turso and D1, plus WAL archiving for point-in-time recovery. Cloud: managed backups, scheduled snapshots and (on the Scale tier) read replicas across regions.

What happens to my data if you go out of business?

Cloud writes the same .rdb file format as self-host and exports continuous backups to a bucket you own. The CLI imports those backups directly. Worst case: you keep running the open-source build with no migration.

How does it compare to SurrealDB / EdgeDB / Supabase?

SurrealDB and EdgeDB are multi-model engines but neither ships native ASK / RAG. Supabase is managed Postgres with extensions — great if Postgres is enough for you. RedDB's differentiation is the cross-model ASK plus the seven first-class data models in one engine, with both managed and embedded distribution.

How does performance actually compare?

Cache-hit reads land under 1ms. Bulk insert via gRPC sustains roughly 241k ops/sec on commodity hardware. OLAP-heavy GROUP BY work is currently 2–5× behind ClickHouse. Benchmarks and methodology live in the repo.

How do I get support?

Self-host: GitHub Issues and the community Discord. Cloud Starter: email with a 24h response target. Cloud Scale: a dedicated Slack/Teams channel and a 99.9% SLA.

Ready to ship one database?

Stop running 7 databases. Start with RedDB today.

Self-host the open-source build in 60 seconds, or reserve a Cloud Starter slot in private beta.