SQLite

Ink provides managed SQLite databases powered by Turso — globally replicated, edge-ready SQLite. Your agent provisions a database and gets connection credentials in seconds.

Creating a database

Your agent uses the create_resource MCP tool:

Tool Call

{
  "name": "app_db",
  "type": "sqlite"
}

Response
{
  "resource_id": "res_k8x2m",
  "name": "app_db",
  "type": "sqlite",
  "region": "eu-central",
  "database_url": "libsql://app-db-abc123.turso.io",
  "auth_token": "eyJ...",
  "status": "ready"
}
Parameter   Value
type        sqlite
name        Alphanumeric and underscores only
region      eu-central (default)

Connecting

After creation, your agent retrieves connection details using get_resource and passes them as environment variables to your service:

  • DATABASE_URL — the Turso database URL (e.g., libsql://your-db.turso.io)
  • DATABASE_AUTH_TOKEN — the authentication token
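A service can fail fast if either variable is missing. A minimal startup check might look like this (the helper name is illustrative, not part of Ink):

```python
import os

def load_db_config() -> dict:
    """Collect the connection settings injected into the service environment."""
    missing = [k for k in ("DATABASE_URL", "DATABASE_AUTH_TOKEN") if k not in os.environ]
    if missing:
        raise RuntimeError(f"missing environment variables: {', '.join(missing)}")
    return {
        "url": os.environ["DATABASE_URL"],
        "auth_token": os.environ["DATABASE_AUTH_TOKEN"],
    }
```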

Client libraries

Node.js

npm install @libsql/client
import { createClient } from '@libsql/client';

const db = createClient({
  url: process.env.DATABASE_URL,
  authToken: process.env.DATABASE_AUTH_TOKEN,
});

const result = await db.execute('SELECT * FROM users');

Python

pip install libsql-experimental
import os

import libsql_experimental as libsql

conn = libsql.connect(
    database=os.environ["DATABASE_URL"],
    auth_token=os.environ["DATABASE_AUTH_TOKEN"]
)
cursor = conn.execute("SELECT * FROM users")

Vector search

libSQL includes native vector support — no extensions required. Store embeddings alongside your data and run similarity search directly in SQL. This makes it straightforward to build RAG (Retrieval-Augmented Generation) pipelines, semantic search, and recommendation systems without a separate vector database.

Supported vector types

Type          Description
FLOAT32(n)    32-bit float vectors (default, best accuracy)
FLOAT64(n)    64-bit float vectors (highest precision)
FLOAT16(n)    16-bit float vectors (half memory)
FLOAT8(n)     8-bit quantized vectors
FLOAT1BIT(n)  Binary vectors (minimal storage)
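To make the memory trade-offs concrete, here is a rough per-vector storage estimate (an illustrative helper, not part of libSQL; it ignores row and index overhead):

```python
import math

# Bytes per element for each libSQL vector type (FLOAT1BIT packs 8 dimensions per byte).
BYTES_PER_ELEMENT = {"FLOAT64": 8, "FLOAT32": 4, "FLOAT16": 2, "FLOAT8": 1}

def vector_storage_bytes(vec_type: str, dims: int) -> int:
    """Approximate raw storage for one vector of the given type and dimension."""
    if vec_type == "FLOAT1BIT":
        return math.ceil(dims / 8)
    return BYTES_PER_ELEMENT[vec_type] * dims
```

For example, a FLOAT32(1536) embedding takes about 6 KB raw, while FLOAT1BIT(1536) needs only 192 bytes.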
-- Store documents with embeddings
CREATE TABLE documents (
  id INTEGER PRIMARY KEY,
  content TEXT,
  embedding FLOAT32(1536)
);

-- Create a vector index for fast similarity search
CREATE INDEX documents_idx ON documents (
  libsql_vector_idx(embedding)
);

-- Insert a document with its embedding
INSERT INTO documents VALUES (
  1,
  'How to deploy a service on Ink',
  vector('[0.1, 0.2, ...]')
);

-- Find the 10 most similar documents to a query embedding
SELECT d.id, d.content
FROM vector_top_k('documents_idx', vector('[0.05, 0.18, ...]'), 10) AS v
JOIN documents AS d ON d.id = v.id;
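Under the hood, the index ranks rows by vector distance; assuming the default cosine metric (exposed in SQL as vector_distance_cos), the computation can be sketched in pure Python for intuition:

```python
import math

def cosine_distance(a: list[float], b: list[float]) -> float:
    """Cosine distance (1 - cosine similarity); lower means more similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (norm_a * norm_b)
```

Identical vectors have distance 0, orthogonal vectors have distance 1.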

Building RAG with Ink

Your agent can provision a vector-enabled SQLite database and deploy a RAG service in a single conversation:

  1. create_resource to provision a SQLite database
  2. Your service creates tables with FLOAT32(n) columns for embeddings
  3. Ingest documents — generate embeddings with any provider (OpenAI, Cohere, etc.) and store them
  4. Query with vector_top_k() to retrieve relevant context, then pass it to an LLM

Since vector search is built into libSQL, there's no separate infrastructure to manage. One database handles both your application data and embeddings.
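The final step above — passing retrieved context to an LLM — usually amounts to prompt assembly. A minimal sketch (the function name and prompt wording are illustrative):

```python
def build_rag_prompt(question: str, contexts: list[str]) -> str:
    """Join retrieved document snippets into a grounded prompt for an LLM."""
    context_block = "\n\n".join(f"[{i + 1}] {c}" for i, c in enumerate(contexts))
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context_block}\n\n"
        f"Question: {question}"
    )
```

The rows returned by the vector_top_k() query feed straight into the contexts list.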

Specs

Property      Value
Provider      Turso
Engine        libSQL (SQLite fork)
Max size      100 MB (default)
Region        EU Central
Replication   Global edge replicas
Latency       Sub-millisecond reads at the edge

Deleting

Use the delete_resource MCP tool. This permanently deletes the database and all of its data; the action cannot be undone.
