# Python API

Use `haiku.rag` directly in your Python applications.
## Basic Usage
```python
from pathlib import Path

from haiku.rag.client import HaikuRAG

# Use as async context manager (recommended)
async with HaikuRAG("path/to/database.db") as client:
    # Your code here
    pass
```
## Document Management
### Creating Documents
From text:
```python
doc = await client.create_document(
    content="Your document content here",
    uri="doc://example",
    metadata={"source": "manual", "topic": "example"}
)
```
From file:
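A minimal sketch for ingesting a local file; it assumes the client exposes a `create_document_from_source` method that accepts a file path, so check your installed version for the exact name and signature:

```python
from pathlib import Path

# Assumed ingestion method; verify the name and signature
# against your installed haiku.rag version.
doc = await client.create_document_from_source(Path("path/to/document.pdf"))
```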
From URL:
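Assuming the same source-ingestion method also accepts an HTTP(S) URL:

```python
# Assumed: the source-ingestion method accepts a URL string.
doc = await client.create_document_from_source("https://example.com/article.html")
```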
### Retrieving Documents
By ID:
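A sketch assuming a `get_document_by_id` accessor keyed by the document's ID:

```python
# Assumed accessor name; verify against your haiku.rag version.
doc = await client.get_document_by_id(1)
print(doc.content)
```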
By URI:
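Assuming a matching `get_document_by_uri` accessor that takes the `uri` supplied at creation time:

```python
# Assumed accessor name, mirroring the `uri` passed to create_document.
doc = await client.get_document_by_uri("doc://example")
```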
List all documents:
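Assuming a `list_documents` method (pagination parameters, if any, may differ):

```python
# Assumed listing method; iterates over all stored documents in this sketch.
docs = await client.list_documents()
for doc in docs:
    print(doc.id, doc.uri)
```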
### Updating Documents
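A sketch of an assumed fetch-modify-persist flow built on a `get_document_by_uri` accessor, mutable document fields, and an `update_document` method; attribute and method names may differ in your version:

```python
# Assumed flow: fetch the document, modify it, then persist the changes.
doc = await client.get_document_by_uri("doc://example")
doc.content = "Updated document content"
await client.update_document(doc)
```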
### Deleting Documents
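Assuming deletion is keyed by document ID via a `delete_document` method:

```python
# Assumed deletion method keyed by the document's ID.
await client.delete_document(doc.id)
```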
## Searching Documents
Basic search:
results = await client.search("machine learning algorithms", limit=5)
for chunk, score in results:
print(f"Score: {score:.3f}")
print(f"Content: {chunk.content}")
print(f"Document ID: {chunk.document_id}")
With options:
```python
results = await client.search(
    query="machine learning",
    limit=5,  # Maximum results to return
    k=60,     # RRF parameter for reciprocal rank fusion
)

# Process results
for chunk, relevance_score in results:
    print(f"Relevance: {relevance_score:.3f}")
    print(f"Content: {chunk.content}")
    print(f"From document: {chunk.document_id}")
    print(f"Document URI: {chunk.document_uri}")
    print(f"Document metadata: {chunk.document_meta}")
```
## Question Answering
Ask questions about your documents:
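A minimal sketch assuming the client exposes an `ask` method that returns the generated answer as a string; the method name and return type may differ in your installed version:

```python
# Assumed QA entry point; verify against your haiku.rag version.
answer = await client.ask("What does the documentation say about machine learning?")
print(answer)
```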
The QA agent will search your documents for relevant information and use the configured LLM to generate a comprehensive answer.
The QA provider and model can be configured via environment variables (see Configuration).