Quickstart: Locally hosted

Weaviate is an open-source vector database built to power AI applications. This quickstart guide will show you how to:

  1. Set up a collection - Create a collection and import data into it.
  2. Search - Perform a similarity (vector) search on your data.
  3. RAG - Perform Retrieval Augmented Generation (RAG) with a generative model.

If you encounter any issues along the way or have additional questions, ask in the user forum.

Prerequisites

Before we get started, install Docker on your machine. We will be running Weaviate and the Ollama language models locally. We recommend that you use a modern computer with at least 8GB of RAM, preferably 16GB or more.


Start Weaviate and Ollama with Docker Compose

Save the following code to a file named docker-compose.yml in your project directory.

services:
  weaviate:
    command:
      - --host
      - 0.0.0.0
      - --port
      - '8080'
      - --scheme
      - http
    image: cr.weaviate.io/semitechnologies/weaviate:1.34.0
    ports:
      - 8080:8080
      - 50051:50051
    volumes:
      - weaviate_data:/var/lib/weaviate
    restart: on-failure:0
    environment:
      AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: 'true'
      PERSISTENCE_DATA_PATH: '/var/lib/weaviate'
      ENABLE_MODULES: 'text2vec-ollama,generative-ollama'
      CLUSTER_HOSTNAME: 'node1'
      OLLAMA_API_ENDPOINT: 'http://ollama:11434'
    depends_on:
      - ollama

  ollama:
    image: ollama/ollama:0.12.9
    ports:
      - "11434:11434"
    volumes:
      - ollama_data:/root/.ollama

volumes:
  weaviate_data:
  ollama_data:

Run the following command to start a Weaviate instance and the Ollama server inside Docker containers:

docker compose up -d

Once the Ollama service starts, you can pull the required embedding model (nomic-embed-text) and generative model (llama3.2) in the ollama container:

docker compose exec ollama ollama pull nomic-embed-text
docker compose exec ollama ollama pull llama3.2
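
To confirm that both models finished downloading, you can list the models the Ollama server knows about via its REST API (the /api/tags endpoint). A minimal sketch, assuming the 11434 port mapping from the Compose file above:

import json
import urllib.request

# Ask the local Ollama server which models it has pulled
with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
    models = [m["name"] for m in json.load(resp)["models"]]

print(models)  # Expect entries like 'nomic-embed-text:latest' and 'llama3.2:latest'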

Install a client library

Follow the instructions below to install one of the official client libraries, available in Python, JavaScript/TypeScript, Go, and Java.

pip install -U "weaviate-client[agents]"
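
Before moving on, you can verify that the client can reach your local instance. A minimal sketch using the v4 client's is_ready() check:

import weaviate

# Connect to the Weaviate instance started by Docker Compose (ports 8080/50051)
with weaviate.connect_to_local() as client:
    print(client.is_ready())  # True once Weaviate is up and accepting requests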

Step 1: Create a collection & import data

There are two paths you can choose from when importing data: you can let Weaviate create vectors with an embedding model provider, or you can supply your own pre-computed vectors (a sketch of the second path follows the code example below).

The following example takes the first path. It creates a collection called Movie with the Ollama embedding model provider (text2vec-ollama) for vectorizing data during import and for querying. You are also free to use any other available embedding model provider.

import weaviate
from weaviate.classes.config import Configure

# Step 1.1: Connect to your local Weaviate instance
with weaviate.connect_to_local() as client:
    # Step 1.2: Create a collection
    movies = client.collections.create(
        name="Movie",
        vector_config=Configure.Vectors.text2vec_ollama(  # Configure the Ollama embedding integration
            api_endpoint="http://ollama:11434",  # If using Docker you might need: http://host.docker.internal:11434
            model="nomic-embed-text",  # The model to use
        ),
    )

    # Step 1.3: Import three objects
    data_objects = [
        {"title": "The Matrix", "description": "A computer hacker learns about the true nature of reality and his role in the war against its controllers.", "genre": "Science Fiction"},
        {"title": "Spirited Away", "description": "A young girl becomes trapped in a mysterious world of spirits and must find a way to save her parents and return home.", "genre": "Animation"},
        {"title": "The Lord of the Rings: The Fellowship of the Ring", "description": "A meek Hobbit and his companions set out on a perilous journey to destroy a powerful ring and save Middle-earth.", "genre": "Fantasy"},
    ]

    movies = client.collections.use("Movie")
    with movies.batch.fixed_size(batch_size=200) as batch:
        for obj in data_objects:
            batch.add_object(properties=obj)

    print(f"Imported & vectorized {len(movies)} objects into the Movie collection")

Step 2: Semantic search

Semantic search finds results based on meaning; in Weaviate this is called nearText. The following example searches for the two objects (limit) whose meaning is most similar to the query sci-fi.

import weaviate
import json

# Step 2.1: Connect to your local Weaviate instance
with weaviate.connect_to_local() as client:
    # Step 2.2: Use the Movie collection
    movies = client.collections.use("Movie")

    # Step 2.3: Perform a semantic search with NearText
    response = movies.query.near_text(
        query="sci-fi",
        limit=2,
    )

    for obj in response.objects:
        print(json.dumps(obj.properties, indent=2))  # Inspect the results
Example response

{
  "genre": "Science Fiction",
  "title": "The Matrix",
  "description": "A computer hacker learns about the true nature of reality and his role in the war against its controllers."
}
{
  "genre": "Fantasy",
  "title": "The Lord of the Rings: The Fellowship of the Ring",
  "description": "A meek Hobbit and his companions set out on a perilous journey to destroy a powerful ring and save Middle-earth."
}
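
You can also ask Weaviate how close each match is. This sketch uses the v4 client's MetadataQuery to return the vector distance alongside each object (a smaller distance means a closer semantic match):

import weaviate
from weaviate.classes.query import MetadataQuery

with weaviate.connect_to_local() as client:
    movies = client.collections.use("Movie")

    response = movies.query.near_text(
        query="sci-fi",
        limit=2,
        return_metadata=MetadataQuery(distance=True),  # Include the vector distance
    )

    for obj in response.objects:
        print(obj.properties["title"], obj.metadata.distance)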
Weaviate Agents

Try the Query Agent with a Weaviate Cloud instance. You provide a prompt or question in natural language, and the Query Agent takes care of all the steps needed to provide an answer.

Step 3: Retrieval augmented generation (RAG)

Retrieval augmented generation (RAG), also called generative search, works by prompting a large language model (LLM) with a combination of a user query and data retrieved from a database.

The following example combines the semantic search for the query sci-fi with a prompt to generate a tweet using the Ollama generative model (generative-ollama).

import weaviate
from weaviate.classes.generate import GenerativeConfig

# Step 3.1: Connect to your local Weaviate instance
with weaviate.connect_to_local() as client:
    # Step 3.2: Use the Movie collection
    movies = client.collections.use("Movie")

    # Step 3.3: Perform RAG on NearText results
    response = movies.generate.near_text(
        query="sci-fi",
        limit=1,
        grouped_task="Write a tweet with emojis about this movie.",
        generative_provider=GenerativeConfig.ollama(  # Configure the Ollama generative integration
            api_endpoint="http://ollama:11434",  # If using Docker you might need: http://host.docker.internal:11434
            model="llama3.2",  # The model to use
        ),
    )

    print(response.generative.text)  # Inspect the results
Example response
🕶️ Unplug from the system & join Neo's journey 💊🐰

"The Matrix" will blow your mind 🤯 as reality unravels 🌀

Kung-fu, slow-mo & mind-bending sci-fi 🥋🕴️

Are you ready to see how deep the rabbit hole goes? 🔴🔵 #TheMatrix #WakeUp
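
Besides grouped_task, which feeds all retrieved objects to the LLM in a single prompt, the client also supports per-object generation through single_prompt, where placeholders in braces are filled in from each object's properties. A minimal sketch:

import weaviate
from weaviate.classes.generate import GenerativeConfig

with weaviate.connect_to_local() as client:
    movies = client.collections.use("Movie")

    response = movies.generate.near_text(
        query="sci-fi",
        limit=2,
        single_prompt="Summarize the movie {title} in one sentence: {description}",
        generative_provider=GenerativeConfig.ollama(
            api_endpoint="http://ollama:11434",
            model="llama3.2",
        ),
    )

    for obj in response.objects:
        print(obj.generative.text)  # One generated response per retrieved object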

Next steps

To continue learning about Weaviate, we recommend exploring the rest of the documentation, including the Weaviate Academy courses.


Questions and feedback

If you have any questions or feedback, let us know in the user forum.