DeepEval

DeepEval is an open-source LLM evaluation framework built for engineers to unit-test LLM applications and AI agents. It provides out-of-the-box LLM-powered metrics for RAG, conversational, red-teaming, agentic, and multimodal evaluation, as well as custom metrics.
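
A DeepEval unit test pairs a test case (input, output, retrieved context) with one or more metrics. The minimal sketch below is illustrative; the query, answer, context, and threshold are assumptions, not from this page:

```python
# A minimal sketch of a DeepEval unit test. The query, answer, context,
# and threshold below are illustrative assumptions.
from deepeval import evaluate
from deepeval.metrics import AnswerRelevancyMetric
from deepeval.test_case import LLMTestCase

# One test case: the user input, the LLM's output, and the retrieved
# context the answer was grounded in.
test_case = LLMTestCase(
    input="What is Weaviate?",
    actual_output="Weaviate is an open-source vector database.",
    retrieval_context=["Weaviate is an open-source AI-native vector database."],
)

# LLM-powered metric: scores how well the answer addresses the input.
metric = AnswerRelevancyMetric(threshold=0.7)
evaluate(test_cases=[test_case], metrics=[metric])
```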

DeepEval and Weaviate

You can use DeepEval to optimize search, retrieval, and RAG with Weaviate: its custom and RAG metrics let you compare hyperparameters, such as the embedding model or top-K, and select the best-performing configuration for your Weaviate collection.
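
One way to run such a comparison is sketched below, under several assumptions that are not from this page: a local Weaviate instance, a collection named `Docs` with a `text` property and a configured vectorizer, and an illustrative query and set of top-K values.

```python
# A sketch of sweeping top-K for a Weaviate collection and scoring each
# setting with DeepEval. Assumptions: a local Weaviate instance, a
# collection named "Docs" with a "text" property and a configured
# vectorizer (required for near_text), and illustrative K values.
import weaviate
from deepeval import evaluate
from deepeval.metrics import ContextualRelevancyMetric
from deepeval.test_case import LLMTestCase

client = weaviate.connect_to_local()
collection = client.collections.get("Docs")

query = "How does Weaviate index vectors?"
for k in (1, 3, 5, 10):
    # Retrieve the top-K chunks for this candidate hyperparameter value.
    response = collection.query.near_text(query=query, limit=k)
    retrieval_context = [str(obj.properties["text"]) for obj in response.objects]

    test_case = LLMTestCase(
        input=query,
        actual_output="<answer from your RAG pipeline>",  # hypothetical placeholder
        retrieval_context=retrieval_context,
    )
    # Compare scores across K values and keep the best-performing setting.
    evaluate(test_cases=[test_case], metrics=[ContextualRelevancyMetric()])

client.close()
```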

Custom Metrics

  1. G-Eval
  2. DAG
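
G-Eval, for instance, builds a custom LLM-judged metric from plain-language criteria. A minimal sketch, where the criteria text and test data are illustrative assumptions:

```python
# A minimal G-Eval sketch: a custom LLM-judged metric defined from
# plain-language criteria. The criteria and test data are illustrative.
from deepeval.metrics import GEval
from deepeval.test_case import LLMTestCase, LLMTestCaseParams

correctness = GEval(
    name="Correctness",
    criteria=(
        "Determine whether the actual output is factually consistent "
        "with the retrieval context."
    ),
    evaluation_params=[
        LLMTestCaseParams.ACTUAL_OUTPUT,
        LLMTestCaseParams.RETRIEVAL_CONTEXT,
    ],
)

test_case = LLMTestCase(
    input="What is Weaviate?",
    actual_output="Weaviate is a vector database.",
    retrieval_context=["Weaviate is an open-source vector database."],
)

correctness.measure(test_case)
print(correctness.score, correctness.reason)
```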

RAG Metrics

  1. Answer Relevancy
  2. Faithfulness
  3. Contextual Precision
  4. Contextual Recall
  5. Contextual Relevancy
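
A hedged sketch running one test case through all five metrics above (the test data is illustrative; note that Contextual Precision and Contextual Recall also require an `expected_output` ground truth):

```python
# A sketch running one test case through all five RAG metrics above.
# The test data is illustrative; Contextual Precision and Contextual
# Recall also require an expected_output (ground truth).
from deepeval import evaluate
from deepeval.metrics import (
    AnswerRelevancyMetric,
    ContextualPrecisionMetric,
    ContextualRecallMetric,
    ContextualRelevancyMetric,
    FaithfulnessMetric,
)
from deepeval.test_case import LLMTestCase

test_case = LLMTestCase(
    input="What is Weaviate?",
    actual_output="Weaviate is an open-source vector database.",
    expected_output="Weaviate is an open-source vector database.",
    retrieval_context=["Weaviate is an open-source vector database."],
)

evaluate(
    test_cases=[test_case],
    metrics=[
        AnswerRelevancyMetric(),
        FaithfulnessMetric(),
        ContextualPrecisionMetric(),
        ContextualRecallMetric(),
        ContextualRelevancyMetric(),
    ],
)
```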

Hands-on Learning

| Topic | Description | Resource |
| --- | --- | --- |
| Optimizing RAG with DeepEval | This notebook shows how to build a RAG pipeline using Weaviate and how to optimize its performance with DeepEval. | Notebook |