Scalar Quantization (SQ)

Added in v1.26.0

Scalar quantization (SQ) is a vector compression technique that reduces the size of each vector by converting each 32-bit float dimension into an 8-bit integer, cutting the in-memory size of stored vectors to roughly a quarter.
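
Conceptually, SQ derives bucket boundaries from a sample of the data (the training set) and maps every float dimension onto one of 256 buckets. The standalone numpy sketch below illustrates the idea, using the sample's minimum and maximum as the value range; it is only a conceptual illustration, not Weaviate's internal implementation.

import numpy as np

# Conceptual scalar quantization: map float32 dimensions to uint8 buckets.
# Illustrative only; not Weaviate's internal code.
rng = np.random.default_rng(42)
training_vectors = rng.normal(size=(1000, 128)).astype(np.float32)

# Derive bucket boundaries from the training sample (here: its min and max).
lo, hi = training_vectors.min(), training_vectors.max()
step = (hi - lo) / 255.0

def quantize(v):
    """Compress a float32 vector to one byte per dimension."""
    return np.clip(np.round((v - lo) / step), 0, 255).astype(np.uint8)

def dequantize(q):
    """Approximate reconstruction used when estimating distances."""
    return q.astype(np.float32) * step + lo

v = rng.normal(size=128).astype(np.float32)
q = quantize(v)
print(v.nbytes, "bytes ->", q.nbytes, "bytes")  # 512 bytes -> 128 bytes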

To use SQ, enable it in the collection definition, then add data to the collection.

Enable compression for new collection

SQ can be enabled at collection creation time through the collection definition:

from weaviate.classes.config import Configure

client.collections.create(
    name="MyCollection",
    vector_config=Configure.Vectors.text2vec_openai(
        name="default",
        quantizer=Configure.VectorIndex.Quantizer.sq(),
    ),
)
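
With SQ enabled in the collection definition, add data as usual and the vectors are quantized transparently. A minimal sketch, assuming a connected client, an OpenAI API key for the text2vec-openai vectorizer, and a hypothetical title property:

collection = client.collections.get("MyCollection")

# The vectorizer generates the embedding; SQ compresses it on ingestion.
collection.data.insert(
    properties={"title": "A sample object"},
)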

Enable compression for existing collection

Added in v1.31

The ability to enable SQ compression after collection creation was added in Weaviate v1.31.

SQ can also be enabled for an existing collection by updating the collection definition:

from weaviate.classes.config import Reconfigure

collection = client.collections.get("MyCollection")
collection.config.update(
    vector_config=Reconfigure.Vectors.update(
        name="default",
        vector_index_config=Reconfigure.VectorIndex.hnsw(
            quantizer=Reconfigure.VectorIndex.Quantizer.sq(
                rescore_limit=20,
            ),
        ),
    )
)
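
To confirm the change, the collection configuration can be read back and inspected; a quick sketch (the exact shape of the returned configuration object may vary between client versions):

collection = client.collections.get("MyCollection")
config = collection.config.get()
print(config)  # inspect the index configuration of the "default" vector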

SQ parameters

To tune SQ, set these vectorIndexConfig parameters.

| Parameter | Type | Default | Details |
| --- | --- | --- | --- |
| sq: enabled | boolean | false | Uses SQ when true. |
| sq: rescoreLimit | integer | -1 | The minimum number of candidates to fetch before rescoring. |
| sq: trainingLimit | integer | 100000 | The size of the training set used to determine the scalar bucket boundaries. |
| sq: cache | boolean | false | Use the vector cache when true. |
| vectorCacheMaxObjects | integer | 1e12 | Maximum number of objects in the memory cache. By default, this limit is set to one trillion (1e12) objects when a new collection is created. For sizing recommendations, see Vector cache considerations. |

Note: The Python client v4 does not use the enabled parameter. To enable SQ with the v4 client, set a quantizer in the collection definition.

from weaviate.classes.config import Configure

client.collections.create(
    name="MyCollection",
    vector_config=Configure.Vectors.text2vec_openai(
        name="default",
        quantizer=Configure.VectorIndex.Quantizer.sq(
            rescore_limit=200,
            training_limit=50000,
            cache=True,
        ),
        vector_index_config=Configure.VectorIndex.hnsw(
            vector_cache_max_objects=100000,
        ),
    ),
)

Additional considerations

Multiple vector embeddings (named vectors)

Added in v1.24

Collections can have multiple named vectors. Each named vector has its own configuration, so compression must be enabled independently for each one, and each vector can use PQ, BQ, RQ, SQ, or no compression.
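
For example, one named vector can use SQ while another uses BQ or no compression. A sketch under these assumptions: both vectors use the text2vec-openai vectorizer, the collection and vector names are hypothetical, and passing a list to vector_config requires a recent Python client v4 release.

from weaviate.classes.config import Configure

client.collections.create(
    name="ArticleCollection",  # hypothetical collection
    vector_config=[
        # "title" vector compressed with SQ
        Configure.Vectors.text2vec_openai(
            name="title",
            source_properties=["title"],
            quantizer=Configure.VectorIndex.Quantizer.sq(),
        ),
        # "body" vector compressed with BQ
        Configure.Vectors.text2vec_openai(
            name="body",
            source_properties=["body"],
            quantizer=Configure.VectorIndex.Quantizer.bq(),
        ),
    ],
)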

Multi-vector embeddings (ColBERT, ColPali, etc.)

Added in v1.30

Multi-vector embeddings (implemented through models like ColBERT, ColPali, or ColQwen) represent each object or query using multiple vectors instead of a single vector. Just like with single vectors, multi-vectors support PQ, BQ, RQ, SQ, or no compression.

During the initial search phase, compressed vectors are used for efficiency. When computing the MaxSim operation, however, uncompressed vectors are used for more precise similarity calculations. This balances the search efficiency of compression with the accuracy of uncompressed vectors during final scoring.
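
For intuition, the MaxSim score between a multi-vector query and a multi-vector document sums, for each query token vector, its maximum dot product over the document's token vectors. An illustrative numpy sketch, not Weaviate's internal implementation:

import numpy as np

def max_sim(query_vectors, doc_vectors):
    """MaxSim: for each query token vector, take its best-matching
    document token vector (by dot product) and sum the results."""
    similarities = query_vectors @ doc_vectors.T  # shape: (n_query, n_doc)
    return float(similarities.max(axis=1).sum())

rng = np.random.default_rng(0)
query = rng.normal(size=(32, 128))  # e.g. 32 query token vectors
doc = rng.normal(size=(180, 128))   # e.g. 180 document token vectors
print(max_sim(query, doc))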

Further resources

Questions and feedback

If you have any questions or feedback, let us know in the user forum.