Glossary
Embeddings

Definition: Embeddings are dense numerical vectors that represent the semantic meaning of text, images, or other data. In AI chatbots, embeddings allow the system to find content that is semantically similar to a user's question — even when the exact words don't match.

What are embeddings?

Embeddings are the bridge between human language and machine computation. A word, sentence, or document is converted into a list of hundreds or thousands of numbers — a "vector" — where each number captures some aspect of the content's meaning. This process is called "embedding" the text.

What makes embeddings powerful is that semantically similar content produces similar vectors. The words "refund" and "money back" end up as vectors that are mathematically close to each other. The phrases "cancel my subscription" and "how do I stop being charged?" end up close together. This is how AI finds relevant content even when users phrase things differently from how it's written in your documentation.
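The "mathematically close" idea can be sketched with cosine similarity, the standard measure of how closely two vectors point in the same direction. The three-number vectors below are invented for illustration; real embedding models produce hundreds or thousands of dimensions:

```python
import math

def cosine_similarity(a, b):
    """Similarity of two vectors: near 1.0 = same direction, near 0.0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Illustrative 3-dimensional "embeddings" (made up for this sketch,
# not real model output)
refund     = [0.9, 0.1, 0.0]
money_back = [0.8, 0.2, 0.1]
weather    = [0.0, 0.1, 0.9]

print(cosine_similarity(refund, money_back))  # high: similar meaning
print(cosine_similarity(refund, weather))     # low: unrelated
```

With a real model, "refund" and "money back" would score high against each other and low against unrelated text, which is exactly what retrieval relies on.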

The map analogy

Think of embeddings as coordinates on a map. Related concepts are positioned close together. "Refund" and "money back" end up near each other. "Cancel" and "terminate subscription" are neighbors. When a user asks about getting their money back, the search finds the refund policy because it's geographically close on the semantic map — even though the exact words differ.

The role of embeddings in a RAG chatbot

1. Your content is embedded

Every paragraph of your documentation is passed through an embedding model, producing a dense vector. These vectors are stored in a vector database.

2. User question is embedded

When a user asks a question, that question text is also converted to a vector using the same embedding model.

3. Similarity search finds relevant content

The vector database finds the content chunks whose embeddings are mathematically closest to the question's embedding — the most semantically similar content.

4. LLM generates the answer

The retrieved content chunks are passed to the LLM along with the original question. The LLM generates a clear, accurate answer grounded in your content.
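The four steps above can be sketched end to end. This is a toy: the `embed` function below is a simple word-count stand-in for a real embedding model (so it cannot bridge different wordings the way learned embeddings do), and a plain Python list stands in for a vector database:

```python
import math
from collections import Counter

# Stand-in for a trained embedding model: counts known words.
# A real model would map any text to a dense learned vector.
VOCAB = ["refund", "money", "back", "cancel", "subscription", "policy"]

def embed(text):
    counts = Counter(word.strip(".,:?") for word in text.lower().split())
    return [float(counts[w]) for w in VOCAB]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a)) or 1.0
    nb = math.sqrt(sum(x * x for x in b)) or 1.0
    return dot / (na * nb)

# Step 1: embed every content chunk and store it (an in-memory "vector database")
chunks = [
    "Our refund policy: request your money back within 30 days.",
    "To cancel a subscription, open account settings.",
]
index = [(chunk, embed(chunk)) for chunk in chunks]

# Step 2: embed the user's question with the same model
question = "how do I get a refund"
q_vec = embed(question)

# Step 3: similarity search returns the closest chunk
best_chunk, _ = max(index, key=lambda item: cosine(q_vec, item[1]))

# Step 4: the retrieved chunk plus the question would be sent to an LLM
prompt = f"Answer using this context:\n{best_chunk}\n\nQuestion: {question}"
print(best_chunk)
```

The key design point survives the simplification: content and questions pass through the same embedding function, so their vectors live in the same space and can be compared directly.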

Why embeddings matter for your chatbot

Semantic accuracy

Your chatbot finds the right answer regardless of how the question is phrased. No more missed answers due to word choice.

Multi-language by default

Many embedding models are multilingual: they place semantically similar text from different languages close together, so a question in Spanish can match English content.

Sub-second retrieval

Embedding similarity search is extremely fast — results return in milliseconds, enabling real-time chat responses.

Scales with your content

Add more content and the embedding index grows accordingly. No manual tagging, labeling, or categorization required.

See embeddings in action

Converso automatically generates and manages embeddings for all your content — no technical knowledge needed.