| path | title | contents |
|---|---|---|
docs/semantic_catalog/more-questions.md | More Questions to Try |
# More Questions to Try
| ID | Complexity | Question |
|----|-----------:|----------|
| 1 | easy | What is the total number of aircraft in the fleet? |
| 2 | easy | How many international airports are in the database? |
| 3 | easy | What is the average flight duration for all flights? |
| 4 | easy | Which airports ar... |
docs/semantic_catalog/quickstart-demo-data.md | Quickstart with demo data | # Quickstart with demo data
## Overview
We are going to use an open-source Postgres database named "postgres air" to demonstrate SQL generation.
There are a few setup steps to take first.
We will use pgai to find database objects in the postgres air database and automatically generate natural language descriptions o... |
docs/semantic_catalog/README.md | Semantic Catalog | # Semantic Catalog
## What is "text-to-sql"?
Text-to-SQL is a natural language processing technology that converts questions posed in human language into SQL queries
that you run against relational databases such as PostgreSQL. Text-to-SQL is a tool that enables non-technical users to
interact with data and database sy... |
docs/semantic_catalog/cli.md | Semantic Catalog CLI Reference | # Semantic Catalog CLI Reference
The pgai semantic catalog feature provides a comprehensive command-line interface for managing semantic catalogs that enable natural language to SQL functionality. This document provides detailed information about each CLI command, their usage, and their purpose in the text-to-SQL work... |
docs/semantic_catalog/quickstart-your-data.md | Quickstart with your data | # Quickstart with your data
## Overview
This quickstart will help you get up and running with the semantic catalog on your own database.
We will first need to create a semantic catalog in a database.
This semantic catalog will house all the semantic descriptions of your database model.
You can put the semantic catal... |
docs/utils/chunking.md | Chunk text with SQL functions | # Chunk text with SQL functions
The `ai.chunk_text` and `ai.chunk_text_recursively` functions allow you to split text into smaller chunks.
## Example usage
Given a table like this
```sql
create table blog
( id int not null primary key
, title text
, body text
);
```
You can chunk the text in the `body` column like... |
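As a hedged sketch of what such a query might look like (assuming `ai.chunk_text` returns a set of rows with `seq` and `chunk` columns; check the pgai function reference for the exact signature and parameters):

```sql
-- Hypothetical sketch: split each blog body into chunks.
-- The (seq, chunk) output columns are assumptions, not a
-- confirmed signature.
select b.id, c.seq, c.chunk
from blog b
cross join lateral ai.chunk_text(b.body) c
order by b.id, c.seq;
```

The `cross join lateral` lets the function see each row's `body` value, producing one output row per chunk.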
docs/vectorizer/document-embeddings.md | Document embeddings in pgai | # Document embeddings in pgai
This is a comprehensive walkthrough of how embedding generation for documents works in pgai. If you want to get started quickly, check out the [runnable example](/examples/embeddings_from_documents).
To process documents, you need to:
1. [Set up document storage](#setting-up-document-stor... |
docs/vectorizer/adding-embedding-integration.md | Adding a Vectorizer embedding integration | # Adding a Vectorizer embedding integration
We welcome contributions to add new vectorizer embedding integrations.
The vectorizer consists of two components: the configuration, and the
vectorizer worker.
## Configuration
The vectorizer configuration lives in the database, in the `ai.vectorizer`
table. The `ai.creat... |
docs/vectorizer/api-reference.md | pgai Vectorizer API reference | # pgai Vectorizer API reference
This page provides an API reference for Vectorizer functions. For an overview
of Vectorizer and how it works, see the [Vectorizer Guide](/docs/vectorizer/overview.md).
A vectorizer provides you with a powerful and automated way to generate and
manage LLM embeddings for your PostgreSQL... |
docs/vectorizer/migrating-from-extension.md | Migrating from the extension to the python library | Previous versions of pgai vectorizer used an extension to provide the vectorizer
functionality. We have removed the need for the extension and put the
vectorizer code into the pgai python library. This change allows the vectorizer
to be used on more PostgreSQL cloud providers (AWS RDS, Supabase, etc.) and
simplifies t... |
docs/vectorizer/s3-documents.md | Pgai vectorizer S3 integration guide | # Pgai vectorizer S3 integration guide
Pgai vectorizers can be configured to create vector embeddings for documents stored in S3 buckets. We have a [general guide for embedding documents](./document-embeddings.md) that walks you through the steps to configure your vectorizer to load, parse, chunk and embed documents. ... |
docs/vectorizer/python-integration.md | Overview |
# Overview
This document describes how to create and run vectorizers from Python.
# Installation
First, install the pgai library:
```bash
pip install pgai
```
Then, you need to install the necessary database tables and functions. All database objects will be created in the `ai` schema. This is done by running the ... |
docs/vectorizer/quick-start.md | Vectorizer quick start | # Vectorizer quick start
This page shows you how to create an Ollama-based vectorizer in a self-hosted Postgres instance. We also show how simple it is to do semantic search on the automatically embedded data!
If you prefer working with the OpenAI API instead of self-hosting models, you can jump over to the [openai qu... |
docs/vectorizer/quick-start-ollama.md | Vectorizer quick start with Ollama | # Vectorizer quick start with Ollama
## Go to our vectorizer quickstart [here](/docs/vectorizer/quick-start.md) to start with pgai and Ollama. |
docs/vectorizer/sqlalchemy-integration.md | SQLAlchemy Integration with pgai Vectorizer | # SQLAlchemy Integration with pgai Vectorizer
When creating vectorizers that use the `ai.destination_table` option, the vectorizer will create a new table in the database to store the vector embeddings. This guide describes how to integrate this new table,
and its relationship to your other tables, into your SQLAlche... |
docs/vectorizer/quick-start-openai.md | Vectorizer quick start with OpenAI | # Vectorizer quick start with OpenAI
This page shows you how to create a vectorizer in a self-hosted Postgres instance, then use
the pgai vectorizer worker to create embeddings from data in your database. To finish off, we show how simple it
is to do semantic search on the embedded data in one query!
## Setup a loca... |
docs/vectorizer/quick-start-voyage.md | Vectorizer quick start with VoyageAI | # Vectorizer quick start with VoyageAI
This page shows you how to create a vectorizer and run a semantic search on the automatically embedded data on a self-hosted Postgres instance.
To follow this tutorial you need to have a Voyage AI account API key. You can get one [here](https://www.voyageai.com/).
## Setup a loc... |
docs/vectorizer/worker.md | Running on Timescale Cloud | # Running on Timescale Cloud
When you install pgai on **Timescale Cloud**, the vectorizer worker is automatically activated and run on a schedule,
so you don't need to do anything; everything just works out of the box.
**How it works**: When you deploy a pgai vectorizer on Timescale Cloud, a scheduled job detects
whet... |
docs/vectorizer/overview.md | Automate AI embedding with pgai Vectorizer | # Automate AI embedding with pgai Vectorizer
Vector embeddings have emerged as a powerful tool for transforming text into
compact, semantically rich representations. This approach unlocks the potential
for more nuanced and context-aware searches, surpassing traditional
keyword-based methods. By leveraging vector embed... |
docs/vectorizer/alembic-integration.md | Alembic integration | # Alembic integration
Alembic is a database migration tool that allows you to manage your database schema. This document describes how to use Alembic to manage your vectorizer definitions, since those should be considered part of your database schema.
We first cover how to create vectorizers using the Alembic operat... |