
How to Evaluate a Text-to-SQL Agent

In this guide, you will learn how to systematically evaluate and improve a text-to-SQL system using Ragas.

What you will accomplish

  • Set up a baseline text-to-SQL system for evaluation
  • Learn how to create evaluation metrics
  • Build a reusable evaluation pipeline for your SQL agent
  • Implement improvements based on error analysis

Set up your environment

We have packaged a simple module you can install and run, so you can focus on understanding the evaluation process rather than building the application.

uv pip install "ragas-examples[text2sql]"

Quick agent test

Test the text-to-SQL agent to see how it converts natural language into SQL:

import os
import asyncio
from openai import AsyncOpenAI
from ragas_examples.text2sql.text2sql_agent import Text2SQLAgent

# Set your OpenAI API key
os.environ["OPENAI_API_KEY"] = "your-api-key-here"

# Create agent
openai_client = AsyncOpenAI(api_key=os.environ["OPENAI_API_KEY"])
agent = Text2SQLAgent(client=openai_client, model_name="gpt-5-mini")

# Test with a sample query
test_query = "How much open credit does customer Andrew Bennett?"
result = asyncio.run(agent.query(test_query))

print(f"Natural Query: {result['query']}")
print(f"Generated SQL: {result['sql']}")
Output
Natural Query: How much open credit does customer Andrew Bennett?
Generated SQL: select sum(open_balance) from ( select distinct transaction_id, open_balance from master_txn_table where customers = "Andrew Bennett" )

This generates SQL from a natural-language query. Now let's build a systematic evaluation process.

Download BookSQL

Before running the agent or the database utilities, download the gated BookSQL dataset from Hugging Face:

huggingface-cli login
uv run python -m ragas_examples.text2sql.data_utils --download-data

If you see an authentication error, visit the dataset page and accept the terms first: BookSQL on Hugging Face.

Complete code

You can view the complete code for the agent and the evaluation pipeline here.

Prepare your dataset

We prepared a balanced sample of 99 examples from the BookSQL dataset (33 each of easy, medium, and hard queries). You can start evaluating right away, or follow the next section to create your own dataset.

Download and inspect the sample dataset

# Download the sample CSV from GitHub
curl -o booksql_sample.csv https://raw.githubusercontent.com/vibrantlabsai/ragas/main/examples/ragas_examples/text2sql/datasets/booksql_sample.csv
# View the first few rows to understand the structure
head -5 booksql_sample.csv
| Query | SQL | Levels | split |
|---|---|---|---|
| What is the balance due from Richard Aguirre? | select sum(open_balance) from ( select distinct transaction_id, open_balance from master_txn_table where customers = "Richard Aguirre" ) | medium | train |
| What is the balance due from Sarah Oconnor? | select sum(open_balance) from ( select distinct transaction_id, open_balance from master_txn_table where customers = "Sarah Oconnor" ) | medium | train |
| What is my average invoice amount from Jeffrey Moore? | select avg(amount) from (select distinct transaction_id, amount from master_txn_table where customers = "Jeffrey Moore" and transaction_type = 'invoice') | hard | train |
| How much open credit does customer Andrew Bennett? | select sum(open_balance) from ( select distinct transaction_id, open_balance from master_txn_table where customers = "Andrew Bennett" ) | easy | train |
📋 Optional: how we prepared the sample dataset

Download and inspect the dataset

In this guide we use the BookSQL dataset. If you have your own dataset, skip this section.

Download the dataset

export HF_TOKEN=your-huggingface-token
uv run python -m ragas_examples.text2sql.data_utils --download-data

Note: BookSQL is a gated dataset. If you hit an authentication error, visit the dataset page, accept the terms, and run huggingface-cli login.

Inspect the dataset structure

# Check the database schema
sqlite3 BookSQL-files/BookSQL/accounting.sqlite ".schema" | head -20

Expected schema output

CREATE TABLE master_txn_table(
                    id INTEGER ,
                    businessID INTEGER NOT NULL ,
                    Transaction_ID INTEGER NOT NULL,
                    Transaction_DATE DATE NOT NULL,
                    Transaction_TYPE TEXT NOT NULL,
                    Amount DOUBLE NOT NULL,
                    CreatedDATE DATE NOT NULL,
                    CreatedUSER TEXT NOT NULL,
                    Account TEXT NOT NULL,
                    AR_paid TEXT,
                    AP_paid TEXT,
                    Due_DATE DATE,
                    Open_balance DOUBLE,
                    Customers TEXT,
                    Vendor TEXT,
                    Product_Service TEXT,
                    Quantity INTEGER,
                    Rate DOUBLE,
                    Credit DOUBLE,

The dataset contains

  • Database: a SQLite file with accounting data (invoices, customers, etc.)
  • Questions: natural-language queries in English
  • SQL: the corresponding SQL queries
  • Difficulty levels: easy, medium, and hard

Create a balanced evaluation subset

uv run python -m ragas_examples.text2sql.data_utils --create-sample --samples 33 --validate --require-data

This creates a balanced CSV file containing validated queries that return actual data.

Expected output

📖 Loading data from BookSQL-files/BookSQL/train.json...
📊 Loaded 70828 total records
🚂 Found 70828 train records
🔍 Removed 35189 duplicate records (same Query + SQL)
📊 35639 unique records remaining
📈 Difficulty distribution (after deduplication):
   • medium: 20576 records
   • hard: 11901 records
   • easy: 3162 records
✅ Added 33 validated 'easy' records
✅ Added 33 validated 'medium' records
✅ Added 33 validated 'hard' records
💾 Saved 99 records to datasets/booksql_sample.csv
📋 Final distribution:
   • medium: 33 records
   • hard: 33 records
   • easy: 33 records

This creates the datasets/booksql_sample.csv file with 99 balanced examples across the difficulty levels.
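The balancing step above can be sketched with plain pandas. This is a simplified illustration, not the actual data_utils implementation: the inline data is made up, and the per-query validation against the database is omitted.

```python
import pandas as pd

# Simplified sketch of balanced sampling by difficulty level.
# The real script also validates each query against the SQLite database.
df = pd.DataFrame({
    "Query": [f"q{i}" for i in range(6)],
    "Levels": ["easy", "easy", "medium", "medium", "hard", "hard"],
})

samples_per_level = 1  # the guide uses 33 per level
sample = df.groupby("Levels").sample(n=samples_per_level, random_state=42)
print(sample["Levels"].value_counts().to_dict())
```

`GroupBy.sample` draws the same number of rows from each difficulty group, which is what keeps the evaluation subset balanced.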

BookSQL is released under the CC BY-NC-SA license (non-commercial use only). See below for details and citation information.

📋 License and citation details

License and usage

The BookSQL dataset is released under the CC BY-NC-SA 4.0 license. You may use it for non-commercial research only; commercial use is not permitted.

If you use BookSQL in your research, please cite the paper:

@inproceedings{kumar-etal-2024-booksql,
    title = {BookSQL: A Large Scale Text-to-SQL Dataset for Accounting Domain},
    author = {Kumar, Rahul and Raja, Amar and Harsola, Shrutendra and Subrahmaniam, Vignesh and Modi, Ashutosh},
    booktitle = {Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)},
    month = {June},
    year = {2024},
    address = {Mexico City, Mexico},
    publisher = {Association for Computational Linguistics},
}

For advice on creating your own evaluation dataset, see Datasets - Core Concepts.

Set up your text-to-SQL system

Create your prompt

Extract the database schema

uv run python -m ragas_examples.text2sql.db_utils --schema
📋 Expected schema output
=== Database Schema ===
             name  type                                     sql
chart_of_accounts table CREATE TABLE chart_of_accounts(
                         id INTEGER ,
                         businessID INTEGER NOT NULL,
                         Account_name TEXT NOT NULL,
                         Account_type TEXT NOT NULL,
                         PRIMARY KEY(id,businessID,Account_name)
                         )
        customers table CREATE TABLE customers(
                         id INTEGER ,
                         businessID INTEGER NOT NULL,
                         customer_name TEXT NOT NULL,
                         customer_full_name TEXT ,
                         ... (continues for all columns)
                         PRIMARY KEY(id,businessID,Customer_name)
                         )
... (continues for all 7 tables with complete DDL)

Write the prompt

Our prompt follows this template structure:

You are a SQL query generator for a business accounting database. Convert natural language queries to SQL queries.

DATABASE CONTEXT:
This is an accounting database (accounting.sqlite) containing business transaction and entity data.

TABLES AND THEIR PURPOSE:

- master_txn_table: Main transaction records for all business transactions
- chart_of_accounts: Account names and their types for all businesses  
- products_service: Products/services and their types used by businesses
- customers: Customer records with billing/shipping details
- vendors: Vendor records with billing address details
- payment_method: Payment methods used by businesses
- employees: Employee details including name, ID, hire date

DATABASE SCHEMA (DDL):

[Complete DDL statements for all tables]

INSTRUCTIONS:
Convert the user's natural language query into a valid SQL SELECT query. Return only the SQL query, no explanations or formatting.

Define evaluation metrics

For a text-to-SQL system, we need a metric that evaluates the accuracy of the results. We will use execution accuracy as the primary metric, verifying that the generated SQL returns the correct data.

Execution accuracy metric: uses datacompy to compare the actual results of the expected and predicted SQL queries. This verifies that both queries return the same data, which is the ultimate test of correctness.

The evaluation system classifies each result as:

  • "correct": the query succeeded and matched the expected results
  • "incorrect": the query failed, or succeeded but returned the wrong results

Set up the metric function

Create your evaluation metric using a Ragas discrete metric.

# File: examples/ragas_examples/text2sql/evals.py
import datacompy
import pandas as pd

from ragas.metrics.discrete import discrete_metric
from ragas.metrics.result import MetricResult
from ragas_examples.text2sql.db_utils import execute_sql

@discrete_metric(name="execution_accuracy", allowed_values=["correct", "incorrect"])
def execution_accuracy(expected_sql: str, predicted_success: bool, predicted_result):
    """Compare execution results of predicted vs expected SQL using datacompy."""
    try:
        # Execute expected SQL
        expected_success, expected_result = execute_sql(expected_sql)
        if not expected_success:
            return MetricResult(
                value="incorrect",
                reason=f"Expected SQL failed to execute: {expected_result}"
            )

        # If predicted SQL fails, it's incorrect
        if not predicted_success:
            return MetricResult(
                value="incorrect",
                reason=f"Predicted SQL failed to execute: {predicted_result}"
            )

        # Both queries succeeded - compare DataFrames using datacompy
        if isinstance(expected_result, pd.DataFrame) and isinstance(predicted_result, pd.DataFrame):
            # Handle empty DataFrames
            if expected_result.empty and predicted_result.empty:
                return MetricResult(value="correct", reason="Both queries returned empty results")

            if expected_result.empty != predicted_result.empty:
                return MetricResult(
                    value="incorrect",
                    reason=f"Expected returned {len(expected_result)} rows, predicted returned {len(predicted_result)} rows"
                )

            # Use datacompy to compare DataFrames with index-based comparison
            comparison = datacompy.Compare(
                expected_result.reset_index(drop=True), 
                predicted_result.reset_index(drop=True),
                on_index=True,  # Compare row-by-row by index position
                abs_tol=1e-10,  # Very small tolerance for floating point comparison
                rel_tol=1e-10,
                df1_name='expected',
                df2_name='predicted'
            )

            if comparison.matches():
                return MetricResult(
                    value="correct",
                    reason=f"DataFrames match exactly ({len(expected_result)} rows, {len(expected_result.columns)} columns)"
                )
            else:
                return MetricResult(
                    value="incorrect",
                    reason="DataFrames do not match - different data returned"
                )

        # Fallback when either result is not a DataFrame (e.g. an error message)
        return MetricResult(
            value="incorrect",
            reason="Results could not be compared as DataFrames"
        )

    except Exception as e:
        return MetricResult(
            value="incorrect",
            reason=f"Execution accuracy evaluation failed: {str(e)}"
        )

Experiment function

The experiment function orchestrates the full evaluation pipeline: it runs the text-to-SQL agent and computes the metric for each query.

# File: examples/ragas_examples/text2sql/evals.py
import os
from typing import Optional

from openai import AsyncOpenAI
from ragas import experiment
from ragas_examples.text2sql.text2sql_agent import Text2SQLAgent
from ragas_examples.text2sql.db_utils import execute_sql

@experiment()
async def text2sql_experiment(
    row,
    model: str,
    prompt_file: Optional[str],
):
    """Experiment function for text-to-SQL evaluation."""
    # Create text-to-SQL agent
    openai_client = AsyncOpenAI(api_key=os.environ["OPENAI_API_KEY"])
    agent = Text2SQLAgent(
        client=openai_client,
        model_name=model,
        prompt_file=prompt_file
    )

    # Generate SQL from natural language query
    result = await agent.query(row["Query"])

    # Execute predicted SQL
    try:
        predicted_success, predicted_result = execute_sql(result["sql"])
    except Exception as e:
        predicted_success, predicted_result = False, f"SQL execution failed: {str(e)}"

    # Score the response using execution accuracy
    accuracy_score = await execution_accuracy.ascore(
        expected_sql=row["SQL"],
        predicted_success=predicted_success,
        predicted_result=predicted_result,
    )

    return {
        "query": row["Query"],
        "expected_sql": row["SQL"],
        "predicted_sql": result["sql"],
        "level": row["Levels"],
        "execution_accuracy": accuracy_score.value,
        "accuracy_reason": accuracy_score.reason,
    }

Dataset loader

Load your evaluation dataset into a Ragas Dataset object to run experiments:

# File: examples/ragas_examples/text2sql/evals.py
import pandas as pd
from pathlib import Path
from typing import Optional
from ragas import Dataset

def load_dataset(limit: Optional[int] = None):
    """Load the text-to-SQL dataset from CSV file."""
    dataset_path = Path(__file__).parent / "datasets" / "booksql_sample.csv"

    # Read CSV
    df = pd.read_csv(dataset_path)

    # Limit dataset size if requested
    if limit is not None and limit > 0:
        df = df.head(limit)

    # Create Ragas Dataset
    dataset = Dataset(name="text2sql_booksql", backend="local/csv", root_dir=".")

    for _, row in df.iterrows():
        dataset.append({
            "Query": row["Query"],
            "SQL": row["SQL"], 
            "Levels": row["Levels"],
            "split": row["split"],
        })

    return dataset

The dataset loader includes a limit parameter for development workflows: start with a handful of samples to catch basic errors quickly, then scale up to the full evaluation.

Run the baseline evaluation

Execute the evaluation pipeline and collect the results:

import asyncio
from ragas_examples.text2sql.evals import text2sql_experiment, load_dataset

async def run_evaluation():
    """Run text-to-SQL evaluation with direct code approach."""
    # Load dataset
    dataset = load_dataset()
    print(f"Dataset loaded with {len(dataset)} samples")

    # Run the experiment
    results = await text2sql_experiment.arun(
        dataset, 
        name="gpt-5-mini-prompt-v1",
        model="gpt-5-mini",
        prompt_file=None,
    )

    # Report results
    print(f"✅ gpt-5-mini-prompt-v1: {len(results)} cases evaluated")

    # Calculate and display accuracy
    accuracy_rate = sum(1 for r in results if r["execution_accuracy"] == "correct") / max(1, len(results))
    print(f"gpt-5-mini-prompt-v1 Execution Accuracy: {accuracy_rate:.2%}")

# Run the evaluation
await run_evaluation()
📋 Output (prompt v1)
Loading dataset...
Dataset loaded with 99 samples
Running text-to-SQL evaluation with model: gpt-5-mini
Using prompt file: prompt.txt
Running experiment: 100%|██████████████████████| 99/99 [01:06<00:00,  1.49it/s]
✅ gpt-5-mini-prompt-v1: 99 cases evaluated
gpt-5-mini-prompt-v1 Execution Accuracy: 2.02%

Configuration options

  • model: the OpenAI model to use (defaults to "gpt-5-mini")
  • prompt_file: a custom prompt file (defaults to None, which uses the built-in prompt)
  • limit: the number of samples (defaults to None, meaning all samples; pass a number to limit it)
  • name: a custom experiment name for result tracking

Initial performance analysis

The evaluation produces a comprehensive CSV of results for analysis. Each row contains:

  • query: the natural-language input
  • expected_sql: the ground-truth SQL
  • predicted_sql: the generated SQL
  • level: the difficulty (easy/medium/hard)
  • execution_accuracy: correct/incorrect (result comparison)
  • accuracy_reason: the detailed comparison result
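With the results CSV in hand, a breakdown of execution accuracy by difficulty level often shows where the agent struggles most. A small pandas sketch, assuming the column names listed above; replace the inline stand-in data with pd.read_csv on your actual results file:

```python
import pandas as pd

# Stand-in for: results = pd.read_csv("experiments/your_results.csv")
results = pd.DataFrame({
    "level": ["easy", "easy", "medium", "hard"],
    "execution_accuracy": ["correct", "incorrect", "incorrect", "correct"],
})

# Fraction of "correct" rows per difficulty level
by_level = (
    (results["execution_accuracy"] == "correct")
    .groupby(results["level"])
    .mean()
)
print(by_level.to_dict())  # {'easy': 0.5, 'hard': 1.0, 'medium': 0.0}
```

A steep drop on "hard" queries versus "easy" ones is a signal to look at joins and aggregations first.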

Analyze errors and failure patterns

After running the evaluation, you can analyze failure patterns to understand where your Text2SQL agent goes wrong. Here are some sample rows from our baseline evaluation results to give you a full picture of what the evaluation data looks like:

| Query | Expected SQL | Predicted SQL | Level | Execution Accuracy |
|---|---|---|---|---|
| What is the balance due from Richard Aguirre? | select sum(open_balance) from ( select distinct transaction_id, open_balance from master_txn_table where customers = "Richard Aguirre" ) | SELECT customer_name, customer_full_name, Balance FROM customers WHERE customer_name = 'Richard Aguirre' OR customer_full_name = 'Richard Aguirre'; | medium | incorrect |
| How much open credit does customer Natalie Myers? | select sum(open_balance) from ( select distinct transaction_id, open_balance from master_txn_table where customers = "Natalie Myers" ) | SELECT Balance FROM customers WHERE customer_name = 'Natalie Myers' OR customer_full_name = 'Natalie Myers'; | easy | incorrect |
| What is my average invoice amount from Ashley Thompson? | select avg(amount) from (select distinct transaction_id, amount from master_txn_table where customers = "Ashley Thompson" and transaction_type = 'invoice') | SELECT AVG(Amount) FROM master_txn_table WHERE Customers = 'Ashley Thompson' AND Transaction_TYPE = 'Invoice'; | hard | incorrect |
| Show the number of transactions with Carol Smith | select count(distinct transaction_id) from master_txn_table where customers = 'Carol Smith' or vendor = 'Carol Smith' | SELECT COUNT(*) AS num_transactions FROM master_txn_table WHERE Customers = 'Carol Smith'; | medium | incorrect |

Key observations from these examples

Execution accuracy: 0% - none of these queries returned the same results as expected.

Common failure patterns

  • Using the wrong table (customers vs. master_txn_table)
  • Missing deduplication (count(distinct transaction_id) vs. COUNT(*))
  • Incomplete filtering logic (missing the OR vendor = 'Name' condition)
  • Wrong column names (Balance vs. open_balance)

This shows that although the agent generates valid SQL, it needs substantial improvement in understanding the business logic and database schema relationships.

Error analysis

To analyze your failure cases systematically, manually review and annotate each row in the results CSV, categorizing the types of errors you observe. You can use AI to help with the categorization via this prompt:

📋 Error analysis categorization prompt
You are analyzing why a Text2SQL prediction failed. Given the following information, identify the error codes and provide a brief analysis.

Available error codes:
- AGGR_DISTINCT_MISSING: Used COUNT/SUM without DISTINCT or deduplication
- WRONG_FILTER_COLUMN: Filtered on the wrong column 
- WRONG_SOURCE_TABLE_OR_COLUMN: Selected metric from the wrong table/column
- EXTRA_TRANSFORMATION_OR_CONDITION: Added ABS(), extra filters that change results
- OUTPUT_COLUMN_ALIAS_MISMATCH: Output column names don't match
- NULL_OR_EMPTY_RESULT: Result is None/empty due to wrong filters or source
- GENERIC_VALUE_MISMATCH: Aggregation computed but numeric value differs for unclear reasons
- OTHER: Fallback

Query: [YOUR_QUERY]
Expected SQL: [EXPECTED_SQL]
Predicted SQL: [PREDICTED_SQL]
Execution Accuracy: [ACCURACY_RESULT]
Accuracy Reason: [ACCURACY_REASON]

Respond with:
- error_codes: array of applicable error codes (1 or more)
- error_analysis: brief 1-3 sentence explanation of what went wrong

Copy this prompt and use it with your preferred LLM to analyze individual failure cases from the results CSV.

For convenience, we turned this into a script that automatically categorizes errors with GPT-5: uv run python -m ragas_examples.text2sql.analyze_errors --input experiments/your_results.csv

Review process

  1. Check the summary for high-frequency error codes
  2. Manually inspect 5-10 examples of each major error type
  3. Verify that the AI's categorization matches your own understanding
  4. Check whether the "expected" SQL is actually right for your schema
  5. Look for patterns the AI may have missed
  6. Prioritize fixes by impact and frequency

Only after this manual verification should you use the insights to improve your prompt, few-shot examples, or agent architecture.
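Tallying the annotated error codes makes step 1 concrete. A sketch assuming the annotations are stored as JSON arrays in an error_codes column, which matches the format the categorization prompt asks for; swap the inline stand-in data for pd.read_csv on your annotated file:

```python
import json

import pandas as pd

# Stand-in for: annotated = pd.read_csv("experiments/annotated_results.csv")
annotated = pd.DataFrame({
    "error_codes": [
        '["WRONG_FILTER_COLUMN", "NULL_OR_EMPTY_RESULT"]',
        '["WRONG_FILTER_COLUMN"]',
        '["OUTPUT_COLUMN_ALIAS_MISMATCH"]',
    ]
})

# Parse each JSON array, flatten one code per row, and count occurrences
counts = (
    annotated["error_codes"]
    .apply(json.loads)
    .explode()
    .value_counts()
)
print(counts.to_dict())
```

The highest counts are the error patterns worth inspecting manually first.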

Example error analysis results

Here are some sample rows from the annotated results, showing the error types that remained after the first round of improvements:

| Query | Expected SQL | Predicted SQL | Error Analysis | Error Codes |
|---|---|---|---|---|
| How much money did we receive from Fuel? | select sum(credit) from master_txn_table as T1 join chart_of_accounts as T2 on T1.account = T2.account_name where account_type in ('Income', 'Other Income') and instr(account,"Fuel") | SELECT SUM(Amount) FROM (SELECT DISTINCT Transaction_ID, Amount FROM master_txn_table WHERE Customers = 'Fuel'); | The predicted query sums Amount from master_txn_table instead of credit, omits the join with chart_of_accounts, and filters on Customers='Fuel' instead of filtering account names containing 'Fuel' restricted to Income/Other Income. | ["WRONG_SOURCE_TABLE_OR_COLUMN", "WRONG_FILTER_COLUMN", "NULL_OR_EMPTY_RESULT"] |
| What is Roberta Shaw's most recent invoice? | select transaction_id from master_txn_table where transaction_type = 'invoice' and customers = "Roberta Shaw" order by transaction_date desc limit 1 | SELECT Transaction_ID, Transaction_DATE, Amount, Open_balance, CreatedDATE, CreatedUSER, Account, Customers, Product_Service FROM master_txn_table WHERE Transaction_TYPE = 'invoice' AND Customers = 'Roberta Shaw' ORDER BY Transaction_DATE DESC LIMIT 1; | The predicted query selects nine columns instead of only the expected Transaction_ID, causing a column/schema mismatch even though the same row is returned. | ["OUTPUT_COLUMN_ALIAS_MISMATCH"] |
| What is my average invoice amount from Jeremy Strong? | select avg(amount) from (select distinct transaction_id, amount from master_txn_table where customers = "Jeremy Strong" and transaction_type = 'invoice') | SELECT AVG(Amount) FROM (SELECT DISTINCT Transaction_ID, Amount FROM master_txn_table WHERE Transaction_TYPE = 'invoice' AND Vendor = 'Jeremy Strong') AS t; | The query filters on Vendor = 'Jeremy Strong' instead of the correct customers column, so it most likely matched no rows, causing AVG(amount) to return NULL. | ["WRONG_FILTER_COLUMN", "NULL_OR_EMPTY_RESULT"] |

Key observations from the results

Error patterns:

  • Missing OR conditions: queries about transactions "with" someone should check both customers and vendor
  • Wrong column selection: using Amount instead of credit for financial queries
  • Output schema mismatches: selecting too many columns or the wrong column names
  • Missing joins: not joining with chart_of_accounts for account-type filtering

These patterns informed the next round of prompt improvements, which focused on refining the filtering logic and handling financial queries correctly.

Use general rules to decide what to change in the prompt, rather than fixing individual rows. Avoid adding case-specific examples; prefer schema-based guardrails so you don't overfit to the data.

Iterate: repeat the loop

  • Run → annotate → review → decide on general guardrails → update prompt_vX.txt → re-run → compare → repeat.
  • Keep guardrails concise and schema-based so improvements generalize rather than overfit.
  • Version your prompts (prompt_v2.txt, prompt_v3.txt, prompt_v4.txt) and keep a short changelog for each version.
  • Stop when execution accuracy plateaus across two consecutive iterations or reaches your business threshold.
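The "compare" step is easiest when you diff two result CSVs per query to see what a new prompt fixed and what it broke. A minimal sketch, assuming both runs share the query and execution_accuracy columns; the inline frames stand in for pd.read_csv on two experiment files:

```python
import pandas as pd

# Stand-ins for pd.read_csv on two experiment result CSVs
v1 = pd.DataFrame({
    "query": ["a", "b", "c"],
    "execution_accuracy": ["incorrect", "incorrect", "correct"],
})
v2 = pd.DataFrame({
    "query": ["a", "b", "c"],
    "execution_accuracy": ["correct", "incorrect", "correct"],
})

# Join the two runs on the query text and compare outcomes row by row
merged = v1.merge(v2, on="query", suffixes=("_v1", "_v2"))
fixed = merged[
    (merged["execution_accuracy_v1"] == "incorrect")
    & (merged["execution_accuracy_v2"] == "correct")
]
regressed = merged[
    (merged["execution_accuracy_v1"] == "correct")
    & (merged["execution_accuracy_v2"] == "incorrect")
]
print(f"fixed: {len(fixed)}, regressed: {len(regressed)}")  # fixed: 1, regressed: 0
```

Regressions are the cases to study before shipping a new prompt version, since an aggregate accuracy gain can hide them.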

Improve your system

Create and use a new prompt version

We keep the baseline prompt unchanged and create a new version to iterate on.

Create prompt_v2.txt with concise, reusable guardrails. Keep them general enough to apply broadly, while staying grounded in the provided schema. Here is part of what we added to prompt_v1.txt to create prompt_v2.txt:

- Use exact table and column names from the schema; do not invent fields
- Prefer transactional facts from `master_txn_table`; use entity tables for static attributes
- Map parties correctly in filters:
  - Customer-focused → filter on `Customers`
  - Vendor-focused → filter on `Vendor`
- Disambiguate events via `Transaction_TYPE` (e.g., invoices → `Transaction_TYPE = 'invoice'`)
- Avoid double-counting by deduplicating on `Transaction_ID` for counts and aggregates:
  - Counts: `count(distinct Transaction_ID)`
  - Aggregates: compute over a deduplicated subquery on `(Transaction_ID, metric_column)`
- For open credit/balance due per customer, aggregate `Open_balance` from `master_txn_table` filtered by `Customers` with deduplication
- Do not add extra transforms or filters (e.g., `abs()`, `< 0`) unless explicitly asked
- Keep a single `SELECT`; avoid aliases for final column names

We save this improved prompt as prompt_v2.txt.

Re-run the evaluation with the new prompt

import asyncio
from ragas_examples.text2sql.evals import text2sql_experiment, load_dataset

async def run_v2_evaluation():
    """Run evaluation with prompt v2."""
    # Load dataset
    dataset = load_dataset()
    print(f"Dataset loaded with {len(dataset)} samples")

    # Run experiment
    results = await text2sql_experiment.arun(
        dataset, 
        name="gpt-5-mini-prompt-v2",
        model="gpt-5-mini",
        prompt_file="prompt_v2.txt",
    )

    # Report results
    print(f"✅ gpt-5-mini-prompt-v2: {len(results)} cases evaluated")

    # Calculate accuracy
    accuracy_rate = sum(1 for r in results if r["execution_accuracy"] == "correct") / max(1, len(results))
    print(f"gpt-5-mini-prompt-v2 Execution Accuracy: {accuracy_rate:.2%}")

await run_v2_evaluation()
📋 Output (prompt v2)
Loading dataset...
Dataset loaded with 99 samples
Running text-to-SQL evaluation with model: gpt-5-mini
Using prompt file: prompt_v2.txt
Running experiment: 100%|██████████████████████| 99/99 [01:00<00:00,  1.63it/s]
✅ gpt-5-mini-prompt-v2: 99 cases evaluated
gpt-5-mini-prompt-v2 Execution Accuracy: 60.61%

With prompt_v2, execution accuracy improved from 2.02% to 60.61%.

Review the new results CSV in the experiments/ directory and continue the loop.

Keep iterating: create prompt v3

Even with the major improvement from prompt_v2.txt, 60% accuracy still leaves room to grow. A deeper analysis of the failure cases reveals several recurring patterns:

  1. Misunderstanding financial concepts: the model consistently defaults to aggregating the Amount column instead of the correct Credit (for income) or Debit (for expenses) columns. It also frequently fails to JOIN with the chart_of_accounts table to filter by account type (e.g., 'Income').
  2. Adding unnecessary transformations: the model often complicates queries with unrequested DISTINCT clauses or extra filters (such as Transaction_TYPE = 'invoice') that change the results.
  3. Incorrect column selection: for "show all transactions" queries, it often uses SELECT * instead of the expected SELECT DISTINCT Transaction_ID, causing schema mismatches. It also produces wrong column names for aggregations (e.g., max(transaction_date) instead of transaction_date).
  4. Incomplete filtering: it frequently misses OR conditions (e.g., checking both Customers and Vendor for transactions with someone) or filters on entirely the wrong column.

Based on this deeper analysis, create prompt_v3.txt with more specific, schema-grounded guidelines that address these recurring issues.

Key additions in prompt_v3.txt

### CORE QUERY GENERATION GUIDELINES

1.  **Use Correct Schema**: Use exact table and column names...
2.  **Simplicity First**: Keep the query as simple as possible...
...

### ADVANCED QUERY PATTERNS

5.  **Financial Queries (Revenue, Sales, Expenses)**:
    -   **Metric Selection**:
        -   For revenue, income, sales, or money **received**: aggregate the `Credit` column.
        -   For expenses, bills, or money **spent**: aggregate the `Debit` column.
        -   Use the `Amount` column only when...
    -   **Categorical Financial Queries**: For questions involving financial categories... you **MUST** `JOIN` `master_txn_table` with `chart_of_accounts`...

6.  **Filtering Logic**:
    -   **Ambiguous Parties**: For questions about transactions "with" or "involving" a person or company, you **MUST** check both `Customers` and `Vendor` columns. E.g., `WHERE Customers = 'Name' OR Vendor = 'Name'`.
    -   **Avoid Extra Filters**: Do not add implicit filters...

7.  **Column Selection and Naming**:
    -   **Avoid `SELECT *`**: When asked to "show all transactions", return only `DISTINCT Transaction_ID`...
    -   **"Most Recent" / "Last" Queries**: To get the 'most recent' or 'last' record, use `ORDER BY Transaction_DATE DESC LIMIT 1`. This preserves the original column names... Avoid using `MAX()`...

These new rules are designed to be general while directly targeting the observed failure patterns.

Re-run the evaluation with prompt_v3.txt

import asyncio
from ragas_examples.text2sql.evals import text2sql_experiment, load_dataset

async def run_v3_evaluation():
    """Run evaluation with prompt v3."""
    # Load dataset
    dataset = load_dataset()
    print(f"Dataset loaded with {len(dataset)} samples")

    # Run experiment
    results = await text2sql_experiment.arun(
        dataset, 
        name="gpt-5-mini-prompt-v3",
        model="gpt-5-mini",
        prompt_file="prompt_v3.txt",
    )

    # Report results
    print(f"✅ gpt-5-mini-prompt-v3: {len(results)} cases evaluated")

    # Calculate accuracy
    accuracy_rate = sum(1 for r in results if r["execution_accuracy"] == "correct") / max(1, len(results))
    print(f"gpt-5-mini-prompt-v3 Execution Accuracy: {accuracy_rate:.2%}")

await run_v3_evaluation()

With prompt_v3, execution accuracy improved from 60.61% to 70.71%.

The 70% accuracy achieved with prompt_v3.txt demonstrates the power of systematic iteration. You can continue this process to push accuracy further.

Key principles for continued iteration

  • Each iteration should target the 3-5 highest-frequency error patterns in the latest results
  • Keep new rules general and schema-based to avoid overfitting
  • Stop when accuracy plateaus across 2-3 consecutive iterations
  • If prompt improvements plateau, try a stronger model, or feed any SQL errors back to the LLM to fix, turning this into a truly agentic pipeline.
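The last idea, feeding SQL errors back to the model for repair, can be sketched as a small retry loop. generate_sql and execute_sql below are placeholders for your agent's generation call and the database execution helper, not functions from the example package:

```python
def query_with_retry(question, generate_sql, execute_sql, max_retries=1):
    """Generate SQL; on execution failure, regenerate with the error as feedback.

    generate_sql(question, error) and execute_sql(sql) -> (success, result)
    are caller-supplied placeholders.
    """
    error = None
    for _ in range(max_retries + 1):
        sql = generate_sql(question, error=error)  # pass the last error, if any
        success, result = execute_sql(sql)
        if success:
            return sql, result
        error = result  # feed the failure message back into the next attempt
    return sql, result  # give up after max_retries repair attempts
```

In the real agent this would wrap the model call, appending the database error to the prompt on the retry attempt.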

Compare results

After running all the prompt versions, we can compare the final results.

| Prompt | Execution Accuracy | Results CSV |
|---|---|---|
| v1 (prompt.txt) | 2.02% | experiments/...-prompt-v1.csv |
| v2 (prompt_v2.txt) | 60.61% | experiments/...-prompt-v2.csv |
| v3 (prompt_v3.txt) | 70.71% | experiments/...-prompt-v3.csv |

Progress analysis:

  • v1 → v2: a massive 58-point jump from 2.02% to 60.61%, driven by basic deduplication and business-logic guidance
  • v2 → v3: a further 10-point gain from 60.61% to 70.71%, driven by enhanced financial query guidance, better filtering logic, and column-selection rules
  • The improvements targeted specific failure patterns identified through error analysis: financial concepts, unnecessary transformations, and incomplete filtering

Conclusion

This guide showed you how to build a systematic evaluation process for a text-to-SQL system.

Key takeaways

  • Set up an execution accuracy metric that compares actual query results
  • Follow the iterative loop: evaluate → analyze errors → improve → repeat

This evaluation framework gives you a reliable way to measure and improve your system, with Ragas handling orchestration and result aggregation automatically.