Customize Models

Ragas may use an LLM and/or an embedding model for evaluation and synthetic data generation. Both can be customized to match the models available to you.

Ragas provides factory functions (llm_factory and embedding_factory) that support multiple providers:

  • Directly supported providers: OpenAI, Anthropic, Google
  • Additional providers via LiteLLM: Azure OpenAI, AWS Bedrock, Google Vertex AI, and 100+ others

The factory functions use the Instructor library to handle structured outputs and the LiteLLM library to provide unified access to multiple LLM providers.
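
For a directly supported provider, no LiteLLM setup is needed. Below is a minimal sketch assuming an OPENAI_API_KEY environment variable is set; the model names are illustrative, and passing provider="openai" together with an openai.OpenAI client is an assumption modeled on the litellm-based examples that follow.

from openai import OpenAI
from ragas.llms import llm_factory
from ragas.embeddings.base import embedding_factory

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Model names below are illustrative; substitute the ones you use.
openai_llm = llm_factory(
    "gpt-4o-mini",
    provider="openai",
    client=client,
)
openai_embeddings = embedding_factory(
    "openai",
    model="text-embedding-3-small",
    client=client,  # assumption: the OpenAI provider also accepts a client
)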

Examples

Azure OpenAI

pip install litellm

import litellm
from ragas.llms import llm_factory
from ragas.embeddings.base import embedding_factory

azure_configs = {
    "api_base": "https://<your-endpoint>.openai.azure.com/",
    "api_key": "your-api-key",
    "api_version": "2024-02-15-preview",
    "model_deployment": "your-deployment-name",
    "embedding_deployment": "your-embedding-deployment-name",
}

# Configure LiteLLM for Azure OpenAI (used by LLM calls)
litellm.api_base = azure_configs["api_base"]
litellm.api_key = azure_configs["api_key"]
litellm.api_version = azure_configs["api_version"]

# Create LLM using llm_factory with litellm provider
# Note: Use deployment name, not model name for Azure
# Important: Pass litellm.completion (the function), not the module
azure_llm = llm_factory(
    f"azure/{azure_configs['model_deployment']}",
    provider="litellm",
    client=litellm.completion,
)

# Create embeddings using embedding_factory
# Note: Pass Azure config directly to embedding_factory
azure_embeddings = embedding_factory(
    "litellm",
    model=f"azure/{azure_configs['embedding_deployment']}",
    api_base=azure_configs["api_base"],
    api_key=azure_configs["api_key"],
    api_version=azure_configs["api_version"],
)

That's it! You're now ready to use Ragas with your Azure OpenAI endpoint.
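
Once created, the models can be handed to an evaluation run. The sketch below assumes the standard evaluate entry point accepts llm and embeddings arguments; my_dataset and my_metrics are placeholders for your own dataset and metric list.

from ragas import evaluate

results = evaluate(
    my_dataset,                   # placeholder: your evaluation dataset
    metrics=my_metrics,           # placeholder: the metrics you want to run
    llm=azure_llm,                # custom LLM created above
    embeddings=azure_embeddings,  # custom embeddings created above
)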

Google Vertex AI

pip install litellm google-cloud-aiplatform

import litellm
import os
from ragas.llms import llm_factory
from ragas.embeddings.base import embedding_factory

config = {
    "project_id": "<your-project-id>",
    "location": "us-central1",  # e.g., "us-central1", "us-east1"
    "chat_model_id": "gemini-1.5-pro-002",
    "embedding_model_id": "text-embedding-005",
}

# Set environment variables for Vertex AI (used by litellm)
os.environ["VERTEXAI_PROJECT"] = config["project_id"]
os.environ["VERTEXAI_LOCATION"] = config["location"]

# Create LLM using llm_factory with litellm provider
# Important: Pass litellm.completion (the function), not the module
vertex_llm = llm_factory(
    f"vertex_ai/{config['chat_model_id']}",
    provider="litellm",
    client=litellm.completion,
)

# Create embeddings using embedding_factory
# Note: Embeddings use the environment variables set above
vertex_embeddings = embedding_factory(
    "litellm",
    model=f"vertex_ai/{config['embedding_model_id']}",
)

That's it! You're now ready to use Ragas with your Google Vertex AI endpoint.
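
If you are not using gcloud application-default credentials, a service-account key file is a common alternative. This is a sketch under the assumption that LiteLLM's Vertex AI route authenticates through google-auth, which honors the standard credential variable below.

import os

# Assumption: point google-auth (used by LiteLLM's Vertex AI route) at a
# service-account key file instead of application-default credentials.
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/path/to/service-account.json"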

AWS Bedrock

pip install litellm

import litellm
import os
from ragas.llms import llm_factory
from ragas.embeddings.base import embedding_factory

config = {
    "region_name": "us-east-1",  # E.g. "us-east-1"
    "llm": "anthropic.claude-3-5-sonnet-20241022-v2:0",  # Your LLM model ID
    "embeddings": "amazon.titan-embed-text-v2:0",  # Your embedding model ID
    "temperature": 0.4,
}

# Set AWS credentials as environment variables
# Option 1: Use AWS credentials file (~/.aws/credentials)
# Option 2: Set environment variables directly
os.environ["AWS_REGION_NAME"] = config["region_name"]
# os.environ["AWS_ACCESS_KEY_ID"] = "your-access-key"
# os.environ["AWS_SECRET_ACCESS_KEY"] = "your-secret-key"

# Create LLM using llm_factory with litellm provider
# Important: Pass litellm.completion (the function), not the module
bedrock_llm = llm_factory(
    f"bedrock/{config['llm']}",
    provider="litellm",
    client=litellm.completion,
    temperature=config["temperature"],
)

# Create embeddings using embedding_factory
# Note: Embeddings use the environment variables set above
bedrock_embeddings = embedding_factory(
    "litellm",
    model=f"bedrock/{config['embeddings']}",
)

That's it! You're now ready to use Ragas with your AWS Bedrock endpoint.
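
As a quick smoke test, you can request a structured response from the new LLM. This sketch assumes the instructor-backed LLM returned by llm_factory exposes a generate(prompt, response_model) method; the Pydantic model here is purely illustrative.

from pydantic import BaseModel

class Greeting(BaseModel):
    text: str

# Assumption: instructor-backed LLMs accept a prompt plus a Pydantic
# response model and return a validated instance of that model.
reply = bedrock_llm.generate("Say hello in one short sentence.", Greeting)
print(reply.text)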