
TestsetSample

Bases: BaseSample

Represents a single sample in a testset.

Attributes

eval_sample (Union[SingleTurnSample, MultiTurnSample])
    The evaluation sample, which may be either a single-turn or a multi-turn sample.

synthesizer_name (str)
    The name of the synthesizer used to generate this sample.

TestsetPacket

Bases: BaseModel

A packet of testset samples to be uploaded to the server.

Testset dataclass

Testset(samples: List[TestsetSample], run_id: str = (lambda: str(uuid4()))(), cost_cb: Optional[CostCallbackHandler] = None)

Bases: RagasDataset[TestsetSample]

Represents a testset containing multiple test samples.

Attributes

samples (List[TestsetSample])
    A list of TestsetSample objects representing the samples in the testset.

to_evaluation_dataset

to_evaluation_dataset() -> EvaluationDataset

Converts the Testset to an EvaluationDataset.

Source code in src/ragas/testset/synthesizers/testset_schema.py
def to_evaluation_dataset(self) -> EvaluationDataset:
    """
    Converts the Testset to an EvaluationDataset.
    """
    return EvaluationDataset(
        samples=[sample.eval_sample for sample in self.samples]
    )
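The conversion simply unwraps the eval_sample payload from each wrapper, discarding the synthesizer metadata. A minimal sketch of that idea, using a plain dataclass as a stand-in for TestsetSample (not the actual ragas types):

```python
from dataclasses import dataclass
from typing import Any, List


@dataclass
class WrappedSample:
    # Stand-in for TestsetSample: an eval sample paired with its synthesizer name.
    eval_sample: Any
    synthesizer_name: str


def unwrap_samples(samples: List[WrappedSample]) -> List[Any]:
    # Mirrors to_evaluation_dataset: keep only the eval_sample payloads,
    # dropping the synthesizer_name bookkeeping.
    return [s.eval_sample for s in samples]
```

For example, `unwrap_samples([WrappedSample({"user_input": "q1"}, "syn_a")])` yields `[{"user_input": "q1"}]`.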

to_list

to_list() -> List[Dict]

Converts the Testset to a list of dictionaries.

Source code in src/ragas/testset/synthesizers/testset_schema.py
def to_list(self) -> t.List[t.Dict]:
    """
    Converts the Testset to a list of dictionaries.
    """
    list_dict = []
    for sample in self.samples:
        sample_dict = sample.eval_sample.model_dump(exclude_none=True)
        sample_dict["synthesizer_name"] = sample.synthesizer_name
        list_dict.append(sample_dict)
    return list_dict

from_list classmethod

from_list(data: List[Dict]) -> Testset

Converts a list of dictionaries to a Testset.

Source code in src/ragas/testset/synthesizers/testset_schema.py
@classmethod
def from_list(cls, data: t.List[t.Dict]) -> Testset:
    """
    Converts a list of dictionaries to a Testset.
    """
    # first create the samples
    samples = []
    for sample in data:
        synthesizer_name = sample["synthesizer_name"]
        # remove the synthesizer name from the sample
        sample.pop("synthesizer_name")
        # the remaining sample is the eval_sample
        eval_sample = sample

        # if user_input is a list it is MultiTurnSample
        if "user_input" in eval_sample and not isinstance(
            eval_sample.get("user_input"), list
        ):
            eval_sample = SingleTurnSample(**eval_sample)
        else:
            eval_sample = MultiTurnSample(**eval_sample)

        samples.append(
            TestsetSample(
                eval_sample=eval_sample, synthesizer_name=synthesizer_name
            )
        )
    # then create the testset
    return Testset(samples=samples)
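The key decision in from_list is the dispatch rule: a scalar user_input marks a single-turn sample, while a list of messages (or a missing user_input) marks a multi-turn one. That rule in isolation, as a small testable function:

```python
from typing import Any, Dict


def classify_eval_sample(row: Dict[str, Any]) -> str:
    """Mirror of the from_list dispatch rule: return which sample type
    the row would be parsed into."""
    if "user_input" in row and not isinstance(row.get("user_input"), list):
        return "SingleTurnSample"
    return "MultiTurnSample"
```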

total_tokens

total_tokens() -> Union[List[TokenUsage], TokenUsage]

Computes the total tokens used in the evaluation.

Source code in src/ragas/testset/synthesizers/testset_schema.py
def total_tokens(self) -> t.Union[t.List[TokenUsage], TokenUsage]:
    """
    Compute the total tokens used in the evaluation.
    """
    if self.cost_cb is None:
        raise ValueError(
            "The Testset was not configured for computing cost. Please provide a token_usage_parser function to TestsetGenerator to compute cost."
        )
    return self.cost_cb.total_tokens()

total_cost

total_cost(cost_per_input_token: Optional[float] = None, cost_per_output_token: Optional[float] = None) -> float

Computes the total cost of the evaluation.

Source code in src/ragas/testset/synthesizers/testset_schema.py
def total_cost(
    self,
    cost_per_input_token: t.Optional[float] = None,
    cost_per_output_token: t.Optional[float] = None,
) -> float:
    """
    Compute the total cost of the evaluation.
    """
    if self.cost_cb is None:
        raise ValueError(
            "The Testset was not configured for computing cost. Please provide a token_usage_parser function to TestsetGenerator to compute cost."
        )
    return self.cost_cb.total_cost(
        cost_per_input_token=cost_per_input_token,
        cost_per_output_token=cost_per_output_token,
    )
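The underlying cost model is a linear combination of input and output token counts. A minimal sketch of that arithmetic, with a simple dataclass standing in for ragas' TokenUsage:

```python
from dataclasses import dataclass


@dataclass
class Usage:
    # Stand-in for a token-usage record (input/output token counts).
    input_tokens: int
    output_tokens: int


def compute_cost(
    usage: Usage,
    cost_per_input_token: float,
    cost_per_output_token: float,
) -> float:
    # Total cost = input tokens * input rate + output tokens * output rate.
    return (
        usage.input_tokens * cost_per_input_token
        + usage.output_tokens * cost_per_output_token
    )
```

For example, 1000 input tokens at $0.001 each plus 500 output tokens at $0.002 each comes to $2.00.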

from_annotated classmethod

from_annotated(path: str) -> Testset

Loads a testset from an annotated JSON file.

Source code in src/ragas/testset/synthesizers/testset_schema.py
@classmethod
def from_annotated(cls, path: str) -> Testset:
    """
    Loads a testset from an annotated JSON file.
    """
    import json

    with open(path, "r") as f:
        annotated_testset = json.load(f)

    samples = []
    for sample in annotated_testset:
        if sample["approval_status"] == "approved":
            samples.append(TestsetSample(**sample))
    return cls(samples=samples)
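As the source shows, only samples whose approval_status is "approved" survive the load. A self-contained sketch of that filter over a JSON string (the field names other than approval_status are illustrative):

```python
import json
from typing import Any, Dict, List


def load_approved(raw_json: str) -> List[Dict[str, Any]]:
    """Mirror of from_annotated's filtering step: parse the annotated
    JSON array and keep only the approved samples."""
    annotated = json.loads(raw_json)
    return [s for s in annotated if s["approval_status"] == "approved"]
```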

QueryLength

Bases: str, Enum

Enumeration of query lengths. Available options: LONG, MEDIUM, SHORT.

QueryStyle

Bases: str, Enum

Enumeration of query styles. Available options: MISSPELLED, PERFECT_GRAMMAR, POOR_GRAMMAR, WEB_SEARCH_LIKE.
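Both enums subclass str as well as Enum, so members compare equal to plain strings and serialize cleanly. A sketch of the pattern (the string values here are illustrative assumptions, not necessarily the ones ragas uses):

```python
from enum import Enum


class QueryLength(str, Enum):
    # str mixin: members behave as strings in comparisons and serialization.
    LONG = "long"
    MEDIUM = "medium"
    SHORT = "short"
```

Because of the str mixin, `QueryLength.SHORT == "short"` holds and the member can be dropped directly into JSON payloads or prompt templates.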

BaseScenario

Bases: BaseModel

Base class representing a scenario for generating test samples.

Attributes

nodes (List[Node])
    The list of nodes involved in the scenario.

style (QueryStyle)
    The style of the query.

length (QueryLength)
    The length of the query.

persona (Persona)
    The persona associated with the scenario.

SingleHopSpecificQuerySynthesizer dataclass

SingleHopSpecificQuerySynthesizer(name: str = 'single_hop_specific_query_synthesizer', llm: Union[BaseRagasLLM, 'InstructorBaseRagasLLM'] = _default_llm_factory(), generate_query_reference_prompt: PydanticPrompt = QueryAnswerGenerationPrompt(), theme_persona_matching_prompt: PydanticPrompt = ThemesPersonasMatchingPrompt(), property_name: str = 'entities')

Bases: SingleHopQuerySynthesizer

MultiHopSpecificQuerySynthesizer dataclass

MultiHopSpecificQuerySynthesizer(name: str = 'multi_hop_specific_query_synthesizer', llm: Union[BaseRagasLLM, 'InstructorBaseRagasLLM'] = _default_llm_factory(), generate_query_reference_prompt: PydanticPrompt = QueryAnswerGenerationPrompt(), property_name: str = 'entities', relation_type: str = 'entities_overlap', relation_overlap_property: str = 'overlapped_items', theme_persona_matching_prompt: PydanticPrompt = ThemesPersonasMatchingPrompt())

Bases: MultiHopQuerySynthesizer

Synthesizes multi-hop queries from clusters of chunks defined by entity overlap.

get_node_clusters

get_node_clusters(knowledge_graph: KnowledgeGraph) -> List[Tuple]

Identifies clusters of nodes based on the specified relationship condition.

Source code in src/ragas/testset/synthesizers/multi_hop/specific.py
def get_node_clusters(self, knowledge_graph: KnowledgeGraph) -> t.List[t.Tuple]:
    """Identify clusters of nodes based on the specified relationship condition."""
    node_clusters = knowledge_graph.find_two_nodes_single_rel(
        relationship_condition=lambda rel: rel.type == self.relation_type
    )
    logger.info("found %d clusters", len(node_clusters))
    return node_clusters
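Conceptually, the synthesizer scans the knowledge graph's relationships and keeps the node pairs joined by the configured relation type (by default "entities_overlap"). A self-contained sketch of that filter, with a dataclass standing in for a graph relationship (not the actual ragas KnowledgeGraph API):

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Rel:
    # Stand-in for a knowledge-graph relationship between two nodes.
    source: str
    target: str
    type: str


def find_two_node_clusters(
    rels: List[Rel], relation_type: str
) -> List[Tuple[str, str]]:
    """Mirror of get_node_clusters: keep the node pairs whose connecting
    relationship matches the configured type."""
    return [(r.source, r.target) for r in rels if r.type == relation_type]
```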