
Transforms

BaseGraphTransformation dataclass

BaseGraphTransformation(name: str = '', filter_nodes: Callable[[Node], bool] = lambda: default_filter())

Bases: ABC

Abstract base class for graph transformations applied to a KnowledgeGraph.

transform abstractmethod async

transform(kg: KnowledgeGraph) -> Any

Abstract method to transform the KnowledgeGraph. Transformations should be idempotent: applying the transformation multiple times should yield the same result as applying it once.

Parameters

Name    Type            Description                             Default
kg      KnowledgeGraph  The knowledge graph to be transformed.  required

Returns

Type    Description
Any     The transformed knowledge graph.

Source code in src/ragas/testset/transforms/base.py
@abstractmethod
async def transform(self, kg: KnowledgeGraph) -> t.Any:
    """
    Abstract method to transform the KnowledgeGraph. Transformations should be
    idempotent, meaning that applying the transformation multiple times should
    yield the same result as applying it once.

    Parameters
    ----------
    kg : KnowledgeGraph
        The knowledge graph to be transformed.

    Returns
    -------
    t.Any
        The transformed knowledge graph.
    """
    pass
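The idempotency contract can be sketched with plain-dict stand-ins for the real Node/KnowledgeGraph classes (UppercaseTitleTransform is hypothetical, not part of ragas):

```python
import asyncio

class UppercaseTitleTransform:
    """Hypothetical transform: uppercase each node's 'title' property."""

    async def transform(self, nodes):
        for node in nodes:
            node["title"] = node["title"].upper()  # str.upper() is idempotent
        return nodes

t = UppercaseTitleTransform()
nodes = [{"title": "intro"}, {"title": "methods"}]
once_snapshot = [dict(n) for n in asyncio.run(t.transform(nodes))]
twice_snapshot = [dict(n) for n in asyncio.run(t.transform(nodes))]
assert once_snapshot == twice_snapshot  # second pass changes nothing
```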

filter

filter(kg: KnowledgeGraph) -> KnowledgeGraph

Filters the KnowledgeGraph and returns the filtered graph.

Parameters

Name    Type            Description                          Default
kg      KnowledgeGraph  The knowledge graph to be filtered.  required

Returns

Type            Description
KnowledgeGraph  The filtered knowledge graph.

Source code in src/ragas/testset/transforms/base.py
def filter(self, kg: KnowledgeGraph) -> KnowledgeGraph:
    """
    Filters the KnowledgeGraph and returns the filtered graph.

    Parameters
    ----------
    kg : KnowledgeGraph
        The knowledge graph to be filtered.

    Returns
    -------
    KnowledgeGraph
        The filtered knowledge graph.
    """

    return KnowledgeGraph(
        nodes=[node for node in kg.nodes if self.filter_nodes(node)],
        relationships=[
            rel
            for rel in kg.relationships
            if rel.source in kg.nodes and rel.target in kg.nodes
        ],
    )

generate_execution_plan abstractmethod

generate_execution_plan(kg: KnowledgeGraph) -> List[Coroutine]

Generates a list of coroutines to be executed in sequence by the Executor. These coroutines will, upon execution, write the transformation into the KnowledgeGraph.

Parameters

Name    Type            Description                             Default
kg      KnowledgeGraph  The knowledge graph to be transformed.  required

Returns

Type             Description
List[Coroutine]  A list of coroutines to be executed in parallel.

Source code in src/ragas/testset/transforms/base.py
@abstractmethod
def generate_execution_plan(self, kg: KnowledgeGraph) -> t.List[t.Coroutine]:
    """
    Generates a list of coroutines to be executed in sequence by the Executor. This
    coroutine will, upon execution, write the transformation into the KnowledgeGraph.

    Parameters
    ----------
    kg : KnowledgeGraph
        The knowledge graph to be transformed.

    Returns
    -------
    t.List[t.Coroutine]
        A list of coroutines to be executed in parallel.
    """
    pass

Extractor dataclass

Extractor(name: str = '', filter_nodes: Callable[[Node], bool] = lambda: default_filter())

Bases: BaseGraphTransformation

Abstract base class for extractors that transform the KnowledgeGraph by extracting specific properties from its nodes.

Methods

Name       Description
transform  Transforms the KnowledgeGraph by extracting properties from its nodes.
extract    Abstract method to extract a specific property from a node.

transform async

transform(kg: KnowledgeGraph) -> List[Tuple[Node, Tuple[str, Any]]]

Transforms the KnowledgeGraph by extracting properties from its nodes. Uses the filter method to filter the graph and the extract method to extract properties from each node.

Parameters

Name    Type            Description                             Default
kg      KnowledgeGraph  The knowledge graph to be transformed.  required

Returns

Type                                Description
List[Tuple[Node, Tuple[str, Any]]]  A list of tuples where each tuple contains a node and the extracted property.

Examples

>>> kg = KnowledgeGraph(nodes=[Node(id=1, properties={"name": "Node1"}), Node(id=2, properties={"name": "Node2"})])
>>> extractor = SomeConcreteExtractor()
>>> extractor.transform(kg)
[(Node(id=1, properties={"name": "Node1"}), ("property_name", "extracted_value")),
 (Node(id=2, properties={"name": "Node2"}), ("property_name", "extracted_value"))]
Source code in src/ragas/testset/transforms/base.py
async def transform(
    self, kg: KnowledgeGraph
) -> t.List[t.Tuple[Node, t.Tuple[str, t.Any]]]:
    """
    Transforms the KnowledgeGraph by extracting properties from its nodes. Uses
    the `filter` method to filter the graph and the `extract` method to extract
    properties from each node.

    Parameters
    ----------
    kg : KnowledgeGraph
        The knowledge graph to be transformed.

    Returns
    -------
    t.List[t.Tuple[Node, t.Tuple[str, t.Any]]]
        A list of tuples where each tuple contains a node and the extracted
        property.

    Examples
    --------
    >>> kg = KnowledgeGraph(nodes=[Node(id=1, properties={"name": "Node1"}), Node(id=2, properties={"name": "Node2"})])
    >>> extractor = SomeConcreteExtractor()
    >>> extractor.transform(kg)
    [(Node(id=1, properties={"name": "Node1"}), ("property_name", "extracted_value")),
     (Node(id=2, properties={"name": "Node2"}), ("property_name", "extracted_value"))]
    """
    filtered = self.filter(kg)
    return [(node, await self.extract(node)) for node in filtered.nodes]
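The extractor pattern above can be sketched with a simplified stand-in Node and a hypothetical WordCountExtractor (neither is part of ragas; filtering is omitted for brevity):

```python
import asyncio
from dataclasses import dataclass, field

@dataclass
class Node:
    """Simplified stand-in for ragas' Node."""
    id: int
    properties: dict = field(default_factory=dict)

    def get_property(self, name):
        return self.properties.get(name)

class WordCountExtractor:
    """Hypothetical extractor: returns a ('word_count', n) property per node."""

    async def extract(self, node: Node):
        text = node.get_property("page_content") or ""
        return "word_count", len(text.split())

    async def transform(self, nodes):
        # mirrors Extractor.transform: one (node, extracted property) tuple each
        return [(node, await self.extract(node)) for node in nodes]

nodes = [Node(1, {"page_content": "hello world"}),
         Node(2, {"page_content": "one two three"})]
result = asyncio.run(WordCountExtractor().transform(nodes))
# result[0][1] == ("word_count", 2)
```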

extract abstractmethod async

extract(node: Node) -> Tuple[str, Any]

Abstract method to extract a specific property from a node.

Parameters

Name    Type   Description                                   Default
node    Node   The node from which to extract the property.  required

Returns

Type             Description
Tuple[str, Any]  A tuple containing the property name and the extracted value.

Source code in src/ragas/testset/transforms/base.py
@abstractmethod
async def extract(self, node: Node) -> t.Tuple[str, t.Any]:
    """
    Abstract method to extract a specific property from a node.

    Parameters
    ----------
    node : Node
        The node from which to extract the property.

    Returns
    -------
    t.Tuple[str, t.Any]
        A tuple containing the property name and the extracted value.
    """
    pass

generate_execution_plan

generate_execution_plan(kg: KnowledgeGraph) -> List[Coroutine]

Generates a list of coroutines to be executed in parallel by the Executor.

Parameters

Name    Type            Description                             Default
kg      KnowledgeGraph  The knowledge graph to be transformed.  required

Returns

Type             Description
List[Coroutine]  A list of coroutines to be executed in parallel.

Source code in src/ragas/testset/transforms/base.py
def generate_execution_plan(self, kg: KnowledgeGraph) -> t.List[t.Coroutine]:
    """
    Generates a list of coroutines to be executed in parallel by the Executor.

    Parameters
    ----------
    kg : KnowledgeGraph
        The knowledge graph to be transformed.

    Returns
    -------
    t.List[t.Coroutine]
        A list of coroutines to be executed in parallel.
    """

    async def apply_extract(node: Node):
        property_name, property_value = await self.extract(node)
        if node.get_property(property_name) is None:
            node.add_property(property_name, property_value)
        else:
            logger.warning(
                "Property '%s' already exists in node '%.6s'. Skipping!",
                property_name,
                node.id,
            )

    filtered = self.filter(kg)
    return [apply_extract(node) for node in filtered.nodes]
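The skip-if-exists behaviour of apply_extract can be sketched with a dict-based node and print in place of the real Node API and logger:

```python
import asyncio

async def apply_extract(node, extract):
    # simplified version of the closure above: write the extracted property
    # only when the node does not already have it
    property_name, property_value = await extract(node)
    if node.get(property_name) is None:
        node[property_name] = property_value
    else:
        print(f"Property '{property_name}' already exists. Skipping!")

async def word_count(node):
    return "word_count", len(node["text"].split())

node = {"text": "one two three", "word_count": 99}  # property already present
asyncio.run(apply_extract(node, word_count))
assert node["word_count"] == 99  # existing value was not overwritten
```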

NodeFilter dataclass

NodeFilter(name: str = '', filter_nodes: Callable[[Node], bool] = lambda: default_filter())

Bases: BaseGraphTransformation

custom_filter abstractmethod async

custom_filter(node: Node, kg: KnowledgeGraph) -> bool

Abstract method to filter a node based on a prompt.

Parameters

Name    Type   Description               Default
node    Node   The node to be filtered.  required

Returns

Type    Description
bool    A boolean indicating whether the node should be filtered.

Source code in src/ragas/testset/transforms/base.py
@abstractmethod
async def custom_filter(self, node: Node, kg: KnowledgeGraph) -> bool:
    """
    Abstract method to filter a node based on a prompt.

    Parameters
    ----------
    node : Node
        The node to be filtered.

    Returns
    -------
    bool
        A boolean indicating whether the node should be filtered.
    """
    pass

generate_execution_plan

generate_execution_plan(kg: KnowledgeGraph) -> List[Coroutine]

Generates a list of coroutines to be executed.

Source code in src/ragas/testset/transforms/base.py
def generate_execution_plan(self, kg: KnowledgeGraph) -> t.List[t.Coroutine]:
    """
    Generates a list of coroutines to be executed
    """

    async def apply_filter(node: Node):
        if await self.custom_filter(node, kg):
            kg.remove_node(node)

    filtered = self.filter(kg)
    return [apply_filter(node) for node in filtered.nodes]
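The NodeFilter pattern above can be sketched with a list-based stand-in graph: nodes for which custom_filter returns True are removed. ShortTextFilter is hypothetical, not a ragas class:

```python
import asyncio

class ShortTextFilter:
    """Hypothetical node filter: drops nodes whose text is under 3 words."""

    async def custom_filter(self, node, kg):
        return len(node.get("text", "").split()) < 3

    def generate_execution_plan(self, kg):
        async def apply_filter(node):
            if await self.custom_filter(node, kg):
                kg.remove(node)
        return [apply_filter(node) for node in list(kg)]

async def run_plan(plan):
    await asyncio.gather(*plan)

kg = [{"text": "too short"}, {"text": "this one is long enough"}]
asyncio.run(run_plan(ShortTextFilter().generate_execution_plan(kg)))
# only the longer node remains
```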

RelationshipBuilder dataclass

RelationshipBuilder(name: str = '', filter_nodes: Callable[[Node], bool] = lambda: default_filter())

Bases: BaseGraphTransformation

Abstract base class for building relationships in a KnowledgeGraph.

Methods

Name       Description
transform  Transforms the KnowledgeGraph by building relationships.

transform abstractmethod async

transform(kg: KnowledgeGraph) -> List[Relationship]

Transforms the KnowledgeGraph by building relationships.

Parameters

Name    Type            Description                             Default
kg      KnowledgeGraph  The knowledge graph to be transformed.  required

Returns

Type                Description
List[Relationship]  A list of new relationships.

Source code in src/ragas/testset/transforms/base.py
@abstractmethod
async def transform(self, kg: KnowledgeGraph) -> t.List[Relationship]:
    """
    Transforms the KnowledgeGraph by building relationships.

    Parameters
    ----------
    kg : KnowledgeGraph
        The knowledge graph to be transformed.

    Returns
    -------
    t.List[Relationship]
        A list of new relationships.
    """
    pass

generate_execution_plan

generate_execution_plan(kg: KnowledgeGraph) -> List[Coroutine]

Generates a list of coroutines to be executed in parallel by the Executor.

Parameters

Name    Type            Description                             Default
kg      KnowledgeGraph  The knowledge graph to be transformed.  required

Returns

Type             Description
List[Coroutine]  A list of coroutines to be executed in parallel.

Source code in src/ragas/testset/transforms/base.py
def generate_execution_plan(self, kg: KnowledgeGraph) -> t.List[t.Coroutine]:
    """
    Generates a list of coroutines to be executed in parallel by the Executor.

    Parameters
    ----------
    kg : KnowledgeGraph
        The knowledge graph to be transformed.

    Returns
    -------
    t.List[t.Coroutine]
        A list of coroutines to be executed in parallel.
    """

    async def apply_build_relationships(
        filtered_kg: KnowledgeGraph, original_kg: KnowledgeGraph
    ):
        relationships = await self.transform(filtered_kg)
        original_kg.relationships.extend(relationships)

    filtered_kg = self.filter(kg)
    return [apply_build_relationships(filtered_kg=filtered_kg, original_kg=kg)]

Splitter dataclass

Splitter(name: str = '', filter_nodes: Callable[[Node], bool] = lambda: default_filter())

Bases: BaseGraphTransformation

Abstract base class for splitters that transform the KnowledgeGraph by splitting its nodes into smaller chunks.

Methods

Name       Description
transform  Transforms the KnowledgeGraph by splitting its nodes into smaller chunks.
split      Abstract method to split a node into smaller chunks.

transform async

transform(kg: KnowledgeGraph) -> Tuple[List[Node], List[Relationship]]

Transforms the KnowledgeGraph by splitting its nodes into smaller chunks.

Parameters

Name    Type            Description                             Default
kg      KnowledgeGraph  The knowledge graph to be transformed.  required

Returns

Type                                   Description
Tuple[List[Node], List[Relationship]]  A tuple containing a list of new nodes and a list of new relationships.

Source code in src/ragas/testset/transforms/base.py
async def transform(
    self, kg: KnowledgeGraph
) -> t.Tuple[t.List[Node], t.List[Relationship]]:
    """
    Transforms the KnowledgeGraph by splitting its nodes into smaller chunks.

    Parameters
    ----------
    kg : KnowledgeGraph
        The knowledge graph to be transformed.

    Returns
    -------
    t.Tuple[t.List[Node], t.List[Relationship]]
        A tuple containing a list of new nodes and a list of new relationships.
    """
    filtered = self.filter(kg)

    all_nodes = []
    all_relationships = []
    for node in filtered.nodes:
        nodes, relationships = await self.split(node)
        all_nodes.extend(nodes)
        all_relationships.extend(relationships)

    return all_nodes, all_relationships

split abstractmethod async

split(node: Node) -> Tuple[List[Node], List[Relationship]]

Abstract method to split a node into smaller chunks.

Parameters

Name    Type   Description            Default
node    Node   The node to be split.  required

Returns

Type                                   Description
Tuple[List[Node], List[Relationship]]  A tuple containing a list of new nodes and a list of new relationships.

Source code in src/ragas/testset/transforms/base.py
@abstractmethod
async def split(self, node: Node) -> t.Tuple[t.List[Node], t.List[Relationship]]:
    """
    Abstract method to split a node into smaller chunks.

    Parameters
    ----------
    node : Node
        The node to be split.

    Returns
    -------
    t.Tuple[t.List[Node], t.List[Relationship]]
        A tuple containing a list of new nodes and a list of new relationships.
    """
    pass

generate_execution_plan

generate_execution_plan(kg: KnowledgeGraph) -> List[Coroutine]

Generates a list of coroutines to be executed in parallel by the Executor.

Parameters

Name    Type            Description                             Default
kg      KnowledgeGraph  The knowledge graph to be transformed.  required

Returns

Type             Description
List[Coroutine]  A list of coroutines to be executed in parallel.

Source code in src/ragas/testset/transforms/base.py
def generate_execution_plan(self, kg: KnowledgeGraph) -> t.List[t.Coroutine]:
    """
    Generates a list of coroutines to be executed in parallel by the Executor.

    Parameters
    ----------
    kg : KnowledgeGraph
        The knowledge graph to be transformed.

    Returns
    -------
    t.List[t.Coroutine]
        A list of coroutines to be executed in parallel.
    """

    async def apply_split(node: Node):
        nodes, relationships = await self.split(node)
        kg.nodes.extend(nodes)
        kg.relationships.extend(relationships)

    filtered = self.filter(kg)
    return [apply_split(node) for node in filtered.nodes]
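The Splitter pattern can be sketched with dict stand-ins: one node is split into chunk nodes plus relationships linking each chunk to its parent. SentenceSplitter and the triple-based relationship format are hypothetical, not the ragas classes:

```python
import asyncio

class SentenceSplitter:
    """Hypothetical splitter: one chunk node per sentence, linked to its parent."""

    async def split(self, node):
        sentences = [s.strip() for s in node["text"].split(".") if s.strip()]
        chunks = [{"id": f"{node['id']}-{i}", "text": s}
                  for i, s in enumerate(sentences)]
        # (source, label, target) triples stand in for Relationship objects
        relationships = [(node["id"], "child", chunk["id"]) for chunk in chunks]
        return chunks, relationships

doc = {"id": "doc1", "text": "First sentence. Second sentence."}
chunks, rels = asyncio.run(SentenceSplitter().split(doc))
# chunks doc1-0 and doc1-1, each linked to doc1 by a "child" relationship
```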

Parallel

Parallel(*transformations: BaseGraphTransformation)

A collection of transformations to be applied in parallel.

Examples

>>> Parallel(HeadlinesExtractor(), SummaryExtractor())
Source code in src/ragas/testset/transforms/engine.py
def __init__(self, *transformations: BaseGraphTransformation):
    self.transformations = list(transformations)

EmbeddingExtractor dataclass

EmbeddingExtractor(name: str = '', filter_nodes: Callable[[Node], bool] = lambda: default_filter(), property_name: str = 'embedding', embed_property_name: str = 'page_content', embedding_model: BaseRagasEmbeddings = embedding_factory())

Bases: Extractor

A class for extracting embeddings from nodes in a KnowledgeGraph.

Attributes

Name                 Type                 Description
property_name        str                  The name of the property in which to store the embedding.
embed_property_name  str                  The name of the property containing the text to embed.
embedding_model      BaseRagasEmbeddings  The embedding model used to generate the embeddings.

extract async

extract(node: Node) -> Tuple[str, Any]

Extracts the embedding for a given node.

Raises

Type        Description
ValueError  If the property to be embedded is not a string.

Source code in src/ragas/testset/transforms/extractors/embeddings.py
async def extract(self, node: Node) -> t.Tuple[str, t.Any]:
    """
    Extracts the embedding for a given node.

    Raises
    ------
    ValueError
        If the property to be embedded is not a string.
    """
    text = node.get_property(self.embed_property_name)
    if not isinstance(text, str):
        raise ValueError(
            f"node.property('{self.embed_property_name}') must be a string, found '{type(text)}'"
        )
    embedding = self.embedding_model.embed_query(text)
    return self.property_name, embedding
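The extract contract above can be sketched with a toy embedding model in place of a real BaseRagasEmbeddings instance (ToyEmbeddingModel and ToyEmbeddingExtractor are hypothetical stand-ins):

```python
import asyncio

class ToyEmbeddingModel:
    """Stand-in embedding model: a 2-dim 'embedding' from simple text stats."""

    def embed_query(self, text: str):
        return [float(len(text)), float(len(text.split()))]

class ToyEmbeddingExtractor:
    property_name = "embedding"
    embed_property_name = "page_content"
    embedding_model = ToyEmbeddingModel()

    async def extract(self, node: dict):
        text = node.get(self.embed_property_name)
        if not isinstance(text, str):
            raise ValueError(
                f"node.property('{self.embed_property_name}') must be a string, "
                f"found '{type(text)}'"
            )
        return self.property_name, self.embedding_model.embed_query(text)

name, emb = asyncio.run(
    ToyEmbeddingExtractor().extract({"page_content": "hello world"})
)
# name == "embedding"; emb == [11.0, 2.0]
```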

HeadlinesExtractor dataclass

HeadlinesExtractor(name: str = '', filter_nodes: Callable[[Node], bool] = lambda: default_filter(), llm: BaseRagasLLM = llm_factory(), merge_if_possible: bool = True, max_token_limit: int = 32000, tokenizer: Encoding = DEFAULT_TOKENIZER, property_name: str = 'headlines', prompt: HeadlinesExtractorPrompt = HeadlinesExtractorPrompt(), max_num: int = 5)

Bases: LLMBasedExtractor

Extracts headlines from the given text.

Attributes

Name           Type                      Description
property_name  str                       The name of the property to extract.
prompt         HeadlinesExtractorPrompt  The prompt used for extraction.

KeyphrasesExtractor dataclass

KeyphrasesExtractor(name: str = '', filter_nodes: Callable[[Node], bool] = lambda: default_filter(), llm: BaseRagasLLM = llm_factory(), merge_if_possible: bool = True, max_token_limit: int = 32000, tokenizer: Encoding = DEFAULT_TOKENIZER, property_name: str = 'keyphrases', prompt: KeyphrasesExtractorPrompt = KeyphrasesExtractorPrompt(), max_num: int = 5)

Bases: LLMBasedExtractor

Extracts top keyphrases from the given text.

Attributes

Name           Type                       Description
property_name  str                        The name of the property to extract.
prompt         KeyphrasesExtractorPrompt  The prompt used for extraction.

SummaryExtractor dataclass

SummaryExtractor(name: str = '', filter_nodes: Callable[[Node], bool] = lambda: default_filter(), llm: BaseRagasLLM = llm_factory(), merge_if_possible: bool = True, max_token_limit: int = 32000, tokenizer: Encoding = DEFAULT_TOKENIZER, property_name: str = 'summary', prompt: SummaryExtractorPrompt = SummaryExtractorPrompt())

Bases: LLMBasedExtractor

Extracts a summary from the given text.

Attributes

Name           Type                    Description
property_name  str                     The name of the property to extract.
prompt         SummaryExtractorPrompt  The prompt used for extraction.

TitleExtractor dataclass

TitleExtractor(name: str = '', filter_nodes: Callable[[Node], bool] = lambda: default_filter(), llm: BaseRagasLLM = llm_factory(), merge_if_possible: bool = True, max_token_limit: int = 32000, tokenizer: Encoding = DEFAULT_TOKENIZER, property_name: str = 'title', prompt: TitleExtractorPrompt = TitleExtractorPrompt())

Bases: LLMBasedExtractor

Extracts the title from the given text.

Attributes

Name           Type                  Description
property_name  str                   The name of the property to extract.
prompt         TitleExtractorPrompt  The prompt used for extraction.

CustomNodeFilter dataclass

CustomNodeFilter(name: str = '', filter_nodes: Callable[[Node], bool] = lambda: default_filter(), llm: BaseRagasLLM = llm_factory(), scoring_prompt: PydanticPrompt = QuestionPotentialPrompt(), min_score: int = 2, rubrics: Dict[str, str] = lambda: DEFAULT_RUBRICS())

Bases: LLMBasedNodeFilter

Returns True if the score is less than min_score.

SummaryCosineSimilarityBuilder dataclass

SummaryCosineSimilarityBuilder(name: str = '', filter_nodes: Callable[[Node], bool] = lambda: default_filter(), property_name: str = 'summary_embedding', new_property_name: str = 'summary_cosine_similarity', threshold: float = 0.1)

Bases: CosineSimilarityBuilder

filter

filter(kg: KnowledgeGraph) -> KnowledgeGraph

Filters the knowledge graph to only include nodes with a summary embedding.

Source code in src/ragas/testset/transforms/relationship_builders/cosine.py
def filter(self, kg: KnowledgeGraph) -> KnowledgeGraph:
    """
    Filters the knowledge graph to only include nodes with a summary embedding.
    """
    nodes = []
    for node in kg.nodes:
        if node.type == NodeType.DOCUMENT:
            emb = node.get_property(self.property_name)
            if emb is None:
                raise ValueError(f"Node {node.id} has no {self.property_name}")
            nodes.append(node)
    return KnowledgeGraph(nodes=nodes)

default_transforms

default_transforms(documents: List[Document], llm: BaseRagasLLM, embedding_model: BaseRagasEmbeddings) -> Transforms

Creates and returns a default set of transforms for processing a knowledge graph.

This function defines a series of transformation steps to be applied to a knowledge graph, including extracting summaries, keyphrases, titles, headlines, and embeddings, as well as building similarity relationships between nodes.

Returns

Type        Description
Transforms  A list of transformation steps to be applied to the knowledge graph.

Source code in src/ragas/testset/transforms/default.py
def default_transforms(
    documents: t.List[LCDocument],
    llm: BaseRagasLLM,
    embedding_model: BaseRagasEmbeddings,
) -> Transforms:
    """
    Creates and returns a default set of transforms for processing a knowledge graph.

    This function defines a series of transformation steps to be applied to a
    knowledge graph, including extracting summaries, keyphrases, titles,
    headlines, and embeddings, as well as building similarity relationships
    between nodes.



    Returns
    -------
    Transforms
        A list of transformation steps to be applied to the knowledge graph.

    """

    def count_doc_length_bins(documents, bin_ranges):
        data = [num_tokens_from_string(doc.page_content) for doc in documents]
        bins = {f"{start}-{end}": 0 for start, end in bin_ranges}

        for num in data:
            for start, end in bin_ranges:
                if start <= num <= end:
                    bins[f"{start}-{end}"] += 1
                    break  # Move to the next number once it’s placed in a bin

        return bins

    def filter_doc_with_num_tokens(node, min_num_tokens=500):
        return (
            node.type == NodeType.DOCUMENT
            and num_tokens_from_string(node.properties["page_content"]) > min_num_tokens
        )

    def filter_docs(node):
        return node.type == NodeType.DOCUMENT

    def filter_chunks(node):
        return node.type == NodeType.CHUNK

    bin_ranges = [(0, 100), (101, 500), (501, 100000)]
    result = count_doc_length_bins(documents, bin_ranges)
    result = {k: v / len(documents) for k, v in result.items()}

    transforms = []

    if result["501-100000"] >= 0.25:
        headline_extractor = HeadlinesExtractor(
            llm=llm, filter_nodes=lambda node: filter_doc_with_num_tokens(node)
        )
        splitter = HeadlineSplitter(min_tokens=500)
        summary_extractor = SummaryExtractor(
            llm=llm, filter_nodes=lambda node: filter_doc_with_num_tokens(node)
        )

        theme_extractor = ThemesExtractor(
            llm=llm, filter_nodes=lambda node: filter_chunks(node)
        )
        ner_extractor = NERExtractor(
            llm=llm, filter_nodes=lambda node: filter_chunks(node)
        )

        summary_emb_extractor = EmbeddingExtractor(
            embedding_model=embedding_model,
            property_name="summary_embedding",
            embed_property_name="summary",
            filter_nodes=lambda node: filter_doc_with_num_tokens(node),
        )

        cosine_sim_builder = CosineSimilarityBuilder(
            property_name="summary_embedding",
            new_property_name="summary_similarity",
            threshold=0.7,
            filter_nodes=lambda node: filter_doc_with_num_tokens(node),
        )

        ner_overlap_sim = OverlapScoreBuilder(
            threshold=0.01, filter_nodes=lambda node: filter_chunks(node)
        )

        node_filter = CustomNodeFilter(
            llm=llm, filter_nodes=lambda node: filter_chunks(node)
        )
        transforms = [
            headline_extractor,
            splitter,
            summary_extractor,
            node_filter,
            Parallel(summary_emb_extractor, theme_extractor, ner_extractor),
            Parallel(cosine_sim_builder, ner_overlap_sim),
        ]
    elif result["101-500"] >= 0.25:
        summary_extractor = SummaryExtractor(
            llm=llm, filter_nodes=lambda node: filter_doc_with_num_tokens(node, 100)
        )
        summary_emb_extractor = EmbeddingExtractor(
            embedding_model=embedding_model,
            property_name="summary_embedding",
            embed_property_name="summary",
            filter_nodes=lambda node: filter_doc_with_num_tokens(node, 100),
        )

        cosine_sim_builder = CosineSimilarityBuilder(
            property_name="summary_embedding",
            new_property_name="summary_similarity",
            threshold=0.5,
            filter_nodes=lambda node: filter_doc_with_num_tokens(node, 100),
        )

        ner_extractor = NERExtractor(llm=llm)
        ner_overlap_sim = OverlapScoreBuilder(threshold=0.01)
        theme_extractor = ThemesExtractor(
            llm=llm, filter_nodes=lambda node: filter_docs(node)
        )
        node_filter = CustomNodeFilter(llm=llm)

        transforms = [
            summary_extractor,
            node_filter,
            Parallel(summary_emb_extractor, theme_extractor, ner_extractor),
            Parallel(cosine_sim_builder, ner_overlap_sim),
        ]
    else:
        raise ValueError(
            "Documents appears to be too short (ie 100 tokens or less). Please provide longer documents."
        )

    return transforms
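The document-length routing above can be illustrated in isolation: the share of documents falling into each token bin decides which pipeline is built. Here raw token counts stand in for num_tokens_from_string calls:

```python
def count_doc_length_bins(token_counts, bin_ranges):
    # same binning logic as in default_transforms, applied to raw token counts
    bins = {f"{start}-{end}": 0 for start, end in bin_ranges}
    for num in token_counts:
        for start, end in bin_ranges:
            if start <= num <= end:
                bins[f"{start}-{end}"] += 1
                break  # each count lands in exactly one bin
    return bins

bin_ranges = [(0, 100), (101, 500), (501, 100000)]
counts = count_doc_length_bins([50, 300, 800, 1200], bin_ranges)
shares = {k: v / 4 for k, v in counts.items()}
assert shares["501-100000"] == 0.5  # >= 0.25, so the long-document pipeline runs
```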

apply_transforms

apply_transforms(kg: KnowledgeGraph, transforms: Transforms, run_config: RunConfig = RunConfig(), callbacks: Optional[Callbacks] = None)

Apply a list of transformations to a knowledge graph in place.

Source code in src/ragas/testset/transforms/engine.py
def apply_transforms(
    kg: KnowledgeGraph,
    transforms: Transforms,
    run_config: RunConfig = RunConfig(),
    callbacks: t.Optional[Callbacks] = None,
):
    """
    Apply a list of transformations to a knowledge graph in place.
    """
    # apply nest_asyncio to fix the event loop issue in jupyter
    apply_nest_asyncio()

    # if single transformation, wrap it in a list
    if isinstance(transforms, BaseGraphTransformation):
        transforms = [transforms]

    # apply the transformations
    # if Sequences, apply each transformation sequentially
    if isinstance(transforms, t.List):
        for transform in transforms:
            asyncio.run(
                run_coroutines(
                    transform.generate_execution_plan(kg),
                    get_desc(transform),
                    run_config.max_workers,
                )
            )
    # if Parallel, collect inside it and run it all
    elif isinstance(transforms, Parallel):
        asyncio.run(
            run_coroutines(
                transforms.generate_execution_plan(kg),
                get_desc(transforms),
                run_config.max_workers,
            )
        )
    else:
        raise ValueError(
            f"Invalid transforms type: {type(transforms)}. Expects a list of BaseGraphTransformations or a Parallel instance."
        )
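The control flow above, transforms applied one after another while the coroutines within each transform run concurrently, can be sketched with a list-based stand-in graph (AddTag is hypothetical, not a ragas class):

```python
import asyncio

class AddTag:
    """Hypothetical transform whose plan tags every node in a list-based graph."""

    def __init__(self, tag):
        self.tag = tag

    def generate_execution_plan(self, kg):
        async def apply(node):
            node.setdefault("tags", []).append(self.tag)
        return [apply(node) for node in kg]

async def run_plan(plan):
    await asyncio.gather(*plan)  # nodes handled concurrently within one transform

kg = [{"id": 1}, {"id": 2}]
for transform in [AddTag("a"), AddTag("b")]:  # transforms run one after another
    asyncio.run(run_plan(transform.generate_execution_plan(kg)))
assert kg[0]["tags"] == ["a", "b"]
```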

rollback_transforms

rollback_transforms(kg: KnowledgeGraph, transforms: Transforms)

Rollback a list of transformations from a knowledge graph.

Note

This is not yet implemented.

Source code in src/ragas/testset/transforms/engine.py
def rollback_transforms(kg: KnowledgeGraph, transforms: Transforms):
    """
    Rollback a list of transformations from a knowledge graph.

    Note
    ----
    This is not yet implemented. Please open an issue if you need this feature.
    """
    # this will allow you to roll back the transformations
    raise NotImplementedError