LlamaIndex Postprocessor Module


A LlamaIndex Postprocessor Module is a LlamaIndex module that contains implementations of node postprocessors.


References

2023

  • https://gpt-index.readthedocs.io/en/latest/core_modules/query_modules/node_postprocessors/root.html
    • QUOTE: Node postprocessors are a set of modules that take a set of nodes, and apply some kind of transformation or filtering before returning them.
    • In LlamaIndex, node postprocessors are most commonly applied within a query engine, after the node retrieval step and before the response synthesis step.
    • LlamaIndex offers several node postprocessors for immediate use, while also providing a simple API for adding your own custom postprocessors.
    • ...
    • Below you can find guides for each node postprocessor.
      • Modules
        • SimilarityPostprocessor
        • KeywordNodePostprocessor
        • MetadataReplacementPostProcessor
        • SentenceEmbeddingOptimizer
        • CohereRerank
        • SentenceTransformerRerank
        • LLM Rerank
        • FixedRecencyPostprocessor
        • EmbeddingRecencyPostprocessor
        • TimeWeightedPostprocessor
        • (Beta) PIINodePostprocessor
        • (Beta) PrevNextNodePostprocessor
        • (Beta) AutoPrevNextNodePostprocessor
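
The usage pattern described in the quote above, where a postprocessor takes a set of retrieved nodes and transforms or filters them, can be sketched in isolation with the SimilarityPostprocessor. This is a minimal sketch: the import paths follow the 2023-era llama_index package layout and may differ in other versions, and the node texts and scores are made-up stand-ins for retriever output:

from llama_index.indices.postprocessor import SimilarityPostprocessor
from llama_index.schema import NodeWithScore, TextNode

# A hand-built list of scored nodes, standing in for the output of a retriever
nodes = [
    NodeWithScore(node=TextNode(text="Paris is the capital of France."), score=0.85),
    NodeWithScore(node=TextNode(text="The Eiffel Tower is 330 metres tall."), score=0.40),
]

# Keep only nodes whose similarity score meets the cutoff
processor = SimilarityPostprocessor(similarity_cutoff=0.7)
filtered_nodes = processor.postprocess_nodes(nodes)

for n in filtered_nodes:
    print(n.node.get_content(), n.score)

When used inside a query engine, the same postprocessor is passed via index.as_query_engine(node_postprocessors=[...]), so the filtering runs between the node retrieval step and response synthesis.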

2023

  • GBard, 2023-09-12
    • In LlamaIndex, a node postprocessor is a module that takes a set of nodes and applies some kind of transformation or filtering before returning them. Node postprocessors are most commonly applied within a query engine, after the node retrieval step and before the response synthesis step.
    • LlamaIndex offers several node postprocessors for immediate use, including:
      • Keyword filtering: This postprocessor filters out nodes that do not contain the specified keywords.
      • Temporal reasoning: This postprocessor exploits temporal relationships between nodes in order to retrieve additional context, in the event that the existing context may not directly answer the question.
      • Recency: This postprocessor sorts nodes by their creation date and returns the most recent nodes.
      • Previous/next: This postprocessor retrieves additional nodes from the document store, based on the prev/next relationships of the nodes.
    • LlamaIndex also provides a simple API for adding your own custom node postprocessors.
    • Here is an example of how to use the keyword filtering postprocessor. The code builds an index over a local document folder, exposes it as a query engine with a keyword postprocessor that keeps only retrieved nodes containing the keyword "capital", runs the query "What is the capital of France?", and prints the synthesized response:
from llama_index import SimpleDirectoryReader, VectorStoreIndex
from llama_index.indices.postprocessor import KeywordNodePostprocessor  # import path may vary across LlamaIndex versions

# Build an index over local documents ("data" is a placeholder path) and expose it as a query engine
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

# The keyword postprocessor keeps only retrieved nodes that contain "capital"
query_engine = index.as_query_engine(
    node_postprocessors=[KeywordNodePostprocessor(required_keywords=["capital"])]
)

# Run the query and print the synthesized response
response = query_engine.query("What is the capital of France?")
print(response)
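
The custom postprocessor API mentioned above can be sketched as follows. This is a minimal sketch, assuming a LlamaIndex version in which custom postprocessors subclass BaseNodePostprocessor and override a _postprocess_nodes hook; the base-class import path and hook name vary across versions, and ScorePenaltyPostprocessor is a hypothetical example class:

from typing import List, Optional
from llama_index.postprocessor.types import BaseNodePostprocessor  # location is version-dependent
from llama_index.schema import NodeWithScore, QueryBundle

class ScorePenaltyPostprocessor(BaseNodePostprocessor):
    """Hypothetical postprocessor that subtracts a fixed penalty from each node's score."""

    def _postprocess_nodes(
        self, nodes: List[NodeWithScore], query_bundle: Optional[QueryBundle] = None
    ) -> List[NodeWithScore]:
        # Transform every retrieved node, then return the list for response synthesis
        for n in nodes:
            n.score = (n.score or 0.0) - 0.1
        return nodes

# A custom postprocessor is attached to a query engine the same way as the built-in ones:
# query_engine = index.as_query_engine(node_postprocessors=[ScorePenaltyPostprocessor()])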