LlamaIndex Postprocessor Module
A LlamaIndex Postprocessor Module is a LlamaIndex module that contains implementations of node postprocessors.
- AKA: llama_index.indices.postprocessor.
- Context:
- It can (typically) be a submodule of a LlamaIndex Index Module.
- It can (typically) be used within LlamaIndex query engines after LlamaIndex node retrieval and before LlamaIndex response synthesis.
- It can provide postprocessors for keyword filtering, temporal reasoning, recency sorting, etc., such as:
- KeywordFilteringPostprocessor, which filters out irrelevant nodes that lack specified keywords.
- TemporalReasoningPostprocessor, which uses temporal relationships between nodes to expand context.
- RecencyPostprocessor, which sorts nodes by date to prioritize recent ones.
- ...
- It can take in a set of nodes from a query and apply transformations.
- It can help filter, reorder, augment, or otherwise process nodes.
- It can allow custom postprocessors to be added through a simple API.
- ...
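The query-time flow described in the context bullets (retrieve, then postprocess, then synthesize) can be sketched in plain Python. The Node class and function names below are illustrative stand-ins, not actual llama_index classes:

```python
from dataclasses import dataclass

@dataclass
class Node:
    text: str
    score: float

def keyword_postprocessor(nodes, required_keyword):
    # Keep only nodes whose text mentions the required keyword
    return [n for n in nodes if required_keyword in n.text]

def query_pipeline(nodes, keyword):
    # 1. Retrieval has already produced candidate nodes.
    # 2. Postprocess: filter/reorder the nodes before synthesis.
    kept = keyword_postprocessor(nodes, keyword)
    # 3. Synthesize a response from the surviving nodes (stubbed here).
    return " ".join(n.text for n in kept)

nodes = [Node("Paris is the capital of France.", 0.9),
         Node("France is known for its cuisine.", 0.7)]
print(query_pipeline(nodes, "capital"))  # -> Paris is the capital of France.
```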
- Example(s):
- SimilarityPostprocessor.
- KeywordNodePostprocessor.
- CohereRerank.
- Counter-Example(s):
- a LlamaIndex Retriever Module, which selects nodes rather than transforming already-retrieved ones.
- ...
- See: LlamaIndex Node, LlamaIndex Query Engine.
References
2023
- https://gpt-index.readthedocs.io/en/latest/core_modules/query_modules/node_postprocessors/root.html
- QUOTE: Node postprocessors are a set of modules that take a set of nodes, and apply some kind of transformation or filtering before returning them.
- In LlamaIndex, node postprocessors are most commonly applied within a query engine, after the node retrieval step and before the response synthesis step.
- LlamaIndex offers several node postprocessors for immediate use, while also providing a simple API for adding your own custom postprocessors.
- ...
- Below you can find guides for each node postprocessor.
- Modules
- SimilarityPostprocessor
- KeywordNodePostprocessor
- MetadataReplacementPostProcessor
- SentenceEmbeddingOptimizer
- CohereRerank
- SentenceTransformerRerank
- LLM Rerank
- FixedRecencyPostprocessor
- EmbeddingRecencyPostprocessor
- TimeWeightedPostprocessor
- (Beta) PIINodePostprocessor
- (Beta) PrevNextNodePostprocessor
- (Beta) AutoPrevNextNodePostprocessor
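As a rough illustration of the simplest module in the list above: SimilarityPostprocessor drops nodes whose retrieval score falls below a configured cutoff. A minimal plain-Python reimplementation of that idea (not the library's actual class) is:

```python
def similarity_cutoff(scored_nodes, cutoff):
    """Keep (text, score) pairs whose score meets the cutoff,
    mimicking the behavior of SimilarityPostprocessor."""
    return [(text, score) for text, score in scored_nodes if score >= cutoff]

nodes = [("Paris is the capital of France.", 0.92),
         ("The Eiffel Tower is in Paris.", 0.55)]
print(similarity_cutoff(nodes, 0.7))
```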
2023
- GBard, 2023-09-12
- In LlamaIndex, a node postprocessor is a module that takes a set of nodes and applies some kind of transformation or filtering before returning them. Node postprocessors are most commonly applied within a query engine, after the node retrieval step and before the response synthesis step.
- LlamaIndex offers several node postprocessors for immediate use, including:
- Keyword filtering: This postprocessor filters out nodes that do not contain the specified keywords.
- Temporal reasoning: This postprocessor exploits temporal relationships between nodes in order to retrieve additional context, in the event that the existing context may not directly answer the question.
- Recency: This postprocessor sorts nodes by their creation date and returns the most recent nodes.
- Previous/next: This postprocessor retrieves additional nodes from the document store, based on the prev/next relationships of the nodes.
- LlamaIndex also provides a simple API for adding your own custom node postprocessors.
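A custom postprocessor can be sketched as any object exposing a postprocess_nodes method, the interface the library's postprocessors share. The class below is a hypothetical plain-Python illustration, not a subclass of the actual llama_index base class (whose name and signature have varied across versions):

```python
class StopwordDropPostprocessor:
    """Illustrative custom postprocessor: drops nodes whose text
    consists only of stopwords. Duck-typed on the postprocess_nodes
    interface; the class name is hypothetical."""

    STOPWORDS = {"the", "a", "an", "and", "or"}

    def postprocess_nodes(self, nodes, query_bundle=None):
        # Keep a node if at least one of its words is not a stopword
        return [n for n in nodes
                if any(w.lower() not in self.STOPWORDS for w in n.split())]

pp = StopwordDropPostprocessor()
print(pp.postprocess_nodes(["the and a", "Paris is the capital"]))
```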
- Here is an example of applying keyword filtering within a query engine. An index is built over local documents, a query engine is created with a KeywordNodePostprocessor requiring the keyword "capital", and the query "What is the capital of France?" is run; the postprocessor filters the retrieved nodes before the response is synthesized and printed:

from llama_index import VectorStoreIndex, SimpleDirectoryReader
from llama_index.indices.postprocessor import KeywordNodePostprocessor

# Build an index over a local document folder
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

# Create a query engine that filters retrieved nodes by keyword
query_engine = index.as_query_engine(
    node_postprocessors=[KeywordNodePostprocessor(required_keywords=["capital"])]
)

# Run the query and print the synthesized response
response = query_engine.query("What is the capital of France?")
print(response)