Rhubarb is a lightweight Python framework that makes it easy to build document understanding applications using multi-modal Large Language Models (LLMs) and embedding models. Rhubarb is built from the ground up to work with Amazon Bedrock and supports multiple foundation models, including the Anthropic Claude 3 multi-modal models and Amazon Nova models for document processing, along with the Amazon Titan Multimodal Embeddings model for embeddings.
Visit Rhubarb documentation.
Rhubarb can perform multiple document processing tasks, such as:
- ✅ Document Q&A
- ✅ Streaming chat with documents (Q&A)
- ✅ Document Summarization
  - 🚀 Page-level summaries
  - 🚀 Full summaries
  - 🚀 Summaries of specific pages
  - 🚀 Streaming summaries
- ✅ Structured data extraction
- ✅ Extraction schema creation assistance
- ✅ Named entity recognition (NER)
  - 🚀 With 50 built-in common entities
- ✅ PII recognition with built-in entities
- ✅ Figure and image understanding from documents
  - 🚀 Explain charts, graphs, and figures
  - 🚀 Perform table reasoning (as figures)
- ✅ Large document processing with a sliding window approach
- ✅ Document classification with vector sampling using multi-modal embedding models
- ✅ Token usage logging to help keep track of costs
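The classification-by-vector-sampling idea above can be sketched generically: embed a few sample pages per class, average them into a class centroid, and assign a new document's embedding to the class whose centroid is most cosine-similar. This is an illustrative sketch with made-up vectors, not Rhubarb's API; in practice the embeddings would come from a multi-modal embedding model such as Amazon Titan.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def centroid(vectors):
    # Element-wise mean of a list of vectors.
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(doc_vec, class_samples):
    # Pick the class whose sample centroid is most similar to the document.
    centroids = {label: centroid(vs) for label, vs in class_samples.items()}
    return max(centroids, key=lambda label: cosine(doc_vec, centroids[label]))

# Toy sample embeddings per class (purely illustrative values).
samples = {
    "invoice": [[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]],
    "resume":  [[0.1, 0.9, 0.2], [0.0, 0.8, 0.3]],
}
print(classify([0.85, 0.15, 0.05], samples))  # → invoice
```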
Rhubarb comes with built-in system prompts that make it easy to use for a number of different document understanding use cases. You can customize Rhubarb by passing in your own system prompts. It supports exact JSON schema-based output generation, which makes it easy to integrate into downstream applications.
- Supports PDF, TIFF, PNG, JPG, and DOCX files (support for Excel, PowerPoint, CSV, WebP, and EML files coming soon)
- Performs document to image conversion internally to work with the multi-modal models
- Works on local files or files stored in S3
- Supports specifying page numbers for multi-page documents
- Supports chat-history based chat for documents
- Supports streaming and non-streaming mode
- Supports Converse API
- Supports Cross-Region Inference
Start by installing Rhubarb using pip:

```bash
pip install pyrhubarb
```
Create a `boto3` session:

```python
import boto3

session = boto3.Session()
```
With a local file:

```python
from rhubarb import DocAnalysis

da = DocAnalysis(
    file_path="./path/to/doc/doc.pdf",
    boto3_session=session,
)
resp = da.run(message="What is the employee's name?")
resp
```
With a file in Amazon S3:

```python
from rhubarb import DocAnalysis

da = DocAnalysis(
    file_path="s3://path/to/doc/doc.pdf",
    boto3_session=session,
)
resp = da.run(message="What is the employee's name?")
resp
```
Rhubarb supports processing documents with more than 20 pages using a sliding window approach. This is particularly useful with Claude models, which can process at most 20 pages at a time.

To enable this feature, set `sliding_window_overlap` to a value between 1 and 10 when creating a `DocAnalysis` object:
```python
doc_analysis = DocAnalysis(
    file_path="path/to/large-document.pdf",
    boto3_session=session,
    sliding_window_overlap=2,  # number of pages to overlap between windows (1-10)
)
```
When the sliding window approach is enabled, Rhubarb will:
- Break the document into chunks of 20 pages
- Process each chunk separately
- Combine the results from all chunks
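The chunking step above can be sketched generically. This is an illustrative approximation of the sliding window idea, not Rhubarb's actual implementation; the exact window size and overlap semantics here are assumptions.

```python
def sliding_windows(num_pages, window_size=20, overlap=2):
    """Split pages 1..num_pages into windows of at most `window_size` pages,
    with `overlap` pages shared between consecutive windows."""
    if num_pages <= window_size:
        return [list(range(1, num_pages + 1))]
    windows = []
    start = 1
    while start <= num_pages:
        end = min(start + window_size - 1, num_pages)
        windows.append(list(range(start, end + 1)))
        if end == num_pages:
            break
        # Next window starts so that `overlap` pages repeat from this one.
        start = end - overlap + 1
    return windows

# A 45-page document yields three windows: 1-20, 19-38, 37-45,
# each sharing 2 pages with its neighbor.
for w in sliding_windows(45):
    print(w[0], "-", w[-1])
```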
Note: The sliding window technique is not yet supported for document classification. When using classification with large documents, only the first 20 pages will be considered.
For more details, see the Large Document Processing Cookbook.
For more usage examples see cookbooks.
See CONTRIBUTING for more information.
This project is licensed under the Apache-2.0 License.