
A Python CLI that extracts payment details from invoice images. This is a practical example of building local AI tools and apps with:
  • No cloud costs
  • No network latency
  • No data privacy loss

What’s inside?

In this example, you will learn how to:
  • Set up local AI inference using llama.cpp to run Liquid models entirely on your machine without requiring cloud services or API keys
  • Build a file monitoring system that automatically processes new files dropped into a directory
  • Extract structured output from images using LFM2.5-VL-1.6B, a small vision-language model
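The structured-extraction step can be sketched in a few lines. This is a minimal, hypothetical sketch, assuming llama-server is running locally and exposing its OpenAI-compatible /v1/chat/completions endpoint; the prompt text, JSON field names, and helper names (`build_payload`, `parse_reply`, `send`) are illustrative and not taken from the example's source:

```python
import base64
import json
import urllib.request

# Illustrative prompt; the actual example may phrase this differently.
PROMPT = (
    "Extract the utility type, amount, and currency from this invoice. "
    'Reply with JSON: {"utility": ..., "amount": ..., "currency": ...}'
)

def build_payload(image_path: str) -> dict:
    """Base64-encode the invoice image and build the chat request body."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    return {
        "model": "LiquidAI/LFM2.5-VL-1.6B-GGUF:Q8_0",
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": PROMPT},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    }

def send(payload: dict,
         url: str = "http://localhost:8080/v1/chat/completions") -> str:
    """POST the payload to a local llama-server and return the model's reply."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

def parse_reply(reply: str) -> dict:
    """Pull the JSON object out of the model's free-text reply."""
    start, end = reply.find("{"), reply.rfind("}") + 1
    return json.loads(reply[start:end])
```

The port defaults to 8080 when llama-server is started without flags; the real tool manages the server process itself, as described below.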

Environment setup

You will need:
  • llama.cpp to serve the language models locally.
  • uv to manage Python dependencies and run the application efficiently without creating virtual environments manually.

Install llama.cpp

macOS:
brew install llama.cpp
Linux (build from source):
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build --config Release
# Add build/bin to your PATH
Windows: Follow the instructions in the llama.cpp repository.
Verify that llama-server is available:
which llama-server

Install uv

macOS/Linux:
curl -LsSf https://astral.sh/uv/install.sh | sh
Windows:
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"

How to run it?

Let’s start by cloning the repository:
git clone https://github.com/Liquid4All/cookbook.git
cd cookbook/examples/invoice-parser
The tool supports two modes: watch (continuous monitoring) and process (one-shot). The tool automatically starts and stops llama-server for you, so there is no need to run it separately.

Watch mode

Run it as a background service that continuously monitors a directory and automatically parses invoice images as they land in the folder:
uv run python src/invoice_parser/main.py watch \
    --dir invoices/ \
    --image-model LiquidAI/LFM2.5-VL-1.6B-GGUF:Q8_0 \
    --output bills.csv \
    --process-existing

Process mode

Process specific files or folders and exit:
# Process an entire folder
uv run python src/invoice_parser/main.py process \
    --image-model LiquidAI/LFM2.5-VL-1.6B-GGUF:Q8_0 \
    invoices/

# Process specific files
uv run python src/invoice_parser/main.py process \
    --image-model LiquidAI/LFM2.5-VL-1.6B-GGUF:Q8_0 \
    invoices/water_australia.png invoices/british_gas.png

# Save results to a CSV file
uv run python src/invoice_parser/main.py process \
    --image-model LiquidAI/LFM2.5-VL-1.6B-GGUF:Q8_0 \
    --output bills.csv \
    invoices/
Feel free to modify the path to the invoices directory and the model IDs to suit your needs.
If you have make installed, you can run the application with the following commands:
make run       # watch mode
make process   # one-shot process mode

Results

You can run the tool on the sample images under invoices/ with
uv run python src/invoice_parser/main.py process \
    --image-model LiquidAI/LFM2.5-VL-1.6B-GGUF:Q8_0 \
    invoices/
or, if you have make installed on your system, just run
make process
Results are then printed to the console. The model correctly extracts details from all 4 sample invoices:
| File | Utility | Amount | Currency |
| --- | --- | --- | --- |
| water_australia.png | water | 68.46 | AUD |
| Sample-electric-Bill-2023.jpg | electricity | 28.32 | USD |
| castlewater1.png | water | 436.55 | GBP |
| british_gas.png | electricity | 81.31 | GBP |

Next steps

The model works out of the box on our sample invoices. However, depending on your specific invoice formats and layouts, you may encounter cases where the extraction is not accurate enough. In that case, you can fine-tune the model on your own dataset to improve accuracy.

Fine-tune Vision Language Models

Learn how to fine-tune Vision Language Models on your own dataset to improve extraction accuracy.
