Marker

Marker converts documents to markdown, JSON, and HTML quickly and accurately.

Performance

Marker benchmarks favorably compared to cloud services like Llamaparse and Mathpix, as well as other open source tools.

These benchmark results are from running single PDF pages serially. Marker is significantly faster when running in batch mode, with a projected throughput of 122 pages/second on an H100 (0.18 seconds per page across 22 processes).

See below for detailed speed and accuracy benchmarks, and instructions on how to run your own benchmarks.

Hybrid Mode

For the highest accuracy, pass the --use_llm flag to use an LLM alongside marker. This will do things like merge tables across pages, handle inline math, format tables properly, and extract values from forms. It can use any gemini or ollama model. By default, it uses gemini-2.0-flash. See below for details.
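
For example, to convert a single file with the LLM pass enabled (this assumes your Gemini credentials are already configured for marker; the marker_single command itself is covered in the Usage section below):

marker_single /path/to/file.pdf --use_llm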

The Table Conversion benchmark below compares marker, gemini flash alone, and marker with use_llm. The use_llm mode offers higher accuracy than marker or gemini alone.

Examples

Markdown and JSON output examples are available for these documents:

Think Python (textbook)
Switch Transformers (arXiv paper)
Multi-column CNN (arXiv paper)

Commercial usage

I want marker to be as widely accessible as possible, while still funding my development/training costs. Research and personal usage is always okay, but there are some restrictions on commercial usage.

The weights for the models are licensed cc-by-nc-sa-4.0, but I will waive that for any organization under $5M USD in gross revenue in the most recent 12-month period AND under $5M in lifetime VC/angel funding raised. You also must not be competitive with the Datalab API. If you want to remove the GPL license requirements (dual-license) and/or use the weights commercially over the revenue limit, check out the options here.

Hosted API

There's a hosted API for marker available through Datalab.

Community

Discord is where we discuss future development.

Installation

You'll need python 3.10+ and PyTorch. You may need to install the CPU version of torch first if you're not using a Mac or a GPU machine. See here for more details.
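
For example, on a CPU-only machine one common approach is PyTorch's CPU wheel index (this is standard PyTorch install guidance rather than anything marker-specific):

pip install torch --index-url https://download.pytorch.org/whl/cpu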

Install with:

pip install marker-pdf

If you want to use marker on documents other than PDFs, you will need to install additional dependencies with:

pip install marker-pdf[full]

Usage

First, some configuration:

Interactive App

I've included a streamlit app that lets you interactively try marker with some basic options. Run it with:

pip install streamlit
marker_gui

Convert a single file

marker_single /path/to/file.pdf

You can pass in PDFs or images.

Options:

The list of supported languages for surya OCR is here. If you don't need OCR, marker can work with any language.
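
For example, to get JSON output instead of the default markdown (the --output_format option also appears in the table extraction example further below):

marker_single /path/to/file.pdf --output_format json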

Convert multiple files

marker /path/to/input/folder --workers 4

Convert multiple files on multiple GPUs

NUM_DEVICES=4 NUM_WORKERS=15 marker_chunk_convert ../pdf_in ../md_out

Use from python

See the PdfConverter class at marker/converters/pdf.py for additional arguments that can be passed.

from marker.converters.pdf import PdfConverter
from marker.models import create_model_dict
from marker.output import text_from_rendered

converter = PdfConverter(
    artifact_dict=create_model_dict(),
)
rendered = converter("FILEPATH")
text, _, images = text_from_rendered(rendered)

rendered will be a pydantic BaseModel with different properties depending on the output type requested. With markdown output (the default), you'll have the properties markdown, metadata, and images. With json output, you'll have children, block_type, and metadata.
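
A minimal sketch of reading those properties, continuing from the rendered object above (markdown output is the default, so the field names below match the description in this paragraph):

# Default markdown output: the rendered object carries the converted markdown,
# a metadata dictionary, and any extracted images.
print(rendered.markdown[:200])  # start of the converted markdown
print(rendered.metadata)        # document-level metadata
print(list(rendered.images))    # names of extracted images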

Custom configuration

You can pass configuration using the ConfigParser. To see all available options, do marker_single --help.

from marker.converters.pdf import PdfConverter
from marker.models import create_model_dict
from marker.config.parser import ConfigParser

config = {
    "output_format": "json",
    "ADDITIONAL_KEY": "VALUE"
}
config_parser = ConfigParser(config)

converter = PdfConverter(
    config=config_parser.generate_config_dict(),
    artifact_dict=create_model_dict(),
    processor_list=config_parser.get_processors(),
    renderer=config_parser.get_renderer(),
    llm_service=config_parser.get_llm_service()
)
rendered = converter("FILEPATH")

Extract blocks

Each document consists of one or more pages. Pages contain blocks, which can themselves contain other blocks. It's possible to programmatically manipulate these blocks.

Here's an example of extracting all forms from a document:

from marker.converters.pdf import PdfConverter
from marker.models import create_model_dict
from marker.schema import BlockTypes

converter = PdfConverter(
    artifact_dict=create_model_dict(),
)
document = converter.build_document("FILEPATH")
forms = document.contained_blocks((BlockTypes.Form,))

Look at the processors for more examples of extracting and manipulating blocks.
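
The same contained_blocks pattern works for any block type; for example, a quick sketch that counts the tables in the same document:

tables = document.contained_blocks((BlockTypes.Table,))
print(f"Found {len(tables)} tables")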

Other converters

You can also use other converters that define different conversion pipelines:

Extract tables

The TableConverter will only convert and extract tables:

from marker.converters.table import TableConverter
from marker.models import create_model_dict
from marker.output import text_from_rendered

converter = TableConverter(
    artifact_dict=create_model_dict(),
)
rendered = converter("FILEPATH")
text, _, images = text_from_rendered(rendered)

This takes all the same configuration as the PdfConverter. You can set force_layout_block=Table to skip layout detection and assume every page is a table. Set output_format=json to also get cell bounding boxes.
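
A sketch of setting those options from python, reusing the ConfigParser pattern shown earlier (the keys mirror the settings named above):

from marker.converters.table import TableConverter
from marker.models import create_model_dict
from marker.config.parser import ConfigParser

config = {
    "force_layout_block": "Table",  # skip layout detection and treat every page as a table
    "output_format": "json",        # JSON output also includes cell bounding boxes
}
config_parser = ConfigParser(config)

converter = TableConverter(
    config=config_parser.generate_config_dict(),
    artifact_dict=create_model_dict(),
)
rendered = converter("FILEPATH")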

You can also run this via the CLI with

marker_single FILENAME --use_llm --force_layout_block Table --converter_cls marker.converters.table.TableConverter --output_format json

Output Formats

Markdown

Markdown output will include:

HTML

HTML output is similar to markdown output:

JSON

JSON output will be organized in a tree-like structure, with the leaf nodes being blocks. Examples of leaf nodes are a single list item, a paragraph of text, or an image.

The output will be a list, with each list item representing a page. Each page is considered a block in the internal marker schema. There are different types of blocks to represent different elements.

Pages have the keys id, block_type, html, polygon, and children, all visible in the example below.

The child blocks have two additional keys, section_hierarchy and images, also shown in the example below.

Note that child blocks of pages can have their own children as well (a tree structure).

{
      "id": "/page/10/Page/366",
      "block_type": "Page",
      "html": "<content-ref src='/page/10/SectionHeader/0'></content-ref><content-ref src='/page/10/SectionHeader/1'></content-ref><content-ref src='/page/10/Text/2'></content-ref><content-ref src='/page/10/Text/3'></content-ref><content-ref src='/page/10/Figure/4'></content-ref><content-ref src='/page/10/SectionHeader/5'></content-ref><content-ref src='/page/10/SectionHeader/6'></content-ref><content-ref src='/page/10/TextInlineMath/7'></content-ref><content-ref src='/page/10/TextInlineMath/8'></content-ref><content-ref src='/page/10/Table/9'></content-ref><content-ref src='/page/10/SectionHeader/10'></content-ref><content-ref src='/page/10/Text/11'></content-ref>",
      "polygon": [[0.0, 0.0], [612.0, 0.0], [612.0, 792.0], [0.0, 792.0]],
      "children": [
        {
          "id": "/page/10/SectionHeader/0",
          "block_type": "SectionHeader",
          "html": "<h1>Supplementary Material for <i>Subspace Adversarial Training</i> </h1>",
          "polygon": [
            [217.845703125, 80.630859375], [374.73046875, 80.630859375],
            [374.73046875, 107.0],
            [217.845703125, 107.0]
          ],
          "children": null,
          "section_hierarchy": {
            "1": "/page/10/SectionHeader/1"
          },
          "images": {}
        },
        ...
        ]
    }
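
A small sketch of walking that tree from python, assuming output_format was set to json as in the Custom configuration example above (block_type and children are the fields described in this section; leaf nodes have children set to null/None):

def walk(block, depth=0):
    # Print each block's type, indented by depth, then recurse into its children.
    print("  " * depth + str(block.block_type))
    for child in block.children or []:
        walk(child, depth + 1)

for page in rendered.children:  # each top-level child is a Page block
    walk(page)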

Metadata

All output formats will return a metadata dictionary, with the following fields:

{
    "table_of_contents": [
      {
        "title": "Introduction",
        "heading_level": 1,
        "page_id": 0,
        "polygon": [...]
      }
    ], // computed PDF table of contents
    "page_stats": [
      {
        "page_id":  0, 
        "text_extraction_method": "pdftext",
        "block_counts": [("Span", 200), ...]
      },
      ...
    ]
}
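
For example, a rough sketch of reading those fields from the output's metadata (assuming rendered.metadata is the dictionary shown above):

meta = rendered.metadata
print(meta["table_of_contents"])  # computed table of contents entries
for page in meta["page_stats"]:
    print(page["page_id"], page["text_extraction_method"])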

LLM Services

When running with the --use_llm flag, you have a choice of services you can use:

These services may have additional optional configuration as well - you can see it by viewing the classes.

Internals

Marker is easy to extend. The core units of marker are:

To customize processing behavior, override the processors. To add new output formats, write a new renderer. For additional input formats, write a new provider.

Processors and renderers can be directly passed into the base PdfConverter, so you can easily specify your own custom processing.
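
As a rough illustration only (the base class, import path, and hook signature here are assumptions, not confirmed by this document; check the processors in the marker source for the real interface), a custom processor might look roughly like this:

from marker.converters.pdf import PdfConverter
from marker.models import create_model_dict
from marker.processors import BaseProcessor  # assumed import path
from marker.schema import BlockTypes

class MyFormProcessor(BaseProcessor):
    # Hypothetical processor that runs over Form blocks after the document is built.
    block_types = (BlockTypes.Form,)

    def __call__(self, document):
        for block in document.contained_blocks(self.block_types):
            pass  # inspect or modify the matched blocks here

converter = PdfConverter(
    artifact_dict=create_model_dict(),
    processor_list=[MyFormProcessor],  # assumed: custom processors passed via processor_list
)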

API server

There is a very simple API server you can run like this:

pip install -U uvicorn fastapi python-multipart
marker_server --port 8001

This will start a FastAPI server that you can access at localhost:8001. Go to localhost:8001/docs to see the endpoint options.

You can send requests like this:

import requests
import json

post_data = {
    'filepath': 'FILEPATH',
    # Add other params here
}

requests.post("http://localhost:8001/marker", data=json.dumps(post_data)).json()

Note that this is not a very robust API, and is only intended for small-scale use. If you want to use this server, but want a more robust conversion option, you can use the hosted Datalab API.

Troubleshooting

There are some settings that you may find useful if things aren't working the way you expect:

Debugging

Pass the debug option to activate debug mode. This will save images of each page with detected layout and text, as well as output a json file with additional bounding box information.
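
For example (assuming the option is exposed on the CLI as a --debug flag, consistent with the other option names):

marker_single /path/to/file.pdf --debug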

Benchmarks

Overall PDF Conversion

We created a benchmark set by extracting single PDF pages from Common Crawl. We scored each method with a heuristic that aligns extracted text against ground-truth text segments, and with an LLM-as-a-judge scoring method.

Method Avg Time Heuristic Score LLM Score
marker 2.83837 95.6709 4.23916
llamaparse 23.348 84.2442 3.97619
mathpix 6.36223 86.4281 4.15626
docling 3.69949 86.7073 3.70429

Benchmarks were run on an H100 for marker and docling; llamaparse and mathpix used their cloud services. We can also break the results down by document type:

Document Type Marker heuristic Marker LLM Llamaparse Heuristic Llamaparse LLM Mathpix Heuristic Mathpix LLM Docling Heuristic Docling LLM
Scientific paper 96.6737 4.34899 87.1651 3.96421 91.2267 4.46861 92.135 3.72422
Book page 97.1846 4.16168 90.9532 4.07186 93.8886 4.35329 90.0556 3.64671
Other 95.1632 4.25076 81.1385 4.01835 79.6231 4.00306 83.8223 3.76147
Form 88.0147 3.84663 66.3081 3.68712 64.7512 3.33129 68.3857 3.40491
Presentation 95.1562 4.13669 81.2261 4 83.6737 3.95683 84.8405 3.86331
Financial document 95.3697 4.39106 82.5812 4.16111 81.3115 4.05556 86.3882 3.8
Letter 98.4021 4.5 93.4477 4.28125 96.0383 4.45312 92.0952 4.09375
Engineering document 93.9244 4.04412 77.4854 3.72059 80.3319 3.88235 79.6807 3.42647
Legal document 96.689 4.27759 86.9769 3.87584 91.601 4.20805 87.8383 3.65552
Newspaper page 98.8733 4.25806 84.7492 3.90323 96.9963 4.45161 92.6496 3.51613
Magazine page 98.2145 4.38776 87.2902 3.97959 93.5934 4.16327 93.0892 4.02041

Throughput

We benchmarked throughput using a single long PDF.

Method Time per page (s) Time per document (s) VRAM used
marker 0.18 43.42 3.17GB

The projected throughput is 122 pages per second on an H100: the VRAM usage allows 22 individual processes, and 22 processes / 0.18 seconds per page ≈ 122 pages per second.

Table Conversion

Marker can extract tables from PDFs using marker.converters.table.TableConverter. The table extraction performance is measured by comparing the extracted HTML representation of tables against the original HTML representations using the test split of FinTabNet. The HTML representations are compared using a tree edit distance based metric to judge both structure and content. Marker detects and identifies the structure of all tables in a PDF page and achieves these scores:

Method Avg score Total tables
marker 0.816 99
marker w/use_llm 0.907 99
gemini 0.829 99

The --use_llm flag can significantly improve table recognition performance, as you can see.

We filter out tables that we cannot align with the ground truth, since fintabnet and our layout model have slightly different detection methods (this results in some tables being split/merged).

Running your own benchmarks

You can benchmark the performance of marker on your machine. Install marker manually with:

git clone https://github.com/VikParuchuri/marker.git
cd marker
poetry install

Overall PDF Conversion

Download the benchmark data here and unzip. Then run the overall benchmark like this:

python benchmarks/overall.py --methods marker --scores heuristic,llm

Options:

Table Conversion

The processed FinTabNet dataset is hosted here and is automatically downloaded. Run the benchmark with:

python benchmarks/table/table.py --max_rows 100

Options:

How it works

Marker is a pipeline of deep learning models:

It only uses models where necessary, which improves speed and accuracy.

Limitations

PDF is a tricky format, so marker will not always work perfectly. Here are some known limitations that are on the roadmap to address:

Note: Passing the --use_llm flag will mostly solve these issues.

Thanks

This work would not have been possible without amazing open source models and datasets, including (but not limited to):

Thank you to the authors of these models and datasets for making them available to the community!
