Model Download | Paper Link | Arxiv Paper Link
Explore the boundaries of visual-text compression.
Release
[2025/x/x] We release DeepSeek-OCR, a model to investigate the role of vision encoders from an LLM-centric viewpoint.
Install
Our environment is CUDA 11.8 + torch 2.6.0.
Clone this repository and navigate to the DeepSeek-OCR folder:

```bash
git clone https://github.com/deepseek-ai/DeepSeek-OCR.git
cd DeepSeek-OCR
```
Conda
```bash
conda create -n deepseek-ocr python=3.12.9 -y
conda activate deepseek-ocr
```
Packages
Download the vllm-0.8.5 wheel, then install the packages:

```bash
pip install torch==2.6.0 torchvision==0.21.0 torchaudio==2.6.0 --index-url https://download.pytorch.org/whl/cu118
pip install vllm-0.8.5+cu118-cp38-abi3-manylinux1_x86_64.whl
pip install -r requirements.txt
pip install flash-attn==2.7.3 --no-build-isolation
```
Note: if you want the vLLM and Transformers code to run in the same environment, you can safely ignore an installation error like: vllm 0.8.5+cu118 requires transformers>=4.51.1
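To confirm the environment is usable despite that resolver warning, a quick check (a minimal sketch, not part of the repo) is to print the installed versions:

```python
# Confirm the stack installed above is coherent: torch built for CUDA 11.8,
# and vllm/transformers importable despite the pip resolver warning.
import importlib.metadata as md
import torch

print(torch.__version__, torch.version.cuda, torch.cuda.is_available())
for pkg in ("vllm", "transformers", "flash-attn"):
    try:
        print(pkg, md.version(pkg))
    except md.PackageNotFoundError:
        print(pkg, "not installed")
```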
vLLM
Note: set INPUT_PATH, OUTPUT_PATH, and the other settings in DeepSeek-OCR-master/DeepSeek-OCR-vllm/config.py before running the scripts below.
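For orientation, the fields referenced above look roughly like this (a hypothetical sketch: only INPUT_PATH and OUTPUT_PATH are named by the note; consult the real config.py for the authoritative names and the remaining settings):

```python
# Hypothetical sketch of DeepSeek-OCR-vllm/config.py; only INPUT_PATH and
# OUTPUT_PATH are named in the note above -- check the real file for the rest.
INPUT_PATH = '/path/to/your/images_or_pdfs'
OUTPUT_PATH = '/path/to/your/output_dir'
```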
```bash
cd DeepSeek-OCR-master/DeepSeek-OCR-vllm
```

Image (streaming output):

```bash
python run_dpsk_ocr_image.py
```

PDF (concurrent processing, roughly 2500 tokens/s on an A100-40G):

```bash
python run_dpsk_ocr_pdf.py
```

Batch evaluation for benchmarks:

```bash
python run_dpsk_ocr_eval_batch.py
```
Transformers
```python
from transformers import AutoModel, AutoTokenizer
import torch
import os

os.environ["CUDA_VISIBLE_DEVICES"] = '0'

model_name = 'deepseek-ai/DeepSeek-OCR'
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModel.from_pretrained(model_name, _attn_implementation='flash_attention_2',
                                  trust_remote_code=True, use_safetensors=True)
model = model.eval().cuda().to(torch.bfloat16)

# prompt = "<image>\nFree OCR. "
prompt = "<image>\n<|grounding|>Convert the document to markdown. "
image_file = 'your_image.jpg'
output_path = 'your/output/dir'

res = model.infer(tokenizer, prompt=prompt, image_file=image_file, output_path=output_path,
                  base_size=1024, image_size=640, crop_mode=True, save_results=True,
                  test_compress=True)
```
Alternatively, run the provided script:

```bash
cd DeepSeek-OCR-master/DeepSeek-OCR-hf
python run_dpsk_ocr.py
```
The current open-source model supports the following modes (a parameter sketch follows the list):
Native resolution:
- Tiny: 512×512 (64 vision tokens) ✅
- Small: 640×640 (100 vision tokens) ✅
- Base: 1024×1024 (256 vision tokens) ✅
- Large: 1280×1280 (400 vision tokens) ✅

Dynamic resolution:
- Gundam: n×640×640 + 1×1024×1024 ✅
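If you select a mode programmatically, the mapping below is a reasonable sketch: it is extrapolated from the resolutions above and from the Transformers example (which uses the Gundam-style base_size=1024, image_size=640, crop_mode=True), not an official table, and it reuses model, tokenizer, prompt, image_file, and output_path from that example.

```python
# Assumed mapping from the modes above to model.infer arguments;
# extrapolated from the listed resolutions, not an official table.
MODES = {
    "tiny":   dict(base_size=512,  image_size=512,  crop_mode=False),
    "small":  dict(base_size=640,  image_size=640,  crop_mode=False),
    "base":   dict(base_size=1024, image_size=1024, crop_mode=False),
    "large":  dict(base_size=1280, image_size=1280, crop_mode=False),
    "gundam": dict(base_size=1024, image_size=640,  crop_mode=True),
}

# Reuses model/tokenizer/prompt/image_file/output_path from the
# Transformers example above.
res = model.infer(tokenizer, prompt=prompt, image_file=image_file,
                  output_path=output_path, save_results=True,
                  **MODES["gundam"])
```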
Prompt examples
```python
# document: <image>\n<|grounding|>Convert the document to markdown.
# other image: <image>\n<|grounding|>OCR this image.
# without layouts: <image>\nFree OCR.
# figures in document: <image>\nParse the figure.
# general: <image>\nDescribe this image in detail.
# rec: <image>\nLocate <|ref|>xxxx<|/ref|> in the image.
# '先天下之忧而忧'
```
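To turn the rec prompt into a concrete call, here is a minimal sketch that reuses model and tokenizer from the Transformers example and substitutes the sample phrase above for xxxx (paths are placeholders):

```python
# Grounding/localization example: ask the model to find a phrase in the image.
# Reuses model/tokenizer from the Transformers section above.
prompt = "<image>\nLocate <|ref|>先天下之忧而忧<|/ref|> in the image."
res = model.infer(tokenizer, prompt=prompt, image_file='your_image.jpg',
                  output_path='your/output/dir', base_size=1024,
                  image_size=640, crop_mode=True, save_results=True)
```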
Visualizations
Acknowledgement
We would like to thank Vary, GOT-OCR2.0, MinerU, PaddleOCR, OneChart, and Slow Perception for their valuable models and ideas.
We also appreciate the benchmarks: Fox and OmniDocBench.
Citation
Coming soon!