Apertus
Model Summary
Apertus is a family of language models, available in 70B and 8B parameter variants, designed to push the boundaries of fully open, transparent, multilingual models. The models support over 1000 languages and long contexts, are trained exclusively on fully compliant and open data, and achieve performance comparable to models trained behind closed doors.
The model is a decoder-only transformer, pretrained on 15T tokens with a staged curriculum of web, code, and math data. It uses the new xIELU activation function and is trained from scratch with the AdEMAMix optimizer. Post-training included supervised fine-tuning and alignment via QRPO.
Key features
- Fully open model: open weights + open data + full training details, including all data and training recipes
- Massively multilingual: 1811 natively supported languages
- Compliant: Apertus is trained while respecting opt-out consent of data owners (even retrospectively) and avoiding memorization of training data
For more details, refer to our technical report.
How to use
The modeling code for Apertus is available in transformers v4.56.0, so make sure to upgrade your transformers version. You can also load the model with the latest vLLM, which uses transformers as a backend.
```bash
pip install -U transformers
```
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "swiss-ai/Apertus-70B-2509"
device = "cuda"  # or "cpu" for CPU usage

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
).to(device)

# Prepare the model input using the chat template
prompt = "Give me a brief explanation of gravity in simple terms."
messages_think = [
    {"role": "user", "content": prompt}
]

text = tokenizer.apply_chat_template(
    messages_think,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate the response
generated_ids = model.generate(**model_inputs, max_new_tokens=32768)

# Strip the prompt tokens and decode the output
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):]
print(tokenizer.decode(output_ids, skip_special_tokens=True))
```
We recommend setting temperature=0.8 and top_p=0.9 in the sampling parameters.
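As a minimal sketch of how these values could be passed with the transformers generate call shown above (do_sample=True is added here so that the sampling parameters take effect):

```python
# Sketch only: reuses the model and model_inputs objects from the snippet above.
generated_ids = model.generate(
    **model_inputs,
    do_sample=True,    # enable sampling so temperature/top_p are applied
    temperature=0.8,   # recommended temperature
    top_p=0.9,         # recommended nucleus-sampling threshold
    max_new_tokens=32768,
)
```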
Long context processing
By default, Apertus supports a context length of up to 65,536 tokens.
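A minimal sketch of keeping a long input within this 65,536-token window before generating, reusing the tokenizer and model from above; long_document is a hypothetical placeholder for your own text:

```python
# Sketch: truncate a long input to fit the 65,536-token context window.
MAX_CONTEXT = 65536

long_document = "..."  # hypothetical long input text
model_inputs = tokenizer(
    long_document,
    return_tensors="pt",
    truncation=True,
    max_length=MAX_CONTEXT - 1024,  # leave room for the generated tokens
).to(model.device)

generated_ids = model.generate(**model_inputs, max_new_tokens=1024)
output_ids = generated_ids[0][model_inputs.input_ids.shape[1]:]
print(tokenizer.decode(output_ids, skip_special_tokens=True))
```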
Agentic Usage
Apertus supports tool use.
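A minimal sketch of passing tool definitions through the chat template, assuming the Apertus chat template accepts the standard transformers tools argument; get_current_weather is a hypothetical example function:

```python
# Hypothetical example tool; the docstring and type hints are used to build the tool schema.
def get_current_weather(location: str) -> str:
    """Get the current weather in a given location.

    Args:
        location: The city to get the weather for.
    """
    return "sunny"  # placeholder implementation

messages = [{"role": "user", "content": "What is the weather in Bern?"}]

# Pass the tool definitions to the chat template (assumes tool-use support in the template).
text = tokenizer.apply_chat_template(
    messages,
    tools=[get_current_weather],
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(**model_inputs, max_new_tokens=512)
print(tokenizer.decode(generated_ids[0][len(model_inputs.input_ids[0]):], skip_special_tokens=True))
```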
vLLM and SGLang
You can use vLLM and SGLang to deploy the model behind an OpenAI-compatible API.
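For example, once a vLLM or SGLang server is running locally (the endpoint URL and port below are assumptions, adjust them to your deployment), any OpenAI-compatible client can query it:

```python
from openai import OpenAI

# Assumes an OpenAI-compatible server (vLLM or SGLang) is already serving the model,
# e.g. at http://localhost:8000/v1; the API key is a placeholder for local deployments.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="swiss-ai/Apertus-70B-2509",
    messages=[{"role": "user", "content": "Give me a brief explanation of gravity in simple terms."}],
    temperature=0.8,
    top_p=0.9,
)
print(response.choices[0].message.content)
```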
Evaluation
In this section, we report the evaluation results of the Apertus models.
Base Pre-Trained Model
Instruction Model
Training
Model
- Architecture: Transformer decoder
- Pretraining tokens: 15T
- Precision: bfloat16
Software & hardware
- GPUs: 4096 GH200
- Training Framework: Megatron-LM
- ...
Open resources
All elements used in the training process are made openly available:
- Training data reconstruction scripts: github.com/swiss-ai/pretrain-data
- Intermediate training checkpoints: available on the different branches of this same repository (see the sketch below)
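A minimal sketch of loading one of these intermediate checkpoints via the standard transformers revision argument; the branch name used here is hypothetical and should be replaced with an actual branch listed on the repository:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "swiss-ai/Apertus-70B-2509"
branch = "intermediate-checkpoint-branch"  # hypothetical; check the repository for real branch names

# Load the tokenizer and model from the chosen branch instead of the main revision
tokenizer = AutoTokenizer.from_pretrained(model_name, revision=branch)
model = AutoModelForCausalLM.from_pretrained(model_name, revision=branch)
```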
Limitations
Apertus can produce text on a variety of topics, but the generated content may not always be factually accurate, logically consistent, or free from biases present in the training data. These models should be used as assistive tools rather than definitive sources of information. Users should always verify important information and critically evaluate any generated content.
Legal Aspects
EU AI Act Transparency Documentation and Code of Practice
Data Protection and Copyright Requests
For removal requests of personally identifiable information (PII) or of copyrighted content, please contact the respective dataset owners or us directly.
Output Filter for PII
Currently no output filter is provided.
Please check this site regularly for an output filter that can be used on top of the Apertus LLM. The filter reflects data protection deletion requests which have been addressed to us as the developer of the Apertus LLM. It allows you to remove Personal Data contained in the model output. We strongly advise downloading and applying this output filter from this site every six months.
Contact
To contact us, please send an email to [email protected]
Citation