A new open-source, multilingual large language model named "Apertus" has been released by a collaboration of Swiss research institutions, including EPFL, ETH Zurich, and the Swiss National Supercomputing Centre (CSCS). The model, whose name is Latin for "open," is designed to be a transparent and ethical alternative to proprietary AI systems.
Apertus was trained on 15 trillion tokens of publicly available data, with personal data removed. It supports 1,811 languages, including underrepresented languages such as Swiss German and Romansh. The model comes in two sizes: an 8-billion-parameter version aimed at individual developers and a 70-billion-parameter version intended for enterprise applications.
The model is released under a permissive open-source license that allows both research and commercial use. It is available through Hugging Face, Swisscom, and the Public AI Inference Utility. Future development will focus on domain-specific adaptations for fields such as healthcare, law, and education.
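Because the weights are distributed through Hugging Face, the model can be loaded with the standard `transformers` workflow. The sketch below is illustrative only: the repository name `swiss-ai/Apertus-8B-Instruct-2509` is an assumption and should be checked against the official model card before use.

```python
# Minimal sketch: loading the 8-billion-parameter Apertus model via Hugging Face
# transformers. The repository name below is assumed, not confirmed.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "swiss-ai/Apertus-8B-Instruct-2509"  # assumed repository name

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

# Generate a short completion in one of the supported languages (here, German).
prompt = "Wie heisst die Hauptstadt der Schweiz?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same pattern applies to the 70-billion-parameter version, though that checkpoint will generally require multiple GPUs or a hosted inference service such as the ones named above.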