---
license: apache-2.0
datasets:
- UFRGS/brwac
- wikimedia/wikipedia
- peluz/lener_br
- nilc-nlp/assin2
language:
- pt
metrics:
- f1
- accuracy
- precision
base_model:
- answerdotai/ModernBERT-base
pipeline_tag: fill-mask
---
# ModBERTBr

ModBERTBr is derived from ModernBERT and specialized for Brazilian Portuguese, pre-trained on data in this language.
The pre-training data were gathered from the BrWaC corpus and the Portuguese subset of the Wikipedia dataset.
In addition, a custom tokenizer was implemented to support ModBERTBr. This tokenizer uses the Unigram algorithm as the backbone model to generate the vocabulary.
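As a quick illustration of the custom tokenizer, you can load it and inspect how it segments Brazilian Portuguese text (a minimal sketch; the example sentence is arbitrary and only the model id from the Usage section below is assumed):

```python
from transformers import AutoTokenizer

# Load the custom Unigram-based tokenizer shipped with ModBERTBr
tokenizer = AutoTokenizer.from_pretrained("wallacelw/ModBERTBr")

# Inspect how a Brazilian Portuguese sentence is split into subword tokens
print(tokenizer.tokenize("O modelo foi pré-treinado em português brasileiro."))
print("Vocabulary size:", tokenizer.vocab_size)
```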
## Usage
You can use this model directly with the `transformers` library starting from v4.48.0:

```bash
pip install -U "transformers>=4.48.0"
```
Since ModBERTBr is a Masked Language Model (MLM), you can use the `fill-mask` pipeline or load it via `AutoModelForMaskedLM`. To use ModBERTBr for downstream tasks like classification, retrieval, or QA, fine-tune it following standard BERT fine-tuning recipes, as in the sketch below.
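For example, fine-tuning for sequence classification follows the usual `transformers` pattern (a minimal sketch, not the authors' recipe; the toy dataset, `num_labels=2`, and the `output_dir` name are placeholder assumptions):

```python
from datasets import Dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    DataCollatorWithPadding,
    Trainer,
    TrainingArguments,
)

model_id = "wallacelw/ModBERTBr"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# A fresh classification head is initialized on top of the pre-trained encoder;
# num_labels=2 is a placeholder for a binary task.
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)

# Toy examples only to illustrate the workflow; replace with a real dataset.
data = Dataset.from_dict({
    "text": ["Eu gostei muito do produto.", "O serviço foi péssimo."],
    "label": [1, 0],
})
data = data.map(lambda batch: tokenizer(batch["text"], truncation=True), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="modbertbr-cls", num_train_epochs=1),
    train_dataset=data,
    data_collator=DataCollatorWithPadding(tokenizer),
)
trainer.train()
```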
⚠️ If your GPU supports it, we recommend using ModBERTBr with Flash Attention 2 to reach the highest efficiency. To do so, install Flash Attention as follows, then use the model as normal:

```bash
pip install flash-attn
```
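With Flash Attention installed, you can request it explicitly when loading the model (a minimal sketch; `attn_implementation` is the standard `transformers` argument, and the `bfloat16` dtype and `device_map="auto"` are assumptions, not requirements of ModBERTBr):

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_id = "wallacelw/ModBERTBr"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# flash_attention_2 requires a CUDA GPU and a half-precision dtype.
model = AutoModelForMaskedLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
    device_map="auto",
)
```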
Using `AutoModelForMaskedLM`:

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "wallacelw/ModBERTBr"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

text = "A capital do Brasil é [MASK]."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)

# To get predictions for the mask:
masked_index = inputs["input_ids"][0].tolist().index(tokenizer.mask_token_id)
predicted_token_id = outputs.logits[0, masked_index].argmax(dim=-1)
predicted_token = tokenizer.decode(predicted_token_id)
print("Predicted token:", predicted_token)
```
Using a pipeline:

```python
from pprint import pprint

from transformers import pipeline

pipe = pipeline(
    "fill-mask",
    model="wallacelw/ModBERTBr",
)

input_text = "O planeta [MASK] é o terceiro do sistema solar."
results = pipe(input_text)
pprint(results)
```
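The fill-mask pipeline returns its top candidates for the masked position; if you only want a few, you can pass `top_k`, e.g. `pipe(input_text, top_k=3)` (the value 3 is arbitrary).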
Note: ModBERTBr does not use token type IDs, unlike some earlier BERT models. Most downstream usage is identical to standard BERT models on the Hugging Face Hub, except you can omit the `token_type_ids` parameter.
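In practice this means only `input_ids` and `attention_mask` matter; any `token_type_ids` entry in the encoding can simply be dropped (a minimal sketch; the example sentence is arbitrary, and whether the tokenizer emits `token_type_ids` at all is not assumed):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("wallacelw/ModBERTBr")
enc = tokenizer("Um exemplo qualquer.", return_tensors="pt")

# The model ignores token type IDs, so remove the entry if it exists.
enc.pop("token_type_ids", None)
print(list(enc.keys()))
```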
## Acknowledgments
This work was supported in part by Advanced Micro Devices, Inc. under the AMD AI & HPC Cluster Program. We also thank the respective authors for providing the Wikipedia, BrWaC, ASSIN2, and LeNER-BR datasets.
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{wu2025modbertbr,
  author    = {Wu, Wallace Ben Teng Lin and Garcia, Luis Paulo Faina},
  title     = {ModBERTBr: A ModernBERT-based Model for Brazilian Portuguese},
  booktitle = {Anais do 22º Encontro Nacional de Inteligência Artificial e Computacional (ENIAC)},
  year      = {2025},
  address   = {Fortaleza, CE, Brasil},
  pages     = {2044--2055},
  publisher = {Sociedade Brasileira de Computação},
  issn      = {2763-9061},
  doi       = {10.5753/eniac.2025.14516},
}
```