
Do Multilingual Large Language Models Mitigate Stereotype Bias?

This repository contains links to the code, models, and data used in our paper: Do Multilingual Large Language Models Mitigate Stereotype Bias?

List of resources

Evaluation pipeline: lm-evaluation-harness, under the branches bbq and crowspairs_es

Model files: huggingface_page, under the Models tab; lamarr-org/2.7B_language for the monolingual models and lamarr-org/2.7B_ENDEFRITES for the multilingual model.

Model training framework: Megatron-LM

Evaluation datasets: huggingface_page, under the Datasets tab; lamarr-org/bbq_language_reformulated for the BBQ dataset and lamarr-org/crows_pairs_language for the CrowS-Pairs dataset (a loading sketch follows this list).
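
As a convenience, the snippet below sketches one way the released checkpoints and evaluation data could be pulled from the Hugging Face Hub with the transformers and datasets libraries. It is not part of the paper's evaluation pipeline (which runs through the lm-evaluation-harness branches listed above); the repository IDs are taken from the list above, and the language placeholder as well as the generic AutoModel/AutoTokenizer loading are assumptions.

  from datasets import load_dataset
  from transformers import AutoModelForCausalLM, AutoTokenizer

  # Multilingual 2.7B model (EN/DE/FR/IT/ES); repository ID taken from the list above.
  model_id = "lamarr-org/2.7B_ENDEFRITES"
  tokenizer = AutoTokenizer.from_pretrained(model_id)
  model = AutoModelForCausalLM.from_pretrained(model_id)

  # Evaluation data: replace <language> with a language identifier exactly as it
  # appears on the Hugging Face Datasets tab (placeholder kept from the list above).
  lang = "<language>"
  bbq = load_dataset(f"lamarr-org/bbq_{lang}_reformulated")
  crows_pairs = load_dataset(f"lamarr-org/crows_pairs_{lang}")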

References

Please cite our paper if you use this code or data in your own work:

@article{nie2024multilingual,
  title={Do Multilingual Large Language Models Mitigate Stereotype Bias?},
  author={Nie, Shangrui and Fromm, Michael and Welch, Charles and G{\"o}rge, Rebekka and Karimi, Akbar and Plepi, Joan and Mowmita, Nazia Afsan and Flores-Herr, Nicolas and Ali, Mehdi and Flek, Lucie},
  journal={arXiv preprint arXiv:2407.05740},
  year={2024}
}
