russian_mfa v3.1.0

Russian MFA G2P model v3.1.0

Documentation for this model is available on mfa-models (https://mfa-models.readthedocs.io/).

Model details

  • Maintainer: Montreal Forced Aligner
  • Language: Russian
  • Dialect: N/A
  • Phone set: MFA
  • Model type: G2P model
  • Architecture: phonetisaurus
  • Model version: v3.1.0
  • Trained date: 2024-03-26
  • Compatible MFA version: v3.1.0
  • License: CC BY 4.0
  • Citation:
@techreport{mfa_russian_mfa_g2p_2024,
	author={McAuliffe, Michael and Sonderegger, Morgan},
	title={Russian MFA G2P model v3.1.0},
	address={\url{https://mfa-models.readthedocs.io/G2P model/Russian/Russian MFA G2P model v3_1_0.html}},
	year={2024},
	month={Mar},
}

Installation

Install from the MFA command line:

mfa model download g2p russian_mfa

Or download from the release page.

Intended use

This model is intended for generating pronunciations of words in Russian transcripts.

This model uses the MFA phone set for Russian and was trained from the corresponding MFA pronunciation dictionaries. Pronunciations generated with this G2P model can be appended to a pronunciation dictionary and used when aligning or transcribing.
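
A minimal sketch of that workflow, assuming the MFA 3.x argument order and hypothetical paths (check mfa g2p --help and mfa align --help for your version; the align step also assumes the russian_mfa acoustic model has been downloaded):

# Generate pronunciations for every word found in a corpus (or a plain word list)
mfa g2p /data/russian_corpus russian_mfa /data/russian_g2p.dict

# Align the corpus using the generated dictionary and the pretrained acoustic model
mfa align /data/russian_corpus /data/russian_g2p.dict russian_mfa /data/russian_aligned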

Performance Factors

The trained G2P model should be relatively quick and accurate; however, it may struggle with less common orthographic characters or word types outside of its training data. If so, you may need to supplement the dictionary by generating pronunciations, correcting them, and re-training the G2P model as necessary.
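
A sketch of that loop, with hypothetical file names (mfa train_g2p takes a dictionary path followed by an output model path):

# Generate candidate pronunciations for out-of-vocabulary words
mfa g2p oov_words.txt russian_mfa oov_pronunciations.dict

# Manually review and correct oov_pronunciations.dict, merge it into a
# working dictionary (e.g. expanded_russian.dict), then re-train the model
mfa train_g2p expanded_russian.dict russian_mfa_custom.zip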

Metrics

The model was trained on 90% of the dictionary and evaluated on the held-out 10%, using word error rate (WER) and phone error rate (PER).
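
If you re-train the model yourself, recent MFA versions expose a similar held-out evaluation via an --evaluate flag on train_g2p; this is an assumption about your installed version, so check mfa train_g2p --help before relying on it:

# Hold out part of the dictionary during training and report WER/PER on it
mfa train_g2p expanded_russian.dict russian_mfa_custom.zip --evaluate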

Training

This model was trained on the following data set:

  • Words: 374,632
  • Phones: 94
  • Graphemes: 35

Evaluation

This model was evaluated on the following data set:

  • Words: 41,626
  • WER: 100.00%
  • PER: 100.00%

Ethical considerations

Deploying any model involving language into any production setting has ethical implications. You should consider these implications before use.

Demographic Bias

You should assume every machine learning model has demographic bias unless proven otherwise. For G2P models, the model will only process the types of tokens that it was trained on, and will not represent the full range of text or spoken words that native speakers will produce. If you are using this model in production, you should acknowledge this as a potential issue.

Surveillance

Speech-to-Text technologies may be misused to invade the privacy of others by recording and mining information from private conversations. This kind of individual privacy is protected by law in many countries. You should not assume consent to record and analyze private speech.