Releases: MontrealCorpusTools/mfa-models

german_mfa v3.0.0

13 Mar 03:08

German MFA acoustic model v3.0.0

Link to documentation on mfa-models

Model details

  • Maintainer: Montreal Forced Aligner
  • Language: German
  • Dialect: N/A
  • Phone set: MFA
  • Model type: Acoustic
  • Features: MFCC
  • Architecture: gmm-hmm
  • Model version: v3.0.0
  • Trained date: 2024-03-07
  • Compatible MFA version: v3.0.0
  • License: CC BY 4.0
  • Citation:
@techreport{mfa_german_mfa_acoustic_2024,
	author={McAuliffe, Michael and Sonderegger, Morgan},
	title={German MFA acoustic model v3.0.0},
	address={\url{https://mfa-models.readthedocs.io/acoustic/German/German MFA acoustic model v3_0_0.html}},
	year={2024},
	month={Mar},
}

Installation

Install from the MFA command line:

mfa model download acoustic german_mfa

Or download from the release page.

Intended use

This model is intended for forced alignment of German transcripts.

This model uses the MFA phone set for German, and was trained with the pronunciation dictionaries above. Pronunciations can be added on top of the dictionary, as long as no additional phones are introduced.
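
A minimal usage sketch, assuming the matching german_mfa pronunciation dictionary is also installed; the corpus and output paths below are placeholders, and options can be checked with mfa align --help for your MFA version:

mfa model download dictionary german_mfa
mfa align /path/to/corpus german_mfa german_mfa /path/to/output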

Performance Factors

As forced alignment is a relatively well-constrained problem (given accurate transcripts), this model should be applicable to a range of recording conditions and speakers. However, please note that it was trained on read speech in low-noise environments; as your data diverges from that, you may run into alignment issues. If so, try increasing MFA's beam size or see the other recommendations in the troubleshooting section below.

Please note as well that MFA does not use state-of-the-art ASR models for forced alignment. You may get better performance (especially on speech-to-text tasks) using other frameworks like Coqui.

Metrics

Acoustic models are typically generated as one component of a larger ASR system where the metric is word error rate (WER). For forced alignment, there is typically not the same sort of gold standard measure for most languages.

As a rough approximation of the acoustic model quality, we evaluated it against the corpus it was trained on, alongside a language model trained from the same data. The key caveat is that this is not a typical WER measure on held-out data, so it should not be taken as a hard measure of how well the acoustic model will generalize to your data; rather, it is a sanity check that the training data quality was sufficiently high.

Using the pronunciation dictionaries and language models above:

  • WER: 0%
  • CER: 0%

Ethical considerations

Deploying any Speech-to-Text model into any production setting has ethical implications. You should consider these implications before use.

Demographic Bias

You should assume every machine learning model has demographic bias unless proven otherwise. For STT models, it is often the case that transcription accuracy is better for men than it is for women. If you are using this model in production, you should acknowledge this as a potential issue.

Surveillance

Speech-to-Text technologies may be misused to invade the privacy of others by recording and mining information from private conversations. This kind of individual privacy is protected by law in many countries. You should not assume consent to record and analyze private speech.

Troubleshooting issues

Machine learning models (like this acoustic model) perform best on data that is similar to the data on which they were trained.

The primary sources of variability in forced alignment will be the applicability of the pronunciation dictionary and how similar the speech, demographics, and recording conditions are. If you encounter issues in alignment, there are a couple of avenues to improve performance (example commands follow the list):

  1. Increase the beam size of MFA

    • MFA defaults to a narrow beam to ensure quick alignment and also as a way to detect potential issues in your dataset, but depending on your data, you might benefit from boosting the beam to 100 or higher.
  2. Add pronunciations to the pronunciation dictionary

    • This model was trained on a particular dialect/style, and so adding pronunciations more representative of the variety spoken in your dataset will help alignment.
  3. Check the quality of your data

    • MFA includes a validator utility, which aims to detect issues in the dataset.
    • Use MFA's anchor utility to visually inspect your data as MFA sees it and correct issues in transcription or OOV items.
  4. Adapt the model to your data

    • MFA has an adaptation command that adapts parts of the model to your data based on an initial alignment and then runs another alignment with the adapted model.
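
As a rough sketch of how steps 1, 3, and 4 map onto the command line (all paths are placeholders, the german_mfa dictionary name assumes the matching German MFA dictionary is installed, and flag names should be checked against mfa --help for your MFA version):

# 1. align with a wider beam
mfa align /path/to/corpus german_mfa german_mfa /path/to/output --beam 100 --retry_beam 400

# 3. validate the corpus to surface transcription and OOV issues
mfa validate /path/to/corpus german_mfa german_mfa

# 4. adapt the model to the corpus, then align with the adapted model
mfa adapt /path/to/corpus german_mfa german_mfa /path/to/adapted_model.zip
mfa align /path/to/corpus german_mfa /path/to/adapted_model.zip /path/to/output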

Training data

This model was trained on the following corpora:

french_mfa v3.0.0

13 Mar 03:07

French MFA acoustic model v3.0.0

Link to documentation on mfa-models

Model details

  • Maintainer: Montreal Forced Aligner
  • Language: French
  • Dialect: N/A
  • Phone set: MFA
  • Model type: Acoustic
  • Features: MFCC
  • Architecture: gmm-hmm
  • Model version: v3.0.0
  • Trained date: 2024-02-29
  • Compatible MFA version: v3.0.0
  • License: CC BY 4.0
  • Citation:
@techreport{mfa_french_mfa_acoustic_2024,
	author={McAuliffe, Michael and Sonderegger, Morgan},
	title={French MFA acoustic model v3.0.0},
	address={\url{https://mfa-models.readthedocs.io/acoustic/French/French MFA acoustic model v3_0_0.html}},
	year={2024},
	month={Feb},
}

Installation

Install from the MFA command line:

mfa model download acoustic french_mfa

Or download from the release page.

Intended use

This model is intended for forced alignment of French transcripts.

This model uses the MFA phone set for French, and was trained with the pronunciation dictionaries above. Pronunciations can be added on top of the dictionary, as long as no additional phones are introduced.

Performance Factors

As forced alignment is a relatively well-constrained problem (given accurate transcripts), this model should be applicable to a range of recording conditions and speakers. However, please note that it was trained on read speech in low-noise environments; as your data diverges from that, you may run into alignment issues. If so, try increasing MFA's beam size or see the other recommendations in the troubleshooting section below.

Please note as well that MFA does not use state-of-the-art ASR models for forced alignment. You may get better performance (especially on speech-to-text tasks) using other frameworks like Coqui.

Metrics

Acoustic models are typically generated as one component of a larger ASR system where the metric is word error rate (WER). For forced alignment, there is typically not the same sort of gold standard measure for most languages.

As a rough approximation of the acoustic model quality, we evaluated it against the corpus it was trained on, alongside a language model trained from the same data. The key caveat is that this is not a typical WER measure on held-out data, so it should not be taken as a hard measure of how well the acoustic model will generalize to your data; rather, it is a sanity check that the training data quality was sufficiently high.

Using the pronunciation dictionaries and language models above:

  • WER: 0%
  • CER: 0%

Ethical considerations

Deploying any Speech-to-Text model into any production setting has ethical implications. You should consider these implications before use.

Demographic Bias

You should assume every machine learning model has demographic bias unless proven otherwise. For STT models, it is often the case that transcription accuracy is better for men than it is for women. If you are using this model in production, you should acknowledge this as a potential issue.

Surveillance

Speech-to-Text technologies may be misused to invade the privacy of others by recording and mining information from private conversations. This kind of individual privacy is protected by law in many countries. You should not assume consent to record and analyze private speech.

Troubleshooting issues

Machine learning models (like this acoustic model) perform best on data that is similar to the data on which they were trained.

The primary sources of variability in forced alignment will be the applicability of the pronunciation dictionary and how similar the speech, demographics, and recording conditions are. If you encounter issues in alignment, there are a couple of avenues to improve performance:

  1. Increase the beam size of MFA

    • MFA defaults to a narrow beam to ensure quick alignment and also as a way to detect potential issues in your dataset, but depending on your data, you might benefit from boosting the beam to 100 or higher.
  2. Add pronunciations to the pronunciation dictionary

    • This model was trained on a particular dialect/style, and so adding pronunciations more representative of the variety spoken in your dataset will help alignment.
  3. Check the quality of your data

    • MFA includes a validator utility, which aims to detect issues in the dataset.
    • Use MFA's anchor utility to visually inspect your data as MFA sees it and correct issues in transcription or OOV items.
  4. Adapt the model to your data

    • MFA has an adaptation command that adapts parts of the model to your data based on an initial alignment and then runs another alignment with the adapted model.

Training data

This model was trained on the following corpora:

bulgarian_mfa v3.0.0

27 Feb 01:03

Bulgarian MFA dictionary v3.0.0

Link to documentation on mfa-models

Dictionary details

  • Maintainer: Montreal Forced Aligner
  • Language: Bulgarian
  • Dialect: N/A
  • Phone set: MFA
  • Number of words: 15,738
  • Phones: a b bʲ c dʒ dʲ d̪ f fʲ i j k m mʲ n̪ o p pʲ r rʲ sʲ s̪ tsʲ tʃ tʲ t̪ t̪s̪ u v vʲ x zʲ z̪ ç ŋ ɔ ɛ ɟ ɡ ɤ ɫ ɱ ɲ ʃ ʎ ʒ
  • License: CC BY 4.0
  • Compatible MFA version: v3.0.0
  • Citation:
@techreport{mfa_bulgarian_mfa_dictionary_2024,
	author={McAuliffe, Michael and Sonderegger, Morgan},
	title={Bulgarian MFA dictionary v3.0.0},
	address={\url{https://mfa-models.readthedocs.io/pronunciation dictionary/Bulgarian/Bulgarian MFA dictionary v3_0_0.html}},
	year={2024},
	month={Feb},
}
  • If you have comments or questions about this dictionary or its phone set, you can check previous MFA model discussion posts or create a new one.
  • The dictionary downloadable from this release has trained pronunciation and silence probabilities. The base dictionary is available here

Installation

Install from the MFA command line:

mfa model download dictionary bulgarian_mfa

Or download from the release page.

The dictionary available from the release page and command line installation has pronunciation and silence probabilities estimated as part of acoustic model training (see Silence probability format and training pronunciation probabilities for more information). If you would like to use the version of this dictionary without probabilities, please see the [plain dictionary](https://raw.githubusercontent.com/MontrealCorpusTools/mfa-models/main/dictionary/bulgarian/mfa/Bulgarian MFA dictionary v3_0_0.dict).

Intended use

This dictionary is intended for forced alignment of Bulgarian transcripts.

This dictionary uses the MFA phone set for Bulgarian, and was used in training the Bulgarian MFA acoustic model. Pronunciations can be added on top of the dictionary, as long as no additional phones are introduced.

Performance Factors

When trying to get better alignment accuracy, adding pronunciations is generally helpful, especially for different styles and dialects. The most impactful improvements will generally be seen when adding reduced variants that involve deleting segments/syllables common in spontaneous speech. Alignment must include all phones specified in the pronunciation of a word, and each phone has a minimum duration (by default 10ms). If a speaker pronounces a multisyllabic word with just a single syllable, it can be hard for MFA to fit all the segments in, so it will lead to alignment errors on adjacent words as well.
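
As a rough sketch of the plain dictionary format: each line is an orthographic word followed by its space-separated MFA phones, and an added variant is simply another line for the same word. The entry below is illustrative (a plausible rendering of "да", not copied from the released file), and any pronunciations you add should use only the phones listed above; note that the released dictionary additionally carries pronunciation and silence probability columns, while the plain dictionary linked above does not.

да	d̪ a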

Ethical considerations

Deploying any Speech-to-Text model into any production setting has ethical implications. You should consider these implications before use.

Demographic Bias

You should assume every machine learning model has demographic bias unless proven otherwise. For pronunciation dictionaries, it is often the case that transcription accuracy and lexicon coverage are better for the prestige variety modeled in this dictionary than for other varieties. If you are using this dictionary in production, you should acknowledge this as a potential issue.

bulgarian_mfa v3.0.0

27 Feb 01:02

Bulgarian MFA acoustic model v3.0.0

Link to documentation on mfa-models

Model details

  • Maintainer: Montreal Forced Aligner
  • Language: Bulgarian
  • Dialect: N/A
  • Phone set: MFA
  • Model type: Acoustic
  • Features: MFCC
  • Architecture: gmm-hmm
  • Model version: v3.0.0
  • Trained date: 2024-02-26
  • Compatible MFA version: v3.0.0
  • License: CC BY 4.0
  • Citation:
@techreport{mfa_bulgarian_mfa_acoustic_2024,
	author={McAuliffe, Michael and Sonderegger, Morgan},
	title={Bulgarian MFA acoustic model v3.0.0},
	address={\url{https://mfa-models.readthedocs.io/acoustic/Bulgarian/Bulgarian MFA acoustic model v3_0_0.html}},
	year={2024},
	month={Feb},
}

Installation

Install from the MFA command line:

mfa model download acoustic bulgarian_mfa

Or download from the release page.

Intended use

This model is intended for forced alignment of Bulgarian transcripts.

This model uses the MFA phone set for Bulgarian, and was trained with the pronunciation dictionaries above. Pronunciations can be added on top of the dictionary, as long as no additional phones are introduced.

Performance Factors

As forced alignment is a relatively well-constrained problem (given accurate transcripts), this model should be applicable to a range of recording conditions and speakers. However, please note that it was trained on read speech in low-noise environments; as your data diverges from that, you may run into alignment issues. If so, try increasing MFA's beam size or see the other recommendations in the troubleshooting section below.

Please note as well that MFA does not use state-of-the-art ASR models for forced alignment. You may get better performance (especially on speech-to-text tasks) using other frameworks like Coqui.

Metrics

Acoustic models are typically generated as one component of a larger ASR system where the metric is word error rate (WER). For forced alignment, there is typically not the same sort of gold standard measure for most languages.

As a rough approximation of the acoustic model quality, we evaluated it against the corpus it was trained on, alongside a language model trained from the same data. The key caveat is that this is not a typical WER measure on held-out data, so it should not be taken as a hard measure of how well the acoustic model will generalize to your data; rather, it is a sanity check that the training data quality was sufficiently high.

Using the pronunciation dictionaries and language models above:

  • WER: 0%
  • CER: 0%

Ethical considerations

Deploying any Speech-to-Text model into any production setting has ethical implications. You should consider these implications before use.

Demographic Bias

You should assume every machine learning model has demographic bias unless proven otherwise. For STT models, it is often the case that transcription accuracy is better for men than it is for women. If you are using this model in production, you should acknowledge this as a potential issue.

Surveillance

Speech-to-Text technologies may be misused to invade the privacy of others by recording and mining information from private conversations. This kind of individual privacy is protected by law in many countries. You should not assume consent to record and analyze private speech.

Troubleshooting issues

Machine learning models (like this acoustic model) perform best on data that is similar to the data on which they were trained.

The primary sources of variability in forced alignment will be the applicability of the pronunciation dictionary and how similar the speech, demographics, and recording conditions are. If you encounter issues in alignment, there are a couple of avenues to improve performance:

  1. Increase the beam size of MFA

    • MFA defaults to a narrow beam to ensure quick alignment and also as a way to detect potential issues in your dataset, but depending on your data, you might benefit from boosting the beam to 100 or higher.
  2. Add pronunciations to the pronunciation dictionary

    • This model was trained on a particular dialect/style, and so adding pronunciations more representative of the variety spoken in your dataset will help alignment.
  3. Check the quality of your data

    • MFA includes a validator utility, which aims to detect issues in the dataset.
    • Use MFA's anchor utility to visually inspect your data as MFA sees it and correct issues in transcription or OOV items.
  4. Adapt the model to your data

    • MFA has an adaptation command that adapts parts of the model to your data based on an initial alignment and then runs another alignment with the adapted model.

Training data

This model was trained on the following corpora:

mandarin_taiwan_pinyin_mfa v3.0.0

25 Feb 22:52

Mandarin (Taiwan Pinyin) MFA G2P model v3.0.0

Link to documentation on mfa-models

Model details

@techreport{mfa_mandarin_taiwan_pinyin_mfa_g2p_2024,
	author={McAuliffe, Michael and Sonderegger, Morgan},
	title={Mandarin (Taiwan Pinyin) MFA G2P model v3.0.0},
	address={\url{https://mfa-models.readthedocs.io/G2P model/Mandarin/Mandarin (Taiwan Pinyin) MFA G2P model v3_0_0.html}},
	year={2024},
	month={Feb},
}

Installation

Install from the MFA command line:

mfa model download g2p mandarin_taiwan_pinyin_mfa

Or download from the release page.

Intended use

This model is intended for generating pronunciations of Mandarin Chinese transcripts.

This model uses the MFA phone set for Mandarin, and was trained from the pronunciation dictionaries above. Pronunciations generated with this G2P model can be appended to the pronunciation dictionary and used when aligning or transcribing.
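
A minimal sketch of generating pronunciations for out-of-vocabulary words (the word list and output paths are placeholders, and the positional argument order has changed across MFA releases, so check mfa g2p --help for your version):

mfa g2p /path/to/oov_words.txt mandarin_taiwan_pinyin_mfa /path/to/g2p_output.dict

The generated entries can then be appended to the pronunciation dictionary used for alignment or transcription.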

Performance Factors

The trained G2P models should be relatively quick and accurate; however, the model may struggle with less common orthographic characters or word types outside of what it was trained on. If so, you may need to supplement the dictionary by generating, correcting, and re-training the G2P model as necessary.
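
If you end up correcting many generated entries, a new G2P model can be retrained from the expanded dictionary. A hedged sketch, where the combined dictionary and output model paths are placeholders and generated pronunciations are assumed to have been hand-corrected first:

mfa train_g2p /path/to/combined_dictionary.dict /path/to/retrained_g2p.zip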

Metrics

The model was trained on 90% of the dictionary and evaluated on the held-out 10% using word error rate (WER) and phone error rate (PER).

Training

This model was trained on the following data set:

  • Words: 93,422
  • Phones: 137
  • Graphemes: 50

Evaluation

This model was evaluated on the following data set:

  • Words: 10,379
  • WER: 100.00%
  • PER: 100.00%

Ethical considerations

Deploying any model involving language into any production setting has ethical implications. You should consider these implications before use.

Demographic Bias

You should assume every machine learning model has demographic bias unless proven otherwise. For G2P models, the model will only process the types of tokens that it was trained on, and will not represent the full range of text or spoken words that native speakers will produce. If you are using this model in production, you should acknowledge this as a potential issue.

Surveillance

Speech-to-Text technologies may be misused to invade the privacy of others by recording and mining information from private conversations. This kind of individual privacy is protected by law in many countries. You should not assume consent to record and analyze private speech.

mandarin_taiwan_mfa v3.0.0

25 Feb 22:52

Mandarin (Taiwan) MFA G2P model v3.0.0

Link to documentation on mfa-models

Model details

@techreport{mfa_mandarin_taiwan_mfa_g2p_2024,
	author={McAuliffe, Michael and Sonderegger, Morgan},
	title={Mandarin (Taiwan) MFA G2P model v3.0.0},
	address={\url{https://mfa-models.readthedocs.io/G2P model/Mandarin/Mandarin (Taiwan) MFA G2P model v3_0_0.html}},
	year={2024},
	month={Feb},
}

Installation

Install from the MFA command line:

mfa model download g2p mandarin_taiwan_mfa

Or download from the release page.

Intended use

This model is intended for generating pronunciations of Mandarin Chinese transcripts.

This model uses the MFA phone set for Mandarin, and was trained from the pronunciation dictionaries above. Pronunciations generated with this G2P model can be appended to the pronunciation dictionary and used when aligning or transcribing.

Performance Factors

The trained G2P models should be relatively quick and accurate; however, the model may struggle with less common orthographic characters or word types outside of what it was trained on. If so, you may need to supplement the dictionary by generating, correcting, and re-training the G2P model as necessary.

Metrics

The model was trained on 90% of the dictionary and evaluated on the held-out 10% using word error rate (WER) and phone error rate (PER).

Training

This model was trained on the following data set:

  • Words: 116,512
  • Phones: 142
  • Graphemes: 17,149

Evaluation

This model was evaluated on the following data set:

  • Words: 11,798
  • WER: 100.00%
  • PER: 100.00%

Ethical considerations

Deploying any model involving language into any production setting has ethical implications. You should consider these implications before use.

Demographic Bias

You should assume every machine learning model has demographic bias unless proven otherwise. For G2P models, the model will only process the types of tokens that it was trained on, and will not represent the full range of text or spoken words that native speakers will produce. If you are using this model in production, you should acknowledge this as a potential issue.

Surveillance

Speech-to-Text technologies may be misused to invade the privacy of others by recording and mining information from private conversations. This kind of individual privacy is protected by law in many countries. You should not assume consent to record and analyze private speech.

mandarin_china_pinyin_mfa v3.0.0

25 Feb 22:51

Mandarin (China Pinyin) MFA G2P model v3.0.0

Link to documentation on mfa-models

Model details

@techreport{mfa_mandarin_china_pinyin_mfa_g2p_2024,
	author={McAuliffe, Michael and Sonderegger, Morgan},
	title={Mandarin (China Pinyin) MFA G2P model v3.0.0},
	address={\url{https://mfa-models.readthedocs.io/G2P model/Mandarin/Mandarin (China Pinyin) MFA G2P model v3_0_0.html}},
	year={2024},
	month={Feb},
}

Installation

Install from the MFA command line:

mfa model download g2p mandarin_china_pinyin_mfa

Or download from the release page.

Intended use

This model is intended for generating pronunciations of Mandarin Chinese transcripts.

This model uses the MFA phone set for Mandarin, and was trained from the pronunciation dictionaries above. Pronunciations generated with this G2P model can be appended to the pronunciation dictionary and used when aligning or transcribing.

Performance Factors

The trained G2P models should be relatively quick and accurate; however, the model may struggle with less common orthographic characters or word types outside of what it was trained on. If so, you may need to supplement the dictionary by generating, correcting, and re-training the G2P model as necessary.

Metrics

The model was trained on 90% of the dictionary and evaluated on the held-out 10% using word error rate (WER) and phone error rate (PER).

Training

This model was trained on the following data set:

  • Words: 96,301
  • Phones: 137
  • Graphemes: 50

Evaluation

This model was evaluated on the following data set:

  • Words: 10,700
  • WER: 100.00%
  • PER: 100.00%

Ethical considerations

Deploying any model involving language into any production setting has ethical implications. You should consider these implications before use.

Demographic Bias

You should assume every machine learning model has demographic bias unless proven otherwise. For G2P models, the model will only process the types of tokens that it was trained on, and will not represent the full range of text or spoken words that native speakers will produce. If you are using this model in production, you should acknowledge this as a potential issue.

Surveillance

Speech-to-Text technologies may be misused to invade the privacy of others by recording and mining information from private conversations. This kind of individual privacy is protected by law in many countries. You should not assume consent to record and analyze private speech.

mandarin_china_mfa v3.0.0

25 Feb 22:51

Mandarin (China) MFA G2P model v3.0.0

Link to documentation on mfa-models

Model details

@techreport{mfa_mandarin_china_mfa_g2p_2024,
	author={McAuliffe, Michael and Sonderegger, Morgan},
	title={Mandarin (China) MFA G2P model v3.0.0},
	address={\url{https://mfa-models.readthedocs.io/G2P model/Mandarin/Mandarin (China) MFA G2P model v3_0_0.html}},
	year={2024},
	month={Feb},
}

Installation

Install from the MFA command line:

mfa model download g2p mandarin_china_mfa

Or download from the release page.

Intended use

This model is intended for generating pronunciations of Mandarin Chinese transcripts.

This model uses the MFA phone set for Mandarin, and was trained from the pronunciation dictionaries above. Pronunciations generated with this G2P model can be appended to the pronunciation dictionary and used when aligning or transcribing.

Performance Factors

The trained G2P models should be relatively quick and accurate; however, the model may struggle with less common orthographic characters or word types outside of what it was trained on. If so, you may need to supplement the dictionary by generating, correcting, and re-training the G2P model as necessary.

Metrics

The model was trained on 90% of the dictionary and evaluated on the held-out 10% using word error rate (WER) and phone error rate (PER).

Training

This model was trained on the following data set:

  • Words: 119,364
  • Phones: 142
  • Graphemes: 17,098

Evaluation

This model was evaluated on the following data set:

  • Words: 12,092
  • WER: 100.00%
  • PER: 100.00%

Ethical considerations

Deploying any model involving language into any production setting has ethical implications. You should consider these implications before use.

Demographic Bias

You should assume every machine learning model has demographic bias unless proven otherwise. For G2P models, the model will only process the types of tokens that it was trained on, and will not represent the full range of text or spoken words that native speakers will produce. If you are using this model in production, you should acknowledge this as a potential issue.

Surveillance

Speech-to-Text technologies may be misused to invade the privacy of others by recording and mining information from private conversations. This kind of individual privacy is protected by law in many countries. You should not assume consent to record and analyze private speech.

mandarin_taiwan_mfa v3.0.0

25 Feb 22:53

Mandarin (Taiwan) MFA dictionary v3.0.0

Link to documentation on mfa-models

Dictionary details

  • Maintainer: Montreal Forced Aligner
  • Language: Mandarin Chinese
  • Dialect: Taiwanese Mandarin
  • Phone set: MFA
  • Number of words: 15,287
  • Phones: a aj aj˥ aj˥˩ aj˧ aj˧˥ aj˨˩˦ aj˩ aw˥ aw˥˩ aw˧˥ aw˨˩˦ aw˩ a˥ a˥˩ a˧ a˧˥ a˨˩˦ a˩ e ej ej˥ ej˥˩ ej˧ ej˧˥ ej˨˩˦ ej˩ e˥ e˥˩ e˧ e˧˥ e˨˩˦ e˩ f i i˥ i˥˩ i˧ i˧˥ i˨˩˦ i˩ j k kʰ kʷ l m mʲ n n̩˥˩ n̩˧˥ n̩˨˩˦ o ow ow˥ ow˥˩ ow˧ ow˧˥ ow˨˩˦ ow˩ o˥ o˥˩ o˧ o˧˥ o˨˩˦ o˩ p pʰ pʲ pʷ s t ts tsʰ tɕ tɕʰ tɕʷ tʰ tʲ tʷ u u˥ u˥˩ u˧ u˧˥ u˨˩˦ u˩ w x xʷ y˥ y˥˩ y˧ y˧˥ y˨˩˦ y˩ z̩ z̩˥ z̩˥˩ z̩˧ z̩˧˥ z̩˨˩˦ z̩˩ ŋ ŋ̍˥˩ ŋ̍˧˥ ŋ̍˨˩˦ ɕ ɕʷ ə ə˥ ə˥˩ ə˧ ə˧˥ ə˨˩˦ ə˩ ɥ ɲ ɻ ʂ ʈʂ ʈʂʰ ʎ ʐ ʐ̩˥ ʐ̩˥˩ ʐ̩˧˥ ʐ̩˨˩˦ ʐ̩˩ ʔ
  • License: CC BY 4.0
  • Compatible MFA version: v3.0.0
  • Citation:
@techreport{mfa_mandarin_taiwan_mfa_dictionary_2024,
	author={McAuliffe, Michael and Sonderegger, Morgan},
	title={Mandarin (Taiwan) MFA dictionary v3.0.0},
	address={\url{https://mfa-models.readthedocs.io/pronunciation dictionary/Mandarin/Mandarin (Taiwan) MFA dictionary v3_0_0.html}},
	year={2024},
	month={Feb},
}
  • If you have comments or questions about this dictionary or its phone set, you can check previous MFA model discussion posts or create a new one.
  • The dictionary downloadable from this release has trained pronunciation and silence probabilities. The base dictionary is available here

Installation

Install from the MFA command line:

mfa model download dictionary mandarin_taiwan_mfa

Or download from the release page.

The dictionary available from the release page and command line installation has pronunciation and silence probabilities estimated as part of acoustic model training (see Silence probability format and training pronunciation probabilities for more information). If you would like to use the version of this dictionary without probabilities, please see the [plain dictionary](https://raw.githubusercontent.com/MontrealCorpusTools/mfa-models/main/dictionary/mandarin/mfa/Mandarin (Taiwan) MFA dictionary v3_0_0.dict).

Intended use

This dictionary is intended for forced alignment of Mandarin Chinese transcripts.

This dictionary uses the MFA phone set for Mandarin, and was used in training the Mandarin MFA acoustic model. Pronunciations can be added on top of the dictionary, as long as no additional phones are introduced.

Performance Factors

When trying to get better alignment accuracy, adding pronunciations is generally helpful, especially for different styles and dialects. The most impactful improvements will generally be seen when adding reduced variants that involve deleting segments/syllables common in spontaneous speech. Alignment must include all phones specified in the pronunciation of a word, and each phone has a minimum duration (by default 10ms). If a speaker pronounces a multisyllabic word with just a single syllable, it can be hard for MFA to fit all the segments in, so it will lead to alignment errors on adjacent words as well.

Ethical considerations

Deploying any Speech-to-Text model into any production setting has ethical implications. You should consider these implications before use.

Demographic Bias

You should assume every machine learning model has demographic bias unless proven otherwise. For pronunciation dictionaries, it is often the case that transcription accuracy and lexicon coverage are better for the prestige variety modeled in this dictionary than for other varieties. If you are using this dictionary in production, you should acknowledge this as a potential issue.

mandarin_china_mfa v3.0.0

25 Feb 22:53

Mandarin (China) MFA dictionary v3.0.0

Link to documentation on mfa-models

Dictionary details

  • Maintainer: Montreal Forced Aligner
  • Language: Mandarin Chinese
  • Dialect: Standard Mandarin Chinese
  • Phone set: MFA
  • Number of words: 75,580
  • Phones: a aj aj˥ aj˥˩ aj˧ aj˧˥ aj˨˩˦ aj˩ aw aw˥ aw˥˩ aw˧ aw˧˥ aw˨˩˦ aw˩ a˥ a˥˩ a˧ a˧˥ a˨˩˦ a˩ e ej ej˥ ej˥˩ ej˧ ej˧˥ ej˨˩˦ ej˩ e˥ e˥˩ e˧ e˧˥ e˨˩˦ e˩ f i i˥ i˥˩ i˧ i˧˥ i˨˩˦ i˩ j k kʰ kʷ l m mʲ m̩˥ m̩˧ m̩˨˩˦ n n̩˥˩ n̩˧˥ n̩˨˩˦ o ow ow˥ ow˥˩ ow˧ ow˧˥ ow˨˩˦ ow˩ o˥ o˥˩ o˧ o˧˥ o˨˩˦ o˩ p pʰ pʲ pʷ s t ts tsʰ tɕ tɕʰ tɕʷ tʰ tʲ tʷ u u˥ u˥˩ u˧ u˧˥ u˨˩˦ u˩ w x xʷ y y˥ y˥˩ y˧ y˧˥ y˨˩˦ y˩ z̩ z̩˥ z̩˥˩ z̩˧ z̩˧˥ z̩˨˩˦ z̩˩ ŋ ŋ̍ ŋ̍˥˩ ŋ̍˧˥ ŋ̍˨˩˦ ɕ ɕʷ ə ə˥ ə˥˩ ə˧ ə˧˥ ə˨˩˦ ə˩ ɥ ɲ ɻ ʂ ʈʂ ʈʂʰ ʎ ʐ ʐ̩ ʐ̩˥ ʐ̩˥˩ ʐ̩˧ ʐ̩˧˥ ʐ̩˨˩˦ ʐ̩˩ ʔ
  • License: CC BY 4.0
  • Compatible MFA version: v3.0.0
  • Citation:
@techreport{mfa_mandarin_china_mfa_dictionary_2024,
	author={McAuliffe, Michael and Sonderegger, Morgan},
	title={Mandarin (China) MFA dictionary v3.0.0},
	address={\url{https://mfa-models.readthedocs.io/pronunciation dictionary/Mandarin/Mandarin (China) MFA dictionary v3_0_0.html}},
	year={2024},
	month={Feb},
}
  • If you have comments or questions about this dictionary or its phone set, you can check previous MFA model discussion posts or create a new one.
  • The dictionary downloadable from this release has trained pronunciation and silence probabilities. The base dictionary is available here

Installation

Install from the MFA command line:

mfa model download dictionary mandarin_china_mfa

Or download from the release page.

The dictionary available from the release page and command line installation has pronunciation and silence probabilities estimated as part of acoustic model training (see Silence probability format and training pronunciation probabilities for more information). If you would like to use the version of this dictionary without probabilities, please see the [plain dictionary](https://raw.githubusercontent.com/MontrealCorpusTools/mfa-models/main/dictionary/mandarin/mfa/Mandarin (China) MFA dictionary v3_0_0.dict).

Intended use

This dictionary is intended for forced alignment of Mandarin Chinese transcripts.

This dictionary uses the MFA phone set for Mandarin, and was used in training the Mandarin MFA acoustic model. Pronunciations can be added on top of the dictionary, as long as no additional phones are introduced.

Performance Factors

When trying to get better alignment accuracy, adding pronunciations is generally helpful, especially for different styles and dialects. The most impactful improvements will generally be seen when adding reduced variants that involve deleting segments/syllables common in spontaneous speech. Alignment must include all phones specified in the pronunciation of a word, and each phone has a minimum duration (by default 10ms). If a speaker pronounces a multisyllabic word with just a single syllable, it can be hard for MFA to fit all the segments in, so it will lead to alignment errors on adjacent words as well.

Ethical considerations

Deploying any Speech-to-Text model into any production setting has ethical implications. You should consider these implications before use.

Demographic Bias

You should assume every machine learning model has demographic bias unless proven otherwise. For pronunciation dictionaries, it is often the case that transcription accuracy and lexicon coverage are better for the prestige variety modeled in this dictionary than for other varieties. If you are using this dictionary in production, you should acknowledge this as a potential issue.