Releases: MontrealCorpusTools/mfa-models

hausa_cv v2.0.0

23 Mar 01:35
e092254

Hausa CV acoustic model v2.0.0

Link to documentation on mfa-models

Model details

  • Maintainer: Vox Communis
  • Language: Hausa
  • Dialect: N/A
  • Phone set: Epitran
  • Model type: Acoustic model
  • Features: MFCC
  • Architecture: gmm-hmm
  • Model version: v2.0.0
  • Trained date: 02-11-2022
  • Compatible MFA version: v2.0.0
  • License: CC-0
  • Citation:
@misc{
	Ahn_Chodroff_2022,
	author={Ahn, Emily and Chodroff, Eleanor},
	title={VoxCommunis Corpus},
	address={\url{https://osf.io/t957v}},
	publisher={OSF},
	year={2022},
	month={Jan}
}

Installation

Install from the MFA command line:

mfa models download acoustic hausa_cv

Or download from the release page.

Intended use

This model is intended for forced alignment of Hausa transcripts.

This model uses the Epitran phone set for Hausa, and was trained with the pronunciation dictionaries above. Pronunciations can be added on top of the dictionary, as long as no additional phones are introduced.

Performance Factors

As forced alignment is a relatively well-constrained problem (given accurate transcripts), this model should be applicable to a range of recording conditions and speakers. However, please note that it was trained on read speech in low-noise environments; as your data diverges from that, you may run into alignment issues. If so, try increasing MFA's beam size, or see the other recommendations in the troubleshooting section below.

Please note as well that MFA does not use state-of-the-art ASR models for forced alignment. You may get better performance (especially on speech-to-text tasks) using other frameworks like Coqui.

Ethical considerations

Deploying any Speech-to-Text model into any production setting has ethical implications. You should consider these implications before use.

Demographic Bias

You should assume every machine learning model has demographic bias unless proven otherwise. For STT models, it is often the case that transcription accuracy is better for men than it is for women. If you are using this model in production, you should acknowledge this as a potential issue.

Surveillance

Speech-to-Text technologies may be misused to invade the privacy of others by recording and mining information from private conversations. This kind of individual privacy is protected by law in many countries. You should not assume consent to record and analyze private speech.

Troubleshooting issues

Machine learning models (like this acoustic model) perform best on data that is similar to the data on which they were trained.

The primary sources of variability in forced alignment will be the applicability of the pronunciation dictionary and how similar the speech, demographics, and recording conditions are. If you encounter issues in alignment, there are a couple of avenues to improve performance:

  1. Increase the beam size of MFA

    • MFA defaults to a narrow beam to ensure quick alignment and also as a way to detect potential issues in your dataset, but depending on your data, you might benefit from boosting the beam to 100 or higher.
  2. Add pronunciations to the pronunciation dictionary

    • This model was trained on a particular dialect/style, so adding pronunciations more representative of the variety spoken in your dataset will help alignment.
  3. Check the quality of your data

    • MFA includes a validator utility, which aims to detect issues in the dataset.
    • Use MFA's anchor utility to visually inspect your data as MFA sees it and correct issues in transcription or OOV items.
  4. Adapt the model to your data

    • MFA has an adapt command that adapts the model to your data based on an initial alignment, after which you can run another alignment with the adapted model.
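The troubleshooting steps above can be combined into one pass: validate first, then align with a wider beam, then adapt if needed. This is a sketch, not a prescribed workflow; the corpus path, dictionary file, and output paths below are placeholders, and which pronunciation dictionary you pair with the model depends on your setup.

```shell
# Hypothetical paths throughout; substitute your own corpus and dictionary.
mfa validate ~/corpus dictionary.txt hausa_cv
mfa align ~/corpus dictionary.txt hausa_cv ~/aligned --beam 100 --retry_beam 400
mfa adapt ~/corpus dictionary.txt hausa_cv ~/adapted_model.zip
```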

Training data

This model was trained on the following corpora:

guarani_cv v2.0.0

Guarani CV acoustic model v2.0.0

Link to documentation on mfa-models

Model details

  • Maintainer: Vox Communis
  • Language: Guarani
  • Dialect: N/A
  • Phone set: XPF
  • Model type: Acoustic model
  • Features: MFCC
  • Architecture: gmm-hmm
  • Model version: v2.0.0
  • Trained date: 02-11-2022
  • Compatible MFA version: v2.0.0
  • License: CC-0
  • Citation:
@misc{
	Ahn_Chodroff_2022,
	author={Ahn, Emily and Chodroff, Eleanor},
	title={VoxCommunis Corpus},
	address={\url{https://osf.io/t957v}},
	publisher={OSF},
	year={2022},
	month={Jan}
}

Installation

Install from the MFA command line:

mfa models download acoustic guarani_cv

Or download from the release page.

Intended use

This model is intended for forced alignment of Guarani transcripts.

This model uses the XPF phone set for Guarani, and was trained with the pronunciation dictionaries above. Pronunciations can be added on top of the dictionary, as long as no additional phones are introduced.

Performance Factors

As forced alignment is a relatively well-constrained problem (given accurate transcripts), this model should be applicable to a range of recording conditions and speakers. However, please note that it was trained on read speech in low-noise environments; as your data diverges from that, you may run into alignment issues. If so, try increasing MFA's beam size, or see the other recommendations in the troubleshooting section below.

Please note as well that MFA does not use state-of-the-art ASR models for forced alignment. You may get better performance (especially on speech-to-text tasks) using other frameworks like Coqui.

Ethical considerations

Deploying any Speech-to-Text model into any production setting has ethical implications. You should consider these implications before use.

Demographic Bias

You should assume every machine learning model has demographic bias unless proven otherwise. For STT models, it is often the case that transcription accuracy is better for men than it is for women. If you are using this model in production, you should acknowledge this as a potential issue.

Surveillance

Speech-to-Text technologies may be misused to invade the privacy of others by recording and mining information from private conversations. This kind of individual privacy is protected by law in many countries. You should not assume consent to record and analyze private speech.

Troubleshooting issues

Machine learning models (like this acoustic model) perform best on data that is similar to the data on which they were trained.

The primary sources of variability in forced alignment will be the applicability of the pronunciation dictionary and how similar the speech, demographics, and recording conditions are. If you encounter issues in alignment, there are a couple of avenues to improve performance:

  1. Increase the beam size of MFA

    • MFA defaults to a narrow beam to ensure quick alignment and also as a way to detect potential issues in your dataset, but depending on your data, you might benefit from boosting the beam to 100 or higher.
  2. Add pronunciations to the pronunciation dictionary

    • This model was trained on a particular dialect/style, so adding pronunciations more representative of the variety spoken in your dataset will help alignment.
  3. Check the quality of your data

    • MFA includes a validator utility, which aims to detect issues in the dataset.
    • Use MFA's anchor utility to visually inspect your data as MFA sees it and correct issues in transcription or OOV items.
  4. Adapt the model to your data

    • MFA has an adapt command that adapts the model to your data based on an initial alignment, after which you can run another alignment with the adapted model.
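The troubleshooting steps above can be combined into one pass: validate first, then align with a wider beam, then adapt if needed. This is a sketch, not a prescribed workflow; the corpus path, dictionary file, and output paths below are placeholders, and which pronunciation dictionary you pair with the model depends on your setup.

```shell
# Hypothetical paths throughout; substitute your own corpus and dictionary.
mfa validate ~/corpus dictionary.txt guarani_cv
mfa align ~/corpus dictionary.txt guarani_cv ~/aligned --beam 100 --retry_beam 400
mfa adapt ~/corpus dictionary.txt guarani_cv ~/adapted_model.zip
```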

Training data

This model was trained on the following corpora:

greek_cv v2.0.0

Greek CV acoustic model v2.0.0

Link to documentation on mfa-models

Model details

  • Maintainer: Vox Communis
  • Language: Greek
  • Dialect: N/A
  • Phone set: XPF
  • Model type: Acoustic model
  • Features: MFCC
  • Architecture: gmm-hmm
  • Model version: v2.0.0
  • Trained date: 02-11-2022
  • Compatible MFA version: v2.0.0
  • License: CC-0
  • Citation:
@misc{
	Ahn_Chodroff_2022,
	author={Ahn, Emily and Chodroff, Eleanor},
	title={VoxCommunis Corpus},
	address={\url{https://osf.io/t957v}},
	publisher={OSF},
	year={2022},
	month={Jan}
}

Installation

Install from the MFA command line:

mfa models download acoustic greek_cv

Or download from the release page.

Intended use

This model is intended for forced alignment of Greek transcripts.

This model uses the XPF phone set for Greek, and was trained with the pronunciation dictionaries above. Pronunciations can be added on top of the dictionary, as long as no additional phones are introduced.

Performance Factors

As forced alignment is a relatively well-constrained problem (given accurate transcripts), this model should be applicable to a range of recording conditions and speakers. However, please note that it was trained on read speech in low-noise environments; as your data diverges from that, you may run into alignment issues. If so, try increasing MFA's beam size, or see the other recommendations in the troubleshooting section below.

Please note as well that MFA does not use state-of-the-art ASR models for forced alignment. You may get better performance (especially on speech-to-text tasks) using other frameworks like Coqui.

Ethical considerations

Deploying any Speech-to-Text model into any production setting has ethical implications. You should consider these implications before use.

Demographic Bias

You should assume every machine learning model has demographic bias unless proven otherwise. For STT models, it is often the case that transcription accuracy is better for men than it is for women. If you are using this model in production, you should acknowledge this as a potential issue.

Surveillance

Speech-to-Text technologies may be misused to invade the privacy of others by recording and mining information from private conversations. This kind of individual privacy is protected by law in many countries. You should not assume consent to record and analyze private speech.

Troubleshooting issues

Machine learning models (like this acoustic model) perform best on data that is similar to the data on which they were trained.

The primary sources of variability in forced alignment will be the applicability of the pronunciation dictionary and how similar the speech, demographics, and recording conditions are. If you encounter issues in alignment, there are a couple of avenues to improve performance:

  1. Increase the beam size of MFA

    • MFA defaults to a narrow beam to ensure quick alignment and also as a way to detect potential issues in your dataset, but depending on your data, you might benefit from boosting the beam to 100 or higher.
  2. Add pronunciations to the pronunciation dictionary

    • This model was trained on a particular dialect/style, so adding pronunciations more representative of the variety spoken in your dataset will help alignment.
  3. Check the quality of your data

    • MFA includes a validator utility, which aims to detect issues in the dataset.
    • Use MFA's anchor utility to visually inspect your data as MFA sees it and correct issues in transcription or OOV items.
  4. Adapt the model to your data

    • MFA has an adapt command that adapts the model to your data based on an initial alignment, after which you can run another alignment with the adapted model.
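The troubleshooting steps above can be combined into one pass: validate first, then align with a wider beam, then adapt if needed. This is a sketch, not a prescribed workflow; the corpus path, dictionary file, and output paths below are placeholders, and which pronunciation dictionary you pair with the model depends on your setup.

```shell
# Hypothetical paths throughout; substitute your own corpus and dictionary.
mfa validate ~/corpus dictionary.txt greek_cv
mfa align ~/corpus dictionary.txt greek_cv ~/aligned --beam 100 --retry_beam 400
mfa adapt ~/corpus dictionary.txt greek_cv ~/adapted_model.zip
```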

Training data

This model was trained on the following corpora:

georgian_cv v2.0.0

Georgian CV acoustic model v2.0.0

Link to documentation on mfa-models

Model details

  • Maintainer: Vox Communis
  • Language: Georgian
  • Dialect: N/A
  • Phone set: XPF
  • Model type: Acoustic model
  • Features: MFCC
  • Architecture: gmm-hmm
  • Model version: v2.0.0
  • Trained date: 02-11-2022
  • Compatible MFA version: v2.0.0
  • License: CC-0
  • Citation:
@misc{
	Ahn_Chodroff_2022,
	author={Ahn, Emily and Chodroff, Eleanor},
	title={VoxCommunis Corpus},
	address={\url{https://osf.io/t957v}},
	publisher={OSF},
	year={2022},
	month={Jan}
}

Installation

Install from the MFA command line:

mfa models download acoustic georgian_cv

Or download from the release page.

Intended use

This model is intended for forced alignment of Georgian transcripts.

This model uses the XPF phone set for Georgian, and was trained with the pronunciation dictionaries above. Pronunciations can be added on top of the dictionary, as long as no additional phones are introduced.

Performance Factors

As forced alignment is a relatively well-constrained problem (given accurate transcripts), this model should be applicable to a range of recording conditions and speakers. However, please note that it was trained on read speech in low-noise environments; as your data diverges from that, you may run into alignment issues. If so, try increasing MFA's beam size, or see the other recommendations in the troubleshooting section below.

Please note as well that MFA does not use state-of-the-art ASR models for forced alignment. You may get better performance (especially on speech-to-text tasks) using other frameworks like Coqui.

Ethical considerations

Deploying any Speech-to-Text model into any production setting has ethical implications. You should consider these implications before use.

Demographic Bias

You should assume every machine learning model has demographic bias unless proven otherwise. For STT models, it is often the case that transcription accuracy is better for men than it is for women. If you are using this model in production, you should acknowledge this as a potential issue.

Surveillance

Speech-to-Text technologies may be misused to invade the privacy of others by recording and mining information from private conversations. This kind of individual privacy is protected by law in many countries. You should not assume consent to record and analyze private speech.

Troubleshooting issues

Machine learning models (like this acoustic model) perform best on data that is similar to the data on which they were trained.

The primary sources of variability in forced alignment will be the applicability of the pronunciation dictionary and how similar the speech, demographics, and recording conditions are. If you encounter issues in alignment, there are a couple of avenues to improve performance:

  1. Increase the beam size of MFA

    • MFA defaults to a narrow beam to ensure quick alignment and also as a way to detect potential issues in your dataset, but depending on your data, you might benefit from boosting the beam to 100 or higher.
  2. Add pronunciations to the pronunciation dictionary

    • This model was trained on a particular dialect/style, so adding pronunciations more representative of the variety spoken in your dataset will help alignment.
  3. Check the quality of your data

    • MFA includes a validator utility, which aims to detect issues in the dataset.
    • Use MFA's anchor utility to visually inspect your data as MFA sees it and correct issues in transcription or OOV items.
  4. Adapt the model to your data

    • MFA has an adapt command that adapts the model to your data based on an initial alignment, after which you can run another alignment with the adapted model.
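The troubleshooting steps above can be combined into one pass: validate first, then align with a wider beam, then adapt if needed. This is a sketch, not a prescribed workflow; the corpus path, dictionary file, and output paths below are placeholders, and which pronunciation dictionary you pair with the model depends on your setup.

```shell
# Hypothetical paths throughout; substitute your own corpus and dictionary.
mfa validate ~/corpus dictionary.txt georgian_cv
mfa align ~/corpus dictionary.txt georgian_cv ~/aligned --beam 100 --retry_beam 400
mfa adapt ~/corpus dictionary.txt georgian_cv ~/adapted_model.zip
```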

Training data

This model was trained on the following corpora:

english_us_arpa v2.0.0

English (US) ARPA acoustic model v2.0.0

Link to documentation on mfa-models

Model details

@techreport{
	mfa_english_us_arpa_acoustic_2022,
	author={McAuliffe, Michael and Sonderegger, Morgan},
	title={English (US) ARPA acoustic model v2.0.0},
	address={\url{https://mfa-models.readthedocs.io/acoustic/English/English (US) ARPA acoustic model v2_0_0.html}},
	year={2022},
	month={Mar},
}

Installation

Install from the MFA command line:

mfa models download acoustic english_us_arpa

Or download from the release page.

Intended use

This model is intended for forced alignment of English transcripts.

This model uses the ARPA phone set for English, and was trained with the pronunciation dictionaries above. Pronunciations can be added on top of the dictionary, as long as no additional phones are introduced.

Performance Factors

As forced alignment is a relatively well-constrained problem (given accurate transcripts), this model should be applicable to a range of recording conditions and speakers. However, please note that it was trained on read speech in low-noise environments; as your data diverges from that, you may run into alignment issues. If so, try increasing MFA's beam size, or see the other recommendations in the troubleshooting section below.

Please note as well that MFA does not use state-of-the-art ASR models for forced alignment. You may get better performance (especially on speech-to-text tasks) using other frameworks like Coqui.

Metrics

Acoustic models are typically generated as one component of a larger ASR system where the metric is word error rate (WER). For forced alignment, there is typically not the same sort of gold standard measure for most languages.

As a rough approximation of the acoustic model quality, we evaluated it against the corpus it was trained on, alongside a language model trained from the same data. A key caveat: this is not a typical WER measure on held-out data, so it should not be taken as a hard measure of how well the acoustic model will generalize to your data; rather, it is a sanity check that the training data quality was sufficiently high.

Using the pronunciation dictionaries and language models above:

  • WER: 0%
  • CER: 0%
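For readers unfamiliar with the metric, WER is the word-level Levenshtein (edit) distance between the reference and hypothesis transcripts, divided by the reference length; CER is the same computation over characters. A minimal illustration (not MFA's internal evaluation code):

```python
def wer(ref: str, hyp: str) -> float:
    """Word error rate: word-level edit distance / reference length."""
    r, h = ref.split(), hyp.split()
    # Dynamic-programming table of edit distances between prefixes.
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i  # cost of deleting i reference words
    for j in range(len(h) + 1):
        d[0][j] = j  # cost of inserting j hypothesis words
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(r)][len(h)] / len(r)

print(wer("the cat sat", "the cat sat"))  # 0.0
```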

Ethical considerations

Deploying any Speech-to-Text model into any production setting has ethical implications. You should consider these implications before use.

Demographic Bias

You should assume every machine learning model has demographic bias unless proven otherwise. For STT models, it is often the case that transcription accuracy is better for men than it is for women. If you are using this model in production, you should acknowledge this as a potential issue.

Surveillance

Speech-to-Text technologies may be misused to invade the privacy of others by recording and mining information from private conversations. This kind of individual privacy is protected by law in many countries. You should not assume consent to record and analyze private speech.

Troubleshooting issues

Machine learning models (like this acoustic model) perform best on data that is similar to the data on which they were trained.

The primary sources of variability in forced alignment will be the applicability of the pronunciation dictionary and how similar the speech, demographics, and recording conditions are. If you encounter issues in alignment, there are a couple of avenues to improve performance:

  1. Increase the beam size of MFA

    • MFA defaults to a narrow beam to ensure quick alignment and also as a way to detect potential issues in your dataset, but depending on your data, you might benefit from boosting the beam to 100 or higher.
  2. Add pronunciations to the pronunciation dictionary

    • This model was trained on a particular dialect/style, so adding pronunciations more representative of the variety spoken in your dataset will help alignment.
  3. Check the quality of your data

    • MFA includes a validator utility, which aims to detect issues in the dataset.
    • Use MFA's anchor utility to visually inspect your data as MFA sees it and correct issues in transcription or OOV items.
  4. Adapt the model to your data

    • MFA has an adapt command that adapts the model to your data based on an initial alignment, after which you can run another alignment with the adapted model.
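The troubleshooting steps above can be combined into one pass: validate first, then align with a wider beam, then adapt if needed. This is a sketch, not a prescribed workflow; the corpus path, dictionary file, and output paths below are placeholders, and which pronunciation dictionary you pair with the model depends on your setup.

```shell
# Hypothetical paths throughout; substitute your own corpus and dictionary.
mfa validate ~/corpus dictionary.txt english_us_arpa
mfa align ~/corpus dictionary.txt english_us_arpa ~/aligned --beam 100 --retry_beam 400
mfa adapt ~/corpus dictionary.txt english_us_arpa ~/adapted_model.zip
```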

Training data

This model was trained on the following corpora:

dutch_cv v2.0.0

Dutch CV acoustic model v2.0.0

Link to documentation on mfa-models

Model details

  • Maintainer: Vox Communis
  • Language: Dutch
  • Dialect: N/A
  • Phone set: Epitran
  • Model type: Acoustic model
  • Features: MFCC
  • Architecture: gmm-hmm
  • Model version: v2.0.0
  • Trained date: 02-11-2022
  • Compatible MFA version: v2.0.0
  • License: CC-0
  • Citation:
@misc{
	Ahn_Chodroff_2022,
	author={Ahn, Emily and Chodroff, Eleanor},
	title={VoxCommunis Corpus},
	address={\url{https://osf.io/t957v}},
	publisher={OSF},
	year={2022},
	month={Jan}
}

Installation

Install from the MFA command line:

mfa models download acoustic dutch_cv

Or download from the release page.

Intended use

This model is intended for forced alignment of Dutch transcripts.

This model uses the Epitran phone set for Dutch, and was trained with the pronunciation dictionaries above. Pronunciations can be added on top of the dictionary, as long as no additional phones are introduced.

Performance Factors

As forced alignment is a relatively well-constrained problem (given accurate transcripts), this model should be applicable to a range of recording conditions and speakers. However, please note that it was trained on read speech in low-noise environments; as your data diverges from that, you may run into alignment issues. If so, try increasing MFA's beam size, or see the other recommendations in the troubleshooting section below.

Please note as well that MFA does not use state-of-the-art ASR models for forced alignment. You may get better performance (especially on speech-to-text tasks) using other frameworks like Coqui.

Ethical considerations

Deploying any Speech-to-Text model into any production setting has ethical implications. You should consider these implications before use.

Demographic Bias

You should assume every machine learning model has demographic bias unless proven otherwise. For STT models, it is often the case that transcription accuracy is better for men than it is for women. If you are using this model in production, you should acknowledge this as a potential issue.

Surveillance

Speech-to-Text technologies may be misused to invade the privacy of others by recording and mining information from private conversations. This kind of individual privacy is protected by law in many countries. You should not assume consent to record and analyze private speech.

Troubleshooting issues

Machine learning models (like this acoustic model) perform best on data that is similar to the data on which they were trained.

The primary sources of variability in forced alignment will be the applicability of the pronunciation dictionary and how similar the speech, demographics, and recording conditions are. If you encounter issues in alignment, there are a couple of avenues to improve performance:

  1. Increase the beam size of MFA

    • MFA defaults to a narrow beam to ensure quick alignment and also as a way to detect potential issues in your dataset, but depending on your data, you might benefit from boosting the beam to 100 or higher.
  2. Add pronunciations to the pronunciation dictionary

    • This model was trained on a particular dialect/style, so adding pronunciations more representative of the variety spoken in your dataset will help alignment.
  3. Check the quality of your data

    • MFA includes a validator utility, which aims to detect issues in the dataset.
    • Use MFA's anchor utility to visually inspect your data as MFA sees it and correct issues in transcription or OOV items.
  4. Adapt the model to your data

    • MFA has an adapt command that adapts the model to your data based on an initial alignment, after which you can run another alignment with the adapted model.
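The troubleshooting steps above can be combined into one pass: validate first, then align with a wider beam, then adapt if needed. This is a sketch, not a prescribed workflow; the corpus path, dictionary file, and output paths below are placeholders, and which pronunciation dictionary you pair with the model depends on your setup.

```shell
# Hypothetical paths throughout; substitute your own corpus and dictionary.
mfa validate ~/corpus dictionary.txt dutch_cv
mfa align ~/corpus dictionary.txt dutch_cv ~/aligned --beam 100 --retry_beam 400
mfa adapt ~/corpus dictionary.txt dutch_cv ~/adapted_model.zip
```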

Training data

This model was trained on the following corpora:

czech_cv v2.0.0

Czech CV acoustic model v2.0.0

Link to documentation on mfa-models

Model details

  • Maintainer: Vox Communis
  • Language: Czech
  • Dialect: N/A
  • Phone set: XPF
  • Model type: Acoustic model
  • Features: MFCC
  • Architecture: gmm-hmm
  • Model version: v2.0.0
  • Trained date: 02-11-2022
  • Compatible MFA version: v2.0.0
  • License: CC-0
  • Citation:
@misc{
	Ahn_Chodroff_2022,
	author={Ahn, Emily and Chodroff, Eleanor},
	title={VoxCommunis Corpus},
	address={\url{https://osf.io/t957v}},
	publisher={OSF},
	year={2022},
	month={Jan}
}

Installation

Install from the MFA command line:

mfa models download acoustic czech_cv

Or download from the release page.

Intended use

This model is intended for forced alignment of Czech transcripts.

This model uses the XPF phone set for Czech, and was trained with the pronunciation dictionaries above. Pronunciations can be added on top of the dictionary, as long as no additional phones are introduced.

Performance Factors

As forced alignment is a relatively well-constrained problem (given accurate transcripts), this model should be applicable to a range of recording conditions and speakers. However, please note that it was trained on read speech in low-noise environments; as your data diverges from that, you may run into alignment issues. If so, try increasing MFA's beam size, or see the other recommendations in the troubleshooting section below.

Please note as well that MFA does not use state-of-the-art ASR models for forced alignment. You may get better performance (especially on speech-to-text tasks) using other frameworks like Coqui.

Ethical considerations

Deploying any Speech-to-Text model into any production setting has ethical implications. You should consider these implications before use.

Demographic Bias

You should assume every machine learning model has demographic bias unless proven otherwise. For STT models, it is often the case that transcription accuracy is better for men than it is for women. If you are using this model in production, you should acknowledge this as a potential issue.

Surveillance

Speech-to-Text technologies may be misused to invade the privacy of others by recording and mining information from private conversations. This kind of individual privacy is protected by law in many countries. You should not assume consent to record and analyze private speech.

Troubleshooting issues

Machine learning models (like this acoustic model) perform best on data that is similar to the data on which they were trained.

The primary sources of variability in forced alignment will be the applicability of the pronunciation dictionary and how similar the speech, demographics, and recording conditions are. If you encounter issues in alignment, there are a couple of avenues to improve performance:

  1. Increase the beam size of MFA

    • MFA defaults to a narrow beam to ensure quick alignment and also as a way to detect potential issues in your dataset, but depending on your data, you might benefit from boosting the beam to 100 or higher.
  2. Add pronunciations to the pronunciation dictionary

    • This model was trained on a particular dialect/style, so adding pronunciations more representative of the variety spoken in your dataset will help alignment.
  3. Check the quality of your data

    • MFA includes a validator utility, which aims to detect issues in the dataset.
    • Use MFA's anchor utility to visually inspect your data as MFA sees it and correct issues in transcription or OOV items.
  4. Adapt the model to your data

    • MFA has an adapt command that adapts the model to your data based on an initial alignment, after which you can run another alignment with the adapted model.
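The troubleshooting steps above can be combined into one pass: validate first, then align with a wider beam, then adapt if needed. This is a sketch, not a prescribed workflow; the corpus path, dictionary file, and output paths below are placeholders, and which pronunciation dictionary you pair with the model depends on your setup.

```shell
# Hypothetical paths throughout; substitute your own corpus and dictionary.
mfa validate ~/corpus dictionary.txt czech_cv
mfa align ~/corpus dictionary.txt czech_cv ~/aligned --beam 100 --retry_beam 400
mfa adapt ~/corpus dictionary.txt czech_cv ~/adapted_model.zip
```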

Training data

This model was trained on the following corpora:

chuvash_cv v2.0.0

Chuvash CV acoustic model v2.0.0

Link to documentation on mfa-models

Model details

  • Maintainer: Vox Communis
  • Language: Chuvash
  • Dialect: N/A
  • Phone set: XPF
  • Model type: Acoustic model
  • Features: MFCC
  • Architecture: gmm-hmm
  • Model version: v2.0.0
  • Trained date: 02-11-2022
  • Compatible MFA version: v2.0.0
  • License: CC-0
  • Citation:
@misc{
	Ahn_Chodroff_2022,
	author={Ahn, Emily and Chodroff, Eleanor},
	title={VoxCommunis Corpus},
	address={\url{https://osf.io/t957v}},
	publisher={OSF},
	year={2022},
	month={Jan}
}

Installation

Install from the MFA command line:

mfa models download acoustic chuvash_cv

Or download from the release page.

Intended use

This model is intended for forced alignment of Chuvash transcripts.

This model uses the XPF phone set for Chuvash, and was trained with the pronunciation dictionaries above. Pronunciations can be added on top of the dictionary, as long as no additional phones are introduced.

Performance Factors

As forced alignment is a relatively well-constrained problem (given accurate transcripts), this model should be applicable to a range of recording conditions and speakers. However, please note that it was trained on read speech in low-noise environments; as your data diverges from that, you may run into alignment issues. If so, try increasing MFA's beam size, or see the other recommendations in the troubleshooting section below.

Please note as well that MFA does not use state-of-the-art ASR models for forced alignment. You may get better performance (especially on speech-to-text tasks) using other frameworks like Coqui.

Ethical considerations

Deploying any Speech-to-Text model into any production setting has ethical implications. You should consider these implications before use.

Demographic Bias

You should assume every machine learning model has demographic bias unless proven otherwise. For STT models, it is often the case that transcription accuracy is better for men than it is for women. If you are using this model in production, you should acknowledge this as a potential issue.

Surveillance

Speech-to-Text technologies may be misused to invade the privacy of others by recording and mining information from private conversations. This kind of individual privacy is protected by law in many countries. You should not assume consent to record and analyze private speech.

Troubleshooting issues

Machine learning models (like this acoustic model) perform best on data that is similar to the data on which they were trained.

The primary sources of variability in forced alignment will be the applicability of the pronunciation dictionary and how similar the speech, demographics, and recording conditions are. If you encounter issues in alignment, there are a couple of avenues to improve performance:

  1. Increase the beam size of MFA

    • MFA defaults to a narrow beam to ensure quick alignment and also as a way to detect potential issues in your dataset, but depending on your data, you might benefit from boosting the beam to 100 or higher.
  2. Add pronunciations to the pronunciation dictionary

    • This model was trained on a particular dialect/style, so adding pronunciations more representative of the variety spoken in your dataset will help alignment.
  3. Check the quality of your data

    • MFA includes a validator utility, which aims to detect issues in the dataset.
    • Use MFA's anchor utility to visually inspect your data as MFA sees it and correct issues in transcription or OOV items.
  4. Adapt the model to your data

    • MFA has an adapt command that adapts the model to your data based on an initial alignment, after which you can run another alignment with the adapted model.
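The troubleshooting steps above can be combined into one pass: validate first, then align with a wider beam, then adapt if needed. This is a sketch, not a prescribed workflow; the corpus path, dictionary file, and output paths below are placeholders, and which pronunciation dictionary you pair with the model depends on your setup.

```shell
# Hypothetical paths throughout; substitute your own corpus and dictionary.
mfa validate ~/corpus dictionary.txt chuvash_cv
mfa align ~/corpus dictionary.txt chuvash_cv ~/aligned --beam 100 --retry_beam 400
mfa adapt ~/corpus dictionary.txt chuvash_cv ~/adapted_model.zip
```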

Training data

This model was trained on the following corpora:

bulgarian_cv v2.0.0

Bulgarian CV acoustic model v2.0.0

Link to documentation on mfa-models

Model details

  • Maintainer: Vox Communis
  • Language: Bulgarian
  • Dialect: N/A
  • Phone set: XPF
  • Model type: Acoustic model
  • Features: MFCC
  • Architecture: gmm-hmm
  • Model version: v2.0.0
  • Trained date: 02-11-2022
  • Compatible MFA version: v2.0.0
  • License: CC-0
  • Citation:
@misc{
	Ahn_Chodroff_2022,
	author={Ahn, Emily and Chodroff, Eleanor},
	title={VoxCommunis Corpus},
	address={\url{https://osf.io/t957v}},
	publisher={OSF},
	year={2022},
	month={Jan}
}

Installation

Install from the MFA command line:

mfa models download acoustic bulgarian_cv

Or download from the release page.

Intended use

This model is intended for forced alignment of Bulgarian transcripts.

This model uses the XPF phone set for Bulgarian, and was trained with the pronunciation dictionaries above. Pronunciations can be added on top of the dictionary, as long as no additional phones are introduced.

Performance Factors

As forced alignment is a relatively well-constrained problem (given accurate transcripts), this model should be applicable to a range of recording conditions and speakers. However, please note that it was trained on read speech in low-noise environments, so as your data diverges from that, you may run into alignment issues or need to increase the beam size of MFA or see other recommendations in the troubleshooting section below.

Please note as well that MFA does not use state-of-the-art ASR models for forced alignment. You may get better performance (especially on speech-to-text tasks) using other frameworks like Coqui.

Ethical considerations

Deploying any Speech-to-Text model into any production setting has ethical implications. You should consider these implications before use.

Demographic Bias

You should assume every machine learning model has demographic bias unless proven otherwise. For STT models, it is often the case that transcription accuracy is better for men than it is for women. If you are using this model in production, you should acknowledge this as a potential issue.

Surveillance

Speech-to-Text technologies may be misused to invade the privacy of others by recording and mining information from private conversations. This kind of individual privacy is protected by law in many countries. You should not assume consent to record and analyze private speech.

Troubleshooting issues

Machine learning models (like this acoustic model) perform best on data that is similar to the data on which they were trained.

The primary sources of variability in forced alignment will be the applicability of the pronunciation dictionary and how similar the speech, demographics, and recording conditions are. If you encounter issues in alignment, there are a couple of avenues to improve performance:

  1. Increase the beam size of MFA

    • MFA defaults to a narrow beam both to keep alignment fast and to surface potential issues in your dataset; depending on your data, you may benefit from boosting the beam to 100 or higher.
  2. Add pronunciations to the pronunciation dictionary

    • This model was trained on a particular dialect/style, so adding pronunciations more representative of the variety spoken in your dataset will help alignment.
  3. Check the quality of your data

    • MFA includes a validator utility, which aims to detect issues in the dataset.
    • Use MFA's anchor utility to visually inspect your data as MFA sees it and correct issues in transcription or OOV items.
  4. Adapt the model to your data

    • MFA has an adaptation command to adapt some of the model to your data based on an initial alignment, and then run another alignment with the adapted model.
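The steps above correspond to MFA subcommands. A minimal sketch, assuming MFA v2.0 is installed along with the matching bulgarian_cv dictionary from this repository; every /path/to/... argument is a placeholder for your own corpus, model, and output locations:

```shell
# 1. Align with a wider beam (retry_beam should stay larger than beam)
mfa align /path/to/corpus bulgarian_cv bulgarian_cv /path/to/aligned --beam 100 --retry_beam 400

# 3. Validate the corpus first to surface transcription problems and OOV items
mfa validate /path/to/corpus bulgarian_cv bulgarian_cv

# 4. Adapt the acoustic model to your data, then align with the adapted model
mfa adapt /path/to/corpus bulgarian_cv bulgarian_cv /path/to/adapted_model.zip
mfa align /path/to/corpus bulgarian_cv /path/to/adapted_model.zip /path/to/aligned
```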

Training data

This model was trained on the following corpora:

belarusian_cv v2.0.0

23 Mar 01:35
e092254

Belarusian CV acoustic model v2.0.0

Link to documentation on mfa-models

Jump to section:

Model details

  • Maintainer: Vox Communis
  • Language: Belarusian
  • Dialect: N/A
  • Phone set: XPF
  • Model type: Acoustic model
  • Features: MFCC
  • Architecture: gmm-hmm
  • Model version: v2.0.0
  • Trained date: 02-11-2022
  • Compatible MFA version: v2.0.0
  • License: CC-0
  • Citation:
@misc{
	Ahn_Chodroff_2022,
	author={Ahn, Emily and Chodroff, Eleanor},
	title={VoxCommunis Corpus},
	address={\url{https://osf.io/t957v}},
	publisher={OSF},
	year={2022},
	month={Jan}
}

Installation

Install from the MFA command line:

mfa models download acoustic belarusian_cv

Or download from the release page.

Intended use

This model is intended for forced alignment of Belarusian transcripts.

This model uses the XPF phone set for Belarusian, and was trained with the pronunciation dictionaries above. Pronunciations can be added on top of the dictionary, as long as no additional phones are introduced.

Performance Factors

As forced alignment is a relatively well-constrained problem (given accurate transcripts), this model should be applicable to a range of recording conditions and speakers. However, please note that it was trained on read speech in low-noise environments, so as your data diverges from that, you may run into alignment issues or need to increase the beam size of MFA or see other recommendations in the troubleshooting section below.

Please note as well that MFA does not use state-of-the-art ASR models for forced alignment. You may get better performance (especially on speech-to-text tasks) using other frameworks like Coqui.

Ethical considerations

Deploying any Speech-to-Text model into any production setting has ethical implications. You should consider these implications before use.

Demographic Bias

You should assume every machine learning model has demographic bias unless proven otherwise. For STT models, it is often the case that transcription accuracy is better for men than it is for women. If you are using this model in production, you should acknowledge this as a potential issue.

Surveillance

Speech-to-Text technologies may be misused to invade the privacy of others by recording and mining information from private conversations. This kind of individual privacy is protected by law in many countries. You should not assume consent to record and analyze private speech.

Troubleshooting issues

Machine learning models (like this acoustic model) perform best on data that is similar to the data on which they were trained.

The primary sources of variability in forced alignment will be the applicability of the pronunciation dictionary and how similar the speech, demographics, and recording conditions are. If you encounter issues in alignment, there are a couple of avenues to improve performance:

  1. Increase the beam size of MFA

    • MFA defaults to a narrow beam both to keep alignment fast and to surface potential issues in your dataset; depending on your data, you may benefit from boosting the beam to 100 or higher.
  2. Add pronunciations to the pronunciation dictionary

    • This model was trained on a particular dialect/style, so adding pronunciations more representative of the variety spoken in your dataset will help alignment.
  3. Check the quality of your data

    • MFA includes a validator utility, which aims to detect issues in the dataset.
    • Use MFA's anchor utility to visually inspect your data as MFA sees it and correct issues in transcription or OOV items.
  4. Adapt the model to your data

    • MFA has an adaptation command to adapt some of the model to your data based on an initial alignment, and then run another alignment with the adapted model.
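The steps above correspond to MFA subcommands. A minimal sketch, assuming MFA v2.0 is installed along with the matching belarusian_cv dictionary from this repository; every /path/to/... argument is a placeholder for your own corpus, model, and output locations:

```shell
# 1. Align with a wider beam (retry_beam should stay larger than beam)
mfa align /path/to/corpus belarusian_cv belarusian_cv /path/to/aligned --beam 100 --retry_beam 400

# 3. Validate the corpus first to surface transcription problems and OOV items
mfa validate /path/to/corpus belarusian_cv belarusian_cv

# 4. Adapt the acoustic model to your data, then align with the adapted model
mfa adapt /path/to/corpus belarusian_cv belarusian_cv /path/to/adapted_model.zip
mfa align /path/to/corpus belarusian_cv /path/to/adapted_model.zip /path/to/aligned
```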

Training data

This model was trained on the following corpora: