[TTS] Add DiffSinger with opencpop dataset #3005

Merged on Mar 13, 2023 (57 commits).

Commits
- `f58de66` updata readme, test=doc (lym0302, Aug 26, 2022)
- `0251c38` Merge branch 'PaddlePaddle:develop' into develop (lym0302, Aug 29, 2022)
- `034aef5` Merge branch 'PaddlePaddle:develop' into develop (lym0302, Sep 6, 2022)
- `ccce14f` Merge branch 'PaddlePaddle:develop' into develop (lym0302, Sep 14, 2022)
- `2244b53` Merge branch 'PaddlePaddle:develop' into develop (lym0302, Sep 15, 2022)
- `5c197e7` Merge branch 'PaddlePaddle:develop' into develop (lym0302, Sep 20, 2022)
- `8e5e265` update yaml and readme, test=tts (lym0302, Sep 20, 2022)
- `6b4cccb` Merge branch 'PaddlePaddle:develop' into develop (lym0302, Sep 20, 2022)
- `697e1f7` Merge branch 'PaddlePaddle:develop' into develop (lym0302, Sep 26, 2022)
- `f6cf18e` fix batch_size, test=tts (lym0302, Sep 26, 2022)
- `20ccc05` Merge branch 'PaddlePaddle:develop' into develop (lym0302, Sep 27, 2022)
- `c737dab` Merge branch 'PaddlePaddle:develop' into develop (lym0302, Sep 30, 2022)
- `8dc3c98` Merge branch 'PaddlePaddle:develop' into develop (lym0302, Oct 8, 2022)
- `fa434cb` Merge branch 'PaddlePaddle:develop' into develop (lym0302, Oct 12, 2022)
- `2b9d7c8` Merge branch 'PaddlePaddle:develop' into develop (lym0302, Oct 17, 2022)
- `8164d86` Merge branch 'PaddlePaddle:develop' into develop (lym0302, Oct 20, 2022)
- `8964190` Merge branch 'PaddlePaddle:develop' into develop (lym0302, Oct 27, 2022)
- `06383d5` Merge branch 'PaddlePaddle:develop' into develop (lym0302, Oct 27, 2022)
- `2a978bc` Merge branch 'PaddlePaddle:develop' into develop (lym0302, Nov 1, 2022)
- `664aed4` Merge branch 'PaddlePaddle:develop' into develop (lym0302, Nov 2, 2022)
- `003ff8f` update readme, test=doc (lym0302, Nov 4, 2022)
- `d3eb589` Merge branch 'develop' of https://github.com/lym0302/PaddleSpeech int… (lym0302, Nov 4, 2022)
- `dc71ad0` chmod, test=tts (lym0302, Nov 14, 2022)
- `8457159` Merge branch 'develop' of https://github.com/lym0302/PaddleSpeech int… (lym0302, Nov 14, 2022)
- `eef87bb` Merge branch 'PaddlePaddle:develop' into develop (lym0302, Nov 14, 2022)
- `2e5af47` Merge branch 'PaddlePaddle:develop' into develop (lym0302, Nov 14, 2022)
- `5c67d95` Merge branch 'develop' of https://github.com/lym0302/PaddleSpeech int… (lym0302, Nov 14, 2022)
- `4d8ef8c` Merge branch 'PaddlePaddle:develop' into develop (lym0302, Nov 16, 2022)
- `152ebcb` Merge branch 'PaddlePaddle:develop' into develop (lym0302, Nov 29, 2022)
- `5c8b75e` Merge branch 'PaddlePaddle:develop' into develop (lym0302, Dec 4, 2022)
- `700e281` Merge branch 'PaddlePaddle:develop' into develop (lym0302, Dec 28, 2022)
- `bfae0be` add multi-spk static model infer, test=tts (lym0302, Dec 28, 2022)
- `7ad91d6` Merge branch 'PaddlePaddle:develop' into develop (lym0302, Jan 11, 2023)
- `82378e5` Merge branch 'PaddlePaddle:develop' into develop (lym0302, Jan 15, 2023)
- `c463b35` diffsinger opencpop fft train, test=tts (lym0302, Jan 16, 2023)
- `6fb281c` fix pitch_mask (lym0302, Jan 16, 2023)
- `ef7d15d` base diffsinger, test=tts (lym0302, Feb 1, 2023)
- `c91dc02` fix diffsinger, test=tts (lym0302, Feb 3, 2023)
- `84a22ff` diffsinger_tmp (lym0302, Feb 9, 2023)
- `def9d64` fix diffsinger loss target to noisy_mel (HighCWu, Feb 9, 2023)
- `9e8bd9f` Merge pull request #3 from HighCWu/diffsinger_tmp (lym0302, Feb 9, 2023)
- `8a4b18c` add test.jsonl (lym0302, Feb 10, 2023)
- `ffe44b8` Merge branch 'diffsinger_tmp' of https://github.com/lym0302/PaddleSpe… (lym0302, Feb 10, 2023)
- `4ecc752` fix eval (lym0302, Feb 10, 2023)
- `9df1294` add linear norm (lym0302, Feb 14, 2023)
- `d1173b9` fix (lym0302, Feb 14, 2023)
- `d7928d7` update diffsinger, test=tts (lym0302, Feb 22, 2023)
- `1d1e859` diffsinger, test=tts (lym0302, Mar 7, 2023)
- `f71f481` solve conflict (lym0302, Mar 7, 2023)
- `3df69e7` update inference step (lym0302, Mar 8, 2023)
- `9acc852` fix comment (lym0302, Mar 9, 2023)
- `c9c6960` fix inference (lym0302, Mar 9, 2023)
- `bd47de8` update (lym0302, Mar 10, 2023)
- `72d9c63` remove test.jsonl (lym0302, Mar 10, 2023)
- `9b34070` add readme (lym0302, Mar 13, 2023)
- `4e4609f` astype (lym0302, Mar 13, 2023)
- `b86b4db` update voc path (lym0302, Mar 13, 2023)
174 changes: 174 additions & 0 deletions examples/opencpop/svs1/README.md
([简体中文](./README_cn.md)|English)
# DiffSinger with Opencpop
This example contains code used to train a [DiffSinger](https://arxiv.org/abs/2105.02446) model with the [Opencpop Mandarin singing corpus](https://wenet.org.cn/opencpop/).

## Dataset
### Download and Extract
Download Opencpop from its [Official Website](https://wenet.org.cn/opencpop/download/) and extract it to `~/datasets`. The dataset is then located in `~/datasets/Opencpop`.

## Get Started
Assume the path to the dataset is `~/datasets/Opencpop`.
Run the command below to:
1. **set the source path**.
2. preprocess the dataset.
3. train the model.
4. synthesize wavs.
    - synthesize waveform from `metadata.jsonl`.
    - (in progress) synthesize waveform from a text file.
5. (in progress) inference using the static model.
```bash
./run.sh
```
You can choose a range of stages to run, or set `stage` equal to `stop-stage` to run a single stage. For example, the following command only preprocesses the dataset.
```bash
./run.sh --stage 0 --stop-stage 0
```
### Data Preprocessing
```bash
./local/preprocess.sh ${conf_path}
```
When it is done, a `dump` folder is created in the current directory. Its structure is listed below.

```text
dump
├── dev
│ ├── norm
│ └── raw
├── phone_id_map.txt
├── speaker_id_map.txt
├── test
│ ├── norm
│ └── raw
└── train
├── energy_stats.npy
├── norm
├── pitch_stats.npy
├── raw
├── speech_stats.npy
└── speech_stretchs.npy

```
The dataset is split into 3 parts, namely `train`, `dev`, and `test`, each of which contains a `norm` and a `raw` subfolder. The `raw` folder contains the speech, pitch, and energy features of each utterance, while the `norm` folder contains the normalized ones. The statistics used to normalize the features are computed from the training set and stored in `dump/train/*_stats.npy`. `speech_stretchs.npy` contains the minimum and maximum of each dimension of the mel spectrogram, which are used for linear stretching before training/inference of the diffusion module.
Note: since training on un-normalized features turned out better than training on normalized ones, the features saved under `norm` are in fact the un-normalized ones.
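For intuition, the following is a minimal NumPy sketch of such a min-max linear stretch. It is not the project's actual implementation; the array layout of `speech_stretchs.npy` and the [-1, 1] target range are assumptions.
```python
import numpy as np

# speech_stretchs.npy is assumed to hold the per-dimension minimum and maximum
# of the training-set mel spectrograms, stacked as shape (2, n_mels).
mel_min, mel_max = np.load("dump/train/speech_stretchs.npy")

def stretch(mel: np.ndarray) -> np.ndarray:
    # Linearly map each mel dimension into [-1, 1] before the diffusion module.
    # The exact target range is an assumption, not taken from the project code.
    return (mel - mel_min) / (mel_max - mel_min + 1e-9) * 2.0 - 1.0

def unstretch(mel: np.ndarray) -> np.ndarray:
    # Invert the stretch on the diffusion module's output.
    return (mel + 1.0) / 2.0 * (mel_max - mel_min + 1e-9) + mel_min
```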


Also, there is a `metadata.jsonl` in each subfolder. It is a table-like file that contains utterance id, speaker id, phones, text lengths, speech lengths, phone durations, the paths of the speech/pitch/energy features, notes, note durations, and slur flags.
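For illustration only, a single record might look roughly like the line below; the key names here are assumptions and may differ from what the preprocessing scripts actually write.
```text
{"utt_id": "2003000102", "spk_id": "opencpop", "phones": [...], "text_lengths": 27,
 "speech_lengths": 402, "durations": [...], "speech": "raw/2003000102_speech.npy",
 "pitch": "raw/2003000102_pitch.npy", "energy": "raw/2003000102_energy.npy",
 "note": [...], "note_durs": [...], "is_slur": [...]}
```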

### Model Training
```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/train.sh ${conf_path} ${train_output_path}
```
`./local/train.sh` calls `${BIN_DIR}/train.py`.
Here's the complete help message.
```text
usage: train.py [-h] [--config CONFIG] [--train-metadata TRAIN_METADATA]
[--dev-metadata DEV_METADATA] [--output-dir OUTPUT_DIR]
[--ngpu NGPU] [--phones-dict PHONES_DICT]
[--speaker-dict SPEAKER_DICT] [--speech-stretchs SPEECH_STRETCHS]

Train a DiffSinger model.

optional arguments:
-h, --help show this help message and exit
--config CONFIG fastspeech2 config file.
--train-metadata TRAIN_METADATA
training data.
--dev-metadata DEV_METADATA
dev data.
--output-dir OUTPUT_DIR
output dir.
--ngpu NGPU if ngpu=0, use cpu.
--phones-dict PHONES_DICT
phone vocabulary file.
--speaker-dict SPEAKER_DICT
speaker id map file for multiple speaker model.
--speech-stretchs SPEECH_STRETCHS
min and max mel for stretching.
```
1. `--config` is a config file in yaml format to overwrite the default config, which can be found at `conf/default.yaml`.
2. `--train-metadata` and `--dev-metadata` should be the metadata files in the normalized subfolders of `train` and `dev` in the `dump` folder.
3. `--output-dir` is the directory to save the results of the experiment. Checkpoints are saved in `checkpoints/` inside this directory.
4. `--ngpu` is the number of GPUs to use; if ngpu == 0, the CPU is used.
5. `--phones-dict` is the path of the phone vocabulary file.
6. `--speech-stretchs` is the path of the file with the min and max mel values (see the example invocation below).
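For reference, a direct invocation of `train.py`, roughly equivalent to what `./local/train.sh` does, might look like the sketch below; the `exp/default` output directory is an assumption, and the metadata and statistics paths follow the `dump` layout shown above.
```bash
python3 ${BIN_DIR}/train.py \
    --config=conf/default.yaml \
    --train-metadata=dump/train/norm/metadata.jsonl \
    --dev-metadata=dump/dev/norm/metadata.jsonl \
    --output-dir=exp/default \
    --ngpu=1 \
    --phones-dict=dump/phone_id_map.txt \
    --speech-stretchs=dump/train/speech_stretchs.npy
```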

### Synthesizing
We use Parallel WaveGAN as the neural vocoder.
Download the pretrained Parallel WaveGAN model from [pwgan_opencpop_ckpt_1.4.0.zip](https://paddlespeech.bj.bcebos.com/t2s/svs/opencpop/pwgan_opencpop_ckpt_1.4.0.zip) and unzip it.
```bash
unzip pwgan_opencpop_ckpt_1.4.0.zip
```
The Parallel WaveGAN checkpoint contains the files listed below.
```text
pwgan_opencpop_ckpt_1.4.0.zip
├── default.yaml # default config used to train parallel wavegan
├── snapshot_iter_100000.pdz # model parameters of parallel wavegan
└── feats_stats.npy # statistics used to normalize spectrogram when training parallel wavegan
```
`./local/synthesize.sh` calls `${BIN_DIR}/../synthesize.py`, which can synthesize waveform from `metadata.jsonl`.
```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize.sh ${conf_path} ${train_output_path} ${ckpt_name}
```
```text
usage: synthesize.py [-h]
[--am {diffsinger_opencpop}]
[--am_config AM_CONFIG] [--am_ckpt AM_CKPT]
[--am_stat AM_STAT] [--phones_dict PHONES_DICT]
[--voc {pwgan_opencpop}]
[--voc_config VOC_CONFIG] [--voc_ckpt VOC_CKPT]
[--voc_stat VOC_STAT] [--ngpu NGPU]
[--test_metadata TEST_METADATA] [--output_dir OUTPUT_DIR]
[--speech_stretchs SPEECH_STRETCHS]

Synthesize with acoustic model & vocoder

optional arguments:
-h, --help show this help message and exit
--am {speedyspeech_csmsc,fastspeech2_csmsc,fastspeech2_ljspeech,fastspeech2_aishell3,fastspeech2_vctk,tacotron2_csmsc,tacotron2_ljspeech,tacotron2_aishell3}
Choose acoustic model type of tts task.
--am_config AM_CONFIG
Config of acoustic model.
--am_ckpt AM_CKPT Checkpoint file of acoustic model.
--am_stat AM_STAT mean and standard deviation used to normalize
spectrogram when training acoustic model.
--phones_dict PHONES_DICT
phone vocabulary file.
--tones_dict TONES_DICT
tone vocabulary file.
--speaker_dict SPEAKER_DICT
speaker id map file.
--voice-cloning VOICE_CLONING
whether training voice cloning model.
--voc {pwgan_csmsc,pwgan_ljspeech,pwgan_aishell3,pwgan_vctk,mb_melgan_csmsc,wavernn_csmsc,hifigan_csmsc,hifigan_ljspeech,hifigan_aishell3,hifigan_vctk,style_melgan_csmsc}
Choose vocoder type of tts task.
--voc_config VOC_CONFIG
Config of voc.
--voc_ckpt VOC_CKPT Checkpoint file of voc.
--voc_stat VOC_STAT mean and standard deviation used to normalize
spectrogram when training voc.
--ngpu NGPU if ngpu == 0, use cpu.
--test_metadata TEST_METADATA
test metadata.
--output_dir OUTPUT_DIR
output dir.
--speech_stretchs mel min and max values file.
```
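For reference, a direct call to `synthesize.py` might look like the sketch below; `${ckpt_name}` and the `exp/default` directory are assumptions, and the vocoder paths follow the unzipped `pwgan_opencpop_ckpt_1.4.0` layout above.
```bash
python3 ${BIN_DIR}/../synthesize.py \
    --am=diffsinger_opencpop \
    --am_config=conf/default.yaml \
    --am_ckpt=exp/default/checkpoints/${ckpt_name} \
    --am_stat=dump/train/speech_stats.npy \
    --phones_dict=dump/phone_id_map.txt \
    --voc=pwgan_opencpop \
    --voc_config=pwgan_opencpop_ckpt_1.4.0/default.yaml \
    --voc_ckpt=pwgan_opencpop_ckpt_1.4.0/snapshot_iter_100000.pdz \
    --voc_stat=pwgan_opencpop_ckpt_1.4.0/feats_stats.npy \
    --test_metadata=dump/test/norm/metadata.jsonl \
    --output_dir=exp/default/test \
    --ngpu=1 \
    --speech_stretchs=dump/train/speech_stretchs.npy
```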


## Pretrained Model
Pretrained DiffSinger model:
- [diffsinger_opencpop_ckpt_1.4.0.zip](https://paddlespeech.bj.bcebos.com/t2s/svs/opencpop/diffsinger_opencpop_ckpt_1.4.0.zip)
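To try it, download and unzip the checkpoint, for example:
```bash
wget https://paddlespeech.bj.bcebos.com/t2s/svs/opencpop/diffsinger_opencpop_ckpt_1.4.0.zip
unzip diffsinger_opencpop_ckpt_1.4.0.zip
```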

The DiffSinger checkpoint contains the files listed below.
```text
diffsinger_opencpop_ckpt_1.4.0.zip
├── default.yaml # default config used to train diffsinger
├── energy_stats.npy # statistics used to normalize energy when training diffsinger if norm is needed
├── phone_id_map.txt # phone vocabulary file when training diffsinger
├── pitch_stats.npy # statistics used to normalize pitch when training diffsinger if norm is needed
├── snapshot_iter_160000.pdz # model parameters of diffsinger
├── speech_stats.npy # statistics used to normalize mel when training diffsinger if norm is needed
└── speech_stretchs.npy # min and max values used to stretch the mel spectrogram before the diffusion module

```
At present, the text frontend is not yet complete, so synthesizing audio via `synthesize_e2e` is not supported. Use `synthesize` instead.
179 changes: 179 additions & 0 deletions examples/opencpop/svs1/README_cn.md
(Simplified Chinese|[English](./README.md))
# Training DiffSinger with the Opencpop Dataset

This example contains code used to train a [DiffSinger](https://arxiv.org/abs/2105.02446) model with the [Mandarin singing corpus](https://wenet.org.cn/opencpop/) Opencpop.

## Dataset
### Download and Extract
Download the dataset from the [Official Website](https://wenet.org.cn/opencpop/download/).

## Get Started
Assume the path to the dataset is `~/datasets/Opencpop`.
Running the command below will:

1. **set the source path**.
2. preprocess the dataset.
3. train the model.
4. synthesize waveforms.
    - synthesize waveforms from `metadata.jsonl`.
    - (in progress) synthesize waveforms from a text file.
5. (in progress) inference using the static model.
```bash
./run.sh
```
You can choose a range of stages to run, or set `stage` equal to `stop-stage` to run a single stage. For example, the following command only preprocesses the dataset.
```bash
./run.sh --stage 0 --stop-stage 0
```
### Data Preprocessing
```bash
./local/preprocess.sh ${conf_path}
```
When it is done, a `dump` folder is created in the current directory. Its structure is listed below.

```text
dump
├── dev
│ ├── norm
│ └── raw
├── phone_id_map.txt
├── speaker_id_map.txt
├── test
│ ├── norm
│ └── raw
└── train
├── energy_stats.npy
├── norm
├── pitch_stats.npy
├── raw
├── speech_stats.npy
└── speech_stretchs.npy
```

The dataset is split into three parts, namely `train`, `dev`, and `test`, each of which contains a `norm` and a `raw` subfolder. The `raw` folder contains the speech, pitch, and energy features of each utterance, while the `norm` folder contains the normalized ones. The statistics used to normalize the features are computed from the training set and stored in `dump/train/*_stats.npy`. `speech_stretchs.npy` contains the minimum and maximum of each dimension of the mel spectrogram, used for linear stretching before training/inference of the diffusion module.
Note: since training on un-normalized features works better than training on normalized ones, the features saved under `norm` are actually un-normalized.


In addition, there is a `metadata.jsonl` in each subfolder. It is a table-like file containing utterance id, speaker id, phones, text lengths, speech lengths, phone durations, the paths of the speech/pitch/energy features, notes, note durations, and slur flags.

### Model Training
```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/train.sh ${conf_path} ${train_output_path}
```
`./local/train.sh` calls `${BIN_DIR}/train.py`.
Here is the complete help message.

```text
usage: train.py [-h] [--config CONFIG] [--train-metadata TRAIN_METADATA]
[--dev-metadata DEV_METADATA] [--output-dir OUTPUT_DIR]
[--ngpu NGPU] [--phones-dict PHONES_DICT]
[--speaker-dict SPEAKER_DICT] [--speech-stretchs SPEECH_STRETCHS]

Train a DiffSinger model.

optional arguments:
-h, --help show this help message and exit
--config CONFIG fastspeech2 config file.
--train-metadata TRAIN_METADATA
training data.
--dev-metadata DEV_METADATA
dev data.
--output-dir OUTPUT_DIR
output dir.
--ngpu NGPU if ngpu=0, use cpu.
--phones-dict PHONES_DICT
phone vocabulary file.
--speaker-dict SPEAKER_DICT
speaker id map file for multiple speaker model.
--speech-stretchs SPEECH_STRETCHS
min and max mel for stretching.
```
1. `--config` is a config file in yaml format to overwrite the default config, which can be found at `conf/default.yaml`.
2. `--train-metadata` and `--dev-metadata` should be the metadata files in the normalized subfolders of `train` and `dev` in the `dump` folder.
3. `--output-dir` is the directory to save the results of the experiment; checkpoints are saved in `checkpoints/` inside this directory.
4. `--ngpu` is the number of GPUs to use; if ngpu == 0, the CPU is used.
5. `--phones-dict` is the path of the phone vocabulary file.
6. `--speech-stretchs` is the path of the file with the min and max mel values.

### Synthesizing
We use Parallel WaveGAN as the neural vocoder.
Download the pretrained Parallel WaveGAN model from [pwgan_opencpop_ckpt_1.4.0.zip](https://paddlespeech.bj.bcebos.com/t2s/svs/opencpop/pwgan_opencpop_ckpt_1.4.0.zip) and unzip it.

```bash
unzip pwgan_opencpop_ckpt_1.4.0.zip
```
The Parallel WaveGAN checkpoint contains the files listed below.
```text
pwgan_opencpop_ckpt_1.4.0.zip
├── default.yaml # default config used to train parallel wavegan
├── snapshot_iter_100000.pdz # model parameters of parallel wavegan
└── feats_stats.npy # statistics used to normalize the spectrogram when training parallel wavegan
```
`./local/synthesize.sh` calls `${BIN_DIR}/../synthesize.py`, which synthesizes waveforms from `metadata.jsonl`.

```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize.sh ${conf_path} ${train_output_path} ${ckpt_name}
```
```text
usage: synthesize.py [-h]
[--am {diffsinger_opencpop}]
[--am_config AM_CONFIG] [--am_ckpt AM_CKPT]
[--am_stat AM_STAT] [--phones_dict PHONES_DICT]
[--voc {pwgan_opencpop}]
[--voc_config VOC_CONFIG] [--voc_ckpt VOC_CKPT]
[--voc_stat VOC_STAT] [--ngpu NGPU]
[--test_metadata TEST_METADATA] [--output_dir OUTPUT_DIR]
[--speech_stretchs SPEECH_STRETCHS]

Synthesize with acoustic model & vocoder

optional arguments:
-h, --help show this help message and exit
--am {speedyspeech_csmsc,fastspeech2_csmsc,fastspeech2_ljspeech,fastspeech2_aishell3,fastspeech2_vctk,tacotron2_csmsc,tacotron2_ljspeech,tacotron2_aishell3}
Choose acoustic model type of tts task.
--am_config AM_CONFIG
Config of acoustic model.
--am_ckpt AM_CKPT Checkpoint file of acoustic model.
--am_stat AM_STAT mean and standard deviation used to normalize
spectrogram when training acoustic model.
--phones_dict PHONES_DICT
phone vocabulary file.
--tones_dict TONES_DICT
tone vocabulary file.
--speaker_dict SPEAKER_DICT
speaker id map file.
--voice-cloning VOICE_CLONING
whether training voice cloning model.
--voc {pwgan_csmsc,pwgan_ljspeech,pwgan_aishell3,pwgan_vctk,mb_melgan_csmsc,wavernn_csmsc,hifigan_csmsc,hifigan_ljspeech,hifigan_aishell3,hifigan_vctk,style_melgan_csmsc}
Choose vocoder type of tts task.
--voc_config VOC_CONFIG
Config of voc.
--voc_ckpt VOC_CKPT Checkpoint file of voc.
--voc_stat VOC_STAT mean and standard deviation used to normalize
spectrogram when training voc.
--ngpu NGPU if ngpu == 0, use cpu.
--test_metadata TEST_METADATA
test metadata.
--output_dir OUTPUT_DIR
output dir.
--speech_stretchs mel min and max values file.
```

## Pretrained Model
Pretrained DiffSinger model:
- [diffsinger_opencpop_ckpt_1.4.0.zip](https://paddlespeech.bj.bcebos.com/t2s/svs/opencpop/diffsinger_opencpop_ckpt_1.4.0.zip)


The DiffSinger checkpoint contains the files listed below.
```text
diffsinger_opencpop_ckpt_1.4.0.zip
├── default.yaml # default config used to train diffsinger
├── energy_stats.npy # statistics used to normalize energy when training diffsinger, if normalization is needed
├── phone_id_map.txt # phone vocabulary file used when training diffsinger
├── pitch_stats.npy # statistics used to normalize pitch when training diffsinger, if normalization is needed
├── snapshot_iter_160000.pdz # model parameters and optimizer states
├── speech_stats.npy # statistics used to normalize the spectrogram when training diffsinger
└── speech_stretchs.npy # min and max values used to stretch the mel spectrogram before the diffusion module

```
At present, the text frontend is not yet complete, so synthesizing audio via `synthesize_e2e` is not supported. Use `synthesize` instead.