Purpose
Utilizing state-level alignment labels allows us to copy the prosody from one speaker and use it with another speaker’s acoustic model. This can improve the synthesized results by combining prosody from natural speech with phone features from an HMM-based acoustic model. Moreover, since this technique can create phone-aligned parallel sentences from different acoustic models, we can also use it to generate comparable sentences, in which the quality of the vocoders or of the acoustic features in the training data can be compared separately from the duration models.
Steps
The basic steps to get the state-level alignment are listed below:
1. Train HTS systems for both data sets.
2. Add the -f parameter to the HSMMAlign call in the “forced alignment for no-silent GV” step. The modified call should look like this:
shell("$HSMMAlign -H $monommf{'cmp'} -N $monommf{'dur'} -m $gvfaldir -f $lst{'mon'} $lst{'mon'}");
3. Run the above step. The state-level alignment result will be stored in gv/qst001/ver1/fal (the question set and version number may differ depending on your training configuration). This folder contains the monophone state-level alignment labels for all sentences in the training data.
4. Convert the monophone state-level alignment labels into full-context state-level alignment labels. You should already have the monophone and full-context labels for all of the training sentences, so the conversion simply replaces each monophone with the corresponding full-context label. I have written a small Perl script to do the job; a sanity-check sketch for the alignment output follows the script. (In the script, $monoDir, $fullDir, $monoStateDir and $fullStateDir are placeholders for the monophone and full-context label folders, the monophone state-level alignment label folder, and the output full-context state-level alignment label folder; set them to your own paths.)
use strict;
use warnings;
use File::Basename;

my $monoDir      = "mono";        # folder containing all phone-aligned monophone labels
my $fullDir      = "full";        # folder containing all phone-aligned full-context labels
my $monoStateDir = "mono_state";  # folder containing all state-aligned monophone labels
my $fullStateDir = "full_state";  # output folder for the state-aligned full-context labels
my $STATES       = 5;             # number of emitting HMM states per phone

mkdir($fullStateDir);

opendir(DIR, $monoStateDir) or die $!;
foreach my $monoStateFile (readdir(DIR)) {
    if ($monoStateFile =~ /\.lab$/) {
        my ($base, $dirs, $suffix) = fileparse($monoStateFile, qr/\.[^.]*/);
        chomp($base);
        print "Processing $base\n";

        $monoStateFile    = "$monoStateDir/$base.lab";
        my $monoFile      = "$monoDir/$base.lab";
        my $fullFile      = "$fullDir/$base.lab";
        my $fullStateFile = "$fullStateDir/$base.lab";

        open(MONO, $monoFile) or die "Cannot open $monoFile: $!";
        my @monoLines = <MONO>;
        close(MONO);

        open(FULL, $fullFile) or die "Cannot open $fullFile: $!";
        my @fullLines = <FULL>;
        close(FULL);

        open(MONO_STATE, $monoStateFile) or die "Cannot open $monoStateFile: $!";
        my @monoStateLines = <MONO_STATE>;
        close(MONO_STATE);

        open(FULL_STATE, ">$fullStateFile") or die "Cannot open $fullStateFile: $!";
        for (my $i = 0; $i < scalar(@monoLines); $i++) {
            my $monoLine = $monoLines[$i];
            chomp($monoLine);
            my $fullLine = $fullLines[$i];
            chomp($fullLine);

            my @monoTokens = split(/\s+/, $monoLine);
            my @fullTokens = split(/\s+/, $fullLine);

            # phone name field in the phone-aligned labels;
            # adjust the index if your label format differs
            my $monoPhone = $monoTokens[3];
            my $fullPhone = $fullTokens[3];

            # each phone-level line corresponds to $STATES state-level lines
            for (my $j = 0; $j < $STATES; $j++) {
                my $monoStateLine = $monoStateLines[$i * $STATES + $j];
                chomp($monoStateLine);
                # replace the monophone name with the full-context label
                # (\Q...\E protects any regex metacharacters in the phone name)
                $monoStateLine =~ s/\Q$monoPhone\E/$fullPhone/g;
                print FULL_STATE "$monoStateLine\n";
            }
        }
        close(FULL_STATE);
    }
}
closedir(DIR);
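As a quick check before running the conversion, it helps to verify that the alignment output from step 3 is consistent with the phone-level labels, i.e. that each phone-level line has exactly $STATES state-level lines; otherwise the index arithmetic in the script above will go wrong. The following is a minimal sanity-check sketch (not part of the HTS scripts), assuming the same folder names as in the conversion script:

use strict;
use warnings;
use File::Basename;

my $monoDir      = "mono";        # phone-aligned monophone labels
my $monoStateDir = "mono_state";  # state-aligned monophone labels from HSMMAlign
my $STATES       = 5;             # emitting states per phone, as in the script above

# count the lines in a label file
sub countLines {
    my ($path) = @_;
    open(my $fh, "<", $path) or die "Cannot open $path: $!";
    my $n = 0;
    $n++ while <$fh>;
    close($fh);
    return $n;
}

opendir(my $dh, $monoStateDir) or die $!;
foreach my $file (sort readdir($dh)) {
    next unless $file =~ /\.lab$/;
    my ($base) = fileparse($file, qr/\.[^.]*/);
    my $phoneLines = countLines("$monoDir/$base.lab");
    my $stateLines = countLines("$monoStateDir/$base.lab");
    if ($stateLines != $phoneLines * $STATES) {
        print "Mismatch in $base: $phoneLines phone lines vs $stateLines state lines\n";
    }
}
closedir($dh);

Any sentence reported here should be inspected before converting it.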
Then, to use these labels inside another HTS system, you will have to do a few more steps:
1. Create all the unseen models for the sentences to be generated. The unseen models are created based on the labels in the full_all.list file, so we first need to make sure this file contains all the full-context labels used by our synthesis script. The easiest way is to copy the full-context labels from the source data set to the generation folder of the target data set and then run the Makefile to recreate gen.scp and full_all.list. After that, we can run all the unseen-model-making steps in the training script.
2. Change the gen.scp file so that it points to the state-level alignment labels (a small sketch for this step is given after this list).
3. Change the HMGenS call in the Training.pl script to use state-level alignment labels by adding the -s parameter and removing the duration model:
shell("$HMGenS -c $pgtype -H $rclammf{'cmp'}.$mix -N $rclammf{'dur'}.$mix -M $dir -s $tiedlst{'cmp'}");
4. Limit the $pgtype parameter to 0 and 1 only (remove 2). $pgtype = 2 (both the state and mixture sequences are hidden) will not work with state-level alignment labels.
5. Run the speech parameter generation and waveform synthesis steps again.
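For step 2 above, gen.scp is simply a list of label file paths, one per sentence to be generated, so pointing it at the converted labels amounts to rewriting each entry to the folder that holds the state-aligned full-context labels. Below is a minimal sketch of such a rewrite; the gen.scp path and the full_state folder name are assumptions that depend on your setup, so adjust them accordingly:

use strict;
use warnings;
use File::Basename;

my $genScp       = "gen.scp";     # assumed path to the generation list used by HMGenS
my $fullStateDir = "full_state";  # assumed folder with state-aligned full-context labels

open(my $in, "<", $genScp) or die "Cannot open $genScp: $!";
my @entries = <$in>;
close($in);

open(my $out, ">", $genScp) or die "Cannot write $genScp: $!";
foreach my $entry (@entries) {
    chomp($entry);
    next if $entry =~ /^\s*$/;            # skip blank lines
    my $base = basename($entry);          # keep the file name, drop the old folder
    print $out "$fullStateDir/$base\n";   # point the entry at the state-level label
}
close($out);

It is worth keeping a backup of the original gen.scp in case you want to go back to the normal phone-level generation labels.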
Conclusion
Support for generating speech from state-level alignment labels gives us very fine-grained control over the prosody of the speech generated from the HTS system. Currently, I still have some problems with the quality of the generated state-level alignment labels, but I hope to fix them soon and make more use of this feature in the future.
Comments
Hello,
In the Perl file given above, what is the value of $STATES? Can you please tell me?
Sorry for the late reply; I was overseas for the last couple of days.
$STATES here is the number of emitting HMM states for each phone (excluding the non-emitting entry and exit states). In the current HTS demo for English, $STATES = 5.