Speech:Spring 2017 Nicholas Bielinski Log



Week Ending February 7th, 2017
2/2 - First time logging into the Wiki page using the accounts created for us. I am in the experiment group, so I decided to go look at past years' teams and see what they did. We also discussed how we will go about doing our proposal.
 * Task:

2/4 - Logged in to check logs.

2/6 - Logged in to check logs.

2/7 - Start working on our rough draft proposal for the experiment group.


 * Results:

2/2 - We will be working on our proposal over the weekend. We will be communicating through Slack, or another means like Google Hangouts for voice chat.

2/4 - Logged in to check logs.

2/6 - Logged in to check logs.

2/7 - We worked on a draft proposal that we put on the 2017 group proposal page.


 * Plan:

2/2 - Our plan is to communicate through Slack channels as a group (Experiment) and as a wider class in the general section.

2/4 - Logged in to check logs.

2/6 - Logged in to check logs.

2/7 - Our plan is to look at all scripts, see how they work and look at their relevancy. Then we want to update scripts if it is needed, and create additional scripts if we think they could be a benefit to the project.


 * Concerns:

2/2 - Our main concern at this point is that we are just starting out in the class, and we need to get a better handle on what exactly we are doing.

2/4 - Logged in to check logs.

2/6 - Logged in to check logs.

2/7 - Our main concern right now is whether our proposal puts us in the right direction or not. We are still testing the waters on our exact role and our understanding of what needs to be done in our group.

Week Ending February 14, 2017

 * Task:

2/8 - Today we reconvened and sorted out what we are going to be doing for the next week, and also discussed what we wanted to do that day. We ended up working more on our rough draft, adding more to the goals based off of what Professor Jonas had written on the board in class. I personally went back and forth between helping on the rough draft and updating addExp.pl. Those were our two tasks for today.

2/11 - Checked in to check logs.

2/14 - Checked in to check logs.

2/14 - I updated a little bit more of our rough draft. I thought about when we should have certain tasks done. The implementation timeline isn't quite filled out; I'm not sure who I should assign what to. That is something we will have to sort out later.


 * Results:

2/8 - Our results for updating our rough draft went ok I think. We still need to settle on when we want to complete our other goals for the year, but we can do that for the final version, and expand on the overview a little bit more I think. Updating the addExp.pl script was fairly easy. Since we are only allowed to log in to the server using our Active Directory account, we made sure that was now the default for when people were creating an experiment. Now instead of defaulting to WILDCAT, the code will default to AD. We no longer allow another domain to even be entered.

$domain = 'AD';
print "please enter your username -> ". $domain. "/";
$cred = <>;
chomp $cred;
$user = $cred;

print "Credentials: ". $domain. "/" . $user. "\n";
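The same prompt behavior can be sketched as a standalone shell snippet (this is an illustration, not the real addExp.pl; the username "jdoe" is made up and canned here instead of typed at the prompt):

```shell
# Sketch of the new behavior: the domain is fixed to AD, and the user
# only supplies the username after the prompt.
domain="AD"
printf 'please enter your username -> %s/' "$domain"
read -r user <<EOF
jdoe
EOF
echo ""
echo "Credentials: $domain/$user"
```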

2/11 - Checked in to check logs.

2/14 - Checked in to check logs.

2/14 - The result of me working on the rough draft proposal is that we now have a more complete proposal. The implementation guideline, I think, is close enough to what we can actually accomplish in the timeframe allotted. The implementation tasks are a different story. We will have to decide in our group tomorrow who exactly is going to do what and when. I think the bigger piece we have at the end, the script we will need to create to copy experiments from one directory to another, will be a group effort. I think it would be good for the four of us to work on it together so we all get a good understanding of Perl, and so we can all help each other out. It sounds difficult, but I'm not sure how bad it will actually be. I just don't have much personal experience with Perl, so I'm unsure.


 * Plan:

2/8 - Our plan for the rough draft was initially to read through last year's group and see what it was they did. We came to the conclusion that they updated scripts, created new scripts, archived old scripts, and updated the wiki page to correlate with their changes. So we based our rough draft off of their work from last year, and off of what Professor Jonas wanted us to accomplish this year. As for updating addExp.pl, the problem wasn't too difficult. We all had the same idea, so we didn't have to do too much research or thinking on solving that issue.

2/11 - Checked in to check logs.

2/14 - Checked in to check logs.

2/14 - Our plan right now is to try to stick to the implementation guideline. I think it is close enough to something that is viable for us to do. I put some of the things that I think will be easier for us to accomplish sooner, and the bigger things farther back in the timeline.
 * Concerns:

2/8 - As of right now we don't have too many concerns. We were able to test our script to make sure it worked fine. I guess for me personally I would like to get better at working with Emacs on the server. I'm not used to using a text editor like that, or working with the file right off the server. I also will need to learn more Perl in order to complete the other goals we want to accomplish for the semester. I didn't need a lot of Perl knowledge in order to understand what we changed today in addExp.pl, but in the future that will not be the case.

2/11 - Checked in to check logs.

2/14 - Checked in to check logs.

2/14 - My only concern right now is if everyone else agrees with me that this is the way to go and that we can accomplish this in time. Also eventually we need to go through all the most common scripts, and then hopefully all the scripts, so we can get a better understanding of what scripts we have, what they do, and what can be upgraded or fixed. As the school upgrades its technology/servers, we have to make sure that our scripts also keep up with that progression.

Week Ending February 21, 2017
2/15 - Today we reconvened and worked more on our rough draft proposal in order to get it closer to complete for next week's final draft. We also discussed what we were going to do going forward, and who exactly was going to do what parts of which scripts.
 * Task:

2/17 - Signed in to check logs

2/19 - Signed in to check logs

2/20 - Today I worked to finish our final proposal. I worked with Tucker and applied his suggestions to the final proposal so that our proposal, and the entire class's proposals, would be more in sync with each other.


 * Results:

2/15 - We decided that we needed to do a full experiment and see how the scripts work currently. We need to look at the makeTrain.pl script and MakeTest.pl script (which really should be called makeDecode.pl). We need to go through the process and find out why something is being used and if there are any improvements. If something is not being used, we need to figure out why it is not being used, and again, figure out what can be done to improve it. We also discussed making a copyTrain and copyDecode, for people that want to easily copy a previous year's experiment setup to a different directory on the server. We also discussed looking at how the language model is created currently and whether we can ease that process with making the current script better, or combining multiple scripts.

2/17 - Signed in to check logs

2/19 - Signed in to check logs

2/20 - I ended up getting rid of the assignment of tasks to each individual person and just going with a timeline of when we want to get certain tasks done. This was something that Tucker wanted everyone to do so that our proposal would be more in sync with the others. I also filled out our timeline better when compared to what it was before. I added more to our goals when talking about future scripts we will be working on (makeTrain.pl, makeTest.pl (makeDecode.pl), copyTrain.pl and copyDecode.pl). I expanded more on what we needed to do after we finished the scripts. I plan on finishing the scripts as early as possible so that they are more useful to the rest of the group. After we do the scripts, we will need to update the wiki page to make sure it is in line with what the scripts currently do, or add additional pages for when we create additional scripts.


 * Plan:

2/15 - Our plan was to go through and see how to make an experiment fully and see what parts need to be upgraded or changed. We also assigned Zack to work on the makeTest.pl script and see what changes can be made. I'll be working on the makeTrain.pl script to see what changes can be made. Cody was working on the addExp.pl script to make it force a 001 sub experiment when you create a root experiment. Jake and I will probably go through the creation of an experiment together.

2/17 - Signed in to check logs

2/19 - Signed in to check logs

2/20 - My plan going into this was just to follow what Tucker was saying, and what was suggested by Professor Jonas in class, about making our proposals come from one voice rather than many.
 * Concerns:

2/15 - My concern this week is that we haven't created a full experiment yet, so we are hoping it goes well and that we can get some valuable information out of it so we can go forward with making the scripts easier for everyone to use for future experiments.

2/17 - Signed in to check logs

2/19 - Signed in to check logs

2/20 - I don't have any real concerns about our final proposal. I think it is much better than what it was before we started working on the proposals.

Week Ending February 28, 2017

 * Task:

2/22 - Today I ran my first experiment all the way through (train, language model, and then decode). Running the train seemed fairly easy. The wiki was up to date and led you through the process well. The language model was probably the opposite. It hasn't been updated since 2015, and really not since 2014. Some of the examples are incorrect because they point to a bad directory, or they talk about a script that is no longer in use. The decode section went fine; the wiki page was pretty solid for that part.

2/25 - Logged in to check logs.

2/26 - Logged in to check logs.

2/28 - My task was to look at how the cfg files in Sphinx were initially edited to include the current directory of the files produced.


 * Results:

2/22 - I successfully completed the experiment and ended up with these results:

|                              hyp.trans                              |
|---------------------------------------------------------------------|
| SPKR    | # Snt  # Wrd | Corr    Sub    Del    Ins    Err    S.Err  |
|=====================================================================|
| Sum/Avg | 1000   14264 | 63.4   27.3    9.3    8.8   45.4    92.3   |
|=====================================================================|
| Mean    |  5.7    82.0 | 63.9   27.9    8.2   11.0   47.1    92.9   |
| S.D.    |  2.9    47.9 | 13.3   11.5    5.6   11.8   17.0    12.6   |
| Median  |  5.0    78.0 | 64.9   26.7    7.9    7.1   45.6   100.0   |
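As a quick sanity check on the Sum/Avg row (assuming these are the usual sclite column semantics, where Corr/Sub/Del are percentages of reference words and the word error rate Err is Sub + Del + Ins):

```shell
# Corr + Sub + Del should account for all reference words (100%),
# and Err should equal Sub + Del + Ins.
awk 'BEGIN { printf "Corr+Sub+Del = %.1f\n", 63.4 + 27.3 + 9.3 }'
awk 'BEGIN { printf "Err = %.1f\n", 27.3 + 9.3 + 8.8 }'
```

Both identities hold for the table above, which is a good sign the run completed cleanly.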

2/25 - Logged in to check logs.

2/26 - Logged in to check logs.

2/28 - When a train is built, it creates the directory structure and the files needed to run the train. Professor Jonas told us that it wasn't as easy as just copying files from one directory to another, and we believe that we found the reason for that. The cfg files that are created have hardcoded directories so Sphinx knows where to work out of. In order for us to create a successful copyTrain, we will need to know how to edit that cfg file using a script, in order to account for the changed directory. If we do not do so, I think, as Professor Jonas alluded to, it would run the scripts in the new directory, but would overwrite the files in the old one it is still linked to. Because of this, I am working on creating a script that will be able to edit that configuration file to update it to the newest location.


 * Plan:

2/22 - I am not sure what all of the results mean, but I plan on finding out and writing about what each column means sometime this week before we come back to class.

2/25 - Logged in to check logs.

2/26 - Logged in to check logs.

2/28 - I have started writing a script that will edit the configuration file. I will test it locally first to see if I can move a single file from one directory to another and change the directory so it is up to date with the new directory. I think that will be a good starting point. Jake is working on moving the files we need to move with copyTrain, so we will be able to combine them.
 * Concerns:

2/22 - My only concern right now is that, now that I have run a full experiment, I need to find out exactly what is needed from copyTrain and copyDecode.

2/25 - Logged in to check logs.

2/26 - Logged in to check logs.

2/28 - No real concerns right now. I should be able to create that section of the script, I'm just not sure how challenging it is going to be.

Week Ending March 7, 2017

 * Task:

3/1 - Our task for this week is to try to get a copy script as close to being done as possible.

3/4 - Logged in to check logs

3/5 - Logged in to check logs

3/6 - Jake and I were continuing to work on the copyExp.pl script


 * Results:

3/1 - We are pretty close right now. We copy one directory to another, and change the file paths in the configuration files. The biggest challenge, I think, is going to be grabbing only train files for copyTrain and only decode files for copyDecode, because we have to go look through the directory structure and see what gets changed as we go through the process. We are going to have just one copyExperiment script, with flags for doing the individual parts, or all if you just want to copy the whole experiment over. Originally I had started doing it in Perl, but Jake said it would be easier to use Unix commands instead. So we looked at how the configuration files were changed in makeTrain, and found that they use "sed -i", which is a Unix command. We ended up using that command along with a regex to match the source path and replace it with the destination path.

$cmd = "sed -i s/$src/$dest/ $dest/etc/sphinx_train.cfg";
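A toy reproduction of that sed step (the path and cfg line are made up, not the real experiment tree). One thing worth noting: real experiment paths contain slashes, which would break the bare s/$src/$dest/ form, so this sketch uses | as the sed delimiter instead (GNU sed assumed for -i):

```shell
# Build a throwaway cfg with a hardcoded path, then rewrite it the way
# the script does after copying an experiment.
cd "$(mktemp -d)"
src="0296/001"
dest="0296/006"
mkdir -p etc
printf '$CFG_BASE_DIR = "/mnt/main/Exp/%s";\n' "$src" > etc/sphinx_train.cfg
sed -i "s|$src|$dest|" etc/sphinx_train.cfg   # | delimiter tolerates slashes
cat etc/sphinx_train.cfg
```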

3/4 - Logged in to check logs

3/5 - Logged in to check logs

3/6 - We were able to change the sphinx_train.cfg file so that the db name and the directory were updated to the path they were being transferred to. However, the sphinx_decode.cfg file used single quotes instead of double quotes for some reason, so we couldn't just copy and paste our change. It took us quite a while to get the sphinx_train part working, so we are going to pick it up in the morning and do more work on the decode part.
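A toy illustration of the quoting difference (the file contents here are made up): because the decode cfg single-quotes its paths, a pattern anchored on double quotes would miss it, but matching on the path alone sidesteps the quoting entirely:

```shell
# Fake sphinx_decode.cfg line with single-quoted path.
cd "$(mktemp -d)"
cat > sphinx_decode.cfg <<'EOF'
$DEC_CFG_BASE_DIR = '/mnt/main/Exp/0296/001';
EOF
# The substitution targets only the path, so the quote style doesn't matter.
sed -i 's|0296/001|0296/006|' sphinx_decode.cfg
cat sphinx_decode.cfg
```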


 * Plan:

3/1 - My plan is to keep working with Jake in getting a nearly completed script before next class, as long as time permits.

3/4 - Logged in to check logs

3/5 - Logged in to check logs

3/6 - Our plan is to further work on the decode issue tomorrow.
 * Concerns:

3/1 - My major concern, as stated earlier, was just grabbing the correct files and copying them over. I think it will be more annoying than difficult to check them all off initially.

3/4 - Logged in to check logs

3/5 - Logged in to check logs

3/6 - Since we figured out how to do the sphinx_train part, hopefully the double-quote-to-single-quote adjustment won't be too bad.

Week Ending March 21, 2017
3/8 - Jake and I wanted to start working on fixing the file names for when you transfer train files from a source to a destination.
 * Task:

3/17 - Signed in to check logs over vacation

3/18 - Logged in to check logs.

3/21 - Jake and I are continuing to work on renaming the files from one subdirectory number to another.


 * Results:

3/8 - We need to try to find the best way possible, using either Unix commands or Perl code, to change the file and folder names. For example, if we have 6 files named like 001_xxxxx.xxx, we don't want to hardcode every file. We would rather use a command or regex to look for each file that starts with the end of the source directory (e.g. mnt/main/Exp/0296/001) and then change it to the end of the destination directory (e.g. mnt/main/Exp/0296/006). We tried different things like the find command, the ls command, etc. to see if we could first pick out only the files we wanted. We didn't really have much success, so it is definitely something that we are going to have to keep working at.

3/17 - Signed in to check logs over vacation

3/18 - Logged in to check logs.

3/21 - I was hoping that this would be quick, because I got an easy one-liner working on my Ubuntu virtual machine.

find . -depth -execdir rename 's/001/006/g' '{}' \;

That line did exactly what we wanted. It went through the directory it was run in (you could specify a directory path after find), and every instance of 001 was replaced with 006. Unfortunately, this did not work on caesar. We could run the command, but it would not do anything. It gave no errors; it just changed nothing.
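One plausible explanation (an assumption, not something we confirmed): `rename` ships in two incompatible flavors, the Perl version that takes an s/// expression and the util-linux version that takes plain from/to strings, and the latter silently ignores the s/// form. A fallback sketch that avoids `rename` entirely, shown on a toy directory:

```shell
# Toy tree: one subdirectory and one file, both carrying "001" in the name.
cd "$(mktemp -d)"
mkdir -p 001 && touch 001/001_train.cfg
# -depth renames files before their parent directory, so the paths stay valid.
find . -depth -name '*001*' | while read -r p; do
  mv "$p" "$(dirname "$p")/$(basename "$p" | sed 's/001/006/g')"
done
ls 006
```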


 * Plan:

3/8 - We will keep working on finding the right solution so that we can go through a subdirectory and change all files and folders that need to be changed. We do not want to do it the lazy way and manually put in each file name. That won't be scalable in the future: if another file gets added, you would have to keep going back into the script to change it.

3/17 - Signed in to check logs over vacation

3/18 - Logged in to check logs.

3/21 - We are starting on a new plan: hopefully we will either be able to figure out how to do it in Perl correctly, or try using the mv command, finding the patterns needed and renaming them.
 * Concerns:

3/8 - My only concern right now is just making sure we can get this working.

3/17 - Signed in to check logs over vacation

3/18 - Logged in to check logs.

3/21 - My only concern is that we haven't been able to figure out how to do it on caesar yet. If we start to get some results, like at least updating all files or all folders (or both) then I will obviously be less concerned.

Week Ending March 28, 2017

 * Task:

3/23 - Our task was to complete changing the source subdirectory files and folders to the destination subdirectory.

3/24 - Logged in to check logs.

3/25 - Logged in to check logs.

3/28 - Jake and I are still working on finishing off the script


 * Results:

3/23 - Jake and I were finally able to come up with a solution that worked on the server. It was not too different from a solution we had come up with before that worked on Ubuntu; the only difference between my previous attempt and Jake's final solution is the bash and -- arguments. Basically, it looks in the specified directory (after everything has been copied over) using the $dest variable. Then it looks for all names that contain sourceSub (i.e. 001); the asterisks on both sides are wildcards, so it will pick up any name containing that pattern. The -type f and -type d flags select files and directories respectively; we could not get a single command to do both at the same time. Then we execute a search and replace using bash. This has given us the results we wanted and expected.

$cmd = "find $dest -name \'*$sourceSub*\' -type f -exec bash -c \'mv \"\$1\" \"\${1/$sourceSub/$destSub}\"\' -- {} \\;";
$cmd = "find $dest -name \'*$sourceSub*\' -type d -exec bash -c \'mv \"\$1\" \"\${1/$sourceSub/$destSub}\"\' -- {} \\;";
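The "${1/old/new}" inside the bash -c is plain bash parameter substitution, and a single slash replaces only the first match. A toy call (with a made-up path) shows what one invocation does to a path carrying the subdirectory number twice:

```shell
# Only the first "001" is rewritten; the one in the file name is left alone,
# which is why separate file (-type f) and directory (-type d) passes are run.
bash -c 'echo "${1/001/006}"' -- /mnt/main/Exp/0296/001/001_train.cfg
```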

3/24 - Logged in to check logs.

3/25 - Logged in to check logs.

3/28 - I added in the parts that need to get copied over when you run -d for decode, -t for train, or -a for all. I still have to look at whether we are inadvertently changing some of the file names. I think I found a way to handle it that would work with how we are finding and changing file and directory names. I found this online:

$ find . -type f \( -iname ".*" ! -iname ".htaccess" \)

Basically, what comes after the ! gets excluded from the search, which is what we are looking for.
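For excluding whole folders like wav/ rather than individual names, one alternative sketch (an assumption about the approach, not what the script currently does) is find's -prune, shown on a toy layout:

```shell
# Toy layout: a real target file under 001/ and a decoy under wav/.
cd "$(mktemp -d)"
mkdir -p 001 wav && touch 001/001_a.cfg wav/001_b.wav
# Prune wav/ and feats/ so nothing inside them is ever matched.
find . \( -name wav -o -name feats \) -prune -o -name '*001*' -type f -print
```

Only 001/001_a.cfg is printed; the wav file is never considered, so its name can't be mangled.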


 * Plan:

3/23 - Our plan now is to run every part of our script and make sure there are no bugs. We would like this to be ready by next class. If that is not possible, then two weeks max, I think. As long as we have done everything needed, and haven't missed anything, we think we should be close to completion. We need to make it work with flags, depending on whether you want to move just train (-t), decode (-d) or all (-a).

3/24 - Logged in to check logs.

3/25 - Logged in to check logs.

3/28 - My plan is to finish the script in class. If not, then by the end of next week.
 * Concerns:

3/23 - One concern we have is that in the wav folder, some files may be getting changed unintentionally. For instance, if some of the files in the wav folder have a 001 anywhere in them, we think it will be switched to, let's say, 006. We have not looked into it too much, but we think that may be an area of concern. If it is in fact doing just that, then we will need to exclude the wav folder, and probably the feats folder. These folders, if I'm correct, just hold information that is needed for Sphinx, so it is not necessary to change any subdirectory numbers in them.

3/24 - Logged in to check logs.

3/25 - Logged in to check logs.

3/28 - I don't have any real major concerns right now.

Week Ending April 4, 2017
3/30 - Logged in to check logs
 * Task:

4/1 - Logged in to check logs

4/3 - I am writing about the script we did to put up on the media wiki under the scripts page section.

4/4 - I am doing the final testing for the script, and hopefully I will be able to say it was a success. Also I will be adding my piece about the script to the media wiki page.


 * Results:

3/30 - Logged in to check logs

4/1 - Logged in to check logs

4/3 - I'm going to copy, formatting-wise, what other people have done in the past. I gave an overall description of what the script does, and I plan on breaking it down a little more to give more detail on what each part does, whether it is Perl or Unix, etc. I want to make this as clear and concise as possible, so that it is more helpful. I've read documentation for other scripts before, and I didn't think it was particularly helpful. It would give a brief overview of what a script is, but it wouldn't break down the code, nor did the code have comments. Obviously this makes it harder to get off the ground quickly when you are first starting the Capstone class.

4/4 - First, Jake figured out how to make a page on the media wiki. I was able to just copy and paste what I had previously written and post it on the media wiki. We now have a description of what our script does, and hopefully it will be useful for people in the future. Secondly, I have done more testing with our script. I still think we are all set with it not accidentally overwriting previous files. I also copied over an old experiment's train files and ran a successful decode off of that. I am going to do a fresh train and then copy that over to another experiment and try a decode again, just to make sure.


 * Plan:

3/30 - Logged in to check logs

4/1 - Logged in to check logs

4/3 - My plan right now is to find out how to actually add a page to the wiki so I can post it. I've only ever made pages before through the scripts.

4/4 - My plan is to try one last test to make sure everything is all set and that we can say our script is officially finished, as long as nothing else comes up. I think our next step would be to look at makeDecode and see what that does and find out why it is not being used.


 * Concerns:

3/30 - Logged in to check logs

4/1 - Logged in to check logs

4/3 - My concern is basically tied to my plan: I'm concerned that I don't know how to create the page.

4/4 - I don't really have concerns right now. As long as my final test goes as planned I will be fine.

Week Ending April 11, 2017

 * Task:

4/6 - Logged in to check logs

4/8 - Logged in to check logs

4/10 - Today I wanted to finish the copyExp.pl media wiki page to make it more helpful for future semesters working on the project.

4/11 - Started to look at makeTest.pl to see what we should do about it


 * Results:

4/6 - Logged in to check logs

4/8 - Logged in to check logs

4/10 - I went through the script's code that we put up on the page and deleted some unnecessary pieces that we had left over. I also added some more comments in the code so people can quickly get a general overview of what each section does. Since I know how annoying it is having little documentation from previous years on what a piece of code does, I wanted to make ours as descriptive as possible. I talked in general about what each piece does and then went into more detail about why we did certain things in the script and what exactly each section does. For example, I talked about why we needed to change the hardcoded paths in the configuration files, and I explained how we did it and why we did it this way. Hopefully this will be good information for future capstone teams when they need to do something similar.

4/11 - I started to look at makeTest.pl. It seems fairly similar to our copyExp.pl script. It uses the same kind of directory and file transferring that we do when we move something from one directory to another. This line, for example, doesn't transfer a file, but it accesses directories like we do in our script:

chdir("$EXP_DIR/$DEST") or die("Bad destination: $!");
 * 1) Now go to where the destination experiment resides.

They use the same technique for flags and arguments when you run the script. I can't remember if we looked back on previous years and that is why we do the same exact thing. I'm not sure if there is more than one way to do it in Perl, but regardless we handle arguments and flags the same way. Going farther down, it looks like it copies folders and files from when you make your train at the beginning. We have to look into that more to find out why it is doing that. The top of the script states:

Prepares files for a decode from the source, with the files generated being put in the destination. Except for trans (which can either be a filepath or a "flag" by starting the parameter with a dash), all of the arguments are filepaths, with source and destination referring to experiment directories, and the trans and corpus referring to the corpus directories. If you choose to use a flag for trans, then these flags will select the .trans file to use to create the language model and fileids:

It never mentions copying over train files. All in all, I understand most of the code; it isn't too dissimilar from what Jake and I did. We just have to figure out the logic behind it more to make a better determination as to where we should go next.


 * Plan:

4/6 - Logged in to check logs

4/8 - Logged in to check logs

4/10 - As I have stated in class, I plan on looking more into makeTest, which should be called makeDecode, and figuring out why it is not being used in the decoding process. Whether it does exactly what the wiki page says and just is not actually part of the documented process, or whether it does not work correctly, I do not know. Hopefully tomorrow I will have a report done on what needs to be fixed or done with it.

4/11 - We still have to get a better understanding of where the script stands in the experiment process.
 * Concerns:

4/6 - Logged in to check logs

4/8 - Logged in to check logs

4/10 - I don't have any real concerns at this point in time.

4/11 - No concerns at this moment in time

Week Ending April 18, 2017

 * Task:

4/14 - Logged in to check logs.

4/15 - Logged in to check logs.

4/17 - I was going to look at some changes I could make to the Sphinx configuration files to see if they could produce better results for the Rebels team.

4/18 - Start work on making the makeTest and makeTrain script into a single makeExperiment script


 * Results:

4/14 - Logged in to check logs.

4/15 - Logged in to check logs.

4/17 - I looked more into this link that was posted in our group Slack channel: http://www.speech.cs.cmu.edu/sphinx/tutorial.html. I was looking specifically at the section that suggests different values for training.

$CFG_FINAL_NUM_DENSITIES = if you are training semi-continuous models, set this number, as well as $CFG_INITIAL_NUM_DENSITIES, to 256. For continuous, set $CFG_INITIAL_NUM_DENSITIES to 1 and $CFG_FINAL_NUM_DENSITIES to any number from 1 to 8. Going beyond 8 is not advised because of the small training data set you have been provided with.

$CFG_N_TIED_STATES = set this number to any value between 500 and 2500. This variable allows you to specify the total number of shared state distributions in your final set of trained HMMs (your acoustic models). States are shared to overcome problems of data insufficiency for any state of any HMM. The sharing is done in such a way as to preserve the "individuality" of each HMM, in that only the states with the most similar distributions are tied. The $CFG_N_TIED_STATES parameter controls the degree of tying. If it is small, a larger number of possibly dissimilar states may be tied, causing reduction in recognition performance. On the other hand, if this parameter is too large, there may be insufficient data to learn the parameters of the Gaussian mixtures for all tied states.

$CFG_CONVERGENCE_RATIO = set this to a number between 0.1 to 0.001.

$CFG_NITER = set this to an integer number between 5 to 15. This limits the number of iterations of Baum-Welch to the value of $CFG_NITER.
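Put together, a hypothetical sphinx_train.cfg fragment with values inside those suggested ranges (for continuous models) might look like this. These values are illustrations picked from the tutorial's ranges, not what our trains actually use:

```shell
# Print a candidate cfg fragment; every value sits inside the tutorial's
# suggested range for continuous models.
cfg='$CFG_INITIAL_NUM_DENSITIES = 1;
$CFG_FINAL_NUM_DENSITIES = 8;
$CFG_N_TIED_STATES = 1000;
$CFG_CONVERGENCE_RATIO = 0.04;
$CFG_NITER = 10;'
printf '%s\n' "$cfg"
```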

4/18 - As of right now, we are planning on combining the two scripts and using flags to determine whether you want to create the train, the decode, or an entire experiment. Jake and I decided that instead of having a really long command line argument that is more likely to be inputted incorrectly, we would ask for inputs during the process: things such as switchboard hour size, etc. It might also be a good idea to have a default settings option. For instance, if I just want a standard 30hr train, the default setting could provide the necessary input so that you could just run the script and walk away from the terminal.


 * Plan:

4/14 - Logged in to check logs.

4/15 - Logged in to check logs.

4/17 - I have to figure out more as to what those different values do, and if they would affect what I am doing with trying to get a better result for the group.

4/18 - Our plan is to continue working on the makeExperiment script. We will probably also have to add a part of the script to make the language model. Right now I don't think we are using a script to create the language model directory automatically.


 * Concerns:

4/14 - Logged in to check logs.

4/15 - Logged in to check logs.

4/17 - No concerns right now

4/18 - No concerns

Week Ending April 25, 2017

 * Task:

4/22 - Logged in to check logs

4/23 - Logged in to check logs

4/24 - Finish the makeTrain section of the makeExp script

4/25 - Keep working on makeExp script


 * Results:

4/22 - Logged in to check logs

4/23 - Logged in to check logs

4/24 - I have not tested the script yet, but I think what I have should work. It basically just follows the steps we have for creating a train, but does it automatically. For example one piece creates the directory for you.

print "Please enter the destination you want your experiment to be in: \n";
$directory = <>;
chomp $directory;
$cmd = "mkdir $directory";
system($cmd);
 * 1) makes the directory at the requested spot

This part of the script has the user enter where they would like the folder to be created, and then uses the system function in Perl to execute the Unix command mkdir. I will hopefully be able to test it either tomorrow or Wednesday in class to see if it is working correctly.
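The same step can be sketched as a shell stand-in (hypothetical, not the real script; the answer is canned here instead of typed at the prompt):

```shell
# Prompt for a destination, read it, and create the directory.
answer="$(mktemp -d)/exp_demo"
printf 'Please enter the destination you want your experiment to be in: '
read -r directory <<EOF
$answer
EOF
mkdir -p "$directory"   # -p also creates any missing parent directories
[ -d "$directory" ] && echo "directory created"
```

In Perl, the built-in mkdir function could replace shelling out through system, which would also avoid quoting issues if the path contains spaces.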

4/25 - I changed how the script runs for the train creation section. Instead of going exactly in the order listed on the wiki page here https://foss.unh.edu/projects/index.php/Speech:Run_Train_Setup_Script, I am asking all of the necessary questions at the start, so the script can be left unattended once the questions (switchboard size and directory location) have been answered.

I also did some more research on how exactly the "system" command works in perl. There are two functions in perl that allow for executing a script or unix command: "exec" and "system". From what I was able to gather, "exec" replaces the current process with the command it runs, so on success it never returns at all; it only returns false if it fails to start the command. That means you never know exactly when the command is finished unless you manually check the processes running on the system. "system", however, waits for the command to complete and returns its exit status. I am hoping that means that when "system" returns, the command has finished, and makeExp will continue on to the next part.

$cmd = "genfeats.pl -t";
system($cmd);
$cmd = "nohup scripts_pl/RunAll.pl &";
system($cmd);
 * 1) now run the RunAll script

As an example, I don't want the RunAll.pl script to run until genfeats.pl has finished. I'm hoping my understanding of system is correct and that it will wait for the first script to finish executing before going on to the next part.
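That understanding can be sanity-checked with a standalone demo (using sleep, not the project scripts):

```shell
# perl's system() runs the command and waits for it, returning the
# child's exit status (0 on success).
perl -e '
    my $rc = system("sleep 1");    # blocks until sleep exits
    print "waited, rc=$rc\n";
'

# A trailing "&" makes the shell background the command, so this call
# returns immediately. That is fine for "nohup scripts_pl/RunAll.pl &",
# but it means RunAll.pl itself is not waited for -- only genfeats.pl,
# which has no "&", holds up the script. (The redirects just keep the
# backgrounded demo process from holding the terminal open.)
perl -e '
    my $rc = system("sleep 5 >/dev/null 2>&1 &");    # returns right away
    print "did not wait, rc=$rc\n";
'
```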


 * Plan:

4/22 - Logged in to check logs

4/23 - Logged in to check logs

4/24 - My future plan is to continue with adding the language model and decode parts to the script. I will also be testing it sometime in the near future.

4/25 - Assuming my understanding of the "system" command is correct, I plan on just continuing the script. The next thing I will do is test it (the train part) and then I will move on to the language model part if the train part was successful.


 * Concerns:

4/22 - Logged in to check logs

4/23 - Logged in to check logs

4/24 - My only concern about the script is whether it will mess up when it runs genfeats.pl -t and then nohup scripts_pl/RunAll.pl & right after. I don't think it is going to wait for the first script to be finished, so I think I will have to find a way to check the running processes to see whether the script is done in order for this to be successful. Hopefully perl can make that easy.

4/25 - Same concerns as last time really, but I feel a bit better after researching how system works. Hopefully it works as intended.

Week Ending May 2, 2017

 * Task:

4/27 - Work on getting the createExp.pl script to be runnable from anywhere

4/29 - Logged in to check logs

4/30 - Logged in to check logs

5/2 - Try to fix the error we were having with our createExp.pl script not working globally on the system.


 * Results:

4/27 - As of right now it is not working for some reason. We have the #!/usr/bin/perl at the top to indicate it should be run using perl. We have it in the correct folder, /mnt/main/scripts/user, which is in the PATH environment variable for our server. We are able to see the script using the which command. It tells us that it is in the correct directory and that it can be found from anywhere. However, when it is run, we get a "Command not found" error.

4/29 - Logged in to check logs

4/30 - Logged in to check logs

5/2 - The first thing I did when trying to fix the script was check the permissions of the other scripts that were working correctly globally. Our permissions were the same, aside from the user they belonged to. Ours belonged to Jake, while makeTrain.pl, for example, belonged to root, so I figured I may as well try to get it working as root. I logged in as root, recreated the script, did chmod a+x createExp.pl to make it executable, and then tried running it. To no avail, it did not work again.

I kept looking up why it would keep saying the command is not found. Eventually I found a stackoverflow answer saying that it might have something to do with a character encoding that perl cannot read. Unfortunately, both what the answer called for and what our script used were the same, UTF8. The next thing I saw was that our line endings were different. Since I wrote the script on windows and then uploaded it to the server, the line endings were in CRLF format, whereas linux requires LF format. Once I edited the file to the correct format, the script worked as it should have. I'm not sure exactly why that would cause it not to work globally though. We were able to run the script locally in the windows format, but to run it globally it needed to be in the Unix format.
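For future reference, the likely reason the CRLF endings only broke the script when run globally: when a script is invoked by name, the kernel reads the #! line and takes everything up to the newline as the interpreter path, so with windows endings it looks for an interpreter literally named "/usr/bin/perl" followed by a carriage return, which does not exist, hence "Command not found". Invoking the script as perl createExp.pl skips the shebang line entirely, which would explain why it still ran locally. A quick way to spot and strip the carriage returns (the filename here is just an example):

```shell
# Make a demo "script" saved with Windows (CRLF) line endings.
printf '#!/usr/bin/perl\r\nprint "hi\\n";\r\n' > /tmp/crlf_demo.pl

# Count the carriage-return bytes: 2 before the fix.
tr -cd '\r' < /tmp/crlf_demo.pl | wc -c

# Strip them in place (the dos2unix tool does the same job).
sed -i 's/\r$//' /tmp/crlf_demo.pl

# And 0 after, leaving the plain LF endings the shebang needs.
tr -cd '\r' < /tmp/crlf_demo.pl | wc -c
```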


 * Plan:

4/27 - We will have to look into this more. It's more convenient for us to be able to run the script directly from the experiment folder the user will have, rather than having to change directories inside the script. If we get it working globally on the server, we have a smaller chance of running into an issue. We will have to try to find different solutions online, or maybe try creating the script while logged in as root and see if that makes a difference for whatever reason.

4/29 - Logged in to check logs

4/30 - Logged in to check logs

5/2 - If I have time I would like to find out why it was not working globally with the windows line endings. I mainly plan on just finishing the script. We still have the language model and decoding parts to go. I don't think they will take as long now. Fixing that problem has taken the longest out of the whole script so far, I would say.


 * Concerns:

4/27 - My only concern right now is that the script should be working: the permissions are right, it shows up in which, and it is in the correct directory. It has the same attributes as the other scripts that work globally, but it is not working, so I'm concerned as to why that is.

4/29 - Logged in to check logs

4/30 - Logged in to check logs

5/2 - No concerns right now

Week Ending May 9, 2017

 * Task:

5/5 - Finish the createExp script

5/6 - Logged in to check logs

5/7 - Logged in to check logs

5/9 - Create the wiki page for our new script createExp.pl


 * Results:

5/5 - The script looks like it is ready to go. All the code looks like it will work correctly. We haven't added a default option, but I'm not sure how useful that would be anyway. It won't really save that much time, but it is something we can add if we have time this weekend.

5/6 - Logged in to check logs

5/7 - Logged in to check logs

5/9 - I just set up a new page in the scripts section and gave a brief description of what this script does. Our comments in the code explain what we are doing. The code itself isn't that difficult to read; it basically follows right along with the experiment setup page on the wiki.


 * Plan:

5/5 - Jake and I just have to test it out and make sure it works 100%. We know the training part works, and the other sections follow the same pattern as the training part. We'll have to put up documentation on the wiki, and also leave some future plans for whoever works on this script in the future. In the future it would be nice to be able to change configurations while you're making the train part. It would also be nice if it could decode unseen data as well.

5/6 - Logged in to check logs

5/7 - Logged in to check logs

5/9 - I added some future plans for the people who will be working on this next year. As of right now, our script can only create experiments using the default settings in the configuration files. It would be preferable in the future to add an option while the training function is running so that you would be able to change the configuration files. This would make the script more useful when it comes time to try to get a better result. Another thing I said should be worked on in the future is the ability to run an unseen decode. Right now we only allow it to run on seen data. These are things Jake and I would have liked to add, but we did not have enough time at the end of the semester.


 * Concerns:

5/5 - No concerns from me right now

5/6 - Logged in to check logs

5/7 - Logged in to check logs

5/9 - No concerns right now