Speech:Spring 2014 Joshua Anderson Log


Week Ending February 4th, 2014

Task

Wednesday Jan 29th

I plan on pulling down the Speak repo from the code.google source located here: https://code.google.com/p/speak/ This will let me try to run the app on my local system and get a feel for how the system works in its current state.

Notes from QAing the web app:

1. Once I got the database built and was able to actually run the web app, the first time I went to the homepage I got this error:

Parse error: syntax error, unexpected '[' in /Users/joshuaanderson/Speak/web/php/controllers/dbUser.php on line 115

And here is the code snippet from that line area:

$userArray = array();
while($sth->fetch()) {
    $userArray[] = [$userID, $username];
}

After realizing what the intention was (the short array syntax [ ] was only added in PHP 5.4, so older PHP versions choke on it), I modified the code to use the older array() syntax:

$userArray = array();
while($sth->fetch()) {
    $userArray[] = array($userID, $username);
}

This allowed the system to bring me to the login screen.

2. The password hash function isn't secure. In the SQL scripts that create the database and admin user, I noticed the created Admin user's password looks hashed:

d033e22ae348aeb5660fc2140aec35850c4da997

That's all fine, however I couldn't find the actual password to use in order to log into the system. So on a whim, I decided to just google that hashed password... and sure enough, I came across a web page dedicated to reverse lookups of simple SHA-1 hashed passwords. The link to the site is here: http://md5-database.org/sha1/admin -- the bolded entry matches exactly the value in the database, meaning the admin password is just "admin" run through unsalted SHA-1.
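You can verify this with one line of Perl (Digest::SHA ships with Perl; this is just a demonstration, not project code):

use Digest::SHA qw(sha1_hex);
print sha1_hex('admin'), "\n";   # prints d033e22ae348aeb5660fc2140aec35850c4da997

Because the hash is plain unsalted SHA-1, any common password can be reversed with a lookup table like the site above; a salted, deliberately slow hash would prevent this.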

Now obviously this doesn't have to do with Speech recognition, but I thought I would still bring it up.

3. I did a test to add a new experiment to the system. Everything seemed to be added just fine; however, the attached File did not seem to get added.

I would like to add this functionality to the app so a file can be attached to a specific experiment record and then retrieved when viewing that experiment's detail page.

Thursday Jan 30

Today I connected to my account on Caesar through my Mac using SSH:

ssh jsm69@caesar.unh.edu

And was prompted for password... all that good stuff. I also successfully changed my password.

After that I started doing some research on how to add a secure file upload feature to the site, saving those files on the web server rather than in the database. I was initially thinking of saving them in the database, but from what I've read online that is a huge no-no in PHP/MySQL land. I will bring this up in the next status meeting to confirm this belief.

I also started making a list of items the team can bring up during the next status meeting, focusing on the Speak web app.

Over the weekend, I'd like to explore some of the Exp. directory structure past classes have made.

Saturday Feb 1

Today I logged into Caesar to do some exploring with Unix commands. I have very little experience with Unix commands; I'm mostly able to move through the file system, list out directories, and view their contents.

My main goal today was to find where SpeaK is located on Caesar. After looking through some of the past students' logs, I was able to find out that the machine they were assigned (which now I see we are also assigned) was miraculix. Knowing that, I remembered Prof. Jonas saying we could just ssh into Caesar, then once logged in, we can ssh into any other machine... so I just typed in:

caesar sp14/jsm69> ssh Miraculix

And sure enough that worked! So now I'm on the machine that the 2013 Spring team used. I used the following commands to get to the /mnt/main directory:

miraculix sp14/jsm69> cd ..

Directory: /mnt/main/home/sp14

miraculix home/sp14> cd ..

Directory: /mnt/main/home

miraculix main/home> cd ..

Directory: /mnt/main

miraculix /mnt/main>

Now in there, I could see all the directories on this server:

miraculix /mnt/main> ls

backup Exp install notes root srv ttemp corpus home local old scripts svn var

With some experience of where sites are usually placed, I decided to go into the "srv" directory. Lucky guess, as the next directory inside was "www". Here is my full command list to get to SpeaK:

miraculix main/srv> ls

www

miraculix main/srv> cd www

Directory: /mnt/main/srv/www

miraculix srv/www> ls

cgi-bin htdocs vhosts

miraculix srv/www> cd vhosts

Directory: /mnt/main/srv/www/vhosts

miraculix www/vhosts> ls

default speak test

miraculix www/vhosts> cd speak

Directory: /mnt/main/srv/www/vhosts/speak

miraculix vhosts/speak> ls

html php Sponser Project requirements sql tests uml

So that was a cool learning journey to find that out. Now the questions remain...

- How do I add files from my machine to there?

- Is that code repo the same that's on the code.google page?

Results

Tuesday Feb 4

This week's results included a great start regarding SpeaK and setting up the db to get it to work. I also hopped on Caesar and made some good steps in learning how that system works, as someone who had rarely used the Unix command line before.

I also browsed through last semester's Exp. group logs to gain more information on what they did throughout the year. As I stated above, it seems like they were trying to be able to upload files via Speak when creating a new experiment, but they never seemed to finish that, as I don't see anything about it in the current code base from code.google.com.

So my proposal during the next class meeting will be to gauge whether that is something we should add to SpeaK this semester. Some other things I noticed that would be nice to add include:

  • Ability to specify the Exp. number when creating a new one (currently pre-defined)
  • From Brian: possibly make creating a new experiment a button within Speak rather than having to run a number of scripts via command line... have to research if this is at all possible (found this to start somewhere: http://stackoverflow.com/questions/2050859/copy-entire-contents-of-a-directory-to-another-using-php)

Also, as Brian pointed out, we can move our files to Caesar using FileZilla. I have successfully connected to Caesar using FileZilla on my Mac. The protocol I have enabled is SFTP - SSH File Transfer Protocol - and I'm using Login Type "Ask for Password". I have pre-filled the username with my Wildcat one, "jsm69". Upon connecting, it prompts me for my password and I entered the NEW one I created the first time I logged in to Caesar via Terminal SSH.

Plan

Wednesday Jan 28

My plan will first be to get the Speak web app running on my system. Then once that's all good, I will be able to dive into the code being used now and basically do some QA of the app. During my QA process, I will take notes of how the app is working, and if there is anything particular to add and/or modify.

Concerns

Tuesday Feb 4

I don't have many concerns right now; like Brian mentioned in his log, there's just not much direction as of yet. I feel we all made good progress this week getting SpeaK installed on our systems via the code.google.com checkout, but obviously not all of us are going to be working on SpeaK. So next week we just need to figure out what our game plan is going to be and how we are going to execute it.

Notes for Next Meeting

Tuesday Feb 4

Speak:

  • About adding ability to upload files when creating new experiments
    • which file types allowed?
    • size restrictions?
  • Is the latest repo on the code.google source page?
  • How do we commit new changes to code.google.com? Does the Prof need to add us as commit users?
  • Noticed when creating a new experiment, the form doesn’t allow you to choose your experimentId… I can see that being an issue in keeping the experiments made on the server and the records kept on Speak consistent. Suggest making that field editable and adding business logic to make sure the entered Id doesn’t already exist, etc...
  • Possibly creating a page that allows users to create a new experiment directory within Caesar, with a form to take in inputs from the user like the exp. number, etc.
    • This would obviously be a large undertaking, including the ability to connect to Caesar from the SpeaK site (if we can’t already do that) in order to write into the Exp. directory
  • Password security? - see log above.

Week Ending February 11, 2014

  • 2/7/14 - logged in and viewed logs, and script documents. Also wrote my Tasks and Plans for this week below.
  • 2/8/14 - logged in and analyzed tutorials on creating a new experiment and running a train.
  • 2/9/14 - logged in and viewed Brian's log.
  • 2/11/14 - logged in and added my experiments to the Exps page - also updated log. Created the Proposal Sub-group page and gave my recommendation on how to create a more narrative proposal rather than a list of random items.
Task
  • Get the immediate goals for this semester set.
  • Communicate with group members and members of the Modeling group to find what will be beneficial to group as a whole in creating new experiments.
  • Come up with implementation and long term goals to achieve over the course of the semester.
  • Begin with our proposal planning.
Results
  • After the status meeting on the 5th, the Experiment group got together to gather our outcomes from that meeting. We spent some time pulling together a master list of what we all thought Professor wants us to tackle this semester. We also included a member of the Modeling team (Colby Johnson - http://foss.unh.edu/projects/index.php?title=Speech:Spring_2014_Colby_Johnson_Log) to give us more knowledge of what they need to speed up the process of creating a new experiment. This worked well.

A lot more contributions have been made to our Group page, including the immediate goals and long term/implementation goals we have come up with. After our next status meeting and work next class, we should have these tasks assigned and finalized, allowing me to successfully create a well written Proposal for the Experiment group.

To see our group page, go here: http://foss.unh.edu/projects/index.php/Speech:Spring_2014_Experiment_Group

I have some questions regarding the Run a Train tutorial that I plan to bring up at the next status meeting tomorrow. The first one is obviously that it didn't work correctly when either Brian or I did our first run with it. I did two runs, and both failed. It even says in the documentation that it will fail the first time (what is that about?). My question for Professor Jonas is, are we as the Experiment group responsible for how that script works? Do we have to fix it so it works correctly? Before I commit anyone to work on resolving that, I need to make sure it is our responsibility.

Like I said above, I created two new Experiments following the Run a Train tutorial. The two experiments and the details about them can be seen here:

  • 0154: http://foss.unh.edu/projects/index.php/Speech:Exps_0154
  • 0155: http://foss.unh.edu/projects/index.php/Speech:Exps_0155

I created the Proposal Sub-group page and gave my recommendation of how to create a more narrative proposal. This page will give the other proposal members a central area to put any questions about their group's sections. My plan is to have every group add their final proposal draft to that page, so I can then take those sections, create a master proposal paper, and post it on the Official Proposal page for the Spring 2014 semester.

Check out the Proposal group here: http://foss.unh.edu/projects/index.php/Speech:Spring_2014_Proposal_Group

Plan

My plans for this week include preparing the immediate tasks, long term goals, and actual implementation goals for the Experiment group of Spring 2014. This will take some time as we gather up our material from the last meeting, where Professor Jonas specified what he wanted us to focus on during the semester. Also, to gain more knowledge, we plan on speaking with the Modeling group to get more of a feel for how they run experiments. Doing this will help us figure out how we can make the process of creating a new experiment easier and more efficient.


Concerns
  • The fact that the entire group mostly spent our time on SpeaK last week put us back a bit on what Professor Jonas actually wanted us to do.
  • Need to know if it is our group's responsibility to fix any errors in the scripts, e.g. the RunAll.pl script gave almost the entire group the same error.

Week Ending February 18, 2014

Task
  • 2/15/2014 - I worked a lot on our Proposal. Got a good draft for our Introduction and Implementation Goals ready including the tasks we plan to hit this semester. Communicated with team that final list of tasks and asked each to choose which one they would like to hit along with an estimated time length to be used in the proposal Timeline section.
  • 2/15/2014 - Worked on better understanding the folder structure for the Experiment directory. Wanted to go through the Wiki and find information regarding each section to gain more knowledge of all the folders.
  • 2/16/2014 - Continued on the process I started yesterday - finishing up the process through Decoding. Also want to get a detailed description for each of the individual folders in the experiment directory.
  • 2/17/2014 - Read some logs today and added important parts of a convo I had with Colby and David from the modeling group. Also worked a bit on finalizing the Experiment groups Proposal.
  • 2/18/2014 - Logged in and read logs of other team members.
Results

Model Building

Part 1- Model Building - > Data Prep: http://foss.unh.edu/projects/index.php/Speech:Models_Data_Prep

  • To be ready to Run a Train and then Decode that train, we need to have the right files placed appropriately in the experiment directory.
    • There are 3 main pieces we need to accomplish this: audio files in .sph format, a transcript of those audio files, and a working dictionary.
      • The dictionary must contain the relevant words and their phonetic spellings, including the pronunciations for names.
        • According to Dictionary.com, phonetic spelling is "the representation of vocal sounds which express pronunciations of words. It is a system of spelling in which each letter represents invariably the same spoken sound"... e.g. WEDNESDAY = Wed Nes Day
      • Copies of audio files in .sph format can be found on Caesar at: /mnt/main/corpus/switchboard and then go inside one of the folders. When running a train, you would select which folder (type of train) you want to do, i.e. miny, tiny, full10hour.
      • Copies of the corresponding Transcript files can also be found inside one of the type of train folders.
      • A working Dictionary is also a significant piece to performing a train and decoding as that will contain all the English words and their pronunciations. This document states that large dictionaries can cause issues during the train process, so to solve this there is a Perl script we use to essentially "prune" out the words that don't exist in our supplied Transcripts.
        • Copies of dictionary files can be found on Caesar at: /mnt/main/corpus/dist - currently the Run a Train tutorial says to use the /mnt/main/corpus/dist/cmudict.0.6d master dictionary until stated otherwise.

SIDE NOTE: This document seems a bit outdated because there are script and file references to locations that don't exist anymore. Would like to see this cleaned up sometime this semester for better accuracy.

Part 2 - Model Building -> Language Modeling (done after Running a Train): http://foss.unh.edu/projects/index.php/Speech:Models_LM_Build

  • The first step in creating a new language model is to clean the raw transcript files you used by removing unwanted characters. There is a ParseTranscript Perl script that can be used; it takes two arguments: the unparsed transcript file, and the name of the to-be-created filtered file. Again, the result of this script is a new file containing ONLY what was said in the supplied audio files (.sph).
  • The next step is to actually create the new language model. To do this, we have to call another Perl script called lm_create.pl which, when run, calls 4 different executable commands that, per the documentation, we don't need to supply any details for. The only argument for this script is the name of the parsed transcript file we created above. Example calls for both steps are sketched just below.
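To make the two steps concrete, the calls would look something like this, assuming ParseTranscript.pl lives with the other user scripts (the docs don't give its exact path) and using the tutorial's file names:

/mnt/main/scripts/user/ParseTranscript.pl trans_unedited trans_parsed
./lm_create.pl trans_parsed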

Part 3 - Model Building -> Building and Viewing Models: http://foss.unh.edu/projects/index.php/Speech:Models_AM_Build

  • There are two types of models we can create here: Acoustic and Language Models
  • Building an Acoustic Model
    • A mini train and decode has been completed with supplied audio files, transcripts, and dictionaries.
    • The purpose of this task is to take dialog saved in the .wav format and their corresponding transcripts and be able to create a Speech Recognition Tool.
    • The trainer grabs the .wav files, phonemes dictionary, master dictionary, and transcripts of the convos and matches up the audio with the transcript. In order for the trainer to do this, it needs a dictionary with every word that is in the transcript and an accurate phoneme dictionary with every word that is in that said dictionary.
  • Verifying an Acoustic Model (Decoding)
    • The Decoder is used to check and confirm if the Trainer actually worked and how accurate it was. To do this, we have to run the run_decode.pl script.
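As covered in detail later in this log (see Run the Decode below), that call takes the form:

./run_decode.pl <train exp #> <acoustic model exp #>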

Run a Train - http://foss.unh.edu/projects/index.php/Speech:Training Below are notes I took on all the scripts we need to run during this process, so I can learn what each sub-folder is used for.

  • Part 1: Setup the task directory
    • This step involves creating a new Experiment directory to be used for this specific data set.
    • After we use the mkdir unix command to make our new exp#, we have to prep the directory by calling a script called setup_SphinxTrain.pl. This script takes in an argument which is the newly created exp# we have. Based on documentation, this is the script that creates all the sub-folders and copies over some essential scripts, though not all of them, and imports a generic train configuration file (sphinx_train.cfg)
      • NOTE on this .cfg file - this file needs to update some hardcoded placeholders with the new exp# we are currently using. Eric's train_02.pl script takes care of that mess so when we do start to create some master scripts for this process, we need to make sure that step is definitely included as this saves a future step further down the road of Running a Train.
  • Part 2: Setup the Sphinx Train Config file
    • This is where we have to change the placeholder values of the sphinx_train.cfg file to the newly created experimentId... We are going to modify this process so the end user does NOT need to open the crappy emacs unix thing and change them manually.
    • Just to note, before doing this process, we move into the /etc folder.
  • Part 3: Generate the Transcript and its Associated Audio-file list
    • This is where we run a script that will generate the Transcript file to be used in running the train when finished all the steps. To do this, we have to first choose which corpus subset of audio files we want (i.e. tiny, mini, full10, etc). Each of these directories contain both audio files in .sph format and the textual transcripts.
    • To complete this process, we call the genTrans6.pl script. This script takes 2 arguments: the full absolute path to the corpus subset we want to use, and the experimentId we are currently using in this experiment.
    • We first have to make sure we are in our BASE experiment folder: /mnt/main/Exp/expId#
    • An example call would look like:
      /mnt/main/scripts/user/genTrans6.pl /mnt/main/corpus/switchboard/mini/train 0028
    • Upon completion, we will have 2 new files created in our etc directory: exp#_train.trans and exp#_train.fileids

2/16/2014 - Continued working my way through what each folder inside the Experiment directory is used for.

  • Part 4: Create the Experiment Dictionary and Copy Over the Filler Dictionary
    • Once we've completed step 3 by generating a valid transcript, we can now create a custom dictionary for this specific experiment, which will contain a list of words along with their corresponding pronunciations in Arpabet format.
      • Arpabet format is a "phonetic transcription code developed by Advanced Research Projects Agency as part of their Speech Understanding Project (1971-1976). It represents each phoneme of American English with a distinct sequence of ASCII characters." - http://en.wikipedia.org/wiki/Arpabet
    • According to the documentation, this dictionary resides in the /etc folder within the directory structure.
    • The reason we create this custom dictionary for the train is simple: Train and Decode speed.
      • According to the documentation, like we said above, if we used the entire English dictionary, the process would be much longer than we need it to be.
    • To complete this step, first run this command: cd etc
      • We are going to be using a script called pruneDictionary2.pl that "prunes" the large dictionary
      • This script takes 3 arguments: the name of the transcript to generate a word list from, a "Master" dictionary to reference against, and the file name of the new dictionary to be created
      • An example usage would be:
        /mnt/main/scripts/train/scripts_pl/pruneDictionary2.pl <experiment #>_train.trans /mnt/main/corpus/dist/cmudict.0.6d <experiment #>.dic
      • After that, we have to now copy over the "filler" dictionary into the same /etc folder
        • The filler dictionary is composed of non-speech events, mapping them to user-defined phones.
        • An example usage would be:
          cp -i /mnt/main/root/tools/SphinxTrain-1.0/train1/etc/train1.filler <experiment #>.filler
  • Part 5: Generate the Phone List
    • Phones are the smallest component of a phonetic transcription code (Arpabet) - they represent what each part of a word sounds like.
    • We first need to copy over the genPhones.csh script to my etc folder inside the experiment directory.
      • cp -i /mnt/main/scripts/user/genPhones.csh .
      • NOTE: I need to have the "." period at the end - it's the destination argument to cp and means "copy into the current directory", so the command errors out without it.
    • After that has been copied over, lets execute it:
      ./genPhones.csh <experiment #>
    • Once that has successfully executed, a file gets created: <experiment #>.phone
    • However, we have to insert an additional phone entry into that generated file... we need to insert the phrase, "SIL" (without quotes), in the correct alphabetically ordered spot - not doing so will error out the trainer. (A sketch of automating this appears after Part 6 below.)
      • NOTE - we need to make this process more descriptive because without Brian's guide last week, I would have no idea what I was doing. Need to modify this with step-by-step instructions for opening the file and finding the correct spot using Emacs
  • Part 6: Generate the Feats data
    • Feats is short for Features - used in training and is derived from recordings. The data derived from this step is also used when decoding the train
    • We need to be in the base experiment_# directory to execute the following script.
    • To create the Feats, we run a script called make_feats.pl
    • Usage would be:
      /mnt/main/scripts/train/scripts_pl/make_feats.pl -ctl /mnt/main/Exp/0028/etc/0028_train.fileids
      (with the 0028 being whatever experiment # you are currently working on).
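Since inserting SIL in Part 5 is the one manual edit left in this whole flow, here is a minimal Perl sketch of how it could be automated - my own illustration, assuming the .phone file is one phone per line; this is not an existing project script:

use strict;
use warnings;

my $phone_file = shift or die "usage: $0 <exp#>.phone\n";
open my $in, '<', $phone_file or die "can't read $phone_file: $!";
chomp(my @phones = <$in>);
close $in;

# append SIL if it's missing, then sort so it lands in alphabetical order
push @phones, 'SIL' unless grep { $_ eq 'SIL' } @phones;

open my $out, '>', $phone_file or die "can't write $phone_file: $!";
print {$out} "$_\n" for sort @phones;
close $out;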

Creating the Language Model - http://foss.unh.edu/projects/index.php/Speech:Create_LM

  • Below are the steps I've noted to create a new Language Model (we already went over the basis for creating one above):
    • We first need to create a new folder within our base Experiment directory: mkdir LM
    • Then of course, go inside that cd LM
    • We now need to copy the SAME transcript we used when Running a Train into this new LM folder.
    • So if we are using the mini/train corpus directory, usage would look like this: cp -i /mnt/main/corpus/switchboard/mini/train/trans/train.trans trans_unedited
      • There is NO documentation as to what the "trans_unedited" argument is...
    • Once that is copied over to the LM directory, we now need to prepare that Transcript
      • There is a script called ParseTranscript.pl that takes in two arguments: trans_unedited (which I assume is the file we create when copying over the transcript we used previously), and trans_parsed (which I assume is the file that will get created after this script runs)
      • Before we do anything else, we have to copy over the script that actually will create the language model: cp -i /mnt/main/scripts/user/lm_create.pl . (have to include the "." at the end).
      • Now we can run it by doing: ./lm_create.pl trans_parsed - trans_parsed is the file we created after running ParseTranscript

NOTE: So David did run a successful full process of Running a Train -> Building a Language Model -> Running the Decode.

  • Exp 0156 - http://foss.unh.edu/projects/index.php/Speech:Exps_0156
    • Purpose: Build an acoustic model using the first_5hr data set with modified parameters in hope of getting a better decode. Used a density of 8 and senone of 200.
    • Results: Acoustic Model was built successfully.

Below is the final step - Running the Decode

Run the Decode: http://foss.unh.edu/projects/index.php/Speech:Run_Decode

  • Prepping the DECODE directory and starting the Sphinx 3 Decoder:
    • From your base experiment folder, create a new directory called "DECODE". mkdir DECODE
    • Go into that new directory cd DECODE
    • We have to copy down another script to run the decode process: cp -i /mnt/main/scripts/user/run_decode.pl . (have to include the "." at the end)
    • NOTE - In that scripts/user directory, there is another decode script titled run_decode2.pl. Colby J from the Modeling group has told me that this script is very useful as it takes a 3rd argument where we can specify the Senone value used in the Decode process - something Eric found out last summer... Jonas mentioned something about this; it basically helps keep the ratio used during the process more balanced.
    • Now this run_decode.pl script takes two arguments: experiment number of the train we ran, and the experiment number of the Acoustic Model we built - NOTE: The Acoustic Model is a result of successfully running a train.

NOTE - Regarding the two experiment numbers we have to pass to the run_decode.pl script, I was a little confused about why they could sometimes be different, like in David's case above... he used the 0156 experiment AND the 0158 experiment to run the decode... So I emailed the modeling group (Colby S, Colby J, and David) to clarify why this was the case... I saw in the documentation that after you have successfully Run a Train, an Acoustic Model will be created. Then you can just run the decode process using the same Experiment # for both arguments to run_decode.pl. I wanted to make sure I wasn't thinking something different.

  • My Email to them:
    • Okay I guess I just saw at the end of Running a Train, the successful result of doing that will ultimately create the Acoustic Model... So then just to clarify, when you're ready to DECODE and want to call the run_decode.pl script, the two arguments you need to pass (FROM Wiki Page: This script takes two parameters, the experiment number of the train to be decoded, and the experiment number of the Acoustic Model to be used) will mostly be the same as you create a new experiment, say 0200, and you successfully run a train with that experiment thus creating an Acoustic Model as well... so the next step you would do is create the DECODE directory inside your base experiment folder, and call the run_decode.pl like: ./run_decode.pl 0200 0200
  • Colby Johnson's response:
    • You are correct in thinking the way that you are. There are some circumstances where people were running the Decode on a different experiment number which required them to link all of the files contained in the two experiments used. There is nothing wrong however with doing the decode in the same experiment. No matter the case the experiment numbers will always be the ones where you have run the train. These under a rare circumstance will be different, but probably not for something you would be doing (or any of us for that matter).
    • And after that runs, check the log file and you should be good and ready to use the SCLite tool to produce the table with information regarding accuracy, word count, etc.

2/16/2014

  • Information regarding individual folders in the experiment directory
    • /bin
    • /bwaccumdir
      • This folder is empty
    • /etc
      • This is where the Sphinx Configuration file is located when we first create the new experiment: sphinx_train.cfg
        • Analyzed this file a bit and found some interesting snippets of it that use our folder structure...
# Directory containing SphinxTrain binaries
$CFG_BIN_DIR = "$CFG_BASE_DIR/bin";
$CFG_GIF_DIR = "$CFG_BASE_DIR/gifs";
$CFG_SCRIPT_DIR = "$CFG_BASE_DIR/scripts_pl";

# Directory to write queue manager logs to
$CFG_QMGR_DIR = "$CFG_BASE_DIR/qmanager";
# Directory to write training logs to
$CFG_LOG_DIR = "$CFG_BASE_DIR/logdir";
# Directory for re-estimation counts
$CFG_BWACCUM_DIR = "$CFG_BASE_DIR/bwaccumdir";
# Directory to write model parameter files to
$CFG_MODEL_DIR = "$CFG_BASE_DIR/model_parameters";
# Directory containing transcripts and control files for
# speaker-adaptive training
$CFG_LIST_DIR = "$CFG_BASE_DIR/etc";

#*******variables used in main training of models*******
$CFG_DICTIONARY     = "$CFG_LIST_DIR/$CFG_DB_NAME.dic";
$CFG_RAWPHONEFILE   = "$CFG_LIST_DIR/$CFG_DB_NAME.phone";
$CFG_FILLERDICT     = "$CFG_LIST_DIR/$CFG_DB_NAME.filler";
$CFG_LISTOFFILES    = "$CFG_LIST_DIR/${CFG_DB_NAME}_train.fileids";
$CFG_TRANSCRIPTFILE = "$CFG_LIST_DIR/${CFG_DB_NAME}_train.trans";
$CFG_FEATPARAMS     = "$CFG_LIST_DIR/feat.params";

So it's fair to say that the experiment structure isn't something UNH fully came up with - it was designed by the Sphinx creators.

      • This directory gets used a lot in the process of Running a Train. During that process, we create a number of files including:
        • <exp_#>_train.trans
        • <exp_#>_train.fileids
          • Both _train files will get created when we run the genTrans.pl script.
        • <exp_#>.dic
          • The .dic file gets created when we run the pruneDictionary2.pl script using a combo of our generated Transcript file and the Master English dictionary we provide (cmudict.0.6d)
        • feat.params
          • This file gets created when we run the make_feats.pl script.
    • /feat
      • This folder is empty.
    • /logdir
      • This folder remains empty UNLESS you use Eric's train_01 and train_02 perl scripts - he has code that actually creates log files.
    • /model_architecture
    • /model_parameters
      • Both the model_ folders remain empty after completing a Run a Train process - however I feel that they might get used when decoding or creating the language model (next step after running a train).
    • /python
    • /scripts_pl
      • This folder contains what seems like ALL of the Perl scripts that are used when we are actually ready to "run" the Train - RunAll.pl ... I'm confused because the tutorial page has us call the RunAll script located in Caesar's scripts directory instead of the one in our Experiment directory. I'm curious why that is and why we can't call the RunAll in our own Experiment folder - I think we should check with the Modeling group and Professor about that
    • /wav
      • This folder gets populated with A LOT of .sph files when we run the make_feats script.
    • /LM
      • You create this folder manually once you have successfully run a train (thus creating an Acoustic Model); there's no script that creates it for you.
      • When you do have it created, what goes into it is a copy of the Transcript we used to run the train
      • And then we run a script called ParseTranscript that will take in that used Transcript file and parse it out, creating another file called trans_parsed which contains a parsed and much smaller version of the Transcript we used.
      • We then have to call a script called lm_create.pl and pass it that trans_parsed file. This creates the Language Model and drops files into our LM folder.
    • /DECODE
      • This folder is created manually right before decoding, and run_decode.pl gets copied into it (see Run the Decode above).

---

2/17/2014

On Sunday night, I emailed the Modeling group regarding building an Acoustic Model BEFORE running the Decode process. I was confused because there didn't seem to be a specific area on the Wiki with instructions to create an Acoustic Model... there were many areas that talk about creating a Language Model, but not directly the Acoustic Model. So I emailed them this question:

So I'm going through the process of Running a Train, Building a Lang. Model, and then Decoding so I can have a better understanding on Experiment directory as a whole, and I'm at the Decode part where we call the run_decode script - it says we need to pass it the experiment number of the train we ran, and the experiment number of the Acoustic Model to be used...

I remember David did one of these with experiment #0156 and he then ran the Decode with #1058... my question is, there seems to only be instructions to '

Am I missing something regarding the creation of building an Acoustic Model?

And before Colby first responded, I sent them this:

Okay I guess I just saw at the end of Running a Train, the successful result of doing that will ultimately create the Acoustic Model...

So then just to clarify, when you're ready to DECODE and want to call the run_decode.pl script, the two arguments you need to pass (FROM Wiki Page: This script takes two parameters, the experiment number of the train to be decoded, and the experiment number of the Acoustic Model to be used) will mostly be the same as you create a new experiment, say 0200, and you successfully run a train with that experiment thus creating an Acoustic Model as well... so the next step you would do is create the DECODE directory inside your base experiment folder, and call the run_decode.pl like: ./run_decode.pl 0200 0200

That make sense?

And then Colby Johnson replied back and confirmed this thought:

You are correct in thinking the way that you are. There are some circumstances where people were running the Decode on a different experiment number which required them to link all of the files contained in the two experiments used. There is nothing wrong however with doing the decode in the same experiment. No matter the case the experiment numbers will always be the ones where you have run the train. These under a rare circumstance will be different, but probably not for something you would be doing (or any of us for that matter).

The Acoustic Model is generated from the train, the LM is built around the Switchboard Corpus you used (i.e. first_5hr/train, mini/train). Then the Decode is run against the data. With any parts missing the decode will not be created successfully. Alternatively you can generate the SClite table (scoring) even if the decode did not fully finish. This can skew results. To be sure it ran successfully follow these steps:

1. cd into the DECODE dir for your Experiment

2. Vi into decode.log

3. Press Shift+G (brings you to the bottom)

4. Go up a few lines. You should see SUMMARY in bold (not super obvious) - it should be located about 20 lines above the bottom or so

So at this point, Colby had confirmed my thought process behind it all and I had a good feeling about the Acoustic Model stuff... then I woke up Monday morning to a lovely email from David saying kind of the same thing, but not quite... You can read it (and I think everyone should) here: http://foss.unh.edu/projects/index.php/Speech:Spring_2014_Joshua_Anderson_Log#David_M_on_Building_a_Language_Model_.26_Acoustic_Model_then_Running_a_Decode

Plan

2/15 My plans for this week are to finalize our group's tasks for this semester. We have a better idea of what Professor wants from this group after the status meeting on the 12th. I can then use these tasks in our proposal, which needs to be completed by Tuesday night.

Other plans include working through our main task for this week, which is learning the theory behind the structure we use in the Experiment directory. Before we do any work regarding scripts or automation, it is best to learn WHAT the 8 folders in the main experiment directory are, WHY they are there, and what each one does. My plan, along with my other team members, is to fully digest all the parts of Building a Model that we train on and ultimately decode to check for accuracy. There are many pages on the Wiki that deal with this process, including the ones noted in my Results above.

If we all can go through all those pages and take notes on how we each interpret the process, then the group can get together, talk it all over, answer each other's questions, and make sure we all have an understanding of each folder in our experiment structure and what they're used for.

I will contact the Models group if I run into any questions during this process as they are in good shape regarding the Model system.

Concerns

2/15 - No concerns at the moment.

Week Ending February 25, 2014

Task
  • 2/21/2014 - Logged in and read Pauline's logs on all the scripts used in running a train, in anticipation of writing a master script for the modeling group.
  • 2/22/2014 - Logged in and looked at Eric's logs on train_01.pl and train_02.pl from Summer 2013. Started writing my master_run_train.pl script.
  • 2/23/2014 - Continued on my master_run_train.pl script. Also thinking of ways to re-create the Experiment stuff in my local Caesar directory.
  • 2/25/2014 - Continued on my master_run_train.pl script. Want to get at least parts 1-3 all set and working. Also want to make sure the directions and formatting of content when running the script is good.
  • 2/25/2014 - Read Pauline's, Brian's (WOW lol) and Ramon's logs for this week.
Results

2/22/2014

  • I have started the process of creating the master_run_train.pl script. After a couple hours of learning Perl and reading past scripts for examples, I have successfully coded a Perl script titled master_run_train.pl that wraps Eric's train_01.pl script and runs it. I have it set up now to accept 2 optional arguments: -a <true/false> and -x <experimentId>
    • I have run 2 test Experiments using this script (0187 and 0188) - the 87 one didn't run successfully, as nothing got created. My first successful test of calling my new script, which then calls the train_01.pl script, is the 88 one. That directory is filled with all the directories one starts out with when following the Run a Train tutorial.
    • Before moving on tomorrow, I would love to find a way to test all of this in my local /mnt/main/home/sp14/jsm69 directory on Caesar instead of the main Exp directory, as I plan on doing a lot of testing with this script. That will be my number 1 priority for Sunday.

2/23/2014

  • I started today looking at how I can do these tests in my local directory on Caesar. After about an hour, I found that if I modified the scripts I'm writing/modifying so the variables specifying the root Exp directory point to /mnt/main/home/sp14/jsm69/Exp instead of /mnt/main/Exp, that would work.
      • I pushed my scripts up to Caesar via FTP and tried it out and success! I created the 0000 experiment directory within my local Exp directory, away from the official Experiments.
      • This functionality uses Eric's train_01.pl script. I have renamed this script to exp_dir_setup.pl and made necessary modifications to work with the master script.
    • Starting now to add the train_02.pl script implementation to this master script. I have to modify it some and also rename it to be something more descriptive.
      • I have named this script exp_sphinx_config.pl as this script will actually open up the sphinx_train.cfg file and replace some values with the new ExperimentId.
      • I also have to allow the user to specify new Density and Senone values that can be modified in this file.
  • Master Run a Train Script
    • This script which is called master_run_train.pl successfully completes the first two steps in Running a Train.
    • When you first start the script there is a little intro that shows basic info about the script.
    • YOU DO NOT HAVE TO PASS ARGUMENTS when calling it like you do for every other script.
      • I have written this script to prompt the end user for the specific arguments; they just type each one in and hit enter.
    • Inside this script, I save all the arguments I ask for as variables, and once I've checked that I have their values and such, I call the exp_dir_setup.pl and exp_sphinx_config.pl scripts from my script, passing along the arguments I asked for.

So far in my internal tests in my local directory on Caesar, I have successfully built a new Experiment folder (0012 is my newest test). In my master_run_train.pl script, the next step after copying over all the directories is that I'm prompted to enter the Density value I want to use (optional) and the Senone value I want to use (optional)... after I input those values, the script lets me know it's finished modifying the sphinx_train.cfg file. I went to check it out and all the inputs I passed were in there correctly.
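As a rough illustration of that prompting pattern, here is a minimal Perl sketch (my own illustration of the approach, not the actual master_run_train.pl source; the default values are just examples):

use strict;
use warnings;

# ask a question, falling back to a default if the user just hits enter
sub prompt {
    my ($question, $default) = @_;
    print "$question [$default]: ";
    chomp(my $answer = <STDIN>);
    return length($answer) ? $answer : $default;
}

my $density = prompt('Density value', 8);
my $senone  = prompt('Senone value', 200);
# ...these values then get passed along to exp_sphinx_config.pl
#    to replace the placeholders in sphinx_train.cfg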

My goal for this week's meeting is to have this script and the two modified train_01 and train_02 scripts finished and ready for the Modeling group to start using. I hope to get the 3rd step in there, which is just running the genTrans5.pl script that takes in a corpus subset path and the experiment Id - I don't see this being a large task as I have already completed similar functionality.

2/25/2014

Today I continued on my master_run_train.pl script to get it to where I want it for this week's status meeting. I also spoke with Pauline about using a Git service to house all of our Perl scripts. I have some experience using github.com in the past; however, that site doesn't offer FREE PRIVATE repositories, which is what this class needs. After some digging, I found a service called BitBucket that does in fact offer FREE PRIVATE source code repos. I went ahead and created an account for UNH Manchester Capstone and got everything ready to start committing our Perl scripts - I will wait until Professor gives us permission to do this though.

Link to project: https://bitbucket.org/unhmanchestercomptech/unh-manchester-capstone-project
Username: unhmanchestercomptech
Password: UNHCompTechCapstone1860

If anyone wants or needs access to upload a script, please let me know and I will invite you. You can use the above credentials instead if you want.

After reading Pauline's log I found the Scripts page where she is creating a great layout of all the scripts the project uses for a variety of tasks. I went ahead and added this new master_run_train.pl script in there with the most up-to-date information about it.

Plan

My plan for this week will be to get started on writing a master script for running a train. The Experiment group has spent the last week and a half or so building our understanding of the entire process as a whole (Run a Train to build the Acoustic Model -> Build the Language Model -> Run the Decode), and it's at this point in the semester that I feel comfortable starting to simplify the process with a Perl script. I have spoken directly to the Modeling group, as they mentioned in their proposal they would like to collaborate on this. They were a huge help in getting me a document they use that helps them complete the current process, which is about six steps, a bit faster.

My goal for this script this week will be to get the first TWO steps of Running a Train completed with just 1 call to a script (my master_run_train.pl script). This will include them passing any required arguments that they currently pass already.

I have done some searching on calling a Perl script from another one AS WELL as passing arguments from the calling script. There are actually a number of different ways to do this, but it seems most only give you back the last "exit" value, which for some of the scripts they currently call is just "0". Obviously capturing only that won't work, as the called scripts print out a lot of status information. After more testing on my local machine, I found a way to do this (sketched below).
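For illustration, one standard approach that keeps the child script's output flowing to the terminal is Perl's system(), which inherits STDOUT and returns the exit status. This is just a sketch under my own assumptions - the script name and argument are examples, not the real call:

use strict;
use warnings;

my $exp_id = '0000';   # hypothetical experiment number

# system() runs the child script and lets its print statements appear
# live on the terminal; the return value encodes the exit status
my $status = system('./exp_dir_setup.pl', $exp_id);
die "exp_dir_setup.pl failed (status $status)\n" if $status != 0;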

Other plan is going to try and setup the entire Experiment environment in my caesar sp14/jsm69 directory so I don't inundate the official Experiment directory.

Concerns
  • Never really have done Perl before.
  • Not sure if you can even call other Perl scripts within a Perl script yet.

Week Ending March 4, 2014

Task
  • 2/28/2014 - Logged in and got started on planning the modifications to my script based on what we talked about in class.
  • 3/1/2014 - Finalized my plan of attack with Colby J on modifying my script. Began planning it out and actually coding.
  • 3/2/2014 - Continuing along my plan of attack for this Master Script. Really getting into making this work exactly how it needs to.
  • 3/4/2014 - Now that Caesar is back up, I can test the newly modified master_run_train2.pl script and continue working on it.
Results

3/2/2014

I have made good progress on re-formatting the master_run_train.pl script with the adjustments I have mentioned throughout this log. With Caesar being down however, I can't test it exactly, but I am confident it will work just fine.

Features I've added:

  • When user first runs, they are prompted for basic information including what type of Experiment is this: MASTER or CHILD.
    • A MASTER Experiment is a normal experiment we've been doing already - i.e. /mnt/main/Exp/0200
    • A CHILD Experiment is a sub-experiment INSIDE a MASTER Experiment directory - i.e. /mnt/main/Exp/0200/d12/s2000
  • This answer will dictate how the rest of the process is run.
  • The parts that I originally had working - creating the Experiment directory and configuring the sphinx_train.cfg file - now work with either a MASTER or a CHILD Experiment.
    • For both of these steps, I had to create new versions of exp_dir_setup.pl (train_01) and exp_sphinx_config.pl (train_02) specifically for the CHILD Experiment case.
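Just to make the MASTER/CHILD distinction concrete, here's a small sketch of how the two cases resolve to directory paths (my own illustration using the example paths above; the variable names are hypothetical):

use strict;
use warnings;

my ($type, $exp_id, $density, $senone) = ('CHILD', '0200', 12, 2000);

my $base = '/mnt/main/Exp';
my $path = $type eq 'MASTER'
    ? "$base/$exp_id"                      # e.g. /mnt/main/Exp/0200
    : "$base/$exp_id/d$density/s$senone";  # e.g. /mnt/main/Exp/0200/d12/s2000
print "$path\n";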

3/4/2014

Today I made huge progress on the master_run_train2.pl script. I first tested what I was working on during the weekend now that Caesar is back up and running and everything seemed to go smoothly; just some minor bugs. After testing the crap out of that, I continued on down the steps in the Run a Train tutorial. As of my last two tests at 9pm, I have made a MASTER Experiment and completed all the steps up to just before I have to insert SIL in the <expid>.phone file. I also made it this far in a test creating a CHILD Experiment as well.

The remaining items I have to do:

  • Insert 'SIL' in the phones file that gets generated
  • Create the features by running the make_feats.pl script

My plan for tomorrow's status meeting is to share with my team the progress I made and show them an example of creating a new Experiment. Then, during the actual meeting with all the groups and Professor, I want to demo this in front of everyone and hopefully be able to deploy my scripts using the live configuration (currently all my scripts load from my personal home folder on Caesar).

Plan

This will be updated throughout the weekend.

My plan for this week is to do a couple things to my master_run_train.pl script. First, before I continue onto steps 4, 5 and 6, I want to make sure this works how Professor would like it to work. Going to confirm this with the Modeling group and get started on that.

I spoke with Colby Johnson from the Modeling group to confirm my plan for modifying what I have written so far on the master_run_train.pl script. After planning things out, I think I'm on the right track now to begin actually coding it up. I'd like to have as much as I possibly can ready to go for this week's status meeting on Wednesday.


Concerns
  • A little cautious of how I want to modify this script. I don't want to make it too convoluted, so I'm going to make sure I plan things out before I start blindly coding.

Week Ending March 18, 2014

Task
  • 3/16/2014 - Logged in and started my planning process for this week
  • 3/17/2014 - Logged in.
  • 3/18/2014 - Logged in and viewed Ramon's guide he's starting to create for the master_run_train.pl script. Also worked on simple modifications to script after talking with Colby.
  • 3/19/2014 - Logged in and continued writing the Run Train Master Script tutorial page.
Results
3/16/2014

Today I was able to modify the master_run_train.pl script. My main goals included:

  • Adding the last two steps of configuring the Run a Train process:
    • Insert 'SIL' in the .phone file located in the /etc folder
    • Run the make_feats.pl script to generate the feats directory and all its contents.
  • Add the ability for the script to automatically determine the next available Master Experiment number in the Exp. directory (see the sketch after this list). This will help the end user as they don't have to first determine this on their own prior to running the script.
  • Add the ability for the script to prompt the user for the Dictionary file they want to use for the Experiment. Instead of just hard-coding cmudict.0.6d, this gives the user significant flexibility, as the Dictionary can change.
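Here is a minimal sketch of the next-available-number logic (my own illustration of one way to do it, not the script's actual code):

use strict;
use warnings;

# find the highest numeric directory name under Exp and add one
opendir my $dh, '/mnt/main/Exp' or die "can't open Exp dir: $!";
my @nums = sort { $a <=> $b } grep { /^\d+$/ } readdir $dh;
closedir $dh;

# zero-pad to four digits, e.g. 0029 (the // operator needs Perl 5.10+)
my $next = sprintf '%04d', ($nums[-1] // 0) + 1;
print "Next available experiment: $next\n";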

I started off at the top of that list and luckily made it through all the tasks. I ran a large number of tests in my home directory on Caesar, and everything through the make_feats step (the last one before RunAll.pl is called) worked perfectly. I then modified the code in the files I'm using to point to the official /mnt/main/Exp directory and created experiment 2011, which also ran through perfectly.

My next goals before class this week are to update the master_run_train.pl script wiki page to include these updates. I then want to work on modifying the Run a Train tutorial page and create an instruction guide for using the master_run_train.pl script.

3/17/2014

Today I spoke with Colby and David about the master_run_train.pl script and whether anything else needs to be modified. Colby said that instead of allowing the user to choose which dictionary they want to use, I should just have the script hard-code the /custom/switchboard.dic that he and David found during the break. Also, he said to change the reference to genTrans5.pl to genTrans8.pl as that one is the most up-to-date.

I hope to launch this officially tomorrow during the next status meeting so all other groups can start using it to configure their models before running the actual train (calling RunAll.pl).

As of 3/17 @ 5:45pm I pushed all updated files to Caesar and they are ready to be used.

Tonight, I hope to get started on a new Run a Train Tutorial page in the Information wiki location.

Plan

My plan for this week is to finish up the master_run_train script to include the final two steps: inserting 'SIL' into the generated .phone file and then running the make_feats.pl script.

I spoke with David a bit about the inserting of the 'SIL' part, because I mentioned it could be tricky placing that phrase in the correct alphabetical order - but he said: why not append it to the bottom of the .phone file, then sort the file alphabetically and save it. Mind instantly blown, so I am going to attack that starting this week.

Next in line would be to show the next MASTER Experiment number when the end user chooses to do a new master experiment. This will help them so they don't have to actually go into the Exp directory and check the next available number.

After that, I am going to modify the script so that instead of using a hardcoded Dictionary, it prompts the user to enter the name of whatever dictionary they want to use.

And lastly, for this week's meeting, I would like to get the master_run_train script wiki page all up to date and officially release this to the class so they can start using it.


Concerns

Week Ending March 25, 2014

Task
  • 3/22/2014 - Logged in and planned out what I would like to accomplish this week.
  • 3/23/2014 - Logged in.
  • 3/24/2014 - Logged in.
  • 3/25/2014 - Logged in.
  • 3/26/2014 - Logged in and made some modifications to the master_run_train.pl script based on discussions with Brian.
Results

On Monday night, I worked with Brian as he was trying out my master_run_train script. He had some questions and then some great ideas to change some wording around in the instructions part.

I contacted the Experiment group regarding the next phase of the Experiment Information wiki page located here and how we plan to update it. I recommended that we each think of something to contribute so people new to the Speech Recognition project can have some idea of what exactly an Experiment is - and that the actual result of Running a Train is the most important part of the entire project: an Acoustic Model. Like our proposal says, I really think that if we can format this page with enough information about Experiments at a very high level, the next semesters will be in great shape.

On Wednesday morning, after talking with Brian, I made some modifications to the master_run_train.pl script per his request. They weren't functional edits, but rather clarifications of some of the instructions shown during the script. The only real functional change was that after the make_feats.pl script runs (step 5), I programmatically move the user back to their base experiment directory and then show them the command they have to run to start up the RunAll.pl script.

Plan

For this week, I would like to run a full Experiment from creation through language model building, decoding, and finally scoring it. I also want to make sure everything the Experiment group said it would accomplish is finished up. I plan on speaking with Colby Johnson about which corpus I should do this on and the Senone/Density values he has had some success with. During the semester so far, I have been focused on building the master_run_train script to make that process - which, besides waiting for the actual trains/decodes to run, is the most time consuming part - faster and smoother for everyone. Fortunately it is finished enough to start using, and based on some reactions from the class last week, it seems to be working well and helping those who run Experiments run them faster.

Regarding the Experiment group and what we said we'd accomplish in the proposal, I want to make sure we are hitting those goals. I think I need to talk to Pauline and get her input on how we can adjust the Experiment wiki Information page located here. We need to think about how we can present information on that page without confusing people with duplicate data found elsewhere in the wiki.

Concerns

None at this moment.

Week Ending April 1, 2014

Task
  • 3/26/2014 - Logged in and wrote out the plan for the Experiment group this week.
  • 3/27/2014 - Logged in and worked on the Master Run Train Guide
  • 3/29/2014 - Logged in and worked on modifying the Master Train Script based on the input from Prof. Jonas and Brian.
  • 3/30/2014 - Logged in and continued working on the script and Experiment Information page redesign.
  • 4/1/2014 - Logged in and tested the final product of modifying the master script. Pushed it live.
Results

This week the Experiment group redesigned the Experiment Information Page, and it looks great. I also modified the master run a train script to prompt the user when they want to create a new MASTER Experiment. The prompt tells them the new Experiment number they will be using AND to make sure they create a Log Entry on the WIKI BEFORE continuing. The user also has the option to quit the script at this step if they don't actually want to create the Experiment directory.
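The confirmation step looks something like this (a sketch only - the wording and the example number are hypothetical, not the script's exact code):

use strict;
use warnings;

print "This will create Experiment 0213. Have you created a Log Entry on the wiki? (y/n): ";
chomp(my $answer = <STDIN>);
exit 0 unless lc($answer) eq 'y';   # bail out before any directories are made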

Plan

The plan for this week is for the entire Experiment group to create the best possible informative web page to replace the existing Experiment Wiki Page. The now-archived version of the page had information that was unrelated to any current configuration.

Our plan is to have a page where anyone new to the Capstone project can go to and fully understand what an Experiment is, why we do them, what the Caesar Exp directory is, definitions to common terminology used by Professor, scripts that are used during the Experiment process, guides to start creating your own Experiment (Apple, Linux, and Windows OS instructions), and more.

When we started this project, most of us were basically clueless - and that is expected. We were all confused and frantically searching for information that seemed to exist, but in random form and scattered all over the place. So the Experiment group sat down and each of us talked about our first 3 weeks on this project and how we tried to fit the pieces of this puzzle together. All four of us agreed that even 2 weeks into the project, we still didn't know that by running a train you are essentially creating your Acoustic Model. The Acoustic Model is the most important part of the entire project, as Jonas will tell you. Come to find out, the currently archived run a train tutorial has ONE sentence at the bottom that says congrats on creating your acoustic model! ... that's it. So that was a big item all of us struggled to find concrete information about, and there are many others that had us throwing fits.

After we discussed our experiences, we asked ourselves how we could fix it. Our plan is to create a page with enough information, laid out in a way that someone who just started on the project can read, digest, and most importantly LEARN from.

On Saturday the 29th, I emailed the Modeling group and asked if they could come up with high-level descriptions of each process an Experiment goes through: Run a Train to build the Acoustic Model - Build the Language Model - Run a Decode - Score the Decode. I think if we can get a paragraph of information for each step in the process, a new user to the project will be ready to take on whatever challenge they get at the beginning of the semester.

Concerns

Week Ending April 8, 2014

  • 4/2/14 - Talked with Colby about our plan of attack
  • 4/3/14 - Started up my first Train using configuration settings proposed by Colby
  • 4/5/14 - Started up the tune_senones.pl script on my train I ran yesterday.
  • 4/7/14 - Talked a lot today and this weekend to the group about our plan and we are really making great progress.
  • 4/8/14 - Was assigned another task by Jonas: getting Speak loading on Rome through Caesar.
Task

My tasks are to get to know the process Colby J and David are using for training and decoding. They have found some very cool stuff in the past few days that they will be passing on to the rest of the team throughout the week. After I feel comfortable with that material, I will run a train using the configuration settings Colby wants me to use.

On April 8th, I was assigned a task to get Speak running on Rome through Caesar. This means that when one goes to https://caesar.unh.edu (note the HTTPS), the request will be forwarded to Rome and load up Speak. During some preliminary research, I found that Apache was indeed running on Rome, which is a good thing.

Results

This weekend, we as a group made amazing progress in our mission. Without giving anything away here, I'll say that the whole group is engaged in finding any possible solution to improve the accuracy, the WER, and the real time factor.

Plan

This week is all about getting off to a great start in the competition. I spent a good chunk of time Wednesday and Thursday talking to Colby J about our team's plan of attack, and he already had some great ideas. We came up with a list of items, sorted by priority, that we think will help us get the results we want in the time we have (about 4 weeks). We understand that running the number of trains and decodes we plan to do is very time consuming, so the first thing we did was make sure everyone knew the process of running a train (the new way Colby and David have been using for a couple of days). This way we can ALL be running a train or a decode at all times, each on a different corpus set with different configurations. This will definitely give us the best possible chance to find the proper parameters we need to then run a very long (about a week or so) train and decode on the 100hr transcript.

Concerns

None right now - we're on track for what we want to do I think.

Week Ending April 15, 2014

  • 4/9/2014 - Started working on getting MySQL installed on Rome for Speak.
  • 4/12/2014 - Continued on this process and working with Brian.
  • 4/13/2014 - Worked on Experiment Group URC Poster.
  • 4/14/2014 - Researched why Speak is so slow.
Task
  • Get Speak working through Caesar on Rome. Should be able to go to https://www.caesar.unh.edu and use Speak.
  • Get MySQL working on Rome.
Results

On Thursday night, I started getting MySQL set up on Rome. Here is my documentation:

My guide is located here: http://www.server-world.info/en/note?os=Fedora_19&p=mysql

Step 1:

 I logged into Caesar: ssh jsm69@caesar.unh.edu
 Then ssh’ed into Rome: ssh rome
 Once on Rome, I logged into super user: su

I was then in a position to install MySQL onto Rome, and then to modify the Speak dbConnect.php file to point there instead of its current target: Caesar.

I ran this command to install MySQL:

 yum -y install community-mysql-server

And it came back with this:

 Loaded plugins: langpacks
 Package community-mysql-server-5.5.35-1.fc19.i686 already installed and latest version
 Nothing to do


That was surprisingly fun to see …

Step 2: Now knowing that MySQL is installed and up to date, I ran these commands to start the service and enable it at boot:

 systemctl start mysqld.service
 systemctl enable mysqld.service 

This seemed to work just fine, as there was no output from either command. To make sure MySQL is actually running, we can go into the MySQL monitor and play with it.

I did this by simply running:

 mysql

Then this was prompted:

 Welcome to the MySQL monitor.  Commands end with ; or \g.
 Your MySQL connection id is 6
 Server version: 5.5.35 MySQL Community Server (GPL)
 Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
 Oracle is a registered trademark of Oracle Corporation and/or its
 affiliates. Other names may be trademarks of their respective  
 owners.
 Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

So far so good!

Step 3: Now that we have MySQL running, I wanted to see what databases were currently on Rome. Running show databases; I saw the usual default databases you would expect… however, I didn't see any Speak database.

So I created it using the two scripts from the Google Code repo I pulled down at the beginning of the semester. To create the user and database, I ran this:

 DROP USER speak;
 DROP USER speak@localhost;
 CREATE USER `speak` IDENTIFIED BY 'speak';
 CREATE USER 'speak'@'localhost' IDENTIFIED BY 'speak';
 
 DROP DATABASE IF EXISTS `speak`;
 CREATE DATABASE `speak` CHARACTER SET utf8 COLLATE utf8_unicode_ci;
 GRANT ALL ON speak.* TO speak;


To create the tables and insert the admin user, I ran this:

 use speak;
 DROP TABLE IF EXISTS `speak`.`users` ;
 CREATE  TABLE IF NOT EXISTS `speak`.`users` (
 `User_ID` INT NOT NULL AUTO_INCREMENT UNIQUE,
 `User_Name` VARCHAR(15) NOT NULL UNIQUE,
 `Password` VARCHAR(40) NOT NULL ,
 `Salt` VARCHAR(64) NOT NULL ,
 `Email` VARCHAR(100) NOT NULL ,
 `Last_Name` VARCHAR(30) NOT NULL ,
 `First_Name` VARCHAR(30) NOT NULL ,
 `Middle_Name` VARCHAR(30) NULL ,
 `Other` VARCHAR(30) NULL ,
 `PermView` TINYINT(1) NOT NULL ,
 `PermCreate` TINYINT(1) NOT NULL ,
 `PermEdit` TINYINT(1) NOT NULL ,
 `PermModifyUsers` TINYINT(1) NOT NULL ,
 `Admin` TINYINT(1) NOT NULL DEFAULT 0,
 CONSTRAINT pk_userid PRIMARY KEY (User_ID)
 );
 DROP TABLE IF EXISTS `speak`.`experiments` ;
 CREATE  TABLE IF NOT EXISTS `speak`.`experiments` (
 `Exp_ID` INT NOT NULL AUTO_INCREMENT UNIQUE,
 `Purpose` LONGTEXT NULL ,
 `Details` LONGTEXT NULL ,
 `Results` LONGTEXT NULL ,
 `Summary` LONGTEXT NULL ,
 `Notes` LONGTEXT NULL ,
 `Create_Date` DATETIME NULL ,
 `Title` VARCHAR(45) NULL ,
 `Author_ID` INT NOT NULL,
 CONSTRAINT pk_expid PRIMARY KEY (`Exp_ID`),
 CONSTRAINT fk_authorid FOREIGN KEY (Author_ID)
       REFERENCES users (User_ID)
 );
 GRANT ALL ON speak.* TO speak;


 INSERT INTO users (User_Name, Password, Salt, Email, Last_Name, First_Name, PermView, PermCreate, PermEdit, PermModifyUsers, Admin) VALUES ('Admin', 'd033e22ae348aeb5660fc2140aec35850c4da997', '', 'admin@email.com', 'Admin', 'Admin', 1, 1, 1, 1, 1);

These scripts worked perfectly with no errors.

The d033 hashed password is just the SHA-1 of 'admin'.
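That's easy to verify with Perl's core Digest::SHA module:

use strict;
use warnings;
use Digest::SHA qw(sha1_hex);

# prints d033e22ae348aeb5660fc2140aec35850c4da997
print sha1_hex('admin'), "\n";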

Step 4: I wanted to set the password that Speak's dbConnect.php file uses for the speak DB user: 60b346e8e474e20b3e5cb68d462f687545f9a5fc

I ran these commands to do this:

 set password for speak@localhost=password('60b346e8e474e20b3e5cb68d462f687545f9a5fc'); 
 set password for speak=password('60b346e8e474e20b3e5cb68d462f687545f9a5fc');

There are two commands because I noticed the script above created the user twice - 'speak' with no host (equivalent to 'speak'@'%', matching connections from any host) and 'speak'@'localhost' (matching local connections), which MySQL treats as separate accounts:

 CREATE USER `speak` IDENTIFIED BY 'speak';
 CREATE USER 'speak'@'localhost' IDENTIFIED BY 'speak';

I just set the password on both for sanity.

Step 5: So now that we have our speak database and tables setup on Rome, I needed to modify the dbConnect.php file located on Rome in /var/www/html/speak/php/controllers/dbConnect.php

I ran this command to copy the file into my ~/speak directory on Caesar: scp dbConnect.php jsm69@caesar.unh.edu:~/speak

This seemed to work fine, and I could then FTP that file from Caesar onto my MacBook to edit. (Note: I could have used the vi editor to make this edit, but I wanted to figure out how to copy a file down to my machine, as I will need this later on when editing this site.)

Okay so I modify that file to look like this:

    // Not sure if I should use Rome's local IP address: 192.168.10.11 or localhost.
    $host = '127.0.0.1';
    $database = 'speak';
    $username = 'speak';
    $password = '60b346e8e474e20b3e5cb68d462f687545f9a5fc';
    $connection = @new mysqli($host, $username, $password, $database);
    // Works as of PHP 5.2.9 and 5.3.0.
    if ($connection->connect_error) {
        die('Connect Error: ' . $connection->connect_error);
    }

All seems good there, so I pushed that file back up to Rome.

Step 6: I then attempted to log in at https://caesar.unh.edu with the credentials 'Admin' / 'admin' and received a MySQL error when trying to connect: Permission Denied

I talked with Brian Friday morning about this and he wanted to look into it. He contacted me throughout the day to confirm what I had already completed Thursday night, and around 11pm he pinged me asking to try logging in ... and it worked!!!

You can see his log here about what he did to get MySQL working correctly with Speak's source code.

4/13/14 - As of today, Speak is up and running and is accessible here: https://caesar.unh.edu - This is great; however, the speed of it is horrible. This will take more looking into.

Also on Sunday, I finished up the URC Experiment Group poster.

4/14/2014

Today I researched a bit into why Speak on Rome is so slow. The page loads at a good speed for about 5 requests, and after that it just crawls and/or doesn't load. I'm not sure why we are using HTTPS for this site - nothing really secure is happening. HTTPS only makes load times slower because it has to encrypt EVERYTHING - EVERY TIME. I want to talk to the Professor and Brian about getting rid of HTTPS and trying the site over regular HTTP on port 80 to see if this solves our load time problems. Other than that, I will talk to the Systems group and find out if they have run into this or have any knowledge of why the Rome web server is so slow.

Plan

Last week, I was assigned a secondary task that is to take up 60% of my time, per Jonas. He wants me to get Speak running fully on Rome through Caesar: that is, going to https://www.caesar.unh.edu should bring up Speak. On Tuesday night, we were in the server room till 11:30pm, but we got it going - sans MySQL, however.

On Thursday, I want to start tackling this.

Also for the Experiment group's URC poster, I'd like to get that all wrapped up. So far the other group members have put together a draft with information we can use. Hopefully I can get this designed and finished up on Sunday.

Concerns

Week Ending April 22, 2014

Task
  • 4/17/14 - Spent about 5 hours researching the API of media wiki
  • 4/20/14 - Kept working on the MediaWiki API stuff
  • 4/21/14 - Logged in
  • 4/22/14 - Posted a Support Ticket on the MediaWiki website for help regarding the API
Results

4/17/14

Today I want to start trying to make a Perl script that sends an HTTP POST request to our Wiki site using MediaWiki's API. They have a wiki page set up for their API here: http://www.mediawiki.org/wiki/API:Main_page and our foss wiki also has an API reference page: http://foss.unh.edu/projects/api.php

For my Perl script, I started following this guide: http://xmodulo.com/2013/05/how-to-send-http-get-or-post-request-in-perl.html - right away I noticed a library that I don't think I have on my machine: LWP::UserAgent. I didn't worry about that just yet, so I continued following the guide's HTTP POST Perl example. This is the code I created:

use strict;
use warnings;
use LWP::UserAgent;
use HTTP::Request;
use HTTP::Message;

my $browser = LWP::UserAgent->new();
$browser->agent('Mozilla/5.0');
$browser->default_header('Accept-Encoding' => scalar HTTP::Message::decodable());
$browser->default_header('Accept-Language' => "no, en");
$browser->default_header('Content_Type' => 'application/x-www-form-urlencoded');
my $api_url = "http://foss.unh.edu/projects/api.php";
my $request = HTTP::Request->new(POST => $api_url);
$request->header('Accept-Encoding' => scalar HTTP::Message::decodable());
$request->header('Accept-Language' => "no, en");
$request->header('Content_Type' => 'application/x-www-form-urlencoded');
 
# add our data.
# This token is required for edit requests - must also be added last.
my $api_edit_token = "9ea5d9df257608ba66f28a6f02af8550+\\";
my $content_title = "Speech:TestingAPIWithPerlAPI";
my $content_text = "This is my second time trying our the Perl API.";
my $post_data = 'action=edit&format=json&title='. $content_title .'&text='. $content_text .'&token=' . $api_edit_token;
 
$request->content($post_data);
 
print $request->as_string;
 
# Fire away.
my $response = $browser->request($request);
if ($response->is_success) {
    my $message = $response->decoded_content;
    print "Received reply:\n$message\n";
}
else {
    print "HTTP POST error code: ", $response->code, "\n";
    print "HTTP POST error message: ", $response->message, "\n";
}

NOTE: Regarding the token you see above (my $api_edit_token = "6bde13537ae386fd6a72a232bd03cbbf+\\";) -- my research into the MediaWiki API led me to this. In order to create/edit/delete a page on our Wiki, we need to pass a token in our API call. To retrieve this token, we go to this page: http://foss.unh.edu/projects/api.php?action=tokens (the token value rotates, which is why this one differs from the one in the code above).

Of course I got an error when I ran it saying it doesn't know where the LWP library is, blah blah blah ... So I researched how to install that library and came across this guide: http://lwp.interglacial.com/ch01_03.htm

So I followed those steps and it took about 10 minutes to complete, as the installation process seemed to do a lot of copying files and running scripts.

Now I have it running and returning my JSON! So this is great news; however, I keep getting an error: {"error":{"code":"badtoken","info":"Invalid token"}} -- I'm stuck here and have been stuck here for quite some time. I have a strong feeling it has something to do with the '+\' at the end of the token, and that passing it in the POST data unencoded (or with the wrong encoding) is killing it. As you can see above, I am specifying the 'application/x-www-form-urlencoded' content type header in my request - and it still doesn't work. Will continue to look into this.
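If the unencoded '+' really is the culprit, one thing to try (a sketch, not a confirmed fix) is percent-encoding each value with the URI::Escape module before building the POST body, so the '+' becomes %2B and the spaces in the text become %20:

use URI::Escape qw(uri_escape);

# drop-in replacement for the $post_data line in the script above:
# percent-encode each value so '+' becomes %2B and spaces become %20
my $post_data = 'action=edit&format=json'
              . '&title=' . uri_escape($content_title)
              . '&text='  . uri_escape($content_text)
              . '&token=' . uri_escape($api_edit_token);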

I'm still getting this damn token error. No idea why. I have made sure the API token is the correct one, as I did notice it changes every 24 hours. Whatever script eventually works will need logic to grab that token dynamically instead of having it hardcoded, as sketched below.
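A sketch of what that dynamic grab could look like, assuming action=tokens returns JSON shaped like {"tokens":{"edittoken":"...+\\"}} and that the JSON module from CPAN is installed:

use strict;
use warnings;
use LWP::UserAgent;
use JSON qw(decode_json);

my $api_url = 'http://foss.unh.edu/projects/api.php';
my $ua = LWP::UserAgent->new(agent => 'Mozilla/5.0');

# ask the wiki for a fresh edit token instead of hardcoding one
my $response = $ua->get("$api_url?action=tokens&type=edit&format=json");
die 'token request failed: ' . $response->status_line
    unless $response->is_success;

my $data = decode_json($response->decoded_content);
my $api_edit_token = $data->{tokens}{edittoken};
print "Fetched edit token: $api_edit_token\n";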

I printed out the request content as a string just before actually making the request, and this is what it spit out: action=edit&format=json&title=Speech:TestingAPIWithPerlAPI&text=This is my second time trying our the Perl API.&token=9ea5d9df257608ba66f28a6f02af8550+\

Everything looks perfect!

On Tuesday, I posted a support ticket on the MediaWiki website: http://www.mediawiki.org/wiki/Project:Support_desk#API_Question_-_Create.2FEditing_page_42204

Hopefully there will be an answer to my issues soon.

Also on Tuesday, I did more research on coding against the MediaWiki API from Perl. This turned out to be somewhat successful, as there were 4 libraries I could choose from that encapsulate the MediaWiki API. I will start creating some test scripts with one of these and report back later today.
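For example, one of those libraries is MediaWiki::API from CPAN, which fetches and manages the edit token internally. A minimal sketch, assuming the module is installed and we have valid wiki credentials:

use strict;
use warnings;
use MediaWiki::API;

my $mw = MediaWiki::API->new();
$mw->{config}->{api_url} = 'http://foss.unh.edu/projects/api.php';

# log in, then create/edit a page; the module handles the edit token itself
$mw->login({ lgname => 'username', lgpassword => 'password' })
    or die $mw->{error}->{code} . ': ' . $mw->{error}->{details};

$mw->edit({
    action => 'edit',
    title  => 'Speech:TestingAPIWithPerlAPI',
    text   => 'Created via MediaWiki::API.',
}) or die $mw->{error}->{code} . ': ' . $mw->{error}->{details};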

Plan

My plan for this week is to get started researching the MediaWiki API, as well as finding out how to make POST and GET HTTP requests to our Wiki site from a Perl script. There seems to be a lot of material available for this, but no universal way. Prof. Jonas wants me to research this so that when we have a final script to create a new Experiment, it will automatically create your new Experiment log on the Wiki.

Concerns

Week Ending April 29, 2014

Task
  • 4/27/14 - Logged on. Communicated with group.
  • 4/28/14 - Logged on. Planned out the weeks work. Emailed David about his Exp/Train setup script.
  • 4/29/14 - Logged on. Worked on final report with Colby
Results

This week is all about getting our final results ready and writing up the final report for the team. I'll be working directly with Colby on the write up as the final decode finishes up.

Plan

This week's plan consists of working with the Avengers group to get the best final results from our data. Then of course we'll get the report finished up with our results after the last trains/decodes finish running - hopefully Tuesday.

I also want to get with David about the new script he was working on for setting up a new Experiment, and more specifically the Running a Train portion of the process. I want to talk to him so I can update the new Experiment Information wiki page before we finish everything up.

Concerns

With the rule that a specific disk (disk23) is not allowed to be used in our acoustic model, we were forced to stop what we had been doing for 2 weeks and fix our data before moving on. This definitely slowed our progress. Also, Caesar's server rack popped a fuse, shutting down multiple machines and cutting off multiple processes, which slowed us down further.

Week Ending May 6, 2014

Task
  • 5/1/2014 - Logged on.


Results

Made up a bulleted list of all the stuff the Experiment group did this semester for Pauline and Ray as they will be working on the final report.

Plan


Concerns

Important Items

Below are items I thought were important enough to have their own spot on my logs.

Link for a better formatting guide for MediaWiki: http://www.mediawiki.org/wiki/Help:Formatting

David M on Building a Language Model & Acoustic Model then Running a Decode

There are two scenarios when running a decode. If your Acoustic Model, Language Model, and Decode are all in the same directory, use the current directory number for both arguments:

./run_decode.pl <current_exp_num> <current_exp_num>

We typically use two experiments to separate the LM and decode from the acoustic model. The idea behind this was to keep the two independent, so each could be used by future experiments as needed without any dependency on the other, which is the convention Mike wants us to use.

To run a decode using two experiments (the acoustic model as one, the Language Model/Decode as the other), we create a new experiment. In the new experiment directory, we create symbolic links referencing the original directories in the experiment used to build the acoustic model. The purpose is to save disk resources, as copying these directories would waste a lot of space. This can be done using the following command:

ln -s ../<originalExpID>/* .

From here we can build the Language Model and begin the decode. To decode using this hierarchy, we must first go into the initial experiment we used to build the Acoustic Model and create a symbolic link pointing to the Language Model in our new directory. Then we can go back to our decode experiment and run the decode using the following command:

./run_decode.pl <acoustic_exp_id> <acoustic_exp_id>


It is important to note that we run the decode using the id of the acoustic model experiment (not the Language Model experiment). The log will still be created in the new directory. Hopefully that made sense.