Speech:Spring 2014 Brian Gailis Log

From Openitware
Revision as of 13:32, 11 April 2014 by Btj9 (Talk | contribs)


Week Ending February 4th, 2014

  • 2014.02.01 - Logged in-Details Below
  • 2014.02.02 - Logged in-Details Below
  • 2014.02.03 - Logged in-Details Below
  • 2014.02.04 - Logged in-Details Below
  • Maintain 4 logs
  • Change passwd in Caesar
  • Get SpEAK working locally on my machine
  • Review other logs, offer assistance where needed
  • Prepare notes for next meeting
  • SpEAK page last update 2013.06.22
  1. Its remaining items:
    1. Search function needs work
    2. Admin function - requires implementation
  • Checked out SpEAK code
  1. On my machine locally
    1. Verified TortoiseSVN was installed
    2. Created local download directory
    3. From Windows Explorer, right-clicked the newly created download folder
    4. Selected TortoiseSVN-CheckOut
    5. In the URL of Repository: https://speak.googlecode.com/svn/trunk/
    6. In the Checkout directory: verified path to newly created download folder
    7. Checkout Depth: Fully recursive
    8. Revision: HEAD revision radio button selected
    9. Clicked OK Button
    10. All of SpEAK downloaded to my machine locally
  • Configured my local XAMPP to work with SpEAK
  1. Created backup of httpd.conf file \xampp\apache\conf\2014.02.01.httpd.conf
    1. Edited \xampp\apache\conf\httpd.conf
      1. Added directory entry defining root path of speak/php/index.ini
      2. I use a Windows machine, so I had to change the backslashes to forward slashes in the path name
<Directory "F:/speak/php">
   Options Indexes FollowSymLinks Includes ExecCGI
   AllowOverride All
   Order allow,deny
   Allow from all
</Directory>
  • Added an Alias to access the directory more quickly
  1. Created backup of httpd.conf file \xampp\apache\conf\2014.02.01.httpd.conf
    1. Edited \xampp\apache\conf\httpd.conf
      1. Added an Alias entry allowing my browser to use a simplified URL
    Alias /speak "F:/speak/php"
  • Created the SpEAK database
  1. Started XAMPP
    1. browsed to xampp root directory
    2. started xampp-control.exe
    3. Started Apache
    4. Started MySQL
    5. Created the SpEAK database
    6. Opened command prompt
    7. changed directory to xampp\mysql\bin
    8. logged into mysql mysql.exe -u root -p
    9. From the mysql prompt created the database
      1. SOURCE f:\createspeakuseranddb.sql
    10. Created the database tables
      1. SOURCE f:\createspeaktables.sql
  • Started SpEAK page
  1. Opened Chrome
    1. Specified URL:
      http://localhost:8080/speak/login.php (my apache is configured for port 8080)
    2. SpEAK home page opened
    3. Entered the user name as defined in the createspeakuseranddb.sql
    4. Entered the password as outlined in Josh's log regarding the password
    5. Logged into SpEAK no trouble


  1. Attempted to access SpEAK via caesar.unh.edu
    1. Opened command prompt on my local public machine
    2. Used ping to verify the public IP address for caesar.unh.edu
      1. Response back:
    3. Opened local browser
    4. Entered URL: caesar.unh.edu/speak
      1. Could not connect
    5. Entered URL: caesar.unh.edu
      1. Could not connect
  • It appears that caesar is not accessible via HTTP protocol
    1. Tried the secure HTTPS protocol and that did not work either
  • Logged into caesar via ssh
  1. Followed similar steps as Josh from his 2/1/2014 log but connected to the methusalix machine as noted under
    1. Using the documentation from speech->semester->spring2014->groups->experiment group->Assigned machines are: methusalix & verleihnix
    2. Neither MySQL nor Apache is installed on methusalix
    3. attempted to connect to the machine that Josh's logs identify as miraculix
    4. using the same method as Josh, was able to log in to that machine
    5. navigated the same directories
    6. was able to log in to my FTP server from the miraculix /mnt/main/srv/www/vhosts/speak directory
      1. to answer one of Josh's questions, "How do I add files from my machine to there?"
        1. One method is to use FTP put/get to move stuff
        2. However, we should all be working with the same file set so a better way might be to use the repository and checkin/checkout files when needed...
      2. to answer the other question, "Is the code repo the same?": I don't think it matters because we're all going to use the same code set moving forward, the one identified in Google Code


  • Reviewed the SpEAK documentation further in hope to find other good nuggets
  1. Review from SpEAK home page
    1. Selected Semester
    2. Selected 2012
    3. Reviewed the different areas
      1. Prof Jonas has comments in several pages requesting more information
        1. For Example
        2. Here, he comments "Details of how to access code base and deployment environment can be accessed and how the information is organized would be very helpful..."
    4. The System Design Document identifies more tables than the create db script defines; Josh may have a point regarding a starting point and code repos, might be worth looking further into....
      1. For example,
    5. Curious, there are many references to something called STEM and there is even a public page for it:
      1. http://stem.unh.edu/speak/login.php
      2. Tried to log in but couldn't, and I'm not able to locate credentials for it... very curious, more investigation may be needed here....


  • Reviewed other logs
  1. Spoke with Mike, and we tested google hangout for on-line communication
    1. We also talked about the group not using Google Groups but rather using the wiki group logs
  1. Review previous logs and SpEAK pages
  2. Checkout SpEAK Code
  3. Configure SpEAK locally on my machine
  4. Verify SpEAK works and what it currently does
  1. The code is a mess and poorly written; it could use a lot of cleanup
  2. Documentation is scattered and difficult to follow
  3. Not enough direction for strong collaborative team efforts (As of yet)

Week Ending February 11, 2014

  • 2014.02.08 - Logged in- Details below
  • 2014.02.09 - Logged in- Details below
  • 2014.02.10 - Logged in- Details below
  • 2014.02.11 - Logged in- Details below
  1. Review Speech: Spring 2014 Experiment Group http://foss.unh.edu/projects/index.php/Speech:Spring_2014_Experiment_Group
  2. Review Speech: Training http://foss.unh.edu/projects/index.php/Speech:Training
  3. Review Speech: Spring 2014 Proposal http://foss.unh.edu/projects/index.php/Speech:Spring_2014_Proposal
  4. Review Individual logs
  5. Identify areas of concern and update accordingly (See Concerns Below)
  • 2014.02.08
  • Speech: Spring 2014 Experiment Group
  1. Reviewed the items outlined under Experiment Group Immediate Goals
    1. Followed the bullet point for running a train on an experiment (NOTE: did not actually run a train, just followed the instruction)

  • Speech: Training
  1. Read
    1. Found the outline a bit confusing; it does a good job of giving a step-by-step procedure, but there are few, if any, notes that explain what the user is trying to accomplish
    2. Also, the instructions assume that the user is already logged in, on the proper machine, and at the right location
    3. This is important to note: a beginner is not going to have a clue about any of this stuff and will blindly follow the instructions without understanding what they're doing and why, provided they even get them to work by miraculously beginning at the right place.
  • Speech: Spring 2014 Proposal
  1. Browsed the proposal
    1. Nothing has been defined for the experiment group
  • Review Individual logs
  1. Josh had identified several concerns regarding SpEAK
  2. Both Ray and Pauline are attempting to run trains
  • 2014.02.09
  • Updated the Team's Group Log and added a Team Member Schedule
  • Attempted to run a train
* Environment: Windows Professional 7
* Access: Remote
* Protocol: SSH
* Client Applications: OpenSSH, PuTTY
  1. Steps to run a train:
    1. Opened Windows Command Prompt
    2. Changed Directory to C:\Program Files\OpenSSH\bin
    3. From Command Prompted typed: ssh btj9@caesar.unh.edu
    4. Received error msg stating:
could not create directory /home/bgailis/.ssh
The authenticity of host caesar.unh.edu ( can't be established

RSA key fingerprint is
Are you sure you want to continue connecting (yes/no)?
    1. Typed: yes
    2. Error msg:
Failed to add the host to the list of known hosts (/home/bgailis/.ssh/known_hosts)
    1. Prompted for password btj9@caesar.unh.edu's password:
    2. Entered password
    3. System prompted back with last login and returned me to prompt caesar sp14/btj9
    4. At this point, followed the Steps for running a Train found on http://foss.unh.edu/projects/index.php/Speech:Training
    5. cd /mnt/main/Exp
    6. ls
    7. verified the last known experiment number 0152
    8. mkdir 0153
    9. cd 0153
    10. /mnt/main/root/tools/SphinxTrain-1.0/scripts_pl/setup_SphinxTrain.pl -task 0153
    11. cd etc
    12. Created a back up of the sphinx_train.cfg file before editing
      1. cp sphinx_train.cfg sphinx_train.cfg.bak
      2. ls to verify the bak file was created
    13. vi sphinx_train.cfg
      1. got an error msg stating: E437: terminal capability “cm” required
      2. a consideration: should the OS team install ncurses-term?
    14.  :set nu (this enables line numbers in vi)
    15. Attempted to edit but kept getting strange responses
    16.  :q! (exited vi and reverted to back up)
      1. rm sphinx_train.cfg
      2. cp sphinx_train.cfg.bak sphinx_train.cfg
      3. started researching the issues with VI and editing and during my research Caesar remote host kicked me out..
      4. at this point I abandoned the use of OpenSSH and reverted to PuTTy
      5. PuTTy did not give me the same errors as OpenSSH and offered the color display, which is terrible!
    17. vi sphinx_train.cfg (edits file)
    18.  :set nu (enables line numbers)
    19.  :6 (moves to the 6th line of the file)
    20. l (moved right to "train1")
    21. x (deleted train1)
    22. i (inserted 0153)
    23. ESC (move out of insert)
    24. j (move to down to line 7)
    25. h (moved to start of "/root...")
    26. x (deleted all text between quotes)
    27. i (inserted "/mnt/main/Exp/0153")
    28. h (moved to start of "/root...")
    29. x (deleted all text between quotes)
    30. i (inserted "/mnt/main/Exp")
    31. ESC (move out of insert)
    32.  :80 (move to line 80)
    33. i (move into insert mode, type hash mark at start of line)
    34. ESC (move out of insert)
    35. k (move up one line to line 79)
    36. x (delete the hash at start of line)
    37. ESC (to ensure out of any weirdness)
    38.  :x (to save changes and exit)
    39. UP ARROW (recalls last command at prompt, in this example called vi sphinx_train.cfg)
    40. ESC (to ensure out of any weirdness)
    41.  :1 (verify changes on lines 6 through 8)
    42. ESC
    43.  :q! (to quit)
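The interactive vi session above could also be scripted so the edits are repeatable across experiments. Here is a hedged sketch using sed; the stand-in config file, the $CFG_DB_NAME/$CFG_BASE_DIR variable names, and the substitution patterns are assumptions based on a stock SphinxTrain config and may need adjusting against the real file:

```shell
# Illustrative stand-in config: two of the lines the vi session edits.
# On caesar you would already be in /mnt/main/Exp/0153/etc with the real file.
printf '$CFG_DB_NAME = "train1";\n$CFG_BASE_DIR = "/root/old";\n' > sphinx_train.cfg

cp sphinx_train.cfg sphinx_train.cfg.bak      # backup before editing
# Same edits as the vi session: rename the task and point the base dir at
# the new experiment. Variable names may differ on the real file.
sed -i \
  -e 's/train1/0153/g' \
  -e 's#\$CFG_BASE_DIR = .*#\$CFG_BASE_DIR = "/mnt/main/Exp/0153";#' \
  sphinx_train.cfg
cat sphinx_train.cfg                          # verify the edited lines
```

This avoids the terminal problems hit with vi over OpenSSH entirely, since no interactive editor is involved.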
  1. The instructions for "Generate the transcript and its associated audio-file list." are next
    1. Found these to be extremely confusing so here's what I did
    2. /mnt/main/scripts/user/genTrans6.pl /mnt/main/corpus/switchboard/mini/train 0153
      1. got a ton of error messages stating: "sox FAIL formats: can't open output file wav/temp.wav: No such file or directory
    3. cd .. (had to change back one directory)
    4. /mnt/main/scripts/user/genTrans6.pl /mnt/main/corpus/switchboard/mini/train 0153
      1. File ran without error
    5. cd etc (moving to etc directory)
    6. /mnt/main/scripts/train/scripts_pl/pruneDictionary2.pl 0153_train.trans /mnt/main/corpus/dist/cmudict.0.6d 0153.dic
      1. The previous step for pruneDictionary took forever!!!!
    7. cp -i /mnt/main/root/tools/SphinxTrain-1.0/train1/etc/train1.filler 0153.filler
    8. cp -i /mnt/main/scripts/user/genPhones.csh .
    9. ./genPhones.csh 0153
    10. vi 0153.phone
    11.  :set nu (looked for the line number starting with S; in my test, line 56 started with T and line 55 started with SH. Since SIL needs to be added, a new line 56 is needed for the insert of SIL)
    12. O (inserted a line above the current line and puts user into insert mode)
    13. SIL (inserted the character SIL on line 56)
    14. ESC (escaped out of insert mode)
    15. j (moved down one line, verified order)
    16.  :x (save changes and exit)
    17. cd .. (move back one directory to the exp base directory)
    18. /mnt/main/scripts/train/scripts_pl/make_feats.pl -ctl /mnt/main/Exp/0153/etc/0153_train.fileids
      1. A bunch of .sph file types were created under /mnt/main/Exp/0153/wav/sw*.sph
    19. /mnt/main/scripts/train/scripts_pl/RunAll.pl
      1. Response back: Something failed: (/mnt/main/Exp/0153/scripts_pl/00.verify/verify_all.pl)
      2. The instructions state that most scripts fail the first time and this is normal? That's crap, failure should never be normal....
    20. lynx 0153.html
      1. There were a lot of WARNING messages
    21. Went to the http://foss.unh.edu/projects/index.php/Speech:Training#Issue_1:
      1. This area references the Training not finding words referenced in the transcript within the dictionary file
      2. following the instructions to view the missing words, I was not able to locate the list the instructions reference, but instead read through each log entry
        1. There has to be an easier way... Maybe introduce a Parser?
    22. exited the program as I was at the end of a train and nothing more to do except append to the dictionary followed by rinse and repeat...
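The "Maybe introduce a Parser?" idea above can be sketched in a few lines of shell: pull the WARNING lines out of the train report and tally duplicates instead of reading each entry by eye. The report content below is a stand-in (the real file is the 0153.html produced by RunAll.pl), and the message format is a guess, so the grep pattern would need adjusting to the real log text:

```shell
# Stand-in report; the real file is the 0153.html written by RunAll.pl.
cat > 0153.html <<'EOF'
<p>WARNING: This word: uhhuh was not found in the dictionary</p>
<p>WARNING: This word: uhhuh was not found in the dictionary</p>
<p>WARNING: This word: gonna was not found in the dictionary</p>
EOF

# Strip the tags, keep only the WARNING text, and count repeats so each
# problem shows up once with a tally, most frequent first.
grep -o 'WARNING[^<]*' 0153.html | sort | uniq -c | sort -rn
```

Something like this could replace paging through lynx when all you want is the list of missing words.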

  • 2014.02.10-Reviewed Logs
  • 2014.02.11-Reviewed Logs
  • 2014.02.08-Review Logs
  • 2014.02.09-Try a train
  • 2014.02.10-Review Logs
  • 2014.02.11-Make updates and corrections as identified from other's logs and previous work from the 8th, 9th, and 10th
  • 2014.02.08
  • Speech: Spring 2014 Experiment Group
  1. No concerns with the goals, they follow as identified from Wednesday's meeting (2/5)
    1. One concern thus far: the project's overall direction. I don't see how the next semester is going to push the project forward, but that might be due to my ignorance, as it is still early in the semester to see the ultimate goal
  • Speech: Training
  1. Found the outline a bit confusing; it does a good job of giving a step-by-step procedure, but the instructions assume that the user is already logged in and is on the proper machine at the right location
  • This is an important item to note because a beginner is not going to have a clue about any of this stuff and will blindly follow the instructions.
  • Speech: Spring 2014 Proposal
  1. The Experiment group has been in contact with the Modeling group and there are several open items concerning automation; this may be the reason the Experiment group has yet to contribute to the Semester Proposal, as it is waiting on confirmation from Modeling
  • Review Individual logs
  1. No concerns here as of yet
  2. Pauline and Josh's efforts I think are paying off and will provide the experiment group with good direction moving forward
  1. At step 2 it gives instruction to use the last experiment number +1; however, it does not tell you how to do that. When writing the automation script, this will need to be considered: do a directory read, pull the last known (max) value, and add 1
  2. On step 4 it gives a command and offers little explanation of what's actually occurring. The user must go on blind faith that what they're doing is correct; maybe adding a link to a page that actually explains what's occurring would be useful?
  3. Step 2 of the set up the Sphinx Train Configuration File. It might be a better idea to introduce a backup of sphinx_train.cfg before actually editing it
    1. Creating a backup: cp sphinx_train.cfg sphinx_train.cfg.bak
    2. I know from my own experience with VI, it's not simple and depending on the terminal used, you may not get nice line numbers... GEDIT might be a better solution for those who are more comfortable with a GUI text editor
  4. Using vi I got an error message stating: E437: terminal capability “cm” required
    1. This is caused by the terminal being set to dumb mode; installing additional software will help avoid this error
    2. a consideration: should the OS team install ncurses-term?
      1. yum install ncurses-term
  5. The instructions call for using VI as the editor and give a link for common VI stuff, but for those who have no interest in learning VI, offering some vi hints at each step will save the user time when editing directly
    1. for example, the instructions make reference to editing lines 6 through 8 and 79 & 80, but VI by default does not include line numbers and that feature has to be enabled; including instruction on enabling line numbers first will help with the remaining steps
  6. I found the instructions for "Generate the transcript and its associated audio-file list." very confusing
    1. These instructions make note of things not to do
    2. Then provide examples of what could be done but say don't use them
    3. Then the commands to actually use reference paths that should be replaced, but the instructions don't tell you what those paths are or where/how to get them
    4. The instructions state to define a Corpus subset and give you a location of where to find them and a bunch of words that tell you not to use this or that... very confusing!!
    5. The instruction set "Set up the Sphinx Train Configuration file:" leaves the user in the .../etc directory while the instruction set for "Generate the transcript and its associated audio-file list." requires you to start back one directory at the experiment level; be sure to move back one level before generating the transcript
      1. Perhaps update the instruction set "Set up the Sphinx Train Configuration file:" with a last item to move the user back one directory, which ensures the user starts the next phase at the proper location (i.e. directory)
    6. The "Generate the phone list." instruction should end with a change directory back one "cd .." as it leaves you in .../etc and you need to be in the exp base dir for the next step, Generate Feats data.
    7. The "Start the Train!" instructions end with "Please note: Trains will usually fail the first time executing RunAll.pl!" ... Mmmm. nuff said...
      1. Identifying missing words is painful; we may want to think about a better parser, or possibly find out how the html file is produced and re-engineer that to produce a list instead of gobbledygook....
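As an alternative to installing ncurses-term for the E437 error noted above, a hedged workaround is to point the session at a terminal type the remote host's terminfo database already knows. vt100 is a common safe fallback, though whether caesar has an entry for it is an assumption:

```shell
# Fallback for "E437: terminal capability 'cm' required": switch to a
# terminal type that almost every terminfo database carries. vt100 is a
# guess; any entry with cursor-motion support would do.
export TERM=vt100
echo "TERM is now $TERM"
# vi sphinx_train.cfg   # then retry the edit
```

This needs no software installed, so it might unblock editing while the OS team decides on ncurses-term.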
  • 2014.02.10-Reviewed Logs, no concerns at this time
  • 2014.02.11-Reviewed Logs, no concerns at this time

Week Ending February 18, 2014

  • 2014.02.16 - Logged in- Details below
  • 2014.02.17 - Logged in- Details below
  • 2014.02.18 - Logged in- Details below
  1. Learn the structure of the experiment directory
    1. Log findings under personal log
      1. Where things are
      2. How they are stored
      3. What does sphinx create when it runs a train
  2. Collaborate with group at next meeting regarding findings
  3. Update logs accordingly

  • 2014.02.16
  1. Updated Brian's timeline for:
    1. Speech:Spring 2014 Proposal Group (section)
      1. http://foss.unh.edu/projects/index.php/Speech:Spring_2014_Proposal_Group
  2. Read the Speech Home Page
  • Moved to the Information page
  1. Moved to the Speech Software Functionality
    1. Found this page to be very useful and a good place to start as it identifies all areas involved
  • Server named caesar, running openSUSE 11.3
    • The server has 5 software packages installed:
      • Sphinx Decoder
      • CMU Language Model Toolkit
      • Sphinx Trainer
      • CMU Dictionary
      • SCLITE
  • Moved to the System Software Setup page
    • This page is a review of the OS and the general concerns regarding the OS and its versions
  1. Moved to the Hardware Configuration page
    1. Did not find this page particularly useful as it only discusses the hardware
  2. Moved back to home page
  3. Moved to Experiments page
    1. Selected Experiment 1
    2. Shows a redirect to April 24th Group 1
    3. Selected redirect
    4. Under the Group Log, there is a note from Prof. Jonas stating:
      1. Ah yes, finally we are getting to some really important issues. This is going to be the hard part...teasing out Sphinx dependencies based on their file hierarchy to get it working under ours...this may not be easy.
    5. Although this page was useful in that it identified some trouble areas, I was not able to locate any further direction...
  4. Moved back to the home page
  5. Then to Semesters
  6. Then to 2011
  7. Then to 2011 proposal page
    1. Found this page to be the most useful page so far as it outlines the start of Capstone and what's going on
    2. In the Building models section, information is given revealing the early foundations of the 2013 semester's Experiment group tasks
    3. Some things to note:
      1. Building models requires several steps
      2. Switchboard data needs to be organized into a suitable format
        1. Subsets get created, one of which is a proof of concept, referred to as a Mini set
        2. The remaining subsets are workable baseline sets of models, referred to as a Full set
      3. Sphinx (the software being used for this project) needs to be configured to generate acoustic models in a batch mode while synchronizing all machines in the Caesar stack
      4. The last step is to use Perl scripts to automate the experiment process
  • Switchboard corpus - hundreds of hours of overseas phone conversations in native English
    • This is the data used to generate models during the training phase
  • Switchboard transcriptions - used as a base line to parse words from text transcription file that is then compared to the dictionary file
  • Training - comparison of words pulled from a transcript and compared against a dictionary
  • Mini Switchboard train set - a small portion of an audio file (typically an hour) that is used to create a transcript, the transcript is then "trained" where words are pulled out and compared to a dictionary
  • Full Train Set - 90% of a Switchboard corpus
  • Test set - set of data similar to a train set, except not used during training of a model, used to judge the accuracy of models during decoding
  • Dev set - used to tune the decode
    • Consists of 5% of the full Switchboard corpus not used in creating the train set
    • Dev Mini Test Set - 30 minute subset of the 5% Dev set, used for testing models created during training
  • Eval set - used at the end, against the final result from the Dev set
  • Perl Scripts - data manipulation tools, used to automate the text parsing of Switchboard data
    • Parse transcriptions from Switchboard to Sphinx
    • Call on an application to down sample audio files
    • Generate new experiment directories according to the experiment directory structure
  • CMU Pronunciation Dictionary - speech recognition dictionary, found at www.speech.cs.cmu.edu/cgi-bin/cmudict
  • Scoring - The output from Sphinx compared to an actual transcription of the audio
  • the Sphinx training process - http://www.speech.cs.cmu.edu/sphinxman/scriptman1.html - documentation on the sphinx training system
    • Probably the most useful page I've come across yet, and it isn't in the UNHM Speech wiki.... hmmm....

  1. /SphinxTrain1 - where stuff is installed
  2. Compile SphinxTrain - ./autogen.sh
  3. Tutorial setup - perl scripts_pl/setup_tutorial.pl an4
  4. SphinxBase & Sphinx3 set up - /autogen.sh (SVN), ./configure (Tar Ball)
  5. Compile SPHINX-3 - ./autogen.sh --prefix=`pwd`/build --with-sphinxbase=`pwd`/../sphinxbase (SVN), configure --prefix=`pwd`/build --with-sphinxbase=`pwd`/../sphinxbase (Tar Ball)
  6. Running Trainer - perl scripts_pl/make_feats.pl -ctl etc/an4_train.fileids, perl scripts_pl/RunAll.pl
  7. Decoding it - perl scripts_pl/make_feats.pl -ctl etc/an4_test.fileids, perl scripts_pl/decode/slave.pl
  8. /root/speechtools/SphinxTrain-1.0/time/etc/ - transcriptions (etc/time.transcriptions)
  9. /root/speechtools/SphinxTrain-1.0/time/etc/ - sph files in wav
  10. /root/speechtools/SphinxTrain-1.0/train1/genFileIDs_withoutPath.sh - script to generate the fileids file to generate your features.

  1. You start out by making a new directory (mkdir time)
  2. then you go into that folder.
  3. Once in there you run: perl $SPHINXTRAINDIR/scripts_pl/setup_SphinxTrain.pl -task “name of directory you created”
  4. The main config file is put into etc/sphinx_train.cfg
  5. You will put the wav files that are needed into the wav/ directory that is now in the structure of the new directory you created.
  6. In the file etc/”name of folder”.fileids you put the list of all the wav files that are needed, like wav0001 wav0002 wav0003 etc..
  7. Then in the file etc/”name of folder”.phone you enter all the phone names like AA EE AE etc…
  8. Next you have a word transcription of each file which is under etc/”name of file”.transcription each one must end in (FILEID)
    1. an example is THE TIME IS NOW … (WAV0001)
  9. Next you have a etc/”name of file”.filler This has a list of the filler words and their pronunciation (using the phones)
    1. example is SIL SIL <sil> SIL /NOISE/ +NOISE+
  10. After this we will need a dictionary which will be created from the switchboard files.
  11. Considering we are doing this on our own we will need to use bin/make_dict and etc/”name of file”.transcription
    1. This will then create the files etc/word.known etc/word.unknown
  12. Once we like the dictionary that we created, run the command mv etc/word.known etc/”name of folder”.dic
  13. Then we make the melcep feature files with the command: perl scripts_pl/make_feats.pl -ctl etc/”name of folder”.fileids
  14. Now we can start on the basic perl scripts.
  15. Their results will be put in perl_”name of folder”.html which we can view as things progress.
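Steps 5 and 6 above can be made concrete: lay out the task directory and build the .fileids list from whatever wav files are present, rather than typing the names by hand. "time" is the tutorial task name from step 1; the empty wav files below are stand-ins:

```shell
# Sketch of steps 5-6: create the task layout and generate the .fileids
# list from the wav directory contents. The wav files here are empty
# stand-ins for illustration.
mkdir -p time/wav time/etc
touch time/wav/wav0001.wav time/wav/wav0002.wav

# One base name per line, which is the .fileids format described above.
for f in time/wav/*.wav; do
  basename "$f" .wav
done > time/etc/time.fileids

cat time/etc/time.fileids
```

Generating the list this way keeps the .fileids file in sync with the wav directory automatically.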

  1. /root/speechtools/SphinxTrain-1.0/readme.txt - installation instructions
  2. /root/speechtools/SphinxTrain-1.0/doc/tinydoc.txt - Brief illustration of install process
  3. miraculix:~/decodeFiles - dictionary
  4. http://www.speech.cs.cmu.edu/sphinxman/scriptman1.html - information
  5. http://www.isle.illinois.edu/sst/courses/minicourses/2009/lecture1.pdf - Described process of converting sph to wav.
    1. sox sw02001.sph sw02001.wav trim 0 00:50 - convert a section of the sph file to wav.
    2. sw02001.sph is the sph filename, sw02001.wav is the output filename, 0 is the start time, and 00:50 is the end time
  6. Sphinxtrain config file - specifies two modes of training:
    1. first mode, Sphinxtrain uses only the local machine (Queue::Posix in sphinx_train.cfg)
    2. second mode, Sphinxtrain uses a PBS/Torque queue which is a server that distributes work among multiple computers
      1. require the installation of a PBS/Torque server on one of the systems to distribute work between all of the systems
  7. Torque - is a batch job system (http://www.democritos.it/activities/IT-MC/documentation/newinterface/pages/runningcodes.html & http://www.bc.edu/offices/researchservices/cluster/torqueug.html)
    1. Torque How To - http://wiki.hpc.ufl.edu/doc/TorqueHowto
  8. Carnegie Mellon University tutorial SPHINX system- http://www.speech.cs.cmu.edu/sphinx/tutorial.html
  9. training acoustic models using the Sphinx3 trainer - http://www.speech.cs.cmu.edu/sphinxman/fr4.html
  10. sphinxtrain - feature file
  11. generate the fileID file:
for i in /root/speechtools/SphinxTrain-1.0/train1/wav/*.wav ; do echo ${i%%.wav} | sed 's#^.*/##'; done
# Just pipe the output to a file with the extension fileid and you should be good.
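Related to the sph-to-wav conversion noted in item 5 above, a batch run over a whole directory could look like the following dry-run sketch. The file names are stand-ins, the trim window is carried over from the single-file example, and sox being installed is an assumption; drop the leading echo to execute for real:

```shell
# Dry run: print the sox command that would convert each .sph to .wav,
# trimming to the first 50 seconds as in the single-file example above.
# Remove the leading echo to actually run the conversions (needs sox).
touch sw02001.sph sw02002.sph     # stand-in input files
for f in *.sph; do
  echo sox "$f" "${f%.sph}.wav" trim 0 00:50
done
```

The `${f%.sph}` expansion strips the extension so each output keeps the same base name.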
  1. make_feats.pl - script specifies to use sph files
    1. generates a folder called ___BASE_DIR___
      1. wav folder inside ___BASE_DIR___ folder - all of the sph files
  2. an4 script - doesn't actually generate anything
  3. training files - Phones will be treated as case sensitive.

  1. information on using new models - http://cmusphinx.sourceforge.net/wiki/tutorialsphinx4/doc/UsingSphinxTrainModels.html
    1. to use new models (three steps):
      1. Defining a dictionary and a language model
      2. Defining a model and a model loader
      3. Configure a frontend (optional)
  2. SphinxTrain README.txt - referred the reader to other website
  3. CMU LM switchboard - uses .wav files that are sampled at 8kHz
  4. Sphinx software - requires a .wav file that is sampled at 16kHz
  5. Training Acoustic Model For CMUSphinx - http://cmusphinx.sourceforge.net/wiki/tutorialam?s
  6. etc/feat.params - Configure Sound Feature Parameters
  7. etc/sphinx_decode.cfg - Configure Sound Feature Parameters
    1. Once changes have been made:
      1. copy the sphinx_decode.cfg file from the an4 directory to the Capstone directory
      2. edit the file and change any file names from an4 to the project name
  8. Converting sph audio files to wav - http://library.rice.edu/services/dmc/guides/linguistic/converting-sph-audio-files-to-wav
  9. CMU Switchboard transcription files - are in .sph format

  1. svn on caesar - done using ssh tunneling, a feature that will result in the best overall security for caesar
  2. experiment directory structure - created by Scott Innes on April 3rd, 2011
  3. //media/data/Switchboard - mini training set and a mini development set
  4.  :/home/linux/Documents/timeFiles.pl - find the full length of the data with a high degree of accuracy
    1. accurate subsets of the data can now be constructed for training using this updated length measurement

  1. language model - can be built from both a Vocabulary data set and ID 3-Grams or text can be used to build the Vocabulary data and the ID 3-Grams
  2. change_log.txt - tools
    1. Firstly, the text file is converted to a word frequency file (.wfreq) with the following command: ‘../bin/text2wfreq < change_log.txt > change_log.wfreq’.
      1. contains a list of every word from the file with a count indicating how many occurrences were found
    2. Next, a Vocabulary file is created from the word count file with the following command. ‘../bin/wfreq2vocab < change_log.wfreq > change_log.vocab’
      1. This file contains a list of every unique word from the word frequency file.
    3. Id 3-gram file - the vocabulary or text file is necessary
    4. vocabulary file that was created to make an ID 3-gram file - ‘../bin/text2idngram -vocab change_log.vocab < change_log.txt > change_log.idngram’
      1. seemed to hang on me for more than 10 minutes - ‘Allocating memory for the n-gram buffer’.
  3. Language Model can be built with - ‘../bin/idngram2lm -idngram change_log.idngram -vocab change_log.vocab -binary change_log.binlm’
    1. requires the idngram file and vocab file
  4. ms98_icsi_word.text - contains transcripts from switchboard
  5. Create an ID 3-gram file - http://foss.unh.edu/projects/index.php/Speech:Spring_2011_Nick_Log - Week Ending April 5th, 2011
  6. /media folder - parsing script - to convert the words from the transcriptions into vocab files.
  7. Create a word frequency file - ~/speechtools/CMU-Cam_Toolkit_v2/bin/text2wfreq < trans.text > trans.wfreq
  8. Create a vocab file - ~/speechtools/CMU-Cam_Toolkit_v2/bin/wfreq2vocab < trans.wfreq > trans.vocab
  9. Create an id 3-gram - ~/speechtools/CMU-Cam_Toolkit_v2/bin/text2idngram -vocab trans.vocab -n 3 < trans.text > trans.idngram
    1. specify n gram of 3.
  10. create a language model with arpa and binary format
    1. ~/speechtools/CMU-Cam_Toolkit_v2/bin/idngram2lm -idngram trans.idngram -vocab trans.vocab -arpa trans.arpa
    2. ~/speechtools/CMU-Cam_Toolkit_v2/bin/idngram2lm -idngram trans.idngram -vocab trans.vocab -binary trans.binlm
      1. create a trans folder to contain all the items mkdir trans
      2. make a tarball of the files: tar -cpf trans.tar trans.*
      3. send it to caesar under the media folder: sftp put trans.tar /media
      4. The files are on caesar under /media/data/trans
      5. The language model files are the trans.arp and trans.binlm
  11. /media/data/trans/CreateLanguageModelFromText.perl:
    1. language model script - http://foss.unh.edu/projects/index.php/Speech:Spring_2011_Nick_Log - Week Ending April 19th, 2011
    2. To execute: perl CreateLanguageModelFromText.perl inFile outFile
    3. ParseTranscript.perl has to be in the same directory as this script
      1. This script will need to be changed or branched to receive a vocab file
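The wfreq/vocab steps above can be mimicked with standard Unix tools to see what each intermediate file holds (a sketch only; the real text2wfreq/wfreq2vocab binaries add their own headers and ordering):

```shell
#!/bin/sh
# Build a toy transcript, then derive a word-frequency file and a
# vocabulary file, mimicking the shape of text2wfreq/wfreq2vocab output.
printf 'the cat sat\nthe cat ran\n' > change_log.txt

# word-frequency file: "word count" pairs, like text2wfreq produces
tr ' ' '\n' < change_log.txt | grep -v '^$' | sort | uniq -c \
  | awk '{print $2, $1}' > change_log.wfreq

# vocabulary file: one unique word per line, like wfreq2vocab produces
awk '{print $1}' change_log.wfreq | sort > change_log.vocab

cat change_log.vocab
```

Running this shows four unique words (cat, ran, sat, the), with "the" and "cat" counted twice in the .wfreq file.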

  1. sphinx dictionary - http://www.speech.cs.cmu.edu/sphinx/models/hub4opensrc_jan2002/cmudict.06d
  2. the "dictionary" directory on idefix
    1. original transcripts - ms98_icsi_word.text
    2. dict.perl - is the perl script
    3. uniq_words.txt - the list of unique words from the original transcripts.
    4. create_uniq_words.sh - bash shell script - used to create "uniq_words.txt"
  1. Given the nature of the work we are doing... it seems that a convenient way would be to use a web application that uses PHP and SQL
    1. Mike-jonas 14:30, 2 April 2011 (UTC): not working with a database but with a directory structure containing data
  2. http://www.perl.org - great starting point to get to know PERL
  3. Brian (bmq9) wrote a script and put up on Foss that - Takes the transcript, parses a line, creates a directory for audio/transcript and pulls out the audio for that conversation
    1. The experiment directory layout is a little different from it, but shouldn't be that hard to fix. It then parses and tries to fix that piece of the conversation. It's written in Perl
  4. perldoc.perl.org - reference documentation
  5. "Learning Perl" from Safari books - pretty basic
    1. use the chmod unix command to have the system recognize the script as a program
  6. ExpDir.pl - ExpDir.pl onto methusalix, perl ExpDir.pl runs it..... WORTH REVIEWING THIS FILE?

  1. Wrote training guide - uploading files to foss ????
  2. the language model does not rely on the dictionary
  3. sclite under sctk - Speech Recognition Scoring Toolkit (SCTK) Version 2.4.0 - http://www.itl.nist.gov/iad/mig//tools/
  4. the language model - is where the probability of a specific word occurring in a specific situation is set up (a certain word is more likely to occur after a word than another word would be)
  5. language model information: http://en.wikipedia.org/wiki/Language_model
    1. That document appears to explain what each of the executables do.
  6. convert text to a word frequency file - text2wfreq filename.txt
  7. create a vocab file from word count - wfreq2vocab wfreqfile.wfreq
  8. create 3-gram file using vocab or text - text2idngram -vocab vocab file > filename.idngram
  9. build language model - idngram2lm -idngram idngramfile.idngram -vocab vocabfile.vocab -binary filename.binlm
  10. languageModel/etc - copied stripped transcript
  11. to generate wfreq - text2wfreq < transcript.text > transcript.wfreq - Appears to be a count of the number of times that each word appears in the text.
  12. to generate vocab - wfreq2vocab < transcript.wfreq > transcript.vocab - Appears to be a dictionary of the words in the text
  13. to generate idngram - text2idngram -vocab transcript.vocab -idngram transcript.idngram
    1. text2idngram - text2idngram -vocab transcript.vocab -idngram transcript.idngram < transcript.text
  14. language model - idngram2lm -idngram transcript.idngram -vocab transcript.vocab -binary transcript.binlm
  15. Attempted a decode
    1. Copied the language model up to etc
    2. command: sphinx3_decode -hmm model_parameters/train1.cd_cont_1000/ -lm etc/transcript.binlm -dict etc/train1.dic -fdict etc/train1.filler -ctl etc/train1_train.fileids -cepdir wav -cepext .sph > ~/decodeOutput.txt
  16. decode script: see the decode section of Brian Avery's Summer 2001 log towards the bottom, lots of detail here on the script and decoding
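The decode command in item 15 is long; a small wrapper makes each flag explicit. The paths are copied from the notes above, and the script only prints the assembled command unless DRY_RUN=0 on a machine with Sphinx-3 installed:

```shell
#!/bin/sh
# Assemble the sphinx3_decode command line from named variables so
# each flag is documented; set DRY_RUN=0 only where sphinx3 exists.
HMM=model_parameters/train1.cd_cont_1000/   # acoustic model directory
LM=etc/transcript.binlm                     # binary language model
DICT=etc/train1.dic                         # pronunciation dictionary
FDICT=etc/train1.filler                     # filler dictionary
CTL=etc/train1_train.fileids                # list of utterances to decode
CEPDIR=wav                                  # directory holding the audio
CEPEXT=.sph                                 # audio file extension

CMD="sphinx3_decode -hmm $HMM -lm $LM -dict $DICT -fdict $FDICT \
-ctl $CTL -cepdir $CEPDIR -cepext $CEPEXT"

if [ "${DRY_RUN:-1}" -eq 1 ]; then
  echo "$CMD"                  # show what would run
else
  $CMD > ~/decodeOutput.txt    # capture the decode output as in the notes
fi
```

The dry-run default makes it safe to sanity-check the flag set before tying up a machine.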

  1. Nothing really useful here, talks mostly about what he did with NFS Mount but not much detail on anything

  1. Well documented notes on training, step by step procedure
    1. Most of the procedure here has since been updated; however, it's a good place to learn a bit about what Capstone is all about
  1. Reviewed Speech:Spring 2014 Experiment Group
    1. http://foss.unh.edu/projects/index.php/Speech:Spring_2014_Experiment_Group
  2. Read through the goals of the group and determined that I'm ignoring anything regarding scripts and automation at this time, as it has nothing to do with this week's immediate goal of learning the experiment directory.
  • 2014.02.18

  • 2014.02.16
  1. Review of Speech:Spring 2014 Proposal Group
    1. http://foss.unh.edu/projects/index.php/Speech:Spring_2014_Proposal_Group
  2. Update logs
  • 2014.02.17
  1. Review of Speech:Spring 2014 Experiment Group
    1. http://foss.unh.edu/projects/index.php/Speech:Spring_2014_Experiment_Group
  2. Read through and documented all of the efforts of the 2011 year, see results above
  3. Update logs
  • 2014.02.18
  1. Update logs
  • 2014.02.16
  1. A lot of time passes between class and group discussions which makes working this project difficult.
  2. Seems each week our direction changes and we lose focus.
  3. Finding information is difficult, as there is no flow to the Speech site and a user stumbles through it without direction or purpose
  • 2014.02.17
  1. The Speech:Spring 2014 Experiment Group page, which identifies the group's goals, seems to focus heavily on scripts and automation. This may be important to the overall goal but is not too helpful to meet our immediate needs, such as defining the environment of an experiment: the who, what, when and where of an experiment. To me, this page is a bit misleading towards that goal; perhaps our group could discuss it further?
  2. The 2011 year has lots of good foundation information and produced most of the first-gen scripts; however, for whatever reason most of the scripts shown in their wikis lack locations and file names. Perhaps most or all of this work has been replaced, but it would have been nice to see the names of the files. Recommendation: ask Prof. Jonas to enforce a template when posting script files that includes: script file name, location, and purpose.
  • 2014.02.18
  1. No concerns

Week Ending February 25, 2014

Carnegie Mellon University: http://www.speech.cs.cmu.edu/sphinx/tutorial.html
Virtual Box https://www.virtualbox.org/wiki/Downloads
OpenSUSE: http://software.opensuse.org/131/en
Perl: http://www.perl.org/
Wiki-markup (syntax): http://en.wikipedia.org/wiki/Help:Wiki_markup
FileZilla: https://filezilla-project.org/
Google SpEAK repository (deprecated): https://speak.googlecode.com/svn/trunk/
Xampp: http://www.apachefriends.org/index.html
Converting sph audio files to wav:
  1. http://www.isle.illinois.edu/sst/courses/minicourses/2009/lecture1.pdf
  2. http://library.rice.edu/services/dmc/guides/linguistic/converting-sph-audio-files-to-wav
  1. http://www.bc.edu/offices/researchservices/cluster/torqueug.html
  2. http://www.democritos.it/activities/IT-MC/documentation/newinterface/pages/runningcodes.html
  3. http://wiki.hpc.ufl.edu/doc/TorqueHowto
Carnegie Mellon University:training acoustic models using the Sphinx3 trainer: http://www.speech.cs.cmu.edu/sphinxman/fr4.html
CMUSphinx Wiki: http://cmusphinx.sourceforge.net/wiki/
Training Acoustic Model For CMUSphinx: http://cmusphinx.sourceforge.net/wiki/tutorialam?s
sphinx dictionary: http://www.speech.cs.cmu.edu/sphinx/models/hub4opensrc_jan2002/cmudict.06d
sclite under sctk - Speech Recognition Scoring Toolkit (SCTK) Version 2.4.0: http://www.itl.nist.gov/iad/mig//tools/
language model information: http://en.wikipedia.org/wiki/Language_model

Requirements: Parameters: Description: URL:
Host Machine: MacBook Pro, 15-inch mid 2012
  • Proc: 2.3GHz Intel Core i7
  • Mem: 4GB 1600MHz DDR3
  • Graphics: Intel HD Graphics 4000 1024MB
  • Software: OS X 10.9.1 (13B42)
Virtual Environment: Virtual Box Version 4.3.6-91406 https://www.virtualbox.org/wiki/Downloads
Operating System: OpenSUSE v13.1 (32 Bit PC) http://software.opensuse.org/131/en
Software: Perl, a highly capable, feature-rich programming language http://www.perl.org/

Software Description URL:
The Trainer
The trainer source code
The acoustic signals
The corresponding transcript file
A language dictionary
A filler dictionary
The Decoder
The decoder source code
The language dictionary
The filler dictionary
The language model
Test data

Perl http://software.opensuse.org/package/perl?search_term=%22perl%22
  1. Click the Direct Install
  2. Follow the on screen prompts
  3. Open Terminal type: perl -v
C Compiler gcc
  1. Open Terminal
  2. Change to su
  3. zypper install gcc
  1. Open Terminal
  2. Change to su
  3. zypper install gcc-c++
  1. Open Terminal
  2. Change to su
  3. zypper install make
Word Alignment sclite
sctk-2.4.0-20091110-0958.tar.bz2 (MD5) http://www.itl.nist.gov/iad/mig//tools/
SCTK Basic Installation for Version 2.4.1
  1. Open Terminal
  2. Change directory to where the file has been extracted to
  3. vi INSTALL
  4. Read the Install instructions or, if you're lazy:
    1. make config
    2. make all
    3. make check
    4. make install
    5. make doc
    6. Define the path variable by:
      1. pwd (to verify present working directory)
      2. locate the bin directory and change directory to it
      3. once in the bin directory, run pwd again to verify the path
      4. Now type the following:
        1. PATH=$PATH:<the result of pwd go here>
        2. echo $PATH
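The PATH steps in 6.1-6.4 can be collapsed into a few lines and verified in place (the sctk location below is a placeholder; substitute the bin path that pwd reported):

```shell
#!/bin/sh
# Append a tools bin directory to PATH for the current session.
# $HOME/sctk/bin is a stand-in for the real install's bin directory.
SCTK_BIN="$HOME/sctk/bin"
mkdir -p "$SCTK_BIN"          # stand-in for the actual install dir
PATH="$PATH:$SCTK_BIN"
export PATH

# Verify the directory is now on the search path.
case ":$PATH:" in
  *":$SCTK_BIN:"*) echo "on PATH" ;;
  *)               echo "missing" ;;
esac
```

Note this only lasts for the current shell; to make it permanent, append the same PATH line to the shell profile (e.g. ~/.profile).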
Setting up the data
Create and change to System Directory
  1. Open Terminal
  2. Change to the directory of your choice
    1. pwd
    2. ls
    3. cd <directory path>
  3. mkdir tutorial
  4. cd tutorial
Download audio tarball to system directory http://www.speech.cs.cmu.edu/databases/an4/an4_sphere.tar.gz
To download and save tarball
  1. Open browser
  2. Browse to: http://www.speech.cs.cmu.edu/databases/an4/an4_sphere.tar.gz
  3. Select SaveAs
  4. Save to tutorial folder
Extract files to tutorial folder
  1. Open Terminal
  2. Change to the tutorial directory
  3. pwd (to verify present working directory)
  4. ls (to display contents of tutorial directory; there should be a single file, an4_sphere.tar.gz)
  5. gunzip -c an4_sphere.tar.gz | tar xf -
  6. ls (to display contents of tutorial directory; there should now be an an4 directory and the an4_sphere.tar.gz file)
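The extraction idiom in step 5 works on any .tar.gz; here it is demonstrated end to end with a tarball built on the spot (an4_sphere.tar.gz itself still has to be downloaded as described above):

```shell
#!/bin/sh
# Create a tiny tarball, then extract it exactly as in step 5 above.
mkdir -p demo/an4
echo "sample" > demo/an4/README
tar -czf demo_sphere.tar.gz -C demo an4   # stand-in for an4_sphere.tar.gz

# The extraction idiom from the tutorial:
gunzip -c demo_sphere.tar.gz | tar xf -

ls an4    # the extracted directory now exists alongside the tarball
```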
Setting up the trainer: Code retrieval
There are two options for retrieving the code:
  • subversion (svn)
* svn requires that svn be installed and is the preferred choice as the code is always the most recent version
  • tarball
* tarball is more readily available
This exercise will be using tarball
  1. Open browser
  2. Browse to: http://cmusphinx.org/download/nightly/SphinxTrain.nightly.tar.gz
  3. Select SaveAs
  4. Save the file to the tutorial directory
Extracting the SphinxTrain tarball
  1. Open terminal
  2. pwd (to verify present working directory)
  3. cd tutorial (Change directory to tutorial directory)
  4. ls (to verify the contents of the directory; you should see the file: SphinxTrain.nightly.tar.gz)
  5. gunzip -c SphinxTrain.nightly.tar.gz | tar xf -
  6. ls (to verify the contents of the directory, you should see the file(s):
    1. an4 (directory)
    2. an4_sphere.tar.gz
    3. SphinxTrain (directory)
    4. SphinxTrain.nightly.tar.gz
Compilation of SphinxTrain
  1. Open terminal
  2. pwd (to verify present working directory)
  3. Change directory to tutorial directory
  4. ls (to verify the contents of the directory; you should see the file: SphinxTrain.nightly.tar.gz)
  5. cd SphinxTrain (change directory from tutorial to SphinxTrain)
  6. ./configure
  7. make
Tutorial Setup: copy all relevant executables and scripts to the same area as the data
Setup the Tutorial by:
  1. Open terminal
  2. pwd (to verify present working directory)
  3. cd tutorial (Change directory to tutorial directory)
  4. cd SphinxTrain (Change directory to SphinxTrain directory)
  5. perl scripts_pl/setup_tutorial.pl an4
  6. cd ../an4
  7. perl scripts_pl/make_feats.pl -ctl etc/an4_train.fileids
  8. perl scripts_pl/RunAll.pl
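Since steps 5-8 must succeed in order, a driver script that logs each step and aborts on the first failure is safer than typing them one by one. The perl commands in the comment are taken from the steps above; the executable part uses placeholders so the sketch runs anywhere:

```shell
#!/bin/sh
set -e   # abort the chain as soon as any step fails

run() {
  echo "==> $*"   # log each step before it runs
  "$@"
}

# The real sequence would be (commands from the steps above):
#   run perl scripts_pl/setup_tutorial.pl an4
#   cd ../an4            (cd must be in the script itself, not via run)
#   run perl scripts_pl/make_feats.pl -ctl etc/an4_train.fileids
#   run perl scripts_pl/RunAll.pl
# Demonstrated here with placeholder commands:
run true
run echo "training pipeline would start here"
```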
Setting up the decoder: SPHINX-3 uses continuous HMMs. It can handle both live and batch decoding. Currently, it is the decoder most actively developed
SPHINX-3 tarball download
  1. Open Browser
  2. Browse to: http://www.speech.cs.cmu.edu/sphinx/tutorial.html#sphinx3tarball
  3. Select Save As
  4. Save the file to the tutorial directory
  5. Browse to: http://cmusphinx.org/download/nightly/sphinxbase.nightly.tar.gz
  6. Select Save As
  7. Save the file to the tutorial directory
Extract SPHINX-3 tarball
  1. Open terminal
  2. pwd (to verify present working directory)
  3. cd tutorial (Change directory to tutorial directory)
  4. gunzip -c sphinxbase.nightly.tar.gz | tar xf -
  5. gunzip -c sphinx3.nightly.tar.gz | tar xf -
  6. ls
    1. an4
    2. an4_sphere.tar.gz
    3. SphinxTrain
    4. SphinxTrain.nightly.tar.gz
    5. sphinx3
    6. sphinx3.nightly.tar.gz
    7. sphinxbase
    8. sphinxbase.nightly.tar.gz
Compilation: Compile sphinxbase
To compile sphinxbase
  1. Open terminal
  2. pwd (to verify present working directory)
  3. cd tutorial (Change directory to tutorial directory)
  4. cd sphinxbase
  5. ./configure
  6. make
To compile SPHINX-3
  1. Open terminal
  2. pwd (to verify present working directory)
  3. cd tutorial (Change directory to tutorial directory)
  4. cd sphinx3
  5. ./configure --prefix=`pwd`/build --with-sphinxbase=`pwd`/../sphinxbase (NOTE: the characters around pwd are backquotes, typed with the ~ key)
  6. make
  7. make install
Tutorial Setup: set up the tutorial by copying all relevant executables and scripts to the same area as the data
  1. Open terminal
  2. pwd (to verify present working directory)
  3. cd tutorial (Change directory to tutorial directory)
  4. cd sphinx3
  5. perl scripts/setup_tutorial.pl an4
  6. cd ../an4
  7. perl scripts_pl/make_feats.pl -ctl etc/an4_test.fileids (transformation (or parameterization) of signals into a sequence of feature vectors, which are used in place of the actual acoustic signals)
  8. perl scripts_pl/decode/slave.pl
Perform preliminary training: run multiple CPUs
edit sphinx_train.cfg
  1. Open terminal
  2. pwd (to verify present working directory)
  3. vi etc/sphinx_train.cfg
  4.  :set nu (to enable line numbers)
  5. j (to move down a line; move to line 168)
* you can also type  :168 to jump to that line
  1. Set line to read:
* $CFG_QUEUE_TYPE = "Queue";
    1. type l to move right
    2. type i to insert at the proper location
    3. type ::POSIX";
    4. the edited line should now appear as: $CFG_QUEUE_TYPE = "Queue::POSIX";
    5. hit ESC key to exit insert
    6. type :x followed by enter to save changes and exit
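The same Queue -> Queue::POSIX edit can be made non-interactively with sed; the sketch below works on a stand-in file (on a real setup, back up etc/sphinx_train.cfg first and operate on that):

```shell
#!/bin/sh
# Reproduce the Queue -> Queue::POSIX edit without opening vi.
# A stand-in config line is created here for demonstration.
printf '$CFG_QUEUE_TYPE = "Queue";\n' > sphinx_train.cfg.demo

# Portable in-place edit: write to a temp file, then move it back.
sed 's/"Queue"/"Queue::POSIX"/' sphinx_train.cfg.demo > tmp.cfg \
  && mv tmp.cfg sphinx_train.cfg.demo

cat sphinx_train.cfg.demo
```

After running, the file contains $CFG_QUEUE_TYPE = "Queue::POSIX"; exactly as the vi steps produce.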
Preliminary decode: Compute MFCCs from the .wav files
From the top level directory, an4, type:
  1. perl scripts_pl/make_feats.pl -ctl etc/an4_test.fileids
  2. perl scripts_pl/decode/slave.pl (~10min run time)
  3. The output will display the error rates
How to Train: Training requires a working directory other than the source directory; in Capstone, this is referred to as the experiment directory
Create an experiment directory
  1. mkdir /home/capstone/exp/0001
Copy setup files to the exp/0001 directory
  1. cd /home/capstone/tutorial/an4
  2. perl scripts_pl/copy_setup.pl -task /home/capstone/exp/0001
Parameterize the training database
Determine which sound units require training: Language Dictionaries
Dictionary files:
  1. /etc/<experiment directory>.dic
  2. /etc/<experiment directory>.filler
  3. /etc/<experiment directory>.phone
  4. Explanation: http://www.speech.cs.cmu.edu/sphinx/tutorial.html#app1
Change Training parameters: The following files are referenced during the training process
File names:
  1. etc/sphinx_train.cfg
$CFG_DICTIONARY training dictionary with full path (do not change if you have decided not to change the dictionary)
$CFG_FILLERDICT your filler dictionary with full path (do not change if you have decided not to change the dictionary)
$CFG_RAWPHONEFILE your phone list with full path (do not change if you have decided not to change the dictionary)
$CFG_HMM_TYPE this variable could have the values .semi. or .cont.. Notice the dots "." surrounding the string. Use .semi. if you are training semi-continuous HMMs, mostly for Pocketsphinx, or .cont. if you are training continuous HMMs (required for SPHINX-4, and the most common choice for SPHINX-3)
$CFG_STATESPERHMM it could be any integer, but we recommend 3 or 5. The number of states in an HMMs is related to the time-varying characteristics of the sound units. Sound units which are highly time-varying need more states to represent them. The time-varying nature of the sounds is also partly captured by the $CFG_SKIPSTATE variable that is described below.
$CFG_SKIPSTATE set this to no or yes. This variable controls the topology of your HMMs. When set to yes, it allows the HMMs to skip states. However, note that the HMM topology used in this system is a strict left-to-right Bakis topology. If you set this variable to no, any given state can only transition to the next state. In all cases, self transitions are allowed. See the figures in http://www.speech.cs.cmu.edu/sphinx/tutorial.html#app2 for further reference. You will find the HMM topology file, conveniently named <exp_num>.topology, in the directory called model_architecture/ in your current base directory (<exp_num>).
$CFG_FINAL_NUM_DENSITIES if you are training semi-continuous models, set this number, as well as $CFG_INITIAL_NUM_DENSITIES, to 256. For continuous, set $CFG_INITIAL_NUM_DENSITIES to 1 and $CFG_FINAL_NUM_DENSITIES to any number from 1 to 8. Going beyond 8 is not advised because of the small training data set you have been provided with. The distribution of each state of each HMM is modeled by a mixture of Gaussians. This variable determines the number of Gaussians in this mixture. The number of HMM parameters to be estimated increases as the number of Gaussians in the mixture increases. Therefore, increasing the value of this variable may result in less data being available to estimate the parameters of every Gaussian. However, increasing its value also results in finer models, which can lead to better recognition. Therefore, it is necessary at this point to think judiciously about the value of this variable, keeping both these issues in mind. Remember that it is possible to overcome data insufficiency problems by sharing the Gaussian mixtures amongst many HMM states. When multiple HMM states share the same Gaussian mixture, they are said to be shared or tied. These shared states are called tied states (also referred to as senones). The number of mixtures you train will ultimately be exactly equal to the number of tied states you specify, which in turn can be controlled by the $CFG_N_TIED_STATES parameter described below.
$CFG_N_TIED_STATES set this number to any value between 500 and 2500. This variable allows you to specify the total number of shared state distributions in your final set of trained HMMs (your acoustic models). States are shared to overcome problems of data insufficiency for any state of any HMM. The sharing is done in such a way as to preserve the "individuality" of each HMM, in that only the states with the most similar distributions are tied. The $CFG_N_TIED_STATES parameter controls the degree of tying. If it is small, a larger number of possibly dissimilar states may be tied, causing reduction in recognition performance. On the other hand, if this parameter is too large, there may be insufficient data to learn the parameters of the Gaussian mixtures for all tied states. (An explanation of state tying is provided in http://www.speech.cs.cmu.edu/sphinx/tutorial.html#app3). If you are curious, you can see which states the system has tied for you by looking at the ASCII file <exp_num>/model_architecture/<exp_num>.$CFG_N_TIED_STATES.mdef and comparing it with the file <exp_num>/model_architecture/<exp_num>.untied.mdef. These files list the phones and triphones for which you are training models, and assign numerical identifiers to each state of their HMMs.
$CFG_CONVERGENCE_RATIO set this to a number between 0.1 and 0.001. This number is the ratio of the difference in likelihood between the current and the previous iteration of Baum-Welch to the total likelihood in the previous iteration. Note here that the rate of convergence is dependent on several factors such as initialization, the total number of parameters being estimated, the total amount of training data, and the inherent variability in the characteristics of the training data. The more iterations of Baum-Welch you run, the better you will learn the distributions of your data. However, the minor changes that are obtained at higher iterations of the Baum-Welch algorithm may not affect the performance of the system. Keeping this in mind, decide on how many iterations you want your Baum-Welch training to run in each stage. This is a subjective decision which has to be made based on the first convergence ratio which you will find written at the end of the log file for the second iteration of your Baum-Welch training (<exp_num>/logdir/0*/<exp_num>.*.2.norm.log). Usually, 5-15 iterations are enough, depending on the amount of data you have. Do not train beyond 15 iterations. Since the amount of training data is not large, you will over-train the models to the training data.
$CFG_NITER set this to an integer between 5 and 15. This limits the number of iterations of Baum-Welch to the value of $CFG_NITER.
  • Once you have made all the changes desired, you must train a new set of models. You can accomplish this by re-running all the slave*.pl scripts from the directories <exp_num>/scripts_pl/00* through <exp_num>/scripts_pl/09*, or simply by running perl scripts_pl/RunAll.pl.
  • The above will create a new setup by rerunning the SphinxTrain setup, then rerunning the decoder setup using the same decoder as used by the originating setup. It then copies the configuration files, which are located under etc, to the new setup, with the file names matching the new task's.
  • The copy_setup.pl script also copies the data, located under feat and wav, to the new location. If the dataset is large, the duplication may waste disk space. Editing the script to create a symbolic link to the data is a better option, but symbolic linking is not supported in all operating systems; for example, Windows does not support it.
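On a Unix host the symbolic-link alternative mentioned above looks like this (directory names are placeholders standing in for the old and new experiment setups):

```shell
#!/bin/sh
# Link the feature data into a new experiment directory instead of
# copying it, so a large corpus is not duplicated on disk.
mkdir -p exp_old/feat exp_new
echo "mfcc" > exp_old/feat/utt1.mfc        # stand-in feature file

ln -s "$(pwd)/exp_old/feat" exp_new/feat   # link, not copy

# The link resolves to the original data:
cat exp_new/feat/utt1.mfc
```

The absolute path in ln -s keeps the link valid no matter where exp_new is later accessed from.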

How to decode, and key decoding issues
Setting Definition
The first step in decoding is to compute the MFCC features for your test utterances
You may change decoder parameters, affecting the recognition results, by editing the file etc/sphinx_decode.cfg in tutorial/<exp_num>/
$DEC_CFG_DICTIONARY the dictionary used by the decoder. It may or may not be the same as the one used for training. The set of phones has to be contained in the set of phones from the trainer dictionary. The set of words can be larger. Normally, though, the decoder dictionary is the same as the trainer one, especially for small databases.
$DEC_CFG_FILLERDICT the filler dictionary.
$DEC_CFG_GAUSSIANS the number of densities in the model used by the decoder. If you trained continuous models, the process of training creates intermediate models where the number of Gaussians is 1, 2, 4, 8, etc, up to the total number you chose. You can use any of those in the decoder. In fact, you are encouraged to do so, so you get a sense of how this affects the recognition accuracy. You are encouraged to find the best number of densities for databases with different complexities.
$DEC_CFG_MODEL_NAME the model name. It defaults to using the context dependent (CD) tied state models with the number of senones and number of densities specified in the training step. You are encouraged to also use the CD untied and also the context independent (CI) models to get a sense to how accuracy changes.
$DEC_CFG_LANGUAGEWEIGHT the language weight. A value between 6 and 13 is recommended. The default depends on the database that you downloaded. The language model and the language weight are described in http://www.speech.cs.cmu.edu/sphinx/tutorial.html#app4. Remember that the language weight decides how much relative importance you will give to the actual acoustic probabilities of the words in the hypothesis. A low language weight gives more leeway for words with high acoustic probabilities to be hypothesized, at the risk of hypothesizing spurious words.
$DEC_CFG_ALIGN the path to the program that performs word alignment, or builtin, if you do not have one.
You may decode several times with changing the variables above without re-training the acoustic models, to decide what is best for you.
The script scripts_pl/decode/slave.pl already computes the word or sentence accuracy when it finishes decoding. It will add a line to the top level .html page that looks like the following if you are using NIST's sclite.
Term Definition
Capstone You are expected to train the SPHINX system using all the components provided for training. The trainer will generate a set of acoustic models. You are expected to use these acoustic models and the rest of the decoder components to recognize what has been said in the test data set. You are expected to compare your recognition output to the "correct" sequence of words that have been spoken in the test data set (these will also be given to you), and find out the percentage of errors you made (the word error rate, WER, or the sentence error rate, SER).

In the course of training the system, you are encouraged to use what you know about HMM-based ASR systems to manipulate the training process or the training parameters in order to achieve the lowest error rate on the test data. You may also adjust the decoder parameters for this and study the recognition outputs to re-decode with adjusted parameters, if you wish.

At the end of Capstone you should be able to answer the following question: What is your word or sentence error rate, what did you do to achieve it, and why?

HMM-based speech recognition functions by first learning the characteristics (or parameters) of a set of sound units, and then using what it has learned about the units to find the most probable sequence of sound units for a given speech signal
Experiment decoding an audio file where the system interprets the sounds and presents stats on two main items: Word Error Rate (WER) and Sentence Error Rate (SER)
Training process of learning about the sound units
Decoding using the knowledge acquired to deduce the most probable sequence of units in a given signal
Trainer learns the parameters of the models of the sound units using a set of sample speech signals to create the training database
  • Requires you to tell it what you want it to learn (the parameters)
  • The sequence in which they occur in every speech signal in the training database
Transcript file information provided to the trainer, in the sequence of words and non-speech sounds exactly as they occurred in a speech signal, followed by a tag tying it to the signal
Dictionary maps every word to a sequence of sound units, used to derive the sequence of sound units associated with each signal
Transcripts/Language dictionary a single file containing the text of the spoken English words
Filler dictionary non-speech sounds found in the source audio
Decoder a set of programs that perform the recognition task given a set of inputs
Utterance each file within the database used for training
cepview tool used to view the cepstra in any file
mk_mdef_gen tool used to count occurrence frequencies in the data

  1. Define common terms

  1. Read through and follow instructions from the Carnegie Mellon University: http://www.speech.cs.cmu.edu/sphinx/tutorial.html site
  2. This site really explains what Capstone is all about and gives detailed explanation into everything
  1. Define common terms in the structure of a table as it's easier than bullet points
  2. Create a virtual machine (vm) on my local machine
  3. Install OpenSUSE on my vm machine
  4. Follow the Carnegie Mellon University instructions for completing an experiment locally on my own machine
  1. Disorganization and lots of it
  2. Finding even my own logs to be overwhelming, need to be better organized

Week Ending March 4, 2014

  • 2014.03.02 - Logged in to review other's activity
  • 2014.03.03 - Logged in to review the progress on the Master Script as well as SpEAK
  • 2014.03.04 - Logged in, reviewed logs
  • Work on defining content
  • Update areas as needed.
  • Looking forward to seeing the work Josh has done with the Master Script
  • The rest of the team is making headway so all is well with the world!
  • This week I was not able to contribute anything and sorry to my team for that!
  • Read additional logs and post important related material in one location

  • Wasn't able to do much of anything this week due to other priorities
  • Will need to devote double time next week

Week Ending March 18, 2014





  • SPRING BREAK and during this time will be on vacation from university projects

Week Ending March 25, 2014

  • 2014.03.22 - Logged in Updated Tasks
  • 2014.03.23 - Logged in Reviewed Team Logs
  • 2014.03.24 - Logged in Reviewed Josh Anderson's log specifically for how to use his script
  • 2014.03.25 - Logged in and attempted the master script again

  • 2014.03.22

1. Update task assignments for this week

  • 2014.03.23

2. Review other team members logs

  • 2014.03.24

3. Run an experiment using the newly created script file written by Josh Anderson (Spring 2014 Semester)

  • 2014.03.25

4. Try running another experiment after correcting errors

  • 2014.03.22

1. was able to update all areas of my log, Tasks, Plan, and Concerns

  • 2014.03.23

1. reviewed other Experiment Group logs to figure out where they are and how I could contribute

Week Ending March 25, 2014
Team Member: Plan:
Josh Anderson Plans to:
  1. Run a full experiment
  2. Check with Experiment Group Members
  3. Work with Colby Johnson on corpus Senone/Density values
  4. Work with Exp Grp regarding presentation and clarification
Raymond Whitman Plans to:
  1. run experiments
Pauline Wilk Plans to:
  1. run experiments
  2. work with SpEAK
  3. update wiki: clarify topics, add content
  • The group seems well on their way performing individual tasks, however both Josh and Pauline have mentioned content additions and clarifications. The group may need to get together to establish a unified plan on presentation. We should probably get together on this at the end of the next class?
  • 2014.03.24
  • The following was performed from the following environment:
Parameter Description
Connection Type: Remote
Machine: x64
Operating System: Windows 7 Professional
Client Application: Putty SSH
  • The following is the procedure for running an experiment from the master_run_train.pl script
Running Experiment From Script File
Steps Action
1. Putty Installation:
Step 1.
  1. Download Putty From: http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html
  2. In this example, A Windows installer for everything putty-0.63-installer.exe was selected
  3. Once downloaded, run PuTTy Setup
  4. Select Next Button
Step 2.
  1. Accept or change the install path
  2. Click next button
Step 3.
  1. Accept or change the program's shortcut menu
  2. Click next button
Step 4.
  1. Define Additional Tasks
  2. Click next button
Step 5.
  1. Select Install button to Install
Step 6.
  1. When complete, click finish button
Step 7.
  1. PuTTY Readme contains documented information regarding Putty
Step 8.
  1. To start Putty: Select Windows Start Globe
  2. Select All Programs
  3. Expand PuTTY Folder
  4. Select the PuTTY shortcut
Step 9.
  1. In PuTTY Configuration, enter the Host Name
  2. For UNHM the host name would be your Blackboard User name followed by @caesar.unh.edu
Step 10.
  1. If prompted, select Yes button on Security Alert
Step 11.
  1. When prompted, enter password followed by enter
2. Run the Script
Step 12.
  1. Login to the correct machine
  2. At the prompt, type: ssh <machine name>
  • The example shown is for machine Miraculix
Step 13.
  1. At the prompt, type: /mnt/main/scripts/user/master_run_train.pl
  2. Press Enter
Step 14.
  1. When prompted, type: m or c
  • m represents Master (parent directory)
  • c represents Child (subdirectories running the same script repeatedly)
  1. Press Enter
Step 15.
  1. Script will create experiment directory
  2. Type: 1
  3. Press Enter
Step 16.
  1. Enter Density (multiple of 2)
  2. Type: 8
  3. Press Enter
Step 17.
  1. Enter Senone
  2. Type: 1000
  3. Press Enter
Step 18.
  1. Type: 1
  2. Press Enter
Step 19.
  1. Enter Corpus subset train path
  2. Type: first5_hr/train
  3. Press Enter
Step 20.
  1. Type: 1
  2. Press Enter
Step 21.
  1. You're returned to a prompt
  2. At this point the script is complete; the remaining steps must be run manually
Step 22.
  1. Change directory to your experiment directory (see Step 15.)
  2. Type: cd /mnt/main/Exp/<exp number from step 15>/scripts_pl
  3. Press Enter
Step 23.
  1. Type: RunAll.pl
  2. Press Enter
  3. I got to here, and then it clearly dies...
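The answers typed at the prompts in Steps 14–20 can be collected into a file for reference. This is a sketch only: the script path is the one from the log, and the corpus path uses the corrected syntax noted on 2014.03.26 (first_5hr/train, not first5_hr/train).

```shell
# Answers for master_run_train.pl prompts (Steps 14-20), one per line:
# mode, directory choice, Density, Senone, confirm, corpus subset path, confirm.
cat > answers.txt <<'EOF'
m
1
8
1000
1
first_5hr/train
1
EOF
wc -l < answers.txt   # prints 7 (one answer per prompt)
```

On the experiment machine, something like `/mnt/main/scripts/user/master_run_train.pl < answers.txt` would replay these answers, assuming the script reads its prompts from stdin.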
  • 2014.03.25

1. Updated the screenshot for figure 19
2. Followed the script again but still got a failure when attempting RunAll.pl:
{{{
Configuration (e.g etc/sphinx_train.cfg) not defined
Compilation failed in require at RunAll.pl line 48
Begin failed--compilation aborted at RunAll.pl line 48
}}}
3. Not sure what's going on here; will need to follow up with the group tomorrow about this error.
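One hedged guess about the error above: in SphinxTrain, RunAll.pl locates its configuration at etc/sphinx_train.cfg relative to the current working directory, so launching it from inside scripts_pl (rather than from the experiment root) can produce exactly this "not defined" failure. This hypothetical helper just reports whether the config is visible from a given directory.

```shell
# check_sphinx_cfg: report whether etc/sphinx_train.cfg is reachable from
# the given experiment directory. Helper name and behavior are assumptions
# for illustration, not part of the project scripts.
check_sphinx_cfg() {
  local exp_dir="$1"
  if [ -f "$exp_dir/etc/sphinx_train.cfg" ]; then
    echo "config found"
  else
    echo "config missing: run RunAll.pl from the experiment root"
  fi
}
```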

  • 2014.03.22

1. Update task assignments for this week

  • Organize this week's log entries
  • Develop plans to achieve task completion for this week
  • 2014.03.23

2. Review other team members logs

  • Review Josh's, Pauline's, and Ray's logs
  • 2014.03.24

3. Run an experiment using the newly created script file written by Josh Anderson (Spring 2014 semester)

  • After reviewing Josh's logs, follow his instructions for using the script to generate an experiment
  • 2014.03.25

4. Try Josh's script again using the suggestions given for success

  • Tried the script again changing step 19
  • 2014.03.22
  1. Free Time: I have several major ongoing projects in my professional life as well as the Internship course project. It is highly unlikely that I'll be able to maintain the same effort level as in the first part of the semester
  • 2014.03.23
  1. Content: A shared concern among the experiment group is organization and clarification of the data we've been adding to speech wiki. We will need to get together, probably next class, to discuss this further.
  • 2014.03.24
  1. Script failed due to user error; passed suggestions to Josh regarding my Step 19 and Step 23 (the RunAll.pl example and path)
  • 2014.03.25
  1. Script still failed when attempting RunAll.pl; will need to follow up with the group to figure out the trouble
  • 2014.03.26
  1. On Step 19, the example provided for the 5-hour train is not correct; the syntax is: first_5hr/train

Week Ending April 1, 2014

  • 2014.03.27 - Logged in (detail below)
  • 2014.03.28 - Logged in (detail below)
  • 2014.03.29 - Logged in (detail below)
  • 2014.03.31 - Logged in (detail below)
  • 2014.03.27
  1. Update tasks for Week Ending April 1, 2014
  2. Add content to the Experiment Setup page
  • 2014.03.28
  1. Update the Experiment page with the experiments that I've run
  • 2014.03.29
  1. Update the Experiment Setup page http://foss.unh.edu/projects/index.php/Speech:Exp
  • 2014.03.31
  1. Updated log entries
  • 2014.03.27
  1. Added content from my log to the Terminal page under experiment setup page
  • 2014.03.28
  1. Added content to experiment numbers:
Experiment Number Link
0234 http://foss.unh.edu/projects/index.php/Speech:Exps_0234
0235 http://foss.unh.edu/projects/index.php/Speech:Exps_0235
  1. Mike Jonas had sent me an email requesting that I update the experiment labels for experiment log entries of 0234 & 0235. He also asked that since these were not true experiments that I label them as accident. Corrected the labels per Mike's request.
  1. Corrected log entries
  • 2014.03.27
  1. Update tasks for Week Ending April 1, 2014
  2. Add content to the Experiment Setup page
  • 2014.03.28
  1. Update the Experiment page with the experiments that I've run
  • 2014.03.29
  1. Update the pages specified in my tasks lists
  • 2014.03.31
  1. Update log's Tasks, Results, Plan regarding this week
  • 2014.03.27
  1. None
  • 2014.03.28
  1. There is a lot of redundant information within the speech wiki, and the overall site is becoming more cluttered with it. For example, the details of the testing that I've done are now duplicated both here in my log and on the experiments page. This is a primary reason new students feel overwhelmed when they enter Capstone.
  2. Defining experiment log entries as Accident is not beneficial to any viewer of the experiment log. I think labels should be descriptive of the activity, in the way I had labeled them, but I'll do what's been asked and follow the non-descriptive label convention.
  • 2014.03.29
  1. No concerns regarding the Experiment page
  • 2014.03.31
  1. None

Week Ending April 8, 2014

  • 2014.04.04 - Logged in (details below)
  • 2014.04.05 - Logged in (details below)
  • 2014.04.06 - Logged in (details below)
  • 2014.04.07 - Logged in (details below)
  • 2014.04.09 - Logged in (details below)
  • 2014.04.04
  1. Edited the Experiment Resources page and removed the contents per Dr Jonas' request as it's duplicate information.
  2. Review the work done by my team.
  3. Attempt to follow Colby's directions
  • 2014.04.05
  1. Request clarification on the procedure for completing an experiment
  • 2014.04.06
  1. Work on Poster for Capstone
  2. Work on Avengers team stuff
  • 2014.04.07
  1. Work on Avengers team stuff
  • 2014.04.09
  1. Check status of MySQL on the Rome Machine
  • 2014.04.04
  1. Page is cleared; only an empty table remains
  2. Colby has provided detailed instructions, read through and printed out
  3. Attempted to follow Colby's instruction
Running a train
Ha, fooled you. This is top secret.
    1. Got kicked out of Caesar due to a timeout, dropping out of speech.
  • 2014.04.05
  1. sent an email to the Avengers team, waiting on feed back
  • 2014.04.06
  1. Created a .docx summarizing the different areas of content and submitted to the Experiment group via email
  2. Communicated with the Avengers team
  • 2014.04.07
  1. Communicated with the Avengers team
  • 2014.04.09
  1. Fatal MySQL Errors
Checking MySQL on Rome
  1. Logged into Caesar using PuTTY (Mysql figure1.png)
  2. From Caesar, SSH'd to Rome (Mysql figure2.png)
  3. In Rome, checked the IP address (Mysql figure3.png)
  4. Attempted to log in to mysql (Mysql figure4.png)
  5. Verified that /etc/my.cnf exists (Mysql figure5.png)
  6. Checked the contents of /etc/my.cnf (Mysql figure6.png)
  7. Verified that the /var/lib/mysql directory exists (Mysql figure7.png)
  8. Attempted to start mysql: rcmysql start (Mysql figure8.png)
  9. Attempted to start mysql: service mysql start (Mysql figure9.png)
  10. Traced the startup: sh -x /etc/init.d/mysql start (Mysql figure10.png)
  11. Verified the contents of /var/lib/mysql (Mysql figure11.png)
  12. Checked the mysql log file /var/log/mysqld.log (Mysql figure12.png)
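The existence checks above can be distilled into a small pre-flight sketch: confirm the MySQL config file and data directory are present before attempting to start the service. The default paths are the ones inspected in the log; the helper name is an assumption for illustration.

```shell
# mysql_preflight: report whether the MySQL config file and data directory
# exist. Defaults match the paths checked on Rome in the log above.
mysql_preflight() {
  local cnf="${1:-/etc/my.cnf}" datadir="${2:-/var/lib/mysql}"
  [ -f "$cnf" ]     && echo "config ok: $cnf"       || echo "missing: $cnf"
  [ -d "$datadir" ] && echo "datadir ok: $datadir"  || echo "missing: $datadir"
}
```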
  • 2014.04.04
  1. Edit the experiment resources/utils page
  2. Read the material
  3. Run an experiment from Colby's instruction (Avengers team)
  • 2014.04.05
  1. Email the Avengers team requesting clarification
  • 2014.04.06
  1. Create a .docx summarizing the different areas of content
  2. Submit to the exp group via email for their ideas
  3. Review material regarding Avengers group
  • 2014.04.07
  1. Follow along with the Avengers group and assist where I can
  • 2014.04.09
  1. Log into Rome and review the status of mySQL
  • 2014.04.04
  1. Don't have time to continue working on this; my schedule and availability are a great concern...
  • 2014.04.05
  1. The collaboration and workflow of the Avengers group isn't well defined, and it's difficult to follow or contribute in its present state
  • 2014.04.06
  1. None really, the Experiment group is pretty solid so I don't think we'll have any trouble with this poster.
  2. I had several concerns regarding the Avengers group that I had passed along to them
  • 2014.04.07
  1. No concerns today
  • 2014.04.09
  1. MySQL has fatal errors in its log, mysql might need to be reinstalled? Will need to follow up with the experiment team

Week Ending April 15, 2014

  • 2014.04.09 Logged in (details below)
  • 2014.04.10 Logged in (details below)
  • 2014.04.11 Logged in (details below)
  • 2014.04.09
  1. Work on Experiment Group Poster Content
  • 2014.04.10
  1. Get SpEAK database connect working
  • 2014.04.11
  1. Work on the SpEAK changes made yesterday and remove those that are not needed
  • 2014.04.09
  1. Created the first revision of the Experiment Group Poster and Content
  • 2014.04.10
2014.04.10 MySQL on Rome
Logged into Caesar
  1. Opened PuTTY
  2. entered login info
  3. entered password
Mysql 20140410 figure1.png
SSH'd into Rome
  1. ssh rome
Mysql 20140410 figure2.png
Entered Super User
  1. su
  2. <entered password>
Mysql 20140410 figure3.png
Edited the iptables
  1. vi /etc/sysconfig/iptables
  2. :q
  3. I immediately exited realizing I hadn't backed it up first
Mysql 20140410 figure4.png
Created backup of iptables
  1. cp /etc/sysconfig/iptables /etc/sysconfig/iptables.20140410.btj9
Mysql 20140410 figure5.png
Edited the iptables
  1. vi /etc/sysconfig/iptables
  2. Searched for 3306: /3306
  3. Search result pattern not found
  4. Port 3306 is not in the iptables and it needs to be
Mysql 20140410 figure6.png
Added port 3306 to the iptables
  1. Move to the bottom of the file
  2. Above COMMIT, used insert i
  3. inserted: -A INPUT -m state --state NEW -m tcp -p tcp --dport 3306 -j ACCEPT
  4. saved changes: :x
Mysql 20140410 figure7.png
Verified Changes
  1. vi /etc/sysconfig/iptables
  2. /3306
  3. Exited: :q
Mysql 20140410 figure8.png
Restarted the Iptables (Firewall)
  1. service iptables restart
Mysql 20140410 figure9.png
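The vi-based iptables edit above can be sketched as a script: back up the file, then insert the MySQL port rule just above COMMIT, skipping the insert if the rule is already present. The file path is a parameter so the sketch can be exercised on a copy; on Rome it would be /etc/sysconfig/iptables, followed by `service iptables restart`. The function name is an assumption; the rule line is the one from the log.

```shell
# open_mysql_port: back up an iptables rules file, then insert the MySQL
# port-3306 ACCEPT rule above the COMMIT line (idempotent).
open_mysql_port() {
  local fw="$1"
  cp "$fw" "$fw.bak"   # backup first, as done above
  local rule='-A INPUT -m state --state NEW -m tcp -p tcp --dport 3306 -j ACCEPT'
  # GNU sed: insert the rule line immediately before COMMIT, once only
  grep -qF -- "$rule" "$fw" || sed -i "/^COMMIT/i $rule" "$fw"
}
```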
Attempted to access mysql
  1. verify ipaddress: ifconfig
  2. connect to mysql: mysql -h -u admin -p
  3. typed pword
  4. error message appeared: Access Denied
Mysql 20140410 figure10.png
Need to verify user
  1. accessed mysql: 'su'
  2. entered password
  3. Set permissions: GRANT ALL ON speak.* TO 'speak'@'';
  4. Flush privileges: FLUSH PRIVILEGES;
Mysql 20140410 figure11.png
Dropped out of mysql
  1. exit
  2. Dropped out of su: exit
  3. logged into speak: mysql -h -u speak -p speak
  4. entered password
  5. ran select query: SELECT * FROM users;
Mysql 20140410 figure12.png
Edited httpd.conf file
  1. Backed up /etc/httpd/conf/httpd.conf to /etc/httpd/conf/httpd.conf.2014.04.10.btj9
  2. Edited httpd.conf: vi /etc/httpd/conf/httpd.conf
  3. At line 43 edited, changed from Listen 80 TO Listen *:80
  4. At line 94 inserted: ServerName rome
  5. Exited vi saving changes: :x
  6. Restarted apache: apachectl graceful
Verified the binding address for mySQL
  1. netstat -ano | grep 3306
Mysql 20140410 figure14.png
Set the binding for MySQL to
  1. using vi:
  2. edited /etc/my.cnf
  3. added: bind-address=
Mysql 20140410 figure15.png
Verified the bind was working
  1. cat /var/log/mysqld.log
Mysql 20140410 figure16.png
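The log elides the actual bind-address value used above. For reference, a typical [mysqld] fragment that binds mysqld to a TCP address looks like the following; the address shown here is a placeholder, not the value used on Rome.

```ini
[mysqld]
# placeholder address: the log elides the real value used on Rome.
# 0.0.0.0 listens on all interfaces; a specific IP restricts to one.
bind-address = 0.0.0.0
port = 3306
```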
Updated the php dbconnect.php file
  1. vi /var/www/html/speak/php/controllers/dbConnect.php
  2. Changed the variable of $host from to localhost
Mysql 20140410 figure13.png
logged into speak
  1. Opened Chrome Browser
  2. http://caesar.unh.edu
  3. entered user: Admin
  4. entered pwd: admin
  5. Waited a while but it finally redirected me to the proper page
Mysql 20140410 figure17.png
  • 2014.04.11
2014.04.11 MySQL on Rome
Edited /etc/my.cnf
  1. Commented out line 3
  2. removed the use of mysql.sock
  3. added on line 5 the port: 3306
  4. restarted the mysql service: service mysqld restart
20140411 mysql figure1.png
Viewed the messages log during a connect
  1. tail -f /var/log/messages
  2. Unfortunately, only contained start up from the mysql restart
  3. exited tail with <ctrl+c>
20140411 mysql figure2.png
Viewed the mysqld.log
  1. tail -f /var/log/mysqld.log
  2. Curious, it's now showing the socket as mysql.sock, not the IP address of
20140411 mysql figure3.png
Removed the comment applied earlier on my.cnf
  1. vi /etc/my.cnf
  2. Uncomment line 3
  3. restarted the mysql service: service mysqld restart
  4. checked the mysqld.log: tail -f /var/log/mysqld.log
  5. socket still shows as mysql.sock
20140411 mysql figure4.png
Removed the work I had done on the httpd.conf file
  1. edited /etc/httpd/conf/httpd.conf
  2. vi /etc/httpd/conf/httpd.conf
  3. At line 43 edited, changed from Listen *:80 TO Listen 80
  4. At line 94 Removed entry: ServerName rome
  5. Exited vi saving changes: :x
  6. Restarted apache: apachectl graceful
20140411 mysql figure5.png
Attempted to login
  1. Open chrome: https://caesar.unh.edu
  2. I can't tell whether the response is quicker, but it seemed better
20140411 mysql figure6.png
edited /etc/my.cnf and removed the binding and port
  1. vi /etc/my.cnf
  2. restarted the mysql service: service mysqld restart
20140411 mysql figure7.png
Tried to connect to SpEAK
  1. Open chrome: https://caesar.unh.edu
  2. Took forever but logged in
20140411 mysql figure6.png
At this point, the only remaining changes from yesterday's work are:
  1. $host = localhost found in /var/www/html/speak/php/controllers/dbConnect.php
  2. No other changes remain except the backup files I created prior to the changes
  3. Those files include:
    1. /etc/sysconfig/iptables.20140410.btj9
    2. /etc/httpd/conf/httpd.conf.20140410.btj9
    3. /etc/my.cnf.2014.04.10.btj9
As a side note, Apache logs (must be su):
  1. cd /etc/httpd/logs
  2. View the dir by date: ls -lt
  3. See end of file: tail -f <file name>
  4. Apache Log files:
    1. default-access_log
    2. speak_access_log
    3. ssl_request_log
    4. speak_error_log
    5. error_log
  5. MySQL Log files:
    1. /var/log/mysqld.log
  6. PHP Log files:
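The "view the dir by date" step in the side note above can be wrapped as a tiny helper: print the most recently modified file in a given log directory (the log uses `ls -lt` in /etc/httpd/logs, which requires su there). The function name is an assumption for illustration.

```shell
# newest_log: print the name of the most recently modified file in a
# directory, i.e. the first entry of `ls -t` (newest first).
newest_log() {
  ls -t "$1" | head -n 1
}
```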
  • 2014.04.09
  1. Identify content for the Experiment Group Poster and pass along to the team
  • 2014.04.10
  1. Review the present configurations, make changes where needed
  • 2014.04.11
  1. Work on the SpEAK changes made yesterday and remove those that are not needed
  • 2014.04.09
  1. No concerns as this is the first version of the poster
  • 2014.04.10
  1. The performance is slow; need to check the config, as some of the changes I made may not be needed. I think the only necessary change was the localhost setting in the dbConnect.php file. Will have to try to backtrack later on.
  • 2014.04.11
  1. No concerns really, though the performance is still poor; I suppose that counts as a concern.

Week Ending April 22, 2014





Week Ending April 29, 2014





Week Ending May 6, 2014