Speech:Spring 2018 Wesley Couturier Log


 * Home
 * Semesters
 * Spring 2018
 * Proposal
 * Report
 * Information - General Project Information
 * Experiments - List of speech experiments

Week Ending February 5th, 2018

 * Task:

1/31 - Begin to discover how the decoder works. Look into any logs, READMEs, ChangeLogs, etc. to find more about what this project uses as a decoder.

2/2 - Check in, read logs and documents, and view a video on performing a first training.

2/3 - Read the main files that I found, and piece together what I can.


 * Results:

1/31 - The decoder, known as Sphinx, is an open source project developed and maintained by students at Carnegie Mellon University, and is used to recognize and process speech and language. From exploring our files, it appears to be a relatively lightweight speech processor, as it is written mostly in C. The Sphinx project has existed for some time; the README in our /mnt/main/root/sphinx3 directory notes that some of its code dates back to 1986. Our particular version of Sphinx consists of two parts. The first is sphinx-base-0.6.1, which was last modified in July of 2006, according to the project's SourceForge page. The other half, which I believe is where we will focus, is our version of Sphinx3, release 3.7.0, which was released on August 20th, 2007, according to sphinx's ReleaseNotes in the /mnt/main/root/sphinx3 directory.

2/2 - More reading and locating where files are in the project. I found in sphinx3 the location of various mains, including the one I read today. It was main_conf.c, which deals with measuring "Confidence Annotation" using lattice-based word posterior probabilities. I'm very happy to have finally found these mains, as I can log them over the next few days. I also watched This Video, which mostly follows the main tutorial from CMU's own documentation on running the training software. After watching it, I feel more confident in my ability to get this done correctly, even if something unexpected happens.

2/3 - Although I can describe a little more of what these files are, I want to absolutely be sure that what I articulate is accurate as well as complete. I will describe at a later date what the main directory holds.


 * Plan:

1/31 - Connect to Caesar and start poking around under the root directory. From there, take a look around and begin taking notes on what the project uses. Look up online sources on what the decoder is and what it uses. Keep my head on straight, take notes, and ask plenty of questions.

2/2 - Continue to read the various main files located in:

2/3 - Keep reading and taking notes. Check in on discord and see if anyone from our group needs any assistance.
 * Concerns:

1/31 - The organization of the files and their locations may be difficult or confusing to untrained eyes. We want to know what this thing does, but we don't necessarily need to learn 100% of it on the first or second walk through the files.

2/2 - No immediate concerns. I do wonder what we can improve in this project, even if it is nothing more than recompiling and/or upgrading the decoder itself to Sphinx 4.

2/3 - No immediate concerns.

Week Ending February 12, 2018

 * Task:

2/9 - Coming back from a work conference in Philadelphia, I want to maintain both an effort to contribute to the software group and a personal sense of responsibility. Along with my plan of completing a train, I want to help focus the software group's efforts on deciding which main files within the directory /mnt/main/root/sphinx3/src/programs are most relevant to what we need to do.

2/10 - Today I was tasked with helping to create the software group's portion of the semester's proposal. Although it doesn't sound like much, I am slightly concerned with how it's been worded so far. The task at hand is essentially to 'decode' what the decoder does and how it works, which is no small feat. On paper, though, it appears that the group is not advancing the speech recognition project in any particular way other than documentation. This leads me to believe we need to do more, but I am also concerned about the scope of our job. The second task today was running my first train.

2/11 - I want to complete the train / language model / decode I started yesterday. After I began the thirty-hour train, I decided to write my log and then log off until today. The other task is to meet with the software group and discuss our portion of the proposal, which Camden has so graciously taken control of.

 * Results:

2/9 - These are the files that I believe are most significant. Although this does not excuse us from reading additional files within Sphinx 3, it is my understanding that these files will most likely help our group solve the mystery of how the decoder runs.

---Main Files---
 * main_decode.c
 * main_continuous.c
 * main_livedecode.c
 * main_dag.c
 * main_astar.c
 * main_livepretend.c

---Directory in /mnt/main/root/sphinx3/python/---
 * _sphinx3module.c

---Directory in /mnt/main/root/sphinx3/src/libs3decoder/libAPI/---
 * s3_decode.c

---Common Headers located in /mnt/main/root/sphinx3/include/---
 * s3_decode.h
 * kb.h
 * utt.h
 * dict.h

There are a few things worthy of note. First, the headers seem to be the files that contain the bulk of the documentation, while their complementary .c files are simply the code. Second, the file s3_decode.h seems to be referenced more than any other file directly relating to any mention of a decoder. Third, I am seeing macros everywhere, and this confuses me, as I have thought of macros as unnecessary at best and problematic at worst. I want to brush up on the subject before I give any additional opinion.

2/10 - After taking a look at Speech:Run Train Setup Script, I managed to run a 30-hour train, which is still going. I will check back tomorrow to see if it completed. As for the proposal, I believe that we need to focus on S.M.A.R.T. goal making: making our goals Specific, Measurable, Attainable, Relevant, and Time-constrained. So I will pursue a plan that describes not only the documentation, but also the recompiling of Sphinx 3. Hopefully people will agree with my perspective.

2/11 - The proposal we finalized was edited, officially submitted, and approved as the rough draft. I feel better now that we have defined goals in mind. As for the train, it completed. I then followed the directions to a T on creating the language model, which didn't take long. However, the decoder seemed to fail. I was supposed to see a chart of success percentages, failures, errors, etc. Instead I saw nothing, and an error of "Segmentation fault (core dumped)" appeared. This occurred right as I submitted the last command in the decoder process.

 * Plan:

2/9 - Continue reading the selected files I have found and see what I can pull together. I must also eventually run a train, and may as well follow along with my group when I get a chance.

2/10 - Perhaps when Tuesday comes around, we can discuss with the professor how exactly the group is advancing the project. Maybe we can be the providers of much-needed documentation.

2/11 - On top of asking my group members whether they've come across a similar error, I am going to search online and figure out what a segmentation error is in its generic form.

 * Concerns:

2/9 - These files have a range of complexity, and documentation that is great at times and horrible at others. One file can be easily understood while another is simply a mish-mosh of various lines of code. In certain cases, comments from revisions questioned the validity or necessity of certain parts of the code. This is very concerning, as it means our team's job will be harder than originally thought.

2/10 - Very few concerns. I know the train should take some time, and I will have to check back tomorrow. The proposal should be easy enough as well, so I am not concerned. If people don't agree with it, we should be able to compromise.

2/11 - I hope this error can be easily corrected. I understand the process for getting a train and setting up the LM.

Week Ending February 19, 2018

 * Task:

2/15 - Meet with group members online to finalize our portion of the semester's proposal.

2/16 - Continue code review

2/17 - Continue code review

 * Results:

2/15 - The group met on our Discord server, which is run by Joshua Young. Topics of discussion included how we would articulate ourselves if someone of Jonas' background were our audience. We also considered next year's class, as we are the first software group in Jonas' capstone project. We felt it was important to set the bar high.

2/16 - I found something very interesting. While in the directory /usr/local/bin/ I came across various files that had no file extensions or file types, yet were explicitly about sphinx in general. Opened in emacs, they are easily identified as compiler- or computer-generated files. I wondered what they are, and what relation they have to the project as a whole. Later I discovered something that I don't believe I've seen anyone talk about: there was a solution file located within the main root/sphinx3 directory. In case you don't know what those are, they are project files specifically for use with Microsoft Visual Studio. They are simply the project's organization file: where things are, and how to traverse directories in order to compile something correctly. With Visual Studio, we're able to not only see how these files interact, but it also makes a lot more sense where these files are placed and why they seem so spread out.

2/17 - While using Visual Studio, I noticed a few lines in particular that could not be referenced. When trying to jump to their definitions, it appeared that the header file containing them couldn't be referenced because it wasn't in the #include path. After doing some digging, I noticed that even though our main functions are in sphinx3, it is actually sphinxbase-0.6.1 that contains those files. Within the sphinxbase directory, I also found yet another solution file. Could this be the area where we should be focusing? Sphinxbase's README states that the base directory is used across Sphinx 2, Sphinx 3, and PocketSphinx. It appears that the base directory is mostly composed of utilities that the main sphinx decoder uses. For example, the base directory contains a file solely for parsing command lines correctly (which holds many different functions that, I believe, are used across both solution projects).

Later in the day, I explored and contributed some findings on the original file posted Here. There were definitely some interesting things I learned while exploring, like how the knowledge base is pretty much all-encompassing, with the language and acoustic models and the dictionary. However, what I found most interesting was the persistence of this structure called cmd_ln_t. It is essentially the command line string that is passed in when we run the decoder. That string is actually built by a Perl script located at /mnt/main/scripts/usr/run_decode.pl. The actual command line it builds is: system("$DECODE -hmm $HMM -lm $LM -dict $DICT -fdict $FDICT -ctl $CTL -cepdir $CEP_DIR -cepext $CEP_EXT &> ./decode.log"); I want to explore this more.
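A sketch of what run_decode.pl ends up executing (the variable values below are hypothetical stand-ins, not our actual paths):

```shell
# Hypothetical stand-ins for the values run_decode.pl fills in
DECODE=/usr/local/bin/sphinx3_decode   # decoder binary
HMM=/mnt/main/models/hmm               # acoustic model directory
LM=/mnt/main/models/lm.DMP             # language model
DICT=/mnt/main/models/dict             # pronunciation dictionary
FDICT=/mnt/main/models/fillerdict      # filler (noise) dictionary
CTL=/mnt/main/models/files.ctl         # list of utterances to decode
CEP_DIR=/mnt/main/cepstra              # cepstral feature files
CEP_EXT=mfc                            # feature file extension

# The command line as assembled by the Perl system() call
echo "$DECODE -hmm $HMM -lm $LM -dict $DICT -fdict $FDICT -ctl $CTL -cepdir $CEP_DIR -cepext $CEP_EXT"
```

Seeing it spelled out this way makes clear that cmd_ln_t is just the parsed form of these flag/value pairs.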

Lastly, I want to point out that I found some lines using deprecated and obsolete code. Not to discredit anyone who's worked on the sphinx project, but when I look at the definitions for these function calls, it surprises me that no one has recompiled or refactored this code so that we aren't using old code! No wonder professor Jonas wants us to refactor it ourselves. At this point, I'm looking forward to it.

 * Plan:

2/15 - For the rest of the week I plan on focusing on the code review, as well as continuing to discover and personally document what I find and bring it to the group's attention if necessary.

2/16 - Visual Studio has now been brought up to the software group, and looking forward, we might want to make it a requirement for future software groups to pull the decoder to a local instance and use Visual Studio to edit the project.

2/17 - Continue with the code review, but keep track of where code is deprecated or obsolete. We need to fix that.

 * Concerns:

2/15 - No concerns.

2/16 - No concerns, in fact there's more inspiration because of the find.

2/17 - We need to refactor the project.

Week Ending February 26, 2018

 * Task:

2/23 - With Danielle and me at the helm of deciding which version control software we're going to use, I want to explore the possibility of using a UNIX-based revision system called Revision Control System (RCS).

2/25 - Today, I want to take a look into Subversion as a version control system. The knowledge from this will be used to decide what version control system we're going to install and use.

 * Results:

2/23 - From my initial work and testing with RCS, I found a few things I liked, and a few that I didn't. Coming from a background in Git, I am used to simply committing my work and then pushing what I have done to a repository. This is performed mostly by a file that contains the logs of all files. With RCS, it's done a little differently.

The explanation that I've come to understand from This Tutorial is, simply put: the entire project is like a car rental company. When you want to edit a file, you check it out, and no one else can edit it until you "bring it back to the agency". RCS also stores versions differently. Let's say you created a file called helloworld.c; when we initially "check in" the file to the "agency", it becomes a *.c,v file. That file contains the current version people have checked in, as well as other information such as the revision log and a description. At first glance this might seem like a terrible idea, since the ,v files themselves cannot compile; they're no longer strictly source files in a language. But this is not fully true: the make command has been extended to automatically check out RCS files. On top of that, if we base our working files within a specified working directory tied to RCS, make will automatically search that directory for any files it's looking for.
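The rental-agency cycle above can be sketched with a toy file (this assumes the rcs package is installed; the file name and messages are my own examples):

```shell
mkdir rcs-demo && cd rcs-demo

cat > helloworld.c <<'EOF'
#include <stdio.h>
int main(void) { printf("hello, world\n"); return 0; }
EOF

# "Return the car to the agency": check in, creating helloworld.c,v.
# -u keeps a read-only working copy; -t- supplies the file description.
ci -u -t-"toy RCS example" helloworld.c

# "Rent the car": check out with a lock so only we can edit it
co -l helloworld.c

# After editing, check the change back in with a log message
ci -u -m"tweaked greeting" helloworld.c
```

The ,v file accumulates every revision plus the log, which is exactly what the tutorial means by the file "turning into" something the agency keeps.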

An excerpt of the tutorial worth keeping here:


 * If you create a central directory where your RCS baselines are to be stored, all people on the project can then create symbolic directory links such as "~/work/RCS/" to it. It would be slightly too chaotic for this tutorial for everyone to attempt to checkout and lock files from a single central directory, so create a directory named RCS of your own which isn't a symbolic link, and experiment with putting your baseline files in there.
 * If a subdirectory named RCS exists beneath your current working directory, all checkins and checkouts will be made by default to and from files in that directory.
 * Many of you have probably also used make(1). Make is a UNIX tool which will compile a group of source files in a particular way based on compilation rules written down by the user in a file named "Makefile". Using make makes it much, much easier to compile large projects. Because RCS is also so useful in large project situations, the make tool has been extended to support "auto-checkout" features for RCS files. If a target file that it is looking for is not available in the current or specified directory, make will search both the current directory and the RCS subdirectory for a checked-in version of that file. If one exists, it will be checked out, used in the compilation, and deleted when no longer needed. (If make finds the file before needing to look for RCS files, of course, it uses the checked-out working file rather than overwriting it with a possibly older version.)

2/25 - Subversion is an open source version control system maintained by Apache. It is a centralized system, meaning that all interactions go through one repository. Typically, files need to be checked into the repository before they are tracked under version control. It behaves very similarly to Git, except that it is not a decentralized or distributed version control system. With Git, you make a copy of the entire repository locally and work on files within that copy. Once you're done, you submit your changes to the main repository and perform any merge / push commits that need to be done. With SVN, you work directly against the centralized repository, not a local copy.

With what we're trying to accomplish, Jonas has stated that we want to make sure our working files are always stored on our server / drones, so that in and of itself doesn't disqualify either version control system. For my decision, I answered the simplest question possible: how can we prevent anyone from messing with the same file at the same time, while keeping overhead to a minimum? The answer, very simply, was RCS. Although there may be a slight learning curve, as it is different, there are far fewer dependencies involved in the upkeep of RCS, and make should automatically be configured to work within RCS directories for our projects.

 * Plan:

2/23 - Continue my research into a good version control system. I believe the other option to explore is Subversion (SVN). It appears to be more popular than RCS, as SVN is hosted in a central repository rather than within the project's own directory.

2/25 - With RCS being my choice, the next step will most likely be installing RCS on the drone Jonas specifies. This is, of course, all depending on whether the group has any objections or additional information.

 * Concerns:

2/23 - Very few concerns.

2/25 - No concerns

Week Ending March 5, 2018

 * Task:

3/1 - Today, I will be looking into documentation and logs for installing software correctly. I also need to double-check that I install RCS on the correct drone. For that I will be sending an email to Jonas asking which one he wants me to install on.

3/5 - Because our class is tomorrow, I want to install RCS today. That way, if anything goes wrong, it won't take an entire week before I'm available to fix whatever I did wrong.

 * Results:

3/1 - After consulting with a few classmates, I looked at the System Group's Main Page and found instructions for installing software onto the machines. I don't want to completely reiterate their instructions, but to show that I understand the gist of what I need to do, as well as cover the main concerns, I need to:


 * 1) Unmount Rome from Caesar
   * Done using the command umount -a
 * 2) Make a file with the contents of all directories related to software installations
   * There is a specified area where you do this: /root/Documents/
 * 3) Physically disconnect the ethernet cables from Caesar, and connect to a free port on the server stack to the right of the drones.
   * This gives us added assurance that nothing is connected to Caesar, and lets us reach the outside world when installing software.
 * 4) Double-check that we're NOT connected to Caesar or the other drones, then run the command ping -c 3 8.8.8.8. This pings Google's DNS server and confirms we can reach the outside world.
 * 5) Use yum commands as root to install the software. In our case, the command is: yum install rcs
 * 6) Once the software is installed, repeat step 2
   * This gives us the ability to see what changed from before to after the install, in case something goes wrong
 * 7) Reconnect all ethernet cables as they were before, and disconnect the connection to the outside
 * 8) Re-mount using mount -a
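The steps above, condensed into the commands we'd actually type (a sketch only: this must be run as root from the server room, and the snapshot file names and directory list are my own choices, not prescribed by the System Group's page):

```shell
# 1) Unmount the shared drives
umount -a

# 2) Snapshot software-related directories so we can diff later
#    (exact directory list per the System Group's instructions)
ls -lR /usr/bin /usr/local/bin > /root/Documents/pre_install.txt

# (3) Re-cable the drone to the outside world, then:

# 4) Confirm we can reach the outside world via Google's DNS server
ping -c 3 8.8.8.8

# 5) Install the package
yum install rcs

# 6) Snapshot again and compare against the pre-install state
ls -lR /usr/bin /usr/local/bin > /root/Documents/post_install.txt
diff /root/Documents/pre_install.txt /root/Documents/post_install.txt

# (7) Restore the original cabling, then:

# 8) Re-mount the shared drives
mount -a
```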

Because you have to be physically present in the server room while you're doing the install, I've done some prep such as taking a copy of the directories.

3/5 - There were a few bumps in the road, but I did manage to install RCS onto Rome correctly, with the help of Camden Marble. The installation process, which is something I believe everyone should have experience with, could have been documented a little more clearly. Issues included, but were not limited to:


 * The lack of labeling identifying the ethernet ports, both on the server stack and on the drone itself. There are multiple ports on the drones, and the correct one presented the next problem.
 * A firewall that prevented Rome from getting to the outside world without unplugging and re-plugging the ethernet.

Between these two problems, a process that should have taken an hour at most, took two and a half. Thankfully Camden was available on the Discord channel.

 * Plan:

3/1 - When I get a chance this Monday to finish the install, I will be able to pick up right at step 3.

3/5 - I need to make sure it's as simple as creating an RCS directory within a working directory for make to recognize where to pull files that we're working on. After that we can begin recompiling with direction from Jonas. Perhaps we can take a look at refactoring as well.

 * Concerns:

3/1 - A little concerned about potentially messing things up. But I'm certain that with people's help I should be able to get RCS installed with relative ease.

3/5 - No concerns now that RCS is installed

Week Ending March 12, 2018

 * Task:
3/9 - It's time to start seriously learning about how RCS and Make work under the hood, practice with some example projects, and see how far we can go with them.

3/10 - Determine, given Jonas' specifications for how he wants the project set up, whether it is possible to organize the project with RCS. According to the talk this past Tuesday with Jonas, he wants people to be able to spin up their own work 'repository' that is connected to, and can update, the central repository.

 * Results:

3/9 - Both RCS and Make are maintained under the GNU Project, which has a ton of manuals, explanations, and sources where we can learn more about them. However, I wanted to learn by video explanation, as that is how I learn fastest. Using This Video, I managed to find out where these manuals are, as well as a helpful command that shows which make rules are implicit. We need to run the command make -p within a directory WITHOUT a Makefile. It should spit out various lines dictating how make treats certain file extensions and what commands it runs when it sees them. Below is what we want to see:

%:: %,v
 * commands to execute (built-in): $(CHECKOUT,v)

%:: RCS/%,v
 * commands to execute (built-in): $(CHECKOUT,v)

%:: RCS/%
 * commands to execute (built-in): $(CHECKOUT,v)

It appears that make automatically checks out every RCS file, identified by the ",v" extension. If you look closely, it even checks files out of a subdirectory called RCS/ within the directory where the Makefile is located. This is perfect, as it means that for any files we want to work on, we simply put them under the RCS/ directory and we're in business with version control.
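A quick way to confirm these built-in rules on any machine with GNU make is to dump its rule database without reading any Makefile at all:

```shell
# -p prints make's internal database; -f /dev/null supplies an empty
# Makefile so only the built-in rules appear. Filter for the RCS rules:
make -p -f /dev/null 2>/dev/null | grep '%,v'
# shows the "%:: %,v" and "%:: RCS/%,v" pattern rules listed above
```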

3/10 - Unfortunately, it does not appear that RCS alone does what Jonas wants. RCS is specifically for files that can keep track of their own versions. It is not a hierarchical or directory based version control system. There is no way to control a directory structure with RCS alone.

Looking into other version control systems, it appears that the next open source options, both of which use RCS, are Concurrent Versions System (CVS) and Distributed Concurrent Versions System (DCVS). Through both, we are able to:


 * 1) Check-in a hierarchically structured project (with directories)
 * 2) Checkout projects on a centralized system
 * 3) Update projects through commits.

The main difference between the two software packages is that DCVS has more functionality as a distributed system, with multiple team members and better source control commit functions.

I will present this to Jonas this upcoming Tuesday.

 * Plan:

3/9 - Look into how we can further organize this project altogether.

3/10 - Present to Jonas what I've found so far about CVS and DCVS

 * Concerns:

3/9 - I'm not too afraid of organizing the project, but I need to make sure that what I'm doing is correct and is what we want to do. I think I'll have to look into the Makefiles and continue this task based on what I find in them.

3/10 - I'm having a hard time finding the software for DCVS; hopefully I will be able to find it if he chooses it. Otherwise I might need to find a way to make CVS work the way he wants.

Week Ending March 26, 2018

 * Task:

3/21 - After talks with Jonas yesterday, it appears that CVS might do exactly what we want. My next step this week is to install CVS and learn it as fast as possible. I don't want to spend too much time on this.

 * Results:

3/21 - Because I'm not physically at school to work on the servers, I downloaded and played with CVS locally on my computer. I'm confident in saying that it is exactly what Jonas wants. It gives us the ability to check out entire projects with multiple directories, and lets each user maintain their own version of the repository.

The installation of CVS should be simple as well, as I've already installed its dependency, RCS.

Using the directions listed on the System group's page, as well as what's on our own page for the RCS install, I will follow the same steps to install CVS.

 * Plan:

3/21 - When class starts next Tuesday I should be able to install CVS.

 * Concerns:

3/21 - No major concerns.

Week Ending April 2, 2018

 * Task:

3/27 - With the class time available today, I will be able to install CVS as a revision control system on Rome. I will use the same process to install it as I did for RCS.

4/2 - Now that RCS is installed correctly, and CVS was already there, it's time to create a CVS repository.

 * Results:

3/27 - While showing Josh how to install software, something very odd, but non-critical, was discovered.

After prepping for the install by creating a snapshot of the local directories, it turned out that Rome already had CVS installed. When I checked the date on the install, it was listed back in 2016. This doesn't make sense, as CVS requires the use of RCS, so for CVS to run without RCS is impossible. It brings up other questions: "What were they doing? Did they decide not to use it after all? If so, why did they delete RCS and not CVS?" It seems fishy to me.
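For the record, the install date can be pulled straight from the RPM database (assuming the package is simply named cvs, as it is in the standard repos):

```shell
# Full metadata for the installed package, including its Install Date field
rpm -qi cvs | grep -i 'install date'

# Or list all packages newest-install-first and pick out cvs
rpm -qa --last | grep -i '^cvs'
```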

4/2 - What I managed to do today is create the CVSROOT directory located in /root/sphinx/cvs_root. This is important, as this is where all of the CVS tools are. Whenever we run any command interacting with a repository, it uses CVSROOT to interact with that repository. It turns out that in order to run any command, we need to pass in the -d argument. I need to get Jonas' permission to add it as an environment variable so that we don't need to use that argument every single time.
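If Jonas approves, setting the variable would look like this (a sketch; the export line would go in our shell profile to make it permanent):

```shell
# With CVSROOT exported, cvs commands no longer need -d every time
export CVSROOT=/root/sphinx/cvs_root
echo "$CVSROOT"   # → /root/sphinx/cvs_root
```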

However, I was not able to create the repository today, as I don't know which decoder hierarchies we want to keep track of. So, as an example to teach myself, I decided to track our currently working decoder located on Caesar. Don't be alarmed: making a repository does nothing to the original project work.

When we create a new repository to track, the repository gets made in the same directory as CVSROOT. This can be seen if we go into /root/sphinx/cvs_root/ and look at the project folders there. Inside those project folders are various ",v" files; those are the files being tracked with RCS.

To import a project, from the directory we want to start tracking, you need to use a command of the form cvs import -m "Type a description of the project here" followed by the module name, a vendor tag, and a release tag. The vendor tag isn't too important; from the resources I've been using, it's nothing more than saying who owns / uses it. For our sake, we can simply put our student ID for Caesar. The release tag typically says which version of the project we want to start on. We usually want something like v1_0_0 (CVS tags cannot contain periods), just to keep it simple.

If you run the command right, and in the right directory, you should see a list of the different files and directories being added to CVS. If for some reason you're tracking a symbolic link that points above the directory you're trying to track, you may accidentally get yourself into a loop. Luckily, it's incredibly easy to get rid of a bad repo in CVS: simply delete the project under /root/sphinx/cvs_root/ using the command rm -r followed by the project name.
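Putting the commands together as a sketch (cvs must be installed, and the module name, tags, and working path here are hypothetical examples of my own):

```shell
# Point cvs at our repository (or pass -d /root/sphinx/cvs_root each time)
export CVSROOT=/root/sphinx/cvs_root

# From inside the tree we want to start tracking:
cd /path/to/decoder      # hypothetical working directory
cvs import -m "Sphinx decoder as currently running on Caesar" \
    sphinx-decoder wmc1008 v1_0_0
#   module name    vendor  release tag (no periods allowed)

# Anyone can then pull their own working copy of the module:
cd ~/work && cvs checkout sphinx-decoder

# If the import went bad, remove the module from the repository:
rm -r /root/sphinx/cvs_root/sphinx-decoder
```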

 * Plan:

3/27 - Investigate further how CVS creates a repository, then work alongside Lamia to see if we can use CVS to help recompile the decoder.

4/2 - Ask Jonas about setting that environment variable, then figure out with either Lamia or Jonas which decoders we want to track.

 * Concerns:

3/27 - None, other than things seeming fishy.

4/2 - None

Week Ending April 9, 2018

 * Task:

4/8 - After reporting what I've done thus far, I got a response saying that we want to track the sphinx3-0.8 version. This will let Lamia and me get used to version control using CVS, as well as give Rome an edge in decoding if the compile works.

4/9 - I have the source of sphinxbase-0.8 now, so I simply need to extract it and find out exactly what I need to do.

 * Results:

4/8 - My results are not as conclusive as I want them to be. I did manage to import our current versions of sphinx for the recompile, but I noticed something odd about the sphinxbase we were recompiling: it didn't have any source files! Some directories were in the right place, and there was an INSTALL file dictating how we should compile, but there weren't any C files where they should be.

4/9 - After extracting and importing the project, it appears my initial suspicions were correct. There are now source files we can use to recompile. Looking at the directions in INSTALL, it appears that we have three commands to run: ./configure, make, and make install. The first configures the build for proper compilation, the second compiles our C files into object code and binaries, and the last installs those results.

But we need to take note! ./configure needs a parameter specifying where we want the project executables installed; otherwise they'll simply go into /usr/local/. We don't want that, in case we have to do something to that directory later. Instead, point ./configure to a directory you make yourself to house all of these files.
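The three commands, with the install location pointed at a directory of our own (the prefix path below is just an example, not a required location):

```shell
cd sphinxbase-0.8

# --prefix keeps the installed bin/, lib/, and include/ out of /usr/local.
# The path is a hypothetical example; use your own working area.
./configure --prefix=/root/sphinx/sphinxbase-install

make          # compile the C sources into objects and libraries
make install  # copy the results into the --prefix directory
```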

Once that command has finished, you should be able to run make and make install relatively easily. However, I did not manage to successfully recompile sphinxbase. Looking closely, it appears to be searching for 4 programs:
 * lex
 * yacc
 * flex
 * bison

Two of those programs are provided by the other two: flex is a replacement for lex, and bison for yacc. So I need to get permission from Jonas to install these packages.
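Since flex covers lex and bison covers yacc, the install should reduce to two packages, following the same yum procedure as the RCS install (pending Jonas' permission):

```shell
# flex provides lex functionality, bison provides yacc functionality
yum install flex bison

# Confirm configure will now be able to find them
command -v flex bison
```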

 * Plan:

4/8 - After talking with Lamia, it sounds like I'm going to go to SourceForge, download the package for sphinxbase-0.8, extract it, and then see what I can do.

4/9 - Talk with Lamia and see what we need to do to go further once the install is complete.

 * Concerns:

4/8 - None

4/9 - A little concerned about how we're going to recompile, but not too bad.

Week Ending April 16, 2018

 * Task:

4/10 - After class today, I want to see if I can get Sphinx3 and/or Sphinxbase compiled. With the four programs now installed on Rome, we should be able to compile sphinxbase.

 * Results:

4/10 - There was a successful compile of sphinxbase!

Using the steps from last week's log, I managed to compile sphinxbase successfully into /root/sphinx. It created new directories, including include, lib, and bin. I could not be happier right now, as I've finally created something to show for my work! However, when compiling sphinx3, I did get multiple errors. They seem to come from the step where make reports make[3]: Entering directory `/root/sphinx/user_work/wmc1008/sphinx3-0.8/src/libs3decoder/libam. I've recorded the log in my personal working directory /root/sphinx/user_work/wmc1008/ under the name make_sphinx3.log

 * Plan:

4/10 - I need to go over this with Lamia. Together we should be able to find out what's going on.

 * Concerns:

4/10 - Some concern, but I don't necessarily think it's because the files weren't written correctly. If anything, it's probably because we're not passing in a command that we need to.

Week Ending April 23, 2018

 * Task:


 * Results:


 * Plan:


 * Concerns:

Week Ending April 30, 2018

 * Task:


 * Results:


 * Plan:


 * Concerns:

Week Ending May 7, 2018

 * Task:


 * Results:


 * Plan:


 * Concerns: