Speech:Spring 2016 Neil Champagne Log



Week Ending February 9, 2016

Task

2/7: I've been conducting research on RedHat, as I'm fairly unfamiliar with the system.

2/8: Checking in

2/9: Performed more research on Red Hat, successfully logged into Caesar from home, and changed my password.


Results

2/7: As expected, I've learned quite a few useful commands and a little more about the actual file system.

2/8: Checking in

2/9: Successfully used ssh commands from Cisunix to get to Caesar, and changed my password.


Plan

2/7: The current plan is to continue research and collaboration with group members.

2/8: Checking in

2/9: The plan was to change my password; I accomplished this.


Concerns

2/7: Concerns about my lack of familiarity with the systems involved come to mind. The rest are "beginning of the semester" concerns, mostly my tendency to panic at the drop of a hat.

2/8: Checking in

2/9: My concern at this point is simply whether we'll get access to the server room tomorrow so we can start planning the project at hand.

Week Ending February 16, 2016

Task

2/11: We realized that either the documentation or the labels were off; our goal for this week is to fix both. I'm making a logical outline today:

192.168.10.3: Methusalix.unh.edu
192.168.10.4: disconnected
192.168.10.5: disconnected
192.168.10.6: Majestix.unh.edu
192.168.10.7: disconnected
192.168.10.8: disconnected
192.168.10.9: disconnected
192.168.10.10: disconnected
192.168.10.11: disconnected
192.168.10.12: Brutus.unh.edu (DO NOT TOUCH)
Addresses .13 through .17 are also all disconnected, and as we only have a few servers, this is where I ended my search.

2/14: checking in

2/15: checking in

2/16: Continue work on updating the documentation, beginning with an accurate physical map of the system. Continuing my work from the 11th, I accessed each of the online servers and ran the command dmidecode --type chassis to find any information that could help identify external features; the command lists a serial number as part of the chassis details. My current plan is to use this to identify the chassis of each server and accurately map the physical layout of the server rack before tomorrow's seminar, giving the Systems group a road map to work with (a rough sketch of scripting this survey appears after the serial list below).

192.168.10.1: Caesar, Serial Number: FQV7TF1
192.168.10.3: Methusalix, Serial Number: 46RX9D1
192.168.10.6: Majestix, Serial Number: F11R441
192.168.10.12: Brutus, Serial Number: HKJS251
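
For reference, a rough sketch of how this serial-number survey could be scripted from Caesar. The loop, the root login, and the grep pattern are illustrative, not the exact commands I ran:

# For each reachable server, pull the chassis serial number; dmidecode requires root privileges.
for ip in 192.168.10.1 192.168.10.3 192.168.10.6 192.168.10.12; do
    echo -n "$ip: "
    ssh root@$ip "dmidecode --type chassis | grep 'Serial Number'"
done
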
Results

2/11: Results were not as conclusive as I would have liked. Nmap would have made the task far easier, as well as more complete.
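
In the absence of nmap, a simple ping sweep over the internal subnet would get most of the way there. A minimal sketch, with the range chosen to cover the addresses I checked above:

# Ping each address in the internal range once with a one-second timeout; report the ones that answer.
for i in $(seq 1 20); do
    ping -c 1 -W 1 192.168.10.$i > /dev/null 2>&1 && echo "192.168.10.$i is up"
done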

2/14: checking in

2/15: checking in

2/16: I was able to find serial numbers for all of the currently online servers. I plan on showing up early tomorrow to turn on the machines which are currently off, run the appropriate dmidecode command on each, and get a proper physical map ready for the seminar.

Plan

2/11: I need to get in contact with the Tools group and see if we can get permission to install nmap to make future mappings easier. I am fully capable of installing it, but would like to make sure it's okay first.

2/14: Checking in

2/15: checking in

2/16: I outlined my current plan in my results; however, before I boot the offline servers, I need to ensure that they are not down for a reason. I will be consulting with Professor Jonas (the client) before booting them to ensure I cause no conflicts when I power them up.


Concerns

2/11: I am concerned that nmap is not installed on Caesar. Though I can understand the logic behind that, I would greatly appreciate being able to use such a useful network utility.

2/14: Checking in

2/15: checking in

2/16: My primary concern right now is getting the rest of the servers up and online; at the moment we have access to three of the nine we're supposed to. Another concern is finding the physical IDs and names of the three servers the client wishes to remove, so we can work that into our project goals.

I will ensure that the "dmidecode --type chassis" command, along with a table of servers and serial numbers, makes it into our updated documentation as well, so that anyone who needs it in the future has access to the tools required to physically identify the servers.

Week Ending February 23, 2016

Task

2/20: Checking in

2/21: Checking in

2/22: Spent 4.5 hours in the server room trying to troubleshoot the Red Hat installations on the four remaining servers. Every server returned the same error: that no viable disk was present.

2/23: Spent more time with the servers today: changed out the CMOS battery in Miraculix and Obelix 2, sadly to no real effect.


Results

2/20: Checking in

2/21: Checking in

2/22: I checked the BIOS settings on the four defunct servers and realized the SATA ports were disabled. I attempted to enable the SATA ports and, through a series of trials and errors, realized the BIOS settings were being reset whenever the server shut down. This led me to believe the CMOS batteries in the four servers were dead.

2/23: The CMOS batteries were not the problem at hand. Upon further inspection, I noticed the RAID controller battery is completely missing from Miraculix. As this is a more expensive, less convenient fix than the CMOS battery, I'm going to confer with the group to see if we can find other potential solutions, though we will need a new RAID controller battery at some point.


Plan

2/20: Checking in

2/21: Checking in

2/22: I bought the last two correct batteries at Walmart and will replace the old ones tomorrow. If my suspicion is correct, this will enable us to have Red Hat installed on all the servers in time to configure them during the team-split session of the seminar.

2/23: The current plan is to confer with my group tomorrow and see if we can pin down the problem we're having, as well as get Asterix properly configured and open for business.

Concerns

2/20: Checking in

2/21: Checking in

2/22: My big concern is that my suspicion will prove incorrect, and I will have wasted my time, money and effort on a wild goose chase.

2/23: My concerns remain the same as yesterday, as the potential solutions seem to be going up in price and in delivery time...

Week Ending March 1, 2016

Task

2/27: Checking in

2/28: Ran a decode on the train started by Mike Salem.

2/29: The task for today is to get Red Hat installed on as many of the servers as possible, using the steps discovered last Wednesday, and to document those steps properly.


Results

2/27: Checking in

2/28:

,-----------------------------------------------------------------.
|                            hyp.trans                            |
|-----------------------------------------------------------------|
| SPKR    | # Snt # Wrd | Corr    Sub    Del    Ins    Err  S.Err |
| Sum/Avg | 1000  12903 | 74.0   18.9    7.1   17.8   43.8   96.4 |
|=================================================================|
|  Mean   | 35.7  460.8 | 74.9   18.6    6.5   21.5   46.6   96.6 |
|  S.D.   | 16.3  229.0 |  7.5    5.9    3.3   12.6   14.8    5.0 |
| Median  | 33.5  459.5 | 76.3   16.9    5.3   17.6   43.3  100.0 |
`-----------------------------------------------------------------'
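
For reference, the Err column in these tables is the word error rate, which is just the sum of the substitution, deletion, and insertion rates over the reference words:

WER = Sub + Del + Ins = 18.9 + 7.1 + 17.8 = 43.8 (all as percentages of the 12903 reference words)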

2/29:

Asterix: Installed in a prior week
Obelix 2: error in disk format
Majestix: Installed
Miraculix: error in disk format
Idefix 2: Installed


Plan

2/27: Checking in

2/28: The original plan was to run a train, but I discovered a train which Mike had already started. Under the advisement of Ryan (Modeling group), I ran the decode on that train and got a word error rate of 43.8%.

2/29: The current plan is to show up early on Wednesday and work on Obelix 2 and Miraculix, attempting to figure out why they are not installing properly.

Concerns

2/27: Checking in

2/28: As I stumbled upon a train which had already been run and only ran the decode on it, I have yet to run an actual train myself, and should run one in the coming weeks...

2/29: I have concerns about Obelix 2 and Miraculix; the hard drives are, as far as I can tell, functional, yet I consistently run into errors as I try to install Red Hat on them.

Week Ending March 8, 2016

Task

3/5: Checking In

3/6: Checking In

3/7: Tracked down the documentation for enabling Sphinx and followed it, creating the links between /usr/local (on the local machine) and /mnt/main/local on Caesar.
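
A minimal sketch of the kind of link involved, assuming a straightforward symbolic link from the NFS-mounted share; the backup step and exact targets may differ from the project's documentation:

# On the drone: preserve any existing /usr/local, then point it at the NFS-mounted copy from Caesar.
mv /usr/local /usr/local.orig          # assumption: keep the original directory as a backup
ln -s /mnt/main/local /usr/local       # symlink so Sphinx tools resolve under /usr/local
ls -l /usr/local                       # verify the link points at /mnt/main/local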

3/8: My task for today was to reconfigure the interfaces on the new servers to match existing documentation, and make the corresponding changes to the hosts table.


Results

3/5: Checking In

3/6: Checking In

3/7: Results were mixed. While the links were made successfully, I had issues running a train. This is most likely due to my inexperience, and I will consult with the Modeling group on Wednesday to ensure that I know how to actually run a train before we break off into teams.

3/8: Today was a success: the interfaces of the four servers that needed it were properly reconfigured, and the hosts tables on the five new servers, as well as on Caesar, were updated.
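
For illustration, a hedged sketch of what the relevant /etc/hosts entries might look like. The addresses shown are the ones recorded elsewhere in this log; the actual file contains more entries:

# /etc/hosts excerpt (illustrative; remaining drones omitted)
127.0.0.1       localhost localhost.localdomain
192.168.10.1    caesar
192.168.10.4    miraculix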


Plan

3/5: Checking In

3/6: Checking In

3/7: Assuming all goes as planned, my current plan is to make the changes needed to the host files and IP addresses that the professor emailed us about earlier today, and make sure the documentation reflects those changes.

3/8: My plan at the moment is to do some research into a banner generator to get banners for the drones, to spruce them up and make them more welcoming.


Concerns

3/5: Checking In

3/6: Checking In

3/7: My concern (at this point in the semester) is that while I've run a decode, I have yet to run a train. If all goes as planned, that will change this week.

3/8: My concerns haven't changed too much. I'm feeling more confident as I poke around Red Hat, though, so some of my concerns from earlier in the semester are starting to resolve themselves.

Week Ending March 22, 2016

Task

3/9: I continued my work on the servers. As I was getting the banners ready for each server, I kept running into an error:

Warning: the RSA host key for 'miraculix' differs from the key for the IP address '192.168.10.4'
Offending key for IP in /mnt/main/home/sp16/nchampagne/.ssh/known_hosts:8
Matching host key in /mnt/main/home/sp16/nchampagne/.ssh/known_hosts:7
Are you sure you want to continue connecting (yes/no)? 

I did some research on the error and found a simple command:

ssh-keygen -R 192.168.10.X

that fixed the issue at hand, which was created when we reconfigured the IP addresses. The command clears cached RSA keys and allows you to obtain an updated key upon your next ssh session with the drone in question.
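
Concretely, clearing and refreshing the key for miraculix (the address from the warning above) looks roughly like this; the username is mine, taken from the known_hosts path in the warning:

# Remove the stale key cached for the reconfigured address, then reconnect and accept the new key.
ssh-keygen -R 192.168.10.4
ssh nchampagne@192.168.10.4    # answer "yes" at the host-key prompt to cache the updated key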

Aside from all that fun, I got welcome banners for the drones, courtesy of http://www.network-science.de/ascii/. I will document exactly which settings I used so future semesters will be able to make things look uniform.
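
This log doesn't record exactly where the banners were installed; one common approach on Red Hat is to drop the generated ASCII art into /etc/motd so it greets users on login, roughly like this (the filename is a placeholder for the text saved from the generator):

# Illustrative only: install the generated ASCII-art banner as the drone's message of the day.
cp asterix_banner.txt /etc/motd      # asterix_banner.txt: hypothetical file saved from network-science.de
cat /etc/motd                        # verify the banner displays as expected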

3/20: I attempted to run a decode on the train I started on 3/9.

3/21: checking in

3/22: checking in


Results

3/9: Successfully updated all my RSA keys and documented my methodology in my personal log; I will also ensure the documentation makes its way into the general information pages so it may be considered "properly" documented. Also got welcome banners added to all the drone servers.

3/20: The results of my decode are not currently available, as I keep running into an error which has been detailed in the log for Neil's Train take 2.

3/21: checking in

3/22: checking in


Plan

3/9: I need to update our documentation with steps to update SSH Keys and create the welcoming banners.

3/20: I'm hoping that Ryan will be able to offer some insight; failing that, I'll be consulting with the rest of Team Stark for further assistance.

3/21: checking in

3/22: checking in


Concerns

3/9: No concerns presently.

3/20: My concern at this juncture is whether or not I messed up my train, and if that could be what is causing my errors.

3/21: checking in

3/22: checking in

Week Ending March 29, 2016

Task

3/26: Ran a train on the first_4hr corpus, mostly to familiarize myself with the process/commands. Will run another train before the day's end and compare results.

3/27: checking in

3/28: My task today was to get the internet connection to Majestix working. I was able to trace a network cable to Methusalix that was also connected to the internet. Using this cable, Majestix now has an external address, though for some reason it cannot ping its gateway. After a frustrating hour, I realized that the server's connection to Caesar had been compromised along the way. I resolved this issue, and Majestix is now reliably accessible through Caesar.

3/29: Checking in


Results

3/26: My first train was technically run yesterday, but today I will be running a train on the same corpus and comparing the results.

,-----------------------------------------------------------------.
|                    hyp.trans (train one)                        |
|-----------------------------------------------------------------|
| SPKR    | # Snt # Wrd | Corr    Sub    Del    Ins    Err  S.Err |
|=================================================================|
| Sum/Avg | 1000  12903 | 58.4   19.8   21.8    3.0   44.6   74.3 |
|=================================================================|
|  Mean   | 35.7  460.8 | 60.1   19.4   20.5    3.3   43.3   72.8 |
|  S.D.   | 16.3  229.0 | 11.0    5.9    6.1    3.1   11.4   18.6 |
| Median  | 33.5  459.5 | 61.8   18.3   19.9    2.1   42.1   79.5 |
`-----------------------------------------------------------------'

3/27: checking in

3/28: The internet connection is not yet active. Though Majestix does currently have an external address acquired through DHCP (132.177.189.44/22), it cannot ping its default gateway (132.177.188.1/22) or DNS servers, and I cannot hazard a guess as to why.
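
For future reference, a minimal sketch of the sanity checks involved; the addresses are the ones reported above, and the interface name is an assumption:

# Confirm the DHCP lease and the route, then test reachability of the gateway and an external host.
ip addr show eth1                 # assumption: eth1 is the externally connected interface
ip route                          # default route should point at 132.177.188.1
ping -c 3 132.177.188.1           # the gateway reported above
ping -c 3 8.8.8.8                 # any external address, to separate routing problems from DNS problems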

3/29: Checking in


Plan

3/26: I'm going to run another train on the same corpus today; identical or widely varying results may indicate errors somewhere in the process.

3/27: checking in

3/28: (added late) Planning to spend time in the server room on Wednesday to fix any remaining issues with Majestix.

3/29: Checking in

Concerns

3/27: checking in

3/28: If we can't get this internet connection up and running, the Tools group will continue to be unable to do their work.

3/29: Checking in

Week Ending April 5, 2016

Task

4/2: Checking In

4/3: checking in

4/4: Over the weekend, students noticed an inability to connect to Obelix, Idefix and Miraculix. I got in early on Monday morning and, after some troubleshooting, discovered that the power strip common to the affected servers was faulty. I found a strip with three active outlets and plugged the affected servers into it, restoring power to the live servers for the project, which I prioritized. Regrettably, there weren't enough outlets for Majestix or Rome, so those servers remain down for the time being.

The power strip currently being used is the one that already had Asterix (which hadn't failed) and the switch labeled HUB#3 plugged into it. This "Drone Strip" now has all four drones plugged into it, which is not an ideal situation.

I also made sure the Drone Strip and the strip powering Caesar are plugged into different wall outlets, so as not to fry the wall outlets and lose access to all our servers.

Upon further investigation, I was able to find out that the power strip had blown a fuse. The reset button (located on the left side of the strip, immediately right of the strip's red light) restored power to the strip. Rome, Majestix, Obelix I and Idefix I are now available again.

4/5: Went in to try to fix up the network bridge. I sent out an email before doing so, and the long chain that followed resulted in a decision to bench the project for the time being, until a better time and opportunity are available. The interfaces have been reverted to what they were before I started my work, with one exception: the interface configuration I created for the bridge, ifcfg-br0, has been renamed to ifcfg-br0.old. I will be adding the remaining steps to activate the bridge to the documentation later tonight.


Results

4/2: Checking In

4/3: checking in

4/4: Obelix, Idefix, and Miraculix are back up, though what caused the failure on the power strip remains a mystery.

4/5: Progress was made on the network bridge; however, due to potential risks to NFS and the limited use we would actually get out of the bridge, it was determined the project was best left for another day.


Plan

4/2: Checking In

4/3: checking in

4/4: My plan for the time being is to work more on the network bridge, though that will be tomorrow.

4/5: My plan for tomorrow is to get together with as many of the modeling group as possible and try to hammer out a firm plan for the network bridge.


Concerns

4/2: Checking In

4/3: checking in

4/4: My primary concern right now is what caused the power failure. If it should repeat itself, we don't have another power strip to fall back on.

Edit: the original power strip has been restored to working order.

4/5: I have no pressing concerns presently, besides the progress lost during the power interruption.

Week Ending April 12, 2016

Task

4/9: Checking In

4/10: Updated the Red Hat Network Configuration page to include configuring and revoking the network bridge.
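
For context, a minimal sketch of what a bridge configuration on Red Hat might look like; the file contents, address, and interface name here are illustrative, not copied from the documentation page:

# /etc/sysconfig/network-scripts/ifcfg-br0 (illustrative)
DEVICE=br0
TYPE=Bridge
BOOTPROTO=static
IPADDR=192.168.10.X        # placeholder: this host's internal address
NETMASK=255.255.255.0
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth0 (illustrative): enslave the physical NIC to the bridge
DEVICE=eth0
BRIDGE=br0
BOOTPROTO=none
ONBOOT=yes

To revoke the bridge, remove or rename ifcfg-br0 (as was done here with ifcfg-br0.old), restore the original ifcfg-eth0, and restart networking:

service network restart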

4/11: Video call with Matt, Ben, Nigel, Brendan and Jon; Matt offered in-depth tutorials on decoding unseen data. I recorded the whole call and plan on editing it into a detailed visual tutorial for future capstone students.

4/12: Edited the raw footage into a reasonable-length video featuring Matt walking us through decoding unseen data and the use of Peter's script. The raw footage was 32-and-change minutes long; I trimmed roughly 17 minutes off, making it about 15 minutes long.


Results

4/9: Checking In

4/10: Our network configuration page now includes steps to enable and disable the network bridge.

4/11: The video call went well; I have over a half hour of good footage. Tomorrow's task is to trim it down to a digestible length and have a finished product by Wednesday.

4/12: I now have a 15-minute video tutorial covering both decoding on unseen data and the usage of Peter's script.


Plan

4/9: Checking In

4/10: Look into basic scripting to automate as many of the commands as possible.

4/11: On 4/12 my plan is to trim and polish the raw footage I recorded into a relatively clean visual guide to decoding on unseen data.

4/12: My plan is to distribute this video, at first exclusively to Team Stark; then, after the competition and assuming the video is helpful, I will post it to Foss, allowing future students to grasp the concept more easily.

I have the edited cut, containing what I found most helpful, as well as the raw footage, in the event that others don't find the same information useful.


Concerns

4/9: Checking In

4/10: The configuration of the network bridge remains untested, and with so little opportunity to test before the end of the semester, I'm concerned that it won't work when it needs to.

4/11: I'm primarily concerned that the video I make will not be understandable.

4/12: I'm concerned that the video won't be helpful enough, and that my time will have been wasted.

Week Ending April 19, 2016

Task

4/16: My task this week is to communicate with Matt Heyner in regard to one parameter that, if changed, could positively affect our word error rate.

4/17: Read a really interesting article provided by Matt which had visuals of the parameter I'm researching with him; it really helped my understanding.

4/18: Visited the server room intent on diagnosing Caesar.

4/19: Rebooted Caesar; the VGA port is now functioning as normal.


Results

4/16: In the limited research I've conducted thus far, I've found that this parameter is mostly changed in the context of neural network based research.

4/17: Got a deeper understanding of "parameter A" as it relates to speech recognition and getting better results.

4/18: Can confirm the VGA ports are acting up. SSH internally remains fast and unchanged from the start of the semester. SSH externally (i.e. from Cisunix) is as slow as it was yesterday, taking a full minute and 26 seconds to connect to Caesar. As few people are in the loop about this, I'm going to send an email out detailing a potential plan to reboot Caesar before the Undergraduate Research Conference, and see if that remedies some of the issues.

4/19: The VGA port is now functioning; the initial connection is still sluggish from Cisunix, however the hang after the password prompt is gone.


Plan

4/16: I feel I need to conduct additional research in order to fully understand this parameter.

4/17: Continue research into "parameter A", as well as look at Caesar tomorrow and try to diagnose the issues at hand.

4/18: My current plan is to consult with the Systems group and hopefully reboot Caesar before the URC. The typical computer-system remedy is to turn the technology off and back on; this will, hopefully, make things right.

4/19: I may have stumbled upon the key to the seamless SSH environment, and will be looking into this more in the morning. For my own memory, the key may be in /etc/ssh/sshd_config in a parameter "RhostsRSAAuthentication"

This actually appears not to be it, though I will still look into that particular parameter.
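
For my own reference while poking at this, the setting can be inspected and sshd reloaded after any edit; a minimal sketch:

# Show the current value of the parameter in question, then reload sshd after any change.
grep -i RhostsRSAAuthentication /etc/ssh/sshd_config
service sshd reload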


Concerns

4/16: My primary concern actually relates to Caesar at this point. Caesar has been rather sluggish, and Aaron is reporting issues with the power supply, among other errors. I want to know what may be causing these, and whether anyone in the Systems group has looked at it yet.

4/17: If any NFS processes are running on 4/18, it would severely hinder my ability to diagnose any potential problems with Caesar. Update with a baseline: it took one minute and 27 seconds to connect to Caesar.

4/18: If everyone follows directions, there will be no processes using NFS, so that shouldn't be an issue. The bigger issue will be the slowing of the project during the time when no trains or decodes can be run.

4/19: Still trying to solve the sluggish connection problem, though it's not an issue of mortal peril, as it were.

Week Ending April 26, 2016

Task

4/23: Checking In

4/24: Checking In. Also checked on our Sphinx version for compatibility with MLLR

4/25: IRC progress: I connected Rome to the Internet and got the packages needed to install InspIRCd.


Results

4/23: Checking In

4/24: Checking in. MLLR was built into Sphinx versions 3.5 and later. As we are on Sphinx 3.7, it is entirely possible to use it, though as to what is actually involved, more research is needed.

4/25: Hit a roadblock installing InspIRCd. The ./configure script needed for the install requires g++ or another C++ compiler. I sent correspondence to Tom in the Tools group to see if he has recommendations.
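
Assuming Rome can reach a yum repository, the missing compiler would likely be resolved with something along these lines; the package names are the standard Red Hat ones and the source directory name is a placeholder, neither confirmed against Rome's setup:

# Install the GNU C++ compiler and supporting tools, then retry the InspIRCd configure step.
yum install gcc-c++ make
cd inspircd && ./configure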


Plan

4/23: Checking In

4/24: Checking in. Do more research on MLLR and see how much, if at all, this particular process will help us.

4/25: If possible, I will be trying to set the IRC server up remotely tomorrow.

Concerns

4/23: Checking In

4/24: Checking in

4/25: None presently.

Week Ending May 3, 2016

Task

4/28: Began work on the final report: separated and formatted the report page, and added some information to the Systems Group subsection.

4/29: Copied documentation regarding hardware issues from the Spring '16 Systems group log to a subsection of System Hardware Configuration.

5/1: Checking In

5/2: Updated the wiki to include documentation for installing the IRC server on Rome; it can be found here.


Results

4/28: Got some positive feedback regarding the report structure.

4/29: Issues regarding hardware configuration now have a documented home for future reference.

5/1: Checking In

5/2: Documentation updated; it covers everything up to where I have gotten, as well as the future steps to be taken.


Plan

4/28: Begin consolidation of Spring '16 Systems group documentation to the information page for easier access.

4/29: Continue trying to find proper homes for system documentation; add documentation for the IRC server to Software Configuration tomorrow.

5/1: Checking In

5/2: I'm going to continue chipping away at what I can on the final report over the next week.

Concerns

4/28: None presently.

4/29: Still none.

5/1: Checking In

5/2: Winding down the semester, I just want to make sure I'm still useful.

Week Ending May 10, 2016

Task

5/6: Checking In

5/8: Worked on the report.


Results

5/6: Checking In

5/8: Worked on the report.


Plan

5/6: Checking In

5/8: Worked on the report.

Concerns

5/6: Checking In

5/8: Worked on the report.