Speech:Spring 2018 Camden Marble Log



Contact Camden: cck27@wildcats.unh.edu

Week Ending February 5th, 2018
Task
 * 30 Jan 2018
 * After class ended, the team (Systems Group) exchanged contact information and decided to hold weekly meetings, Fridays 6-8 PM on campus, repeating for the foreseeable future. We further decided to use Slack as a means of communication; Chris N. set this up and it was available immediately after class. Upon leaving class I scouted a conference location on campus for said meetings – the library is closed after 5 PM on Fridays, but I was informed of other conference rooms outside the library.

Results
 * The team now has three established means of communication (texting, Slack, and email), as well as a meeting schedule. Slack has been quickly adopted and I believe it will make a useful tool.

Plan
 * Review last year's entries to get a better understanding of our team's responsibilities in the Systems Group, then review with the team on Friday what our next steps should be.

Concerns
 * My primary concern at this point is not fully understanding what the tasks ahead of us are. Until I know what goals we have to achieve, I cannot determine how to execute those goals.

Tasks
 * 01 Feb 2018
 * Read the entire entry for Spring 2017's Systems Group and Tools Group, as the Tools Group will now be integrated into our Spring 2018 Systems Group. Also read the first week's entry for each member of the 2017 Systems Group: Julian Consoli, Andrew George, Bonnie Smith, Mark Tollick.

Results
 * It seems Sp17 SG (the Spring 2017 Systems Group, referred to as such going forward) initially worked to configure a private server for collaboration. Presumably this is a task all the other groups will face as well. A possible resolution would be to dedicate a small portion of disk space on the UNHM server for local file sharing and communication, similar to how this wiki is run.

Plan
 * Read the first week's entry for each member of the 2017 Tools Group (Sp17 TG) and meet with my Sp18 SG on campus Friday night to discuss next steps. I will look to outline the general tasks we hope to achieve during this meeting.

Concerns
 * Chris was very proactive in setting up a Slack workspace for our group. However, after reading last year's entries, I think we will want to confirm with Jonas that Slack is an acceptable tool to use. ((Slack was OK'd during the meeting listed below, 02/04/2018.))

Tasks
 * 02 Feb 2018
 * Team Meeting, 6 to 8 PM Friday, on campus.
 * We had a team meeting. We laid out some ideas on what we could do to improve the current system; however, we are unsure what direction, if any, we will receive from Jonas. We introduced the team's experience, that is, strengths and weaknesses (i.e., networking, administration, programming, security, etc.), set expectations to read past semesters' logs before the upcoming class on Tuesday, and ran our first Train.

Results
 * This was a successful meeting. I think we are clearer (though not absolutely clear) on what we are looking to do. We established some tasks and a flow for the group, and I think these meetings will continue to be useful going forward. – We started a Train but it had not finished by the time the meeting ended.

Plan
 * Read previous semesters' work from the Systems Group and Tools Group, as well as from individual members, to better understand what our own objectives should be going forward. We need to request access to the server room to find out what tasks are in front of us. We will worry about delegating work after next class, when we have a better understanding.

Concerns
 * Though we have identified some actions we can take to improve the current system, we do not know whether the objective this semester is to direct ourselves or whether there is other direction in the pipeline from Jonas that we just haven't received. We hope to have a better understanding after next class.

Tasks
 * 03 Feb 2018
 * Check-in // Logged into Idefix, where the Train was run during the team meeting, to check the results. Did some reading.

Results
 * Check-in // The Train failed. Error: “Can not create models used by Sphinx-II”. This seems like quite a problem, as I believe Sphinx is the speech software we are using.

Plan
 * Check-in // Will confer with the rest of the team on Tuesday about the results of their Trains. If I was the only one who got the error, I'll have to dig into why. If everyone got it, then I think this will be a big and immediate problem to tackle.

Concerns
 * Check-in // The Sphinx-II error could be really small or really big. Won’t know for sure until Tuesday.

Tasks
 * 04 Feb 2018
 * Read the first two weeks of logs from the 2017 Tools Group members. Drafted a server map. Drafted a local webpage to be used if Nginx is approved. Installed Nginx on my personal machine as a sandbox. Researched tools that could be used for server mapping and looked up ways to determine what tools are already on the server(s).

Results
 * 'The Dude' is server-mapping software we may be able to use. The command 'rpm -qa --last | less' shows the currently installed packages ordered by install date. Nginx (https://www.nginx.com/resources/admin-guide/load-balancer/) can be used as both a load balancer and a web server.

Plan
 * If these tools (The Dude and Nginx) can be used, the webpage can host The Dude's results and the server map. The webpage could also host additional features (status page, maintenance, other).

Concerns
 * Obviously the above work will be useless if Nginx isn't approved, so I did not put a ton of work into it.

Week Ending February 12, 2018
Rejigged the formatting of my weekly reports because it was getting annoying how separated the information was.

Tasks
 * 06 Feb 2018
 * After class the group got a tour of the server room and 'general' access (that is, we can ask to get in but don't yet have our own 'keys'). We determined that a popped fuse on a generic power strip had caused all the drone servers to go offline; redistributing power management will be a future task. We were introduced to the three new servers to be installed, and we'll have to seat them on top of existing servers in the rack since we don't have the slides to mount them.
 * We also had an impromptu group meeting to discuss what we would like to submit as part of our project proposal. I also started a 5hr Train to see if there were any fewer errors; I left it running and will have to check back.

Results
 * Yashna will be pre-drafting an outline which we will fill in during our team meeting on Friday. We also set some tasks to achieve before Friday: look into what Zach did last semester, see if Nginx or Apache is already installed, get the Train/Experiment running, and, as mentioned, draft the proposal.

Plan
 * Review the previous semester's logs (by Zach) and look into Nginx's viability to load balance SSH connections. Daniel R. said he was going to look into whether Apache is already installed on the servers.

Concerns
 * I know that Nginx has a monitoring capability, but I am unsure whether it's included in the free version; if memory serves, it is not. That means we will likely have to create a monitoring service via Perl.

Tasks
 * 08 Feb 2018
 * Checked in on the Train that I ran the other day; it seems to have the same number of errors. Using FileZilla it was a lot easier to see the errors, as I was able to view the HTML file in a browser rather than cat-ing the log. Not sure if there's a better way to do this. -- I set up two Raspberry Pis plus Linux on my personal computer and installed Nginx to test load balancing (including SSH load balancing) to make a case for its use to Jonas.

Results
 * Chris seems to be tackling the errors the Train is producing, and it seems another group, 'Data', has figured out a way forward on this; I will review their logs tomorrow to see what's going on. -- The Nginx load balancer works great for HTTP requests; I need to do a little more research on getting SSH to load balance.

Plan
 * Need to read the logs from Zach (last semester) to see what alterations he might have made to the servers, though this may be moot if we proceed with our plan to normalize all the servers. Need to continue researching SSH load balancing via Nginx so it can be presented to Jonas as viable. We have the team meeting tomorrow night to finalize the draft of the proposal. And in the future I will likely need to write a monitoring script in Perl.

Concerns
 * No real concerns at the point, just the normal juggling, but I think we are closing in on a plan for what needs to be done. Once that is solidified we will be able to start tasking and that will make things a lot easier.

Tasks + Results
 * 09 Feb 2018 + Team Meeting
 * Early this morning (Thursday midnight), I reviewed the scripts responsible for running a Train and discovered there are some 'new' files, such as genFeats.new.pl. I attempted to run a Train based off these 'new' files; however, after about 45 minutes I realized I was using the wrong names and it was a wasted effort (the file names were close enough that I confused them). Still, running a new Train as currently documented and *watching* it run this time, rather than letting the server run it and walking away, gave me the opportunity to see the Train 'complete'. The ampersand (&) at the end of the Train command keeps it running on the server while you're gone; however, until you hit a key it looks as if the Train failed. Hitting a key at the end actually prints 'complete'... so I think we have been operating under a false assumption that the Train had been failing. I will try to run a Language Model later.
 * Later in the day, we had our team meeting. Yashna had pre-drafted the group project proposal so that we could discuss it during the meeting. I think this meeting went well: we successfully laid out a timeline for the project and set up some language to use in the proposal. We also delegated tasks into areas of focus. These will not be silos in the sense that we cannot get help, just that each of us is to be most informed in our area. The areas were determined as:


 * Servers – Daniel.
 * Networking – Camden
 * Storage – Chris
 * Tools / Interdisciplinary – Yashna
 * We also reviewed an app called Asana, discovered by Yashna, which will help us schedule the tasks we need to accomplish week to week. Chris discovered that there is a student package available that allows up to 100 users. Yashna requested this package so we can invite the entire class to use it; we are awaiting a response now.

Plan
 * We will be polishing the language of our project proposal before Sunday. -- After discussion with some other members of the class, I volunteered, of sorts, to conform all the groups' proposals into one 'voice', as the submission is supposed to sound like it is coming from a single company requesting a grant.

Concerns
 * Obviously conforming all the proposals into one voice is a somewhat large task, but I should be fine assuming all the other groups submit by the agreed 6 PM Sunday deadline, giving me the rest of Sunday to finish; though I'll be aiming to finish by 8 PM.

Tasks  + Results
 * 10 Feb 2018
 * I wrote a rough draft of a summary introduction I want to use for the proposal submission. Initial feedback from a few class members was good; I will try to finish the draft and have the rest of the class OK it before using it. I also continued to look into SSH load balancing with Nginx and to read logs.

Plan
 * I will continue to look into Nginx SSH load balancing throughout the night, but I wanted to get this logged, as I am getting more focused on functional work than on logging. Need to keep on top of the logging. As mentioned, I also want to finish the summary draft and get it approved by everyone else in the class.

Concerns
 * No real concerns at this time. Just starting to have tasks pile up and I need to keep on top of the logging.

Tasks  + Results
 * 11 Feb 2018
 * Early this morning, 1 AM-ish, I spent a few hours on the Nginx load balancer. This took longer than expected because the personal sandbox environment I am using to test is left over from a different project, and services from that old project were interfering with my current Nginx tests because they use the same ports (80, 8080). Also, the Windows 10 built-in Linux shell's /etc/hosts gets reset every time you close and reopen the shell, which makes sense now... but it didn't when things weren't working. -- I think I have an idea to make the TCP SSH LB work, but it involves setting different ports for SSH.
 * Around mid-day I started touching up our group's draft of the proposal and the intro letter for all the group proposals.
 * Tonight, from 6 PM until about 8:30-ish, I worked to conform all the groups' project proposals into one document with a single 'voice'. This consisted of rewording and restructuring sentences, reformatting, etc.; the content itself was left the same.

Plan
 * I will try to get the Nginx LB to handle SSH tomorrow using something other than port 22. I think this will ultimately work while still allowing users to directly access the drones if needed.
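 * For reference, a minimal sketch of the kind of stream config this would need, assuming an Nginx build (1.9+) that includes the stream module; the hostnames and port are illustrative, not a final config:

# nginx.conf (top-level stream section) -- sandbox sketch only
stream {
    upstream drones {
        # round-robin by default, across the drones' normal SSH port
        server asterix:22;
        server obelix:22;
        server idefix:22;
    }
    server {
        listen 2222;        # something other than 22, so direct SSH to the box still works
        proxy_pass drones;
    }
}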

Concerns
 * So far so good. But considering it's 'my voice' on the draft proposal, I'm kinda hoping Jonas doesn't poo-poo it too badly, haha -- but I did have everyone else weigh in, and it's only the draft, so I'm not terribly concerned.

Tasks  + Results
 * 12 Feb 2018
 * Today I worked further on the Nginx SSH LB. I believe the system is configured correctly; however, testing it is presenting a bit of an issue. My home environment imposes restrictions that would otherwise not be present, i.e., virtual machines, an outdated Nginx (which can be upgraded), and attempting to SSH into the same computer you're working from...

Plan
 * I will need to get permission from Jonas to install Nginx on one of the actual servers to continue, though I'm fairly certain this will work.

Concerns
 * Obviously, if Jonas does not approve the install or sets conditions on it, this will either delay or derail this part of the project.

Week Ending February 19, 2018
Tasks  + Results
 * 13 Feb 2018
 * Today we had our class meeting, the first after our draft proposal. Jonas did not like it. It seems the past two or so capstones (which we based our proposal on) were less than desirable; the 2014-15 proposals have much greater detail and are the showcase proposals to live up to. I coordinated with Hannah Yudkin of the Modeling Group on what we as a class should aim for in the new (and final) proposal, and also got some more direction from Jonas.
 * The proposal should be more narrative, i.e., a story you are telling someone that includes what happened in previous classes and what we hope to do going forward, as well as the methods we will use to do it, i.e., five teams initially, then two large teams.
 * Hannah and I also agreed that we should add a definitions area at the top that defines terms such as HMM, LDA, RNN, etc., similar to how a scientific journal article is written (minus the abstract). This way, when we reference the terms in the narrative or in the Teams sub-sections, we do not have to reiterate the definition for each group. We proposed this to the rest of the class and the feedback was positive.


 * Meeting with the Systems Group after class (5 PM-ish) in the server room (Yashna was out sick): Chris and Daniel R. were already reviewing one of the servers to be set up, as well as why the monitor in the server room was not displaying. That turned out to involve a config file; Chris's or Daniel's log will likely say more.
 * Daniel and I moved the mounted server-rack power strip to its lowest rung to make space for the three new servers. The new servers do not have any slides, so per Jonas we will have to 'plop' them on top of the servers that do have slides. I recommended, though it is more work, separating them out due to weight concerns (i.e., sled, plop -- sled, plop -- sled, plop, rather than sled, plop, plop, plop).
 * We will need to ensure the wire management is redone and that we reset the time on the AM/FM clock... sorry about that, but it did not need to be plugged into the same power strip as the servers, which already suffer from power-distribution issues.


 * Jonas also mentioned that we are not as far along as he would like us to be. There was mention that the 'boot camp' other classes seem to have received may have been beneficial. I will be increasing my availability: on campus about four times a week and remote-available two times a week, on top of my 18/7 availability on mobile.

Plan
 * The plan is to have a new group proposal draft completed by Friday night. All other groups should be done by Saturday night; Hannah and I will polish and submit by Sunday night.
 * I will also be installing Nginx on Rome to get SSH load balancing configured. I will likely need to make a small script so people can run something like './connect' rather than 'ssh -p  Rome' (a sketch follows this list). I will also need to be sure to add the new port number to the SSH config file. I will aim to do this Wednesday night / Thursday morning, about 1 AM when I get off work (this will also help in the unlikely event that something breaks).
 * I also need to draft the basic concept of the monitoring system so Chris and I can start thinking about how we want to program it.
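 * A sketch of the './connect' wrapper mentioned above; the port number is illustrative and would be whatever the load balancer ends up listening on:

#!/bin/bash
# ./connect -- wrapper so users don't have to remember the LB port (8022 here is illustrative)
exec ssh -p 8022 rome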

Concerns
 * No terrible concerns at the moment, just a lot of work ahead, which is why I have increased my availability.

Tasks  + Results
 * 15 Feb 2018
 * Last night I shared a video on Automatic Speech Recognition with the class (https://youtu.be/q67z7PTGRi8); it has a good overview of HMMs and the other aspects. I think it will help everyone get on the same page.
 * Today I basically 'worked a shift': I was on group discussions and working on various tasks from 8 AM until 4 PM, then again from 8 PM until now (10 PM), and probably longer. This morning I went through the Wiki for Train, LM and Decode, copied the commands into a text document, and ran them until they worked. Lamia was helpful in her suggestion to remove the parent folder (014) and recreate it; it seems that even if you remove the child folders, something is left behind that messes with the Train creation. Once successful (results: https://foss.unh.edu/projects/index.php/Speech:Exps_0303_014), I wrote some notes and shared the text (all-in-one.txt, below) with everyone in class. Having a single document with proven commands will, I think, get everyone up and running, as the Wiki has proven... less than successful.
 * I had planned to install and test the Nginx TCP load balancer on Rome; however, the server monitor has not been working and I did not want to mess with the SSH ports until we had another means of accessing the server. Daniel R. and Yashna worked in the server room starting around 5:30 PM and have now (10 PM) reported that the monitor works.
 * Around 11 AM-noon, while waiting to hear back about the server monitor, I started looking at the best way to make a monitoring script. Nmap will tell you if a host is up or down, and if you give it xxx.xxx.xxx.0/24 (e.g., 192.168.1.0/24), Nmap will scan the entire IP block and return the systems that are up as well as what ports are open. If we designate which servers should be up, then compare every ~15 minutes, we will be able to tell what is down (a sketch of this check follows this list). The odd thing is that all the servers except Rome seem to have nmap installed, so we will need to confirm we can install nmap on Rome as well.
 * Around 1 PM, Isaac had an issue running his Train; ultimately it appears his Exp folder, 013, had permissions assigned to someone else, apparently a student who is no longer with the class. Even though we have root-like permissions, we don't seem to have sudo permissions, so neither of us could remove the bad 013 folder. I instead renamed it to Delete and left it there, allowing Isaac to create his own 013 folder. Isaac has since completed his Train, LM and Decode.
 * Around 3 PM, Brutus and Majestix went down but the rest of the servers were working. As Daniel R. and Yashna were already due in the server room around 5:30 PM, I notified them and left it be. They have since gotten both back up and running.
 * I later asked that each person send a screenshot of their scoring.log file to the Systems Group, along with the server it was run on. This will allow us to ensure each server is producing consistent results before moving forward. There have been a few responses, but not enough to say what the status of the servers is. We (the Systems Group) may just need to run these ourselves.
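 * A first sketch of the up/down check described above (see the monitoring note earlier in this entry). This assumes nmap is installed and that /root/monitor/expected_hosts.txt lists one IP per line; the subnet and paths are illustrative:

#!/bin/bash
# monitor.sh -- compare hosts nmap reports as up against the expected list; run every ~15 min via cron
# expected_hosts.txt should list IPs, since nmap's grepable output prints IPs
EXPECTED=/root/monitor/expected_hosts.txt
LIVE=$(nmap -sn 192.168.1.0/24 -oG - | awk '/Status: Up/{print $2}')
while read -r host; do
    if ! echo "$LIVE" | grep -qxF "$host"; then
        echo "$(date): $host appears to be DOWN" >> /root/monitor/monitor.log
    fi
done < "$EXPECTED"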

Plan
 * Given that I was not able to work on the Nginx SSH LB today, I will likely have to push it to the weekend, as I will be reading over the past-class material in preparation for our team meeting Friday night, where we will be hammering out the final proposal.

Concerns
 * No real concerns at this time, just have the project proposal looming over our heads is all.

all-in-one.txt

##### Author: Camden Marble #####
##### 15 Feb 2018           #####

# The following are the individual commands necessary to complete a Train, Language Model and Decode.
# If this is your first time looking at any of these, please use the Wiki first at foss.unh.edu

#--- Reminder commands ---#
# 'pwd' prints your present working directory -- basically 'where you are' in the folder structure
# 'cd' changes directories: 'cd foldername' goes into the folder, 'cd ..' goes back to the parent folder

#***** STEP ONE IF YOU ARE HAVING PROBLEMS *****#
#***** Delete your experiment folder and remake it *****#

#--- Start a Train ---#
# Change directory to /mnt/main/Exp//   [ current group number is 0303 | /mnt/main/Exp/0303 ]
# Change directory to your specific experiment number, i.e. 014   [ /mnt/main/Exp/0303/014 ]

# Command 1 -- note, you will want to run a 5hr train until it works correctly
makeTrain.pl switchboard 30hr/train

# Command 2
genFeats.pl -t

# Command 3
nohup scripts_pl/RunAll.pl &

# Note, the ampersand (&) keeps the command-line open so you can walk away and the script will keep
# running. If the command-line 'stops moving' and gives you the option to type, just input a junk
# character and hit return; the script will say 'done' and let you go about your business.

#--- Start a Language Model ---#
# Run 'pwd' to ensure you are at your respective experiment, i.e.: [ /mnt/main/Exp/0303/014 ]
# Create a new folder under your respective experiment folder, i.e.: mkdir LM
# Change directories to the new folder, i.e.: cd LM   [ /mnt/main/Exp/0303/014/LM ]

# Command 1 -- note, change the [30hr] to whatever you ran as a Train
cp -i /mnt/main/corpus/switchboard/30hr/train/trans/train.trans trans_unedited

# Command 2
parseLMTrans.pl trans_unedited trans_parsed

# Command 3 -- note, include the trailing period; it is the destination, meaning 'copy to the current directory'
cp -i /mnt/main/scripts/user/lm_create.pl .

# Command 4
lm_create.pl trans_parsed

#--- Start a Decode ---#
# Change directory up a level to /mnt/main/Exp/0303/014 (respective to your experiment),
# then change directory to 'etc' -- note, this is the 'etc' within your experiment, not the
# system /etc folder. I.e., [ /mnt/main/Exp/0303/014/etc ]

# Command 1 -- note, be sure to change the [30hr] to your Train hours and the experiment numbers
# to your respective experiment
awk '{print $1}' /mnt/main/corpus/switchboard/30hr/test/trans/train.trans >> /mnt/main/Exp/0303/014/etc/014_decode.fileids

# Command 2
nohup run_decode.pl 0303/014 0303/014 1000 &

# Command 3
parseDecode.pl decode.log hyp.trans

# Command 4 -- note, be sure to update 014_train.trans to your respective experiment number
sclite -r 014_train.trans -h hyp.trans -i swb >> scoring.log

#--- All the commands without comments ---#

#Train [ /mnt/main/Exp/0303/014 ]

makeTrain.pl switchboard 5hr/train
genFeats.pl -t
nohup scripts_pl/RunAll.pl &

#Language Model [ /mnt/main/Exp/0303/014/LM ]

cp -i /mnt/main/corpus/switchboard/5hr/train/trans/train.trans trans_unedited
parseLMTrans.pl trans_unedited trans_parsed
cp -i /mnt/main/scripts/user/lm_create.pl .
lm_create.pl trans_parsed

#Decode [ /mnt/main/Exp/0303/014/etc ]

awk '{print $1}' /mnt/main/corpus/switchboard/5hr/test/trans/train.trans >> /mnt/main/Exp/0303/014/etc/014_decode.fileids
nohup run_decode.pl 0303/014 0303/014 1000 &
parseDecode.pl decode.log hyp.trans
sclite -r 014_train.trans -h hyp.trans -i swb >> scoring.log

Tasks  + Results
 * 16 Feb 2018 | Team Meeting
 * Today we had our team meeting. During the meeting we ensured that each member was able to complete a Train, LM and Decode; by the time the meeting was over, everyone had. We also agreed to meet 30 minutes before class to get in sync and to have one spokesperson in class, who will be Daniel. After that we spent a great deal of time trying to hammer out the final draft of the proposal. We still have some work to do, but we have the rest of Saturday for our team to finish. Then Hannah Y. and I will work on Sunday to conform all team submissions into a single voice.
 * Chris and I also touched base with Jonas about installing Nmap and Nginx on Rome. We're okay to do this, but we need to make sure we remove the soft link to Caesar and then, for good measure, the local networking to the other drones.
 * Chris also reminded me of the root password; for some reason I had forgotten it and was operating under the impression that our user has root permissions. It does not: you must su to root and then enter the password. The password is -- Gotcha ;)

Plan
 * I will continue working on the team proposal Saturday and the class proposal Sunday. I will likely head to the server room with Yashna on Monday to install Nginx and Nmap.

Concerns
 * So far so good. Keep on keeping on.

Tasks  + Results
 * 17 Feb 2018
 * I briefly logged onto the drone servers to update the DNS at /etc/resolv.conf, as the commented material was out of date. This is only used when the drones are connected through Rome's bridge, so no impact was expected. It was brought to my attention after the Experiment Team received a 'host not found' error when running the addExp.pl script. Technically speaking, the diagnosis and resolution of updating the DNS would have been correct if the drones were connected to the internet; it turns out that addExp.pl is only supposed to be run on Caesar. At any rate, having updated /etc/resolv.conf with commented-out sections to be uncommented when Rome's bridge is in use, I ran 'service network restart' so the new resolv.conf would take effect. Asterix did not take this well.
 * I also continued to work on and finish the Group Project Proposal for my team, Systems. This will be incorporated into the class submission on Sunday.

Plan
 * I will look to fix Asterix on Sunday if the campus is open, otherwise Monday. The rest of the drones are fine.

Concerns
 * No concerns at this time.

Tasks  + Results
 * 18 Feb 2018
 * Today, Hannah and I worked from 10 AM Sunday until 1 AM Monday to complete the class project proposal, with about an hour or two of breaks. This consisted of writing an introduction, a conclusion, and a glossary, as well as ensuring that each group's submission matched the agreed-upon format. We then had to go through all 7,000+ words to conform them to one voice: formatting, grammar, clarity, etc. We then uploaded it to the Wiki and had to conform the formatting to the Wiki. This was sheer editing on a large body of words, which took a long time to work through.

Plan
 * Tomorrow I will meet with Yashna on campus to correct the issue with Asterix, then review and, if possible, complete the installation of Nginx and Nmap on Rome.

Concerns
 * This submission is the final version of our project proposal, so hopefully Jonas likes this one. We grew it from 9 pages (2,100 words) to 22 pages (7,900 words).

Week Ending February 26, 2018

 * 23 Feb 2018
 * This will be multiple days' worth of information under one submission. I have been exhausted this week and simply failed to update the logs in a timely manner.


 * Task + Results 
 * Monday, 19 Feb. I went to campus with Yashna to look into why Asterix had stopped responding after the DNS update, which should not have affected its connection; even if the DNS had gotten screwed up, I should still have been able to reach it via IP. After getting into the server room, I found that Asterix was simply off. Don't know how, who, or why. Upon boot, the server operated as expected. Yashna and I then proceeded to review Rome's configuration as a precursor to installing Nginx and Nmap. After reviewing, outside of the mount to /mnt/main, there were no obvious links between the two. Professor Jonas had said there should be a mount and a soft/symbolic link to another area.


 * Plan:
 * Yashna and I will email the professor to try and get a better understanding of where these links are and how to remove them.


 * Concerns:
 * The additional time messing around with this unmounting/unlinking business might delay the TCP load balancer setup.


 * Task + Results
 * Tuesday, 20 Feb. Today was class. Jonas was still not a fan of the proposal we had written. He had emailed 'those whom spoke out most during class and assumed them to be team leaders' with feedback on the proposal late the night before. After Jonas' lecture, my team got together to update our section; some other teams did as well. I was burnt out from churning through the proposal Sunday into Monday, and Hannah had other obligations, so after class it was up in the air who would coordinate the final edits. It was... chaotic for a short time; however, some order was restored and some new people took a leadership role over the proposal. We also got a 24-hour extension of sorts: as Jonas was out of town he was not able to grade it, and he gave us the extra time. Chris N. took over for my group for the rest of the day. Kudos to Chris for stepping up and getting our section reformatted and updated. Overnight I monitored from my phone while at work; more edits were required. The next day I finished off our section by 3 PM, as I had to go to work again, but others were still reviewing and editing the entire document until late in the day.
 * (This constant work and monitoring of work on the proposal atop my actual job was fairly taxing and basically made Thursday useless in terms of productivity. I have to remember to pace myself.)


 * Plan:
 * There is no plan. Get the proposal done or die trying.


 * Concerns:
 * The amount of chaos that ensued, and the number of emails between group leaders and Jonas, made me think the whole document was just going to be shredded and the final proposal would be a lump of coal by the end of the day. I am thankful that we as a class were able to course-correct and get the document done.


 * Task + Result
 * Friday, 23 Feb. I met with Yashna on campus to further review Rome as a precursor to the installation of Nginx and Nmap. Some miscommunication had occurred between the professor and our group on what needed to be done, so this work had ground to a halt a few days ago. Some instruction in class was helpful, as was the 2016 Tools Group's GCC installation write-up (https://foss.unh.edu/projects/index.php/Speech:Spring_2016_GCC_Install_Documentation).
 * While on campus, Yashna and I were trying to determine the link on Rome from /usr/local --> /mnt/main/local. After spending way too much time trying to understand this, we looked at Asterix's /usr/local and immediately noticed the link to /mnt/main/local. This led us to conclude that Rome does not currently have the link, which explains our confusion.
 * While there, Yashna and I also created a snapshot of the installation folders by creating a directory on Rome at /root/documents/snapshot_02232018 and ls-ing the files into text files, e.g.: ls -al /bin > bin.txt. Once we are approved and install Nginx, we will take a new snapshot and then run a diff between before_install and after_install to see what files were affected (a helper-script sketch follows).
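 * The kind of helper script this could become (also mentioned again on 26 Feb); a sketch only, with an illustrative folder list:

#!/bin/bash
# snapshot.sh -- dump directory listings into dated text files for a pre/post-install diff
SNAP=/root/documents/snapshot_$(date +%m%d%Y)
mkdir -p "$SNAP"
for d in /bin /sbin /usr/bin /usr/sbin /usr/local /etc /lib /lib64; do
    ls -al "$d" > "$SNAP/$(echo "$d" | tr '/' '_').txt"
done
# afterwards: diff -r <before_install folder> <after_install folder>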


 * Plan:
 * I will email Professor Jonas with our findings and seek the go-ahead to start the Nginx and Nmap install. The steps are:

On Rome
 * umount -a
 * confirm /usr/local is not linked
 * confirm /root/documents/snapshot_02232018 is still there
 * confirm Ethernet connection to the Drones and Caesar is removed
 * confirm outside internet connection is established.
 * run yum install nginx   ( version 1.9.0 or greater)
 * run yum install nmap    ( version doesn't matter)
 * disconnect outside internet connection
 * from direct access to servers (keyboard and mouse), complete new snapshot
 * complete and compare diff between old snapshot and new snapshot.
 * The files the install affects will determine the implementation onto Caesar.
 * Nginx does not need to be on any other server for our purposes, however.
 * Also: tomorrow we will have our team meeting via Discord voice, as the meeting time has been moved per Daniel R's request.


 * Concerns:
 * I do not want to delay the installation of Nginx further; it will still require some configuration to ensure it works as desired. The longer it takes to configure, the less time we have to work on the monitoring script.


 * 25 Feb 2018
 * Tasks + Results
 * The plan outlined above is relatively correct. After discussing it with Professor Jonas, there are a few more folders we need to add to the snapshot. I also discovered that Rome did not have a symbolic link from /usr/local to /mnt/main/local; this is why nmap didn't show up on Rome but does show up on the other servers. I think, if possible, we should add /usr/local to the monitoring script, to periodically check that it is connected. On that same note, Majestix purposefully does not have /usr/local symbolically linked, because the 2016 Tools Group installed GCC on Majestix in the local folder. If the local folder is moved (local.old) and a link to /mnt/main/local is created, GCC is excommunicated; conversely, SCLite will work, which it does not while there is no symbolic link. I suggested, of sorts, that a per-server maintenance log be created to avoid this kind of confusion in the future. I may implement some version of a maintenance log along with a server map regardless of approval, simply because this is a mess.
 * We also had our team meeting yesterday via Discord voice. Nothing much to report; we basically touched base on where everyone is and where everyone is going. We're doing pretty well on our tasks at this point.


 * Plan
 * I will look into adding an 'installation procedures' document to the 'Systems Group 2018' file to have as a reference. Tomorrow (Monday), Yashna and I will work on installing Nginx. Once Nginx is in and approved, I will work to get the TCP SSH load balancer up and running. I am thinking Nginx will also be a great tool to use for the server map and maintenance log. I will need to be sure to set the load balancer to a port other than 22, which will allow people to still SSH directly to a server if they need to.


 * Concerns
 * I am not really concerned with the tasks I have to do or how to achieve them. I am concerned with the lack of unified documentation on what has been done to these servers. There should have been a server map and maintenance log from day 1, if not also a monitoring system. You cannot maintain a server cluster across multiple people, multiple classes, and multiple years without a maintenance log and not expect something to get messed up or a task to be performed three times over. It is simply a waste of time not to have one.


 * 26 Feb 2018
 * Task + Results
 * Today, Yashna and I went through the installation procedure for Nginx: umount, remove the symbolic link, snapshot, disconnect the local network, install Nginx, take a new snapshot, and diff snapshot 1 against snapshot 2. All of this has been completed; however, I need to better understand how to read the diff before I run it by Professor Jonas. I will also be creating an 'installation procedure' doc to be added to the Systems Group 2018 page so future installs go more smoothly. I may also create a small script for generating a snapshot, as this was the longest part and it is simply doing 'ls -al /path/folder > file.txt' 15+ times, then doing it all again.
 * One issue we did have during the installation is that you have to create a repo (repository) file so Yum knows where to look to actually install anything. This is the file at /etc/yum.repos.d/nginx.repo. Figured that out from https://community.rackspace.com/general/f/general-discussion-forum/6812/nginx-basics-install-nginx-on-red-hat-enterprise-linux-and-debian-based-oss -- where 'rhel', I believe, is Red Hat Enterprise Linux (news to me). I will be a little more thorough in my installation document if you want to read more on this process. A sketch of the repo file follows.
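 * A sketch of the repo file, following the RHEL-family layout documented at nginx.org; the releasever '6' is an assumption based on the age of these machines:

# /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/rhel/6/$basearch/
gpgcheck=0
enabled=1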


 * Plan
 * I will again meet with Yashna before class tomorrow, around noon, to try to better understand the diff output so we can run it by Professor Jonas during or after class. That should be the last step.


 * Concerns
 * No real concerns; I just want to get this up and running so I can complete the load balancer. Assuming we get the A-OK tomorrow, we'll still be on schedule.

Week Ending March 5, 2018

 * 27 Feb 2018
 * Task + Results
 * Today was class. Before class, Yashna and I reviewed the installation of Nginx on Rome and compared the diff between snapshot 1 and snapshot 2 (found at /root/Documents/snapshot_02260218*). There were not a lot of changed files, so it seems to be a low-impact installation. We ran it by Jonas during class; he said that since we are not planning to move it to Caesar, he does not need to see the diff, but having a diff is always a good idea. After class, we re-mounted Caesar (mount -a) but did not put the symbolic link (/usr/local) back, as this would override the Nginx install we just did.
 * Note | Current server utilization:


 * Majestix [Software Team/GCC]
 * Rome [Dev Server/Backup Server/Nginx installed]
 * Miraculix [Specialized for something? idk]
 * Asterix [Apparently has LDA?]
 * Will need to make a proper server map and maintenance log.


 * Plan:
 * Yashna and I will work on Thursday to actually set up the functionality of the load balancer. One thing Jonas thought would be useful is creating two pools for the load balancer, allowing us to assign three machines to Team A and another three to Team B. Seems feasible; I just have to figure it out for sure.


 * Concerns:
 * No real concerns at this time, just a lot of work ahead.


 * 03 March 2018 - Multi
 * 02 March 2018


 * Task + Results
 * This Friday, Yashna and I worked on getting Nginx running and also reviewed the possibility of installing Logwatch. The issue with Nginx is that the system's firewall is blocking it. I have used iptables before, but for whatever reason this isn't working. Nginx should literally be a 'turn on and go' type of utility; this issue is putting the project behind. After multiple attempts it seems there might be an issue with SELinux, which I believe is a policy manager. Will need to look into this later. Logwatch is a tool that reviews system logs and reports them back via email; reviewing example outputs pictured online, the tool did not seem to provide a lot of detail. Yashna will continue reviewing different tools.


 * Plan:
 * I will continue to fight with why Nginx can't get up and running due to a port error. iptables does not seem to open the port even when it is explicitly opened.


 * Concern:
 * This issue with iptables/Nginx might take a while.


 * 03 March 2018


 * Task + results
 * After doing a lot of research, it seems that SELinux has enforcing and permissive modes (Enforcing / Permissive); 'sudo /sbin/setenforce Permissive' temporarily disables enforcement, allowing me to test Nginx. This, plus setting Nginx's ports to two different numbers within its config files (8022/8020), seems to have resolved the issue. Nginx can now turn on, and SSH now works locally to the machine on 22 and 8022.


 * Plan:
 * Next, I have to fight with iptables to route 8022 to 8020 (SSH to Nginx) so that Nginx can TCP-balance the connection to all the drones on port 22. It is necessary to have a second port for SSH on Rome so you can decide whether you want to SSH into Rome itself or onto the load balancer to be passed to a drone.


 * Concern:
 * iptables is not playing well with others and is not redirecting 8022 to 8020. I will need to research this more; it seems everyone has a different answer.


 * 04 March 2018
 * Task + Result
 * I have figured out how to get iptables to route 8022 to 8020. The main area of concern is /etc/sysconfig: here live iptables and another firewall config, system-config-firewall. If you want to ensure a port is actually open, make sure it's added to this second one, system-config-firewall, THEN configure iptables, or you'll go mad. The redirect for iptables is: -A PREROUTING -p tcp --dport 8022 -j REDIRECT --to-ports 8020 -- however, this needs to be in the NAT table, not the FILTER table. In the FILTER table, you should ensure the ports are also open:

-A INPUT -p tcp -m state --state NEW -m tcp --dport 8022 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 8020 -j ACCEPT
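 * Putting the pieces together, the relevant fragment of /etc/sysconfig/iptables looks roughly like this (a sketch; rules already present on the system are omitted):

*nat
-A PREROUTING -p tcp --dport 8022 -j REDIRECT --to-ports 8020
COMMIT

*filter
-A INPUT -p tcp -m state --state NEW -m tcp --dport 8022 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 8020 -j ACCEPT
COMMIT

# then reload the rules: service iptables restart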
 * After completing this, I still had to make sure Nginx would run with SELinux enforcement (setenforce) enabled. You would ordinarily run 'cat /var/log/audit/audit.log | audit2allow -M nginx'; however, the audit2allow command does not exist on Rome. After several hours... I rsync'd the audit.log over to Caesar, but called it Nginx.log instead, because Caesar does have the audit2allow tool. (Doing a symbolic link back to Rome for /usr/local didn't work.)
 * Once I had rsynced the file, I ran audit2allow on Caesar, which created a Nginx.pp policy file for SELinux. Rsyncing the Nginx.pp file back to Rome, I then ran 'semodule -i Nginx.pp' to install it. This allows Nginx to run with enforcing on.
 * What does this mean overall? It means the Nginx service can turn on and run locally on the system. Nginx is listening on 8020. system-config-firewall opened port 8022. iptables is accepting and routing traffic from 8022 to 8020. /etc/ssh/sshd_config is accepting traffic on ports 22 and 8022. This has all gotten me to the point where I can start testing the functionality of the TCP round-robin, which is thus far giving me issues; SSH is failing with:

ssh -vvv -p8022 rome
OpenSSH_5.3p1, OpenSSL 1.0.1e-fips 11 Feb 2013
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug2: ssh_connect: needpriv 0
debug1: Connecting to rome [192.168.10.11] port 8022.
debug1: Connection established.
debug1: permanently_set_uid: 0/0
debug1: identity file /root/.ssh/identity type -1
debug1: identity file /root/.ssh/identity-cert type -1
debug1: identity file /root/.ssh/id_rsa type -1
debug1: identity file /root/.ssh/id_rsa-cert type -1
debug1: identity file /root/.ssh/id_dsa type -1
debug1: identity file /root/.ssh/id_dsa-cert type -1
debug1: identity file /root/.ssh/id_ecdsa type -1
debug1: identity file /root/.ssh/id_ecdsa-cert type -1
ssh_exchange_identification: Connection closed by remote host


 * Plan:
 * Figure out why the error above occurs when SSHing to 8022 > route > 8020 > Nginx > (should be going to idefix:22). Not sure how to figure that out yet.


 * Concern:
 * Might be a lot of work for nothing.


 * 05 March 2018
 * Wrote the 'Firewall Procedure - open a port' doc for the Systems Group wiki. Reviewed the emails about wiki docs from Jonas; need to review/update the Systems Group docs accordingly.

Week Ending March 12, 2018

 * March 2018 - multiple entries
 * It has been an exhausting week; I keep forgetting to just add a log...
 * Note: most of my week was fighting Nginx problems that shouldn't have been there -- not really log-worthy, but that's what happened.

..
 * 10 March 2018


 * Task + results:
 * Around 3 AM this morning, I successfully got the TCP load balancer working via Nginx on port 8022, routing from Caesar through Rome to Idefix. This succeeded due to a few factors: I re-ran 'mount -a' on Rome (not sure why it wasn't mounted; maybe from the Software Team's RCS install), then deleted all the Nginx .conf files and copied over the ones from my earlier sandbox testing, then updated my local password on Idefix with 'passwd'. This all made it work. It turns out there are two config files for Nginx, /etc/nginx/nginx.conf and /etc/nginx/conf.d/default.conf; the second, 'default', is unnecessary and actually conflicted with nginx.conf even though it ships with the download. After deleting it and copying in a known-working configuration, everything was gold. -- The reason I had to reset my local password ('passwd') is that there seems to be something wrong with SSH, and/or it doesn't like me.
 * Around 1 PM, Daniel and I had a conference call; Chris was unavailable due to the snow storm knocking out his power, and Yashna was briefly out of state. Overall, it was just pushing through the tasks we had last week: finishing up server mounting and cable wiring, testing data backups, making the TCP LB work, and getting a tool for error logging.


 * Plan:
 * Figure out why I have to reset my local password in order for SSH to load balance. --> It seems we might not have to write a monitoring script, as the tool we are looking at, Nagios, will monitor for us via SNMP as well as collect errors. --> Professor Jonas also wants Chris and me to focus on making sure the data backup system works.


 * Concerns:
 * Not looking forward to fighting with ssh.

..
 * 11 March 2018


 * Task + Results:
 * Starting around 2 PM, I was working to figure out why I had to update the local password for my user on the drones in order for the SSH LB to work. During this process, I had created a symbolic link back to Caesar's /etc/ssh folder, thinking that Rome was just not being seen as a parent device and that if it were (with all the necessary keys), the LB would work. Testing didn't show any improvement, so I went to remove the symbolic link. HERE IS THE ISSUE, READ IT SO AS NOT TO REPEAT IT: when I went to remove the symbolic link, called sshRomePass, I used 'tab' to auto-complete the name, something most Linux users are accustomed to; however, it was my downfall.
 * What I ran was 'rm sshRomePass/' to get rid of the symbolic link. What I should have run was 'rm sshRomePass'. That subtle difference is the difference between removing the symbolic link and deleting what the link points to. As such, I did not remove the link; I had incidentally deleted the contents of Caesar's /etc/ssh folder. (This is also the reason you will now find an ssh.bak folder on Caesar.)
 * This was ultimately fixed with the professor's insights: I copied the SSH files from Obelix over to Caesar and it was functional again. The whole issue only lasted about 30 minutes, with no impact to other users, but it was a small heart attack for me.
 * Having gone through the fray, however, I believe the SSH key setup is indeed the issue with the SSH LB. I will be working with Daniel to make sure a drone can SSH back to Caesar without prompting for a password, as an extension of fixing my boo-boo, but this should hopefully also fix the SSH LB issue I was having that prompted the boo-boo to begin with.


 * Plan:
 * I will go to campus tomorrow to fix Rome and make sure it is no longer looking for the sshRomePass link I had created. I will then work to make sure the drones are able to fetch the SSH keys so they can log in to Caesar. Right now, for instance, Caesar can SSH to Asterix, but Asterix cannot SSH to Caesar. You should rarely need to SSH to Caesar from Asterix anyway, but it also shouldn't be asking for a password.


 * Concerns:
 * This might take longer than desired. SSH is still an annoying and confusing thing to configure, even after having dealt with it for about two years.

..
 * 12 March 2018


 * Task + Results:
 * Today I went to campus and fixed Rome so that it is again using its own SSH files and is no longer patched to Caesar via sshRomePass. This was easily completed, and SSH now works correctly. The SSH LB via Nginx will remain in limbo, hopefully to be picked up by next year's capstone.


 * Plan:
 * We're moving into spring break, so this week will be light; I will focus on documentation.


 * Concerns:
 * None at this time.

..
 * 13 March 2018
 * Task + Results
 * Today I worked on documentation. I added the 'Nginx: What is it and why is it here?' doc and https://foss.unh.edu/projects/images/f/f0/ServerStatus_03132018.pdf, which is an adaptation of the current hardware config with some additional information. I hope to keep this updated and then add a final version at the end of the semester.


 * Plan:
 * I will likely add some more documentation, but ultimately I need to move what I already have to the Information section of the speech project rather than the Systems Group, per Jonas. I also need to come up with a game plan for the Guardians team meeting on Tuesday.


 * Concerns:
 * Nothing much right now; the past week was spring break, so we'll see how everyone fares their first week back.

Week Ending March 26, 2018

 * 20 March 2018
 * Task + results
 * Today we had our first Guardians team meeting at 11 AM. During this meeting we determined who will lead the team; I was voted leader. We then moved into outlining how we as a team will approach the objective: the lowest WER between the two groups. This was involved but beneficial; however, I won't note the details here, as the other team can read these logs too. Lastly, we delegated tasks to the various team members and established a timeframe for future meetings.
 * Then we had another status meeting/class, which went relatively smoothly since last week was spring break. There was some confusion as to the delegation of the various servers, as one server still needs to be stood up (built). The progression will be: Guardians has an LDA machine and the clone machine; the Systems Group will clone the LDA machine and give it to the Avengers in trade for one of their non-LDA machines.
 * After class, Chris and I worked on identifying where and how the backup system was operating. The backup server is the computer on the right-hand side of the Tech Consultant Room, 124 I think. This computer is basically only running a hard drive with a limited operating system, ESXi. To interface with this machine, you must SSH to Caesar > SSH to Rome > SSH to 172.16.0.22 as root, with the same root password as the rest of the machines.
 * The configuration appears to be that the backup server contains virtual hard-drive volumes, and those volumes are somehow connected to Rome. It is confusing how they are linked, but I believe they are. The backup server should simply be mounted to Rome just as Caesar is mounted to Rome. It should look like:


 * /mnt/main/local (<Caesar)
 * /mnt/backup (<Backup server)
 * If we cannot figure out what the backup solution is in its current state, perhaps we can just install Ubuntu onto the backup server and mount it. Jonas kind of spoke to this in passing, but we'll want to check.


 * Plan:
 * Chris and I will continue to look into the backup machine later this week. I will try to get a better grasp on ESXi and how to interface with it. I also need to look into the greater scope of the Guardians team to plan out the rest of the tasks and objectives, move the documentation from the Systems Group to the overall project's information page, and work with Dan B. on determining a good meeting software for the Guardians team meeting on Saturday.


 * Concerns:
 * The data backup machine/configuration is a bit of a mess. I don't know who set it up this way or why, but I would never have recommended it. Also, I'm not sure why they used the IP address, which appears to be dynamic, rather than the DNS hostname; you will never have a reliable backup method that way unless the IP is static. -- The Guardians team has a large scope; I'm not really concerned, more just taking a deep breath before jumping in.


 * 22 March 2018
 * Task + Results
 * Today I recapped some of the talking points from the team meeting we had on Tuesday in Discord, and added some new ones. Upon request, I sent this out in an email with more detailed assignments. Going off an idea of Wesley's, I tasked a third of the team with a sub-project which will hopefully pay off. There were other organizational tasks completed today for the Guardians team, but nothing that can be spoken of openly.
 * I also worked out with the Systems team who will do the server clone and who will do the data backups. As this task has been reassigned a few times for various reasons, I wanted to hammer out who was doing what in the most efficient way.


 * Plan
 * The plan is to brush up on the speech recognition process so I have a handle on the whole scope of what the team is doing and can manage the various tasks. I will also look more into ESXi and the backup, and will update some of the Guardians team's secret weapons.


 * Concerns
 * none at this time.


 * 24 March 2018
 * Task + Results
 * Yesterday (Friday) was mostly research and providing some starting material for others on the Guardians team.
 * Today we had our team meeting. I think it went well: we recapped our progress, identified where we are heading, delegated tasks, and set up a reporting schedule to keep us all moving forward. Some really good information is being generated by the team. Once the 'secret weapon' has been fulfilled, we will be able to move forward rapidly.


 * Plan
 * Move forward on secret things in our secret way; secret, secret, secret.
 * Send out email recap of meeting
 * Tomorrow I will continue to read into the data backups; then on Monday Chris and I will work on fixing them.


 * Concerns
 * Actually, looking kinda good.


 * 25/26 March 2018
 * Task + Results
 * These two days were primarily research, organization, and communication with the team. Not a lot of tangible output otherwise.


 * Plan:
 * Work with Chris on the data backup.


 * Concern:
 * None at this time.

Week Ending April 2, 2018

 * 27 March 2018
 * Task + Results
 * Today was class. Before class I conferred with Hannah (Technical Consultant) on whether they would need to use the current backup machine for anything else (which would require us to keep using a hypervisor), per Jonas. She said they did not, which allowed Chris and me to wipe the backup machine and get rid of the vSphere stuff. After class, Chris and I completed a fresh Red Hat installation on the backup server and got it set up. The only remaining tasks are to mount the backup to Rome and register Red Hat; otherwise, Rome can see the backup and the backup can see Rome just fine.


 * Plan:
 * Meet with Chris on Friday to continue work on the backup.


 * Concerns:
 * None, really.


 * 29 March 2018
 * Task + Results
 * Today I ran a Train on Miraculix and Majestix to get a baseline and brush up on the Train/LM/Decode process. 0307/006 and 0307/007 both failed:

lm_create.pl trans_parsed
sphinx_lm_convert: error while loading shared libraries: libsphinxbase.so.1: cannot open shared object file: No such file or directory

nohup run_decode.pl 0307/008 0307/008 1000 &
more decode.log
/usr/local/bin/sphinx3_decode: error while loading shared libraries: libs3decoder.so.0: cannot open shared object file: No such file or directory

sclite -r 008_train.trans -h hyp.trans -i swb >> scoring.log
Segmentation fault (core dumped)
 * After testing several more configurations I continued to get these errors. This took basically the entire day.
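 * For the record, the fix that eventually applied (see the 30 March entry below) was a loader-path problem: /usr/local/lib needed to be in the library path and the cache rebuilt. Roughly, as root:

echo "/usr/local/lib" > /etc/ld.so.conf.d/usrlocal.conf   # file name is illustrative
ldconfig                                                  # rebuild the shared-library cache
ldconfig -p | grep libsphinxbase                          # confirm the library is now found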


 * Plan:
 * Work on the backup server tomorrow with Chris, continue to look into the above errors.


 * Concerns:
 * These errors are persistent on both of the Guardians' servers with every command tested. Not sure what's going on here.


 * 30 March 2018
 * Task+Results
 * Met with Chris on campus around 2:30. Operating on the thought that the backup mount isn't working because of missing NFS software, we figured we needed to complete Red Hat registration before fixing the backup. Chris worked to determine the steps to complete registration while I worked to establish trusted SSH between Rome and capstonebackup by creating a key with ssh-keygen and using ssh-copy-id (a sketch follows this entry). Around 5:30-ish, Jonas suggested we might be able to pull the installation package we need from Rome; I'll look into this while Chris continues to dig into registration and talks with Bruce Johnson, MIT Security I believe. -- Afterwards, I worked for a few hours trying to figure out the decode errors (above). After I thought I had determined the issue, I ran it by Jonas, who pointed out it was actually an issue with the Linux library path to /usr/local/lib, which needed to be rehashed. I will write more on this later and add it to the Information page > Decode > known issues.
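 * The trusted-SSH setup boils down to a key pair plus ssh-copy-id; a sketch of the steps (defaults assumed):

# on Rome, as root
ssh-keygen -t rsa                  # accept defaults; empty passphrase so backups can run unattended
ssh-copy-id root@capstonebackup    # appends the public key to capstonebackup's authorized_keys
ssh root@capstonebackup hostname   # should now return without a password prompt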


 * Plan:
 * document a bunch of stuff, meeting with Guardians tomorrow night, finish backup server, finish server map ~ will update this later, going to sleep.


 * Concerns:
 * none at this time.


 * 01 April 2018
 * Task + Results
 * Today I worked on getting the data backup server mounted to Rome. After doing some research I found that the mount point has to be announced from the backup server for Rome to find it; this is done via /etc/exports. However, you must also open the firewall ports in iptables for this to function. I exported the iptables config from Caesar, imported it into iptables on capstonebackup, reloaded iptables, and 'magic': Rome can now mount /mnt/backup_solution from the capstonebackup server. The pieces roughly look like the sketch below.
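 * A hedged sketch of the pieces involved (the actual firewall rules were copied from Caesar; the export options and port list here are illustrative):

# /etc/exports on capstonebackup: announce the mount point to Rome
/mnt/backup_solution    rome(rw,sync,no_root_squash)

# re-read the exports and open the standard NFS ports in iptables
exportfs -ra
iptables -A INPUT -p tcp --dport 2049 -j ACCEPT    # nfsd
iptables -A INPUT -p tcp --dport 111 -j ACCEPT     # rpcbind

# on Rome: mount the exported directory
mount -t nfs capstonebackup:/mnt/backup_solution /mnt/backup_solution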


 * Plan
 * Next I need to get rsnapshot to actually back up from Caesar's /mnt/main to capstonebackup's /mnt/backup_solution, both of which are mounted on Rome as the go-between.


 * Concerns
 * None at this time


 * 02 April 2018
 * Task + Results
 * Today I worked on rsnapshot. This should back up from Caesar to capstonebackup at set intervals into set folders, which I seem to have configured correctly, as it is not throwing any errors. However, it also isn't working, so I will need to investigate further to understand why.
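 * Two built-in checks that should help here (standard rsnapshot options, noted as a reminder rather than something I've run yet; the interval name is whatever the conf defines):

rsnapshot configtest    # validate /etc/rsnapshot.conf syntax
rsnapshot -t daily      # dry run: print the commands it would execute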


 * Plan:
 * Determine why rsnapshot doesn’t error but also doesn’t work.
 * Continue on guardians tasks


 * Concerns:
 * Rsnapshot is being weird.

Week Ending April 9, 2018
0307	014	03/31/18	Camden
0307	017	04/01/18	Camden
0307	016	03/31/18	Camden
0307	018	04/01/18	Camden
0307	019	04/01/18	Camden
0307	020	04/02/18	Camden


 * 03 April 2018
 * Task + Results
 * Today I spent time working on tasks for the Guardians Team.


 * Plan
 * Continue working on tasks and look into rsnapshot.


 * Concerns
 * None


 * 05 April 2018
 * Task + Results
 * Today I spent time writing up instructions and next steps for the Guardians Team in lieu of a meeting this Saturday.


 * Plan
 * Continue to work on tasks, look into rsnapshot.


 * Concerns
 * None


 * 06 April 2018
 * Task + Results
 * Today I figured out why rsnapshot wasn't working correctly. The default directory that rsnapshot saves to is a hidden directory, /.rsnapshot; you have to update /etc/rsnapshot.conf to point the backup destination to a new location. I set this to /mnt/backup_solution and now the backup system works correctly. I also created a cron job (crontab -e) under the root user on Rome to execute the backup. Right now, however, it is only backing up /mnt/main/backup_solution/backupconfirmation.txt, which is maintained by a cron job on Caesar that appends the date and time to a text file once an hour so you can determine whether the backup is current. The relevant lines are sketched below.
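 * A hedged sketch of the two pieces (only the relevant lines; the interval definitions are omitted, and the backup-point name "caesar/" is an assumption; note rsnapshot requires tabs, not spaces, between fields):

# /etc/rsnapshot.conf on Rome, where both mounts are visible
snapshot_root	/mnt/backup_solution/
backup	/mnt/main/backup_solution/backupconfirmation.txt	caesar/

# root crontab on Rome (crontab -e): fire the backup at 3 AM Monday
0 3 * * 1	/usr/bin/rsnapshot daily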


 * Plan:
 * I have set the cron job to fire at 3 AM on Monday, so I'm going to let the system do its own thing and check back on it.

 * Concerns:
 * None at this time.

0309	009	04/06/18	Camden
0309	010	04/07/18	Camden
0309	012	04/07/18	Camden
0309	016	04/08/18	Camden
0309	017	04/08/18	Camden
0309	021	04/09/18	Camden
0309	022	04/09/18	Camden
0309	023	04/09/18	Camden
0309	024	04/09/18	Camden

Week Ending April 16, 2018
0309	049	04/16/18	Camden

This week I have been feeling miserable so not a whole lot got done.


 * 12 April 2018
 * Task + Results
 * Today I had intended to do further work on the backup server; however, I found that all the drones had been shut off due to yet another power outage. Jonas and I wired in a 3rd power supply and split the servers up. Several hours later Dan R. joined and we distributed the power load of the servers more evenly; there is now roughly 1300 watts per outlet, and Rome and Caesar each have two power cords, each on a different outlet. The switch should be on a small battery backup as well (I believe), so even if the drones go down you should still be able to access Rome and Caesar. The backup server is in another room, so it should not be affected.
 * Also discussed with Jonas how he would like the data backup server to actually do its backup. Working with rsnapshot, its default is a rolling backup, where it saves the past X timeframe: say it keeps the last week, then saves another week and another until you have, say, 4 in a row, then drops the oldest one. We don't want this; we want a differential/incremental backup, where we do one massive backup and then once a week it checks what has changed between that first backup and the current file structure, backing up only what is different. This saves space and is more of a true backup (see the rsync sketch below).
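 * For reference, one common way to get this behavior with plain rsync is --link-dest, which hard-links unchanged files against the previous snapshot so each run only stores what changed. A hedged sketch (paths from our setup, snapshot names hypothetical):

# first full backup
rsync -a /mnt/main/ /mnt/backup_solution/snap.0/
# later runs: unchanged files hard-link back to snap.0, only changes are copied
rsync -a --link-dest=/mnt/backup_solution/snap.0 /mnt/main/ /mnt/backup_solution/snap.1/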


 * Plan:
 * Today's power issue kind of ate all the time I was planning to use for my other tasks, so I will have to reassess.


 * Concerns:
 * None at this time


 * 13 April 2018
 * Task + Results
 * Jonas wanted the backup server renamed from the generic "capstonebackup" to "Lutetia" -- reconfigured the backup server with the new name, updated all core files/configs with the corrected name, and ensured the current backup-testing environment was still working.
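 * Assuming a RHEL 7 box, the rename boils down to something like the following (a sketch from memory, not the exact commands I ran):

hostnamectl set-hostname lutetia    # writes the new name to /etc/hostname
vi /etc/hosts                       # fix any entries still pointing at capstonebackup
# then re-check anything referencing the old name: exports, SSH configs, rsnapshot.conf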


 * Plan:
 * I need to look into whether rsnapshot can do differential backups or not, otherwise I might just use rsync.


 * Concerns:
 * None at this time


 * 13-15 April 2018 [Plus or minus a few days]
 * LDA Fiasco
 * The entire class, including Jonas, gave some focus to why LDA hasn't been running from day 1. I am not going to go into details, but it is working now.


 * Needed to back off of capstone work for a few days; I've been overdriving myself, so sorry if the last week or so of logs are not as clear as they could be.

Week Ending April 23, 2018
0309	065	04/21/18	Camden


 * 19-20 April 2018
 * Task + Results
 * These days I spent researching the backup solution, the differences between incremental and rolling backups, and whether rsnapshot was capable of incremental/differential backups. It does not appear to be, so I will be looking at rsync next. I also spent time researching and testing items for the Guardians Team.
 * The 20th was also the CCSCNE Conference, so I spent some time getting ready for that; I was there from 4 PM until 9 PM.


 * Plan:
 * Disassemble Rsnapshot config, assemble rsync config for data backup, test new data backup. If successful, update documentation. Continue Guardians work.


 * Concerns:
 * None


 * 22-23 April 2018
 * Task + Results
 * I have been looking at how to make the backup system a differential/incremental backup rather than a rolling backup. Rsnapshot appears to only do rolling backups, therefore I changed to rsync, and so far I seem to be successful in doing a differential backup; however, it is very slow. -- Command to run the rsync backup:

nohup rsync -av /mnt/main/Exp/0309/ /mnt/backup_solution/rsyncTest/ && rsync -av /mnt/main/backup_solution/backupconfirmation.txt /mnt/backup_solution/rsyncTest/ &


 * Here nohup keeps the first rsync running even if the terminal closes, && chains the second rsync after it, and the trailing & backgrounds the whole list so the terminal stays usable. You can run "tail -f nohup.out" to watch the output manually. I may add a redirect to backupoutput.txt as a way to log the information later (sketch below).
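 * Note that as written, nohup only covers the first rsync; to protect both and send everything to a chosen log file instead of nohup.out, the pair can be wrapped in a subshell (a hedged sketch, not yet tested):

nohup sh -c 'rsync -av /mnt/main/Exp/0309/ /mnt/backup_solution/rsyncTest/ && rsync -av /mnt/main/backup_solution/backupconfirmation.txt /mnt/backup_solution/rsyncTest/' >> backupoutput.txt 2>&1 &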


 * Plan:
 * Figure out how to make the rsync backup go faster; right now it transfers about 1 file every 2 seconds. Fine for small sets since there is little overhead, but if we're backing up 1.1 terabytes we'll be here for years.


 * Concern:
 * Just the time it takes to backup.

Week Ending April 30, 2018

 * 25 April 2018
 * Task + Results
 * Wrote a small script to capture the start & stop time of a decode process (below). I purposefully left this generic so it could be edited in the future, & I didn't want to waste time:

cat /mnt/main/scripts/user/testScripts/realtimelog.sh

#!/bin/bash
# Author: Camden C. Marble
# April 25th 2018

# Capture the pid to watch and confirm it exists
read -p "Enter Sphinx3_decode pid to watch: " pid; echo " "; echo " "; echo $(ps r -e | grep "$pid");

# Warn before logging starts
echo " "; echo " "; echo "start logging in 5 seconds..."; sleep 5;

# Start process: record the start timestamp
echo $(date) > $(pwd)/realtime.start; echo "realtime logging started!";

# Find the actual pid number in the system and monitor it
getProcess=$(ps r -e | grep -o "$pid");

# Debugging: uncomment echo
# echo $getProcess;

# Poll once a minute while the process is still alive
while [ "$getProcess" = "$pid" ]; do echo "PID CPU% $(ps -F -p "$pid" | awk '{print $4}' | tail -1)"; echo $(date); echo "checking again in 60 seconds"; sleep 60; getProcess=$(ps r -e | grep -o "$pid"); done

# End process when getProcess no longer has an equal pid
echo "Sphinx3_Decode Process ended at $(date)"; echo $(date) > $(pwd)/realtime.stop;

Two files are output into the directory you started in, called realtime.start & realtime.stop.
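 * To turn the two timestamps into an elapsed wall-clock time, something like the following works (a hypothetical helper using GNU date, not part of the script):

start=$(date -d "$(cat realtime.start)" +%s)
stop=$(date -d "$(cat realtime.stop)" +%s)
echo "decode ran for $((stop - start)) seconds"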


 * Plan
 * The above script is necessary for the final eval.


 * Concerns
 * None at this time


 * 27 April 2018
 * Task + Results
 * Worked on documentation, the data backup, and Guardians team information. Data backup progress is likely concluded for the remainder of the semester. Information on the current status is here: https://foss.unh.edu/projects/index.php/Speech:Spring_2018_Systems_Group#System_Configuration but the reason for concluding is that the partition needs to be extended to utilize the full 4 TB. It shouldn't be a big deal, but this plus the slow rsyncs likely won't be resolved in the next week and a half. I will still add whatever information I find, though.


 * Plan
 * Look into making the rsyncs run faster; continue with Guardians Team information.


 * Concerns
 * None at this time.


 * 28-30th April 2018
 * Tasks + Results
 * Worked on some more documentation and Guardians work. -> As far as the backup system goes, one way to get the rsyncs to go faster is to create a Perl script that parallelizes the rsyncs to run together (a shell sketch of the idea is below).
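 * A hedged sketch of the idea using xargs instead of Perl: run one rsync per experiment directory, four at a time (paths from our setup, job count arbitrary):

ls -d /mnt/main/Exp/* | xargs -P4 -I{} rsync -a {} /mnt/backup_solution/rsyncTest/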


 * Plan
 * Write report information for Guardians and for the Systems Group final report. Finish the 300hr stuff for Guardians.


 * Concerns
 * Time is running out.

Week Ending May 7, 2018

 * Task:


 * Results:


 * Plan:


 * Concerns: