Day 30 (8/16/19): Final Presentations

Today we gave our final presentations, and everyone did a great job. I would like to thank everyone who made this experience possible! I'm very grateful to have had the opportunity to work on such interesting research with such amazing people this summer.

Day 29 (8/15/19): Final Day Before Presentations

Most of today was also spent practicing and editing my presentation to make it as professional as I can. I'm really looking forward to the opportunity to present my work to faculty and friends tomorrow. Here is a link to the slides for my final presentation: Novelty Detection in Streaming Learning using Neural Networks

Day 28 (8/14/19): Presentation Dry Run

In the morning, all of us interns got the chance to practice our presentations in front of each other in the auditorium. I was pretty happy with how mine went overall, but the experience was definitely valuable for identifying typos and slight adjustments that should be made. Throughout the rest of the day, I worked on implementing these changes and cleaning up a few plots that I want to include for Friday.

Day 27 (8/13/19): Improving Presentation Plots

Today I practiced my presentation further and also improved the graphs to better communicate my results. The line graphs now show the results after each batch of training, so you can see the trends in accuracy and OOD detection over time. Lastly, I added a bar chart at the end of the presentation to summarize my overall results, complementing the spider chart.

Day 26 (8/12/19): Presentation Revisions

Today was very useful for making revisions and edits to my presentation. I ran through it in front of my lab this morning and got lots of helpful feedback on how to make it more accessible to a general audience (eliminating jargon). Every day I am becoming more and more confident with the talk, and I'm looking forward to presenting on Friday! Furthermore, I learned today that my RIT computer account/email will stay active for a few months after the internship ends. This will allow me to continue communicating with the lab via Slack and to help review and write a research paper including some of the work I have pursued over the past six weeks. We hope to submit this paper for publication at a conference in the fall (possibly AAAI).

Day 25 (8/9/19): Finishing Presentation

Today I made a lot of progress finishing up my presentation. I feel like we developed an interesting story to tell around the data we collected from the experiments, and I am excited to get a chance to share my results. Much of the beginning of the presentation is spent explaining high-level concepts such as machine learning and deep learning, so I will have a better idea of what I need to include after my meeting with Joe and Amy on Monday. I will keep practicing my presentation over the weekend and possibly include more results from the iCaRL and MLP w/ EWC models if I can get them trained. Below I have included a visualization of one of the most important results from my project. Notice how the SLDA w/ Mahalanobis model outperforms the other models in accuracy and OOD recognition combined (the more area a model covers in the spider plot, the better it performed overall).

Day 24 (8/8/19): Multilayer Perceptron Experiment

I continued gathering more results for my presentation today, and the data table is coming along nicely. We can see a significant trend: using Mahalanobis distance instead of Baseline Thresholding recovers much of the OOD recognition that is lost with streaming or incremental models. The SLDA model appears to be a lightweight, accurate streaming model which, paired with Mahalanobis, could be useful as an embedded agent in the real world. To demonstrate catastrophic forgetting, I ran five experiments and averaged the results for a simple incrementally trained MLP. As expected, the model failed miserably, achieving only about 1% of the accuracy of the offline model. I included it only to show why other forms of streaming and incremental models are necessary for developing lifelong learning agents. A diagram of a simple multilayer perceptron.
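For readers curious what Mahalanobis-based OOD scoring looks like in practice, here is a minimal sketch of the general idea in NumPy. This is not the code from my experiments; the function names are illustrative, and it assumes per-class Gaussian means with a single tied (pooled) covariance over the feature space, which is the common setup for this kind of detector.

```python
import numpy as np

def fit_class_gaussians(features, labels):
    """Fit a per-class mean and one tied (pooled) covariance over features.

    features: (n_samples, n_dims) array of feature vectors
    labels:   (n_samples,) array of integer class labels
    Returns the class means and the inverse of the shared covariance.
    """
    classes = np.unique(labels)
    means = {c: features[labels == c].mean(axis=0) for c in classes}
    # Pool the within-class deviations to estimate one shared covariance.
    centered = np.vstack([features[labels == c] - means[c] for c in classes])
    cov = np.cov(centered, rowvar=False)
    # Small ridge term keeps the matrix invertible.
    cov += 1e-6 * np.eye(features.shape[1])
    return means, np.linalg.inv(cov)

def mahalanobis_ood_score(x, means, cov_inv):
    """Distance from x to the nearest class mean.

    Small score -> x looks in-distribution; large score -> likely OOD.
    A threshold on this score gives the accept/reject decision.
    """
    dists = [np.sqrt((x - m) @ cov_inv @ (x - m)) for m in means.values()]
    return min(dists)
```

A sample far from every class mean (in the whitened feature space) gets a large score, so thresholding this score flags OOD inputs; the threshold itself is typically tuned on held-out in-distribution data.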