
Day 26 (8/12/19): Presentation Revisions

Today was very useful for making revisions to my presentation. I ran through it in front of my lab this morning and got lots of helpful feedback on how to make it more accessible to a general audience (mainly by eliminating jargon). I am becoming more confident with the talk every day, and I'm looking forward to presenting on Friday!

Furthermore, I learned today that my RIT computer account/email will stay active for a few months after the internship ends. This will allow me to continue communicating with the lab via Slack and to help review and write a research paper that includes some of the work I have pursued over the past six weeks. We hope to submit this paper for publication at a conference in the fall (possibly AAAI).

