
Day 11 (7/22/19): Partial Rehearsal and Out-Of-Distribution Recognition

This morning I reviewed the incremental learning models I started training over the weekend. Here are the results for five different methods.

*Accuracy is computed as the average of the batch 1 and batch 2 accuracies to represent the overall test set. Omega is the average ratio between the accuracy of the incremental model and that of the offline model (offline accuracy with default hyperparameters: 0.7330).
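For concreteness, here is a minimal sketch of how these two metrics are computed (the function names are my own):

OFFLINE_ACC = 0.7330  # offline model with default hyperparameters

def overall_accuracy(batch1_acc, batch2_acc):
    # average of the batch 1 and batch 2 test accuracies
    return (batch1_acc + batch2_acc) / 2

def omega(batch1_acc, batch2_acc):
    # average per-batch ratio of incremental accuracy to offline accuracy
    return (batch1_acc / OFFLINE_ACC + batch2_acc / OFFLINE_ACC) / 2

# e.g., for the no-regularization run below:
# overall_accuracy(0.0241, 0.7702) -> 0.3972; omega(0.0241, 0.7702) -> 0.5418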

No Regularization:
Batch 1--> Testing 45/45 Accuracy: 0.0241
Batch 2--> Testing 46/46 Accuracy: 0.7702
Accuracy: 0.3972
Omega: 0.5418

L2 Regularization:
Batch 1--> Testing 45/45 Accuracy: 0.0132
Batch 2--> Testing 46/46 Accuracy: 0.7868
Accuracy: 0.4000
Omega: 0.5457

L2SP Regularization:
Batch 1--> Testing 47/47 Accuracy: 0.0000
Batch 2--> Testing 45/45 Accuracy: 0.8185
Accuracy: 0.4093
Omega: 0.5583


Pseudo-Rehearsal (w/ random sampling of ten images per batch-1 class; see the sketch after these results):
Batch 1--> Testing 45/45 Accuracy: 0.4166
Batch 2--> Testing 46/46 Accuracy: 0.7988
Accuracy: 0.6077
Omega: 0.8291


Full Rehearsal:
Batch 1--> Testing 45/45 Accuracy: 0.6499
Batch 2--> Testing 46/46 Accuracy: 0.7496
Accuracy: 0.6997
Omega: 0.9546

These results are consistent with what was expected: the Omega value of full rehearsal > partial rehearsal > simple regularization.
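To make the comparison concrete, here is a rough sketch of the two ingredients that distinguish these runs, in PyTorch-style Python. The names are my own illustration rather than my exact training code: l2sp_penalty implements the L2SP idea (penalizing drift from the batch-1 starting-point weights instead of from zero, as plain L2 weight decay does), and build_rehearsal_buffer does the random ten-images-per-class sampling used for partial rehearsal (full rehearsal simply keeps all of batch 1).

import random
from collections import defaultdict

def l2sp_penalty(model, start_params, alpha=0.01):
    # L2SP: penalize squared distance from the pretrained (batch-1)
    # weights rather than squared distance from zero (plain L2).
    penalty = 0.0
    for name, p in model.named_parameters():
        penalty = penalty + (p - start_params[name]).pow(2).sum()
    return alpha * penalty

def build_rehearsal_buffer(batch1_dataset, images_per_class=10):
    # Randomly keep a fixed number of batch-1 examples per class;
    # these get mixed into the batch-2 training data.
    by_class = defaultdict(list)
    for image, label in batch1_dataset:
        by_class[label].append((image, label))
    buffer = []
    for examples in by_class.values():
        buffer.extend(random.sample(examples, min(images_per_class, len(examples))))
    return buffer

# start_params would be snapshotted before fine-tuning on batch 2:
# start_params = {n: p.detach().clone() for n, p in model.named_parameters()}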
________________________________________________________________________


After speaking with Dr. Kanan about my project and progress, I spent the rest of the afternoon reading papers and going through presentations created in the kLab. The topics ranged from evaluating bounded classification methods for CNNs (a.k.a. open-set or out-of-distribution recognition) to new techniques for streaming learning.

Tomorrow I will experiment with some open-set recognition methods (similar to how I experimented with the incremental learning techniques), and I'm excited to see how that turns out.
