Today I began some experiments that I hope to include in my final project presentation. The main objective right now is to figure out which incremental learning strategies yield the best out-of-distribution (OOD) performance. For today's experiments, I trained all layers of the models in batches of 20 classes (10 batches for the 200 species in the CUB200 dataset) and evaluated OOD detection using a baseline softmax-thresholding method. The performance metrics I hope to obtain are Omega alpha (how accurate the model is relative to an offline model trained on all classes at once) and Omega OOD (how well the model detects novel classes relative to that offline model).
*These models are still training, so I should have the results in the morning.
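For my own reference, the evaluation boils down to something like the sketch below. This is only an illustration of the idea, not the actual experiment code: the function names, the 0.5 threshold, and the example numbers are placeholders I made up.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class dimension.
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def ood_flags(logits, threshold=0.5):
    # Baseline softmax thresholding: a sample is flagged as OOD (novel)
    # when its maximum softmax probability falls below the threshold.
    return softmax(logits).max(axis=1) < threshold

def omega_ratio(incremental_scores, offline_score):
    # Omega-style metric: mean of the incremental model's score after each
    # class batch, normalized by the offline (all-classes-at-once) model.
    return np.mean(np.array(incremental_scores) / offline_score)

# Placeholder example: accuracy after each of the 10 class batches,
# compared against a made-up offline accuracy of 0.65.
acc_per_batch = [0.62, 0.60, 0.58, 0.57, 0.55, 0.54, 0.53, 0.52, 0.51, 0.50]
print(omega_ratio(acc_per_batch, offline_score=0.65))  # Omega alpha
```

Omega OOD would be computed the same way, just with a novelty-detection score (e.g., accuracy at flagging unseen classes via the threshold) in place of classification accuracy.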
During lunch I went to a seminar on ASL, specifically how important it is here at RIT. I found the talk very interesting and even learned a few signs that might be useful someday.
Tomorrow I hope to continue work on this project and expand the experiment to include more complex models and different OOD methods.