Abstract
Many models have been developed to achieve strong results on classification tasks. We present a novel framework that augments these models to reach even higher classification accuracy. Our framework operates flexibly on top of an existing model and uses a reinforcement learning approach to learn to generate new, difficult training samples that further refine the classifier. By producing harder and more meaningful data samples, our framework helps the model learn meaningful relationships in the data for its classification task. It therefore augments models during training rather than operating on pre-trained classifiers. Our experiments show that the framework improves models' classification accuracy, and our ablation studies demonstrate the effect of tuning its components. Finally, we discuss possible improvements to the framework and directions for future work.