Deep neural networks, in particular recurrent neural networks (RNNs) and convolutional neural networks (CNNs), are the dominant approach for many natural language processing tasks. While RNNs can capture dependencies in sequences of arbitrary length, CNNs are well suited to extracting position-invariant features. In this study, a CNN model incorporating a gate mechanism of the kind typically used in RNNs is adapted to text classification tasks. The incorporated gate mechanism allows the CNN to better select which features or words are relevant for predicting the corresponding class. Experiments on several large datasets show that introducing a gate mechanism into CNNs can improve accuracy on text classification tasks such as sentiment classification, topic classification, and news categorization.
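To make the idea concrete, the following is a minimal PyTorch sketch of one common form of convolutional gating (a sigmoid-gated branch that modulates the convolutional features, in the style of gated linear units), not the authors' exact architecture; all layer sizes, names, and the max-over-time pooling choice here are illustrative assumptions.

import torch
import torch.nn as nn

class GatedConvClassifier(nn.Module):
    """Illustrative text classifier with a gated 1-D convolution."""

    def __init__(self, vocab_size, embed_dim=128, num_filters=100,
                 kernel_size=3, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # Two parallel convolutions: one produces features, the other a gate.
        self.conv = nn.Conv1d(embed_dim, num_filters, kernel_size, padding=1)
        self.gate = nn.Conv1d(embed_dim, num_filters, kernel_size, padding=1)
        self.fc = nn.Linear(num_filters, num_classes)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer word indices
        x = self.embedding(token_ids).transpose(1, 2)   # (batch, embed_dim, seq_len)
        # The sigmoid gate decides how much of each convolved feature
        # (i.e., each n-gram detector at each position) passes through.
        h = self.conv(x) * torch.sigmoid(self.gate(x))  # (batch, num_filters, seq_len)
        h = torch.max(h, dim=2).values                  # max-over-time pooling
        return self.fc(h)                               # class logits

model = GatedConvClassifier(vocab_size=10000, num_classes=4)
logits = model(torch.randint(0, 10000, (8, 50)))  # 8 sequences of length 50
print(logits.shape)  # torch.Size([8, 4])

The gate output lies in (0, 1) per feature and position, so it can suppress convolutional responses to words that are uninformative for the class while letting discriminative n-gram features pass through.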
Acknowledgements
This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2019R1F1A1058548).