# TabNet: Attentive Interpretable Tabular Learning

> The paper that first introduced [TabNet](https://wiki.g15e.com/pages/TabNet.txt).

https://arxiv.org/abs/1908.07442

## Abstract

> We propose a novel high-performance and interpretable canonical deep tabular data learning architecture, [TabNet](https://wiki.g15e.com/pages/TabNet.txt). TabNet uses sequential attention to choose which features to reason from at each decision step, enabling interpretability and more efficient learning as the learning capacity is used for the most salient features. We demonstrate that TabNet outperforms other [neural network](https://wiki.g15e.com/pages/Artificial%20neural%20network.txt) and decision tree variants on a wide range of non-performance-saturated tabular datasets and yields interpretable feature attributions plus insights into the global model behavior. Finally, for the first time to our knowledge, we demonstrate self-supervised learning for tabular data, significantly improving performance with unsupervised representation learning when unlabeled data is abundant.
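The abstract's key mechanism is sequential attention: at each decision step, a sparse mask selects which features the step reasons from, and a relaxation factor discourages reusing the same features at later steps. Below is a minimal NumPy sketch of that masking loop, not the paper's full architecture: the per-step feature logits are given as inputs (in TabNet they come from a learned attentive transformer), and `sparsemax` and the prior-scale update with `gamma` follow the paper's Eq. for the mask update. Function names and shapes here are illustrative assumptions.

```python
import numpy as np

def sparsemax(z):
    # Sparsemax (Martins & Astudillo, 2016): a softmax-like projection
    # that can assign exactly zero weight to unimportant features.
    z_sorted = np.sort(z)[::-1]
    k = np.arange(1, len(z) + 1)
    cumsum = np.cumsum(z_sorted)
    support = z_sorted + 1.0 / k > cumsum / k
    k_max = k[support][-1]
    tau = (cumsum[support][-1] - 1.0) / k_max
    return np.maximum(z - tau, 0.0)

def sequential_masks(feature_logits, gamma=1.3):
    """Toy sketch of TabNet-style sequential attentive feature selection.

    feature_logits: (n_steps, n_features) scores; in TabNet these are
    produced by a learned attentive transformer, here they are inputs.
    gamma: relaxation parameter; gamma=1 forces each feature to be used
    in at most one step, larger values allow reuse across steps.
    """
    n_steps, n_features = feature_logits.shape
    prior = np.ones(n_features)  # how much each feature may still be used
    masks = []
    for step in range(n_steps):
        mask = sparsemax(prior * feature_logits[step])  # sparse selection
        masks.append(mask)
        prior = prior * (gamma - mask)  # down-weight already-used features
    return np.array(masks)
```

Each row of the returned array is a per-step feature mask that sums to 1 and is typically sparse, which is what makes the step-wise attributions directly interpretable.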