Performance analysis is essential for improving classification models. However, existing performance analysis tools do not provide actionable insights, such as the causes of misclassification. Machine learning practitioners face difficulties such as prioritizing among candidate models and examining confusion between classes. In addition, existing tools that provide feature-level analysis are difficult to apply to image classification problems. To address these difficulties, we present an interactive visual analytics system for diagnosing the performance of multiclass classification models. Our system enables users to compare multiple models, identify their weaknesses, and obtain actionable insights for improving them. The visualization consists of three views that analyze performance at the class, confusion, and instance levels. We demonstrate our system on the MNIST handwritten digits dataset.
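As a minimal illustration of the class- and confusion-level analyses the abstract mentions (not the system itself), the sketch below builds a confusion matrix from true and predicted labels, derives per-class recall (a class-level view), and lists the largest off-diagonal entries (the most frequent confusions). All function names and the toy labels are invented for this example.

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Count how often each true class (rows) is predicted as each class (columns)."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def top_confusions(cm, k=3):
    """Return the k largest off-diagonal cells as (true, predicted, count) triples."""
    off = cm.copy()
    np.fill_diagonal(off, 0)  # ignore correct predictions
    flat = np.argsort(off, axis=None)[::-1][:k]
    return [(i // cm.shape[1], i % cm.shape[1], int(off.flat[i])) for i in flat]

# Toy example with 3 classes (MNIST would use 10 digit classes).
y_true = [0, 0, 1, 1, 1, 2, 2, 2, 2, 0]
y_pred = [0, 1, 1, 1, 2, 2, 2, 1, 2, 0]
cm = confusion_matrix(y_true, y_pred, 3)
per_class_recall = cm.diagonal() / cm.sum(axis=1)  # class-level summary
```

In a visual analytics setting, `per_class_recall` would feed the class-level view, `top_confusions(cm)` would surface candidate class pairs for the confusion-level view, and the individual misclassified instances behind each cell would populate the instance-level view.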