The purpose of this study is to analyze societal acceptance of AI robots. This research is necessary because AI robots are likely to heighten concerns about privacy violations, safety, and job displacement, which may lower acceptance. To identify the factors affecting acceptance, we adopted the variables of perceived risk, perceived benefit, trust, and knowledge from the risk perception paradigm, the traditional paradigm of risk research. In addition, because AI robot acceptance is connected to fundamental value issues, we set humanization, instrumentality, and controllability as value-related independent variables. Based on the results of a large-scale survey, we found that being male, older, and having a higher income were associated with greater acceptance of AI robots. Within the risk perception paradigm, perceived risk has a negative effect on acceptance of AI robots, while perceived benefit, trust, and knowledge have positive effects. Finally, humanization has a negative effect on acceptance, while instrumentality and controllability have positive effects.