News is an important channel through which people obtain information. In today's open news environment, news types are increasingly diverse and news volume is enormous, which leads to problems such as information overload and redundancy. The open-domain event extraction task aims to identify and extract various types of event information from unrestricted text, and is usually approached with methods based on pre-training or neural topic modeling. However, existing methods have several problems. First, existing pre-trained models suffer from insufficient feature extraction and excessively high embedding dimensions. Second, existing methods are not semantically rich and lack syntactic structural information, resulting in poorly readable outputs and insufficient extraction accuracy. To address these issues, this paper first improves the open-domain event extraction method based on a BERT-based neural topic model, and then dynamically integrates semantic and syntactic dependency information to obtain rich semantic and syntactic features, further improving model performance.

The main contributions are as follows. We propose an improved neural topic modeling method based on BERT. First, BERT is used in the encoding layer for pre-training to obtain contextual representations of the feature sequences. Second, the UMAP dimensionality reduction method is applied to preserve both local and global structure, and the joint distribution of variables is combined with a deep latent-variable probabilistic graphical model to further optimize parameter inference. Finally, a self-attention mechanism is introduced to assign weights to different nodes, reducing the influence of noisy data so that the model attends to the most critical features and the performance of the open-domain event extraction model is further improved.

Keywords: Event extraction, Open-domain event extraction, Neural topic model.
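The node-weighting step above follows standard scaled dot-product self-attention. The following is a minimal NumPy sketch of that mechanism, not the paper's actual implementation: the feature matrix, projection matrices, and dimensions are all illustrative assumptions.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over node features.

    X:  (n_nodes, d) feature matrix (e.g., contextual embeddings)
    Wq, Wk, Wv: (d, d_k) learned projection matrices (random here)
    Returns the attended features and the attention weight matrix.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])          # (n_nodes, n_nodes)
    # Row-wise softmax: each node distributes weight over all nodes
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ V, w                                  # weighted combination

# Toy example with hypothetical sizes
rng = np.random.default_rng(0)
n_nodes, d, d_k = 4, 8, 8
X = rng.normal(size=(n_nodes, d))
Wq, Wk, Wv = (rng.normal(size=(d, d_k)) for _ in range(3))
out, weights = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8)
```

Because the softmax rows sum to one, each output row is a convex combination of the value vectors, which is how higher weights can emphasize critical features while down-weighting noisy nodes.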