This thesis focuses on leveraging domain spaces for unsupervised domain adaptation (UDA). The primary objective is to explore approaches that effectively utilize the intermediate spaces between the source and target domains, overcoming the limitations of traditional direct domain adaptation methods, which are inherently constrained in their ability to handle large discrepancies between domains.

The first chapter introduces a novel approach that constructs intermediate domain spaces with distinct characteristics using fixed ratio-based mixup. To enhance domain-invariant representations, we incorporate confidence-based learning techniques, including bidirectional matching and self-penalization. Thorough analyses demonstrate the effectiveness of each component, and the method achieves competitive performance against other UDA methods on three standard benchmarks.

The second chapter presents a more advanced method tailored to bridging domains while accounting for the uncertainty of model predictions. We extend the fixed ratio-based mixup to operate at the feature level, adaptively determining the layer at which to mix based on prediction uncertainty. Furthermore, we enhance our complementary learning by adjusting the augmentation intensity with an adaptive confidence threshold. Extensive experiments validate the superiority of the proposed method across public benchmarks, covering both single-source and multi-source scenarios.

The final chapter sheds light on the problem of equilibrium collapse, in which source labels dominate target labels in predictions within the vicinal space. To address this issue, we propose an instance-wise minimax strategy that minimizes the entropy of highly uncertain instances in the vicinal space. We further divide the vicinal space into two subspaces and mitigate the inter-domain discrepancy by minimizing the distance between them. Thorough ablation studies provide insights into the proposed method, which achieves performance comparable to state-of-the-art approaches on standard UDA benchmarks.

Overall, this thesis offers new insights and approaches that leverage domain spaces for unsupervised domain adaptation. Comprehensive evaluations across diverse benchmarks and scenarios demonstrate that the proposed approaches are both effective and highly competitive, contributing new perspectives and techniques for addressing domain gaps and paving the way for further advances in UDA research.

Keywords: unsupervised domain adaptation, single/multi-source domain adaptation, deep neural networks, transfer learning.
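
To make the construction of intermediate domain spaces concrete, the following is a minimal sketch of fixed ratio-based mixup at the input level, assuming PyTorch image batches; the function name, the 0.7 ratio, and the tensor shapes are illustrative assumptions rather than the exact settings used in the thesis.

```python
# Minimal sketch of fixed ratio-based mixup between source and target batches.
# The ratio value (0.7) and the tensor shapes are illustrative assumptions.
import torch


def fixed_ratio_mixup(x_source: torch.Tensor,
                      x_target: torch.Tensor,
                      ratio: float = 0.7) -> tuple[torch.Tensor, torch.Tensor]:
    """Blend source and target inputs with a fixed mixing ratio.

    Returns two intermediate-domain batches: one dominated by the source
    (weight = ratio) and one dominated by the target (weight = 1 - ratio).
    """
    source_dominant = ratio * x_source + (1.0 - ratio) * x_target
    target_dominant = (1.0 - ratio) * x_source + ratio * x_target
    return source_dominant, target_dominant


# Usage example with dummy batches of 8 RGB 224x224 images.
x_s = torch.randn(8, 3, 224, 224)
x_t = torch.randn(8, 3, 224, 224)
mix_sd, mix_td = fixed_ratio_mixup(x_s, x_t, ratio=0.7)
```

Using two complementary fixed ratios yields intermediate domains with distinct, source-dominant and target-dominant characteristics, which is the bridging idea the first two chapters build on.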
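
As a rough illustration of the instance-wise strategy in the final chapter, the sketch below computes the prediction entropy of vicinal samples, selects the most uncertain instances, and minimizes their entropy; the top-k selection, function names, and shapes are assumptions for illustration only, not the thesis implementation.

```python
# Minimal sketch of instance-wise entropy minimization on vicinal samples.
# Selecting the most uncertain instances via top-k entropy is an illustrative
# reading of the instance-wise strategy; names and shapes are assumptions.
import torch
import torch.nn.functional as F


def entropy_per_instance(logits: torch.Tensor) -> torch.Tensor:
    """Shannon entropy of the softmax prediction for each instance."""
    probs = F.softmax(logits, dim=1)
    log_probs = F.log_softmax(logits, dim=1)
    return -(probs * log_probs).sum(dim=1)


def uncertain_instance_entropy_loss(logits: torch.Tensor, top_k: int) -> torch.Tensor:
    """Average entropy of the top-k most uncertain instances in the batch.

    Minimizing this loss pushes the model toward confident predictions on the
    vicinal samples it is least certain about.
    """
    entropy = entropy_per_instance(logits)
    most_uncertain = torch.topk(entropy, k=top_k).values
    return most_uncertain.mean()


# Usage example: logits of 16 vicinal samples over 31 classes.
logits = torch.randn(16, 31, requires_grad=True)
loss = uncertain_instance_entropy_loss(logits, top_k=4)
loss.backward()
```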