Domain adaptation for object detection typically entails learning to identify and locate objects in one type of visual input and transferring that knowledge to another. Traditionally, this has meant adapting between domains within the visible spectrum, such as from daytime to nighttime scenes or from clear to foggy weather. The shift from visible to thermal imaging, however, poses a distinct challenge: the gap between the two modalities is far larger than that within the visible spectrum alone, and conventional domain adaptation methods fall short of bridging it, which has contributed to the scarcity of research in this area. To address these challenges, we introduce the Distinctive Dual-Domain Teacher (D3T) framework, designed specifically to navigate the divide between visible and thermal imaging. Unlike standard approaches, D3T maintains separate, domain-specific teacher models that guide the learning process within their respective domains. Using exponential moving averages, the framework balances the knowledge transferred to a single student model by alternating between the two teachers, a strategy we call zigzag learning, which enables a gradual and effective transition from the visible domain's knowledge to the intricacies of thermal image interpretation. We validate our method through extensive experiments on established thermal imaging datasets, FLIR and KAIST, where it clearly outperforms existing techniques and demonstrates the practical benefit of the dual-teacher design.
<br>
<br> Keywords: Unsupervised domain adaptation, thermal object detection, deep neural network, transfer learning.
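
The abstract describes dual domain-specific teachers that are refreshed via exponential moving averages of the student, with training alternating between the two teachers (zigzag learning). The snippet below is a minimal sketch of that mechanism only, under our own assumptions: the models, function names (`update_teacher_ema`, `zigzag_training`), weak/strong augmentation, and the MSE pseudo-label loss are illustrative placeholders, not the authors' actual detector, losses, or API.

```python
# Minimal sketch of zigzag dual-teacher EMA updates (illustrative only).
import copy
import torch
import torch.nn as nn


@torch.no_grad()
def update_teacher_ema(teacher: nn.Module, student: nn.Module, alpha: float = 0.999) -> None:
    """EMA update: teacher <- alpha * teacher + (1 - alpha) * student."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(alpha).add_(s_param, alpha=1.0 - alpha)


def make_models():
    # Placeholder backbone; in practice this would be a full detection model.
    student = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 4, 1))
    rgb_teacher = copy.deepcopy(student)      # teacher for the visible domain
    thermal_teacher = copy.deepcopy(student)  # teacher for the thermal domain
    return student, rgb_teacher, thermal_teacher


def zigzag_training(num_iters: int = 10):
    student, rgb_teacher, thermal_teacher = make_models()
    optimizer = torch.optim.SGD(student.parameters(), lr=1e-3)

    for it in range(num_iters):
        # Alternate ("zigzag") between domains: even iterations follow the visible-domain
        # teacher, odd iterations follow the thermal-domain teacher.
        teacher = rgb_teacher if it % 2 == 0 else thermal_teacher

        # Dummy batch standing in for visible or thermal images; the teacher sees a
        # weakly augmented view, the student a strongly augmented one (assumed setup).
        weak_images = torch.randn(2, 3, 64, 64)
        strong_images = weak_images + 0.1 * torch.randn_like(weak_images)

        with torch.no_grad():
            pseudo_targets = teacher(weak_images)  # teacher predictions act as pseudo-labels

        preds = student(strong_images)
        loss = nn.functional.mse_loss(preds, pseudo_targets)  # stand-in for detection losses

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        # Only the active domain's teacher is refreshed from the student via EMA.
        update_teacher_ema(teacher, student, alpha=0.999)


if __name__ == "__main__":
    zigzag_training()
```

The key design choice sketched here is that each teacher accumulates an EMA of the student only on iterations devoted to its own domain, so the two teachers remain specialized while the single student gradually absorbs knowledge from both.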