The integration of solar energy harvesting into small-cell networks is a promising solution for achieving energy-efficient and sustainable wireless communications. However, the inherent variability and intermittency of solar energy, together with the need for precise inter-cell interference management, significantly hinder efficient network operation. To address these challenges, we propose a multi-agent distributed deep Q-network (MA-DDQN) framework in which distributed base stations learn optimal transmit power control policies. To further enhance adaptability to varying solar conditions, we present a daily model transfer and fine-tuning approach that enables efficient deployment without extensive retraining overhead. Simulation results demonstrate that the proposed methods significantly improve energy efficiency while maintaining robust adaptability under dynamic solar conditions, highlighting their potential for sustainable small-cell network deployments.
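To make the framework concrete, the following is a minimal sketch (not the authors' implementation) of one MA-DDQN agent: each base station trains its own DQN over a discrete set of transmit power levels, and the daily transfer step reloads the previous day's policy and fine-tunes it at a reduced learning rate rather than retraining from scratch. The state dimension, power levels, network size, and all hyperparameters below are illustrative assumptions, not values from the paper.

```python
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

POWER_LEVELS = [0.0, 0.1, 0.2, 0.4, 0.8]  # assumed discrete Tx powers (W)
STATE_DIM = 6  # assumed local observation: e.g., battery level, harvested energy, SINR, load

class QNet(nn.Module):
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)

class BaseStationAgent:
    """One distributed agent; each small-cell BS trains its own copy."""
    def __init__(self, lr=1e-3, gamma=0.95, eps=0.1):
        self.q = QNet(STATE_DIM, len(POWER_LEVELS))
        self.target = QNet(STATE_DIM, len(POWER_LEVELS))
        self.target.load_state_dict(self.q.state_dict())
        self.opt = optim.Adam(self.q.parameters(), lr=lr)
        self.buffer = deque(maxlen=10_000)  # replay buffer of (s, a, r, s') tuples
        self.gamma, self.eps = gamma, eps

    def act(self, state):
        if random.random() < self.eps:  # epsilon-greedy exploration
            return random.randrange(len(POWER_LEVELS))
        with torch.no_grad():
            return int(self.q(torch.as_tensor(state, dtype=torch.float32)).argmax())

    def train_step(self, batch_size=64):
        if len(self.buffer) < batch_size:
            return
        batch = random.sample(self.buffer, batch_size)
        s, a, r, s2 = (torch.as_tensor(x, dtype=torch.float32) for x in zip(*batch))
        q_sa = self.q(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
        with torch.no_grad():  # standard DQN target with a frozen target network
            y = r + self.gamma * self.target(s2).max(1).values
        loss = nn.functional.mse_loss(q_sa, y)
        self.opt.zero_grad(); loss.backward(); self.opt.step()

def daily_transfer(agent: BaseStationAgent, saved_weights_path: str):
    """Assumed form of the daily model transfer: reload yesterday's policy
    and fine-tune it instead of retraining from scratch."""
    agent.q.load_state_dict(torch.load(saved_weights_path))
    agent.target.load_state_dict(agent.q.state_dict())
    for g in agent.opt.param_groups:
        g["lr"] *= 0.1  # reduced step size for fine-tuning (assumption)
```

In this sketch, each agent acts only on its local observation and receives a reward reflecting its own energy efficiency, so coordination across cells emerges implicitly through the shared interference environment rather than through explicit message passing; the reward design and observation contents would follow the paper's system model.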