As machine vision technology generates large amounts of sensor data, it requires efficient computational systems for visual cognitive processing. Recently, in-sensor computing systems have emerged as a promising solution for reducing unnecessary data transfer and enabling fast, energy-efficient visual cognitive processing. However, such systems still lack the capability to process stored images directly within the sensor. Here, we demonstrate a heterogeneously integrated one-photodiode–one-memristor (1P-1R) crossbar for in-sensor visual cognitive processing that emulates the mammalian image-encoding process to extract features from input images. Unlike other neuromorphic vision approaches, the trained weight values are applied as input voltages to the image-storing crossbar array rather than being stored in the memristors, realizing the in-sensor computing paradigm. We believe this heterogeneously integrated in-sensor computing platform provides an advanced architecture for real-time, data-intensive machine-vision applications via bio-stimulus domain reduction.
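The operating principle described above can be illustrated with a minimal numerical sketch. A memristor crossbar performs an analog matrix-vector multiplication through Ohm's law and Kirchhoff's current law; in the scheme described here, the image is the quantity stored in the array (as device conductances) and the trained weights are the quantity applied (as input voltages), the reverse of the conventional arrangement. The array dimensions, conductance range, and voltage range below are hypothetical values chosen only for illustration, not parameters from the work itself.

```python
import numpy as np

# Toy model (assumed, for illustration only): each memristor stores one
# pixel of the captured image as a conductance G[i, j]. Trained weights
# are applied as row voltages V[i]; by Ohm's law each device contributes
# a current V[i] * G[i, j], and by Kirchhoff's current law each column
# current I[j] sums those contributions, yielding one feature per column.

rng = np.random.default_rng(0)

n_pixels, n_features = 16, 4                          # hypothetical array size
G = rng.uniform(1e-6, 1e-4, (n_pixels, n_features))   # stored image (conductances, S)
V = rng.uniform(-0.2, 0.2, n_pixels)                  # trained weights (voltages, V)

# Column readout: analog matrix-vector product, one feature per column
I = V @ G                                             # shape (n_features,)

# Equivalent explicit device-by-device summation, for clarity
I_check = np.array([sum(V[i] * G[i, j] for i in range(n_pixels))
                    for j in range(n_features)])
assert np.allclose(I, I_check)
```

Because the multiply-accumulate happens in the analog domain where the image already resides, no pixel data needs to leave the sensor before feature extraction, which is the data-transfer reduction the abstract refers to.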
D.L., M.P., Y.B., B.B., and K.L. were supported by the U.S. National Science Foundation (NSF) under grant ECCS-1942868. J.H. was supported by the Industrial Strategic Technology Development Program (20000300) funded by the Ministry of Trade, Industry and Energy (MOTIE, Korea).