Inspired by the selective attention mechanism in human vision, we propose to introduce a saliency-based processing step into the CMOS image sensor that continuously selects pixels corresponding to salient objects and feeds this information back to the sensor, instead of blindly passing all pixels to the sensor output. To minimize the overhead of saliency detection in this feedback loop, we propose two techniques: (1) saliency detection on low-precision, down-sampled grayscale images, and (2) optimization of the loss function and model structure. Finally, we pad a minimum number of pixels around the selected pixels to maintain the accuracy of object detection (OD). We evaluate our method with two types of OD algorithms on three representative datasets. At OD accuracy similar to that of the full image, our selective feedback method reduces the volume of output pixels by 70.5% on BDD100K, which translates to 4.3× and 3.4× reductions in power consumption and latency, respectively.
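The pipeline summarized above (low-precision, down-sampled grayscale saliency followed by padding of the selected pixels) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the down-sampling factor, bit width, saliency proxy (deviation from mean intensity), threshold, and pad size are all assumed parameters for demonstration.

```python
import numpy as np

def select_salient_pixels(frame, ds=4, bits=4, thresh=0.25, pad=8):
    """Sketch of selective pixel readout: detect saliency on a low-precision,
    down-sampled grayscale copy of the frame, then pad the selected region.
    All parameter values here are illustrative, not from the paper."""
    # 1. Grayscale conversion + down-sampling (simple striding as a stand-in).
    gray = frame.mean(axis=2)[::ds, ::ds]
    # 2. Quantize to low precision (bits-bit intensity levels in [0, 1]).
    levels = 2 ** bits - 1
    q = np.round(gray / 255.0 * levels) / levels
    # 3. Toy saliency proxy: absolute deviation from the mean intensity.
    sal = np.abs(q - q.mean())
    mask_small = sal > thresh
    # 4. Up-sample the binary mask back to the full sensor resolution.
    mask = np.kron(mask_small, np.ones((ds, ds), dtype=bool))
    mask = mask[: frame.shape[0], : frame.shape[1]]
    # 5. Pad around the selected pixels so downstream OD keeps enough context.
    ys, xs = np.nonzero(mask)
    if ys.size:
        y0, y1 = max(ys.min() - pad, 0), min(ys.max() + pad + 1, frame.shape[0])
        x0, x1 = max(xs.min() - pad, 0), min(xs.max() + pad + 1, frame.shape[1])
        mask[y0:y1, x0:x1] = True
    return mask
```

Only pixels where the returned mask is `True` would be read out; the rest are suppressed, which is the source of the output-volume, power, and latency savings.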