INRNet: Neighborhood Re-Ranking-Based Method for Pedestrian Text-Image Retrieval

TL;DR

INRNet introduces a neighborhood re-ranking method for pedestrian text-image retrieval, addressing challenges in aligning visual and textual data. It improves upon global and local matching techniques to enhance retrieval accuracy.


Kehao Wang; Yuhui Wang; Lian Xue; Qifeng Li
https://doi.org/10.1109/ACCESS.2024.3518535
Volume 13

The Pedestrian Text-Image Retrieval task aims to retrieve the target pedestrian image from a textual description. The primary challenge lies in mapping two heterogeneous modalities (visual data and textual descriptions) into a unified feature space. Previous approaches have relied on global or local matching methods. However, global matching methods are prone to producing weak alignment, ...
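The abstract names neighborhood re-ranking without detailing INRNet's formulation. As a rough illustration of the general idea (not the paper's actual method), the sketch below re-scores an initial text-to-image similarity matrix by blending it with the Jaccard overlap between each query's top-k retrieved images and each gallery image's top-k visual neighbors; the function names, `k`, and `alpha` are all hypothetical choices for this example.

```python
import numpy as np

def top_k_neighbors(sim, k):
    # Indices of the k highest-similarity columns for each row.
    return np.argsort(-sim, axis=1)[:, :k]

def neighborhood_rerank(text_img_sim, img_img_sim, k=3, alpha=0.5):
    """Generic neighborhood re-ranking sketch (not INRNet itself).

    text_img_sim: (n_queries, n_gallery) initial cross-modal similarities.
    img_img_sim:  (n_gallery, n_gallery) image-to-image similarities.
    Blends the original score with a Jaccard overlap between the
    query's neighborhood and each gallery image's neighborhood.
    """
    query_nbrs = [set(row) for row in top_k_neighbors(text_img_sim, k)]
    gallery_nbrs = [set(row) for row in top_k_neighbors(img_img_sim, k)]
    n_queries, n_gallery = text_img_sim.shape
    reranked = np.zeros_like(text_img_sim, dtype=float)
    for q in range(n_queries):
        for g in range(n_gallery):
            inter = len(query_nbrs[q] & gallery_nbrs[g])
            union = len(query_nbrs[q] | gallery_nbrs[g])
            jaccard = inter / union  # union >= 1 whenever k >= 1
            reranked[q, g] = alpha * text_img_sim[q, g] + (1 - alpha) * jaccard
    return reranked
```

With scores in [0, 1], the re-ranked scores also stay in [0, 1]; a gallery image whose visual neighborhood agrees with the query's retrieved set is promoted even if its raw cross-modal score was middling, which is the intuition behind neighborhood-based re-ranking.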