
Beyond Triplet Loss: Person Re-Identification With Fine-Grained Difference-Aware Pairwise Loss

2022-04-22


Author(s): Yan, C (Yan, Cheng); Pang, GS (Pang, Guansong); Bai, X (Bai, Xiao); Liu, CH (Liu, Changhong); Ning, X (Ning, Xin); Gu, L (Gu, Lin); Zhou, J (Zhou, Jun)

Source: IEEE TRANSACTIONS ON MULTIMEDIA Volume: 24 Pages: 1665-1677 DOI: 10.1109/TMM.2021.3069562 Published: 2022

Abstract: Person Re-IDentification (ReID) aims at re-identifying persons from different viewpoints across multiple cameras. Capturing the fine-grained appearance differences is often the key to accurate person ReID, because many identities can be differentiated only when looking into these fine-grained differences. However, most state-of-the-art person ReID approaches, typically driven by a triplet loss, fail to effectively learn the fine-grained features as they are focused more on differentiating large appearance differences. To address this issue, we introduce a novel pairwise loss function that enables ReID models to learn the fine-grained features by adaptively enforcing an exponential penalization on the images of small differences and a bounded penalization on the images of large differences. The proposed loss is generic and can be used as a plugin to replace the triplet loss to significantly enhance different types of state-of-the-art approaches. Experimental results on four benchmark datasets show that the proposed loss substantially outperforms a number of popular loss functions by large margins; and it also enables significantly improved data efficiency.
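Note: the abstract describes the loss only qualitatively, namely that pairs with small appearance differences receive an exponentially scaled penalty while pairs with large differences receive a bounded one; the exact formulation is given in the paper (see the DOI and full-text link above). A minimal PyTorch sketch of a pairwise loss with that qualitative behaviour is given below. The class name FineGrainedPairwiseLoss, the hyper-parameters alpha, margin, and cap, and the specific functional form are illustrative assumptions, not the authors' definition.

import torch
import torch.nn as nn


class FineGrainedPairwiseLoss(nn.Module):
    """Hypothetical sketch of a pairwise ReID loss: small-distance (fine-grained)
    negative pairs are penalized on an exponential scale, and every pair's
    contribution is capped so large-difference pairs stay bounded."""

    def __init__(self, alpha: float = 2.0, margin: float = 1.0, cap: float = 4.0):
        super().__init__()
        self.alpha = alpha    # steepness of the exponential penalty (assumed)
        self.margin = margin  # desired separation for negative pairs (assumed)
        self.cap = cap        # upper bound on any single pair's penalty (assumed)

    def forward(self, embeddings: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # All pairwise Euclidean distances within the batch.
        dist = torch.cdist(embeddings, embeddings, p=2)
        same_id = labels.unsqueeze(0).eq(labels.unsqueeze(1)).float()
        eye = torch.eye(len(labels), device=embeddings.device)

        # Positive pairs (same identity): the penalty rises exponentially with
        # the residual distance but is clamped at `cap`, so outlier positives
        # with large appearance differences cannot dominate the batch.
        pos_pen = torch.clamp(torch.expm1(self.alpha * dist), max=self.cap)

        # Negative pairs (different identities): the smaller the distance,
        # i.e. the finer the difference, the larger the exponential penalty;
        # pairs already separated by more than `margin` contribute nothing.
        neg_pen = torch.clamp(
            torch.expm1(self.alpha * torch.relu(self.margin - dist)), max=self.cap
        )

        pos_mask = same_id - eye   # drop self-pairs on the diagonal
        neg_mask = 1.0 - same_id
        loss = (pos_pen * pos_mask).sum() + (neg_pen * neg_mask).sum()
        return loss / (pos_mask.sum() + neg_mask.sum()).clamp(min=1.0)

In use, such a module would simply replace the triplet-loss term in an existing ReID training loop, taking a batch of embeddings and their identity labels, e.g. loss = criterion(features, person_ids) (variable names hypothetical).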

Accession Number: WOS:000776227200032

Author Identifiers:

Zhou, Jun (ORCID: 0000-0001-5822-8233)

ISSN: 1520-9210

eISSN: 1941-0077

Full Text: https://ieeexplore.ieee.org/document/9392276


