
A contrastive learning based unsupervised multi-view stereo with multi-stage self-training strategy

2024-05-14

Author(s): Wang, ZH (Wang, Zihang); Luo, HN (Luo, Haonan); Wang, X (Wang, Xiang); Zheng, J (Zheng, Jin); Ning, X (Ning, Xin); Bai, X (Bai, Xiao)

Source: DISPLAYS  Volume: 83  Article Number: 102672  DOI: 10.1016/j.displa.2024.102672  Published Date: 2024 JUL

Abstract: In recent years, unsupervised multi-view stereo (MVS) methods have achieved notable success, producing results comparable to earlier supervised work. However, because unsupervised MVS uses image reconstruction as its pretext task, it faces two critical drawbacks: the RGB value, which is the measure used to compare images, is not robust across views under complicated conditions such as varying lighting, and reconstruction quality itself does not reflect the quality of depth estimation linearly. These problems cause the actual optimization goal to diverge from the intended one and can thus impair training. To enhance the robustness of the pretext task, we propose a contrastive learning based constraint. The constraint adds featuremetric consistency across views by forcing the features of matching points to be similar and the features of unmatched points to be dissimilar. To add linearity to the overall training procedure, we propose a multi-stage training strategy that uses pseudo labels as supervision after an initial unsupervised training stage. In addition, we adopt an iterative optimizer, proven to be effective in supervised MVS, to accelerate training. Finally, we conduct a series of experiments on the DTU dataset and the Tanks and Temples dataset that demonstrate the efficiency and robustness of our method compared with state-of-the-art methods in terms of accuracy, completeness, and speed.
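As a rough illustration of the featuremetric consistency constraint described in the abstract (not the authors' actual implementation), the sketch below shows an InfoNCE-style contrastive loss in PyTorch: features of cross-view matched pixel pairs are pulled together while features of unmatched pairs are pushed apart. The function name, tensor layout, and the way matches are supplied (index tensors `pos_idx` / `neg_idx` obtained from depth reprojection) are assumptions made for illustration only.

```python
# Minimal sketch of a contrastive featuremetric consistency loss across views.
# Assumes per-pixel features have already been sampled from two views and that
# matched / unmatched pixel indices were derived from depth reprojection.
import torch
import torch.nn.functional as F


def contrastive_featuremetric_loss(feat_ref, feat_src, pos_idx, neg_idx,
                                   temperature=0.07):
    """
    feat_ref, feat_src: (N, C) features sampled from the reference / source view.
    pos_idx: indices of matching (reprojection-consistent) pixel pairs.
    neg_idx: indices of unmatched pixels used as negatives.
    """
    f_ref = F.normalize(feat_ref, dim=-1)
    f_src = F.normalize(feat_src, dim=-1)

    # Similarity of matching points should be high ...
    pos_sim = (f_ref[pos_idx] * f_src[pos_idx]).sum(-1) / temperature  # (P,)
    # ... while similarity to unmatched points should be low.
    neg_sim = (f_ref[pos_idx].unsqueeze(1) *
               f_src[neg_idx].unsqueeze(0)).sum(-1) / temperature      # (P, Q)

    # InfoNCE-style objective: the positive pair is treated as class 0
    # against all negatives for each matched pixel.
    logits = torch.cat([pos_sim.unsqueeze(1), neg_sim], dim=1)         # (P, Q+1)
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)
```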

Accession Number: WOS:001209221600001

ISSN: 0141-9382

eISSN: 1872-7387



