2026, 49(5): 8-15
A Quantized Neural Network-Based Enhanced Point Feature Matching Method for SLAM
Foundation: Shaanxi Provincial Key Research and Development Industrial Project (2021GY-338); Science and Technology Plan Project of Beilin District, Xi'an (GX2333)
DOI: 10.16652/j.issn.1004-373x.2026.05.002

Abstract:

To address the insufficient robustness of front-end feature extraction and matching in simultaneous localization and mapping (SLAM), an enhanced point feature matching method for SLAM based on a quantized neural network is proposed. Image quality is improved by constructing a fitness function and optimizing the convolutional kernel weights with a Cauchy mutation strategy, and by applying the contrast-limited adaptive histogram equalization (CLAHE) algorithm to equalize the image luminance component. In the feature extraction stage, the performance of the ZippyPoint network is further improved by adding extra convolutional layers and designing an attention mechanism with skip connections. Finally, a distance matrix is built from squared Euclidean distances, matching points are extracted in batches by combining the forward and reverse matching results, and bidirectional consistency is verified with tensor operations, so that accurate feature point matching is achieved. Experimental results show that the enhanced images have moderate luminance and a uniform gray-level distribution; in complex scenes, the average matching accuracy reaches 70.87% and the matching time is 0.243 s, improvements of 52.07% and 60.94%, respectively, over the ORB+BF algorithm, indicating high application value.
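The sketches below illustrate, in general-purpose Python, the three processing steps outlined in the abstract. They are approximations under stated assumptions, not the authors' implementation; all parameter values and helper names are hypothetical.

First, CLAHE applied only to the luminance component of a color frame, as in the image-enhancement stage; the clip limit and tile grid size are assumed defaults.

```python
import cv2
import numpy as np

def enhance_luminance(bgr_image: np.ndarray) -> np.ndarray:
    """Equalize only the luminance channel with CLAHE, leaving chroma untouched."""
    # Convert to YCrCb so the luminance (Y) channel can be processed in isolation.
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    y, cr, cb = cv2.split(ycrcb)
    # Clip limit and tile grid size are illustrative defaults, not values from the paper.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    y_eq = clahe.apply(y)
    return cv2.cvtColor(cv2.merge((y_eq, cr, cb)), cv2.COLOR_YCrCb2BGR)
```

The abstract does not specify the fitness function or how it selects candidate weights, so the next snippet shows only the Cauchy mutation step itself: perturbing a weight tensor with heavy-tailed Cauchy noise, using an assumed mutation scale.

```python
import numpy as np

def cauchy_mutate(weights: np.ndarray, scale: float = 0.1, rng=None) -> np.ndarray:
    """Return a copy of `weights` perturbed by Cauchy-distributed noise."""
    rng = np.random.default_rng() if rng is None else rng
    # Heavy-tailed noise lets a few weights jump far while most move only slightly.
    noise = rng.standard_cauchy(size=weights.shape)
    return weights + scale * noise
```

Finally, a minimal version of the matching stage: a squared-Euclidean distance matrix between two descriptor sets, forward and reverse nearest neighbours, and a bidirectional-consistency filter expressed as batched array operations. Descriptor shapes and dtypes are assumptions.

```python
import numpy as np

def mutual_match(desc_a: np.ndarray, desc_b: np.ndarray) -> np.ndarray:
    """Return (K, 2) index pairs that are mutual nearest neighbours."""
    # Squared Euclidean distances via the expansion ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    sq_a = np.sum(desc_a ** 2, axis=1, keepdims=True)    # (N, 1)
    sq_b = np.sum(desc_b ** 2, axis=1, keepdims=True).T  # (1, M)
    dist = sq_a + sq_b - 2.0 * desc_a @ desc_b.T          # (N, M)
    nn_ab = np.argmin(dist, axis=1)                        # best B index for each A
    nn_ba = np.argmin(dist, axis=0)                        # best A index for each B
    # Bidirectional consistency: keep i only if B's best match points back to i.
    idx_a = np.arange(desc_a.shape[0])
    keep = nn_ba[nn_ab] == idx_a
    return np.stack([idx_a[keep], nn_ab[keep]], axis=1)
```

Writing the distance computation as the expansion above keeps the whole matching step in batched tensor operations, which is what allows matches to be extracted for all keypoints at once rather than one pair at a time.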


Basic information:

DOI: 10.16652/j.issn.1004-373x.2026.05.002

Citation:

[1] 朱代先, 吕佳昊. A Quantized Neural Network-Based Enhanced Point Feature Matching Method for SLAM [J]. 2026, 49(5): 8-15. DOI: 10.16652/j.issn.1004-373x.2026.05.002.

Foundation:

Shaanxi Provincial Key Research and Development Industrial Project (2021GY-338); Science and Technology Plan Project of Beilin District, Xi'an (GX2333)
