CHINESE INSTITUTE OF COMMAND AND CONTROL

[CICC Original] Multi-Vision-Sensor Cooperative Dim and Small Target Detection

Published: 2024-07-03 14:49

(Selected article from Journal of Command and Control)

Citation format: WANG T, CHENG J X, LIU K X, et al. Multi vision sensor coordination dim and small target cooperative detection[J]. Journal of Command and Control, 2024, 10(1): 9-18.


Abstract

Multi-vision-sensor cooperative full-area airspace coverage for dim and small target detection is of great significance in short-range air defense. Existing full-area coverage methods suffer from low coverage rates and poor randomness, while dim and small target detection algorithms suffer from large models and low localization and classification accuracy. This paper proposes an efficient full-area airspace coverage algorithm and a lightweight dim and small target detection algorithm. By combining a maximum-area-first method with a minimum Manhattan distance method, the coverage algorithm mitigates coverage blind spots and poor randomness. A dense and channel expand network (DCENet) model is further proposed; based on lightweight dense stitching and adaptive-size channel expansion, it achieves more competitive average precision than the baseline algorithm on a dim and small target dataset.


In modern warfare, an efficient and precise air defense system is a key component of maintaining regional stability and deterring enemy strikes. As one of the countries that places the greatest emphasis on building air defense forces, China is continuously improving its air defense system to prevent intrusion by enemy air forces[1-2]. An air defense system has a complex composition; within it, early-warning radar serves as the "eyes" of the system and plays an important role in detecting distant targets. Early-warning radar can detect targets at ranges on the order of 1 000 km, providing ample reaction time for air defense deployment. However, under ground clutter interference, early-warning radar has weak detection capability for close-range targets and cannot eliminate the threat of low-altitude surprise attacks by enemy aircraft. To address this problem, infrared or visible-light vision sensors, which are immune to clutter interference, can be used to detect close-range targets, effectively covering the low-altitude blind zone of early-warning radar.

A vision sensor's viewing angle and viewing distance are generally negatively correlated: the larger the angle, the shorter the distance, and the smaller the angle, the longer the distance. Typically, when the viewing distance reaches the order of 100 m, the viewing angle is only about 30°. With a limited number of vision sensors, static full-area airspace coverage is therefore impossible, and the poor randomness of a static layout makes it easy for enemy aircraft to reconnoiter.

Because of the limited viewing distance of vision sensors, low-flying targets mostly appear as dim and small targets in the captured imagery. Dim and small target detection is thus the key step for locating enemy aircraft once full-area airspace coverage is achieved[3]; it aims to localize and recognize dim and small targets in the video streams acquired by the vision sensors. Current intelligent object detection algorithms achieve far lower average precision on dim and small targets than on large targets. Moreover, owing to power-supply constraints in the operational environment, the floating point operations (FLOPs) and power consumption of a dim and small target detection algorithm should be small enough that the air defense system can run at a low load over long periods[4].

Based on the above analysis, the following contributions are proposed:

1) An efficient coverage algorithm based on a maximum-area-first method and a minimum Manhattan distance method is proposed to alleviate the coverage blind spots and poor randomness of complete coverage path planning (CCPP), thereby achieving wide-coverage, well-concealed multi-vision-sensor full-area airspace coverage (a minimal sketch of this selection rule follows the list below).

2) The DCENet model is proposed. It introduces a lightweight dense stitching method that effectively preserves the informative features of dim and small targets through the shallow and deep convolution stages of the backbone network, improving the detector's localization and recognition of dim and small targets while keeping the parameter count lightweight.

3) An adaptive size channel expansion method is proposed within DCENet. It effectively enriches the informative features of dim and small targets in the shallow feature maps output by the backbone, improving the detector's ability to localize dim and small targets (see the second sketch after this list).
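
To make contribution 1) concrete, the following Python sketch illustrates one way the two selection rules could be combined on a discretized airspace grid. It is a minimal, hypothetical illustration rather than the authors' implementation: the grid model and every name in it (select_next_direction, plan_coverage, the candidates mapping from a pointing direction to the set of cells it covers) are assumptions made for this example.

```python
from typing import Dict, List, Set, Tuple

Cell = Tuple[int, int]  # a discretized pointing direction / airspace grid cell


def manhattan(a: Cell, b: Cell) -> int:
    """Manhattan distance between two grid cells."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])


def select_next_direction(current: Cell,
                          candidates: Dict[Cell, Set[Cell]],
                          covered: Set[Cell]) -> Cell:
    """Maximum-area-first selection, minimum Manhattan distance as the tie-breaker."""
    def score(cand: Cell) -> Tuple[int, int]:
        newly_covered = len(candidates[cand] - covered)
        # Prefer the direction covering the most uncovered cells;
        # among ties, prefer the one closest to the current pointing cell.
        return (-newly_covered, manhattan(current, cand))
    return min(candidates, key=score)


def plan_coverage(start: Cell, candidates: Dict[Cell, Set[Cell]]) -> List[Cell]:
    """Greedy schedule of pointing directions until the whole grid is covered."""
    all_cells: Set[Cell] = set().union(*candidates.values())
    covered: Set[Cell] = set()
    current, path = start, []
    while covered != all_cells:
        nxt = select_next_direction(current, candidates, covered)
        if not candidates[nxt] - covered:  # no direction adds new coverage
            break
        covered |= candidates[nxt]
        path.append(nxt)
        current = nxt
    return path
```

A randomized tie-break or starting direction could be layered on top of this greedy loop to provide the unpredictability the paper associates with concealment; that detail is omitted here.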
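
Contributions 2) and 3) describe architectural ideas rather than exact layers, so the sketch below is only one plausible PyTorch reading of the text (module names, layer counts, kernel sizes, and the use of torch.nn are assumptions): shallow outputs are densely concatenated so that fine detail useful for dim and small targets survives deeper stages, and a shallow feature map is resized and then expanded to more channels.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DenseStitchBlock(nn.Module):
    """Dense concatenation: each layer sees the outputs of all earlier layers."""

    def __init__(self, channels: int, growth: int, num_layers: int = 3):
        super().__init__()
        self.layers = nn.ModuleList()
        in_ch = channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(in_ch, growth, kernel_size=3, padding=1, bias=False),
                nn.BatchNorm2d(growth),
                nn.ReLU(inplace=True)))
            in_ch += growth  # the concatenated input grows with every layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)  # channels + num_layers * growth


class ChannelExpansion(nn.Module):
    """Resize a shallow feature map to a target size, then expand its channels."""

    def __init__(self, in_channels: int, expand_ratio: int = 2):
        super().__init__()
        self.expand = nn.Conv2d(in_channels, in_channels * expand_ratio, kernel_size=1)

    def forward(self, x: torch.Tensor, out_size) -> torch.Tensor:
        x = F.interpolate(x, size=out_size, mode="nearest")  # adapt spatial size
        return self.expand(x)                                # widen the channel dimension


if __name__ == "__main__":
    x = torch.randn(1, 32, 64, 64)
    y = DenseStitchBlock(channels=32, growth=16, num_layers=3)(x)          # (1, 80, 64, 64)
    z = ChannelExpansion(in_channels=80, expand_ratio=2)(y, (32, 32))      # (1, 160, 32, 32)
    print(y.shape, z.shape)
```

In this reading, keeping the per-layer growth small is what keeps the dense block lightweight, and the 1x1 expansion only touches the shallow map passed onward, so the added parameter cost stays modest.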





References

[1] CHEN Y H, KONG D Q, CHEN J. Application mode of space based information support in air defense operations[J]. Journal of Command and Control, 2021, 7(3): 331-334. (in Chinese)

[2] CHEN L, LI F F, FENG Q J, et al. An air defense system based on OODA-A loop and its operational effectiveness analysis[J]. Journal of Command and Control, 2021, 7(4): 383-388. (in Chinese)

[3] ZHU H J, WANG Y, ZHAO Z Y, et al. Target instance segmentation algorithm in UAV precise positioning[J]. Journal of Command and Control, 2021, 7(2): 192-196. (in Chinese)

[4] SUN X, YANG Z J, LI J X, et al. A fine classification method for lightweight and complex remote sensing images based on knowledge self distillation[J]. Journal of Command and Control, 2021, 7(4): 365-373. (in Chinese)

[5] GAGE D W. Randomized search strategies with imperfect sensors[C]// Mobile Robots VIII. SPIE, 1994, 2058: 270-279.

[6] CARVALHO R N, VIDAL H A, VIEIRA P, et al. Complete coverage path planning and guidance for cleaning robots[C]// Proceedings of the IEEE International Symposium on Industrial Electronics. IEEE, 1997: 677-682.

[7] BERENSON D, ABBEEL P, GOLDBERG K. A robot path planning framework that learns from experience[C]// IEEE International Conference on Robotics and Automation. IEEE, 2012: 3671-3678.

[8] REKLEITIS I, NEW A P, RANKIN E S, et al. Efficient boustrophedon multi-robot coverage: an algorithmic approach[J]. Annals of Mathematics and Artificial Intelligence, 2008, 52(2): 109-142.

[9] WANG Q F, YANG J. Research on multi AUV cooperative anti mine path planning based on inner spiral coverage algorithm[J]. Computer Measurement and Control, 2012, 20(1): 144-146. (in Chinese)

[10] KRIZHEVSKY A, SUTSKEVER I, HINTON G E. ImageNet classification with deep convolutional neural networks[J]. Advances in Neural Information Processing Systems, 2012(127): 1106-1114.

[11] LOWE D G. Distinctive image features from scale-invariant keypoints[J]. International Journal of Computer Vision, 2004, 60(2): 91-110.

[12] BAY H, ESS A, TUYTELAARS T, et al. Speeded-up robust features (SURF)[J]. Computer Vision and Image Understanding, 2008, 110(3): 346-359.

[13] FRIEDMAN J, HASTIE T, TIBSHIRANI R. Additive logistic regression: a statistical view of boosting (with discussion and a rejoinder by the authors)[J]. The Annals of Statistics, 2000, 28(2): 337-407.

[14] UIJLINGS J R R, VAN DE SANDE K E A, GEVERS T, et al. Selective search for object recognition[J]. International Journal of Computer Vision, 2013, 104(2): 154-171.

[15] REDMON J, DIVVALA S, GIRSHICK R, et al. You only look once: unified, real-time object detection[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016: 779-788.

[16] REDMON J, FARHADI A. YOLO9000: better, faster, stronger[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017: 7263-7271.

[17] REDMON J, FARHADI A. YOLOv3: an incremental improvement[J]. Computing Research Repository, 2018(253): 1298-1305.

[18] BOCHKOVSKIY A, WANG C Y, LIAO H Y. YOLOv4: optimal speed and accuracy of object detection[J]. Computing Research Repository, 2020(283): 1043-1052.

[19] GIRSHICK R, DONAHUE J, DARRELL T, et al. Rich feature hierarchies for accurate object detection and semantic segmentation[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014: 580-587.

[20] GIRSHICK R. Fast R-CNN[C]// Proceedings of the IEEE International Conference on Computer Vision, 2015: 1440-1448.

[21] REN S, HE K, GIRSHICK R, et al. Faster R-CNN: towards real-time object detection with region proposal networks[J]. Advances in Neural Information Processing Systems, 2015(28): 91-99.

[22] LIN T Y, DOLLÁR P, GIRSHICK R, et al. Feature pyramid networks for object detection[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017: 2117-2125.

[23] ZHOU X, WANG D, KRAHENBUHL P. Objects as points[J]. Computing Research Repository, 2019(275): 1210-1222.

[24] CAI Z, VASCONCELOS N. Cascade R-CNN: delving into high quality object detection[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018: 6154-6162.

[25] LIN T Y, GOYAL P, GIRSHICK R, et al. Focal loss for dense object detection[C]// Proceedings of the IEEE International Conference on Computer Vision, 2017: 2980-2988.

[26] GALCERAN E, CARRERAS M. A survey on coverage path planning for robotics[J]. Robotics and Autonomous Systems, 2013, 61(12): 1258-1276.

[27] KODITSCHEK D, RIMON E. Exact robot navigation using artificial potential functions[J]. IEEE Transactions on Robotics and Automation, 1992(8): 501-518.

[28] ZHANG L. Research on optimal planning algorithm based on heuristic search[D]. Nanjing: Nanjing University, 2014. (in Chinese)

[29] XIONG R H, LIU Y. Improvement and parallelization of the A* algorithm[J]. Computer Applications, 2015, 35(7): 1843-1848. (in Chinese)

[30] LIU S H, XIA J, SUN X M, et al. An efficient full coverage path planning algorithm in known environments[J]. Journal of Northeast Normal University (Natural Science Edition), 2011, 43(4): 39-43. (in Chinese)

[31] GAO X B, MA M C, WANG H T, et al. Research progress of small target detection processing[J]. Journal of Data Acquisition and Processing, 2021(36): 391-417.

[32] WANG C Y, LIAO H Y M, WU Y H, et al. CSPNet: a new backbone that can enhance learning capability of CNN[C]// Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020: 390-391.

[33] REZATOFIGHI H, TSOI N, GWAK J Y, et al. Generalized intersection over union: a metric and a loss for bounding box regression[C]// Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019: 658-666.

[34] HUI B W, SONG Z Y, FAN H Q, et al. A dataset for dim-small target detection and tracking of aircraft in infrared image sequences[J]. Data Bank, 2019(57): 238-247.