
Density-guided Automatic Mixing for Image Data Augmentation

  • Abstract: Mixup augmentations mitigate overfitting in visual models by generating mixed samples; the core process consists of two stages, sample selection and sample mixing. Existing methods typically select two images at random for interpolation or patch replacement to generate mixed samples, neglecting the relationship between the feature-space distribution and semantic information, which limits the augmentation effect. To address this, a density-guided automatic mixing technique for image data augmentation is proposed, which uses the feature distribution to guide the augmentation process. In the sample selection stage, the method introduces a density metric to quantify the distribution density of samples and proposes a density-difference-based image pairing algorithm to select image pairs with highly representative features and maximal salient information. In the sample mixing stage, the method uses the density difference of each image pair to jointly optimize the mixed-sample generation task and the classification task in an end-to-end manner, automatically generating mixing masks so that discriminative semantic regions are preserved in the newly mixed images. Experiments on standard and fine-grained benchmark datasets show that the proposed method improves classification accuracy by approximately 1% compared with AutoMix. Furthermore, the proposed image pairing algorithm is highly compatible and can further enhance the performance of other augmentation strategies.
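To make the pairing idea concrete, the following is a minimal sketch (not the authors' code) of how a k-nearest-neighbor density estimate and a density-difference pairing rule could be implemented in PyTorch. The neighborhood size k, the inverse-mean-distance density proxy, and the sort-and-flip pairing heuristic are illustrative assumptions, not details taken from the paper.

import torch

def knn_density(features: torch.Tensor, k: int = 10) -> torch.Tensor:
    # Density proxy: inverse of the mean distance to the k nearest neighbors
    # in feature space (hypothetical choice, not taken from the paper).
    dists = torch.cdist(features, features)           # (N, N) pairwise L2 distances
    knn_dists, _ = dists.topk(k + 1, largest=False)   # k+1 smallest distances, includes self (0)
    mean_knn = knn_dists[:, 1:].mean(dim=1)           # drop the self-distance column
    return 1.0 / (mean_knn + 1e-8)

def pair_by_density_difference(features: torch.Tensor, k: int = 10) -> torch.Tensor:
    # Pairing heuristic: sort samples by density and match opposite ends of the
    # ordering, so each pair has a large density difference.
    density = knn_density(features, k)
    order = density.argsort()                         # indices from sparsest to densest
    partner = torch.empty_like(order)
    partner[order] = order.flip(0)                    # sparsest <-> densest, and so on
    return partner                                    # partner[i] = index paired with sample i

# Usage sketch: feats = backbone(images).flatten(1); idx = pair_by_density_difference(feats)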

     
