【PP-YOLOv2】Training on a Custom Dataset

Introduction

The official documentation covers data preparation in great detail; this post summarizes the steps I actually followed.
Official data preparation guide: https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.1/docs/tutorials/PrepareDataSet.md

1. Prepare the training data

I converted my custom dataset to the VOC format; the steps are as follows.

The initial dataset structure is:


dataset/name (name of your custom dataset)/
├── Annotations
│   ├── xxx1.xml
│   ├── xxx2.xml
│   ├── xxx3.xml
│   │   ...
├── images
│   ├── xxx1.jpg
│   ├── xxx2.jpg
│   ├── xxx3.jpg
│   │   ...

1.1 Create a txt file listing the class labels

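label_list.txt contains one class name per line. As a minimal sketch, it could be generated like this (the output path and the eight class names are taken from later in this post; replace them with your own):

# Minimal sketch: write label_list.txt with one class name per line.
classes = ['nut_m', 'sr_m', 'sr_b', 'sr_l', 'pad', 'fb', 'nm', 'nut']  # replace with your own classes

with open('dataset/name/label_list.txt', 'w') as f:  # path follows the dataset layout shown below
    f.write('\n'.join(classes) + '\n')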

1.2 Generate the dataset file lists

Step 1: Create a script called Main.py with the following content (I only generate a training set and a test set). Run it from inside the dataset root so the relative paths resolve; it produces train.txt and test.txt under 'name\ImageSets\Main'.

"""2021.01.19author:alianfunction: create train.txt and test.txt in ImageSets\Main"""import osimport randomtrainval_percent = 0.2 # 可自行进行调节(设置训练和测试的比例是8:2)train_percent = 1xmlfilepath = 'images'txtsavepath = 'ImageSets\Main'total_xml = os.listdir(xmlfilepath)num = len(total_xml)list = range(num)tv = int(num * trainval_percent)tr = int(tv * train_percent)trainval = random.sample(list, tv)train = random.sample(trainval, tr)ftest = open('ImageSets/Main/test.txt', 'w')ftrain = open('ImageSets/Main/train.txt', 'w')for i in list: name = total_xml[i] + '\n' # 保留图片的后缀名 if i in trainval: if i in train: ftest.write(name) else: ftrain.write(name)ftrain.close()ftest.close()

Step 2: Create a script called xml_to_txt.py with the following content. It generates train.txt and test.txt directly under 'name' (unlike the lists above, these are the file lists actually used for training).

"""2021.01.18author:alianfunction: xml to txt说明:将该文件放在与data同级根目录下面"""import xml.etree.ElementTree as ETimport pickleimport osfrom os import listdir, getcwdfrom os.path import joinsets = ['train', 'test']# 修改类别classes = ['nut_m', 'sr_m', 'sr_b', 'sr_l', 'pad', 'fb', 'nm', 'nut'] # 修改自己训练的类别def convert(size, box): dw = 1. / size[0] dh = 1. / size[1] x = (box[0] + box[1]) / 2.0 y = (box[2] + box[3]) / 2.0 w = box[1] - box[0] h = box[3] - box[2] x = x * dw w = w * dw y = y * dh h = h * dh return (x, y, w, h)def convert_annotation(image_id): in_file = open('Annotations/%s.xml' % (image_id)) # 修改路径 out_file = open('labels/%s.txt' % (image_id), 'w') # 修改路径 tree = ET.parse(in_file) root = tree.getroot() size = root.find('size') w = int(size.find('width').text) h = int(size.find('height').text) for obj in root.iter('object'): difficult = obj.find('difficult').text cls = obj.find('name').text if cls not in classes or int(difficult) == 1: continue cls_id = classes.index(cls) xmlbox = obj.find('bndbox') b = (float(xmlbox.find('xmin').text), float(xmlbox.find('xmax').text), float(xmlbox.find('ymin').text), float(xmlbox.find('ymax').text)) bb = convert((w, h), b) out_file.write(str(cls_id) + " " + " ".join([str(a) for a in bb]) + '\n')wd = getcwd()for image_set in sets: if not os.path.exists('labels/'): # 修改路径 os.makedirs('labels/') # 修改路径 image_ids = open('ImageSets/Main/%s.txt' % (image_set)).read().strip().split() # 修改路径 list_file = open('%s.txt' % (image_set), 'w') # 修改路径 for image_id in image_ids: list_file.write('images/%s Annotations/%s.xml\n' % (image_id,image_id[:-4])) # 修改路径 print(image_id) convert_annotation(image_id[:-4]) list_file.close()

After the steps above, the final training dataset looks like this:

dataset/name/
├── annotations
│   ├── xxx1.xml
│   ├── xxx2.xml
│   ├── xxx3.xml
│   │   ...
├── images
│   ├── xxx1.jpg
│   ├── xxx2.jpg
│   ├── xxx3.jpg
│   │   ...
├── labels
│   ├── xxx1.txt
│   ├── xxx2.txt
│   ├── xxx3.txt
│   │   ...
├── ImageSets
│   └── Main
│       ├── train.txt
│       └── test.txt
├── label_list.txt   (required; the file name must be exactly label_list.txt)
├── train.txt        (training file list, e.g. ./images/xxx1.jpg ./annotations/xxx1.xml)
└── test.txt         (test file list)

Description of each file

# label_list.txt is the list of class names; the file name must be exactly this
>> cat label_list.txt
classname1
classname2
...

# train.txt is the training file list
>> cat train.txt
./images/xxx1.jpg ./annotations/xxx1.xml
./images/xxx2.jpg ./annotations/xxx2.xml
...

# valid.txt is the validation file list
>> cat valid.txt
./images/xxx3.jpg ./annotations/xxx3.xml
...
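Before touching the configuration files, it is worth checking that every entry in the generated file lists points to an existing image/annotation pair. A minimal sketch of such a check (dataset/name is a placeholder; point it at your own dataset root):

# Minimal sanity check: every image/annotation path listed in train.txt and test.txt should exist.
import os

dataset_root = 'dataset/name'  # placeholder: replace with your own dataset root

for list_name in ('train.txt', 'test.txt'):
    with open(os.path.join(dataset_root, list_name)) as f:
        for line in f:
            parts = line.split()
            if len(parts) != 2:
                continue
            img_rel, xml_rel = parts
            for rel in (img_rel, xml_rel):
                path = os.path.join(dataset_root, rel)
                if not os.path.exists(path):
                    print('missing:', path)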

2. Modify the configuration files

Enter the PaddleDetection folder; my dataset is in VOC format.
Open 'configs/ppyolo/ppyolov2_r50vd_dcn_voc.yml': its _BASE_ list shows that five configuration files need to be modified.
Edit each of them as described below.
The official config annotation can be used as a reference: https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.1/docs/tutorials/config_annotation/ppyolo_r50vd_dcn_1x_coco_annotation.md

_BASE_: [
  '../datasets/voc.yml',
  './_base_/optimizer_365e.yml',
  './_base_/ppyolov2_reader.yml',
  './_base_/ppyolov2_r50vd_dcn.yml',
  '../runtime.yml',
]

snapshot_epoch: 1000   # number of epochs between checkpoint saves
weights: output/ppyolov2_r50vd_dcn_voc/model_final   # where the model is saved

TrainReader:
  mixup_epoch: 350
  batch_size: 4        # training batch size

The parts that need to be modified are marked with # comments below.
File 1: 'configs/datasets/voc.yml'

metric: VOC            # dataset/metric type
map_type: 11point
num_classes: 8         # number of object classes, excluding background

TrainDataset:
  !VOCDataSet
    dataset_dir: dataset/Track_fasteners      # dataset directory
    anno_path: train.txt                      # training file list
    label_list: label_list.txt                # class name list
    data_fields: ['image', 'gt_bbox', 'gt_class', 'difficult']

EvalDataset:
  !VOCDataSet
    dataset_dir: dataset/Track_fasteners      #
    anno_path: test.txt                       # test file list
    label_list: label_list.txt                #
    data_fields: ['image', 'gt_bbox', 'gt_class', 'difficult']

TestDataset:
  !ImageFolder
    anno_path: dataset/Track_fasteners/label_list.txt   #
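A quick sanity check is that num_classes matches the number of names in label_list.txt. A minimal sketch, using the paths from the config above:

# Count the class names in label_list.txt; the result should equal num_classes (8 here).
with open('dataset/Track_fasteners/label_list.txt') as f:
    names = [line.strip() for line in f if line.strip()]
print(len(names), names)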

File 2: 'configs/ppyolo/_base_/optimizer_365e.yml'

epoch: 5000              # total number of training epochs

LearningRate:
  base_lr: 0.002         # learning rate (default 0.01 scaled by the number of GPUs used)
  schedulers:
  - !PiecewiseDecay
    gamma: 0.1
    milestones:
    - 243
  - !LinearWarmup
    start_factor: 0.
    steps: 1000

OptimizerBuilder:
  clip_grad_by_norm: 35.
  optimizer:
    momentum: 0.9
    type: Momentum
  regularizer:
    factor: 0.0005
    type: L2
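The base_lr comment suggests scaling the default learning rate of 0.01 (tuned for 8 GPUs) linearly by the number of GPUs actually used; two GPUs are used in the training command later in this post. A minimal sketch of that calculation, assuming this linear-scaling interpretation:

default_lr, default_gpus = 0.01, 8   # assumed 8-GPU default
num_gpus = 2                         # GPUs 4 and 5 are used in the training command below
base_lr = default_lr / default_gpus * num_gpus
print(base_lr)                       # 0.0025, close to the 0.002 set above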

File 3: 'configs/ppyolo/_base_/ppyolov2_reader.yml'

worker_num: 2   # number of reader processes per GPU

TrainReader:
  inputs_def:
    num_max_boxes: 100
  sample_transforms:
    - Decode: {}
    - Mixup: {alpha: 1.5, beta: 1.5}
    - RandomDistort: {}
    - RandomExpand: {fill_value: [123.675, 116.28, 103.53]}
    - RandomCrop: {}
    - RandomFlip: {}
  batch_transforms:
    - BatchRandomResize: {target_size: [320, 352, 384, 416, 448, 480, 512, 544, 576, 608, 640, 672, 704, 736, 768], random_size: True, random_interp: True, keep_ratio: False}
    - NormalizeBox: {}
    - PadBox: {num_max_boxes: 100}
    - BboxXYXY2XYWH: {}
    - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
    - Permute: {}
    - Gt2YoloTarget: {anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]], anchors: [[17, 15], [19, 14], [19, 16], [19, 18], [21, 18], [57, 66], [56, 76], [91, 91], [121, 144]], downsample_ratios: [32, 16, 8]}
  batch_size: 4              # training batch size
  shuffle: true              # whether to shuffle the data when reading
  drop_last: true            # whether to drop the last incomplete batch
  mixup_epoch: 25000         # if larger than the total epochs, mixup augmentation is used throughout training
  use_shared_memory: true    # speed up loading via shared memory; requires shared memory (e.g. /dev/shm) larger than 1 GB

EvalReader:
  sample_transforms:
    - Decode: {}
    - Resize: {target_size: [640, 640], keep_ratio: False, interp: 2}
    - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
    - Permute: {}
  batch_size: 4              # evaluation batch size

TestReader:
  inputs_def:
    image_shape: [3, 640, 640]
  sample_transforms:
    - Decode: {}
    - Resize: {target_size: [640, 640], keep_ratio: False, interp: 2}
    - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
    - Permute: {}
  batch_size: 1              # inference batch size

File 4: 'configs/ppyolo/_base_/ppyolov2_r50vd_dcn.yml'

architecture: YOLOv3   # model architecture type
# pretrained weights
pretrain_weights: model_final.pdparams   # default: https://paddledet.bj.bcebos.com/models/pretrained/ResNet50_vd_ssld_pretrained.pdparams
norm_type: sync_bn
use_ema: true          # whether to use EMA
ema_decay: 0.9998

YOLOv3:
  backbone: ResNet
  neck: PPYOLOPAN
  yolo_head: YOLOv3Head
  post_process: BBoxPostProcess

ResNet:
  depth: 50
  variant: d
  return_idx: [1, 2, 3]
  dcn_v2_stages: [3]
  freeze_at: -1
  freeze_norm: false
  norm_decay: 0.

PPYOLOPAN:
  drop_block: true
  block_size: 3
  keep_prob: 0.9
  spp: true

YOLOv3Head:
  anchors: [[17, 15], [19, 14], [19, 16], [19, 18], [21, 18], [57, 66], [56, 76], [91, 91], [121, 144]]
  anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]]
  loss: YOLOv3Loss
  iou_aware: true
  iou_aware_factor: 0.5

YOLOv3Loss:
  ignore_thresh: 0.7
  downsample: [32, 16, 8]
  label_smooth: false
  scale_x_y: 1.05
  iou_loss: IouLoss
  iou_aware_loss: IouAwareLoss

IouLoss:
  loss_weight: 2.5
  loss_square: true

IouAwareLoss:
  loss_weight: 1.0

BBoxPostProcess:
  decode:
    name: YOLOBox
    conf_thresh: 0.01
    downsample_ratio: 32
    clip_bbox: true
    scale_x_y: 1.05
  nms:
    name: MatrixNMS
    keep_top_k: 100
    score_threshold: 0.01
    post_threshold: 0.01
    nms_top_k: -1
    background_label: -1

The pretrain_weights path in the config above is not the default URL from the source code but a local model_final.pdparams, which is produced as follows.
Create a script named changeppyolov2.py in the source root with the content below. It re-initializes the three YOLO head output layers so their channel count matches the new number of classes (with IoU-aware prediction enabled, each output layer needs 3 * (num_class + 6) = num_class * 3 + 18 channels).
The file ppyolov2_r50vd_dcn_365e_coco.pdparams is the official pretrained model and must be downloaded from the model zoo first.

import numpy as np
import pickle

num_class = 8   # number of classes

# load the pretrained COCO model (download it from the official model zoo first)
with open('ppyolov2_r50vd_dcn_365e_coco.pdparams', 'rb') as f:
    obj = f.read()
weights = pickle.loads(obj, encoding='latin1')

# reset the three YOLO head output layers to num_class*3+18 output channels
weights['yolo_head.yolo_output.0.weight'] = np.zeros([num_class*3+18, 1024, 1, 1], dtype='float32')
weights['yolo_head.yolo_output.0.bias'] = np.zeros([num_class*3+18], dtype='float32')
weights['yolo_head.yolo_output.1.weight'] = np.zeros([num_class*3+18, 512, 1, 1], dtype='float32')
weights['yolo_head.yolo_output.1.bias'] = np.zeros([num_class*3+18], dtype='float32')
weights['yolo_head.yolo_output.2.weight'] = np.zeros([num_class*3+18, 256, 1, 1], dtype='float32')
weights['yolo_head.yolo_output.2.bias'] = np.zeros([num_class*3+18], dtype='float32')

f = open('model_final.pdparams', 'wb')
pickle.dump(weights, f)
f.close()

Running this script produces model_final.pdparams in the same directory; fill its path into the pretrain_weights field of the config above.
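To confirm the surgery worked, the reshaped head weights can be inspected before training; a minimal sketch that reads model_final.pdparams back with pickle (the same format used by the script above):

import pickle

num_class = 8
expected = num_class * 3 + 18   # 3 anchors x (num_class + 5 box/objectness + 1 IoU-aware channel)

with open('model_final.pdparams', 'rb') as f:
    weights = pickle.loads(f.read(), encoding='latin1')

for i in range(3):
    w = weights['yolo_head.yolo_output.%d.weight' % i]
    b = weights['yolo_head.yolo_output.%d.bias' % i]
    assert w.shape[0] == expected and b.shape[0] == expected, (w.shape, b.shape)
    print('yolo_output.%d:' % i, w.shape, b.shape)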
File 5: 'configs/runtime.yml'

use_gpu: true       # whether to use the GPU
log_iter: 20        # logging interval (in iterations)
save_dir: output
snapshot_epoch: 1   # checkpoint saving interval (in epochs)

3. Generate dataset-specific anchors

python tools/anchor_cluster.py -c configs/ppyolo/ppyolov2_r50vd_dcn_voc.yml -n 9 -s 640 -m v2 -i 1000

Copy the resulting anchors into configs/ppyolo/_base_/ppyolov2_r50vd_dcn.yml (YOLOv3Head.anchors) and configs/ppyolo/_base_/ppyolov2_reader.yml (the Gt2YoloTarget anchors); a check for this is sketched below.
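Since the anchors have to be copied into two files by hand, a small check that both configs ended up with the same values can save a silent mismatch. A minimal sketch, assuming PyYAML is available (it is not required by the post itself) and that the two files contain the keys shown above:

import yaml  # assumption: PyYAML is installed

with open('configs/ppyolo/_base_/ppyolov2_r50vd_dcn.yml') as f:
    model_cfg = yaml.safe_load(f)
with open('configs/ppyolo/_base_/ppyolov2_reader.yml') as f:
    reader_cfg = yaml.safe_load(f)

head_anchors = model_cfg['YOLOv3Head']['anchors']
gt2yolo = next(t for t in reader_cfg['TrainReader']['batch_transforms'] if 'Gt2YoloTarget' in t)
reader_anchors = gt2yolo['Gt2YoloTarget']['anchors']
print('anchors match:', head_anchors == reader_anchors)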


4. Run the training command

export CUDA_VISIBLE_DEVICES=4,5
python -m paddle.distributed.launch --log_dir=logs/ --selected_gpus='4,5' tools/train.py -c configs/ppyolo/ppyolov2_r50vd_dcn_voc.yml

The final model is saved under output/ppyolov2_r50vd_dcn_voc/model_final.
