
I. Background

Continuing the earlier theme: using open-source software to build a safety-management platform for the company, starting with a vision model that recognizes safety helmets. My main reference was the open-source project https://github.com/jomarkow/Safety-Helmet-Detection, which I worked through backwards: running it first, then training, then annotation. Because of work I could only study in spare moments, picking up VS Code, Python syntax, PyTorch, YOLO, Ultralytics, CUDA, the CUDA Toolkit, cuDNN, the categories of AI, and the related jargon. At this point I can handle the whole object-detection workflow in an engineering way: environment setup, annotation, training, and inference.

II. Sorting out the ideas

1. Today's AI is essentially a fortune-teller fed on piles of data, not an intelligence

Given enough data, AI finds, to a large degree (say 95%), the cause-and-effect relationship between inputs and outcomes in that data. Call it unintelligent, and yet it may predict better than a human. Call it intelligent, and yet it is limited by its finite inputs: we cannot enumerate every possible input, and the data humanity holds is both limited and not necessarily objective. Just my personal take; no offense intended.

2. Object detection in images is "draw circles, look a lot, try it out"

Drawing circles is data annotation: boxing the targets in the images. Looking a lot is letting the algorithm study the images, much like pointing at yourself and repeating "daddy" until a toddler slowly learns the word. Trying it out is running the trained result (the AI model, or algorithm model), like the toddler meeting another man and possibly calling him "daddy" too; you correct him, and he learns that only you are "daddy" and other men are "uncle". That is all object detection is; nothing mystical. Of course we stand on the shoulders of our predecessors: writing those algorithms in their era truly deserves respect.

3. Environment and tools

Windows 11 Home; NVIDIA GeForce GT 730 (I did not train on the GPU; the card is too old, and the version-compatibility fights nearly made me cry, so CPU it is, slow as that may be); VS Code 1.83.0; conda 24.9.2; Python 3.12.7; PyTorch 2.5.0; YOLOv8 (I will try the newer YOLO11 later).

III. Building it by hand

1. Create an empty project skeleton

(1) Create an object-detection-hello folder in a path with no Chinese characters or spaces.
(2) Open the folder in VS Code.
(3) Create the directories and files in VS Code. The folder layout shown below is not mandatory, but it is recommended.
(4) Write the training parameter file train_config.yaml:

```yaml
train: ../../datas/images
val: ../../datas/images

# class count
nc: 1
names: [helmet]
```

(5) Download yolov8n.pt into the models directory. It is among the resources attached to this article, and it also sits in the root of the open-source project GitHub - jomarkow/Safety-Helmet-Detection (a YOLOv8 model trained to recognize whether construction workers are wearing their protection helmets in mandatory areas).

2. Annotating helmets in the images

(1) Prepare images. Download the Safety-Helmet-Detection project; it ships with images. I kept only 0-999, one thousand images; since I am training on CPU, training is slow, and a thousand should be enough to see an effect.

(2) Annotate. See my earlier CSDN article on installing and using labelImg for image annotation on Windows.

(3) Convert the annotations. My annotated files are XML and must be converted to txt. The XML and the converted txt look like this, respectively:
```xml
<annotation>
    <folder>images</folder>
    <filename>hard_hat_workers2.png</filename>
    <path>D:\zsp\works\temp\20241119-zsp-helmet\Safety-Helmet-Detection-main\data\images\hard_hat_workers2.png</path>
    <source><database>Unknown</database></source>
    <size><width>416</width><height>415</height><depth>3</depth></size>
    <segmented>0</segmented>
    <object>
        <name>helmet</name>
        <pose>Unspecified</pose>
        <truncated>0</truncated>
        <difficult>0</difficult>
        <bndbox><xmin>295</xmin><ymin>219</ymin><xmax>326</xmax><ymax>249</ymax></bndbox>
    </object>
    <object>
        <name>helmet</name>
        <pose>Unspecified</pose>
        <truncated>0</truncated>
        <difficult>0</difficult>
        <bndbox><xmin>321</xmin><ymin>212</ymin><xmax>365</xmax><ymax>244</ymax></bndbox>
    </object>
</annotation>
```

```
0 0.745192 0.565060 0.072115 0.060241
0 0.826923 0.549398 0.081731 0.072289
```

Sometimes the labels already come in the text format. If not, create converter.py to do the conversion directly:

```python
from xml.dom import minidom
import os

classes = {"helmet": 0}

def convert_coordinates(size, box):
    # size = (width, height); box = (xmin, xmax, ymin, ymax)
    dw = 1.0 / size[0]
    dh = 1.0 / size[1]
    x = (box[0] + box[1]) / 2.0
    y = (box[2] + box[3]) / 2.0
    w = box[1] - box[0]
    h = box[3] - box[2]
    x = x * dw
    w = w * dw
    y = y * dh
    h = h * dh
    return (x, y, w, h)

def converter(classes):
    old_labels_path = "datas/images/raw_data/"
    new_labels_path = "datas/images/raw_data/"
    current_path = os.getcwd()
    # print the current working directory
    print("Current path:", current_path)
    for file_name in os.listdir(old_labels_path):
        if ".xml" in file_name:
            old_file = minidom.parse(f"{old_labels_path}/{file_name}")
            name_out = file_name[:-4] + ".txt"
            with open(f"{new_labels_path}/{name_out}", "w") as new_file:
                itemlist = old_file.getElementsByTagName("object")
                size = old_file.getElementsByTagName("size")[0]
                width = int(size.getElementsByTagName("width")[0].firstChild.data)
                height = int(size.getElementsByTagName("height")[0].firstChild.data)
                for item in itemlist:
                    # get class label
                    class_name = item.getElementsByTagName("name")[0].firstChild.data
                    if class_name in classes:
                        label_str = str(classes[class_name])
                    else:
                        label_str = "-1"
                        print(f"{class_name} not in function classes")
                    # get bbox coordinates
                    bndbox = item.getElementsByTagName("bndbox")[0]
                    xmin = bndbox.getElementsByTagName("xmin")[0].firstChild.data
                    ymin = bndbox.getElementsByTagName("ymin")[0].firstChild.data
                    xmax = bndbox.getElementsByTagName("xmax")[0].firstChild.data
                    ymax = bndbox.getElementsByTagName("ymax")[0].firstChild.data
                    b = (float(xmin), float(xmax), float(ymin), float(ymax))
                    bb = convert_coordinates((width, height), b)
                    new_file.write(f"{label_str} {' '.join(f'{a:.6f}' for a in bb)}\n")
            print(f"wrote {name_out}")

def main():
    converter(classes)

if __name__ == "__main__":
    main()
```

(4) The lazy option: reuse the labels from the open-source project. You can simply copy txt files 0-999 from the project's labels folder into our labels directory. But those files annotate multiple classes, and we only want to keep the helmet annotations, that is, the lines in each txt file that start with 0. So, with an AI's help, I wrote utils/deleteOtherclass.py:

```python
import os

PROY_FOLDER = os.getcwd().replace("\\", "/")
INPUT_FOLDER = f"{PROY_FOLDER}/datas/labels/"
files = os.listdir(INPUT_FOLDER)

def process_file(file_path):
    # collect the lines we want to keep
    processed_lines = []
    try:
        with open(file_path, "r") as file:
            for line in file:
                if len(line) > 0 and line[0] == "0":  # keep lines whose first character is 0
                    processed_lines.append(line)
    except FileNotFoundError:
        print(f"File {file_path} not found")
        return
    try:
        with open(file_path, "w") as file:  # opening in write mode truncates the file
            file.writelines(processed_lines)
    except Exception as e:
        print(f"Error while writing file: {e}")

for file_name in files:
    file_path = INPUT_FOLDER + file_name
    print(file_path)
    process_file(file_path)
```

After running it, only the lines starting with 0 remain in each txt file, i.e. our helmet annotations. For example, hard_hat_workers0.txt before and after:

```
0 0.914663 0.349760 0.112981 0.141827
0 0.051683 0.396635 0.084135 0.091346
0 0.634615 0.379808 0.052885 0.091346
0 0.748798 0.391827 0.055288 0.086538
0 0.305288 0.397837 0.052885 0.069712
0 0.216346 0.397837 0.048077 0.069712
1 0.174279 0.379808 0.050481 0.067308
1 0.801683 0.383413 0.055288 0.088942
1 0.443510 0.411058 0.045673 0.072115
1 0.555288 0.400240 0.043269 0.074519
1 0.500000 0.383413 0.038462 0.064904
0 0.252404 0.360577 0.033654 0.048077
1 0.399038 0.393029 0.043269 0.064904
```

```
0 0.914663 0.349760 0.112981 0.141827
0 0.051683 0.396635 0.084135 0.091346
0 0.634615 0.379808 0.052885 0.091346
0 0.748798 0.391827 0.055288 0.086538
0 0.305288 0.397837 0.052885 0.069712
0 0.216346 0.397837 0.048077 0.069712
0 0.252404 0.360577 0.033654 0.048077
```
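The center/width/height normalization that converter.py performs can be sanity-checked in isolation. Below is a standalone re-implementation sketch of the same arithmetic; the function name and sample numbers are my own for illustration and are not taken from the project or the dataset:

```python
def voc_to_yolo(size, box):
    """Convert a Pascal-VOC box (xmin, xmax, ymin, ymax) in pixels
    to YOLO format (x_center, y_center, w, h) normalized to [0, 1]."""
    img_w, img_h = size
    xmin, xmax, ymin, ymax = box
    x = (xmin + xmax) / 2.0 / img_w   # normalized box center x
    y = (ymin + ymax) / 2.0 / img_h   # normalized box center y
    w = (xmax - xmin) / img_w         # normalized box width
    h = (ymax - ymin) / img_h         # normalized box height
    return (x, y, w, h)

# a 40x40 box centered at (30, 40) inside a 100x200 image
print(voc_to_yolo((100, 200), (10, 50, 20, 60)))  # -> (0.3, 0.2, 0.4, 0.2)
```

Checking a couple of boxes by hand like this is a cheap way to catch the classic mistake of swapping width and height (or x and y) before an hour-long training run.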
3. Writing the training script

Ultralytics generally keeps its settings in C:\Users\Dell\AppData\Roaming\Ultralytics\settings.json (replace Dell in the path with your own user name). For now it is enough just to know it exists. Its contents look like this:

```json
{
  "settings_version": "0.0.6",
  "datasets_dir": "D:\\zsp\\works\\temp\\20241218-zsp-pinwei\\object-detection-hello",
  "weights_dir": "weights",
  "runs_dir": "runs",
  "uuid": "09253350c3bd45fd265c2e8346acaaa599711c1c3ef91e7e78ceff31d4132a83",
  "sync": true,
  "api_key": "",
  "openai_api_key": "",
  "clearml": true,
  "comet": true,
  "dvc": true,
  "hub": true,
  "mlflow": true,
  "neptune": true,
  "raytune": true,
  "tensorboard": true,
  "wandb": false,
  "vscode_msg": true
}
```

Judging from the contents, the only key we likely need to change is datasets_dir. To keep things tidy, I wrote code to modify it.

(1) scripts/ultralytics_init.py, which updates the setting programmatically:

```python
from ultralytics import settings
import os

def update_ultralytics_settings(key, value):
    try:
        # settings.update(key, value)  # assuming an update method exists
        settings[key] = value
        print(f"Updated {key} to {value} in ultralytics settings.")
    except AttributeError:
        print(f"Failed to update {key}, the update method may not exist in the settings module.")

def init():
    current_path = os.getcwd()
    print(current_path)
    # pass the key name as a quoted string
    update_ultralytics_settings("datasets_dir", current_path)
    print(settings)
```

(2) The training entry point, scripts/train.py:

```python
import ultralytics_init as uinit

uinit.init()

from ultralytics import YOLO

model = YOLO("models/yolov8n.pt")
model.train(data="config/train_config.yaml", epochs=10)
result = model.val()
path = model.export(format="onnx")
```

I don't think the code needs explaining; it is clear at a glance.

(3) The configuration file config/train_config.yaml:

```yaml
# training image set
train: ../../datas/images
# validation image set
val: ../../datas/images

# number of target classes
nc: 1
# class names (in English)
names: [helmet]
```

4. Run the training

Click the Run triangle in the top-right corner. Training for 10 epochs took about an hour. The log began:

```
PS D:\zsp\works\temp\20241218-zsp-pinwei\object-detection-hello> C:/Users/Dell/.conda/envs/myenv/python.exe d:/zsp/works/temp/20241218-zsp-pinwei/object-detection-hello/scripts/train.py
D:\zsp\works\temp\20241218-zsp-pinwei\object-detection-hello
Updated datasets_dir to D:\zsp\works\temp\20241218-zsp-pinwei\object-detection-hello in ultralytics settings.
```
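If going through the ultralytics settings object ever gives trouble, the same edit can be made on settings.json with nothing but the standard library. A minimal sketch, assuming the file lives at the path shown above (the function name is my own, and the path must be adjusted to your user name):

```python
import json
from pathlib import Path

def patch_settings(settings_path, key, value):
    """Read a JSON settings file, replace one key, and write it back."""
    path = Path(settings_path)
    data = json.loads(path.read_text(encoding="utf-8"))
    data[key] = value
    path.write_text(json.dumps(data, indent=2), encoding="utf-8")
    return data

# usage (hypothetical paths; replace "Dell" with your own user name):
# patch_settings(r"C:\Users\Dell\AppData\Roaming\Ultralytics\settings.json",
#                "datasets_dir", r"D:\my\project")
```

This is just a fallback sketch; in normal use, letting Ultralytics manage its own settings file is the safer route.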
```
JSONDict("C:\Users\Dell\AppData\Roaming\Ultralytics\settings.json"):
{
  "settings_version": "0.0.6",
  "datasets_dir": "D:\\zsp\\works\\temp\\20241218-zsp-pinwei\\object-detection-hello",
  "weights_dir": "weights",
  "runs_dir": "runs",
  "uuid": "09253350c3bd45fd265c2e8346acaaa599711c1c3ef91e7e78ceff31d4132a83",
  "sync": true,
  "api_key": "",
  "openai_api_key": "",
  "clearml": true,
  "comet": true,
  "dvc": true,
  "hub": true,
  "mlflow": true,
  "neptune": true,
  "raytune": true,
  "tensorboard": true,
  "wandb": false,
  "vscode_msg": true
}
New https://pypi.org/project/ultralytics/8.3.55 available  Update with 'pip install -U ultralytics'
Ultralytics 8.3.49  Python-3.12.7 torch-2.5.0 CPU (12th Gen Intel Core(TM) i7-12700)
engine\trainer: task=detect, mode=train, model=models/yolov8n.pt, data=config/train_config.yaml, epochs=10, time=None, patience=100, batch=16, imgsz=640, save=True, save_period=-1, cache=False, device=None, workers=8, project=None, name=train, exist_ok=False, pretrained=True, optimizer=auto, verbose=True, seed=0, deterministic=True, single_cls=False, rect=False, cos_lr=False, close_mosaic=10, resume=False, amp=True, fraction=1.0, profile=False, freeze=None, multi_scale=False, overlap_mask=True, mask_ratio=4, dropout=0.0, val=True, split=val, save_json=False, save_hybrid=False, conf=None, iou=0.7, max_det=300, half=False, dnn=False, plots=True, source=None, vid_stride=1, stream_buffer=False, visualize=False, augment=False, agnostic_nms=False, classes=None, retina_masks=False, embed=None, show=False, save_frames=False, save_txt=False, save_conf=False, save_crop=False, show_labels=True, show_conf=True, show_boxes=True, line_width=None, format=torchscript, keras=False, optimize=False, int8=False, dynamic=False, simplify=True, opset=None, workspace=None, nms=False, lr0=0.01, lrf=0.01, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=7.5, cls=0.5, dfl=1.5, pose=12.0, kobj=1.0, nbs=64, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, bgr=0.0, mosaic=1.0, mixup=0.0, copy_paste=0.0, copy_paste_mode=flip, auto_augment=randaugment, erasing=0.4, crop_fraction=1.0, cfg=None, tracker=botsort.yaml, save_dir=runs\detect\train
Overriding model.yaml nc=80 with nc=1

                   from  n    params  module                                       arguments
  0                  -1  1       464  ultralytics.nn.modules.conv.Conv             [3, 16, 3, 2]
  1                  -1  1      4672  ultralytics.nn.modules.conv.Conv             [16, 32, 3, 2]
  2                  -1  1      7360  ultralytics.nn.modules.block.C2f             [32, 32, 1, True]
  3                  -1  1     18560  ultralytics.nn.modules.conv.Conv             [32, 64, 3, 2]
  4                  -1  2     49664  ultralytics.nn.modules.block.C2f             [64, 64, 2, True]
  5                  -1  1     73984  ultralytics.nn.modules.conv.Conv             [64, 128, 3, 2]
  6                  -1  2    197632  ultralytics.nn.modules.block.C2f             [128, 128, 2, True]
  7                  -1  1    295424  ultralytics.nn.modules.conv.Conv             [128, 256, 3, 2]
  8                  -1  1    460288  ultralytics.nn.modules.block.C2f             [256, 256, 1, True]
  9                  -1  1    164608  ultralytics.nn.modules.block.SPPF            [256, 256, 5]
 10                  -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']
 11             [-1, 6]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 12                  -1  1    148224  ultralytics.nn.modules.block.C2f             [384, 128, 1]
 13                  -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']
 14             [-1, 4]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 15                  -1  1     37248  ultralytics.nn.modules.block.C2f             [192, 64, 1]
 16                  -1  1     36992  ultralytics.nn.modules.conv.Conv             [64, 64, 3, 2]
 17            [-1, 12]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 18                  -1  1    123648  ultralytics.nn.modules.block.C2f             [192, 128, 1]
 19                  -1  1    147712  ultralytics.nn.modules.conv.Conv             [128, 128, 3, 2]
 20             [-1, 9]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 21                  -1  1    493056  ultralytics.nn.modules.block.C2f             [384, 256, 1]
 22        [15, 18, 21]  1    751507  ultralytics.nn.modules.head.Detect           [1, [64, 128, 256]]
Model summary: 225 layers, 3,011,043 parameters, 3,011,027 gradients, 8.2 GFLOPs

train: Scanning D:\zsp\works\temp\20241218-zsp-pinwei\object-detection-hello\datas\labels.cache...
val: Scanning D:\zsp\works\temp\20241218-zsp-pinwei\object-detection-hello\datas\labels.cache...
Plotting labels to runs\detect\train\labels.jpg...
optimizer: 'optimizer=auto' found, ignoring 'lr0=0.01' and 'momentum=0.937' and determining best optimizer, lr0 and momentum automatically...
optimizer: AdamW(lr=0.002, momentum=0.9) with parameter groups 57 weight(decay=0.0), 64 weight(decay=0.0005), 63 bias(decay=0.0)
Image sizes 640 train, 640 val
Using 0 dataloader workers
Logging results to runs\detect\train
Starting training for 10 epochs...
Closing dataloader mosaic

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       1/10         0G      1.592      2.148      1.278         40        640: 100%|██████████| 63/63
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%|██████████| 32/32 [02:58<00:00,  5.57s/it]
                   all       1000       3792      0.977      0.033      0.423      0.229

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       2/10         0G      1.483      1.464      1.215         23        640: 100%|██████████| 63/63 [07:53<00:00,  7.51s/it]
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%|██████████| 32/32 [01:20<00:00,  2.51s/it]
                   all       1000       3792      0.697      0.647      0.687      0.398

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       3/10         0G      1.489      1.318      1.242         15        640: 100%|██████████| 63/63 [02:52<00:00,  2.74s/it]
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%|██████████| 32/32 [01:13<00:00,  2.30s/it]
                   all       1000       3792      0.783      0.662      0.744      0.401

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       4/10         0G       1.47      1.183       1.22         19        640: 100%|██████████| 63/63 [02:50<00:00,  2.71s/it]
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%|██████████| 32/32 [01:12<00:00,  2.25s/it]
                   all       1000       3792      0.837      0.749      0.832      0.496

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       5/10         0G       1.42      1.041      1.196         31        640: 100%|██████████| 63/63 [02:51<00:00,  2.72s/it]
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%|██████████| 32/32 [01:11<00:00,  2.22s/it]
                   all       1000       3792      0.867      0.776       0.87      0.537

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       6/10         0G        1.4     0.9758      1.196         30        640: 100%|██████████| 63/63 [02:51<00:00,  2.72s/it]
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%|██████████| 32/32 [01:11<00:00,  2.24s/it]
                   all       1000       3792      0.898      0.818      0.902      0.565

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       7/10         0G      1.352     0.8787      1.156         37        640: 100%|██████████| 63/63 [02:52<00:00,  2.74s/it]
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%|██████████| 32/32 [01:20<00:00,  2.51s/it]
                   all       1000       3792      0.921      0.843      0.922      0.576

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       8/10         0G      1.307      0.825       1.13         17        640: 100%|██████████| 63/63 [06:18<00:00,  6.01s/it]
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%|██████████| 32/32 [01:11<00:00,  2.22s/it]
                   all       1000       3792      0.906      0.845      0.924       0.58

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       9/10         0G      1.294     0.7867      1.133         29        640: 100%|██████████| 63/63 [02:51<00:00,  2.72s/it]
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%|██████████| 32/32 [01:10<00:00,  2.21s/it]
                   all       1000       3792      0.922       0.87      0.938      0.611

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
      10/10         0G      1.257     0.7387      1.119         57        640: 100%|██████████| 63/63 [02:51<00:00,  2.72s/it]
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%|██████████| 32/32 [01:18<00:00,  2.47s/it]
                   all       1000       3792      0.933      0.884       0.95       0.62

10 epochs completed in 0.934 hours.
Optimizer stripped from runs\detect\train\weights\last.pt, 6.2MB
Optimizer stripped from runs\detect\train\weights\best.pt, 6.2MB

Validating runs\detect\train\weights\best.pt...
Ultralytics 8.3.49  Python-3.12.7 torch-2.5.0 CPU (12th Gen Intel Core(TM) i7-12700)
Model summary (fused): 168 layers, 3,005,843 parameters, 0 gradients, 8.1 GFLOPs
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%|██████████| 32/32 [00:59<00:00,  1.86s/it]
                   all       1000       3792      0.933      0.884       0.95       0.62
Speed: 1.4ms preprocess, 51.6ms inference, 0.0ms loss, 0.4ms postprocess per image
Results saved to runs\detect\train
Ultralytics 8.3.49  Python-3.12.7 torch-2.5.0 CPU (12th Gen Intel Core(TM) i7-12700)
Model summary (fused): 168 layers, 3,005,843 parameters, 0 gradients, 8.1 GFLOPs
val: Scanning D:\zsp\works\temp\20241218-zsp-pinwei\object-detection-hello\datas\labels.cache...
```
```
1000 images, 76 backgrounds, 0 corrupt: 100%|██████████| 1000/1000 [00:00<?, ?it/s]
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%|██████████| 63/63 [00:54<00:00,  1.17it/s]
                   all       1000       3792      0.933      0.884       0.95       0.62
Speed: 1.1ms preprocess, 46.3ms inference, 0.0ms loss, 0.4ms postprocess per image
Results saved to runs\detect\train2
Ultralytics 8.3.49  Python-3.12.7 torch-2.5.0 CPU (12th Gen Intel Core(TM) i7-12700)

PyTorch: starting from runs\detect\train\weights\best.pt with input shape (1, 3, 640, 640) BCHW and output shape(s) (1, 5, 8400) (6.0 MB)

ONNX: starting export with onnx 1.17.0 opset 19...
ONNX: slimming with onnxslim 0.1.43...
ONNX: export success ✅ 1.0s, saved as runs\detect\train\weights\best.onnx (11.7 MB)

Export complete (1.1s)
Results saved to D:\zsp\works\temp\20241218-zsp-pinwei\object-detection-hello\runs\detect\train\weights
Predict:  yolo predict task=detect model=runs\detect\train\weights\best.onnx imgsz=640
Validate: yolo val task=detect model=runs\detect\train\weights\best.onnx imgsz=640 data=config/train_config.yaml
Visualize: https://netron.app
```

5. Run the trained model and see the effect

(1) Get the trained best.pt ready. Where is the trained model? Read the log: "Results saved to D:\zsp\works\temp\20241218-zsp-pinwei\object-detection-hello\runs\detect\train\weights". I copied it into the test folder.

(2) Copy the test image 1.jpg into the test directory. Note: the mosaic in the picture was added to protect a colleague's privacy; it is not produced by the program.

(3) Write the validation code, scripts/test.py. Again, the code needs no real explanation; it is clear at a glance:

```python
import os
from ultralytics import YOLO
import cv2

PROY_FOLDER = os.getcwd().replace("\\", "/")
INPUT_FOLDER = f"{PROY_FOLDER}/test/"
OUTPUT_FOLDER = f"{PROY_FOLDER}/test_out/"
MODEL_PATH = f"{PROY_FOLDER}/test/best.pt"

if not os.path.exists(OUTPUT_FOLDER):
    os.mkdir(OUTPUT_FOLDER)

model = YOLO(MODEL_PATH)
files = os.listdir(INPUT_FOLDER)

def draw_box(params, frame, threshold=0.2):
    x1, y1, x2, y2, score, class_id = params
    if score > threshold:
        cv2.rectangle(frame, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 4)
        # `results` is the global set in the loop below (kept as in the original)
        text_value = results.names[int(class_id)].upper()
        if "HELMET" in text_value:
            text_value = "yes"
        elif "HEAD" in text_value:
            text_value = "no!!!"
        cv2.putText(frame, text_value, (int(x1), int(y1 - 10)),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.3, (0, 255, 0), 3, cv2.LINE_AA)
    return frame

for file_name in files:
    file_path = INPUT_FOLDER + file_name
    if ".jpg" in file_name:
        image_path_out = OUTPUT_FOLDER + file_name[:-4] + "_out.jpg"
        image = cv2.imread(file_path, cv2.IMREAD_COLOR)
        results = model(image)[0]
        for result in results.boxes.data.tolist():
            image = draw_box(result, image)
        cv2.imwrite(image_path_out, image)

cv2.destroyAllWindows()
```

(4) Inspect the output. (Same note: the mosaic in the image is privacy protection, not a program effect.)

IV. Summary

With this we have completed the whole journey: annotation, writing the training code, and validating the training result, laying the groundwork for a helmet-detection service later. Tuning and steering the training results is still our weak spot; we are beginners, after all, but I don't expect that to remain a problem. Even with a programming background I hit plenty of issues along the way, yet careful reading of the output logs solved them all, and when that failed, pasting the log into an AI always pointed toward a fix.
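As a closing aside: the labeling decision inside draw_box (the score gate plus the HELMET → "yes" / HEAD → "no!!!" mapping) is easy to unit-test without OpenCV or a model. A small sketch of just that logic; the function name is my own and hypothetical:

```python
def label_detection(class_name, score, threshold=0.2):
    """Mirror the decision in draw_box: drop low-confidence boxes,
    then map the class name to the text painted on the frame."""
    if score <= threshold:
        return None                  # below threshold: no box, no text
    name = class_name.upper()
    if "HELMET" in name:
        return "yes"
    if "HEAD" in name:
        return "no!!!"
    return name                      # any other class keeps its name

print(label_detection("helmet", 0.91))  # -> yes
print(label_detection("head", 0.85))    # -> no!!!
print(label_detection("helmet", 0.1))   # -> None
```

Pulling decision logic out of drawing code like this makes it testable before wiring it back into the cv2 calls.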
