🧡💛💚 TensorFlow2 in Practice - tutorial series: full table of contents
Feel free to leave any questions in the comments below.
All of the code in this post was run in Jupyter Notebook.
The companion code resources for this post have been uploaded.
1、Cat vs. Dog Recognition Task
import os
import warnings
warnings.filterwarnings('ignore')
import tensorflow as tf
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.preprocessing.image import ImageDataGenerator

base_dir = './data/cats_and_dogs'
train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'validation')

train_cats_dir = os.path.join(train_dir, 'cats')
train_dogs_dir = os.path.join(train_dir, 'dogs')
validation_cats_dir = os.path.join(validation_dir, 'cats')
validation_dogs_dir = os.path.join(validation_dir, 'dogs')

model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(128, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])

model.compile(loss='binary_crossentropy',
              optimizer=Adam(lr=1e-4),
              metrics=['acc'])

In order, this imports the packages, sets the data paths, builds the model, and configures the loss and optimizer. All of these steps are identical to the earlier post in this series, TensorFlow2实战-系列教程3: 猫狗识别1.
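To sanity-check the architecture before training, you can print a layer-by-layer summary. This is a minimal sketch using only the model defined above; the shapes in the comments assume the 64x64x3 input and the default 'valid' convolution padding shown there.

# Inspect the architecture: each 3x3 convolution (valid padding) trims 2 pixels,
# and each 2x2 max-pooling halves the spatial size.
model.summary()
# Expected feature-map sizes: 62x62x32 -> 31x31x32 -> 29x29x64 -> 14x14x64
# -> 12x12x128 -> 6x6x128, so Flatten produces 6*6*128 = 4608 features.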
2、Data Augmentation
train_datagen = ImageDataGenerator(
    rescale=1./255,
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest')

test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
    train_dir,
    target_size=(64, 64),
    batch_size=20,
    class_mode='binary')

validation_generator = test_datagen.flow_from_directory(
    validation_dir,
    target_size=(64, 64),
    batch_size=20,
    class_mode='binary')

history = model.fit_generator(
    train_generator,
    steps_per_epoch=100,  # 2000 images = batch_size * steps
    epochs=100,
    validation_data=validation_generator,
    validation_steps=50,  # 1000 images = batch_size * steps
    verbose=2)

train_datagen:
The training data is augmented in several ways: rescaling, random rotation, width/height shifts, shear, zoom, horizontal flipping, and nearest-neighbor filling of the pixels created by these transforms. shear_range=0.2 means each image is randomly sheared within a range of -0.2 to 0.2 radians.
test_datagen:
The validation data is not augmented; only the rescaling (normalization) step is applied.
train_generator:
Loads training images from the train_dir directory and applies the augmentation defined above. target_size=(64, 64) resizes every image to 64x64 pixels, batch_size=20 processes 20 images per batch, and class_mode='binary' because this is a binary classification task.
validation_generator:
Loads validation images from the validation_dir directory and applies only the rescaling.
history:
This call starts training. Note that fit_generator was deprecated after TensorFlow 2.2; the fit method is recommended instead (see the sketch below). validation_data=validation_generator specifies the validation data generator, and verbose=2 controls how much detail is printed during training.
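Since fit_generator is deprecated, the same training run can be written with fit, which accepts generators directly in TensorFlow 2.x. A minimal sketch using the objects defined above; the arguments simply mirror the original fit_generator call.

history = model.fit(
    train_generator,
    steps_per_epoch=100,   # 2000 training images = batch_size * steps
    epochs=100,
    validation_data=validation_generator,
    validation_steps=50,   # 1000 validation images = batch_size * steps
    verbose=2)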
3、Visualizing the Results
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']

epochs = range(len(acc))

plt.plot(epochs, acc, 'b', label='Training accuracy')
plt.plot(epochs, val_acc, 'r', label='Validation accuracy')
plt.title('Training and validation accuracy')
plt.legend()

plt.figure()

plt.plot(epochs, loss, 'b', label='Training Loss')
plt.plot(epochs, val_loss, 'r', label='Validation Loss')
plt.title('Training and validation loss')
plt.legend()

plt.show()

Clearly, the model trained with data augmentation performs noticeably better than the original one.
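One caveat: the keys in history.history depend on how the metric was named in compile(). Here metrics=['acc'] was used, so the keys are 'acc' and 'val_acc'; with metrics=['accuracy'] they would be 'accuracy' and 'val_accuracy'. If the plotting code raises a KeyError, a quick check is:

# List the metric keys actually recorded for this run before indexing into them
print(history.history.keys())
# e.g. dict_keys(['loss', 'acc', 'val_loss', 'val_acc']) for this compile configuration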
4、Adding Dropout
Dropout randomly "kills" (zeroes out) a specified proportion of the neurons in a layer. Here, all we need to do is add a Dropout layer after the fully connected layer when building the network, as the sketch below illustrates.
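To make the effect concrete, here is a minimal standalone sketch (not part of the training script) showing that Dropout(0.5) zeroes roughly half of the activations during training, rescales the survivors, and does nothing at inference time:

import tensorflow as tf

layer = tf.keras.layers.Dropout(0.5)
x = tf.ones((1, 8))
print(layer(x, training=True))   # roughly half the entries zeroed, the rest scaled by 1/(1-0.5) = 2.0
print(layer(x, training=False))  # unchanged: dropout is disabled at inference time

The full training script, identical to the earlier one except for the extra Dropout layer, follows.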
import os
import warnings
warnings.filterwarnings('ignore')
import tensorflow as tf
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.preprocessing.image import ImageDataGenerator
base_dir = './data/cats_and_dogs'
train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'validation')

train_cats_dir = os.path.join(train_dir, 'cats')
train_dogs_dir = os.path.join(train_dir, 'dogs')
validation_cats_dir = os.path.join(validation_dir, 'cats')
validation_dogs_dir = os.path.join(validation_dir, 'dogs')

model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(128, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation='sigmoid')
])

model.compile(loss='binary_crossentropy',
              optimizer=Adam(lr=1e-4),
              metrics=['acc'])

train_datagen = ImageDataGenerator(
    rescale=1./255,
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest')

test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
    train_dir,
    target_size=(64, 64),
    batch_size=20,
    class_mode='binary')

validation_generator = test_datagen.flow_from_directory(
    validation_dir,
    target_size=(64, 64),
    batch_size=20,
    class_mode='binary')

history = model.fit_generator(
    train_generator,
    steps_per_epoch=100,  # 2000 images = batch_size * steps
    epochs=100,
    validation_data=validation_generator,
    validation_steps=50,  # 1000 images = batch_size * steps
    verbose=2)

Epoch 100/100
100/100 - 3s - loss: 0.4145 - acc: 0.8145 - val_loss: 0.4269 - val_acc: 0.7830 - 3s/epoch - 33ms/step

With Dropout added, the results improve a little further.
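To double-check the final number rather than reading it off the last training log line, the trained model can be evaluated directly on the validation generator. A minimal sketch using the objects defined above; 50 steps of 20 images covers the 1000 validation images.

# Evaluate the trained model on the validation set; returns [loss, acc] per the compile() metrics
val_loss, val_acc = model.evaluate(validation_generator, steps=50, verbose=0)
print(f'validation loss: {val_loss:.4f}, validation accuracy: {val_acc:.4f}')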