The easiest way to understand a generative adversarial network (GAN) is through a simple analogy:
Suppose there is a shop that buys certain kinds of wine from customers and later resells them. However, some nefarious customers sell counterfeit wine to make money, so the shop owner has to be able to tell fake wine from the real thing.
As you can imagine, the forger will initially make many mistakes when trying to sell fake wine, and the shop owner will easily recognize that the wine is not authentic. After these failures, the forger keeps experimenting with different techniques to imitate real wine, and some attempts will eventually succeed. Now that the forger knows certain techniques can get past the shop owner's checks, he can start to further improve his counterfeit wine based on them.
At the same time, the shop owner may get feedback from other shop owners or wine experts that some of the wines she carries are not genuine. This means she has to keep improving how she authenticates wine. The forger's goal is to produce wine that is indistinguishable from the real thing, while the shop owner's goal is to accurately tell whether a wine is real or fake.
This back-and-forth competition is the main idea behind generative adversarial networks (GANs).
Using the example above, we can sketch out the architecture of a GAN.
There are two main components in a GAN: the generator and the discriminator. The shop owner in the analogy is known as the discriminator network, and it is usually a convolutional neural network (since GANs are mainly used for image tasks) that assigns a probability that an image is real.
The forger is known as the generator network, and it is typically also a convolutional neural network (with transposed-convolution layers). This network takes a noise vector as input and outputs an image. As the generator network is trained, it learns which regions of an image to improve or change so that the discriminator has a hard time distinguishing its generated images from the real ones.
The generator network keeps producing images that are ever closer to real images, while the discriminator network tries to tell real and fake images apart. The ultimate goal is to end up with a generator network that produces images indistinguishable from real ones. (For simplicity, the example below uses fully connected layers rather than convolutions.)
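Formally, this competition between the generator G and the discriminator D can be written as the minimax objective from the original GAN paper (Goodfellow et al., 2014):

$$\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$

The discriminator tries to maximize this value by classifying real and generated samples correctly, while the generator tries to minimize it by fooling the discriminator. Before diving into the code, make sure the following libraries are installed: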
- keras
- matplotlib
- tensorflow
- tqdm
import os
# Let Keras know that we are using TensorFlow as our backend engine.
# This must be set before Keras is imported, or it has no effect.
os.environ["KERAS_BACKEND"] = "tensorflow"

import numpy as np
import matplotlib.pyplot as plt
from tqdm import tqdm
from keras.layers import Input, Dense, Dropout, LeakyReLU
from keras.models import Model, Sequential
from keras.datasets import mnist
from keras.optimizers import Adam
from keras import initializers
# To make sure that we can reproduce the experiment and get the same results
np.random.seed(10)
# The dimension of our random noise vector.
random_dim = 100
An MNIST digits example
def load_mnist_data():
    # load the data
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    # normalize our inputs to be in the range [-1, 1]
    x_train = (x_train.astype(np.float32) - 127.5) / 127.5
    # convert x_train from a shape of (60000, 28, 28) to (60000, 784)
    # so we have 784 columns per row
    x_train = x_train.reshape(60000, 784)
    return (x_train, y_train, x_test, y_test)
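As a quick sanity check (this snippet is illustrative and not part of the original tutorial), you can verify that the loaded data has the shape and value range the rest of the code expects:

# Illustrative check; assumes load_mnist_data() as defined above
x_train, y_train, x_test, y_test = load_mnist_data()
print(x_train.shape)                 # (60000, 784)
print(x_train.min(), x_train.max())  # -1.0 1.0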
def get_optimizer():
    return Adam(lr=0.0002, beta_1=0.5)
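The learning rate of 0.0002 and the beta_1 value of 0.5 are the settings recommended in the DCGAN paper (Radford et al., 2015), where the default Adam momentum of 0.9 was found to cause oscillation and instability during GAN training.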
def get_generator(optimizer):
    generator = Sequential()
    generator.add(Dense(256, input_dim=random_dim, kernel_initializer=initializers.RandomNormal(stddev=0.02)))
    generator.add(LeakyReLU(0.2))
    generator.add(Dense(512))
    generator.add(LeakyReLU(0.2))
    generator.add(Dense(1024))
    generator.add(LeakyReLU(0.2))
    generator.add(Dense(784, activation='tanh'))
    generator.compile(loss='binary_crossentropy', optimizer=optimizer)
    return generator
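Even before training, you can sanity-check the generator's plumbing: it should turn a batch of 100-dimensional noise vectors into a batch of flattened 28x28 images. A minimal illustrative sketch:

# Illustrative: an untrained generator already produces image-shaped output
g = get_generator(get_optimizer())
noise = np.random.normal(0, 1, size=[4, random_dim])
fake_images = g.predict(noise)
print(fake_images.shape)  # (4, 784), i.e. four flattened 28x28 images

Note that the tanh activation on the output layer keeps the generated pixel values in [-1, 1], matching the range we normalized the real MNIST images to.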
def get_discriminator(optimizer):
    discriminator = Sequential()
    discriminator.add(Dense(1024, input_dim=784, kernel_initializer=initializers.RandomNormal(stddev=0.02)))
    discriminator.add(LeakyReLU(0.2))
    discriminator.add(Dropout(0.3))
    discriminator.add(Dense(512))
    discriminator.add(LeakyReLU(0.2))
    discriminator.add(Dropout(0.3))
    discriminator.add(Dense(256))
    discriminator.add(LeakyReLU(0.2))
    discriminator.add(Dropout(0.3))
    discriminator.add(Dense(1, activation='sigmoid'))
    discriminator.compile(loss='binary_crossentropy', optimizer=optimizer)
    return discriminator
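The discriminator mirrors the generator: it maps a flattened 784-pixel image to a single probability of being real, and the Dropout layers regularize it so it cannot simply memorize the training images. As an illustrative check (assuming x_train from load_mnist_data() is in scope):

# Illustrative: score a few real images with the untrained discriminator
d = get_discriminator(get_optimizer())
scores = d.predict(x_train[:4])
print(scores.shape)  # (4, 1), one "probability of being real" per image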
def get_gan_network(discriminator, random_dim, generator, optimizer):
    # We initially set trainable to False since we only want to train either the
    # generator or the discriminator at a time
    discriminator.trainable = False
    # gan input (noise) will be 100-dimensional vectors
    gan_input = Input(shape=(random_dim,))
    # the output of the generator (an image)
    x = generator(gan_input)
    # get the output of the discriminator (probability that the image is real)
    gan_output = discriminator(x)
    gan = Model(inputs=gan_input, outputs=gan_output)
    gan.compile(loss='binary_crossentropy', optimizer=optimizer)
    return gan
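One subtlety worth knowing: in Keras, a layer's trainable attribute is read at compile time. The discriminator was compiled while its weights were trainable, whereas gan is compiled after discriminator.trainable is set to False. As a result, gan.train_on_batch updates only the generator's weights, while discriminator.train_on_batch still updates the discriminator.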
def plot_generated_images(epoch, generator, examples=100, dim=(10, 10), figsize=(10, 10)):
    noise = np.random.normal(0, 1, size=[examples, random_dim])
    generated_images = generator.predict(noise)
    generated_images = generated_images.reshape(examples, 28, 28)

    plt.figure(figsize=figsize)
    for i in range(generated_images.shape[0]):
        plt.subplot(dim[0], dim[1], i + 1)
        plt.imshow(generated_images[i], interpolation='nearest', cmap='gray_r')
        plt.axis('off')
    plt.tight_layout()
    plt.savefig('gan_generated_image_epoch_%d.png' % epoch)
    plt.close()  # free the figure so memory does not grow over hundreds of epochs
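If you also want to reuse the generator later (for example, to sample new digits without retraining), you could add a line like the following at the end of plot_generated_images. This is an optional, hypothetical addition, not part of the original code:

    # Optional: also persist the generator each time we plot (hypothetical addition)
    generator.save('gan_generator_epoch_%d.h5' % epoch)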
def train(epochs=1, batch_size=128):
    # Get the training and testing data
    x_train, y_train, x_test, y_test = load_mnist_data()
    # Split the training data into batches of size batch_size
    batch_count = x_train.shape[0] // batch_size

    # Build our GAN network
    adam = get_optimizer()
    generator = get_generator(adam)
    discriminator = get_discriminator(adam)
    gan = get_gan_network(discriminator, random_dim, generator, adam)

    for e in range(1, epochs + 1):
        print('-' * 15, 'Epoch %d' % e, '-' * 15)
        for _ in tqdm(range(batch_count)):
            # Get a random set of input noise and images
            noise = np.random.normal(0, 1, size=[batch_size, random_dim])
            image_batch = x_train[np.random.randint(0, x_train.shape[0], size=batch_size)]

            # Generate fake MNIST images
            generated_images = generator.predict(noise)
            X = np.concatenate([image_batch, generated_images])

            # Labels for generated and real data
            y_dis = np.zeros(2 * batch_size)
            # One-sided label smoothing: real images get a target of 0.9 instead of 1.0
            y_dis[:batch_size] = 0.9

            # Train discriminator
            discriminator.trainable = True
            discriminator.train_on_batch(X, y_dis)

            # Train generator: it tries to make the frozen discriminator
            # label its fake images as real (target of 1)
            noise = np.random.normal(0, 1, size=[batch_size, random_dim])
            y_gen = np.ones(batch_size)
            discriminator.trainable = False
            gan.train_on_batch(noise, y_gen)

        if e == 1 or e % 20 == 0:
            plot_generated_images(e, generator)
if __name__ == '__main__':
    train(400, 128)
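Running this trains the GAN for 400 epochs and, thanks to the check at the end of train(), saves a 10x10 grid of generated digits to gan_generated_image_epoch_N.png after the first epoch and every 20th epoch thereafter, so you can watch the samples sharpen over time.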
Congratulations, you have reached the end of this tutorial, in which you learned the basics of generative adversarial networks (GANs) in an intuitive way! You also implemented the model with the help of the Keras library.