Understanding the basics of generative adversarial networks (GANs)

Since the emergence of the generative adversarial network (GAN), it has been widely used in image processing. Still, many people are unfamiliar with GANs, or worry that they cannot learn them without a strong mathematics background. In this article, Google researcher Stefan Hosein provides an entry-level GAN tutorial for beginners: even without deep mathematical knowledge, you can understand what a generative adversarial network (GAN) is.

An analogy

One of the simplest ways to understand GANs is through an analogy:

Suppose there is a shop that buys certain kinds of wine from its customers and then resells them.

However, some unscrupulous customers sell counterfeit wine to make money. In this case, the shop owner must be able to tell fake wine from the authentic article.

As you can imagine, at the very beginning the counterfeiters make many mistakes when trying to sell fake wine, and the owner easily spots that it is not authentic. After these failures, the counterfeiters keep trying different techniques to imitate real wine, and some eventually succeed. Once a counterfeiter knows that certain techniques can get past the shopkeeper's inspections, he can start refining his fake wine further on that basis.

At the same time, the shop owner may hear from other shop owners or wine experts that some of the wine she sells is not genuine. This means she must improve her methods of discrimination to determine whether a wine is fake or authentic. The counterfeiter's goal is to create wine that is indistinguishable from the real thing; the shopkeeper's goal is to tell accurately whether a wine is genuine.

This cyclical competition is the main idea behind GANs.

The components of a generative adversarial network

With the above example in mind, we can describe the architecture of a GAN.

A GAN has two main components: the generator and the discriminator. In the example above, the shop owner plays the role of the discriminator network, usually a convolutional neural network (since GANs are mainly used for image tasks), which assigns a probability that an image is real.

The counterfeiter is the generative network, usually also a convolutional neural network (with deconvolution layers). This network takes a noise vector as input and outputs an image. As the generative network is trained, it learns which areas of its images to improve or change so that the discriminator has a harder time distinguishing its output from real images.

The generative network keeps producing images that look closer and closer to real ones, while the discriminative network tries to tell the real images from the fake ones. The ultimate goal is a generative network that can produce images indistinguishable from real images.

Writing a simple generative adversarial network with Keras

Now that you understand what GANs are and know their main components, we can start writing some very simple code. You will use Keras; if you are not familiar with this Python library, you should read this tutorial before proceeding. This tutorial is based on an easy-to-understand GAN implementation.

The first thing you need to do is install the following packages via pip:

- keras

- matplotlib

- tensorflow

- tqdm

You will use matplotlib for plotting, tensorflow as the Keras backend, and tqdm to show a fancy progress bar for each epoch (iteration).
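For example, all four can usually be installed with a single command (exact package names may differ slightly depending on your Python setup):

pip install keras matplotlib tensorflow tqdm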

The next step is to create a Python script. First, import all the modules and functions you will use; each will be explained as it is used.

import os
import numpy as np
import matplotlib.pyplot as plt
from tqdm import tqdm

from keras.layers import Input
from keras.models import Model, Sequential
from keras.layers.core import Dense, Dropout
from keras.layers.advanced_activations import LeakyReLU
from keras.datasets import mnist
from keras.optimizers import Adam
from keras import initializers

You now need to set some variables:

# Let Keras know that we are using tensorflow as our backend engine
os.environ["KERAS_BACKEND"] = "tensorflow"

# To make sure that we can reproduce the experiment and get the same results
np.random.seed(10)

# The dimension of our random noise vector
random_dim = 100

Before building the discriminator and generator, you should first collect and preprocess the data. You will use the popular MNIST dataset, which contains images of single digits from 0 to 9.

A sample of MNIST digits

def load_minst_data():
    # load the data
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    # normalize our inputs to be in the range [-1, 1]
    x_train = (x_train.astype(np.float32) - 127.5) / 127.5
    # convert x_train from a shape of (60000, 28, 28) to (60000, 784)
    # so we have 784 columns per row
    x_train = x_train.reshape(60000, 784)
    return (x_train, y_train, x_test, y_test)

Note that mnist.load_data() is part of Keras and allows you to easily import the MNIST dataset into your workspace.
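If you want to verify what load_minst_data() returns, a quick optional check (not part of the original tutorial) might look like this:

x_train, y_train, x_test, y_test = load_minst_data()
print(x_train.shape)                  # (60000, 784) after the reshape
print(x_train.min(), x_train.max())   # approximately -1.0 and 1.0 after normalization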

Now you can create your generator and discriminator networks. For both, you will use the Adam optimizer and build a neural network with three hidden layers, each using Leaky ReLU as its activation function. For the discriminator, you also add dropout layers to improve its robustness on unseen images. Note that the generator's final layer uses a tanh activation, so its outputs lie in [-1, 1], the same range we normalized the MNIST images to.

def get_optimizer():
    return Adam(lr=0.0002, beta_1=0.5)

def get_generator(optimizer):
    generator = Sequential()
    generator.add(Dense(256, input_dim=random_dim,
                        kernel_initializer=initializers.RandomNormal(stddev=0.02)))
    generator.add(LeakyReLU(0.2))

    generator.add(Dense(512))
    generator.add(LeakyReLU(0.2))

    generator.add(Dense(1024))
    generator.add(LeakyReLU(0.2))

    generator.add(Dense(784, activation='tanh'))
    generator.compile(loss='binary_crossentropy', optimizer=optimizer)
    return generator

def get_discriminator(optimizer):
    discriminator = Sequential()
    discriminator.add(Dense(1024, input_dim=784,
                            kernel_initializer=initializers.RandomNormal(stddev=0.02)))
    discriminator.add(LeakyReLU(0.2))
    discriminator.add(Dropout(0.3))

    discriminator.add(Dense(512))
    discriminator.add(LeakyReLU(0.2))
    discriminator.add(Dropout(0.3))

    discriminator.add(Dense(256))
    discriminator.add(LeakyReLU(0.2))
    discriminator.add(Dropout(0.3))

    discriminator.add(Dense(1, activation='sigmoid'))
    discriminator.compile(loss='binary_crossentropy', optimizer=optimizer)
    return discriminator
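If you want to sanity-check the two architectures before wiring them together, you can build them and print their layer summaries (an optional check, not part of the original tutorial):

adam = get_optimizer()
get_generator(adam).summary()      # layer widths: 100 -> 256 -> 512 -> 1024 -> 784
get_discriminator(adam).summary()  # layer widths: 784 -> 1024 -> 512 -> 256 -> 1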

Next, you need to combine the generator and discriminator!

def get_gan_network(discriminator, random_dim, generator, optimizer):
    # We initially set trainable to False since we only want to train
    # either the generator or the discriminator at a time
    discriminator.trainable = False
    # gan input (noise) will be 100-dimensional vectors
    gan_input = Input(shape=(random_dim,))
    # the output of the generator (an image)
    x = generator(gan_input)
    # get the output of the discriminator (probability that the image is real)
    gan_output = discriminator(x)
    gan = Model(inputs=gan_input, outputs=gan_output)
    gan.compile(loss='binary_crossentropy', optimizer=optimizer)
    return gan

For the sake of completeness, you can also create a function that saves your generated images every 20 epochs. Since this is not the core content of this tutorial, you do not need to fully understand it.

def plot_generated_images(epoch, generator, examples=100, dim=(10, 10), figsize=(10, 10)):
    noise = np.random.normal(0, 1, size=[examples, random_dim])
    generated_images = generator.predict(noise)
    generated_images = generated_images.reshape(examples, 28, 28)

    plt.figure(figsize=figsize)
    for i in range(generated_images.shape[0]):
        plt.subplot(dim[0], dim[1], i + 1)
        plt.imshow(generated_images[i], interpolation='nearest', cmap='gray_r')
        plt.axis('off')
    plt.tight_layout()
    plt.savefig('gan_generated_image_epoch_%d.png' % epoch)

You have now coded most of your network. All that remains is to train it and look at the images you created.

def train(epochs=1, batch_size=128):
    # Get the training and testing data
    x_train, y_train, x_test, y_test = load_minst_data()
    # Split the training data into batches of size 128
    batch_count = x_train.shape[0] // batch_size

    # Build our GAN network
    adam = get_optimizer()
    generator = get_generator(adam)
    discriminator = get_discriminator(adam)
    gan = get_gan_network(discriminator, random_dim, generator, adam)

    for e in range(1, epochs + 1):
        print('-' * 15, 'Epoch %d' % e, '-' * 15)
        for _ in tqdm(range(batch_count)):
            # Get a random set of input noise and images
            noise = np.random.normal(0, 1, size=[batch_size, random_dim])
            image_batch = x_train[np.random.randint(0, x_train.shape[0], size=batch_size)]

            # Generate fake MNIST images
            generated_images = generator.predict(noise)
            X = np.concatenate([image_batch, generated_images])

            # Labels for generated and real data
            y_dis = np.zeros(2 * batch_size)
            # One-sided label smoothing
            y_dis[:batch_size] = 0.9

            # Train discriminator
            discriminator.trainable = True
            discriminator.train_on_batch(X, y_dis)

            # Train generator
            noise = np.random.normal(0, 1, size=[batch_size, random_dim])
            y_gen = np.ones(batch_size)
            discriminator.trainable = False
            gan.train_on_batch(noise, y_gen)

        if e == 1 or e % 20 == 0:
            plot_generated_images(e, generator)

if __name__ == '__main__':
    train(400, 128)
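Once training has finished, you can also sample brand-new digits yourself. Here is a minimal sketch, assuming you modify train() to end with return generator:

# Assumes train() was changed to `return generator` at the end
generator = train(400, 128)
noise = np.random.normal(0, 1, size=[1, random_dim])
digit = generator.predict(noise).reshape(28, 28)
plt.imshow(digit, interpolation='nearest', cmap='gray_r')
plt.axis('off')
plt.show()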

After training for 400 epochs, you can view the generated images. Looking at the images produced after the first epoch, you can see that they have no real structure; after 40 epochs, the digits start to take shape; and after 400 epochs, most digits are clearly recognizable, apart from a few that remain hard to identify.

Results after 1 epoch (top) | Results after 40 epochs (middle) | Results after 400 epochs (bottom)

It takes about 2 minutes for this code to run once on a CPU, which is why we chose it. You can try training for more epochs and adding more (and different kinds of) layers to the generator and discriminator. Of course, with a deeper and more complex architecture, the runtime will grow accordingly if you use only a CPU. But don't let that stop you from trying!

At this point, you have completed the tutorial. You have learned the basics of generative adversarial networks (GANs) in an intuitive way, and you implemented your first GAN model with the help of the Keras library.
