TensorFlow Tutorial: An Essential Deep Learning Language? Fri, 02 Dec 2022 08:00:00 +0000



Why does it make sense to check out a TensorFlow tutorial today? Well, artificial intelligence (AI) is the future. Recently, we’ve seen the emergence of the first driverless cars, and developments such as Project InnerEye have huge implications for healthcare. But it’s not all about the technology of tomorrow.

If you look closely you’ll find evidence of AI in your day-to-day life. AI is everywhere, from smartwatches and automated lighting to film and product recommendations. AI is changing businesses too. We’ve seen the introduction of the workflow orchestrator. This has allowed for the coordination and management of systems across multiple cloud vendors and domains.  

The AI industry is expected to grow to $1,394.30 billion by 2029. If you want to capitalize on this growth, innovation is key. Underpinning many recent developments is machine learning, a branch of AI that allows systems to adapt. This article will look at TensorFlow, and how it can help you to build your own machine learning models.

With TensorFlow on your side, you can be at the forefront of AI innovation. So, without further ado, let’s learn more about the framework.

What is TensorFlow?

TensorFlow was developed by the Google Brain team and released as open source in 2015. It’s an end-to-end platform designed to support innovations in machine learning. At the core of TensorFlow is usability; in fact, it’s often viewed as one of the easiest development platforms to use.

One of the reasons for this is that it includes readily available APIs. These save users the time-consuming task of hand-writing certain sections of code: you’ll spend less time training a model and find fewer errors in the program.

TensorFlow integrates with multiple programming languages, including popular options such as C++ and Python, making development even quicker. It is also extremely scalable: once code has been written, it can be run on either CPU or GPU.

Take a look at Azure Databricks TensorFlow for even deeper insights into TensorFlow.


Machine Learning Vs Deep Learning

As we’ve explained, machine learning, an AI technology that can adapt without explicit human intervention, has powered a great deal of recent AI development. Deep learning is a subset of machine learning: essentially, a more complex version of it.

In this system, the structure of AI is based on the human brain. Like the human mind, it can adapt to different circumstances and make independent decisions. Another difference is that deep learning relies on a large amount of unstructured data, perhaps stored in virtual servers. Unstructured data can take many forms, including: 

  • Images. 
  • Videos. 
  • Text. 
  • Audio.  

This data is processed and can be used to power many different processes, ranging from data analytics to mobile app testing automation.

What is MLOps? 

MLOps is a term that you will see often in relation to TensorFlow. It stands for Machine Learning Operations. MLOps is at the heart of all machine learning engineering. It usually involves multiple groups of data scientists and engineers.

MLOps takes models to production, and ensures that they are properly maintained and monitored. It’s important to learn MLOps before beginning any operations within TensorFlow.

Artificial Neural Networks 

Neural networks mimic brain cell connections.

As previously explained, deep learning is intended to mirror the human brain. To accomplish this, a complex neural network is created. Similar to a biological network, nodes act as neurons, links act as axons, and a perceptron’s input connections act as dendrites.

You are almost certainly going to have to define, save, and restore a model if you want to do machine learning.

A model is an abstract form that comprises:

  • A function whose purpose is to compute something on tensors.
  • Variables that are updated in response to learning.
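
That definition can be sketched in a few lines of framework-agnostic Python. NumPy stands in for a tensor library here, and the class and method names are illustrative, not TensorFlow API:

```python
import numpy as np

class LinearModel:
    """A model: a function computed on tensors, plus trainable variables."""

    def __init__(self, n_inputs):
        # Variables: updated in response to learning.
        self.w = np.zeros(n_inputs)
        self.b = 0.0

    def __call__(self, x):
        # The function: computes something on tensors.
        return x @ self.w + self.b

    def update(self, grad_w, grad_b, lr=0.01):
        # A learning step nudges the variables against the gradient.
        self.w -= lr * grad_w
        self.b -= lr * grad_b

model = LinearModel(3)
print(model(np.array([1.0, 2.0, 3.0])))  # 0.0 before any training
```

Real frameworks add automatic differentiation and serialization on top, but the two ingredients are exactly these.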

In most models, the neural network is divided into layers. Layers have defined mathematical structures that can be reused, and they contain trainable variables. Let’s look at each type of layer.

Input Layer 

Within the input layer, neurons receive data from the surrounding environment. No computation is performed here; the layer simply passes the data on.

Hidden Layer 

All processing takes place in the hidden layers (which make up the deep part of a deep neural network). Here, features are extracted and data is converted into useful information.

Output Layer 

Once the processing of data is complete, the result is transferred to the output layer. Here, the final values are computed and sent to the outside environment.
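
The three layers above can be traced in a single NumPy forward pass. The sizes and random weights below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=4)            # input layer: receives data, no computation
W1 = rng.normal(size=(4, 8))
hidden = np.maximum(0.0, x @ W1)  # hidden layer: extracts features (ReLU)
W2 = rng.normal(size=(8, 1))
output = hidden @ W2              # output layer: the result sent outside
print(output.shape)               # (1,)
```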

The Components of TensorFlow

There are two major components to TensorFlow: the tensor and the graph framework. Let’s look at both in more detail.

Tensor

As the name suggests, the tensor is the main component in TensorFlow; in other words, it is the foundation of the whole framework. A tensor is a multi-dimensional array, generalizing vectors and matrices, that can represent all forms of data.

A tensor holds values of a single, uniform type, arranged in a partly or completely known shape (the dimensionality of the array or matrix).

Images are great examples of the application of a tensor. When a Red Green Blue (RGB) image is processed, it is a 3-D tensor: it has a layer for each color, as well as dimensions for height and width.
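
In code, the image example looks like this: a 100×200 RGB image is a rank-3 tensor, with one dimension each for height, width, and color channel.

```python
import numpy as np

image = np.zeros((100, 200, 3), dtype=np.uint8)  # height, width, R/G/B layers
print(image.ndim)   # 3  -> a 3-D tensor
print(image.shape)  # (100, 200, 3)
```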

Each operation that you carry out in TensorFlow is called an OP node. Nodes are connected, and the edges between nodes are tensors, which can be populated with data. TensorFlow provides a graph to display each operation; graphs allow you to see the connections between different nodes.

Array

This is an arrangement or a series of elements such as symbols, numbers, or expressions. Arrays can have multiple dimensions; a matrix, for example, is an array with 2 dimensions.

Vector

Vectors are mathematical objects that have a direction and a magnitude. A vector is used to locate the position of one point in space relative to another point. 

Graph Framework

TensorFlow uses a graph framework for displaying operations.

As shown above, TensorFlow uses a graph framework to display operations. Graphs are a central concept of TensorFlow. When training a new model, the graph describes the different computations that are taking place.

Graphs within TensorFlow are extremely portable and can be accessed via mobile operating systems. They can also be saved so that they can be run again in the future. 

Computations in a graph take place by connecting tensors. Each operation is a node, and each node is connected to others by edges, which are tensors: the node drives the operation and generates an endpoint, while the edge describes the input and output relationships between nodes.

More on Graphs

For this tutorial, we are using TensorFlow 2.3. Now back to graphs! Graphs are data structures that contain a set of tf.Operation objects. As already stated, graphs are of central importance: they are very portable, since they can be saved, run, and restored without the original Python code.

The computational structure is defined using a TensorFlow graph. TensorFlow graphs have three main kinds of object used in operations. The first of these is the constant, which is set to a given value that never changes. Then there is the variable, which outputs its current value; this enables variables to retain their value over multiple executions of a graph. Lastly, we have the placeholder, which feeds future values into the graph at runtime and is useful when a graph requires external data. (Note that placeholders belong to the TensorFlow 1.x graph API; in TensorFlow 2.x the same role is played by the arguments of a tf.function.)
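
A quick sketch of these objects under TensorFlow 2.x, where eager execution is the default; the placeholder is shown in its 2.x guise, a tf.function argument:

```python
import tensorflow as tf

c = tf.constant(3.0)   # constant: fixed at a given value, never changed
v = tf.Variable(1.0)   # variable: outputs its current value...
v.assign_add(c)        # ...and retains updated state across executions
print(v.numpy())       # 4.0

# Placeholders only exist in the TF 1.x graph API (tf.compat.v1.placeholder).
# In TF 2.x, feeding external values in at runtime is done through the
# arguments of a tf.function:
@tf.function
def double(x):
    return 2.0 * x

print(double(tf.constant(5.0)).numpy())  # 10.0
```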

The Advantages of Tensorflow 

Flexibility

One of the biggest advantages of TensorFlow is its flexibility. Regardless of whether you’re using a desktop computer, a laptop, or operating from the cloud, TensorFlow is accessible. Recently TensorFlow Lite has been introduced, further increasing flexibility. This version of the framework targets mobile applications, and can be used on both Android and iOS.

Integration

It’s simple to integrate TensorFlow with many different AI programs, which allows it to be used alongside other deep learning applications. In addition, as already mentioned, TensorFlow was developed by Google, which means you gain access to Google’s vast library of resources.

Building Computational Graphs in TensorFlow 

To create any TensorFlow program, you need to carry out two steps: create a computational graph, then run it.

What is a Computational Graph? 

A computational graph is a graph that represents equational data. Each graph displays a mathematical expression (written, for example, in postfix, infix, or prefix notation). The graph contains multiple nodes, each holding a variable, an operation, or an equation.

Graph computation is useful in TensorFlow because it can be visualized via TensorBoard. You can take a closer look at a graph and generate insights.
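
To see the idea without any TensorFlow machinery, here is a toy computational graph for the expression (a + b) * 2, with nodes for variables, operations, and constants; the tuple encoding is purely illustrative:

```python
# Each node is a tuple: ("const", value), ("var", name),
# or an operation whose children are the incoming edges.
def evaluate(node, env):
    kind = node[0]
    if kind == "const":
        return node[1]
    if kind == "var":
        return env[node[1]]
    if kind == "add":
        return evaluate(node[1], env) + evaluate(node[2], env)
    if kind == "mul":
        return evaluate(node[1], env) * evaluate(node[2], env)
    raise ValueError(f"unknown node kind: {kind}")

# (a + b) * 2 -- the nested tuples are the edges connecting the OP nodes
graph = ("mul", ("add", ("var", "a"), ("var", "b")), ("const", 2))
print(evaluate(graph, {"a": 3, "b": 4}))  # 14
```

TensorFlow's graphs work on the same principle, with tensors flowing along the edges and the runtime free to schedule independent nodes in parallel.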

Advantages of Computational Graphs 

Better Performance  

A computational graph acts as a useful way of visualizing the calculations that take place within TensorFlow. Operations that have been assigned to different nodes can be performed in parallel, which allows for better computational performance.

Because of their node-link structure, computational graphs can store a great deal of information. In addition, both network-based and relational problems are naturally represented in this format.

The Right System 

Before you can carry out any deep learning within TensorFlow, you need the right system. Without sufficient computing power, your system will fail at any deep learning task. Your machine will need the following specs if it is to support deep learning:

  • Windows 10 or Ubuntu OS. 
  • NVIDIA GeForce GTX 960, or higher. 
  • Intel Core i3 Processor. 
  • 8 GB of RAM. 

If you have the right system, you can begin running deep learning algorithms.

Should You Choose PyTorch or TensorFlow?

What is PyTorch?

Of course, TensorFlow is not your only choice of deep learning framework. Another option that is growing in popularity is PyTorch. Similar to TensorFlow, PyTorch is an open-source platform that can be used for deep learning training.

A sign of its popularity is that the platform now has an active developer community. We’ve seen the creation of libraries optimized for a variety of use cases, including computer vision and NLP.

Again, like TensorFlow, one of the major benefits of PyTorch is its ease of use: it can run and iterate code extremely quickly. As the name suggests, the platform is most commonly used alongside Python. It can, however, also interface with other languages, including C++ and CUDA, and it integrates with Python-based data science tools.

PyTorch provides an intuitive framework, allowing for the creation of computational graphs. These can be smoothly iterated whilst running. This makes the language ideal for powering machine learning even if you are unsure of your resource requirements. 

How is it Different from TensorFlow?

There are a variety of differences between PyTorch and TensorFlow. Let’s look at two major differences.

Graph Definition 

TensorFlow allows a user to create a stateful dataflow graph before running the model. PyTorch, on the other hand, makes use of dynamic graphs.

This makes PyTorch a favorite within the research community, allowing for the easy creation of bespoke models. Because the graph is dynamic, it is built while the model runs: the computation graph is generated at each execution step, and a user can make changes to the graph when needed.

Visualization

TensorFlow allows visualization using TensorBoard, which helps users better understand deep learning models through graphs, distributions, scalars, and histograms.

PyTorch uses a system called Visdom, a simple tool that allows easy visualization of data. This option is flexible and easy to use, but offers fewer features than TensorBoard. It should be noted that TensorBoard can now be integrated with PyTorch, allowing for the use of both tools.

Is Tensorflow Right For You? 

Does TensorFlow meet your needs? There is no definitive answer to this question. You will need to take time to consider your plans for AI development. As we’ve explored, there are alternatives. Ultimately, the question centers on your specific requirements.
There’s no denying that AI and deep learning are the future. It seems that the technology is destined to touch every aspect of our lives, from integration with IP telephone (VoIP) systems to innovations in cybersecurity. If you want to be at the heart of this, you need the right tools, and there’s no denying that TensorFlow can help shape innovations.


Ludwig Toolbox Makes Deep Learning Accessible to All Tue, 24 Nov 2020 10:52:02 +0000



The last decade has seen exponential growth in deep learning capabilities and their application in research and development. Traditionally, deep learning as a discipline has been limited to those with considerable training and knowledge of machine learning and AI. Fortunately, we’ve seen growing efforts to democratize deep learning, such as the creation of the Ludwig toolbox, an open-source deep learning tool built on top of TensorFlow that allows users to train and test deep learning models without writing code.

Piero Molino is a Senior Research Scientist at Uber AI with a focus on machine learning for language and dialogue. I spoke with him before his presentation at Codemotion’s online conference: The Italian edition to find out more.

What are the origins of Ludwig toolbox? 

Ludwig toolbox

Piero describes Ludwig toolbox as “a project that got started quite a while ago. There’s probably still some line of code in the current code base that comes from a project that I worked on when I was at this startup called Geometric Intelligence. We were trying to do visual question answering, giving images questions and answering those questions. For instance, is there a cat in this image or who’s jumping over the boom in the image?

We wanted to compare different models for doing this task, so I started to create the abstractions that are there now in Ludwig.”

Uber acquired Geometric Intelligence in 2016, and Piero details, “While at Uber, I was tasked to solve a bunch of different machine learning problems. One was in customer support, another was neural graph networks for recommender systems, and another was a dialogue system. So I had many different tasks. And I tried to reuse the code that I wrote when I was at Geometric Intelligence, and make it more general so it could be applied to all these different tasks. It was just me building tools for myself, making my life easier.”

The code Piero was writing was available inside the company, and other people started using it internally, so after a couple of years, in February 2019, he decided to make it open source so people could use it externally. He recalls, “In the last year and a half I’ve kept on improving it and updating it. A new version was released about a month ago which introduces a bunch of new features.”

What was your motivation towards making it open source as opposed to proprietary?

Piero notes that Ludwig toolbox is built on top of other open source libraries such as TensorFlow, Scikit-learn, Pandas and SpaCy. “On the one hand, this was a way to give back to the community. On the other hand, I was working at Uber, and Uber is not a company that sells machine learning platforms. So there was no real advantage to keeping it proprietary. Open source means that other people from the community could use it and also improve it. So I felt like it was win-win.”

The open source release has resulted in some unexpected use cases. Researchers in biology have used it to analyze images of worms, which they would otherwise not be able to do because they don’t have the expertise to be able to use deep learning models for their tasks. 

Machine learning developer

The software means you don’t have to write your own machine learning model: Ludwig does it for you, through a command line interface, so you don’t have to write code. One command can do as much as 500 lines of handcrafted TensorFlow or PyTorch code.

Piero describes Ludwig in terms of declarative machine learning as opposed to no code, “Because you’re just declaring what model you want. Saying, these are my inputs, and these are my outputs. And then Ludwig figures out how to write the model for you depending on those.”
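
In code, such a declaration is little more than structured configuration. The sketch below follows the input_features/output_features shape of Ludwig’s documented configs; the feature names and types here are hypothetical:

```python
# A declarative Ludwig-style model description: what goes in, what comes out.
config = {
    "input_features": [
        {"name": "review_text", "type": "text"},
    ],
    "output_features": [
        {"name": "sentiment", "type": "category"},
    ],
}
# With Ludwig installed, this dict would be handed to LudwigModel(config),
# or saved as YAML and passed to the command-line interface.
print(sorted(config))  # ['input_features', 'output_features']
```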

What have been the biggest challenges in evolving Ludwig?

According to Piero, the community aspect and the management of expectations have been challenging. As a project attached to a company like Uber, “the expectations from the users, in particular at the beginning, were really high in terms of how fast I could answer requests for adding features or for solving issues. I tried to do my best, but basically, it was just me and a small number of people who later assisted, not a huge team.

“In the beginning, I was checking and answering messages every hour. I learned that that’s probably not sustainable. Now I’m a little bit more disciplined in doing that. But at the same time, there have been other people emerging who really liked the project and wanted to assist, so that has been very beneficial.”

From a technical point of view, Piero notes that the biggest hurdle was “probably the shift between TensorFlow1 and TensorFlow2 because I started developing Ludwig during TensorFlow1. So all the code base was structured around the abstraction of TensorFlow. TensorFlow2 changes the abstraction, so it took quite some time to adapt.” 

Fortunately, TensorFlow 2 offered significant advantages in terms of code structure: the gains in extensibility, general quality, and ease of dealing with the underlying TensorFlow layer were definitely worth it.

What’s next for Ludwig toolbox? 

Ludwig’s design makes it highly open to new improvements, in that it provides, to a certain extent, interfaces that allow people to “add their own models, their own features, add new optimizers, new learning rate schedulers; all these things can be plugged into Ludwig relatively easily. Maintainers’ effort can be used in many different ways: we can add more models and features, or we can improve what is there, making it more scalable, making it faster, etc.”

Piero shares that they are currently pursuing scalability in terms of data pipelines, pre-processing and interaction with data. “This is important because that will basically enable it to be used in industrial contexts where there is a huge amount of data. Right now, those kinds of tasks are not very well supported. What is more supported is the use case of a data scientist trying to solve their own problems, rather than help a company that has a terabyte of data that wants to train a model on a terabyte of data that sits in a remote machine.” 

Community involvement is most welcome. Piero hopes that interested people will help to add new models and features, especially once the large-scale use cases are in operation.

Want to learn more about Ludwig toolbox? 

Join Piero and learn more about the deep learning toolbox at the Codemotion Virtual Conference: The Italian Edition, held November 3-5, from 14:00 to 19:00 CET.

A single ticket grants you attendance to four conferences spread over the week, offering a deep dive into a plethora of topics relating to Backend, Frontend, Emerging Technologies, and AI / ML / DL. It’s a fantastic opportunity to learn first-hand about the best state-of-the-art technology, activities, good practices, and case studies for everyone working in tech regardless of your profile or your level of experience.


How to build a GAN in Python Fri, 15 May 2020 13:00:00 +0000



Introduction

Generative Adversarial Networks (GANs) are a hot topic in machine learning for several good reasons. Here are three of the best:

  1. GANs can provide astonishing results, creating new things (images, texts, sounds, etc.) by imitating samples they have previously been exposed to.
  2. A GAN offers a new paradigm in machine learning – a generative one – that combines pre-existing techniques to provide both current and brand new ideas and results.
  3. GANs are a recent (2014) creation of Ian Goodfellow, the former Google, now Apple, researcher (also the co-author of a standard reference in deep learning with Yoshua Bengio and Aaron Courville).

It is likely that readers will already have encountered some of the impressive results GANs are capable of, especially in the realm of image processing. Such networks are able, upon request, to draw a picture of a red flower, a black bird or even a violet cat. Furthermore, that flower, bird, or cat does not exist at all in reality, but is entirely the product of the network’s ‘imagination’.

These images are not photos of real people – they have been generated by a properly trained GAN!

How is this possible, and can we share in the fun? This article endeavours to address both questions, using functional Python code that can be run on your laptop. You may need to add some packages that are missing from your Python installation, but that’s what Pip is there for…

What is a Generative Adversarial Network?

Neural networks (NNs) were devised as prediction and classification models. They are powerful, non-linear optimizers which can be trained to evolve their inner parameters (neuron weights) to fit the training data. This will enable the NN to predict and classify unknown data of the same kind.

We all know how impressive the data approximations of neural networks, in which ‘data‘ can mean just about anything, can be. However, the features of such algorithms also suggest some of their drawbacks, such as:

  • Neural networks need labelled data to be trained properly
  • Worse, they need a lot of labelled data
  • Worse still, we generally have no idea what the contents of a neuron actually do, except in some special cases

Intrinsically, neural networks are supervised algorithms. Nonetheless, some of their variants work perfectly well as unsupervised algorithms. These can be trained on any kind of data, without requiring the ‘label‘ usually attached to enable the network to differentiate known things from unknown things.

Unsupervised networks have previously been discussed in my articles, using the example of dealing with time series. Any time series may be treated as a labelled training set: each point serves as the value to predict, while the points that precede it provide the input data (see this article for more details).

The GAN paradigm offers another interesting unsupervised setting for neural networks to play in, and is described briefly below.

Let us begin with the words the acronym GAN stands for: generative, adversarial, networks. The last is the most obvious – networks: GANs are built up using (usually deep) neural networks. A GAN starts out with an input layer with a certain number of parallel input neurons (one for each number represented by the input data), some hidden layers and an output layer, connected in a directed graph and trained by a variant of the gradient-descent backpropagation algorithm.

Next, we come to the word generative, which denotes the aim of this class of algorithms. They produce rather than consume data. More specifically, the data these algorithms produce contains new information of the same ‘class’ as the input data used to generate it. The generation process is not spontaneous, but data are generated from other data, via a mechanism that will be described later.

Finally, the word adversarial – the most mysterious term in the acronym – explains how generation occurs, namely through a competition between two adversaries. In the case of a GAN, the adversaries are neural networks.

Therefore, a GAN aims at generating new data via networks deliberately set up in competition with each other in order to achieve this goal. A GAN is always split into two components – two neural (usually deep) networks. The first is known as the discriminator, and it is trained to distinguish a set of data from pure noise. For example, the input data could include a collection of photos of flowers as well as a huge number of other images which have nothing to do with flowers. Each photo may not have an explicit label, but which photos belong to the collection of flowers, and which do not, is known.

The network can then be trained to differentiate flowers from non-flowers or, for that matter, to distinguish photos from pictures created from random pixels. This first ‘discriminator’ component of the GAN is a standard network trained to classify things. The input is an example of the data we want to generate (a collection of photos of flowers if we want to generate flower images), while the output is a yes/no flag.

The other network is the generator: this produces as output the kind of data the discriminator is trained to identify. To achieve this output, the generator uses a random input. Initially this will produce a random output, but the generator is trained to backpropagate the information, whether or not its output is similar to the desired data (e.g., photos of flowers).

To that end, the generator’s predictions are fed into the discriminator. The latter is trained to recognize genuine flowers (in this example), so if the generator can counterfeit a flower sufficiently well to trick the discriminator, then our GAN can produce fake photos of flowers that a well trained observer (the discriminator) will take for the genuine article.

At last, our generation task is accomplished.

One way to think of a GAN is as a room where a forger and an art critic meet: the former offers fake paintings, affirming their authenticity; the latter tries to confirm whether or not they actually are the real deal. If the forger is so good at counterfeiting that the critic mistakes the fakes for the original paintings, then the fakes may be offered at auction in the hope that someone will buy them…

At first glance, GANs may seem to be analogous to reinforcement learning, but the apparent similarity does not stand up to scrutiny. A GAN sets up two networks in competition with each other – the goal is to augment their opposing skills in order to produce fake data that seems genuine. Reinforcement learning, on the other hand, checks a single agent against an environment and either ‘reinforces’ or ‘punishes’ the agent to correct its behaviour. There’s no competition – just a pattern that needs to be discovered in order to survive.

Instead, GANs may be thought of as a generalisation of the Turing test principle: the discriminator is the tester and the generator is the machine trying to pass the test; the only difference is that in this case both actors are machines (see here for more detail on why Turing’s ideas were seminal for machine learning).
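
The forger-and-critic loop can be sketched as a minimal Keras training step. This is an illustrative skeleton only: the layer sizes, latent dimension, and optimizer settings are arbitrary assumptions, not the configuration used later in this article.

```python
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense

LATENT_DIM, DATA_DIM = 8, 16

# Generator: random noise in, candidate data out.
generator = Sequential([
    Dense(32, activation="relu", input_dim=LATENT_DIM),
    Dense(DATA_DIM, activation="tanh"),
])

# Discriminator: data in, probability of being genuine out.
discriminator = Sequential([
    Dense(32, activation="relu", input_dim=DATA_DIM),
    Dense(1, activation="sigmoid"),
])
# Canonical GAN pattern: compile the discriminator while trainable, then
# freeze it before compiling the stacked model, so the stacked model only
# updates the generator's weights.
discriminator.compile(optimizer="adam", loss="binary_crossentropy")
discriminator.trainable = False
gan = Sequential([generator, discriminator])
gan.compile(optimizer="adam", loss="binary_crossentropy")

def train_step(real_batch):
    n = len(real_batch)
    noise = np.random.normal(size=(n, LATENT_DIM))
    fake_batch = generator.predict(noise, verbose=0)
    # 1) Train the critic on labelled real (1) and fake (0) samples.
    d_loss = discriminator.train_on_batch(real_batch, np.ones((n, 1)))
    d_loss += discriminator.train_on_batch(fake_batch, np.zeros((n, 1)))
    # 2) Train the forger to make the frozen critic answer "real".
    g_loss = gan.train_on_batch(noise, np.ones((n, 1)))
    return d_loss, g_loss
```

Calling train_step repeatedly on batches of genuine data alternates the two halves of the contest, which is all a GAN's training loop really is.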

A homemade GAN

GANs usually find their most spectacular applications in counterfeiting images, as already discussed. However, videos, texts, and even sounds may be generated, although technical issues can complicate the implementation of such ‘time series generators’.

In most tutorials, classic image generation is demonstrated, typically by using the MNIST dataset to teach the GAN how to write letters and digits. However, convolutional networks are required for this process, and the GAN element itself is often neglected in favour of details about setting up the convolutional and ‘deconvolutional’ networks which implement the discriminator and generator. In addition, training is quite a long process when appropriate equipment is lacking (a description of such GANs can be found in another contribution to the Codemotion magazine).

Instead, what follows is an explanation of a simple GAN programmed in Python, using the Keras library (which can be run on any laptop) to teach it how to draw a specific class of curves. I’ve chosen sinusoids, but any other pattern would work equally well.

Below, I’ll demonstrate how to:

  1. Generate a dataset of sinusoids;
  2. Set up the discriminator and generator networks;
  3. Use these to build up the GAN;
  4. Train the GAN, showing how to combine the training of its components, and;
  5. Contemplate a somewhat skewed and distorted sinusoid drawn by the program from pure noise.

An artificial dataset

Instead of a collection of images, I’ll produce a description of the curves I am interested in: sinusoids may be mathematically described as the graph of functions

a sin(bx+c)

where a, b, c are parameters which determine the height, frequency and phase of the curve. Some examples of such curves are plotted in the following picture, produced via a Python snippet.

import matplotlib.pyplot as plt
import numpy as np
from numpy.random import uniform

SAMPLE_LEN = 64   # number N of points where a curve is sampled
X_MIN = -5.0
X_MAX = 5.0
X_COORDS = np.linspace(X_MIN, X_MAX, SAMPLE_LEN)
fig, axis = plt.subplots(1, 1)
# Plot four random sinusoids a*sin(b*x + c)
for i in range(4):
    axis.plot(X_COORDS,
              uniform(0.1, 2.0) * np.sin(uniform(0.2, 2.0) * X_COORDS
                                         + uniform(0.0, np.pi)))

We want our GAN to generate curves with this sort of form. To keep things simple we consider a=1 and let b∈[1/2,2] and c∈[0,π].

First, we define some constants and produce a dataset of such curves. To describe a curve, we do not use the symbolic form by means of the sine function, but rather choose some points in the curve, sampled over the same x values, and represent the curve y = f(x) by the vector (y1,…,yN) where yi = f(xi) for the fixed xs.

The y values are generated by using the previous formula for random values of b and c within the prescribed intervals. Having defined the training set, some of these curves can be plotted.

import numpy as np
from numpy.random import uniform
import matplotlib.pyplot as plt

SAMPLE_LEN = 64       # number N of points where a curve is sampled
SAMPLE_SIZE = 32768   # number of curves in the training set
X_MIN = -5.0          # least abscissa where to sample
X_MAX = 5.0           # last abscissa where to sample
# The set of coordinates over which curves are sampled
X_COORDS = np.linspace(X_MIN, X_MAX, SAMPLE_LEN)
# The training set: each row samples sin(b*x + c) at the fixed xs
SAMPLE = np.empty((SAMPLE_SIZE, SAMPLE_LEN))
for i in range(SAMPLE_SIZE):
    b = uniform(0.5, 2.0)
    c = uniform(0.0, np.pi)
    SAMPLE[i] = np.sin(b * X_COORDS + c)
# We plot the first 8 curves
fig, axis = plt.subplots(1, 1)
for i in range(8):
    axis.plot(X_COORDS, SAMPLE[i])

Our GAN in small pieces

Next we define our discriminator, namely the neural network used to distinguish a sinusoidal curve from any other set of sampled points. The discriminator consequently accepts an input vector (y1, …, yN) and returns 1 if it corresponds to a sinusoidal curve, otherwise 0.

The Keras library is then used to create a Sequential model in which to stack the different layers of the network. This discriminator is arranged as a simple shallow multilayer perceptron with three layers: the input layer with N neurons, N being the size of the input vectors; a second, hidden layer with the same number of neurons; and a third with just one neuron, the output layer.

The output of the input and hidden layers is filtered by a ‘relu’ function (which cuts negative values of its argument x) and by a ‘dropout’ (which randomly sets input units to 0 at a prescribed frequency during each step of training, to prevent overfitting).

The output neuron is activated via a sigmoid function which smoothly extends from 0 to 1, the two possible answers.
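As a quick numerical illustration of these two activations (this snippet is an aside, not part of the GAN itself), here is how relu and the sigmoid behave on a few sample values:

```python
import numpy as np

def relu(x):
    # Cuts negative values of its argument to 0
    return np.maximum(0.0, x)

def sigmoid(x):
    # Smoothly maps any real number into the interval (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(x))      # negatives become 0, non-negatives pass through unchanged
print(sigmoid(0.0)) # exactly halfway between the two possible answers: 0.5
```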

from keras.models import Sequential
from keras.layers import Dense, Dropout, LeakyReLU

DROPOUT_RATE = 0.4    # Empirical hyperparameter
discriminator = Sequential()
discriminator.add(Dense(SAMPLE_LEN, input_dim = SAMPLE_LEN, activation = "relu"))
discriminator.add(Dropout(DROPOUT_RATE))
discriminator.add(Dense(SAMPLE_LEN, activation = "relu"))
discriminator.add(Dropout(DROPOUT_RATE))
discriminator.add(Dense(1, activation = "sigmoid"))
discriminator.compile(optimizer = "adam", loss = "binary_crossentropy", metrics = ["accuracy"])

Next we come to the generator network. This is in a sense a mirror of the discriminator: we still have three layers, but the input layer accepts a noisy input of the same size as the output (a vector with N elements), and applies a ‘leaky relu’ function (which cuts negative values of its argument x to a small multiple of x itself). This network does not perform dropout, and emits its result via a hyperbolic tangent function. Since classification is not our goal, we use mean square error as the loss function instead of binary cross entropy when compiling the network.
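To make the ‘small multiple’ behaviour concrete, here is a minimal numerical sketch of leaky relu with slope 0.2, the same empirical hyperparameter used in the generator below:

```python
import numpy as np

def leaky_relu(x, alpha = 0.2):
    # Negative values are scaled by alpha instead of being cut to 0
    return np.where(x > 0, x, alpha * x)

x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
print(leaky_relu(x))  # -2.0 and -1.0 are scaled down to -0.4 and -0.2
```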

generator = Sequential()
generator.add(Dense(SAMPLE_LEN, input_dim = SAMPLE_LEN))
generator.add(LeakyReLU(0.2))   # Empirical hyperparameter
generator.add(Dense(SAMPLE_LEN))
generator.add(LeakyReLU(0.2))
generator.add(Dense(SAMPLE_LEN, activation = "tanh"))
generator.compile(optimizer = "adam", loss = "mse", metrics = ["accuracy"])

Next, we plug the output of the generator into the discriminator as input, so that the whole GAN network is ready to be trained.

gan = Sequential()
gan.add(generator)
gan.add(discriminator)
gan.compile(optimizer = "adam", loss = "binary_crossentropy", metrics = ["accuracy"])

How is a GAN trained?

The GAN is now ready to be trained. Instead of immediately launching the fit Keras method on the gan object we just instantiated, let’s pause and reflect on the concept of GAN to understand how to train it properly.

As has already been stated, the discriminator needs to learn how to distinguish between a sinusoid and another curve. This could be done by simply training it on our SAMPLE dataset and a noisy dataset, labelling elements of the former as sinusoids and of the latter as non-sinusoids.

However, the aim of the discriminator is not merely to learn our dataset but to intercept the fakes produced by the generator. With this in mind, the discriminator is trained as follows:

  1. For each epoch, a batch training is performed on both the discriminator and the generator.
  2. This batch training starts by asking the generator to generate a batch of curves.
  3. The output of this is coupled to a batch of sinusoids from our SAMPLE dataset, and a dataset with labels 1 (=genuine sinusoid) and 0 (=sinusoid produced by the generator) is provided to batch train the discriminator, which is thereby trained to recognise the generated sinusoid among the genuine examples.
  4. The generator is batch trained on random data: this training backpropagates along the whole GAN network, but weights in the discriminator are left untouched.

The result is that the discriminator is not trained to recognize sinusoids, but to distinguish between sinusoids from our datasets and sinusoids produced by the generator. Meanwhile, the generator is trained to produce sinusoids from random data in order to deceive the discriminator.

When the success rate of this deception is high (from the point of view of the discriminator), the GAN is able to generate fake sinusoids. Because we want the code to run without starving our laptops (which can be assumed in the absence of GPUs etc.) relatively small parameters are used to produce our dataset and train the GAN. Therefore we cannot expect the network to draw a smooth sinusoid; instead we expect a rather wobbly line that nonetheless displays a sinusoidal pattern.

To demonstrate how the GAN starts by drawing randomly, then gradually improves its skill at drawing a sinusoid during its ‘apprenticeship’, I have plotted some of the GAN outputs created during its training (one output every 10 epochs, since we train for just 64 epochs in total).
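The training loop below relies on a few names (EPOCHS, BATCH, NOISE) that are worth defining explicitly beforehand. The specific values here are my own choices, in line with the ‘small parameters’ mentioned above; any similar settings would do:

```python
import numpy as np
from numpy.random import uniform

SAMPLE_LEN = 64       # as defined when building the dataset
SAMPLE_SIZE = 32768
X_MIN, X_MAX = -5.0, 5.0

EPOCHS = 64           # total number of training epochs
BATCH = 128           # curves per training batch
# A fixed pool of noisy input vectors for the generator, one per training curve
NOISE = uniform(X_MIN, X_MAX, size = (SAMPLE_SIZE, SAMPLE_LEN))
```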

from numpy.random import randint

ONES = np.ones((SAMPLE_SIZE))
ZEROS = np.zeros((SAMPLE_SIZE))
print("epoch | dis. loss | dis. acc | gen. loss | gen. acc")
fig = plt.figure(figsize = (8, 12))
ax_index = 1
for e in range(EPOCHS):
    for k in range(SAMPLE_SIZE//BATCH):
        # Train the discriminator to tell genuine sinusoids
        # from those produced by the generator
        n = randint(0, SAMPLE_SIZE, size = BATCH)
        # Now prepare a batch of training records for the discriminator
        p = generator.predict(NOISE[n])
        x = np.concatenate((SAMPLE[n], p))
        y = np.concatenate((ONES[n], ZEROS[n]))
        d_result = discriminator.train_on_batch(x, y)
        # Train the generator through the whole GAN,
        # keeping the discriminator weights frozen
        discriminator.trainable = False
        g_result = gan.train_on_batch(NOISE[n], ONES[n])
        discriminator.trainable = True
    print(f" {e:04n} |  {d_result[0]:.5f}  |  {d_result[1]:.5f} |  {g_result[0]:.5f}  |  {g_result[1]:.5f}")
    # At epochs 3, 13, 23, ... plot the last generator prediction
    if e % 10 == 3:
        ax = fig.add_subplot(8, 1, ax_index)
        plt.plot(X_COORDS, p[-1])
        plt.ylabel(f"Epoch: {e}")
        ax_index += 1
# Plot a curve generated by the GAN
y = generator.predict(uniform(X_MIN, X_MAX, size = (1, SAMPLE_LEN)))[0]
ax = fig.add_subplot(8, 1, ax_index)
plt.plot(X_COORDS, y)

The output is:

epoch | dis. loss | dis. acc | gen. loss | gen. acc
 0000 |  0.10589  |  0.96484 |  7.93257  |  0.96484   
 0001 |  0.03285  |  1.00000 |  8.62279  |  1.00000   
 0002 |  0.01879  |  1.00000 |  9.54678  |  1.00000   
 0003 |  0.01875  |  1.00000 |  11.18307  |  1.00000   
 0004 |  0.00816  |  1.00000 |  13.98673  |  1.00000   
 0005 |  0.01707  |  0.99609 |  16.46034  |  0.99609   
 0006 |  0.00579  |  1.00000 |  13.86913  |  1.00000   
 0007 |  0.00189  |  1.00000 |  17.36512  |  1.00000   
 0008 |  0.00688  |  1.00000 |  17.61729  |  1.00000   
 0009 |  0.00306  |  1.00000 |  18.18118  |  1.00000   
 0010 |  0.00045  |  1.00000 |  24.42766  |  1.00000   
 0011 |  0.00137  |  1.00000 |  18.18817  |  1.00000   
 0012 |  0.06852  |  0.98438 |  7.04744  |  0.98438   
 0013 |  0.20359  |  0.91797 |  4.13820  |  0.91797   
 0014 |  0.17984  |  0.93750 |  3.62651  |  0.93750   
 0015 |  0.18223  |  0.91797 |  3.20522  |  0.91797   
 0016 |  0.20050  |  0.91797 |  2.61011  |  0.91797   
 0017 |  0.24295  |  0.90625 |  2.62364  |  0.90625   
 0018 |  0.34922  |  0.83203 |  1.88428  |  0.83203   
 0019 |  0.25503  |  0.88281 |  2.24889  |  0.88281   
 0020 |  0.28527  |  0.88281 |  1.84421  |  0.88281   
 0021 |  0.27210  |  0.88672 |  1.92973  |  0.88672   
 0022 |  0.30241  |  0.88672 |  2.13511  |  0.88672   
 0023 |  0.33156  |  0.82422 |  2.02396  |  0.82422   
 0024 |  0.26693  |  0.86328 |  2.46276  |  0.86328   
 0025 |  0.39710  |  0.82422 |  1.64815  |  0.82422   
 0026 |  0.34780  |  0.83984 |  2.34444  |  0.83984   
 0027 |  0.26145  |  0.90625 |  2.20919  |  0.90625   
 0028 |  0.28858  |  0.86328 |  2.15237  |  0.86328   
 0029 |  0.34291  |  0.83984 |  2.15610  |  0.83984   
 0030 |  0.31965  |  0.86719 |  2.10919  |  0.86719   
 0031 |  0.27913  |  0.89844 |  1.92525  |  0.89844   
 0032 |  0.31357  |  0.87500 |  2.10098  |  0.87500   
 0033 |  0.38449  |  0.83984 |  2.03964  |  0.83984   
 0034 |  0.34802  |  0.81641 |  1.73214  |  0.81641   
 0035 |  0.28982  |  0.87500 |  1.74905  |  0.87500   
 0036 |  0.33509  |  0.85156 |  1.83760  |  0.85156   
 0037 |  0.29839  |  0.86719 |  1.90305  |  0.86719   
 0038 |  0.34962  |  0.83594 |  1.86196  |  0.83594   
 0039 |  0.32271  |  0.84766 |  2.21418  |  0.84766   
 0040 |  0.31684  |  0.84766 |  2.22909  |  0.84766   
 0041 |  0.37983  |  0.83984 |  1.79734  |  0.83984   
 0042 |  0.31909  |  0.83984 |  2.10337  |  0.83984   
 0043 |  0.30426  |  0.86719 |  1.98194  |  0.86719   
 0044 |  0.30465  |  0.86328 |  2.31558  |  0.86328   
 0045 |  0.35478  |  0.84766 |  2.40368  |  0.84766   
 0046 |  0.30423  |  0.86328 |  1.93115  |  0.86328   
 0047 |  0.30887  |  0.83984 |  2.17885  |  0.83984   
 0048 |  0.35123  |  0.86719 |  2.00351  |  0.86719   
 0049 |  0.24366  |  0.90234 |  2.21016  |  0.90234   
 0050 |  0.33797  |  0.84375 |  1.99375  |  0.84375   
 0051 |  0.35846  |  0.84375 |  2.17887  |  0.84375   
 0052 |  0.35476  |  0.83203 |  2.15312  |  0.83203   
 0053 |  0.28164  |  0.87109 |  2.60571  |  0.87109   
 0054 |  0.25782  |  0.89844 |  1.87386  |  0.89844   
 0055 |  0.28027  |  0.87500 |  2.30517  |  0.87500   
 0056 |  0.31118  |  0.84375 |  2.00939  |  0.84375   
 0057 |  0.32034  |  0.85547 |  2.22501  |  0.85547   
 0058 |  0.34665  |  0.84375 |  2.11842  |  0.84375   
 0059 |  0.32069  |  0.85547 |  1.79891  |  0.85547   
 0060 |  0.32578  |  0.87500 |  1.85051  |  0.87500   
 0061 |  0.32067  |  0.87109 |  1.70326  |  0.87109   
 0062 |  0.31929  |  0.85938 |  1.99901  |  0.85938   
 0063 |  0.38814  |  0.83984 |  1.55212  |  0.83984   

Notice that the first picture, after three epochs, is more or less random, while the subsequent images move towards a smoother curve (even if our 64 epochs are not enough for a really good curve!) and, more importantly, towards a curve that displays a sinusoidal trend.

What can also be observed is the progress of loss and accuracy for both the discriminator and the whole generative network during training. On examining this log we can see that the lower the loss value of the GAN, the better the curve approximates a sinusoid. Finally, on examining the values for the discriminator, it is clear that some adjustments in the hyper-parameters (or even in the architecture of the networks) are in order.
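The claim that a lower GAN loss goes with a better sinusoid can also be checked quantitatively. One simple way (my own aside, not part of the article's pipeline; the function name and the grid over b are arbitrary choices) is to fit a sin(bx + c) to a generated curve and inspect the residual: since a sin(bx + c) = p sin(bx) + q cos(bx), the fit is linear in p and q for each candidate b, so a small grid over b plus least squares suffices.

```python
import numpy as np

def sinusoid_residual(y, x_coords, b_grid = np.linspace(0.5, 2.0, 151)):
    """Smallest least-squares RMS residual of a*sin(b*x + c) fits to y."""
    best = np.inf
    for b in b_grid:
        # a*sin(b*x + c) = p*sin(b*x) + q*cos(b*x): linear in p and q
        A = np.column_stack((np.sin(b * x_coords), np.cos(b * x_coords)))
        coef, _, _, _ = np.linalg.lstsq(A, y, rcond = None)
        best = min(best, np.sqrt(np.mean((A @ coef - y) ** 2)))
    return best

x = np.linspace(-5.0, 5.0, 64)
true_curve = np.sin(1.3 * x + 0.7)
noise_curve = np.random.default_rng(0).uniform(-1.0, 1.0, 64)
# A genuine sinusoid gives a residual near 0, pure noise a much larger one
print(sinusoid_residual(true_curve, x), sinusoid_residual(noise_curve, x))
```

Applied to the curves plotted during training, this residual should shrink as the GAN loss goes down.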


The example we have played with here may not seem especially impressive, but it should. In the course of this article, two shallow networks have been assembled which (dropout and leaky relu aside) could have been programmed in the late 1980s. However, setting these networks up against each other in competition has produced a generating network that ‘draws’ curves resembling the ones fed to it.

Beyond that, the network learns which patterns to imitate from just a small sample of descriptions, and running the programs on your computer has probably taken a few minutes at most.

By combining more sophisticated networks along the same lines, a GAN able to generate digits, letters, or more complex figures can be created. Some modifications in the training techniques and in the representation of data would allow the GAN to generate speeches, videos, and in the near future, anything of which there are plenty of examples on the Web, which is to say, almost everything!

Recommended Article: Top Trending Python Frameworks

The post How to build a GAN in Python appeared first on Codemotion Magazine.

iPhone X’s Face ID using Deep Learning Tue, 03 Dec 2019 17:30:07 +0000

Interview with Norman Di Palo to learn more about the artificial intelligence behind iPhone X's Face Detection, siamese neural networks and more!

The post iPhone X’s Face ID using Deep Learning appeared first on Codemotion Magazine.


Hello Norman. Recently you’ve presented to the members of the Facebook Developer Circle community your talk on “How I implemented iPhone X’s Face ID using Deep Learning”. Would you like to tell us what it was about?

I gladly accepted the invitation to give a speech at the Facebook Developer Circle community. I always watch Apple keynotes, in which they present new products, and as a fan I'm very interested. When I saw the iPhone X I was very surprised, especially by the deep learning used to unlock the device by face rather than by fingerprint. The amazing thing is that it only needs to “see” the phone owner's face for a few seconds, and can then recognise him or her forever, with various different nuances. It's a challenge that aroused my curiosity, so I started to do some scientific research to understand how to implement it. I found similar datasets that combined photos of people's faces with photos carrying depth information, and I created a so-called Siamese neural network, which has the advantage that it can be trained offline and can then learn a new face quickly, compared to classic deep learning, which takes a long time to learn something new.

Later on, I wrote an article on my blog because I like to share these things with the others. After that, I woke up in the morning with hundreds of views! Apparently, someone had posted the article on Hacker News, which is a very famous news portal in the tech industry, and it was translated into five languages.

Siamese neural network, what is it? Can you explain it to us?

A Siamese neural network is composed of two identical neural networks. Imagine the same network taking two images as input and, instead of saying whether they show dogs, cats, or a picture of Norman or Simon, simply learning to tell what the distance between the two photos is, or how much they resemble each other. How does this differ from classifying dogs and cats? I can take millions of photos of people offline, train a neural network, perfect it over time, and then take two faces and tell whether they belong to the same person or not. In conclusion, this network tells you how different or similar two images are to one another.

That’s super interesting. We’re really looking forward to other talks like this. But let’s change the topic for now. For those that did not have a chance to meet you in person, can you tell us a bit about yourself?

My name is Norman Di Palo. I'm from Naples and based in Rome, though currently I live in Helsinki, where I work as an AI researcher in a startup. I have a degree in Automation Engineering, where I worked on control and intelligent systems. That's how I became interested in cognitive robotics, so I started looking for a Master's degree that would also include Artificial Intelligence. I'm about to graduate now and I'm working on my Master's thesis. As for my work, I founded a startup together with my colleagues. I'm also involved in Pi School, where twice a year we choose 15 talented engineers from around the world. We carry out AI studies and solve problems given to us by companies, including Amazon, BNL, Cisco and Poste Italiane. I am an AI Advisor and Project Manager on some of these projects.
