The Fallacious Simplicity of Deep Learning: wild data

This post is the fourth in a series of posts about the “Fallacious Simplicity of Deep Learning”. I have seen too many comments from non-practitioners who think Machine Learning (ML) and Deep Learning (DL) are easy; that any computer programmer, after a few hours of training, should be able to tackle any problem because, after all, there are plenty of libraries nowadays… (or other such excuses). This series of posts is adapted from a presentation I will give at the Ericsson Business Area Digital Services technology day on December 5th. So, for my Ericsson fellows, if you happen to be in Kista that day, don’t hesitate to come see it!

In the last posts, we’ve seen that the first complexity lies in the size of the machine learning and deep learning community: there are not enough skilled and knowledgeable people in the field. The second complexity lies in the fact that the technology is relatively new: the frameworks evolve quickly and require software stacks that reach all the way down to the specialized hardware we use. The third complexity was all about hyper-parameter setting, a skill specific to machine learning that you will need to acquire.

The next challenge with machine learning, or should I say the first challenge, is the data! You need data to perform a machine learning task, and for deep learning you arguably need even more of it.

This is an angle you will not see if you take courses (online or in school) about data science, machine learning or deep learning. At least, I have never seen it properly presented as a difficulty. When you take a course, the data is most of the time provided, or you get a clear indication of where to obtain it. In most cases it is well-curated data, with well-explained and documented fields and formats. If the data is not already available and the exercise wants you to concentrate on the data-scraping aspect, then the other parameters of the exercise will be well defined: they will tell you where to scrape the data from and what is important for the exercise, making sure the scraping can be done in a structured fashion. If the exercise wants you to concentrate on data cleaning or data augmentation, the dataset will have been prepared to properly show you how this can be done. Not to say that those courses or exercises are easy, but they do not show the real difficulty of wild data.

I think that for most companies, it all started with wild data. As data science grows in a company, people put structure and pipelines around data. But this comes with time and size, and it certainly does not prevent the entry of some wild data beast into the zoo from time to time. So, assuming you have access to data, it might really be wild data and you will have to tame it. Some say deep learning is less about data engineering and more about model architecture creation. That is certainly true in computer vision, where the format of the data was agreed upon some time ago. But in another domain, is the format so widely accepted? You might still have to do some feature engineering to feed your data into your model, adding the feature engineering problem to the hyper-parameter tuning problem…
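To make the taming of wild data concrete, here is a small hypothetical sketch (the records, field names and formats are invented for illustration): real-world records often mix date formats and hold missing or mis-typed values that a curated course dataset would never show.

```python
from datetime import datetime

# Hypothetical wild records: inconsistent date formats, empty and
# mis-typed fields -- the kind of mess curated course datasets hide.
raw = [
    {"date": "2017-12-05", "temp": "21.5"},
    {"date": "05/12/2017", "temp": ""},
    {"date": "2017-12-06", "temp": "N/A"},
]

def clean(record):
    # Try the date formats we have seen in the wild, one after the other.
    for fmt in ("%Y-%m-%d", "%d/%m/%Y"):
        try:
            date = datetime.strptime(record["date"], fmt)
            break
        except ValueError:
            continue
    else:
        return None  # unparseable date: drop the record
    try:
        temp = float(record["temp"])
    except ValueError:
        temp = None  # keep the record, but flag the missing measurement
    return {"date": date.date().isoformat(), "temp": temp}

cleaned = [clean(r) for r in raw]
```

Even this toy cleaner embodies decisions (which formats to accept, drop versus flag) that nobody hands you with wild data.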

On the other hand, you might well be in a situation where you do not have access to data. What if you are used to selling a specific type of equipment? That equipment might not be connected to a communication network; and if it is, the customer might never have been asked whether you could use their data. How do you put in place a contract or license that allows you to collect data? Does the legislation in the regions where you sell that equipment allow such collection, or are there restrictions you need to dance with? Is there personally identifiable information you will need to deal with? Will you need to anonymize the data? If you start a new business, based on a new business model, you might be able to build data collection into your product. But if you have an ongoing business, how do you incorporate it into your product? Does it need to be gradual?
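As a hypothetical sketch of one of these concerns, here is a minimal pseudonymization pass (field names and the salt are assumptions of mine; real anonymization needs legal and privacy review, this only illustrates the idea):

```python
import hashlib

def pseudonymize(record, pii_fields=("name", "email")):
    # Replace directly identifying fields with a salted hash, so that
    # records from the same person can still be linked together without
    # exposing who that person is.
    salt = "per-deployment-secret"  # assumed; must never ship with the data
    out = dict(record)
    for field in pii_fields:
        if field in out:
            digest = hashlib.sha256((salt + str(out[field])).encode()).hexdigest()
            out[field] = digest[:16]
    return out

safe = pseudonymize({"name": "Alice", "email": "a@example.com", "usage": 42})
```

Note that hashing alone is not anonymization in the legal sense; it merely pseudonymizes, which is often only the first of the steps the questions above imply.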

I guess you now have a good understanding of our fourth complexity. Where do you get your data from?

Data might be hard to come by. It might be wild and messy. It might not even relate to the question you want to answer… and that will be the subject of our fifth complexity: do you have the right question?


The Fallacious Simplicity of Deep Learning: hyper-parameter tuning

This post is the third in a series of posts about the “Fallacious Simplicity of Deep Learning”.

In the last posts, we’ve seen that the first complexity lies in the size of the machine learning and deep learning community: there are not enough skilled and knowledgeable people in the field. The second complexity lies in the fact that the technology is relatively new: the frameworks evolve quickly and require software stacks that reach all the way down to the specialized hardware we use. We also said that to illustrate the complexities we would show an example of deep learning using Keras. I described the model I use in a previous post, This is not me blogging!. The model can generate new blog posts resembling mine after being trained on all my previous posts. So, without any further ado, here is the short code example we will use.

 

KerasExample
A Keras code example.

In these few lines of code you can see the gist of how one would program a text-generating neural network such as the one pictured beside the code. More code is required to prepare the data and to generate text from the model predictions than the single call to model.predict. But the part of the code that creates, trains and makes predictions with a deep neural network is all in those few lines.

You can easily see the definition of each layer: the embedding in green; the two Long Short-Term Memory (LSTM) layers, a form of Recurrent Neural Network, in blue; and a fully connected dense layer in orange. You can see that the inputs are passed when we train, or fit, the model (in yellow), as well as the expected output, our labels (in orange). Here the label is, given the beginning of a sentence, the next character in the sequence: the subject of our prediction. Once you have trained that network, you can ask for the next character, and the next, and the next… until you have a new blog post… more or less, as you have seen in a previous post.
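For readers who cannot see the picture, here is a sketch of what such a model definition might look like in Keras. The exact sizes (vocabulary, embedding width, LSTM width) are assumptions of mine, not the values from the pictured example:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

vocab_size = 60   # assumed: number of distinct characters in the corpus
embed_dim = 32    # hyper-parameter: width of the character embedding

model = Sequential([
    Embedding(vocab_size, embed_dim),          # green: characters -> vectors
    LSTM(128, return_sequences=True),          # blue: first recurrent layer
    LSTM(128),                                 # blue: second recurrent layer
    Dense(vocab_size, activation="softmax"),   # orange: next-character scores
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
# model.fit(inputs, labels, ...) would then train on sequences (yellow)
# against the next character of each sequence (orange).
```

Four calls define the whole network; everything that makes those calls hard is in the values you pass them, as the rest of this post shows.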

For people with a programming background, there is nothing complicated here. You have a library, Keras; you look at its API and you code accordingly, right? Well, how do you choose which layers to use and in which order? The API will not tell you that… there is no cookbook. So, the selection of layers is part of our next complexity. But before stating it as such, let me introduce a piece of terminology: hyper-parameters. In deep learning and machine learning, a hyper-parameter is any parameter whose value you can vary, but ultimately have to fine-tune to your data if you want your model to behave properly.

So according to that definition, the deep neural network topology, or architecture, is a hyper-parameter: you must decide which layers to use and in what order. Hyper-parameter selection does not stop at the neural network topology, though. Each layer has its own set of hyper-parameters.

The first layer is an embedding layer. It converts, in this case, character input into a vector of real numbers; after all, computers can only work with numbers. How big will this encoding vector be? How long will the sentences we train with be? Those are all hyper-parameters.
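A minimal sketch of those two choices, with invented values (the text and the numbers are mine, not the model's): before the embedding layer can do its work, the text must be cut into fixed-length integer sequences, and both sizes below are yours to tune.

```python
# Hypothetical preprocessing for a character model.
text = "hello world, this is wild data"
chars = sorted(set(text))
char_to_idx = {c: i for i, c in enumerate(chars)}

embed_dim = 32    # hyper-parameter: width of each character's vector
seq_length = 10   # hyper-parameter: how many characters per training "sentence"

# Slide a window over the text to build the integer sequences the
# embedding layer will consume.
sequences = [[char_to_idx[c] for c in text[i:i + seq_length]]
             for i in range(len(text) - seq_length)]
```

Nothing in any API tells you whether 32 and 10 are good values for your corpus; that is precisely the point.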

On the LSTM layers: how wide will they be, i.e. how many neurons will we use? Will we use all the outputs all the time, or drop some of them (a technique called dropout, which helps regularize neural networks and reduce cases of overfitting)? Overfitting is when a neural network learns your training examples so well that it cannot generalize to new examples, meaning that when you try to predict on a new value, the results are erratic. Not a situation you desire.
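To show what dropping outputs means, here is a toy implementation of (inverted) dropout in plain Python; it is a didactic sketch, not what Keras does internally:

```python
import random

def dropout(outputs, rate, training=True):
    # During training, zero a fraction `rate` of the outputs at random and
    # scale the survivors so the expected value of the layer's output is
    # unchanged. At prediction time, pass everything through untouched.
    if not training:
        return list(outputs)
    keep = 1.0 - rate
    return [v / keep if random.random() < keep else 0.0 for v in outputs]
```

In Keras the rate itself is a hyper-parameter of the layer, e.g. LSTM(128, dropout=0.2), and again nothing tells you whether 0.2 is right for your data.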

You have hyper-parameters to select and tweak up until model compilation time and model training (fit) time. How big will the tweaks to your neural network weights be at each computation pass? How big will each pass be in terms of examples you give to the neural network? How many passes will you perform?
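Those three questions are, in Keras terms, the learning rate, the batch size and the number of epochs. A toy sketch (not the blog's model) of how the first and last of these steer plain gradient descent on a one-weight problem, f(w) = (w - 3)²:

```python
def gradient(w):
    return 2.0 * (w - 3.0)   # derivative of (w - 3)**2

learning_rate = 0.1   # hyper-parameter: how big each tweak to the weight is
epochs = 50           # hyper-parameter: how many passes we perform

w = 0.0               # a single "weight" to train
for _ in range(epochs):
    w -= learning_rate * gradient(w)
# With these settings w ends up very close to the optimum at 3. Set the
# learning rate to 1.1 instead and w diverges: same code, wrong value.
```

The API accepts any learning rate you type; whether your model converges, crawls or explodes is entirely your problem.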

If you take all of this into consideration, you end up with most of the code written being subject to hyper-parameter selection. And again, there is no cookbook or recipe yet to tell you how to set them. The API tells you how to enter those values in the framework, but cannot tell you what their effect will be. And the effect will be different for each problem.

It’s a little bit as if the argument you give to a print statement, you know, like print(“hello world!”), did not print “hello world”, but something that depends on that value (the hyper-parameter) and on whatever has been printed in the past, and you had to tweak it at training time until you got the expected results!!! This would make any good programmer go insane. But currently there is no other way with deep neural networks.

HelloWorldHyperParam2
Hello World with Hyper-Parameters.

So our third complexity is not only the selection of the neural network topology but also all the hyper-parameters that come with it.

It sometimes requires insight into the mathematics behind the neural net, as well as imagination and lots of trial and error, while staying rigorous about what you try. This definitely does not follow the “normal” software development experience. As said in my previous post, you need a special crowd to perform machine learning or deep learning.

My next post will look at what is sometimes the biggest difficulty in machine learning: obtaining the right data. Before anything can really be done, you need data. This is not always a trivial task…

 

The Fallacious Simplicity of Deep Learning: the proliferation of frameworks.

This post is the second in a series of posts about the “Fallacious Simplicity of Deep Learning”.

In the last post, we’ve seen that the first complexity lies in the size of the machine learning and deep learning community: there are not enough skilled and knowledgeable people in the field. To illustrate the other complexities, I’ll show an example of deep learning using Keras. Don’t worry if you are not used to it, or if you have not programmed in a while, or at all; I’ll keep it simple. Below is one of the software stacks you can use in order to perform deep learning. This one can be deployed on CPUs, so your usual computer or computer server; but it can also be deployed on Graphics Processing Units, or GPUs: basically, the video card in your computer.

kerasstack
The stack we will use for demonstration.

To use the video card for the type of computation required by deep learning, one of the GPU manufacturers, Nvidia, has created a software layer to program those GPUs. CUDA, the Compute Unified Device Architecture, allows someone to program the GPU to perform any highly parallelizable task. On top of that layer, Nvidia has created yet another layer targeting the task of running deep neural networks: cuDNN, the CUDA Deep Neural Network library. For my example, I’ll use on top of cuDNN the Google framework for graph computation, TensorFlow. Lastly, to simplify my task, since I won’t build new kinds of neurons or layers, I’ll use Google’s Keras library, which simplifies the process of defining a deep neural network, deploying it, training it and testing it. For something simple, we already have five layers of libraries, and I haven’t even mentioned the language I’ll use and the libraries it requires in turn (note that in the latest releases of TensorFlow, Keras has been integrated). But no biggie; in software development we are used to having many layers of software piling up.
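To give a feel for what “highly parallelizable task” means, here is the classic example in plain Python (a sketch of the idea, not actual CUDA code): elementwise vector addition, where no iteration depends on any other.

```python
def vector_add(a, b):
    # Each iteration is independent of all the others; on a GPU, CUDA
    # would map every element to its own thread and run them in parallel,
    # instead of one after the other as this Python loop does.
    return [x + y for x, y in zip(a, b)]
```

Deep learning training is dominated by exactly this kind of data-parallel arithmetic at enormous scale, which is why the GPU stack described above exists at all.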

The software stack I’m using for this example is only one of the possible ones. Just for Nvidia GPUs there are already more than a dozen frameworks that build on top of cuDNN. Moreover, Intel, AMD and Google are coming up with their own deep neural network hardware accelerators, and many other companies are doing the same. All this new hardware will come with its own equivalents of CUDA and cuDNN, and frameworks will proliferate for a while.

cuDNNAccelereted
Some of the cuDNN accelerated frameworks.

I’m not even going to dwell on the next layer of frameworks (e.g. TensorFlow). Hopefully they will adapt to the new hardware… otherwise, we’ll have even more frameworks. The same goes for the layer above that: e.g. Keras builds on top of TensorFlow (or Theano or CNTK, but let’s not open that door now). Hence, we can see our next complexity.

The second complexity: the piling of frameworks (including specialized hardware) and the proliferation of frameworks. Which one should you learn? Which one will become irrelevant?

The machine learning, and especially the deep learning, landscape is evolving rapidly. To be efficient, it requires new kinds of hardware that were not common in industrial servers even a few years ago. This means that the whole development stack, from the hardware to the released data product, is evolving quickly. Changing requirements are a known issue in software development; it is no different in data product development.

My next post will tackle, through an example, the next complexity: hyper-parameter tuning, something you do not see in software development but which is necessary for the development of a data product.

The Fallacious Simplicity of Deep Learning: the lack of skills.

This post is the first in a series of posts about the “Fallacious Simplicity of Deep Learning”.

Artificial Intelligence (AI) is currently touching our lives. It is present on your phone, in your personal assistants, in your thermostats, and on pretty much every web site you visit. It helps you choose what you will watch, what you will listen to, what you will read, what you will purchase, and so on. AI is becoming a cornerstone of user experience.

Today, if you look at cutting-edge AI technology, you must conclude that you can no longer trust what you see and hear. Some neural network techniques allow researchers to impersonate anybody on video, saying whatever they want with the right intonations, the right visual cues, etc. Neural networks are creating art pieces, for example by applying the style of the great painting masters to any of your photographs.

Soon, it will become even more relevant to your everyday life. We can already see the days of autonomous cars looming. Eventually it will be all transportation, then the regulation of every technological aspect of our lives, and even further…

The reach of Artificial Intelligence technology is growing so fast, and is so transformational, that we sometimes have the impression it must all be easy to apply. But is it so?

AI may look easy if you consider all the available resources: all the books, the plethora of online courses; you cannot visit a web page nowadays without being offered a machine learning course! There are tons of videos available online, from those courses but also from enthusiasts and university teachers. And if you start digging into the available software frameworks, you’ll find plenty of them.

So why would someone like Andrew Ng, one of the world’s best-known AI experts, come forth with the mission of training a million AI experts? Well, Machine Learning, Deep Learning and Artificial Intelligence knowledge is still sparse. Big companies have grabbed a lot of the talent, leaving universities empty. A lot of universities still don’t have programs dedicated to the topic, and those that have courses mostly propose only a couple of introductory ones. On the online front, countless people will start to learn but will abandon along the way, for many reasons; online course completion rates are much lower than university program completion rates.

Moreover, this is quite a difficult subject matter. There are many considerations that are quite different from what you would encounter with a software development background.

Hence a first complexity: not enough skilled and knowledgeable people; a smaller community than, say, web programming.

Stay tuned for my next post which will tackle the next complexity: the piling and proliferation of frameworks.