This is not me blogging!

I think you’ll like this blog post! I have recently been spending more time on Deep Learning, as my current project is taking me in that direction. For fun, I tried a little RNN-based deep network to generate text from examples. Naturally, I used as input all my previous blogs (from both platforms: the internal Ericsson one as well as the WordPress one). I trained it overnight, around 140 training passes (or epochs), and asked it to generate a new blog post for me 🙂 . So, I’ll give TheLoneNut AI the stage, and come back at the end to comment on the technique used. I’m using The Neural Network Zoo convention established by Fjodor Van Veen at the Asimov Institute to display the neural network.

It has been awhile, but to compensate I’ll give you plenty of material over the last two years and we will apply it to the IoT domain. To explain in a few sentences, I would say that we are building some prototypes of Machine to Machine communication using IMS as a smart pipe to provide home automation and in that context, we are building some hardware (using Arduino and Odroid) and even making a custom fit case for that hardware using 3D printing. Below shots are of one of the pieces we built: the IMS M2M Home Gateway (the device which converts the serial information related to the sensors and actuators received or sent to the House Arduino board) to a Raspberry Pi’s. This year it is more difficult. The platform I use does not provide any readable analytics data. Under such circumstances I can only base myself a question: “Why so many peoples decide to concentrate on only one aspect of their potential (e.g.: science versus arts)?” Some will say that to excel one has to fully concentrate on one subject. I don’t adhere that theory. Especially in the sport news domain where tools like Wordsmith created by Automated Insight are used to generate Virtual Peoples, we should get back data equivalent to what we get from the Canadian census data is complemented with the Québec statistical bureau for the age pyramid points about the reality it models, but it must clearly stay a toy. It doesn’t prevent having usage based charging where Ericsson is paid for the used capacity whatever the level it goes.

On the other side of the range, if you take one of the big aircrafts we have in service (comparable size as a submarine); it takes only 2-4 people to man it… simply because a crew of more than 3 days. Getting good in programming a language and in the system rules to make this technology available to the Ericsson Connected Tree can serve as a hyperbole of what an all communicating world is. It can inspire the way he wants, there is no guarantee of safety for anyone, there is little potential and work in a socialty that could help us to be better, faster and stronger? Let me know!

OK, some hiccups along the way! But for a network which learns from characters only (no words, purely character level), predicting the most probable next character given the 40 preceding ones and then picking one at random according to those probabilities, I find it quite amazing at mimicking my style of writing, i.e. too-long sentences 😉 .
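To make that sampling loop concrete, here is a minimal sketch of what it could look like. This is not my exact code: the helper names (generate, chars, char_to_idx) are assumptions; only the 40-character window and the random pick from the predicted distribution come from the description above.

import numpy as np

def generate(model, seed, chars, char_to_idx, n=500):
    """Generate n characters, starting from a seed of at least 40."""
    text = seed
    for _ in range(n):
        # Encode the last 40 characters as integer indices.
        x = np.array([[char_to_idx[c] for c in text[-40:]]])
        # The model predicts a distribution at every position of the
        # window; only the last one matters for the next character.
        probs = model.predict(x)[0, -1]
        probs = probs / probs.sum()  # guard against float rounding
        # Pick the next character at random, weighted by probability.
        next_idx = np.random.choice(len(chars), p=probs)
        text += chars[next_idx]
    return text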

There is nothing really complex in that Deep Neural Network (if I can even call it a DNN). Here is a picture of the structure (for which I take no credit; it was demonstrated by Jeremy Howard in his excellent Practical Deep Learning for Coders course).

[Figure: TheLoneNut AI network architecture]

As you can see, it starts with an embedding layer over the characters found in the corpus (106 of them), which happen to be: ‘\t\n !”#$%&\'()*+,-./0123456789:;<=>?ABCDEFGHIJKLMNOPQRSTUVWXYZ[]^_abcdefghijklmnopqrstuvwxyz|~Éáäéïó–’, yes, some accented characters, I’m French-speaking after all! The embeddings are then batch normalized. This is followed by two Long Short-Term Memory (LSTM) layers. Finally, we have a dense layer for the output, to which we apply a 10% dropout and a softmax activation function. So, three (or four if you count the embeddings) layers all in all. Below is the summary of the model:

_________________________________________________________________
Layer (type) Output Shape Param # 
=================================================================
embedding_35 (Embedding) (None, 40, 24) 2544 
_________________________________________________________________
batch_normalization_10 (Batc (None, 40, 24) 96 
_________________________________________________________________
lstm_36 (LSTM) (None, 40, 512) 1099776 
_________________________________________________________________
lstm_37 (LSTM) (None, 40, 512) 2099200 
_________________________________________________________________
time_distributed_22 (TimeDis (None, 40, 106) 54378 
_________________________________________________________________
dropout_24 (Dropout) (None, 40, 106) 0 
_________________________________________________________________
activation_7 (Activation) (None, 40, 106) 0 
=================================================================
Total params: 3,255,994
Trainable params: 3,255,946
Non-trainable params: 48
_________________________________________________________________
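For readers who would like to reproduce it, here is a minimal sketch of how a model with those exact layer sizes could be written in Keras. The sizes (106-character vocabulary, 40-character window, 24-dimensional embeddings, two 512-unit LSTMs, 10% dropout) come from the summary above; the optimizer and loss choices are my assumptions.

from keras.models import Sequential
from keras.layers import (Embedding, BatchNormalization, LSTM,
                          TimeDistributed, Dense, Dropout, Activation)

vocab_size = 106  # distinct characters in the corpus
seq_len = 40      # characters of context fed to the network
embed_dim = 24    # size of each character embedding

model = Sequential([
    Embedding(vocab_size, embed_dim, input_length=seq_len),
    BatchNormalization(),
    LSTM(512, return_sequences=True),
    LSTM(512, return_sequences=True),
    TimeDistributed(Dense(vocab_size)),
    Dropout(0.1),
    Activation('softmax'),
])
# Loss and optimizer are assumptions; the sparse targets are the
# next character at each of the 40 window positions.
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam')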

I’m using Keras for the training and prediction, running over Theano on a single GPU (NVIDIA GTX 1080 Ti). Each epoch takes around 410 seconds on the entirety of my blogging, which accounts for about 300k characters. A bit depressing when I think of it… around 6 years of blogging produced only 300k characters, or 50k characters a year! I have programs far longer than that! Well, now TheLoneNut AI can remedy this and produce text for me 🙂 !
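And the training itself could look something like the following. Again only a sketch: the one-character-shift windowing over the corpus and the batch size are assumptions on my part; the 140 epochs and the 40-character window come from the post.

import numpy as np

# `text` is the full ~300k-character corpus and `char_to_idx` maps
# each of the 106 characters to an integer index (assumed helpers).
idxs = np.array([char_to_idx[c] for c in text])

# Overlapping 40-character windows; the target sequence is the same
# window shifted by one character (an assumed windowing scheme).
X = np.stack([idxs[i:i + 40] for i in range(len(idxs) - 40)])
y = np.stack([idxs[i + 1:i + 41] for i in range(len(idxs) - 40)])
y = y[..., None]  # sparse targets need a trailing axis in Keras

model.fit(X, y, batch_size=64, epochs=140)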

So, let’s ask the AI for a closing statement…

“Late” text messages are afraid of the monsters under their bed. Eventually it will be done I guess, but it might take time for some great explorer to discover this far away mountain.

Until next time!
