Snake oil and Piña Colada: Will I Survive the Artificial Intelligence Summer of Love?
I have been around the block for a while and I have seen this happen before in one form or another, but this time it's big. This time it's crazy big. I am talking about the machine learning madness that has taken over everything.
How did it start?
A long time ago in the field of computer science a brand new area of research was born: it was the 60s, and this new field of investigation, aimed at understanding how the human mind worked, was — without false modesty — called artificial intelligence. The general idea was that if you are able to create something that looks intelligent, or is indistinguishable from something intelligent, then it must be intelligent. All of this, mainly, because of the lack of a formal definition of what intelligence means. You may have heard of the so-called «imitation game», recently made famous by a movie of the same name, and while that was a good start, it did not go particularly well.
The years passed, many different techniques came along, from expert systems to fuzzy logic, but nothing seemed particularly intelligent, in the intuitive sense of the word. People lost interest in the field and what is usually referred to as the «winter of artificial intelligence» began.
During that hard and cold winter, someone still in the area happened to remember the existence of a field of mathematics called statistics and of a concept called fitting. Fitting is an operation that, given a bunch of inputs and outputs, tries to approximate some unknown function. Wait a minute! Doesn't that mean that in practice such a system is "learning" a function? And "learning" seems to be a characteristic of something "intelligent". Cool! We can re-use statistics to make artificial intelligence! The lesson learned from before was not to aim too high, so the name was scaled down to machine learning: it ain't as cool as artificial intelligence, but it still sounds better than statistical learning!
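If the word "fitting" sounds mysterious, it really isn't. Here is a toy sketch of the whole idea, using plain least-squares line fitting (the data and the hidden line are made up for illustration):

```python
import numpy as np

# Noisy samples of an unknown function (here, secretly close to y = 2x + 1).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

# "Fitting": find the straight line that best approximates inputs -> outputs.
slope, intercept = np.polyfit(x, y, deg=1)

# The "learned" function can now predict outputs for inputs it never saw.
prediction = slope * 10.0 + intercept
print(round(slope, 1), round(intercept, 1))  # roughly 2.0 and 1.0
```

That's it. Everything else in the field is, to a first approximation, this trick with fancier functions and much more data.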
The field grew and many techniques were invented: support vector machines, neural networks, decision and regression trees, boosting, etc. There was only one little problem: machine learning was very slow, as all those techniques had to go through the data multiple times, iterating, multiplying big matrices, etc. Don't get me wrong, that was nice: a restricted number of people working in the shadows, knowing that history would prove them right.
It worked decently well if you didn't have any time constraints, but it was not too practical. Moreover, knowing that your model always has a chance of going goofy far from the average case on which it was trained — where the error was effectively minimised — was not the greatest of selling points. Another problem was the cost of getting all the data labelled, or even just enough data to be able to train a good model.
At a certain point two things happened.
The first one was that inexpensive enough graphics cards came into existence, along with the realization that you can leverage them for more general computations — exactly the ones needed for machine learning. The second one was Google realizing that it had — and has — enough data to actually do shit in machine learning and make it useful and famous.
Graphics cards are good at matrix multiplications, which fit neural networks well — usually very expensive to train, but at the same time very powerful. This combination led to the race towards what is known as deep learning.
Deep learning is an umbrella of techniques rooted in the principle that if information can be compressed, it is because the various pieces of data compressed together mean similar things. From this derives the idea that compression is learning — learning as in "generalizing". This is beautiful but, exactly as in kids' stories, good things come at a price: what is learnt is a black box. It may work, but you are not able to say why. Moreover, it does not necessarily live off explicitly labelled data; it can also work by having your algorithm get rewarded when it does things right.
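The compression-is-learning idea can be shown without any deep network at all. Here is a toy illustration using plain principal component analysis via SVD — the dataset is invented, and I am deliberately using the simplest possible "compressor":

```python
import numpy as np

# Toy dataset: 100 points in 3D that secretly live near a 1D line,
# so they are highly compressible.
rng = np.random.default_rng(0)
t = rng.normal(size=(100, 1))
data = t @ np.array([[1.0, 2.0, 3.0]]) + 0.01 * rng.normal(size=(100, 3))

# "Compress": project onto the single strongest direction found by SVD.
centered = data - data.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
code = centered @ vt[0]            # 3 numbers per point -> 1 number per point

# "Decompress": reconstruct the 3D points from the 1D code alone.
reconstructed = np.outer(code, vt[0]) + data.mean(axis=0)

# If the compression found real structure, the reconstruction error is tiny:
# the 1D code "learned" the direction the data actually varies along.
error = np.abs(reconstructed - data).max()
print(error < 0.1)
```

A deep network does the same thing with non-linear functions and many layers instead of one projection, but the principle — throw away redundancy, keep structure — is the same, black box included.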
What went strange? (Too early to say wrong, but I am confident in my pessimism)
The problem is — as always happens with humans — the hype and the overexcitement.
Now, the shitty part.
As in all the other — respectable! — cargo cults before this one, C-level people in companies started reading that there is a new thing in town and that it can solve any problem like magic: you shake the fairy machine learning wand on top of problems and those will transform into money! Worderfultasticulous™! They started hiring machine learning experts like crazy, but since they didn't know what they were buying, they bought piles of if-then-elses — in the best case! — instead. They contracted everyone who said deep learning with the right accent, winking at the right moment. Is that bad? No: if they want to buy shit, they should be free to do so!
Francesco, are you saying that you do not like machine learning? I do like it, in the measure in which I like all the other tools I use in my job. I do not get a hard-on when I use it: if it helps me do my job, I use it; otherwise I can live happily without it. The — my? — problem is that I would not be dishonest enough to sell machine learning as a solution to every problem just because there is someone ready to buy anything.
And after complaining about the sellers, let's talk about the buyers.
People who want to buy the latest shiny new thing can be divided into two groups. The first is composed of the easily impressionable (easy prey for the snake oil sellers) and the second of those who have no clue what they are buying (easy to prey on but not to keep).
The first ones are so easily impressionable that telling them the coffee machine is using machine learning can make them cream their pants so hard that they would need to go to the local hospital because they are dehydrated. And that's always soooooo coooool: even if the stuff obviously doesn't work, they are going to be super happy to have bought it. These customers are easy to spot: they form their technical judgement reading Wired (the lack of a link is intended). Done, let's move on to the second class of respectable customers™.
The second ones do not really understand what you sell them, but they buy it out of fear that their competitors are buying it first. There is one caveat though: they didn't get the part about the "approximation". They do not understand what precision or recall mean, and they continuously talk about tuning, fixing things by hand, etc. Well, they are kind of right: before, they had fully manual solutions, and if something was not "according to spec", that was a bug and a ticket was opened. They can be spotted easily, as they are the same guys who freak out and start saying that an algorithm is racist.
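For the record, precision and recall — the part these buyers never got — are just two ratios. A minimal sketch, with made-up numbers for a toy spam filter:

```python
# 1 = spam, 0 = not spam. The model is approximately right, not exactly right.
actual    = [1, 1, 1, 0, 0, 0, 1, 0]
predicted = [1, 0, 1, 1, 0, 0, 1, 0]

true_pos  = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
false_pos = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
false_neg = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)

precision = true_pos / (true_pos + false_pos)  # of what we flagged, how much was spam?
recall    = true_pos / (true_pos + false_neg)  # of the real spam, how much did we catch?
print(precision, recall)  # 0.75 0.75
```

A 0.75 here is not a bug to open a ticket about; it is the product. That is the mental shift these customers never make.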
What can we hope for the future?
I personally hope for mass extinction. No, seriously. I hope for the hype to end soon, with customers becoming more aware of what they are buying — becoming more realistic — and, as a consequence, for the snake oil sellers to disappear, or to move on to the next shiny new thing™. Because, and I am very serious here, machine learning will stay, but it will stay where it is actually useful, i.e., not for solving everything, and it will be invisible to the eyes, i.e., it will be part of the solution you buy rather than the solution you buy. Will there still be people who want to use machine learning everywhere? Yes, there will be, and we will look at them the way we look at those who want to use XML everywhere.