The History of Machine Learning: A Comprehensive Overview


Machine learning (ML) is a subset of artificial intelligence (AI), and it is fast becoming the backbone of most software today. It appears in many forms, from website chatbots and digital assistants like Siri and Alexa, to the algorithms that drive social media platforms like Facebook, to a variety of office software. Machine learning is so prevalent that it is difficult to list all the places where it digitally resides and all the tasks it performs.

As cutting edge as machine learning seems in our daily lives and work today, it has been around for a long time. The history of machine learning is compelling and surprising given how far back its roots reach. Its seemingly light-speed advancement now is possible because computer scientists have gotten better over time at imagining new applications and perfecting the math behind the models and algorithms.

To appreciate the possibilities that lie ahead for machine learning, it’s best to first savor the accomplishments it has already delivered by taking a look at machine learning history.

The Origins of Machine Learning

Like most technological breakthroughs and computing cornerstones, machine learning cannot be attributed to a single person or organization. Instead, many people have contributed to its development and resulting applications, and many more will in the future too.

There is debate over who invented it and when. The truth is that few things this complex spring from a single, sweeping vision delivered to the scene in a complete, workable form. Instead, different people work on creating different aspects, components, concepts, processes, models, and algorithms that collectively comprise the technology called machine learning. Each step was an incredible accomplishment on its own, but there is no doubt that the whole is more powerful than the parts.

There are, of course, people who stand out even among the esteemed and talented group of computing history-makers who collectively breathe intelligence into inanimate computers. The sections below highlight a few notable figures who were key to advancing machine learning at pivotal points in its history.

When Was Machine Learning Invented?

Despite ongoing debates, the general consensus is that machine learning became a known entity, and thus was officially invented, in 1952 by Arthur Samuel, a computer scientist who worked at IBM. Samuel gave this new approach to computing its name: machine learning.

Machine learning was once a stepping stone on the path to AI’s development, that long trek to producing general AI, which is the form that science-fiction movie and TV portrayals made famous and familiar.

However, machine learning is much narrower in its focus and capabilities than general AI. It eventually became apparent that it would be faster and easier (although still not easy) to apply machine learning to more immediate and diverse purposes than to aim it solely at AI’s development.

Machine learning has its own subset, called deep learning, which is even narrower than ML because it is far more specialized. General AI, in turn, is a step on the way to self-aware AI, a truly powerful but wholly futuristic form.

An interesting tidbit in machine learning history: deep learning was invented in 1943, nine years before machine learning came along. There is some debate over who invented deep learning, as it traces back to Walter Pitts and Warren McCulloch’s model of the neuron in 1943. It didn’t widely go by the name “deep learning” until Geoffrey Hinton rebranded neural network research under that moniker in 2006.

In the 1970s, machine learning branched out as a discipline of its own, and it now powers much of the world’s software. In an ironic turn in the history of machine learning and AI, it is often labeled as AI for marketing purposes today because most consumers are unfamiliar with the term and concept of machine learning.

Who Invented Machine Learning?

The first and arguably most publicly recognizable person in the history of machine learning and AI is Alan Turing, who created the Turing Test to determine whether a machine can truly think. It became a famous benchmark for measuring whether a machine possesses sufficient intelligence not only to think like a human, but to fool one into believing it is human too. This benchmark gave rise to several ways of artificially mimicking human thinking and of testing the effectiveness of the resulting thoughts and actions.

The aforementioned Arthur Samuel, who gave this smart technology its name, machine learning, also developed a machine learning-based program that played checkers. As an interesting note in machine learning and artificial intelligence history, it’s thought to be the first, or at least one of the first, games a machine could play.

The game was a simple way to illustrate how the machine could learn and respond to human actions and reactions. It learned the game with practice – much like humans do – using a minimax algorithm. Even though it is an earlier development in machine learning algorithm history, this type of algorithm is still used in game theory and automated decision-making to select the optimal next move for a player, or user.
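
To make the minimax idea concrete, below is a minimal, illustrative sketch in Python. It is not Samuel’s checkers program; the hard-coded game tree and its scores are hypothetical. The recursion simply shows how the algorithm selects the move with the best guaranteed outcome when the opponent also plays optimally.

    # A minimal minimax sketch (illustrative only, not Samuel's checkers code).
    # Each inner list is a choice point; each leaf is a terminal score from the
    # maximizing player's point of view. The tree below is a made-up example.
    def minimax(node, maximizing):
        if not isinstance(node, list):          # leaf node: return its score
            return node
        scores = [minimax(child, not maximizing) for child in node]
        return max(scores) if maximizing else min(scores)

    # A tiny two-ply game: the maximizer picks a branch, then the minimizer replies.
    game_tree = [[3, 5], [2, 9], [0, 7]]
    print(minimax(game_tree, maximizing=True))  # prints 3, the best guaranteed score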

But it was Frank Rosenblatt who is credited with the first design of a neural network for a computer. It was an early attempt at mimicking human thought processes and a brilliant advancement, yet it took until 1967 for someone to write an algorithm that could recognize patterns. Machine learning at its core is a pattern detector. It can rapidly find patterns in huge amounts of data, learn from those patterns, and then determine an outcome based on the patterns it finds.
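
The pattern-detection idea is easy to see in a small example. Below is a minimal sketch in the spirit of Rosenblatt’s Perceptron (not his original implementation), assuming a simple threshold unit and his weight-update rule; it learns the logical OR pattern from four labeled examples.

    # A minimal perceptron sketch (illustrative, not Rosenblatt's original code).
    # It learns the logical OR pattern from four labeled examples.
    def train_perceptron(samples, labels, lr=0.1, epochs=20):
        weights, bias = [0.0, 0.0], 0.0
        for _ in range(epochs):
            for (x1, x2), target in zip(samples, labels):
                prediction = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
                error = target - prediction
                # Perceptron rule: nudge the weights toward the correct output.
                weights[0] += lr * error * x1
                weights[1] += lr * error * x2
                bias += lr * error
        return weights, bias

    samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
    labels = [0, 1, 1, 1]                       # the OR pattern to be learned
    weights, bias = train_perceptron(samples, labels)
    for x1, x2 in samples:
        print((x1, x2), 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0)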

It is exceedingly difficult to credit any one of these people as the inventor. Even more so when you consider that these are just a few of the many people who contributed to this astounding innovation.

A Timeline of the Evolution of Machine Learning

Machine learning was first made possible when humans discovered that numbers and objects could represent things that exist in the real world. This was the first data. And machine learning both learns from data and feeds upon it.

Machine learning is math based and its models commonly work with data representing entities or actions in the real world. By this definition, one could say that machine learning became possible the moment someone counted something and made a record of the number, be that by purposeful notations on a cave wall or pebbles in a pouch.

Better methods of gathering and storing data followed, as did the machines that could do the necessary calculations. In 1642, a teenager in France built the first mechanical calculator. For the first time, a machine rather than a human was doing the math. A few decades later, the modern binary number system was created.

The first chess-playing automaton made its debut in 1770 and promptly set about confounding Europeans for decades afterwards. In 1834, Charles Babbage, widely considered the “father of the computer,” designed his Analytical Engine, a machine to be programmed with punch cards.

In 1842, Ada Lovelace created the first algorithm, making her the world’s first computer programmer. A few years later, George Boole’s work in algebra laid the logical groundwork that makes CPUs possible, though it would be more than a century before anyone built one.

In 1927, AI made its debut in the movies. But Alan Turing didn’t conceive his “Universal Machine” until 1936, proving once again that science fiction writers tend to be prescient or at least born before their time.

And so the history of machine learning continues, built piece by piece over centuries.

In recent times, machine learning is applied to diverse types of data. For example, today it is sometimes paired with other technologies such as computer vision to recognize patterns in images ranging from medical x-rays to facial recognition in videos. Algorithms now understand natural language queries as well.

It is far more practical to pick up machine learning’s history at a point more directly related to the field itself, somewhere further along the timeline between the first forms of data and today’s many applications.

Most would point to the emergence of the idea of neural networks as just such a natural starting point for machine learning’s origin story. So it is from that point that the following timeline of machine learning evolution to its current state begins:

1943 – The idea of an artificial neural network arises from the study of how neurons in the human brain work, an approach described at the time as “connectionism.” This is the year when neurophysiologist Warren McCulloch and mathematician Walter Pitts used simple electrical circuits to model intelligent neuron behavior.

1949 – Donald Hebb’s book The Organization of Behavior was published. Its contribution was the proposal that neural pathways strengthen with each use. This was important to understanding and quantifying the complex processes in the human brain.

1950 – Researchers begin trying to translate brain networks onto computational systems. Alan Turing also developed his Turing Test this same year, a test that would become the benchmark in assessing a machine’s level of intelligence.

1951 – Marvin Minsky and Dean Edmonds built the first machine with an artificial neural network, basing it on Hebb’s model and calling it the SNARC machine.

1952 – Arthur Samuel gave these new computing processes that copied human thinking their name, machine learning, and used the approach to develop a computer program that played checkers.

1954 – The first Hebbian network, which connects the psychological and neurological underpinnings of learning, was successfully implemented at MIT.

1957 – Frank Rosenblatt invented the Perceptron, primarily for image recognition, building on Hebb’s model and Samuel’s early machine learning algorithms.

1959 – Artificial neural networks learned to make phone calls clearer using Stanford’s ADALINE and MADALINE models. ADALINE was developed to recognize binary patterns; it could successfully predict the next bit while reading streaming bits from a phone line. MADALINE used an adaptive filter to eliminate echoes on phone lines and was the first neural network applied to a real-world problem. Both are still in use today.

1968 – Stanley Kubrick’s movie 2001: A Space Odyssey strongly advances the concept of general AI and self-aware AI.

1975 – The first multilayered and unsupervised network was developed.

1979 – The Stanford Cart was invented to solve the problem of driving a moon rover from Earth. Its developer, a Stanford graduate student, used a camera and a remote to navigate it around a room filled with obstacles, most of which were chairs.

1981 – The concept of Explanation Based Learning (EBL) was introduced by Gerald Dejong. It amounts to having ML analyze training data and derive a general rule on its own for identifying and discarding unimportant or irrelevant data. Think of it as self-cleaning.

1982 – The movies spur public expectations of ML and AI again, this time in the form of the replicants in Blade Runner. Scientists are now on notice that the public expects not only intelligent software that mimics the human brain, but also humanlike bodies with superpowers to encase and mobilize it.

1985 – A neural network called NETtalk taught itself to correctly pronounce new words at the pace of 20,000 new words a week. Initially it spoke gibberish as it tried to learn to speak but it continued learning until it could pronounce new words nearly effortlessly.

1997 – IBM’s chess-playing machine, dubbed Deep Blue, beat grandmaster Garry Kasparov.

1999 – The University of Chicago’s CAD Prototype Intelligent Workstation analyzed 22,000 mammograms and proved 52% more accurate in diagnosing cancer than human radiologists.

2006 – Geoffrey Hinton rebranded neural network research as “deep learning.” Today, huge Internet companies use his techniques to improve tools such as image tagging and voice recognition.

2009 – Netflix sought to improve its customer movie and TV recommendations by offering a $1 million prize to anyone who could outperform its algorithm at predicting consumer film ratings. The BellKor team of AT&T scientists took home the big prize, but only by a hair: they submitted just minutes ahead of the second-place team.

2010 – Microsoft’s Kinect tracked 20 human features at a rate of 30 times per second, enabling the first game and computer controls driven by movements and gestures.

2011 – IBM’s Watson won the game show Jeopardy! by besting two human competitors. It stumbled a bit but still won.

2012 – One of Google’s neural networks taught itself how to recognize cats and humans in YouTube videos. It wasn’t 100 percent accurate, but its scores were impressive for a self-taught machine: it detected felines correctly 74.8 percent of the time, and human faces with 81.47 percent accuracy.

2013 – Boston Dynamics created Atlas, a humanoid robot. Earlier, the same company created BigDog, a four-legged, dog-like robot.

2014 – A chatbot named Eugene Goostman successfully passed itself off as a Ukrainian teenager to 33 percent of the human judges, making it the first intelligent machine to pass the Turing Test (about 60 years after Alan Turing’s death).

This is the year healthtech began using machine learning in earnest to improve patient outcomes. One of the first applications predicted ER wait times from event simulations based on data points like staffing levels, medical histories, illness saturation (for example, outbreak periods such as flu season, or a regular summer day with no outbreaks), and hospital layouts.

It’s also the year that Facebook developed DeepFace for facial recognition.

2015 – Google’s AlphaGo beat a professional human player at Go, widely considered the world’s hardest board game. This meant that machines had bested humans at every major board game. It’s hardly novel (or sporting) to pit machine against human in games anymore.

This is also the year Amazon launched its own machine learning platform and Microsoft created its Distributed Machine Learning Toolkit.

And, it’s the year that Tesla debuted its Autopilot, a semi-autonomous feature for hands-free driving in consumer cars. Autopilot was delivered as a single software update to Model S owners overnight.

2016 – HAL came to life. Well, not really. But an AI system known as LipNet did successfully read lips with 93.4 percent accuracy, which put it on par with HAL, the fictional machine from the famous 1968 movie 2001: A Space Odyssey.

This is the same year that natural language processing made a high-profile retail debut. The North Face used IBM Watson’s natural language processing in a personal shopper chatbot on its mobile app.

This is also the year that Google Assistant, an AI-powered virtual assistant, was born.

2017 – Alphabet’s Jigsaw team built an ML system to automate troll-busting for companies that lacked the resources or will to moderate website comments.

2019 – Google AI scored a win in accurately diagnosing lung cancer.

2020 – Machine learning, deep learning, and AI were drafted and sent to the front lines in the fight against the COVID-19 pandemic.

2021 – A crewless, AI-controlled ship developed by The Mayflower Project set sail across the Atlantic Ocean.

Conclusion

Today, there are many open-source tools and frameworks available that you can use to power machine learning applications. PyTorch is a Python-based machine learning framework that makes use of CPUs and GPUs to accelerate processing. You can install PyTorch on an Ubuntu 20.04 Linode server and make use of GPU or dedicated CPU compute instances.
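
As a quick illustration of that CPU/GPU flexibility, the minimal sketch below (assuming PyTorch is already installed, for example with pip install torch) selects a GPU when one is available, falls back to the CPU otherwise, and runs a small tensor computation on the chosen device:

    # Minimal PyTorch sketch: use the GPU if available, otherwise the CPU.
    import torch

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    print(f"Running on: {device}")

    # A toy linear model and one forward pass on the chosen device.
    model = torch.nn.Linear(in_features=4, out_features=2).to(device)
    inputs = torch.randn(8, 4, device=device)   # a batch of 8 random samples
    outputs = model(inputs)
    print(outputs.shape)                        # torch.Size([8, 2])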
