
A Brief History of Machine Learning and Data Science

The history of machine learning is longer than you might think: the idea is not a new concept at all.

In fact, believe it or not, the idea of artificial intelligence is well over 100 years old!

It’s taken a little while to come into existence, but now we are beginning to reap the benefits of centuries of research.

From the development of the first algorithms that would later be used in machine learning, right through to the prospect of an A.I. singularity, today we are covering it all!

This is a brief history of machine learning and data science.

We shall begin at the start

We start our quest to develop artificial intelligence way back in 1642 when the first calculator was made. The invention of the calculator was really the first step in allowing human beings to use machines to process data at greater speed.

Things stayed fairly quiet over the next few years but then a big step forward was taken.

The first major development was the invention of the binary system in 1703 by the German philosopher Gottfried Wilhelm Leibniz. He is the man we have to thank for the foundation of virtually all modern computer architectures.

Over the following 120 years, things really heated up in the algorithm world.

In 1805, the method of least squares, one of my personal favorites, was published by Adrien-Marie Legendre.
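In case the name doesn’t ring a bell, here is the idea in modern notation (a standard textbook statement, not Legendre’s original wording): pick the parameters that minimize the sum of squared residuals between the observations and the model’s predictions.

```latex
\hat{\beta} = \arg\min_{\beta} \sum_{i=1}^{n} \left( y_i - x_i^{\top} \beta \right)^2
            = (X^{\top} X)^{-1} X^{\top} y \quad \text{(when } X^{\top} X \text{ is invertible)}
```

That closed-form solution is still the workhorse behind ordinary linear regression today.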

Not long after, in 1812, the derivation of Bayes’ theorem was completed.

Those of you who have been practicing machine learning for a while will be familiar with good old Bayes and his theorem.

Bayes’ theorem gives the probability that one event is true given that another, related event is true.

The theorem is key for classification and clustering algorithms.
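As a quick refresher, the theorem itself is a single line (standard form, using generic events rather than anything specific to this article):

```latex
P(A \mid B) = \frac{P(B \mid A) \, P(A)}{P(B)}
```

In a classification setting, A is typically a class label and B the observed features, so the formula tells you how likely each class is once you have seen the data.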


The 1700s and 1800s – who knew they were so big for Machine Learning

I know – before writing this, I had no idea the 1700s and 1800s were so exciting!

Not wanting to leave all the pre-1900s glory to algorithms, hardware took a step forward too: the first punch-card computer was designed in 1834.

Aged 27, the impressive Ada Lovelace was then able to propose an algorithm that used punch card computing to perform a series of tasks and calculations. This algorithm, completed in 1842, made Ada Lovelace the mother of all computer programming.

In other big news in the world of algorithms, Markov Chain Monte Carlo (MCMC) methods were developed. MCMC methods sample from probability distributions, and this class of algorithms is a foundational technique for computing numerical approximations over multi-dimensional data.

OK, so technically MCMCs were not developed until 1913, but I wanted to keep my algorithms together.
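To give a feel for what “sampling a probability distribution” means in practice, here is a minimal Metropolis-style sampler in Python. It’s a toy sketch, not the original 1913 formulation, and the target density, step size, and sample count are all illustrative choices:

```python
# A minimal Metropolis sampler -- a toy sketch of the MCMC idea.
import math
import random

def target_density(x):
    """Unnormalised density we want to sample from (here: a standard normal)."""
    return math.exp(-0.5 * x * x)

def metropolis(n_samples, step_size=1.0, start=0.0):
    samples = []
    x = start
    for _ in range(n_samples):
        proposal = x + random.gauss(0.0, step_size)   # propose a random move
        accept_prob = min(1.0, target_density(proposal) / target_density(x))
        if random.random() < accept_prob:             # accept or reject the move
            x = proposal
        samples.append(x)
    return samples

draws = metropolis(10_000)
print(sum(draws) / len(draws))  # should be close to 0 for a standard normal
```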

History of machine learning - pre-1900s

Now let’s move into the 20th century

By the start of the 20th century, many of the core ideas that would allow machine learning as we know it today to develop were already in place.

That being said, we still have a lot to discover before we can really make significant steps forward in the field.

Enter Alan Turing.

Becoming more familiar – welcome Alan!

Alan Turing was a British mathematician, widely regarded as the father of both modern computing and artificial intelligence.

Basically a big deal, as well as being a war hero, Turing produced work that guided us along the path that made AI possible.

In 1936 Turing published a paper outlining a computational machine that would be able to perform calculations on numerical data.

But Turing wasn’t done.

In 1950 he published another landmark paper outlining how a computer could become artificially intelligent. In this paper, the imitation game was proposed as a test for artificial intelligence. This test would go on to become known as the Turing Test.

The Turing Test has since become the gold standard for judging whether a natural language processing machine can pass as genuinely intelligent.

Shortly after the Turing Test paper came into our world, another development occurred.

In 1951, the first neural network learning machine was built by Marvin Minsky.

Other key players in machine learning development

Another key building block for computational neural networks came in 1957, when Frank Rosenblatt developed the perceptron. This was another step forward in building machines that mimic the way the human brain passes data through a network of neurons.
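To make the idea concrete, here is a toy perceptron in Python, loosely in the spirit of Rosenblatt’s design. The training data (the logical AND function), learning rate, and epoch count are all illustrative assumptions:

```python
# A toy perceptron: learn a linear decision rule from labelled examples.

def train_perceptron(data, labels, epochs=20, lr=0.1):
    """Learn weights and a bias for a binary classifier with labels in {0, 1}."""
    n_features = len(data[0])
    weights = [0.0] * n_features
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            activation = sum(w * xi for w, xi in zip(weights, x)) + bias
            prediction = 1 if activation > 0 else 0
            error = y - prediction                      # 0 when the guess is right
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Tiny example: learn the logical AND function.
data = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]
w, b = train_perceptron(data, labels)
print(w, b)
```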

One final point from the ’50s before we jump forward a few years.

In 1959, Stanford’s MADALINE became the first real-world application of a neural network. MADALINE used a neural network as an adaptive filter to remove echoes over phone lines.

An even more interesting point about MADALINE: it’s still in use today!
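Adaptive filtering of this kind is easier to picture with a tiny example. The sketch below is a generic least-mean-squares (LMS) filter in Python, not the actual Stanford system: the “echo” is just a delayed, scaled copy of a random reference signal, and the filter length and step size are made-up values:

```python
# A minimal LMS adaptive filter -- a sketch of the adaptive-filtering idea
# behind ADALINE/MADALINE-style echo cancellation.
import random

def lms_filter(reference, observed, n_taps=4, mu=0.05):
    """Adapt filter weights so the filtered reference tracks the observed echo."""
    weights = [0.0] * n_taps
    cleaned = []
    for n in range(len(observed)):
        # The most recent n_taps samples of the reference signal (zero-padded).
        window = [reference[n - k] if n - k >= 0 else 0.0 for k in range(n_taps)]
        estimate = sum(w * x for w, x in zip(weights, window))   # predicted echo
        error = observed[n] - estimate                           # echo removed
        weights = [w + mu * error * x for w, x in zip(weights, window)]
        cleaned.append(error)
    return cleaned

# Toy demo: the "observed" line is just a delayed, scaled copy of the reference.
ref = [random.uniform(-1, 1) for _ in range(500)]
obs = [0.6 * (ref[n - 2] if n >= 2 else 0.0) for n in range(500)]
out = lms_filter(ref, obs)
print(sum(abs(e) for e in out[-50:]) / 50)  # residual echo shrinks towards 0
```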

As promised, I’m now going to jump forward a few years in the history of machine learning.

History of machine learning - 1900-1980

Landing in the 1980s!

The 1980s may not have been great for hairstyles but they were good for neural networks.

The history of machine learning is about to get smarter!

During the 1980s we see several big steps forward in the development of neural networks that would go on to form a basis for deep learning.

A couple of highlights from this time are:

  • Recurrent Neural Networks: Proposed in 1981 by John Hopfield
  • NetTalk: A program that taught itself to pronounce new words, learning in much the same way a baby learns to speak. It arrived in 1985 as the brainchild of Terrence J. Sejnowski
  • Backpropagation: This was first applied in 1986
  • Reinforcement Learning: Q-learning was invented in 1989, making reinforcement learning far more practical (the core update rule is shown just after this list)
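To show why Q-learning made reinforcement learning so much more practical, the whole algorithm boils down to one tabular update rule (standard textbook form; the symbols below are the usual ones, not anything specific to this article):

```latex
Q(s, a) \leftarrow Q(s, a) + \alpha \left[ r + \gamma \max_{a'} Q(s', a') - Q(s, a) \right]
```

Here s is the current state, a the action taken, r the reward observed, s′ the resulting state, α the learning rate, and γ the discount factor. Each update nudges the table towards the best value reachable from wherever the action lands you, with no model of the environment required.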

Before I leave the ’80s, I should flag one final development.

Undoubtedly, a major contribution to the history of machine learning was the commercialization of ML for the personal computer.

In 1989, Evolver was released by Axcelis, Inc. This software program allowed machine learning algorithms to be used on commercial computers.

Now it’s time to dive into the ’90s.

There was Britney and there were algorithms

It wasn’t all Britney Spears and bootcut jeans.

While I was happily singing along to classics such as Hit Me Baby One More Time, some clever folks were making big steps forward with algorithms.

During the 1990s many of the algorithms that form the basis of practical machine learning courses everywhere were developed.

These include, but are not limited to, the following (there’s a quick code sketch of a few of them after this list):

  • Boosting algorithms, which combine many weak learners into one strong model to reduce bias
  • The Random Forest algorithm
  • Support Vector Machines (SVMs)
  • The MNIST database for handwritten digit recognition
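For a sense of how these ’90s workhorses look in modern tooling, here is a quick sketch using scikit-learn’s small built-in digits dataset (an MNIST-style set of handwritten digits). The model settings are just library defaults, not tuned choices:

```python
# Boosting, random forests and SVMs side by side on a small digits dataset.
from sklearn.datasets import load_digits
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "boosting": GradientBoostingClassifier(),
    "random forest": RandomForestClassifier(),
    "support vector machine": SVC(),
}
for name, model in models.items():
    model.fit(X_train, y_train)                 # train on the held-in split
    print(name, model.score(X_test, y_test))    # accuracy on the held-out split
```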

Here we go – Big steps forward in machine learning tech

Ok, now we’re starting to move forward at speed.

Deep Learning becomes a big player in the history of machine learning as a whole.

Deep learning really came to the forefront of machine learning in 2006 when it was rebranded by Geoffrey Hinton.

One of the reasons deep learning was able to make such rapid advances from the noughties through to today was the commercialization of GPGPUs.

General-purpose graphics processing units (GPGPUs) allow many similar calculations to be run in parallel, making the overall computation far faster. This parallelism suits the matrix operations used in machine learning and deep learning, sharply reducing the training time of a neural network.
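To see why this matters, note that a dense neural-network layer is just one big matrix multiply, i.e. thousands of independent multiply-adds that can all happen at once. The NumPy sketch below only illustrates the shape of that workload on a CPU; the layer sizes are arbitrary and no GPU is actually involved:

```python
# One dense layer expressed as a single matrix multiply -- the kind of
# workload a GPGPU spreads across thousands of cores in parallel.
import numpy as np

rng = np.random.default_rng(0)
batch = rng.standard_normal((256, 784))      # 256 input examples, 784 features each
weights = rng.standard_normal((784, 128))    # one dense layer: 784 -> 128 units

# A single call computes 256 x 128 = 32,768 independent dot products.
activations = np.maximum(batch @ weights, 0.0)   # linear layer + ReLU
print(activations.shape)                          # (256, 128)
```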

As the technology becomes a lot more mainstream, we also see today’s big players coming to market with AI-based products.

To give you some examples of A.I. in action:

  • Facebook facial tagging
  • DeepMind’s AlphaGo
  • Amazon Machine Learning Platform
  • Google Brain facial recognition
  • The Netflix Prize competition for recommender systems
  • IBM Watson
  • Self-driving cars
  • NLP assistants such as Amazon’s Alexa

There have also been numerous other developments through deep learning. These include applications in healthcare, finance, robotics, cybersecurity, and e-commerce.

History of machine learning - 1980s-now

So where do we go next in the history of machine learning?

We are on our way to developing the singularity and artificial general intelligence.

To date, AI, and specifically machine learning, has been limited to solving narrowly defined problems.

The development of more general A.I. that is able to perform multiple tasks and solve different types of problems using one system is the next frontier.

Sentient A.I., in particular, is seen as the step beyond even that: a system able to design and update itself without human interference.

There is much debate over when this will become reality, but if the history of machine learning development is anything to go by, it will happen.

Want to understand more about what this world of A.I. singularity might look like? Check out this article highlighting some films that propose different outlooks on our future.
