Let’s Demystify Machine Learning

John McDonald
4 min read · Aug 27, 2020

The hype machine is cranked up to eleven on the topic of machine learning (sometimes called artificial intelligence, though it’s not really intelligence and there’s nothing artificial about it). It’s about to empower the world or take it over, depending on what you read. But before you get swept away by the gust of hot air coming from the technology industry, it’s worth pausing to put things into perspective. Maybe just explaining it in plain terms will help.

Start with how humans learn. Shortly after the first caveman figured out how to make fire, the second caveman wanted to learn. He didn’t check out a book from the local library or take a three-credit college course. He watched the first caveman make fire, tried it himself, failed, was corrected, and tried again until he got it right. Fundamentally, this is how all humans have ever learned anything: by watching, trying, failing, correcting, and repeating.

Cavemen learning about fire. Fortunately there was a photographer present.

Think about it from a modern perspective. If you drop your phone and crack the screen, you probably go straight to YouTube and search for “how to replace an iPhone screen.” After watching the video, if it seems within your ability, you go to Amazon and order a replacement screen kit (though you should probably order two, because you are going to mess up one of them). When the box arrives, you go back to YouTube, watch the same video again, and try to match what the person on the screen is doing. If you succeed, you’ve done a very technical task that you never went to school or took a class for, and never expected to be doing when you got up that morning. That is how all humans have ever learned anything, and it accounts for the popularity of things like YouTube, which is mostly humans watching other humans do things and relaying their experience and lessons.

But curiously, this is not how we’ve been using computers. For most of computing’s history, we automated manual tasks: we collected all of the data (“big data”) into warehouses, lakes, and pools, then wrote programs to process it down into information, studying it with software we call “analytics” tools. The trouble is that analytics software doesn’t really analyze anything. It slices and dices data and displays it on a screen or in a report, leaving some human to figure out what it means.
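If “slicing and dicing” sounds abstract, here is a minimal sketch of what it usually amounts to, using the open-source pandas library and some made-up sales figures (neither the library choice nor the numbers come from this post). The computer aggregates and reshapes the data into a report; deciding what the report means is still entirely a human job.

```python
# A minimal sketch of what "analytics" typically amounts to: the computer
# aggregates and reshapes data, and a human reads the result and decides
# what it means. The sales figures here are made up for illustration.
import pandas as pd

sales = pd.DataFrame({
    "region": ["North", "North", "South", "South", "West"],
    "quarter": ["Q1", "Q2", "Q1", "Q2", "Q1"],
    "revenue": [120_000, 135_000, 98_000, 90_000, 143_000],
})

# Slice and dice: pivot revenue by region and quarter for a report.
report = sales.pivot_table(index="region", columns="quarter",
                           values="revenue", aggfunc="sum")
print(report)
# The software stops here; figuring out why South is shrinking is left to a human.
```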

Fortunately, that era is largely over. The new goal is harnessing knowledge from data rather than just processing it.

What we are doing now is teaching computers to learn the way we do. We send sets of data, whether pictures, text, or numbers, to very powerful machine learning software built into cloud platforms from companies like Google and IBM, and ask the machine to figure out what the patterns are and what the data means.

Of course, it gets things wrong at first, but then the task is to correct the model and try again. After multiple iterations, the model gets better and better, almost like a pixelated photograph becoming sharper as more data fills it in. Once trained, the model can take in huge amounts of data and respond rapidly with useful insight. The objective is a “human assist”: chewing through lots of data very quickly and advising human operators on insights gleaned from repeatedly applying a model shaped by the data itself.
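To make that loop concrete, here is a minimal sketch in Python using scikit-learn and its small built-in digits dataset, standing in for the cloud platforms mentioned above (the library, the dataset, and the five-pass loop are illustrative choices, not anything from this post). The shape of the process is the point: show the model the data, check its answers, correct it, and go around again.

```python
# A minimal sketch of the "try, fail, correct, repeat" loop using scikit-learn,
# not the cloud ML services mentioned above. Small digit images stand in for
# "sets of data"; each pass nudges the model's guesses closer to correct.
from sklearn.datasets import load_digits
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    *load_digits(return_X_y=True), test_size=0.3, random_state=0)

model = SGDClassifier(random_state=0)
classes = list(range(10))  # the ten possible digits

for epoch in range(1, 6):
    # Show the model the training examples (the "watching/trying" step) ...
    model.partial_fit(X_train, y_train, classes=classes)
    # ... then measure how often it is still wrong (the "failing/correcting" step).
    print(f"pass {epoch}: accuracy = {model.score(X_test, y_test):.2f}")
```

Each printed score typically creeps upward from one pass to the next, which is the sharpening-photograph effect described above.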

This idea of computers having cognitive power is relatively new in human history. Human cognitive power has increased somewhat since year zero as we live longer and get better schooling. But starting in about 1950, computer cognitive power began climbing off the zero mark, and by 2015 the total was estimated to be roughly equal to the brain of one mouse (not the one attached to your computer, the rodent). An interesting moment is predicted for about 2023, when total computer cognitive power will roughly equal one human brain. Just beyond that, depending on who you read, lies a moment around 2045 ominously called the “singularity,” when total computing cognitive power will roughly equal ALL human brains. Of course, that is a pure extrapolation that doesn’t account for edge cases like electronically assisted human brains.

If you plotted the rise of computing cognitive power on a time scale starting at the beginning of civilization, the climb is so fast that it appears to be a vertical wall. We are only beginning to figure out its impact on humanity, and that is what scares so many people about the topic. But it’s important to understand what machine learning is before you can understand what it isn’t, which will be the topic of my next post.


John McDonald

I am a Managing Entrepreneur at NEXT Studios, the venture studio by entrepreneurs, for entrepreneurs, with entrepreneurs.