Algorithms are a hot topic, especially among those concerned about how information is spread. Social networks like Instagram and TikTok famously use “algorithms” to decide what content to show you. Facial recognition algorithms famously fail, especially if you aren’t light skinned. But … What is an algorithm anyway? Is it really at fault for all of this?
Featuring Tom Merritt.
Please SUBSCRIBE HERE.
A special thanks to all our supporters–without you, none of this would be possible.
Thanks to Kevin MacLeod of Incompetech.com for the theme music.
Thanks to Garrett Weinzierl for the logo!
Thanks to our mods, Kylde, Jack_Shid, KAPT_Kipper, and scottierowland on the subreddit
Send us email to email@example.com
Why is the algorithm showing me Pat Benatar videos?
I can’t search for nuclear war or the algorithm will know.
Well those algorithms are biased. You know that!
Algorithms are a hot topic, especially among those concerned about how information is spread. Social networks like Instagram and TikTok famously use “algorithms” to decide what content to show you. Facial recognition algorithms famously fail, especially if you aren’t light skinned.
But … What is an algorithm anyway? Is it really at fault for all of this? Including the Pat Benatar videos?
Let’s help you Know a Little More, about Algorithms.
Let me tell you a story.
A story has a beginning. A middle. And an end.
Beginning: Mathematicians in ninth-century Baghdad write multiple books laying the foundation of modern mathematics.
Middle: Hundreds of years later, one of those authors’ names becomes synonymous with mathematical operations, and is rendered in Latin as the word that becomes algorithm.
End: In the twentieth century, algorithms are defined, refined and by the early 21st century, combined, to make systems that can draw pictures, write you a resume and replicate your voice and even video.
That’s the quick story of the algorithm. And algorithms themselves, like stories, have a beginning: the input. A middle: the operations performed on the input. And an end: the output. They’re simple, really. But just like a story can be much more complex than the one I just told, an algorithm can be much more complex as well. And when you combine multiple complex algorithms, you get a whole lot more.
OK OK but what does all that mean right? To explain we may need to start in the middle of the story. With how an algorithm is defined. You know like Captain America: The First Avenger.
A great example of an algorithm is a recipe.
Step one. Boil water.
Step two. Place egg in water.
Step three. Wait 8 minutes.
Step four. Remove egg from water and let cool.
Step five. Peel egg and eat it.
That’s an algorithm for a hard-boiled egg.
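If you wrote that recipe as code, it would be a purely linear algorithm: an input, a fixed sequence of steps, an output, and a guaranteed end. A minimal Python sketch (the function and step names are just illustrations, not any real API):

```python
def hard_boil_egg():
    """A linear algorithm: fixed steps, no branching, guaranteed to end."""
    steps = [
        "boil water",
        "place egg in water",
        "wait 8 minutes",
        "remove egg from water and let cool",
        "peel egg and eat it",
    ]
    for step in steps:
        print(step)
    return "hard-boiled egg"  # input: raw egg; output: breakfast

hard_boil_egg()
```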
But of course algorithms aren’t all linear steps. More complex algorithms can use what are called conditionals – you know, like an If Then situation. This can let it look like it’s making inferences and executing reasoning.
You use conditional algorithms all the time.
Let’s say you’re on vacation and want to go to a grocery store you’ve never been to before and find a can of peas. Because you love peas in this example, OK? You need peas. So even though you’ve never been to this grocery store, you know how grocery stores generally work. The grocery store is kind of your operating system. So you employ your personal pea-finding algorithm.
First you look at aisle signs. If an aisle sign says “canned food” you proceed down the aisle and look for cans of peas. Else check next aisle.
This algorithm saves you time. Instead of wandering up and down each aisle looking for canned peas, you skim along the edge of the aisles and quickly get to the right shelf.
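Here’s a minimal Python sketch of that pea-finding algorithm, with the store invented for the example as a little dictionary of aisle signs:

```python
def find_peas(store):
    """Conditional algorithm: IF the sign says 'canned food',
    walk down that aisle and look for peas; ELSE check the next aisle."""
    for sign, shelves in store.items():
        if sign == "canned food":
            if "peas" in shelves:
                return "peas found in the canned food aisle"
    return "no peas today"

store = {
    "bakery": ["bread", "bagels"],
    "canned food": ["corn", "peas", "beans"],
    "frozen": ["pizza", "waffles"],
}
print(find_peas(store))  # → peas found in the canned food aisle
```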
Now imagine how a more powerful algorithm can help a computer sort through billions of data points faster than humans could.
There are multiple types of algorithms. A brute force algorithm just tries all possible solutions until it hits the right one. That would be just walking up and down every aisle. It gets used sometimes because computers can do things fast. A “brute force” attack to find a password is using that kind of algorithm.
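A toy Python sketch of brute force, here guessing a made-up three-digit PIN by trying every possibility in order:

```python
from itertools import product

def brute_force_pin(check):
    """Try all 1,000 possible 3-digit PINs until one passes the check."""
    for digits in product("0123456789", repeat=3):
        guess = "".join(digits)
        if check(guess):
            return guess
    return None  # exhausted every possibility

secret = "417"  # a made-up secret for the example
print(brute_force_pin(lambda guess: guess == secret))  # → 417
```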
Another one is called divide and conquer. You break the problem up into smaller subproblems, then combine the solutions to solve the original problem. Processors use this one to great advantage. For our grocery store example, this would be good if you need canned peas and fresh carrots and there are two of you on vacation. One handles the carrot search and the other handles the pea search. You combine at the register to check out. Output = peas and carrots.
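The two-shopper trip sketched in Python. Each shopper solves a subproblem, and the results are combined at the register; the store layout is invented for the example:

```python
def find_item(aisles, item):
    """One shopper's subproblem: scan the aisle signs for a single item."""
    for sign, shelves in aisles:
        if item in shelves:
            return (item, sign)
    return (item, "not found")

def shop(aisles, shopping_list):
    """Divide and conquer: split the list into subproblems,
    solve each one, then combine the results at the register."""
    return [find_item(aisles, item) for item in shopping_list]

aisles = [("canned food", ["corn", "peas"]), ("produce", ["carrots", "kale"])]
print(shop(aisles, ["peas", "carrots"]))
# → [('peas', 'canned food'), ('carrots', 'produce')]
```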
There are others that don’t fit so nicely into the grocery store format. An encryption algorithm transforms data in a way that only a person who knows how it was transformed can easily revert it back to the original. I guess that means the grocery store is locked?
And there’s also a type called randomized algorithms, where you use a random number at least once along the way to solving a problem. One example of this is the Las Vegas algorithm, which will always return a correct result; you just don’t know how long it will take. If you’ve ever used QuickSort, you’ve used this kind of algorithm. For the grocery store it would be like walking right to a random aisle and looking up at the signs, hoping you happened to hit the canned foods on the first try.
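A sketch of randomized QuickSort in Python. The random pivot is what makes it a Las Vegas algorithm: the output is always correctly sorted, but the running time varies with the pivots you happen to draw:

```python
import random

def quicksort(items):
    """Las Vegas algorithm: always correct, run time varies."""
    if len(items) <= 1:
        return items
    pivot = random.choice(items)  # the random step
    less = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    greater = [x for x in items if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

print(quicksort([33, 4, 17, 2, 29]))  # → [2, 4, 17, 29, 33]
```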
That might sound like it’s going to take as long or longer to work.
But here’s a way to think about it. You have a line of 100 people and you’re looking for Pat. You can wait as everyone in line walks by you one after the other, and eventually you’ll find Pat. But that may take a long time, unless Pat happens to be near the front. Or you can call out line positions at random. “Number 5!” “Number 64!” And so on until you find Pat. It may not sound guaranteed that this will be quicker, and it isn’t guaranteed, but it does end up being faster in a lot of more complex situations, especially when you can repeat it many times very fast, like a computer does. This is often used in search.
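That random call-out, sketched in Python. Pat’s position and the line itself are made up for the example; the point is that the answer is always found, and only the number of call-outs varies:

```python
import random

def find_by_random_callout(line, target):
    """Call out random, not-yet-tried positions until the target answers."""
    untried = list(range(len(line)))
    random.shuffle(untried)  # equivalent to calling positions at random
    for calls, pos in enumerate(untried, start=1):
        if line[pos] == target:
            return pos, calls
    return None, len(line)

line = ["someone else"] * 100
line[63] = "Pat"
pos, calls = find_by_random_callout(line, "Pat")
print(f"found Pat at position {pos} after {calls} call-outs")
```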
One thing to keep in mind about algorithms is they are meant to be precise. An algorithm is not the same as a heuristic. A heuristic is a shortcut that may or may not give you the exactly correct result. That’s OK when an approximation is good enough.
So let’s say there’s a person walking toward you and you hold out your thumb. If their head appears to be about half the size of your thumbnail, you guess they’re about 200 meters away. That’s not accurate of course, but it may be close enough for what you need to know. And it’s an example of a heuristic.
An algorithm on the other hand would want to use more information to come up with a precise measurement. So perhaps taking into account known distances of objects near the walking person, along with any shadows cast etc.
But the key thing in all algorithms is they have an initial input, a set of instructions and an end. An algorithm has to terminate after a finite amount of time. It also should give the same output if given the same input.
All right. We have Cap out of the ice. Let’s head back to the 1940s. Or for you non-Marvel fans, let’s go back to the beginning of algorithms.
We start in Mesopotamia, around 2500 BCE, the region that would later give us Babylon and the Code of Hammurabi. The most advanced mathematicians in the world have discovered something that will make grade school children wail and gnash teeth millennia into the future. Division! One of the earliest examples of an algorithm.
On to India. 1,000 years BCE. Mathematics is developed that is predominantly what we would now call algorithmic.
Forward 1800 years to the 800s and the Islamic Empire. The most advanced mathematicians the world had yet seen are pushing the discipline close to its modern form. They take Indian number symbols and develop them into the Arabic numerals we use today. One man involved in this is called Al-Kindi. And were this a more accurate world, algorithms might be called Alkindis. He wrote a book called Manuscript on Deciphering Cryptographic Messages, which is credited with creating cryptanalysis – breaking cryptography. He developed frequency analysis, where you look at how often certain symbols occur to help deduce what they may mean. All of these are examples of algorithms.
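Al-Kindi’s frequency analysis is easy to sketch in Python. The ciphertext below is invented for the example: a Caesar-style cipher shifting every letter forward by one, so a plaintext “e” becomes “f”:

```python
from collections import Counter

def frequency_analysis(ciphertext, top=3):
    """Count how often each letter occurs. In English text the most
    common letters tend to be e, t, a - a foothold for the codebreaker."""
    letters = [c for c in ciphertext.lower() if c.isalpha()]
    return Counter(letters).most_common(top)

ciphertext = "uif tfdsfu nffujoh jt bu njeojhiu"
print(frequency_analysis(ciphertext))
# 'f' and 'u' top the count, hinting they stand in for common letters like 'e' and 't'
```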
However, one of his contemporaries, while not developing as much algorithmic math, would leave a stamp on the discipline still felt to this moment. He was called Muḥammad ibn Mūsā al-Khwārizmī, most often referred to as al-Khwarizmi, which meant he was from Khwarazm, a Persian region that is now part of Uzbekistan.
He worked in the Abbasid caliphate’s libraries, including the Grand Library of Baghdad, and wrote a lot of important mathematical treatises. Even way back then, retro things were cool. So 300 years later, al-Khwarizmi’s writing becomes all the rage in Europe. His book on algebra was a particularly huge hit. Those retro math enthusiasts couldn’t get enough of his work, but not all of them wanted to learn to read Arabic, so they translated the books into Latin. And when they translated al-Khwarizmi’s name, they Latinized it to Algorizmi. You can see where this is going. Over the centuries his name was used to mean all kinds of related things, like the decimal number system, and by the 1600s it was occasionally being spelled algorithm, and by the late 1800s it was being used in its current sense.
There are innumerable strands of history that led to the modern algorithm but I’m going to use a heuristic to narrow in on one of them.
Gottfried Leibniz, the guy who invented calculus at the same time as Isaac Newton, imagined a machine that could manipulate symbols in order to determine the truth of mathematical statements. You know, a computer. He also realized you’d need some kind of formal language for the symbols in order for it to work. You know, programming. His musings kicked around in mathematical communities for a couple of centuries.
Then in 1928 David Hilbert picked up Leibniz’s torch. Hilbert had famously posed 23 problems for mathematicians back in 1900. Now he posed the Entscheidungsproblem, a “decision problem” that Leibniz would have recognized. It asked for an algorithm that could determine whether any given mathematical statement was universally valid.
And then we have a relay race that ends in defining the algorithm.
In 1931, Kurt Gödel demonstrated that certain math questions can be neither proved nor disproved. In 1935, Alonzo Church picks up the baton from Gödel and in the process ends up defining what an algorithm should be – something called “effective calculability.” And taking us over the finish line, in 1936 Alan Turing expands the proof and defines the Turing machine, the theoretical model of how algorithms compute. Turing later put algorithms to practical use in codebreaking machines like the Bombe, which cracked Germany’s Enigma cipher during World War II.
Thanks to that relay team we have a very good understanding of algorithms. And while there are all kinds of variations, they fall into two big buckets.
An algorithm with a yes or no answer is called a decision procedure because the question is decidable. An algorithm that goes beyond yes or no is called a computation procedure because it can compute the answer.
Is X number prime? That is decidable. It either is or it isn’t.
Long division? That is computable. There’s no yes or no in division, just your quotient and your remainder. And your agony, depending.
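The two buckets side by side in Python: a decision procedure that answers yes or no, and a computation procedure that produces values:

```python
def is_prime(n):
    """Decision procedure: the output is just yes or no."""
    if n < 2:
        return False
    return all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

def long_division(dividend, divisor):
    """Computation procedure: the output is computed values,
    a quotient and a remainder (agony not included)."""
    return dividend // divisor, dividend % divisor

print(is_prime(17))          # → True
print(long_division(17, 5))  # → (3, 2)
```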
So we found our protagonist, the algorithm, at the beginning of this story. We flashed back to its origins in the ancient empires. Now it’s time for our metaphorical Cap to wake up and start asking Nick Fury questions.
For our story the question is:
So is AI an algorithm?
Sort of. Or rather, some of the kinds of things that are called AI use algorithms.
For example, machine learning uses algorithms to identify patterns in data and group them into subsets. And other kinds of AI use machine learning. You’ll often hear scientists describe AI as using unstructured data, while machine learning needs structured data. Then you’ll hear other scientists yell at those scientists. Don’t get too caught up in that. Machine learning uses algorithms to be able to solve a problem it’s been trained to solve. And more advanced AI sometimes uses machine learning to create more complex behavior – like driving a car.
If you’re still confused, think of it this way. There isn’t a “Facebook algorithm”; there are hundreds of them.
Credit card fraud detection is an example of this. A set of algorithms processes the date, location and amount of transactions and uses math to determine a range in which variability is normal. If a transaction falls outside that norm, it generates an output that can be used to stop the transaction and maybe send you a text message. [[“No, I did not buy 200 pounds of Red Vines. I am a confirmed Twizzlers person!”]]
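A stripped-down Python sketch of that idea, assuming we look only at transaction amounts (real systems weigh date and location too, and all the numbers here are invented):

```python
from statistics import mean, stdev

def fraud_check(history, new_amount, threshold=3.0):
    """One of many algorithms in the chain: flag a transaction whose amount
    falls more than `threshold` standard deviations outside the norm."""
    mu, sigma = mean(history), stdev(history)
    return abs(new_amount - mu) > threshold * sigma

history = [42.0, 38.5, 51.0, 44.2, 39.9, 47.3]  # made-up past transactions
print(fraud_check(history, 45.0))   # a normal purchase → False
print(fraud_check(history, 950.0))  # 200 pounds of Red Vines → True
```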
As we’ve indicated, algorithms themselves are deterministic. They each take inputs, perform operations on them and produce outputs. But when you chain them together and use them in the right way, they can exhibit seemingly complex behavior.
For example, let’s consider a machine learning model that recognizes images of cats and dogs. This model consists of several algorithms, including image preprocessing, feature extraction, and classification. Each algorithm takes the output of the previous algorithm and transforms it in some way. The end result is a model that can accurately classify images of cats and dogs.
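A toy version of that chain in Python. Nothing here is a real model (the “feature” and the classification rule are invented stand-ins), but it shows how each algorithm’s output becomes the next one’s input:

```python
def preprocess(image):
    """Stage 1: normalize raw pixel values to the range [0, 1]."""
    return [p / 255 for p in image]

def extract_features(pixels):
    """Stage 2: boil the image down to summary numbers.
    A real model learns its features; this toy just averages."""
    return {"brightness": sum(pixels) / len(pixels)}

def classify(features):
    """Stage 3: a made-up rule standing in for a trained classifier."""
    return "cat" if features["brightness"] > 0.5 else "dog"

def pipeline(image):
    """Each algorithm's output is the next algorithm's input."""
    return classify(extract_features(preprocess(image)))

print(pipeline([200, 220, 240, 210]))  # bright pixels → cat
print(pipeline([10, 20, 30, 40]))      # dark pixels → dog
```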
Also, that last paragraph about cats and dogs was written by ChatGPT when I gave it the paragraph before. Just an example of the kind of complex behavior we’re talking about.
Anyway, back to human written stuff– a key part of the complexity of AI is that the outputs of the algorithms can be used to change the algorithms. This is how AI systems “learn.” It’s this ability to modify some algorithms, and sometimes create new ones, that gives AI the appearance of “intelligence” in a way that an algorithm on its own does not have.
This is an important difference when talking about algorithms vs. AI, even though AI is made up of algorithms. An algorithm can be fully transparent. It has defined steps you can look at (if the programmer lets you) to see how it makes its decisions.
Many types of AI, though, are a black box. The systems modify themselves in such a complex way that it isn’t always clear why they produced the output they did.
A great example of this is a kind of AI called deep learning. It uses connections of nodes called a neural network. Each node is essentially an algorithm. It takes input and passes output to the next node. As data passes from node to node, the nodes can be altered as a result of each output. This is similar to how our neurons strengthen connections as we think. Hence the name neural network. Deep learning alters these nodes in a way that makes it very good at recognizing patterns. But we only know that it’s doing this and that it works. We don’t see the output between the nodes. We only see the conclusion.
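A tiny Python sketch of that structure. Each node is a little algorithm (a weighted sum squashed through a sigmoid), and nodes pass their outputs forward. The weights here are made-up numbers; “learning” would mean nudging them based on the outputs:

```python
import math

def node(inputs, weights, bias):
    """One node: weighted sum of inputs, squashed to between 0 and 1."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid activation

def tiny_network(x):
    """Two hidden nodes feed one output node. The weights are fixed,
    invented values; training would adjust them."""
    h1 = node(x, [0.5, -0.2], 0.1)
    h2 = node(x, [-0.3, 0.8], 0.0)
    return node([h1, h2], [1.2, -0.7], 0.05)

print(round(tiny_network([1.0, 0.5]), 3))
```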
There is an effort to build something called explainable AI, that would let us see how the decisions inside the system are made. This is technically complex and can slow down or limit the effectiveness of the systems. It may also open the systems to manipulation by bad actors. But if it gets good enough, it could help us understand AI better.
In the end though, none of the shortcomings of AI are the fault of the algorithms themselves, which as we all know by now– say it with me– are finite with defined inputs and outputs. Just like our neurons are finite cells with defined synapses that send and receive signals from other neurons. Connect enough neurons together and you get a brain, possibly even a human brain, or a dolphin’s. Something that can think, anyway. Connect enough algorithms together and you get AI. We have a long way to go, though, before that can mimic thinking.
And that’s our story. Until the sequel.
In other words, I hope you know a little more about Algorithms.