Artificial intelligence carries both benefits and dangers
Artificial intelligence: good or evil?
While the phrase "artificial intelligence" is undeniably overused, the technology behind it is doing more than ever, for good and for ill. It is used in warfare and in healthcare; it helps people write music and books; it evaluates your creditworthiness and improves the photos you take with your phone. In short, it makes decisions that affect your life whether you like it or not.
It can be hard to square that with the hype and hyperbole that tech companies and advertisers wrap around AI. Take Oral-B's Genius X toothbrush, one of many devices unveiled at this year's CES touting supposed AI capabilities. On closer inspection, it turns out the brush simply gives you feedback on whether you are brushing your teeth for the right amount of time and in the right places. There are some clever sensors that can tell where the brush is in your mouth, but calling that artificial intelligence is bullshit, nothing more.
The hype breeds misunderstanding. The press can inflate and exaggerate any piece of research by slapping a picture of the Terminator onto any vague AI story. This often leads to confusion about what artificial intelligence actually is. It can be a tricky topic for non-specialists, and people often mistakenly equate modern AI with the version they know best: the sci-fi vision of a conscious computer many times smarter than a human. Experts call that particular image artificial general intelligence, and if we ever manage to create something like it, it is a long way off. Until then, exaggerating the abilities or intelligence of today's AI systems helps no one.
It is much better to talk about "machine learning" rather than artificial intelligence. Machine learning is a subfield of artificial intelligence that covers almost all of the methods having the greatest impact on the world today (including what is called deep learning). The phrase has none of the mystique of "AI", but it is far more useful for explaining what the technology actually does.
So how does machine learning work? Over the past few years you and I have had the chance to read dozens of explanations, and the most useful distinction I have found is right there in the name: machine learning is anything that allows computers to learn on their own. What that actually means, though, is a much bigger question.
Artificial intelligence problems
Let's start with a problem. Say you want to create a program that can recognize cats. You could write it the old-fashioned way, by programming explicit rules like "cats have pointy ears" and "cats are fluffy". But what does the program do when you show it a picture of a tiger? Programming every rule by hand is time-consuming, and you would have to define many slippery concepts such as fluffiness and stripiness. Better to let the machine teach itself. So you give it a huge collection of cat pictures, and it works through them looking for its own patterns in what it sees. At first it connects the dots mostly at random, but you test it over and over, keeping the best versions. Over time it gets quite good at telling what is and is not a cat.
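To make that loop concrete, here is a minimal sketch in Python with scikit-learn. It is not how a real cat detector is built: scikit-learn's built-in digits dataset stands in for the pile of cat photos, and a small neural network stands in for a production model. But the workflow is the one described above: show the system labelled examples, let it fit its own parameters, then check it on images it has never seen.

```python
# A minimal sketch of "give it examples, let it find the patterns".
# The digits dataset is a stand-in for a photo collection; a small
# neural network is a stand-in for a real image model.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# "Cat pictures": here, 8x8 images of handwritten digits with known labels.
images, labels = load_digits(return_X_y=True)

# Hold some examples back so the model can be checked on data it never trained on.
X_train, X_test, y_train, y_test = train_test_split(
    images, labels, test_size=0.25, random_state=0
)

# Nobody writes rules like "cats have pointy ears" here; the network adjusts
# its own internal weights to separate the classes in the training data.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
model.fit(X_train, y_train)

# "Test it over and over": measure how often it is right on unseen examples.
print("accuracy on unseen images:", accuracy_score(y_test, model.predict(X_test)))
```

Swap in millions of photos and a much deeper network and you have, in outline, how modern image recognition systems are trained.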
Soon, artificial intelligence will be all around us.
So far, so predictable. In fact, you have probably read a similar explanation before, and I apologize for that. What matters more is the next question: what are the side effects of training a decision-making system this way?
The biggest advantage of this approach is the most obvious one: you never have to explicitly program the system. You will certainly work hard on improving how it processes data as it finds smarter ways to extract information, but you do not tell it what to look for. That means it can find patterns people might miss or would never think to look for. And since all the program needs is data, ones and zeros, it can be trained to do a huge variety of tasks, because the world is literally awash in data. With the hammer of machine learning in your hand, the digital world is full of nails ready to be struck.
But now consider the downsides. If you are not explicitly teaching the computer, how do you know how it is making its decisions? Machine learning systems cannot explain their reasoning, which means your algorithm might perform well for the wrong reasons. Likewise, since all the computer knows is the data you feed it, it can pick up biases, or it may only be good at narrow tasks that resemble the data it has seen before. It lacks the common sense you would expect of a person. You could build the world's best cat-recognition software, and it would still never tell you that kittens shouldn't ride motorcycles.
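To see what "performing well for the wrong reasons" can look like, here is a toy sketch with entirely synthetic, made-up data (the features and numbers are illustrative, not from any real system). During training, a spurious feature, think of it as background brightness, happens to line up perfectly with the label, so the model leans on it. On new data where that coincidence no longer holds, accuracy collapses, even though the model looked excellent on data like its training set.

```python
# A toy illustration of a model learning a shortcut instead of the real concept.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, spurious_matches_label):
    labels = rng.integers(0, 2, size=n)                # 0 = "not cat", 1 = "cat"
    real_signal = labels + rng.normal(0, 1.5, size=n)  # weak but genuinely useful feature
    background = labels if spurious_matches_label else rng.integers(0, 2, size=n)
    features = np.column_stack([real_signal, background + rng.normal(0, 0.1, size=n)])
    return features, labels

# Train where the "background" feature gives the answer away...
X_train, y_train = make_data(2000, spurious_matches_label=True)
model = LogisticRegression().fit(X_train, y_train)

# ...then test both where the shortcut still holds and where it breaks.
X_same, y_same = make_data(2000, spurious_matches_label=True)
X_shifted, y_shifted = make_data(2000, spurious_matches_label=False)
print("accuracy when the shortcut still works:", model.score(X_same, y_same))
print("accuracy when the shortcut breaks:     ", model.score(X_shifted, y_shifted))
```

Running the sketch typically shows near-perfect accuracy while the coincidence holds and something much closer to chance once it breaks: the model never learned what a cat was, only the shortcut.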
Teaching computers to learn for themselves is a brilliant trick. And like all tricks, it comes with a catch. AI systems have intelligence, if you want to call it that. But it is not an organic intelligence, and it does not play by the same rules humans do. You might as well ask: how smart is a book? What expertise is encoded in a frying pan?
So where are we now with artificial intelligence? After years of headlines announcing the next big breakthrough (which still has not arrived, though the headlines keep coming), some experts conclude that we have reached a plateau of sorts. But that has not halted progress. On the research side, there is an enormous amount left to explore with the knowledge we already have, and on the product side we have seen only the tip of the algorithmic iceberg.
Kai-Fu Lee, a venture capitalist and former artificial intelligence researcher, describes the current moment as the "era of adoption", when the technology begins to "spill out of the laboratory into the world". Benedict Evans compares machine learning to relational databases, a technology that made fortunes in the 1990s and transformed entire industries, yet is so mundane it would bore you if your head were full of visions of cinematic AI. We are now at the stage where AI is becoming normal and habitual. Very soon, machine learning will be part of everyone's life, and we will stop paying attention to it.
But so far this has not happened.
For now, artificial intelligence - machine learning - is still something new that often goes unexplained or insufficiently examined. But in the future it will become so familiar and mundane that you will stop noticing it.