What is artificial intelligence (AI)? The concept in simple words, and whether it really exists

For Architects of Intelligence: The Truth About AI from the People Building It, writer and futurist Martin Ford interviewed 23 of AI's most prominent researchers, including DeepMind CEO Demis Hassabis, Google AI head Jeff Dean, and Fei-Fei Li, director of Stanford's AI Lab. Ford asked each of them in what year the probability of creating a strong AI would reach at least 50%.

Of the 23 people, 18 answered, and only two of them agreed to publish their predictions under their own names. Interestingly, those two gave the most extreme answers: Ray Kurzweil, futurist and director of engineering at Google, named 2029, and Rodney Brooks, roboticist and co-founder of iRobot, named 2200. The rest of the guesses fell between these two poles; the average was 2099, about 80 years from now.

Ford says experts have begun to give more distant dates: in surveys of past years, they said that strong AI could appear in about 30 years.

“There is probably some correlation between how cocky or optimistic you are and how young you are,” the writer added, noting that several of his interviewees were in their 70s and had lived through the ups and downs of AI. “After working on this problem for decades, you may become a little more pessimistic,” he says.

Ford also pointed out that experts hold different opinions about how general-purpose AI will emerge: some believe that current technology is sufficient, while others strongly disagree.

Some researchers argue that most of the tools are already in place and that all it takes now is time and effort. Their opponents are convinced that many fundamental discoveries are still missing. According to Ford, scientists whose work has focused on deep learning tend to think that future progress will be made with neural networks, the workhorse of modern AI. Those with backgrounds in other areas of AI believe that building a strong version would require additional techniques, such as symbolic logic.

“Some people in the deep learning camp are very dismissive of the idea of directly developing something like common sense in AI. They think it's stupid. One of them said it was like trying to shove pieces of information right into the brain,” says Ford.

All interviewees noted the limitations of current AI systems and the key skills they have yet to master, including transfer learning, in which knowledge from one area is applied to another, and unsupervised learning, in which systems learn without human intervention. The overwhelming majority of modern machine-learning methods rely on human-labeled data, which is a major barrier to their development.

Interviewees also emphasized how difficult it is to make predictions in a field like AI, where key discoveries do not come into full play until decades after they are made.

Stuart Russell, a professor at the University of California at Berkeley and author of one of the seminal textbooks on AI, pointed out that the technologies for building strong AI "have nothing to do with big data or more powerful machines."

“I always tell a story from nuclear physics. The point of view expressed by Ernest Rutherford on September 11, 1933, was that energy could not be extracted from atoms. The next morning, however, Leo Szilard read Rutherford's speech, got angry, and invented the neutron-mediated nuclear chain reaction! Rutherford's prediction was thus disproved after about 16 hours. In the same way, it makes absolutely no sense to make precise predictions in AI,” Russell said.

The researchers also disagreed on the potential dangers of AI. Nick Bostrom, Oxford philosopher, author of Superintelligence: Paths, Dangers, Strategies, and a favorite of Elon Musk, argues that AI is a greater threat to humanity than climate change. He and his supporters believe that one of the biggest challenges in the field is teaching AI human values.

“It's not that the AI will hate us for enslaving it, or that a spark of consciousness will suddenly arise and it will rebel. Rather, it will very diligently pursue a goal that differs from our true intention,” Bostrom said.

The majority of respondents said that the threat posed by AI is an extremely abstract issue compared to problems such as the economic downturn and the use of advanced technologies in war. Barbara Grosz, an AI professor at Harvard and a major contributor to the field of language processing, said the ethical issues of strong AI are mostly "distracting."

“We have a number of ethical issues with existing AI. I don't think we should be distracted from them by frightening futuristic scenarios,” she said.

According to Ford, such disputes may be the most important result of his survey: they show that in a field as complex as artificial intelligence there are no easy answers. Even the most eminent scientists cannot reach a consensus on the fundamental problems of the field.

Artificial intelligence and machine learning technologies have ceased to be science fiction and have already become part of our lives. The main drivers of their development are big business: industry, retail, banking. We discussed the problems and specifics of implementing AI in Russia with Jet Infosystems.

Vladimir Molodykh, Head of the Directorate for Development and Implementation of Software at Jet Infosystems

What is the importance of artificial intelligence technologies today? What opportunities and in what areas does the development of AI open up for people?

We can talk about artificial intelligence as a philosophical and futurological concept from films about the future. But in real life it means some combination of machine learning methods: we take a large set of accumulated data, use fairly advanced mathematics to build a model on its basis, and teach that model to solve a particular problem.

That is, in real life, AI is applicable in areas where a large amount of data has accumulated. Data comes in different types. When you have, say, three parameters, one analyst can handle them. But when there are more than a thousand parameters, and some of them are unstructured, no analyst can hold them all in their head. The human mind, supported by the analytical tools of the previous technological order, cannot analyze all of this properly; it will simplify, picking three or four key parameters. This is where machine learning, the practical implementation of AI, turns out to be effective.
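To make the scale argument concrete, here is a minimal sketch in Python with scikit-learn; the data and every parameter are invented for illustration. A model is fit and scored on a thousand input parameters at once, a volume no analyst could weigh in their head:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for "accumulated enterprise data": 1,000 parameters,
# only some of which actually carry signal (all numbers are illustrative).
X, y = make_classification(n_samples=2000, n_features=1000,
                           n_informative=50, random_state=0)

# The model weighs all 1,000 inputs jointly, which a human analyst cannot.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=3).mean())  # cross-validated accuracy
```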

Why do people talk about AI only today, when at first glance the corresponding mathematics and computers existed twenty years ago?

If we talk about highly specialized tasks, machine learning was used there before. There are four key factors that let us say AI is a new global trend changing the world. The first is that there is more data: where production used to keep only paper records, machines now carry sensors that collect information. The second and third factors are the growth of computing power and the development of the corresponding areas of mathematics. The cost of solutions is falling: hardware has become so much cheaper that a production project no longer needs ten years to pay for itself. And lastly, business practice is gradually maturing, and specialists with project experience in the field are emerging.

Why is the process of introducing AI in Russia going slowly?

That's true. In Russia today people really do talk about AI more than they actually do it. The topic is fashionable, and in order to report on it "upstairs," many announce some kind of hackathon and post photos on Instagram. But no result that changes the business appears. From our experience, in most of the largest organizations in Russia AI is successfully implemented in only 5-7% of the cases that get talked about.

The fact is that this is a new type of project that people do not yet know how to run competently. It is a complex story: machine learning can solve an individual task fairly quickly, but it requires a significant restructuring of business processes. For example, you can build a model of individual recommendations for the customers of a retail chain, but if classic marketing runs alongside it, say promotions in the style of "10% off everything," those recommendations will not work. Or, say, we built a model predicting vehicle defects and breakdowns for a fleet, but while the model was being built, the fuel supplier changed. That is also data that affects the model, and the model falls apart. The organization needs to change so that its processes match the tasks machine learning can solve: build effective data exchange between departments, and so on. This is a set of changes you need to be able to make, and you need to be ready to fight for them.

We are still at the stage of market formation, and its novelty creates difficulties. In particular, we ran into a situation at a plant where people thought: "So it's not us who will fight defects, but some AI model, and we, it turns out, are not needed." Motivation suffered, and instead of helping, people turned to criticism. Before the management stands, on one side, some data specialist from Moscow, and on the other, a 45-year-old man who knows the production inside and out and says: "This will not work, and you don't understand anything." Understandably, in such a situation the director does not feel very confident.

Which industries in Russia most often use AI and why?

First, there are the innovative Internet companies. Take Yandex: there it is used practically everywhere. Among the large industries, retail comes first, along with banks and insurance companies. But I am absolutely sure that the biggest potential for AI is in industry: these are real production processes with real money and real opportunities for cost reduction. This sector still lags somewhat, because it is more conservative than retail, which its competitive environment forces to develop very quickly.


Wherever there is a lot of data. The effect will be especially great in industry. The criteria are the availability of data and something to optimize. These can be tasks of maintenance, repair, defect reduction, forecasting, or "digital twins" that enable analysis. It is more correct to look not at the industries but at the type of tasks. In one-off production, such as building fighter aircraft, most tasks will simply never have the necessary amount of data. In large-scale production, such as rolled steel or mass assembly of cars, AI will be effective.

Why implement AI in the enterprise?

An enterprise is usually in the business of making money, and this way it will earn more. Production processes are becoming more complicated step by step; there are more and more factors and nuances. Where the entire production process once fit in the head of one technologist, it now exceeds what one person, or even a group of people, can take into account. Accordingly, the increasingly complex production process requires new solutions, in particular AI and machine learning.

In addition, people with unique competencies are especially valuable in production. They can fall ill or retire, and the use of AI increases the resilience of the business to the human factor.

What misconceptions about AI do you come across most often?

Misconceptions come in two types. First: "Now I'll hire a data specialist, he'll build me a model, and in a couple of weeks everything will take off." That never happens. The other type: "It's all fiction and fairy tales, and we live a different life in which none of this is applicable." The truth is actually somewhere in the middle.

There is a widespread belief that over time, AI will be able to completely replace people in manufacturing and other industries. Do you share it?

On a horizon of three, five, ten years, specific areas will emerge in which humans will be replaced. Unmanned vehicles are being tested now, and they will surely replace drivers gradually, because they reduce accidents and do not have to be paid. In business it is happening right before our eyes: where a person once made decisions alone, he now does so with the help of machine learning or robotics. Where a hundred people used to work, there may now be one technologist and one data scientist, with machines doing the rest.

Routine tasks will be replaced first. People engaged in individual, creative work are safe for now. But in areas where thousands of employees in identical positions work by the rules, AI will replace them within three to five years.

How to start the process of implementing AI in the enterprise?

The first step is to find an experienced team that understands how to do it, because there are a lot of pitfalls, and they have to be dealt with. The second is to find tasks that can be solved for the benefit of the business, build competent, reasonable metrics, and understand how to convert the result into money. After all, quick wins matter too.

How do you decide whether to do it yourself or hire a contractor?

Any company should gradually move toward IT being not just a support function but something that helps earn money. That means the company needs to develop its IT competencies, and this is not a quick process. So at the initial stage it is rational to bring in experts and then decide together with them which areas the company should develop itself and where it should rely on partners.


How to choose a partner?

It is important to understand that AI is a complex topic. You need a team that not only understands analytical statistics, data science, and machine learning, but also has broader competencies: from project management to working with data, high-load systems, and data cleaning. Information security also matters, because new types of IT solutions bring new threats, while the old ones do not go away. So you need a team that can do all of this.

How do you see AI technology changing in the future?

In practical terms, the important thing for now is to master what already exists. As for the future, it seems to me that technology will move primarily toward reinforcement learning and self-learning, where the system trains itself on fresh data. For now this is more theory than practice: when it comes to teaching a computer to play Go, reinforcement learning works; in more complex practical problems, not so much.

Are there many platforms for practical discussion of AI problems in Russia?

There are plenty of forums, and everyone talks about AI. The topic is hyped, and it could turn out the way it did with nanotechnology. Seeing all this, we hold our own Russian Artificial Intelligence Forum (RAIF). This year it will be held for the third time, on October 22-23 in Skolkovo, as part of the Open Innovations international forum. There we talk precisely about practice: the problems in this area, the difficulties, and so on.

What is the main theme of this year's forum?

This year the forum's main topic is how to "push" an AI project through to commercial operation so that it delivers results. We also cover all the related topics: we have sections on big data, information security, and hardware. We bring together mathematicians, programmers, hardware specialists, and specialists in infrastructure and operations.

We talk about real practice rather than scientific problems, although there is a separate section for those too. Above all, we gather people who implement AI projects, speak from their own experience, and point out the pitfalls. And most importantly, we always consider tasks as a whole, in the context of a project, not as some kind of philosophy or science.


ICTV Facts explains why there is no artificial intelligence, how a coffee machine uses your personal data, and whether robots will someday replace human workers.

Hong Kong-based Hanson Robotics has been creating robots to help the elderly in nursing homes. Sophia's appearance was modeled on the actress Audrey Hepburn.


The media have grown accustomed to calling Sophia an artificial intelligence. No wonder: the humanoid robot communicates with us, expresses emotions, and makes witty jokes.

But Sophia is not artificial intelligence.

At the Kyiv International Economic Forum, ICTV Facts talked with Natalia Kosmina, an artificial intelligence researcher at the Massachusetts Institute of Technology.

She explained what Sophia is, why artificial intelligence does not exist, and how to learn to use personal data carefully.

Sophia's popularity stems from her resemblance to a real person: she is a humanoid-type robot. But in fact, this is just a set of algorithms that engineers designed:

This is just a certain set of algorithms. They can be built into a humanoid robot, they can be built into a robot that looks like a dog, or you can "shove" them into this bottle of water (laughs - Ed.). And it will be the same robot as Sophia, only it will look like a bottle of water.

There are no real emotions in Sophia. Everything she does is programmed by an algorithm, rather like a chatbot. Siri, you will agree, can also joke and talk to you.

And when Sophia jokes, it is nothing more than a system error. When she was asked how to overcome corruption in Ukraine, she froze. We took that as an answer: supposedly even artificial intelligence cannot solve the problem of corruption.

Such funny little incidents are an ordinary error. The system is simply unable to understand and process the information you requested, Natalia explains.

Sophia is nothing more than a set of algorithms. She is programmed to communicate with people, and she does it well, just as the robots of Boston Dynamics are programmed to move.


They do it better than anyone in the world: they do parkour, play football, and carry heavy things. But they cannot talk, just as Sophia cannot walk and overcome obstacles.

It is more correct to call such systems simply algorithms. Sophia is a very good group of algorithms brought together, in this case, in one robot. They allow the robot to move, talk, and respond.

Artificial intelligence does not exist

If Sophia is just a set of programmed tasks, then what is artificial intelligence? In films, we are used to seeing computer programs capable of taking over the world and destroying humanity.

The biggest drawback of artificial intelligence is that it doesn't exist. Sometimes it's more convenient to call a thing "artificial intelligence" than to explain what it is. What exist now are algorithms. They are very well developed for solving one or, at most, two problems. There is no artificial intelligence as such. We are still very far from it, says the researcher.

Fortunately or not, a robot smarter than a human does not exist. A human can perform a large number of tasks and learn quickly, Natalia explains.

Robots can perform only one or two tasks. Moreover, to learn, they need very large amounts of information and a lot of time. And that is the problem.

We are very far from robots that can think. For now, we need to deal with our own thinking. We have big problems there: the brain is very limited in resources.

Robots process your data

Privacy is becoming a luxury, and not everyone can afford it. To learn, a robot needs a lot of information, and it takes it from you. By the way, your coffee machine is also, in a sense, a robot. And it needs data too.

Natalia explains how it works:

I use data from gadgets in my systems. I don't need to go to the cloud; no internet connection is required. In some cases the system works differently: the data is transmitted via Bluetooth or Wi-Fi to a computer, all data processing takes place on the computer, and the result is transmitted to the system we want to control.

But did you know that machines are taking your data? The percentage of people who read the terms of use is very small. It's easier to just click the "agree" button.

Systems and applications do not always work transparently; sometimes users do not understand what they are giving away, and they may receive nothing in return, not even a service.

Even Mark Zuckerberg tapes over the camera and microphone on his computer. To keep your data from being misused, it is important to learn how to manage it properly.

Kosmina says that when working with people, her team adheres to a strict ethical protocol. If a person is not comfortable, they can withdraw from the study:

We state clearly what data will be used, whether we take pictures or shoot video, whether we collect biometric data, how many years the data will be stored, and who has access to it.

Unfortunately, not all systems have such clear protocols.

Robots vs people

Back in 2016, the Optellum system was developed in the UK to diagnose lung cancer in humans. To teach the robot, scientists collected the world's largest database of patients with tumors. The startup nevertheless ended up closing: the robot could not detect the disease as effectively as a young doctor.

In Japan, meanwhile, robots are already actively used in the service sector. A robot can easily check you into a hotel, scan documents, issue a key, and even cook pancakes for breakfast.


A replacement has even been found for TV journalists: a robot that can read the news live was recently presented.

On the one hand, robots take people's jobs, and that is a problem. On the other hand, new opportunities appear.

Even as robots take jobs, we can create new ones. Robots also need to be taught. We can create jobs where people feel more engaged. People will still be helping people, and they will continue to work in the service sector.

And although science strides confidently forward every day, humans have not yet created a robot that surpasses them. Perhaps this is for the best: Musk is confident that artificial intelligence will lead to a third world war.

However, robotic systems can make life easier for a person: they make coffee, suggest how to act in a given situation, and drive us around.

Buying a car starts with buying a keychain.
From the personal aphorisms of the author

How do humans differ from machines with artificial intelligence? One not-so-ordinary answer to this question is empathy. The Oxford English Dictionary defines empathy as the ability to mentally identify oneself with another person or an observed object (or to understand them fully). This is consistent with the usual definition from Wikipedia: "Empathy (Greek ἐν, 'in' + Greek πάθος, 'passion, suffering, feeling') is conscious empathy with the current emotional state of another person without losing the sense of the external origin of this experience." This, you must agree, is a very characteristic feature that distinguishes people from a programmed machine. The topic is little touched upon in the technical literature, and I would like to dwell on it in more detail; in light of the problems of AI it seems especially important.

An exclusive opportunity to use materials on this topic in Russian translation was given to the author by Jason Miller, head of marketing at Microsoft in the EMEA region, who once published the article "Can a machine have empathy?" on LinkedIn. We had a short discussion, during which it turned out that our views on the problem and on the risks of uncontrolled development and use of AI coincide. People are now trying to turn artificial intelligence into a kind of mind, that is, to endow it with purely human features, including the empathy that Jason Miller assessed in terms of its possible use in marketing. In this author's view, the area of potential application of empathy is much wider. You must agree that it is much more pleasant to communicate with a friendly collaborative industrial robot if you can exchange a few words and a joke with it, and it greets you with a kind word and, evaluating you with its sensors (it has them anyway), selects the appropriate model of behavior. This is much better than getting to work with a mechanism that is "smart" inside but stupidly buzzing on the outside. And if it is a home assistant or a personal assistant, even a purely software one, there is nothing more to say.

As for empathy, last May at the Google I/O developer conference, Google showed off its new Duplex system, an AI-powered virtual assistant that can make phone calls to optimally organize its "boss's" schedule. The audience watched as Duplex placed orders at a restaurant and booked a haircut at a barbershop. They laughed in surprise when, in the course of the conversation, it apparently convinced the person on the other end of the line that they were talking to a human, not a program. Here we can make an allowance for our psychology: the author observed a similar phenomenon back in the 1980s, when he developed and built a prototype secretary-informant (what was later called an answering machine). At the time, almost everyone who called tried to talk with this prototype, because they heard a recording of human speech.

The Duplex demo sparked a lively discussion on social media and raised one interesting question: does an artificial system's ability to understand and send conversational signals in this way mean that a machine can learn empathy? This is one of the most important issues in the evolving debate about AI, its role in society, and the extent to which it will penetrate into natively human domains.

When Jason Miller put this question to his audience on LinkedIn, he got three completely different types of answers, and these answers matter greatly for understanding the future of AI. They give a good idea of what professionals think about the possibilities of AI, as well as how those possibilities might be used.

The first answer is "yes, a machine can learn empathy," or "yes, because AI will eventually be able to do everything the human brain can." The argument is that empathy can be programmed in much the same way as our perception. For supporters of this theory, we are machines, and our brain is a very good computer, perhaps even a quantum one, but, like an ordinary computer, one with appropriate programming.

The second answer is no, it can't, because empathy is a uniquely human characteristic, not something a machine can experience. Can a machine feel anything at all? Still, Ava from the movie "Ex Machina," taken as an example of the development of AI, at least displayed empathy and used it successfully. Turning to other examples: in the film "Her" (an American science-fiction melodrama directed and written by Spike Jonze, 2013), which is significant from our point of view, the importance of this quality is very clearly visible, since the film is built entirely on it and lacks any physical embodiment of the AI, which is presented as the neural network Samantha (an "OS," in the film's terms). Empathy allows one not only to "feel oneself" but also to feel, to a greater or lesser extent, someone else's pain, experiences, and emotions. We do not understand the organization of consciousness in people, let alone have the ability to create such consciousness artificially, with proper verification (authentication, in technical terms).

The third answer is particularly intriguing. It is not even an answer but rather a question: if the machine seems to have empathy, does it matter whether this empathy is real or not? Functionally there is no difference whether the machine is capable of the same emotions we are, or simply derives those emotions from the signals that people or its sensors send it and develops the most appropriate response. Suppose we cannot tell whether the empathy is genuine because a deep-learning robot has learned our facial expressions and patterns of behavior: can we then still look at the robot as a machine?

This is far from an easy question. Does the distinction between real and "artificial" empathy matter? Here the author's opinion coincides with Jason Miller's answer: yes, it does. Go back to "Ex Machina": Ava demonstrated this successfully, and Caleb walked right into the trap without expecting it. Perhaps if she had not taken the form of a girl created specifically for his preferences, everything would have turned out differently. We trust males much less, and outwardly unpleasant ones, from our point of view, even less, and her developer Nathan took this into account. And in "Her," Theodore, starting with the assistant functions, simply fell in love with a female voice that replaced live communication for him.

Fig. 1.

The prospect of using AI as an assistant is characteristic in general. Take, for example, Sophia, whom Elon Musk perceives extremely negatively, if not with hostility: in October 2017 she became a subject of Saudi Arabia and the first robot to receive citizenship of any country (Fig. 1).

But back to the original question: can a machine empathize with someone? This is one of those questions whose answer may change in the future. Naturally, a machine cannot experience empathy by definition; it all comes down to how we define empathy and the machine.

Machines cannot mentally identify themselves with humans, because what goes on in the human mind includes things a machine can never experience on its own, no matter how advanced and deep its analytical processes and sensory perceptions may be. When we discuss the role of AI in society, it is important to be clear about why things are the way they are, even though we do not really understand ourselves. CNBC journalist Andrew Ross Sorkin asked Sophia at a press conference: "Do robots have a mind and self-awareness?" To which she replied: "Let me ask you in return: how do you know that you are a person?"

The machine may come closer to us, but it seems to the author (and not only to him) that it will never be able to comprehend the human being fully. Our consciousness contains much more than rational knowledge and logical thinking. In fact, the ability to think rationally is a by-product of the other aspects of our consciousness, not in itself the controlling power of our brain. Our conscious life is driven by how we perceive the world through our senses: a combination of sight, sound, touch, taste, and smell that no machine can ever experience the way we do.

Human consciousness is also driven by powerful biological impulses and needs. No machine will ever feel what it means to be hungry or thirsty. In reality, no machine will be able, as in the movie "Her," to sympathize with and reach out to another machine or person, and none will be motivated by the desire for love and all the emotions that accompany this natural human process. Recall how, in the film, the machine began flirting with many people at once and did not understand why Theodore was offended.

Besides, what kind of anxiety can a machine have? No machine fears loneliness or losing the roof over its head, and none feels the acute vulnerability caused by fear for its physical safety, unless it "senses" a drop in power in its supply system or an unacceptable rise in temperature, if it is a physical object with AI. So Sophia's reply to the journalist misses the mark; the neuroscientist Antonio Damasio puts the problem this way: "We are not thinking machines that feel; rather, we are feeling machines that think."

Last but not least, our consciousness is shaped by the collective mind and the cultural memory generated during the development of our civilization. We are the product of the collective accumulation, over many thousands of years, of shared emotions and sensory experiences, passed down from generation to generation and reflected in history. Conversations, shared jokes, sarcasm, symbolism: these are all incredibly subtle psychological signals. The same collective mind develops the ethics and values that we can all instinctively agree on, even when they are not logically justified. If the press reports are to be believed (though this looks like another fake), Stanford University is trying to teach AI to joke and is developing a neural network endowed with a peculiar sense of humor. According to the developers, the task has turned out to be difficult, since the AI works by a fixed algorithm, which rules out improvisation. So far the conclusion is disappointing: you cannot make AI funny, even if you upload all the jokes and anecdotes in the world to the neural network.

Nothing else communicates like people, and people do not communicate with anything else the way we communicate with each other. This matters because the only way to acquire a share in our collective intelligence is to interact with a person. If we do not interact with machines the way we do with other people, this collective experience and intelligence is simply not available to them. They are not part of our empathic system. Yes, we may feel sorry for a "beloved" computer; we may not even throw it away. The author kept his first one, built on a 133 MHz AMD processor with a 500 MB HDD, bought for a fabulous-for-the-1990s $750. But I do not celebrate its birthday, I do not spend time with it in the pantry, and I do not have nostalgic conversations with it: "Remember how we played DOOM II..." Although we have favorite things, we have no emotional connection with them, only a connection with the events associated with them (think of the beautiful song "From Souvenirs to Souvenirs" performed by Demis Roussos). Anything else is fetishism, the worship of inanimate material objects credited with supernatural properties, or a mental disorder; in our case we are dealing with associations.

When people talk about the human brain working like a computer, or AI learning like a human, they are speaking figuratively. This can be seen as part of a long tradition of guessing how our brains work and what our consciousness really is. Whenever we invent a new technology, there is a strong temptation to use it as an analogy for the functioning of the brain. When we harnessed electricity, we started talking about electric currents in the brain. When the telegraph appeared, we decided that the brain also sends discrete signals. The belief of many people that the human brain works like a computer (and is therefore, above all, a logical machine) is just another such guess. We do not really know how the brain works, how that work translates into our consciousness, or where and how it is stored. We see a certain activity and a result, which we have managed to model in neural networks, but we do not see and do not understand the process itself.

Seeing the interactions, we draw conclusions, but perhaps we are in the position of deciding that a cockroach whose legs have been torn off stops hearing, because it no longer runs away from a knock. It may well be that, to continue the computer analogy, we possess only an interface, a password, and a login for accessing our "database," which is stored in some cloud, and that we use a quick-access technology still unknown to us. Why not? From a technical standpoint it makes perfect sense. Maybe that is why we sometimes receive information, as it seems to us, from someone else's life; it looks like a "bug" in our system. It is highly unlikely that we even partially replicated the human brain when we developed the current theory of AI. Even what we call neural networks is only a semblance based on our current understanding (Fig. 2).

Fig. 2.

For these reasons, we may accept the second opinion. The flaw in the first theory, the assertion that a machine can feel empathy, is that it reduces the vast, mysterious operations of the human brain and consciousness to something that can be understood, reproduced, and imitated by a machine governed by logic. It is not that we overestimate the capabilities of AI; it is that we grossly underestimate how complex our own capabilities are.

This brings us back to the other question: does it matter that "artificial empathy" is not true empathy, even if it interacts with us in the same way? It is very important to understand this, so as not to fall into the dead end of yet another delusion: that a computer or a program has begun to think. The consequences could be sad; we have already handed so much over to automata, having decided that they had "grown smart" enough for it. Where is the line between what we call AI and real intelligence? It seems to us that it is hidden in emotionality. Let us continue with the understandable example of empathy.

Artificial empathy works by observing, learning, responding to, and replicating the signals people send. As deep-learning AI advances and becomes able to work with ever larger data sets, these programs will get better and better at this, producing the appearance (or image) of empathy. True empathy, however, involves much more than observing and responding to emotional cues, no matter how many cues there are to work with. Why? Because the signals people send out are only a tiny fraction of what they actually experience. We are all much more than the sum of what other people conclude about us by watching what we do and say. We have abilities, emotions, memories, and experiences that influence our behavior without necessarily appearing on the outside. They must be sensed intuitively, even when they are not noticed at all. An example: we often fail to recognize ourselves, or a well (precisely well) known person, in a photo or a portrait, yet we recognize everyone else without difficulty. With a portrait, the matter can be explained philosophically, by "the subjective perception of objective reality." With an "objective" photo, the reason is that it gives us a snatched moment, while we perceive ourselves and the people we know well as a complex; for everyone else, correlation is enough, and correlation is what our brain is good at.

Things get more complicated when machines start making decisions with serious consequences, and without the emotional context and shared values people use in such cases. This was one of the key themes of an article Henry A. Kissinger recently wrote about the implications of AI for The Atlantic. Take, for example, an unmanned vehicle that, in the event of an unavoidable accident, must decide whether to kill a parent or a child. Will such a machine ever be able to explain to people why it makes one choice or the other? And if no justification of the machine's actions in human terms and from a human point of view is required, what will become of our system of ethics and justice? How do you put ethics into a machine? After all, we would then need to discard our emotions and take the machine's side, look at the world through its eyes. Are we capable of that?

Such a process would be easier and simpler if we replaced artificial empathy with the human kind. AI can mimic human interactions, but with a much narrower understanding of what is happening than ours. We need to keep this in mind when choosing the role AI should play in managing processes or strategies. Empathy, which we have been discussing in this part of the article in relation to the machine, plays a very significant role here. Perhaps that is why assistant robots are given a human appearance and a pleasant voice (in pseudo-scientific films they even eat, for some reason, and not only that).

When developing the gaming systems mentioned in the first part of this article, in which the author took part not only as the developer of the electronics but also as a designer and one of the ideologists, we ran into just such a problem. The second of our machines, already a 100% robot (buzzing and turning, as expected), worked more efficiently than "live" dealers, made no mistakes, and produced a greater economic effect. But the mixed option proved more popular: some players chose a "live" dealer whose only task was to smile and pull cards from a shoe (the card-dispensing box on a gambling table). In this case empathy was at work, which our fully robotic system, like the system with a random card generator, a priori did not possess.

Google's Duplex system may look as if it has empathy, but that empathy is strictly limited to what is relevant to the task at hand, for example, booking a table in a restaurant. Duplex is not trained to detect emotions outside its given algorithm, or to adjust its behavior to a specific situation. If the voice of the person on the other end of the line sounds unfriendly and nervous, can Duplex communicate adequately? Can it find a way to win the person over and calm them down? Can it simply beg for a free table at the restaurant's rush hour? Human communication is much more than the efficient exchange of information, and this is where the implications of using real versus artificial empathy become especially significant.

If we hand fundamental strategic decisions over to AI, the cost of determining the value of the final product produced with its participation (otherwise, why would this AI be needed at all?) will fall at an astonishing rate. But the risk is that the AI will ignore the other elements that affect human consciousness in various ways, playing on the strings of the human soul the way Ava in "Ex Machina" did solely to achieve a clear goal.

The human intellect is powerful because it is not limited to rational thinking alone. The elements of consciousness allow us to deal with the unpredictability and uncertainty of the world around us. They enable us to make decisions based on shared values and motivations that resonate collectively, and to know what is right without even having to figure out why. The empathetic human intellect can experience what it is to be sad or happy, and it allows these feelings to influence its judgments and its behavior toward others. A machine could not do this even if it wanted to, since it is a product of our civilization. In other civilizations everything could be different: there might be nothing wrong, for example, with eating one's own kind "out of great respect," as Vladimir Vysotsky sang: "Who eats him without salt and without onion will be strong, brave, kind..."

For a machine to become intelligent, we must give it models of values. Which ones? We know our own scale, and we see attempts to embody it in art, in literature and cinema, but what of this can we really give to an already "thinking" machine? In our opinion, nothing. How do we grow for it the tree of the knowledge of good and evil, and what fruit should it bear? If we follow this path, it will lead us to a real confrontation: the machines will have their own philosophy, religion, and so on. The one thing we are lucky to have is the commandments, but we will talk about that in the final part of this article.

This year, Yandex launched the Alice voice assistant. The new service lets the user listen to news and weather, get answers to questions, and simply chat with the bot. Alice is sometimes cheeky, sometimes seems almost sentient and humanly sarcastic, but often cannot figure out what she is being asked and falls flat on her face.

All this gave rise not only to a wave of jokes but also to a new round of discussions about the development of artificial intelligence. News of what smart algorithms have achieved arrives almost daily, and machine learning is called one of the most promising fields to dedicate yourself to.

To clarify the main questions about artificial intelligence, we talked with Sergey Markov, a specialist in artificial intelligence and machine learning methods, the author of SmarThink, one of the strongest Russian chess programs, and the creator of the 22nd Century project.

Sergey Markov,

artificial intelligence specialist

Debunking myths about AI

So what is "artificial intelligence"?

The concept of "artificial intelligence" is somewhat unlucky. Initially originating in the scientific community, it eventually penetrated into science fiction literature, and through it into pop culture, where it underwent a number of changes, acquired many interpretations, and in the end was completely mystified.

That is why we often hear such statements from non-specialists as: “AI does not exist”, “AI cannot be created”. Misunderstanding of the essence of research conducted in the field of AI easily leads people to other extremes - for example, modern AI systems are credited with the presence of consciousness, free will and secret motives.

Let's try to separate the flies from the cutlets.

In science, artificial intelligence refers to systems designed to solve intellectual problems.

In turn, an intellectual task is a task that people solve with the help of their own intellect. Note that in this case, experts deliberately avoid defining the concept of "intelligence", because before the advent of AI systems, the only example of intelligence was the human intellect, and defining the concept of intelligence based on a single example is the same as trying to draw a straight line through a single point. There can be as many such lines as you like, which means that the debate about the concept of intelligence could be waged for centuries.

"strong" and "weak" artificial intelligence

AI systems are divided into two large groups.

Applied artificial intelligence (the terms "weak AI" or "narrow AI" are also used; in the English tradition, weak/applied/narrow AI) is AI designed to solve a single intellectual task or a small number of them. This class includes systems for playing chess or Go, recognizing images or speech, deciding whether to issue a bank loan, and so on.

In contrast to applied AI, the concept of universal artificial intelligence is introduced (also "strong AI"; in English, strong AI / artificial general intelligence): a hypothetical (so far) AI capable of solving any intellectual problem.

Often people who do not know the terminology identify AI with strong AI; this is where judgments in the spirit of "AI does not exist" come from.

Strong AI does not yet exist. Virtually all of the advances we have seen in the last decade in the field of AI are advances in applied systems. These successes should not be underestimated, since applied systems are in some cases able to solve intellectual problems better than universal human intelligence does.

As you may have noticed, the concept of AI is quite broad. Say, mental arithmetic is also an intellectual task, which means that any calculating machine can be considered an AI system. What about an abacus? The Antikythera mechanism? Formally, all of these are, if primitive, AI systems. However, in calling some system an AI system, we usually emphasize the complexity of the task it solves.

Obviously, the division of intellectual tasks into simple and complex is quite artificial, and our ideas about the complexity of particular tasks gradually change. A mechanical calculating machine was a marvel of technology in the 17th century, but today, for people who have dealt with far more complex mechanisms since childhood, it can no longer impress. When machines playing Go or car autopilots cease to surprise the public, there will surely be people who wince when someone classifies such systems as AI.

"Robots-excellent students": about the ability of AI to learn

Another funny misconception is that AI systems must be capable of self-learning. On the one hand, this is not a mandatory property of AI systems: there are many remarkable systems that cannot self-learn yet solve many problems better than the human brain. On the other hand, some people simply do not know that self-learning is a capability many AI systems acquired more than fifty years ago.

When I wrote my first chess program in 1999, self-learning was already commonplace in that field: programs could memorize dangerous positions, adjust opening variations for themselves, and tune their style of play to an opponent. Of course, those programs were still very far from AlphaZero. Systems that learn behavior from interaction with other systems, in so-called "reinforcement learning" experiments, also already existed. Yet for some inexplicable reason, some people still think that the ability to self-learn is the prerogative of human intellect.

The processes of teaching machines to solve particular problems are the subject of machine learning, an entire scientific discipline.

There are two big poles of machine learning - supervised learning and unsupervised learning.

In supervised learning, the machine already has a number of conditionally correct solutions for some set of cases. The task of learning here is to teach the machine, from the available examples, to make the right decisions in other, unknown situations.
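A minimal sketch of supervised learning, assuming Python with scikit-learn (the dataset and model choice here are illustrative, not from the interview): the labeled examples play the role of the teacher, and the model is judged on cases it has never seen:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# "Conditionally correct solutions": a set of examples with known labels.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Learn from the labeled cases...
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# ...then make decisions in "other, unknown situations".
print("accuracy on unseen cases:", model.score(X_test, y_test))
```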

The other extreme is unsupervised learning. Here the machine is put in a situation where the correct solutions are unknown and only raw, unlabeled data are available. It turns out that some success can be achieved even then. For example, you can teach a machine to identify semantic relationships between words in a language by analyzing a very large set of texts.
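A matching unsupervised sketch, again a toy illustration in Python with scikit-learn rather than the word-semantics task just described (which in practice is done with word-embedding models trained on large corpora): the algorithm receives raw, unlabeled points and must discover groups on its own:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Raw data with no labels: three hidden groups the algorithm knows nothing about.
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)  # labels discarded

# k-means finds structure without any "teacher".
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(kmeans.labels_[:10])  # cluster assignments discovered from the data alone
```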

One type of supervised learning is reinforcement learning. The idea is that the AI system acts as an agent placed in a model environment, where it can interact with other agents, for example copies of itself, and receives feedback from the environment through a reward function. For example, a chess program playing against itself gradually adjusts its parameters and thereby gradually strengthens its own game.

Reinforcement learning is a fairly broad field with many interesting methods, ranging from evolutionary algorithms to Bayesian optimization. Recent advances in AI for games are connected precisely with strengthening AI through reinforcement learning.
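To make the agent-environment-reward loop concrete, here is a toy tabular Q-learning sketch in plain Python/NumPy. The corridor world, the reward, and the hyperparameters are all invented for illustration; real game-playing systems are vastly larger, but the update rule is the same idea:

```python
import numpy as np

N_STATES = 5          # corridor positions 0..4; position 4 is the goal
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1   # learning rate, discount, exploration

Q = np.zeros((N_STATES, len(ACTIONS)))   # the agent's value table
rng = np.random.default_rng(0)

for episode in range(500):
    state = 2                            # start in the middle of the corridor
    while state != N_STATES - 1:         # an episode ends at the goal
        # epsilon-greedy: mostly exploit the table, sometimes explore
        a = rng.integers(len(ACTIONS)) if rng.random() < EPS else int(Q[state].argmax())
        next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0  # feedback from the environment
        # Q-learning update: nudge the estimate toward reward + discounted future value
        Q[state, a] += ALPHA * (reward + GAMMA * Q[next_state].max() - Q[state, a])
        state = next_state

print(Q.round(2))  # the learned values now point the agent toward the goal
```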

Technology Risks: Should We Be Afraid of Doomsday?

I am not one of the AI alarmists, and in this I am by no means alone. Andrew Ng, for example, creator of the Stanford machine learning course, compares the dangers of AI to the problem of overpopulation on Mars.

Indeed, it is likely that in the future humans will colonize Mars. It is also likely that sooner or later overpopulation will become a problem on Mars, but it is not clear why we should deal with that problem now. Yann LeCun, the creator of convolutional neural networks, agrees with Ng, as does his boss Mark Zuckerberg, and so does Yoshua Bengio, a person thanks to whose research modern neural networks can solve complex problems in text processing.

It will probably take several hours to present my views on this problem, so I will focus only on the main theses.

1. DO NOT LIMIT AI DEVELOPMENT

Alarmists weigh the risks associated with the potential misbehavior of AI while ignoring the risks of trying to limit or even halt progress in this area. The technological power of mankind is increasing at an extremely rapid pace, which leads to an effect I call "the cheapening of the apocalypse."

150 years ago, with all the will in the world, humanity could not have caused irreparable damage either to the biosphere or to itself as a species. To carry out a catastrophic scenario 50 years ago, one would have had to concentrate all the technological power of the nuclear powers. Tomorrow, a small handful of fanatics may be enough to bring about a global man-made disaster.

Our technological power is growing much faster than the ability of human intelligence to control this power.

Unless human intelligence, with its prejudices, aggression, delusions, and narrow-mindedness, is replaced by a system capable of making more informed decisions (whether that is AI or, as I consider more likely, a technologically improved human intelligence integrated with machines into a single system), a global catastrophe may await us.

2. THE CREATION OF SUPERINTELLIGENCE IS FUNDAMENTALLY IMPOSSIBLE

There is an idea that the AI of the future will certainly be superintelligent, superior to humans even more than humans are superior to ants. In this case I am afraid to disappoint technological optimists: our Universe contains a number of fundamental physical limitations that apparently make the creation of superintelligence impossible.

For example, the speed of signal transmission is limited by the speed of light, and Heisenberg uncertainty appears at the Planck scale. From this follows the first fundamental limit, the Bremermann limit, which restricts the maximum computational speed of an autonomous system of a given mass m.
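In its standard formulation (quoted here for reference, with rounded constants), the Bremermann limit reads:

```latex
% Bremermann's limit: the maximum computation rate of a self-contained
% system of mass m, where c is the speed of light and h is Planck's constant.
R_{\max} = \frac{m c^{2}}{h} \approx 1.36 \times 10^{50}
           \;\frac{\text{bits}}{\text{second}} \text{ per kilogram of mass}
```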

Another limit is related to Landauer's principle, according to which a minimum amount of heat is released when 1 bit of information is processed. Calculations that are too fast cause unacceptable heating and destroy the system. In fact, modern processors lag behind the Landauer limit by less than a factor of a thousand. A factor of 1000 may seem like a lot, but another problem is that many intellectual tasks belong to the EXPTIME complexity class, meaning the time required to solve them is an exponential function of the problem's size. Speeding such a system up several times therefore yields only a constant increase in "intelligence."
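For reference, Landauer's bound and the EXPTIME arithmetic can be written out explicitly (k_B is Boltzmann's constant; the numeric value is taken at room temperature, and the speed-up estimate is a rough illustration):

```latex
% Landauer's principle: minimum heat released per irreversibly processed bit.
E_{\min} = k_{B} T \ln 2 \approx 2.87 \times 10^{-21}\;\text{J at } T = 300\,\text{K}

% For an EXPTIME task with running time ~ 2^n, a machine 1000 times faster
% extends the feasible problem size n by only log2(1000), i.e. about 10:
t(n) \sim 2^{n} \;\Rightarrow\; n_{\text{new}} \approx n_{\text{old}} + \log_{2} 1000 \approx n_{\text{old}} + 10
```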

In general, there are very serious reasons to believe that a superintelligent strong AI cannot be built, although human-level intelligence may well be surpassed. How dangerous is that? Most likely, not very.

Imagine that you suddenly started thinking 100 times faster than other people. Does this mean that you will easily be able to persuade any passer-by to give you their wallet?

3. WE WORRY ABOUT SOMETHING ELSE

Unfortunately, as a result of alarmists' speculation on the fears of a public raised on the Terminator and on Clarke and Kubrick's famous HAL 9000, the focus of AI safety is shifting toward the analysis of unlikely but spectacular scenarios. Meanwhile, the real dangers slip out of sight.

Any sufficiently complex technology that claims an important place in our technological landscape certainly brings specific risks with it. Many lives were destroyed by steam engines - in manufacturing, in transportation, and so on - before effective safety rules and measures were put in place.

Speaking of progress in applied AI, we can point to the related problem of the so-called "digital secret court." More and more applied AI systems make decisions on issues affecting people's lives and health. These include medical diagnostic systems and, for example, bank systems that decide whether to issue a loan to a client.

At the same time, the structure of the models used, the sets of factors they consider, and other details of the decision-making procedure are hidden from the person whose fate is at stake.

The models used may base their decisions on the opinions of expert teachers who made systematic mistakes or held certain prejudices, racial or gender-based.

An AI trained on the decisions of such experts will conscientiously reproduce those prejudices in its own decisions. In addition, such models may contain defects of their own.

Few people are dealing with these problems now, because SkyNet unleashing nuclear war is, of course, far more spectacular.

Neural networks as a "hot trend"

On the one hand, neural networks are one of the oldest models used to build AI systems. Having first appeared as a product of the bionic approach, they quickly diverged from their biological prototypes. The only exception is spiking neural networks (which, however, have not yet found wide application in industry).

The progress of recent decades is associated with the development of deep learning technologies, an approach in which neural networks are assembled from a large number of layers, each built on certain regular patterns.
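The phrase "a large number of layers built on regular patterns" can be made concrete with a minimal Keras sketch (the architecture is invented for illustration and is not from the interview): the same convolution-and-pooling pattern is simply repeated before a classifying head:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),                 # e.g. small grayscale images
    tf.keras.layers.Conv2D(32, 3, activation="relu"),  # a regular pattern...
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),  # ...repeated with more filters
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),   # classifying head
])
model.summary()  # each line of the summary is one layer in the stack
```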

In addition to the creation of new neural network models, important progress has also been made in learning technology itself. Neural networks are no longer trained on computers' central processors but on specialized processors capable of performing matrix and tensor calculations quickly. The most common such devices today are video cards, but even more specialized devices for training neural networks are being actively developed.

In general, neural networks today are one of the main technologies in machine learning, and we owe them the solution of many problems that were previously solved unsatisfactorily. On the other hand, you need to understand that neural networks are not a panacea; for some tasks they are far from the most effective tool.

So how smart are today's robots really?

Everything is relative. Against the background of the technology of the year 2000, today's achievements look like a real miracle. There will always be people who like to grumble. Five years ago they insisted that machines would never beat people at Go (or at least would not win anytime soon). It was said that a machine would never be able to draw a picture from scratch, while today people are practically unable to distinguish pictures created by machines from paintings by artists unknown to them. At the end of last year, machines learned to synthesize speech almost indistinguishable from a human's, and the music machines compose these days no longer makes your ears wilt.

Let's see what happens tomorrow. I look at these applications of AI with great optimism.

Promising directions: where to start diving into the field of AI?

I would advise mastering, at a good level, one of the popular neural network frameworks and one of the programming languages popular in machine learning (the most popular pairing today is TensorFlow + Python).
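For a very first experiment with that pairing, a sketch along the following lines is enough; the synthetic data and the network sizes are arbitrary, the point is only to see the train-and-evaluate cycle end to end:

```python
import numpy as np
import tensorflow as tf

# A made-up problem: the network must learn that the label is 1
# when the four inputs sum to more than 2.0.
X = np.random.rand(1000, 4).astype("float32")
y = (X.sum(axis=1) > 2.0).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [loss, accuracy] on the training data
```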

Having mastered these tools, and ideally having a strong foundation in mathematical statistics and probability theory, you should direct your efforts to the area that interests you personally the most.

Interest in the subject of work is one of your most important assistants.

Machine learning specialists are needed in a variety of fields - medicine, banking, science, manufacturing - so a good specialist today has more choice than ever. Compared with the pleasure your work can bring you, the potential benefits of any particular industry seem to me insignificant.