Russia's MISiS

AI Development in Moscow Hits Stride

In a recent article, John Koetsier, a journalist, analyst, author and featured speaker, interviewed the head of the Department of Engineering Cybernetics at Russia's MISiS about how long the country has been working on AI and how far it may have progressed by 2018.

Russian president Vladimir Putin has said that the nation that leads in AI will be the ruler of the world. And China is investing heavily in winning the race.

In Moscow to speak at the Skolkovo Robotics Forum, Koetsier visited Russia's top cybernetics institute, the National University of Science and Technology (MISiS), whose Department of Engineering Cybernetics had just celebrated its 50th anniversary. There, he asked two of its leaders about AI, its dangers, and its role in self-driving cars.

Below, Olga Uskova, Head of the Department of Engineering Cybernetics, and Konstantin Bakulev, Deputy Head of the department, answer some of his questions.

Here are excerpts from the interview:

Koetsier: There’s a lot of noise about AI today. We have machine learning and neural networks … but what is true AI?

Olga Uskova: From my experience, I can say that by "artificial intelligence" people understand the state in which an object actually becomes a subject and begins to exhibit independent abstract thinking.

Koetsier: You've been working on AI for decades, and the MISiS Cybernetics department just celebrated its 50th birthday. How far have we come in that time?

Konstantin Bakulev: Fifty years ago, when the MISiS Department of Engineering Cybernetics was created… in the department's first year, young scientists together with students created the first heuristic algorithms for playing cards. The computer complex at the time occupied two rooms, an area of about 80 square meters. Program code was entered on punched cards, and each response from the machine took several minutes.

In 2018, students of the department created, as coursework, a system for semantic analysis of news feeds that produces analytical reports on changes in Russians' purchasing power during the holidays. Gigabytes of information are now processed in just a few seconds, and the work was done by two fourth-year students within a month.

The two researchers talked about the risks of AI and autonomous cars. Both agreed that it is important to train neural networks carefully, so that they do not develop aggressive characteristics and so that they have ethical guidelines to follow. Self-driving cars have already shown independence beyond how they were programmed, pointing to the need for lengthy testing and retesting to prevent accidents.

Even now, it is not obvious to a lot of people that humans are useful to the existing ecosystem. People damage the environment, pollute heavily, and kill rare animal species, so a logically reasoning AI could quickly conclude that mankind is useless.

Koetsier: Should we worry about super-intelligent AIs? Will they be dangerous?

Konstantin Bakulev: I think it is necessary to solve two problems in parallel: we need to adjust our own behavior toward good and love, and, when programming AI, impose moral restraints across the whole planet along the lines of Isaac Asimov's principles.

The principles of AI development and management should be similar to the principles of working with weapons of mass destruction.

Koetsier: When will we get there?

Olga Uskova: Well, here I want to be extremely honest. When programming neural networks, we clearly understand the input (what goes in) and the output (the result), but we do not always understand what happens inside.

In some tests at the testing facility, there were cases in which a multi-ton vehicle suddenly made an independent decision to improve the situation, something it had not been programmed to do.