Intelligent Machines Can Learn to Evolve

Caroline Petrow-Cohen

The living brain is nature’s greatest masterpiece. Its ability to remember and use information involves complicated, unique processes that we take advantage of every second of our lives. Now look at the computer. We generally think of computers as rigid stores of information that process that information in equally rigid ways. We can always change a computer’s code to optimize it, but unlike the human brain, a computer won’t correct its own code when that code is wrong.

So machines can never become smarter or more creative on their own, right? They always have to be preprogrammed, plain and simple, right?

Wrong.

Computer scientists are working on optimizing computer performance by essentially letting computers improve themselves based on recognizing their own mistakes. Plenty of technology already uses these learning techniques: we find them in self-driving cars, Google Translate, spam detection, lawyer robots, medical research, even Snapchat filters.

This performance enhancement is accomplished in two main ways: genetic algorithms and machine learning. As the names suggest, both are based on processes found in nature. Genetic algorithms are applied and changed over many generations of a system: the results of one generation are evaluated holistically, and the system is altered accordingly. Think natural selection, in which the best methods make it to the next generation. Machine learning, on the other hand, involves a single system taking in information and applying feedback to itself until it meets a specific performance threshold. Think of someone learning how to talk and being corrected on their speech until they speak fluently.
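
To make the natural-selection analogy concrete, here is a minimal genetic algorithm sketch in Python. Everything in it (the bit-string genome, the fitness function, the population size, the mutation rate) is an illustrative choice of ours, not anything taken from the research described below.

```python
import random

# Toy genetic algorithm: evolve a bit string toward all ones.
# All parameters here are illustrative, not from the study below.
GENOME_LENGTH = 16
POP_SIZE = 50
MUTATION_RATE = 0.05
GENERATIONS = 100

def fitness(genome):
    # Fitness is simply the number of 1s in the bit string.
    return sum(genome)

def mutate(genome):
    # Flip each bit independently with a small probability.
    return [1 - bit if random.random() < MUTATION_RATE else bit
            for bit in genome]

def evolve():
    population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
                  for _ in range(POP_SIZE)]
    for gen in range(GENERATIONS):
        # Selection: keep the fitter half of the population, then
        # refill it with mutated copies of the survivors. This is the
        # "best methods make it to the next generation" step.
        population.sort(key=fitness, reverse=True)
        survivors = population[:POP_SIZE // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(POP_SIZE - len(survivors))]
        if fitness(population[0]) == GENOME_LENGTH:
            return gen, population[0]
    return GENERATIONS, population[0]

generation, best = evolve()
print(f"Best genome found by generation {generation}: {best}")
```

Notice that no individual ever learns anything here; improvement happens only across generations, which is exactly the gap that lifetime learning is meant to fill.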

Researchers at Michigan State University have been working on optimizing evolvable, genetic algorithm-based neural networks known as Markov Brains by adding feedback gates that enable learning within a lifetime. Markov Brains, by definition, contain both deterministic and probabilistic logic gates, and the feedback gates the Michigan State researchers focus on adjust the probabilities of those gates, letting the network learn from its own experience.
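
To give a rough feel for what a feedback gate does, here is a hedged Python sketch of a two-input probabilistic gate whose probability table gets nudged by a reward signal after each decision. The table structure loosely mirrors the probabilistic gates in Markov Brains, but the update rule, the learning rate, and the class itself are illustrative assumptions, not the paper's exact mechanism.

```python
import random

class ProbabilisticGate:
    """A 2-input, 1-output probabilistic logic gate.

    For each of the four possible input pairs, the gate stores the
    probability of outputting 1. The feedback rule below is an
    illustrative assumption, not the paper's actual mechanism.
    """

    def __init__(self):
        # Start maximally uncertain: P(output = 1) is 0.5 for every
        # input combination.
        self.table = {(a, b): 0.5 for a in (0, 1) for b in (0, 1)}
        self.last_inputs = None
        self.last_output = None

    def fire(self, a, b):
        p = self.table[(a, b)]
        self.last_inputs = (a, b)
        self.last_output = 1 if random.random() < p else 0
        return self.last_output

    def feedback(self, reward, rate=0.1):
        # Nudge the probability table toward the last decision if it
        # was rewarded, away from it if not. This lifetime-learning
        # step is what a feedback gate adds on top of evolution.
        p = self.table[self.last_inputs]
        if reward:
            p += rate if self.last_output == 1 else -rate
        else:
            p -= rate if self.last_output == 1 else -rate
        self.table[self.last_inputs] = min(0.99, max(0.01, p))

gate = ProbabilisticGate()
out = gate.fire(1, 0)
gate.feedback(reward=(out == 1))  # e.g., reward outputting 1 on input (1, 0)
```

Over many fire-and-feedback cycles, the gate's probabilities drift toward whatever behavior gets rewarded, which is how a network of such gates can come to mimic memory of past experience.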

The researchers tested the Markov Brain algorithms in a simulation in which an “agent” sits on a 2D, 64-by-64 tile lattice with randomly placed walls and must choose the working path out of 24 distinct options. They ran 500,000 generations for each of three agent groups: one with only probabilistic logic gates, one with only deterministic logic gates, and one with probabilistic, deterministic, and feedback gates. Each group contained 300 agents. The third group performed best, because the feedback gates let the system mimic long-term memory. Only 75 of its 300 agents failed, which handily beat the probabilistic group, in which all 300 failed, and the deterministic group, in which 225 failed.
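
As a quick sanity check on those numbers, this short snippet converts the reported failure counts into success rates. The counts come straight from the experiment described above; the group labels are just shorthand.

```python
# Failure counts out of 300 agents per group, as reported above.
failures = {
    "probabilistic only": 300,
    "deterministic only": 225,
    "with feedback gates": 75,
}

for group, failed in failures.items():
    success_rate = (300 - failed) / 300
    print(f"{group}: {success_rate:.0%} of agents solved the task")
```

That works out to 0%, 25%, and 75% success, respectively: adding feedback gates tripled the success rate of the next-best group.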

The results, though they come from a rather contrived simulation, show how evolution can be combined with learning to create remarkably powerful machine learning techniques. The idea is not new, but it is a promising one for future AI systems, and the feedback gate could be a useful tool for making our machines much smarter.

Noah Bergam ’21

Sources:

https://www.nature.com/articles/s41598-017-16548-2
