What does a Robot think about?

@alphalab · 7 min read

Robots may not think in the way humans do, but they can be given the ability to ‘think about thinking’.

This kind of thinking involves monitoring and directing one’s own behaviour, and in that sense robots can show a form of reasoning. The reasoning involved is, however, different from the reasoning that humans perform. Thinking about thinking in a robot is an example of metacognition.

The kind of thinking we are concerned with is what we might call ‘internal’ thinking, as opposed to ‘external’ thinking. External thinking is directed at the world: the behaviour of the robot or of other things (plants, animals, humans, and so on). Internal thinking, on the other hand, is directed at the individual’s own thought processes. It is this inward turn that the ‘meta-’ in ‘metacognition’ signals.

An example of ‘external’ thinking is asking ‘How smart is a robot?’ and an example of ‘internal’ thinking is asking ‘How does a robot know it is smart?’

The concept of metacognition in robots is relatively new, although researchers have been exploring it for over twenty years, especially those building autonomous agents and robots. The term ‘metacognition’ itself was coined in the 1970s by the developmental psychologist John Flavell, who identified the concept through his empirical research into children’s cognition. There are three related kinds of ‘thinking about thinking’ relevant to robots, which we will look at in turn.

Meta-Thinking and Meta-Awareness

Meta-thinking is the process by which an agent thinks about its own thinking: it is aware of the fact that it is thinking. Evidence for something like meta-thinking in animals comes from comparative psychology. In a classic family of experiments, animals are given the option to ‘opt out’ of a trial they are unsure about, taking a small guaranteed reward instead of gambling on a difficult judgement. Dolphins and rhesus monkeys reliably opt out of the hardest trials, which suggests that they can monitor their own uncertainty; similar tests with pigeons have produced more mixed results. To the extent that an animal declines a trial because it knows it does not know, it is behaving as if it were thinking about its own thinking.

What such experiments show is that an animal can, at best, monitor its own uncertainty from its own perspective. They also show that metacognition is not an idea limited to human beings, although the non-human evidence remains contested.

Related evidence comes from insects. In 2013, researchers in Australia reported that honeybees could be trained to make a discrimination for either a high-value or a low-value reward, and that the bees selectively avoided trials that were too difficult to judge. Opting out of a decision you are likely to get wrong is, arguably, a simple metacognitive act.

Metacognition in Robots

While metacognition may not be the exclusive domain of human thinking, it is what robotics researchers are most interested in. This is because metacognition presents an opportunity for autonomous agents, such as robots, to reflect on their own thinking processes. Furthermore, the idea of metacognition presents an opportunity for a robot to have a sense of itself as an agent.

As such, metacognition offers a robot the first steps on its journey towards becoming autonomous and self-determined. For a robot, self-determination means that, while outside forces may still act on it, it can decide and act on the basis of its own internal motivations. The idea of metacognition in robots was given a boost by experiments, reported around 2013, that used artificial neural networks.

Artificial neural networks are made up of many interconnected nodes, each acting as a simple processing unit. The networks are loosely inspired by the brain: like biological neurons, the nodes pass signals along connections, and the network learns by making those connections stronger or weaker. When an input is presented, it is the pattern of connection strengths that determines how the input is interpreted.
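The idea that connection strengths determine how an input is interpreted can be shown with a minimal sketch. This is not any particular published model, just a single artificial neuron written in plain Python: the same input produces a high or a low activation depending purely on the weights.

```python
import math

def sigmoid(x):
    """Squash a value into (0, 1), loosely like a neuron's firing rate."""
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum of the inputs, then squash."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

# The same input is "interpreted" differently depending on the connection
# strengths: strong positive weights give a high activation, inhibitory
# (negative) weights give a low one.
x = [1.0, 0.5]
print(neuron(x, [2.0, 2.0], -1.5))    # high activation (above 0.5)
print(neuron(x, [-2.0, -2.0], 1.5))   # low activation (below 0.5)
```

Learning, in this picture, is simply the process of nudging those weight values until the network’s interpretations become useful.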

Work with such networks suggested that a form of metacognition could be achieved in a robot. The design involved two neural networks linked together: one was ‘task-specific’, the other ‘meta-cognitive’. The two networks worked together in the following way:

When an input arrived, the ‘meta-cognitive’ network did not solve the task itself. Instead, it reasoned about the task: it assessed what kind of problem it was and what would be needed to solve it, and then instructed the ‘task-specific’ network accordingly. The ‘task-specific’ network, in turn, worked out how to actually perform the task.

If the task was to be carried out by a robot, the ‘meta-cognitive’ network was in charge of selecting which robot to use, basing its choice on the task and on the environment in which the task would be carried out. In simple cases it only had to look at the task, decide what kind of robot was suitable, and select it.

The same division of labour applied when the robot was carrying out tasks for a human: the ‘meta-cognitive’ network determined which kind of robot best suited the task, the ‘task-specific’ network determined how to carry the task out, and the ‘meta-cognitive’ network then evaluated the result and responded to it.
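The division of labour between the two networks can be sketched in code. Everything here is illustrative: the policy names, task fields, and controller are made up for this post, and plain functions stand in for the trained networks, but the control flow is the same — a ‘meta’ layer that reasons about the task and delegates, and ‘task-specific’ layers that do the work.

```python
# Toy "task-specific" policies: each knows how to do one kind of job.
def wheeled_policy(task):
    return f"driving to {task['target']}"

def arm_policy(task):
    return f"grasping {task['target']}"

POLICIES = {
    "navigate": wheeled_policy,
    "manipulate": arm_policy,
}

def meta_controller(task):
    """Toy "meta-cognitive" layer: inspect the task, pick the appropriate
    task-specific policy (the "kind of robot" to use), then delegate."""
    policy = POLICIES.get(task["kind"])
    if policy is None:
        # The meta layer can also recognise that *no* policy fits.
        return "no suitable robot for this task"
    return policy(task)

print(meta_controller({"kind": "navigate", "target": "the kitchen"}))
print(meta_controller({"kind": "fly", "target": "the roof"}))
```

Note that `meta_controller` never touches the target itself; it only makes decisions about which processing to invoke, which is what makes it ‘meta’.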

This two-network design showed a robot, in a limited sense, thinking about thinking: the ‘meta-cognitive’ network made decisions about the ‘task-specific’ network’s processing, rather than about the world directly. That is the process of metacognition in action.

In the coming years, researchers will be exploring ways to implement more complex metacognitive concepts in robots, including ways to make robots aware of the fact that they are robots. A robot that can think about its own thinking in this way is better placed to act autonomously. Metacognition is now an accepted concept within the field of robotics, even though researchers have been circling the idea for decades.

Metamemory

The concept of metamemory also relates to metacognition. Where metacognition is thinking about thinking in general, metamemory is thinking about one’s own memory in particular: knowing what you know, how reliably you know it, and what you do not know. Metamemory is thus a specific component of metacognition, not a synonym for it.

The distinction matters for robots. A robot can monitor the fact that it is thinking without necessarily knowing what it knows: it could be aware that it is carrying out a reasoning process, yet be unable to judge whether the memories that process draws on are present or accurate. This does not mean a robot must lack metamemory, but it does make metamemory a more demanding capability than bare metacognitive monitoring.
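A small sketch makes the distinction concrete. This class and its method names are invented for illustration; the point is that `knows` is a judgement *about* the robot’s memory (metamemory), made without retrieving the memory itself, whereas `recall` is ordinary memory access.

```python
class RobotMemory:
    """Toy memory store with a metamemory layer on top (illustrative only)."""

    def __init__(self):
        self._store = {}       # fact -> value (ordinary memory)
        self._confidence = {}  # fact -> how sure the robot is (0..1)

    def remember(self, key, value, confidence):
        self._store[key] = value
        self._confidence[key] = confidence

    def recall(self, key):
        """Ordinary memory: retrieve the stored value itself."""
        return self._store.get(key)

    def knows(self, key, threshold=0.5):
        """Metamemory: a judgement ABOUT the memory, not the memory itself.
        The robot can answer 'do I know this?' without recalling the value."""
        return self._confidence.get(key, 0.0) >= threshold

m = RobotMemory()
m.remember("charger_location", "dock A", confidence=0.9)
print(m.knows("charger_location"))  # the robot judges that it knows this
print(m.knows("owner_birthday"))    # it knows that it does not know this
```

Knowing that you do not know something — returning `False` for `owner_birthday` rather than guessing — is exactly the opt-out behaviour the animal experiments above were probing for.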

Metacognition has emerged as an essential factor in developing self-determined machines. While it is still a largely unexplored area for robots, it offers a path towards robots that can genuinely reflect on their own thinking.

In a similar way to humans, robots could reflect on their own cognitive processes and use that reflection to make decisions and act autonomously. For example, a robot could decide that it wants to learn how to walk. To decide this, it would need to assess its reasons for wanting to walk, and that requires it to reflect on its own thinking: a metacognitive process.

The robot could then use this self-knowledge when deciding how to perform an action. Knowing why it wants to move its leg, it could look back over its experiences and choose the best course of action based on what it had learnt. Metacognition supplies the information the robot needs to make that decision.

The main problem with this approach is that reasoning about one’s own reasoning is slow, so it could take a long time for the robot to reach a decision and act on it. One way around this is to split the process into two stages: in the first stage, the robot is given a goal; in the second, it determines how to carry that goal out.
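The two stages can be sketched as two small functions. The goal names, the battery trigger, and the step lists are all made up for this example; the point is only the structure — fix the goal first, then plan how to achieve it, so the expensive ‘why’ reasoning is separated from the cheaper ‘how’ reasoning.

```python
def choose_goal(battery_level):
    """Stage 1: settle on a goal (here, from a simple internal motivation)."""
    return "recharge" if battery_level < 0.2 else "patrol"

def plan(goal):
    """Stage 2: work out how to carry the chosen goal out."""
    steps = {
        "recharge": ["locate dock", "navigate to dock", "dock"],
        "patrol": ["pick waypoint", "navigate", "scan"],
    }
    return steps[goal]

goal = choose_goal(battery_level=0.1)
print(goal, "->", plan(goal))
```

Because the goal is fixed before planning begins, the robot never re-opens the ‘why’ question mid-action, which is what keeps the two-stage version fast.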



It turns out that robots are like people in some ways and unlike them in others. And in a sense, you might be a bit like a robot.

