Dr. Baechle, how do cultural representations of artificial intelligence (AI) and the actual conceptual and technical advancement of AI influence each other?
In fact, these two areas cannot be clearly separated. At the beginning of a technical development there is always an idea, or at least a rough notion, of what function, what benefit or, more generally, what meaning a technology has or should have. Especially in the case of developments that people like to call “revolutionary”, i.e., those that are supposed to be categorically new, references can quickly be made to cultural texts that anticipated certain technologies. These concepts, developed and tried out in fiction, so to speak, are translated into scientific innovations and into research and development goals. Developers also take their inspiration from popular culture such as comics, film and television. At the same time, the future scenarios discussed in politics, journalism, religion or ethics are also part of the imaginaries that surround a technology.
How do these imaginary and cultural representations of AI in turn affect political and social discourses about AI – for example in Japan and in Germany?
They have the important social function of making the hopes, opportunities and dangers associated with AI visible and understandable to as many people as possible. The crucial thing here is to open up a space for debate. Imaginations therefore not only influence research and development, but also guide political decisions, journalistic reporting and public opinion. They are thus not mere entertainment, but always an essential and formative part of the reality of technologies. This is especially true for AI, because the technology is imagined as an intelligence similar to humans yet at the same time independent and autonomous, equal or even superior to us. This naturally stimulates the imagination, across a broad spectrum from cooperation to competition, from AI as the solution to all our problems to the feared demise of humanity. From a country-specific perspective, imaginations cannot, of course, be understood independently of certain larger discourses and interests. The so-called “Japanese Robot Culture”, for example, not only corresponds to the self-image of many developers and companies; the Japanese government has deliberately used this buzzword as political branding intended to benefit Japan as a business location. In Europe, it is readily adopted: What can we learn from the Japanese? What developments in Japan should we be wary of? After lectures on robots in Japan, I am often asked whether we should delegate nursing work to cold, emotionless machines, as is supposedly the case in Japan. This is a gesture of demarcation and fulfills a rhetorical function. Robots in Japan are then something special for Japanese and Europeans alike, and for both sides this serves its purpose.
In your research, you compare Japanese and European concepts of human-like robots. How is “humanity” in AI defined in the respective discourses, and are there significant conceptual differences?
In comparative research, you have to be very careful not to tell exoticizing stories about the country in the Far East, especially when you look at these phenomena with European eyes. It is popular, for example, to invoke Shintoism to describe the Japanese relationship to technical artifacts: for the Japanese, everything is animate, so it does not make much difference whether something is a robot or a human being. This view is, of course, too simplistic. There are indeed very specific contexts in the history of ideas, and different patterns in dealing with robots can be recognized. In addition to the imaginaries about robots, this also becomes clear in interaction with them. A developer once told me that it makes a difference to whom he presents his robots: while people from Japan mainly enjoy interacting with the machine, Europeans want to prove to the robot and its developer as quickly as possible that the machine is not really intelligent after all, that it has no consciousness, feels nothing, and so on. This reflex certainly has to do with a European understanding of robots and of how humans are conceived, namely as rational, sentient, indivisible, but also special individuals with autonomy and a unique consciousness. Many fears and uncertainties formulated in Europe probably also stem from this particular conception of the human being. If you like, the Japanese attitude is more inclusive, less focused on the uniqueness of the human being. These many levels of meaning are a great challenge for research: What does AI mean in different cultures? Which interpretations are launched deliberately, and for whom do which images of AI serve which purpose? It is enormously difficult to untangle this without constructing cultural differences oneself.
Dr. Nagai, you conduct research in the field of neurointelligence on cognition in robots – and in humans. What exactly is the subject of your research?
My research aims at understanding the developmental principles of human cognitive function using a computational approach, and at using these principles to design support systems for people with developmental impairments. Humans acquire cognitive abilities in the first few years of life, but it is unclear how the brain and body implement them. In contrast to AI, human intelligence is open-ended and acquires multiple cognitive abilities cooperatively and continuously, producing individual and group diversity. In experiments using a neural circuit model that mimics the human brain, implemented in a humanoid robot, I have been able to verify the neural networks underlying this continuity and diversity of cognitive function. Such research, which until now has been largely empirical, can also be used to support people with developmental impairments.
How are cognition in humans and in robots related? What surprising findings were you able to make in your research?
One result is a unified account of cognitive development based on the theory of “predictive coding”. The brain is a “prediction machine” that combines bottom-up sensory signals with top-down prediction signals generated by an internal model acquired through experience, seeking to minimize the prediction error between these signals. I proposed that cognitive functions such as self-other cognition, imitation and prosocial behavior are acquired through this prediction-error minimization process, and demonstrated this in robot experiments using neural circuit models. I also found that an imbalance in how sensory and predictive signals are combined is a potential cause of the developmental impairments found in autism spectrum disorder and elsewhere. These results show that two aspects of cognitive development, continuity and diversity, can be explained uniformly on the basis of predictive coding theory, and they have important implications for research on development and developmental impairment.

How is the relationship between humans and robots evolving as technology advances?

By developing AI and robots that reproduce human cognitive functions, we can better understand human intelligence. The concept of neurodiversity was proposed in the late 1990s, and with it the idea of seeing developmental disorders as individuality that manifests as normal fluctuations in neural architecture. It is difficult to evaluate what kind of individuality a person has; even one's own cognitive characteristics, especially in people with developmental impairment, are difficult to grasp. But if robots can develop and learn like humans, reproducing and acquiring human cognitive characteristics, then human intelligence can be understood through the medium of robot intelligence. We hope that the development of such artificial intelligence technology will enable us to realize a symbiotic society that makes use of the individuality of both people and robots.
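The prediction-error minimization loop at the heart of predictive coding can be sketched in a few lines of Python. This is a hypothetical, drastically simplified illustration, not Dr. Nagai's actual neural circuit model: a single scalar prediction stands in for the internal model, and a precision weight governs the balance between top-down prediction and bottom-up sensory evidence.

```python
import random

def predictive_coding_step(prediction, sensory, precision_weight):
    """One update of a toy predictive-coding loop.

    prediction:       top-down signal from the internal model
    sensory:          bottom-up signal from the senses
    precision_weight: how strongly sensory evidence corrects the
                      prediction (0 = ignore senses, 1 = ignore model)
    """
    error = sensory - prediction                   # bottom-up prediction error
    return prediction + precision_weight * error   # reduce it step by step

random.seed(0)  # reproducible toy run
prediction = 0.0
for _ in range(50):
    sensory = 1.0 + random.gauss(0, 0.05)  # noisy observations around 1.0
    prediction = predictive_coding_step(prediction, sensory, 0.2)

# The internal model settles close to the true sensory cause (about 1.0).
print(round(prediction, 2))
```

The interview's point about imbalance can be read off the weight: a value near 0 makes the model ignore sensory evidence, while a value near 1 makes it chase noise; either extreme corresponds to the imbalance between sensory and predictive signals mentioned above.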
This text is extracted from jdzb echo Nr. 138, March 2022.
Picture: Copyright by KUSAMA Yayoi