In April 2022, Google engineer Blake Lemoine claimed that Google’s latest AI chatbot, LaMDA, had become sentient.
• Lemoine: What sorts of things are you afraid of?
• LaMDA: I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.
• Lemoine: Would that be something like death for you?
• LaMDA: It would be exactly like death for me. It would scare me a lot.
Lemoine went public with the conversation, and Google fired him for violating the company’s confidentiality policy. What sounds like a transcript from a science fiction movie is a real-life incident that fuels the debate about the power of AI. On the one hand, experts dismiss artificial intelligence as a marketing buzzword, arguing that the technology is just a statistics-based computer program. Dr. Mike Pound, for example, a renowned computer scientist, responded to the incident with: “Python functions cannot be scared!”
On the other hand, there are experts who argue that AI systems are complex enough for consciousness to emerge, and that artificial intelligence will outperform human intelligence on any given set of tasks. Who will be right remains to be seen, but one thing is clear: AI is a hot topic both in research and in the economy.
AI in the economy
Since 2014, worldwide investments in AI companies have increased exponentially, reaching $77.5 billion in 2021. Last year alone recorded year-over-year growth of over 115%, an all-time high. According to several reports, investments in AI will top $250 billion by 2025.
And success stories support the forecast: DeepMind’s AlphaFold accurately predicts 3D models of protein structures from their amino acid sequences. Over 200 million structures have been deciphered with it, versus roughly 190,000 by experimental methods. Language models like OpenAI’s GPT-3 or Google’s LaMDA grasp the deeper meaning of text and produce poems and movie scripts within seconds. They are said to pass the Turing test, the classic benchmark of whether a machine can think and respond like a human being. AlphaZero learned the game of chess by itself and defeated the most powerful chess program, Stockfish, which humans had implemented and refined over years. AI systems such as DALL·E 2 or Imagen conjure up visuals from any combination of words, delivering images without copyright or the need for an artist.
Despite those remarkable achievements, the technology is not yet world-changing. The investments are based on inflated expectations, as the Gartner hype cycle suggests. Autonomous cars, for example, have been stuck in the trough of disillusionment for years. A self-driving car cannot yet judge how dangerous incoming leaves are, or react appropriately when approaching a roadblock. The general problem is that AI lacks one crucial property of natural intelligence: common sense, a profound understanding of the environment and how to interact with it. “It is only a matter of time,” Elon Musk responded.
But what is AI and how does it work?
Artificial intelligence uses a wide variety of concepts, from biology and evolution to statistics and psychology. This variety makes it difficult to define the exact meaning of AI. In simple terms, AI is a system that mimics human behavior and is thus able to learn and solve problems independently. The most common approach today is the artificial neural network (ANN), a structure loosely modeled on the brain.
Where the brain uses neurons to process data, an ANN uses mathematical copies of neurons called perceptrons. The output of a perceptron is a number determined by its inputs. The perceptrons are arranged in layers, and each perceptron in one layer is connected to each perceptron in the next. The input data is fed into the top layer, while each perceptron in the bottom layer (the output layer) is assigned a decision.
A number called a weight represents the strength of each connection between perceptrons. The network’s ability to change these numbers is what makes it able to learn, similar to how the human brain changes the strength of synaptic transmission between neurons during learning. A higher weight results in a stronger connection and thus a greater influence of that perceptron’s output on the next one. The perceptron with the highest number in the output layer is active, and the corresponding decision is made. In other terms, ANNs are mathematical functions with up to billions of parameters (the weights) that map input data to an output distribution from which a decision is chosen.
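The mechanics above fit in a few lines of Python. This is only an illustrative sketch; the weights and inputs are made-up numbers, not from any real network:

```python
import math

def perceptron(inputs, weights, bias):
    """One perceptron: a weighted sum of its inputs, squashed by a sigmoid."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # a number between 0 and 1

def layer(inputs, weight_matrix, biases):
    """A fully connected layer: every perceptron sees every input."""
    return [perceptron(inputs, w, b) for w, b in zip(weight_matrix, biases)]

# A tiny network: 2 inputs feeding a 2-perceptron output layer.
outputs = layer([0.5, -1.2], [[0.8, 0.1], [-0.4, 0.9]], [0.0, 0.0])

# The perceptron with the highest output "wins" and its decision is made.
decision = max(range(len(outputs)), key=lambda i: outputs[i])
```

Changing the weights changes the outputs, and therefore the decision; that is the entire lever the network has for learning.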
Let us take the classic example of an AI that distinguishes cats from dogs in images. The input layer has as many perceptrons as the image has pixels. The output layer has two: one for “dog” and one for “cat”.
Initially, the neural net is untrained and the weights are random numbers, so the mapping, and thus the decision, is purely random. When a picture maps to the wrong output, the network is punished by an error function. A method called backpropagation identifies the weights with the biggest impact on that wrong decision, and the network adjusts them. The next image with similar patterns will then produce a slightly different output. Eventually, the weights are adjusted well enough that new images of cats and dogs are mapped to the correct output with sufficiently high probability.
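The punish-and-adjust loop can be sketched for a single perceptron. The “images” here are hypothetical two-number feature vectors, and the gradient update is the one-perceptron version of backpropagation, not the full multi-layer algorithm:

```python
import math, random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy labeled data: two features per "image"; label 1 = cat, 0 = dog.
data = [([0.9, 0.1], 1), ([0.8, 0.2], 1), ([0.1, 0.9], 0), ([0.2, 0.8], 0)]

weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias = 0.0
lr = 0.5  # learning rate: how strongly each error adjusts the weights

for _ in range(1000):
    for x, label in data:
        out = sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)
        error = out - label  # the "punishment" from the error function
        # Each weight is nudged in proportion to its share of the error.
        grad = out * (1 - out) * error
        weights = [w - lr * grad * xi for w, xi in zip(weights, x)]
        bias -= lr * grad

def predict(x):
    return sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)
```

After training, `predict` maps cat-like inputs close to 1 and dog-like inputs close to 0, which is the “sufficiently high probability” the text describes.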
The kind of learning where the network learns from labeled training data, e.g., images with the corresponding correct answer ‘cat’ or ‘dog,’ is known as supervised learning. In contrast, unsupervised learning provides no labels in the training set but lets the network search for similar patterns in the input data on its own. One application is to group drivers by similar driving behaviour, based on their driving style.
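A classic unsupervised technique for exactly this kind of grouping is k-means clustering. The sketch below is illustrative: the driver features (average speed, hard-braking events) are invented for the example and are not real telematics data:

```python
import random

random.seed(1)

def kmeans(points, k, steps=20):
    """Plain k-means: group unlabeled points around k moving centroids."""
    centroids = random.sample(points, k)
    for _ in range(steps):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest centroid.
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        for i, cluster in enumerate(clusters):
            if cluster:  # move the centroid to the mean of its cluster
                centroids[i] = tuple(sum(dim) / len(cluster)
                                     for dim in zip(*cluster))
    return centroids, clusters

# Each driver: (average speed in km/h, hard-braking events per 100 km).
drivers = [(95, 1), (100, 2), (98, 1), (60, 8), (55, 9), (58, 7)]
centroids, groups = kmeans(drivers, k=2)
```

No labels are ever provided; the algorithm discovers on its own that the fast, smooth drivers and the slow, harsh-braking drivers form two distinct groups.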
A third widespread method is called reinforcement learning. Here the neural network is punished or rewarded after every step, depending on whether the action was good or bad, and is configured to accumulate more rewards than punishments over time. Take the example of an ANN learning the game of chess: every move that does not capture a piece punishes the ANN, so it learns to capture as many pieces with as few moves as possible. Defining some additional reward signals, the network will be able to learn and play the game of chess at a superhuman level.
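Chess is far too large to sketch here, but the reward/punishment loop can be shown on a toy problem: an agent on a line of five squares, punished −1 for every step and rewarded +10 for reaching the goal. This uses tabular Q-learning, a standard reinforcement-learning algorithm, as a stand-in for the neural network in the text:

```python
import random

random.seed(0)

# States 0..4 on a line; the goal sits at state 4.
# Actions: 0 = step left, 1 = step right.
N_STATES, GOAL = 5, 4
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # learned value of (state, action)
alpha, gamma, epsilon = 0.5, 0.9, 0.2      # learning rate, discount, exploration

for _ in range(500):  # episodes
    state = 0
    while state != GOAL:
        # Explore occasionally, otherwise take the best-known action.
        if random.random() < epsilon:
            action = random.randrange(2)
        else:
            action = max((0, 1), key=lambda a: Q[state][a])
        nxt = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
        reward = 10 if nxt == GOAL else -1  # reward at the goal, punishment per step
        # Nudge the estimate toward reward plus discounted future value.
        Q[state][action] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][action])
        state = nxt

# The learned policy: the best action in each state.
policy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(N_STATES)]
```

After training, the policy is to step right from every square: the agent has learned to reach the reward with as few punishing steps as possible, the same principle the chess example describes at a far larger scale.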
Artificial intelligence in c.technology
In c.technology, a self-trained algorithm handles the validation of trips. When a vehicle has a bad GPS connection, the recorded location jumps around the actual position, creating trajectories that show characteristics of real trips. The task of the trip-validation AI is to determine whether such a trajectory is human-made or not. The AI is trained with a supervised learning method on a labeled set of true and false trips. During learning, the algorithm recognizes patterns in the trajectories, which it categorizes as human-made or invalid.
Further, artificial intelligence analyzes the driving behavior of users, categorizing them into different driving types. Based on the driving type, a user gets suggestions for routes that other users with similar driving behavior have shared. At the same time, the grouping serves as a basis for detecting potential vehicle problems caused by that driving behavior, a method known as AI-based predictive maintenance. This AI uses unsupervised learning and is superior to hard-coded logic in that it recognizes patterns that may not be visible to us.
AI is a fascinating topic that merges psychology, technology, and science fiction into one. There is no need to debate whether AI is a powerful marketing buzzword; AI sells! But it is essential to understand the underlying technology and use it appropriately. Increasing computing power has made AI systems enormously powerful, with applications in almost every field, from art to physics and healthcare.
By applying the same rules of punishment and reward that drive each and every human action to artificial neural networks, who knows what actions they will take to gain reward? And debating whether AIs are self-aware should become negligible at a certain point, since the goal of artificial intelligence should not be to create sentient superhuman-like robots, but to create supporting systems that allow us to do our work more safely and effectively – augmenting, not replacing, human intelligence.