The launch of ChatGPT was a bombshell. When I used the service for the first time, I felt something similar to what the first telephone users or the first airplane passengers must have felt: is this really possible now? Has this stepped out of the pages of science fiction into our lives?
Heated discussions started immediately. Interestingly, most of the talk began not inside the community of AI specialists but among a broader audience that had previously shown no interest in artificial intelligence, neural networks, or machine learning. As a result, ChatGPT's launch pushed millions of people to think about the potential of artificial intelligence: whether AI can replace humans and, if so, in what areas.
But one more important question needs to be discussed now: what dangers await us from the interaction between AI and people, in both the near and the distant future?
Let’s look at the dangers we can face today.
Artificial intelligence can be an excellent assistant for a professional but a dangerous tool for an amateur. The recommendations and solutions offered by AI are not always correct; sometimes they contain errors and inaccuracies. To notice those errors, users need sufficient expertise in the area where they are trying to apply AI. This is especially true in fields such as engineering and medicine, where users must be able to evaluate the information received from AI critically.
On the one hand, the benefits people receive from using AI are very tempting:
- Using the capabilities of AI to process large amounts of data quickly;
- Searching for information and obtaining new knowledge;
- Developing better solutions to complex problems.
On the other hand, AI can become a trap for users who are new to the field in which they plan to apply its capabilities. One of the main problems here is that AI carries extremely high authority in the eyes of people who don't understand the basic principles of how it works.
When people work on the Internet and use search engines, they follow certain rules for estimating the reliability of various data sources. In the case of AI, however, such a practice and culture of interaction has not yet been sufficiently formed. A person with insufficient knowledge may not realize that an AI recommendation contains an error, and in some activities this can lead to fatal results.
Looking at the long-term perspective, we need to understand that broad access to AI can significantly change how specialists are trained. Tools based on neural networks can become a springboard that allows students to leap beyond their current abilities, or they can become an obstacle to developing creative thinking, because most tasks will be shifted to the electronic mind.
Today, educational institutions should consider how to introduce AI into the learning and working process harmoniously.
A growing gap between using something and understanding how it works can be seen on the horizon. This problem already exists, but it is not yet critical. Many complex systems are part of the life around us, and yet we know almost nothing about how they work inside. The Internet, jet engines, microwave ovens, internal combustion engines: we use all of these every day, often without knowing even the basic principles behind them. Nevertheless, many specialists not only understand how these systems function but also work on their modernization and development.

Now imagine that, in the distant future, AI takes over most of the tasks of servicing complex systems and mechanisms. In that case, a certain body of knowledge may disappear from human and engineering society, because we will no longer need to care about many things; AI will care about them for us. And then we will once again find that Arthur C. Clarke was right in his third law: any sufficiently advanced technology is indistinguishable from magic.