Demystifying AI: From Sensationalism to Facts

Cefas Garcia Pereira
5 min read · Jul 24, 2024


Photo by Nick Fewings on Unsplash

In recent years, artificial intelligence (AI) has made great strides, and with these advances come concerns about the role these technologies will play in our lives. Science fiction movies and sensationalist headlines often paint a dystopian picture where superintelligent AIs dominate the world and enslave humanity. However, the reality is quite different.

AI, including advanced models like ChatGPT, is still far from possessing the autonomy and cognitive capacity needed to take over the world. These models are essentially sophisticated computer programs that process and generate text based on patterns in the data they were trained on [1]. They do not have consciousness, emotions, or intent; an AI’s understanding of the world is limited to what it was programmed to do and the data it has received.

Something genuinely revolutionary is generative AI, that is, models capable of generating content from their training data. Another important factor is that models like ChatGPT or Gemini operate through Natural Language Processing (NLP). In traditional machine learning models, the response is usually numerical, often a probability: identifying whether an image shows a cat or a dog, whether a slope is at risk of landslide, forecasting the weather, and so on. All of these models are useful for their purposes, of course, but the experience is much more impressive when the model’s response is text in the form of a conversation rather than a probability. This matters because it brings AI closer to the human universe. Indeed, this was the test Alan Turing proposed when he asked whether a machine could pass as a person without being detected [2]: he framed it around natural-language conversation rather than around a machine playing a game or performing some other task.
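To make the contrast concrete, here is a minimal sketch (my own illustration, not from the original text) of the “traditional” kind of model described above: a classifier whose entire answer is a probability. The features and labels are invented for the example; a generative model like ChatGPT would instead answer the same question with a sentence of text.

```python
# Minimal sketch: a traditional ML model answers with a number (a probability).
# Features and labels below are toy values invented for this illustration.
from sklearn.linear_model import LogisticRegression

X = [[0.2, 0.9], [0.8, 0.1], [0.3, 0.7], [0.9, 0.2]]  # made-up image features
y = [0, 1, 0, 1]                                       # 0 = cat, 1 = dog

clf = LogisticRegression().fit(X, y)
prob_dog = clf.predict_proba([[0.25, 0.8]])[0, 1]
print(f"Probability of 'dog': {prob_dog:.2f}")  # the whole answer is this number

# A generative NLP model, by contrast, would reply with text such as
# "That looks like a cat to me." - which feels much closer to a conversation.
```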

For this reason, we are easily impressed by models capable of conversing and generating images, audio, and so on. But it is worth remembering that, essentially, these AIs reproduce patterns with some level of randomness, which gives us a sensation of creativity; they do not create truly new ideas and concepts. That is:

If we trained a model similar to ChatGPT only on scientific records up to the early 20th century, it would be a mistake to believe that this model could arrive at Einstein’s Theory of Relativity.

This is because it does not synthesize new knowledge; it only reproduces the patterns it was trained on.
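To illustrate what “reproducing patterns with some level of randomness” means in practice, here is a small sketch (again my own illustration, not from the article) of how a language model picks its next word: it samples from a probability distribution learned from data, and a “temperature” knob controls how surprising the choice feels. The words and probabilities are invented.

```python
import random

# Toy next-word distribution a model might have learned for the prompt
# "The cat sat on the ..." (probabilities invented for illustration).
next_word_probs = {"mat": 0.60, "sofa": 0.25, "roof": 0.10, "moon": 0.05}

def sample_next_word(probs, temperature=1.0):
    """Re-weight the learned probabilities by temperature, then sample.
    Higher temperature flattens the distribution, so rarer words appear more often."""
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(list(probs.keys()), weights=weights, k=1)[0]

print(sample_next_word(next_word_probs, temperature=0.2))  # almost always "mat"
print(sample_next_word(next_word_probs, temperature=1.5))  # occasionally "sofa" or "moon"
```

The randomness is what gives the output its “creative” feel, but every candidate word, and its probability, comes from the training data.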

Furthermore, studies have been showing that the evolution of these AIs is reaching a plateau [3, 4, 5]. They are no longer improving at the pace seen between the early model generations. Not only that, they have even shown declines in some scenarios, with responses that are less stable and less accurate [3]. This refutes an idea many people hold: “it is not capable yet, but it is only a matter of time.” Research has shown that AI will not keep learning forever; in fact, learning is becoming slower and more financially and computationally costly. Therefore, unless an unprecedented technical advance occurs, either in the power of the hardware used for training or in the efficiency of the algorithms, the models we know today tend towards stagnation.

It is impossible to deny the importance of these models and their innovation, but the significance of this movement is exaggerated to serve the interests of the companies involved and their investors. We will increasingly see this kind of intelligence embedded in products for market reasons. A London-based study surveyed 2,830 AI startups in the EU and found that 40% of them did not use AI in any significant way [6]; in other words, there is an excessive fuss around AI. NVIDIA, a manufacturer of the graphics cards used to train these models, has quadrupled in size in recent years [7]. OpenAI, developer and maintainer of ChatGPT, in turn, needs to keep the topic in vogue to justify its shareholders’ investment. For these reasons, the subject is pushed so heavily by companies and the media.

Artificial intelligences, however, are excellent at performing repetitive tasks and operating within a minimally controlled environment: generating ads for social media, fixing code bugs, conducting code reviews, drafting emails, creating presentation templates, and so on. They do not, however, replace the human interaction needed to oversee their operation. AI does not know a company’s business rules well enough to, for example, develop a system end to end, and it will not gain that capability in the short term. It is important to be clear about this.

The fact that an AI can write the source code of a Snake game does not make it reasonable to say that it will soon be able to produce a game for the latest generation of consoles.

The distance between the two is enormous! Programs developed by professionals are designed to be maintainable and adaptable, and they follow a set of practices [8] that AI does not take into account. To reiterate: AI only reproduces the patterns it was trained on.

Although AIs like ChatGPT are powerful and innovative tools, they have intrinsic technical limitations, such as a lack of autonomy, and they remain subject to human control and regulation. Instead of fearing these technologies, we should focus on how to use them responsibly and for the benefit of society.

While artificial intelligence itself does not have the potential to replace a professional, its inadequate implementation can have negative consequences in the workplace. AI could reduce workloads, but depending on how a company applies it, it can fuel a relentless pursuit of productivity, intensifying the pace of work and the demand for quick results. Additionally, automation threatens the sense of job security, generating constant pressure among workers who fear being replaced by supposedly more efficient and cheaper machines. This scenario contributes to heightened exploitation and job instability.

Technology should be seen as an ally in our progress, not as a threat. By continuing to develop and regulate AI responsibly, we can ensure that it improves our lives while preserving safety and ethics. That is why it is necessary to understand it and approach it with a level head.

Thanks for reading!

References

  1. OpenAI. 2023. GPT-4 Technical Report. https://arxiv.org/pdf/2303.08774
  2. Shieber, S. M. (ed.). 2004. The Turing Test. The MIT Press.
  3. Chen, L. et al. 2023. How Is ChatGPT’s Behavior Changing over Time?
  4. Goli, S. K. 2024. ChatGPT’s Growth Hits a Plateau: Navigating the Challenges Ahead for OpenAI. Medium. https://medium.com/@golisaikrupa.409/chatgpts-growth-hits-a-plateau-navigating-the-challenges-ahead-for-openai-c1774e4a1bd7
  5. Pogla, M. 2024. ChatGPT’s Growth: A Detailed Examination of the Recent Stagnation. AutoGPT Official. https://autogpt.net/chatgpts-growth-a-detailed-examination-of-the-recent-stagnation/
  6. Schulze, E. 2019. 40% of A.I. Start-ups in Europe Have Almost Nothing to Do with A.I., Research Finds. CNBC. https://www.cnbc.com/2019/03/06/40-percent-of-ai-start-ups-in-europe-not-related-to-ai-mmc-report.html
  7. The Economist. 2024. Why Do Nvidia’s Chips Dominate the AI Market? https://www.economist.com/the-economist-explains/2024/02/27/why-do-nvidias-chips-dominate-the-ai-market
  8. Martin, R. C. 2008. Clean Code. Pearson.

