This was a real conversation with the technology that is on everyone’s lips. When asked, “How does ChatGPT influence politics?”, the tool replied: “ChatGPT is a language model developed by OpenAI and has no ability to influence politics directly. Its purpose is to provide answers to questions and perform linguistic tasks, but it does not have political opinions or political intentions. Influence on politics depends on how the technology and the data it generates are used, and it is the responsibility of individuals and organizations to apply it ethically and responsibly.”
This exchange with the technology in question begins with a disclaimer. The computer also answers things that have nothing to do with the question: starting from a pretext, it goes on to give lessons in responsibility and ethics.
ChatGPT is made by a person. It may be a large team of programmers under a foundation with various guarantees and credentials, as its website states; ultimately, however, just as in a publication, the judgment of whoever leads it prevails.
In the case of OpenAI, the organization that developed it, a gray, unknown executive imposed his will in deciding what the text regurgitated in response to a query of this nature would say and, more importantly, what it would omit.
It’s a conversation with technology that doesn’t take responsibility
This is an opinion search engine, and a very well-written one; even so, it is still a conversation with an OpenAI executive who has a particular life experience. We don’t know that person, but we know very well how he thinks.
The plain fact is that the exercise of politics in every country involves varying degrees of ethics and responsibility. Whoever programmed this tool, however, holds his own belief on the matter and wants to hand us a selection. The bottom line is that ChatGPT, in this iteration, is more of the same. There is a war between the objective and the subjective, visible in every corner of intellectual activity in recent years.
After subjectivity’s assault on objectivity, we now see ethics, disguised as a universal criterion, attacking science. The sciences now have to be ethical; science drags an ethical dead weight, and that burden was reinforced with the rise of subjectivism.
Before, information came through people who strove to present data with maximum clarity, separate from the opinions of those presenting it. From journalists to vaccine developers, the struggle to find broad criteria was constant.
In the face of the subjectivist torrent, everything is confirmation bias. Data is consumed, and decisions are made even on matters of the highest importance, to confirm one’s own way of seeing things, not to see what reality is like outside one’s comfort zone.
With ChatGPT, another opportunity to return to broad or more objective criteria has been lost. Once again, the tool promotes one person’s ideas. What is more, it accentuates the drift toward the subjective, passing up the chance to seek some return to the data itself, free of interpretation or recommendations. In the end, this is one more attempt by a person to pass off his worldview as universal, when it really isn’t. Things as they are.
*Geopolitical analyst. Philosopher and lawyer with a specialization in anthropology from Temple University in Philadelphia. Author of Disillusionism (Editorial Planeta).