The following poem was written by ChatGPT. We asked it to write in the style of Federico García Lorca, in Spanish and in consonant rhyme. And ChatGPT did not refuse: artificial intelligence takes on any challenge. The result, however, shows many of the shortcomings that these machines still have when they set out to create:
Anxiety takes over my being
seeing the bus leave without me.
A destiny that I cannot reach
elusive opportunity.
Time slips through your fingers
like the water of a stormy river
and I feel trapped in a maze
where the future is blurred.
Sadness invades me alone,
this car has left me.
Hope is fading and I’m sinking
in the depths of despair.
It’s useless to fight, the bus is gone
and I’m left with an unclear fate,
missed opportunity,
a tragedy that torments me.
Although this is nothing more than a rough approximation, with no trace of Lorca, the result is still surprising. It’s scary what the tool can do.
It seems like magic that an AI system is able to learn and then apply what it has learned to whatever is required of it. In reality, ChatGPT is a chat system based on the GPT-3 language model, characterized by its ability to generate coherent and natural text. It writes poetry, letters, summaries, dialogues, source code… Lorca’s poem demanded more than ChatGPT is capable of so far, but other requests are being carried out with reasonable competence.
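To convey the statistical intuition behind that text generation, here is a toy sketch in Python. This is emphatically not how GPT-3 works internally (GPT-3 is a neural network with billions of parameters); the bigram counting below, along with the sample corpus and function names, is our own simplified illustration of the core idea: predicting a plausible next word from patterns seen in training text.

```python
import random
from collections import defaultdict

# A tiny "training corpus" (our own invention, for illustration only).
corpus = "the bus is gone and the future is blurred and the bus is late".split()

# Learn which words follow which: the simplest possible language model.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length, seed=0):
    """Generate text by repeatedly sampling a likely next word."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        options = follows.get(words[-1])
        if not options:
            break  # no known continuation
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the", 6))
```

Scaled up from word counts to deep neural networks trained on vast swaths of the Internet, this same predict-the-next-word mechanism is what lets the model produce fluent letters, summaries and source code.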
ChatGPT and non-human authorship
Since its launch at the end of November last year, ChatGPT has won over the general public because it is effective and easy to use. Just ask questions or give instructions as you would in a normal conversation. The chatbot offers a reasonable answer in almost any discipline, drawing on the knowledge stored on the Internet. It solves almost any problem and writes out the solution as if it were a more or less experienced person.

At first glance, it is difficult to distinguish artificial text from human text. This opens the door to new ethical issues: misinformation, the spread of low-quality content, and the loss of trust in written messages. And it raises questions, beyond the philosophical, about non-human authorship and copyright. To whom should an article or a poem produced by artificial intelligence be attributed?
In the field of education, the arrival of this tool has caused alarm in schools and universities, which were taken by surprise. The opportunities it opens up come with great risk. The main fear is plagiarism: it will be difficult to tell whether a piece of work is a student’s own creation or, on the contrary, a product of artificial intelligence. Academic work is thus being called into question, and some voices are predicting changes in the way students are assessed.
It’s amazing, but it’s not magic
ChatGPT is versatile and creates natural-looking texts with surprising narrative and conversational fluency, but it has limitations. One of them, as the chatbot itself admits, is that its training data ends in 2021 and, since it has no Internet access, it lacks up-to-date information. For now this is an insurmountable barrier in any quick comparison with a search engine.
On the other hand, it is not enough for the system to be competent; it must also act ethically. A generative model can reproduce the cultural, ethnic or gender stereotypes contained in the data it was trained on. But the system can be calibrated and retrained to avoid harmful biases and thus improve the quality of its results.
Another limitation of ChatGPT is that its output is not always backed by reliable sources or hard evidence. It is these, and not the apparent coherence of the text, that determine the truth or strength of an argument. Rigor is therefore not guaranteed, and the information obtained must be checked against reliable sources and experts in the field.
Let’s give it some margin
This artificial intelligence is therefore valid for situations that tolerate a certain margin of error, even some nonsense. But it is not suited to critical matters such as academic work or legal, financial or medical advice. It produces the deceptive illusion of rational thought, but it does not reason and has no reliable knowledge of the world. It does not understand, in any human sense, anything it writes.
Because ChatGPT is programmed to be primarily conversational, it is not necessarily truthful. Its answers, although eloquent and even convincing, are sometimes incorrect or absurd, because the facts, people, data or sources are invented. It can fool us with the poise of a compulsive liar. And one cannot even say that it is lying, since it has no model of truth.

In the literary field, the GPT-3 model is able to mimic an author’s style. It captures the essence of the style, the vocabulary and even the “atmosphere” of the texts. But it needs to be fed a corpus wide enough and trained on specific tasks. For the poem that opens this article, we asked ChatGPT to imitate the emotional and lyrical style of Federico García Lorca, characteristic of his sensual and poetic language. We also asked it to rhyme the verses in consonant. But here, too, it utterly missed the mark.
There is an explanation: rhyming poetry is a complex process that requires a thorough command of the phonetic and grammatical rules of the language and of the relationships between words. And although ChatGPT is programmed to work in several languages, it has not been trained to generate rhymes in Spanish.
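To see why consonant rhyme demands that phonetic knowledge, consider what even a crude checker must do. In Spanish, two words rhyme in consonant when all their sounds match from the stressed vowel onward. The Python sketch below is our own rough approximation: it uses a simplified stress heuristic (words ending in a vowel, n or s are stressed on the penultimate syllable; others on the last; a written accent overrides both), ignores diphthongs, and all names are ours.

```python
import unicodedata

ACCENTED = "áéíóú"

def strip_accents(s):
    """Remove accent marks so endings compare on letters alone."""
    return "".join(c for c in unicodedata.normalize("NFD", s)
                   if unicodedata.category(c) != "Mn")

def rhyme_ending(word):
    """Approximate the rhyming ending: the word from its stressed
    vowel onward. A crude heuristic, not full Spanish accentuation."""
    word = word.lower()
    # A written accent marks the stressed vowel explicitly.
    for i in range(len(word) - 1, -1, -1):
        if word[i] in ACCENTED:
            return strip_accents(word[i:])
    plain = strip_accents(word)
    vowel_idx = [i for i, ch in enumerate(plain) if ch in "aeiou"]
    if not vowel_idx:
        return plain
    if plain[-1] in "aeiouns" and len(vowel_idx) > 1:
        start = vowel_idx[-2]   # llana: penultimate vowel stressed
    else:
        start = vowel_idx[-1]   # aguda: last vowel stressed
    return plain[start:]

def rhymes_consonant(a, b):
    """Consonant rhyme: identical ending from the stressed vowel."""
    return rhyme_ending(a) == rhyme_ending(b)
```

Even this toy version needs stress rules, accent handling and syllable structure; a model never trained on Spanish phonetics has none of that, which is precisely why ChatGPT missed the mark.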
Heading into the future
Language modeling technology is steadily advancing and all indications are that GPT-4, the next generation of GPT, will improve the ability to understand language and create rich content. Despite their shortcomings, over the next few months we will see the widespread adoption of these systems, which will become a very profitable business. Automatic content will become more and more commonplace.
In fact, efforts are already being made to integrate it into various environments. One example is the search interface Perplexity AI. While still in its infancy, it combines GPT-3 with Bing to resolve queries, linking the answers to relevant sources of information. ChatGPT, for its part, will soon arrive on Microsoft’s Azure service. And there are signs that Google, the search giant, is making a move too.
As we have seen, the synthetic nature of artificial texts is difficult to detect. And there will be more and more of them, so tools to identify them will become important. There are rudimentary developments that make it possible to distinguish human texts from texts written with generative models. But their effectiveness is relative, and they must be combined with common sense and a certain pedagogy in order to approach these writings with critical thinking.
Jorge Franganillo, Professor at the Faculty of Information and Audiovisual at the University of Barcelona and Javier Guallar, Professor at the Faculty of Information and Audiovisual at the University of Barcelona
This article was originally published on The Conversation. Read the original.
Source: RPP
