Microsoft recently acquired a 49% stake in OpenAI, the company behind ChatGPT, for $10 billion.
ChatGPT is a natural language processing (NLP) artificial intelligence platform that generates coherent and natural text in response to various queries. To do this, the model was trained on a large corpus of texts. This allows it to be used for tasks such as text generation, question answering, and general natural language processing.
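For readers who want to see what "querying" such a model looks like in practice, here is a minimal sketch, assuming OpenAI's public Chat Completions HTTP API, an API key stored in an environment variable and a Node.js runtime with fetch available; the model name and the askModel helper are illustrative choices, not something drawn from this article.

// Illustrative sketch only: querying a hosted language model over HTTP.
// Assumes OpenAI's public Chat Completions endpoint and an API key in
// the OPENAI_API_KEY environment variable; the model name is an assumption.
async function askModel(question: string): Promise<string> {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo", // hypothetical choice of model
      messages: [{ role: "user", content: question }],
    }),
  });
  const data = await response.json();
  return data.choices[0].message.content; // the generated answer
}

// Example: the kind of prompt used for this article.
askModel("What ethical dilemmas does the academic use of ChatGPT raise?")
  .then((answer) => console.log(answer));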
Academic Use: Issues of Honesty and Originality
The use of language models such as ChatGPT for academic work raises a number of ethical issues. While they can make research and writing tasks more efficient, allowing scientists to focus on the more complex, higher value-added aspects of their work, there are questions about the integrity and originality of the work being created.
One of the main ethical issues associated with the use of language models in academic work is plagiarism. Using a model to generate text makes it much easier to present someone else's work as one's own: by simply paraphrasing automatically, that is, saying the same thing in different words and with a different syntactic order, the same work can be passed off as a new one.
This can devalue the hard work and dedication required to prepare research papers, which, in turn, can also undermine the credibility of the academic community as a whole.
Detecting this kind of plagiarism is also much more difficult, because the text generated by the model is likely to differ from the original source. As researcher Daniel González Padilla has noted, generative language models such as ChatGPT can be used to create plagiarized texts whose wording is always different from the original and therefore difficult to detect.
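To illustrate why, here is a minimal sketch (an assumption-laden example, not a tool cited by the authors) that compares word trigrams between a source sentence and a paraphrase: almost nothing overlaps verbatim, even though the underlying idea is the same.

// Illustrative sketch: why verbatim-overlap checks miss paraphrased text.
// Word-level trigrams and the example sentences are assumptions for this demo.
function trigrams(text: string): Set<string> {
  const words = text.toLowerCase().replace(/[^\p{L}\s]/gu, "").split(/\s+/);
  const grams = new Set<string>();
  for (let i = 0; i + 2 < words.length; i++) {
    grams.add(words.slice(i, i + 3).join(" "));
  }
  return grams;
}

// Fraction of the suspect text's trigrams that also appear in the source.
function overlap(source: string, suspect: string): number {
  const src = trigrams(source);
  const sus = trigrams(suspect);
  let shared = 0;
  for (const g of sus) if (src.has(g)) shared++;
  return sus.size === 0 ? 0 : shared / sus.size;
}

const original = "Language models raise serious concerns about plagiarism in academic work.";
const paraphrase = "In scholarly writing, the use of generative models creates real worries about copied ideas.";
console.log(overlap(original, paraphrase)); // close to 0: same idea, no shared wording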
Lack of originality
Another ethical issue is that the use of language models in academic work can lead to a homogenization of ideas and points of view.
If people rely on these models, the generated text may be less diverse than purely human-written text. This can lead to stagnant ideas and a lack of critical thinking, a type of thinking that is especially needed in the academic community.
Lack of context and consequences
Also, language models are not designed to provide full context for the information they generate, as ChatGPT itself acknowledges. The model produces text based on its input and training data, but it is unable to understand the ethical, cultural or political implications of the information it provides.
This lack of reflection can lead to misinformation, bias or even discrimination. All these ethical considerations must be taken into account before using these models in academic work.
It would also be appropriate to develop guidelines and best practices to ensure the integrity and originality of the work produced.
Intelligence with a conscience
Philosopher Juan Arana argues that consciousness is, and will remain, an impenetrable riddle for science. Moral conscience, he says, frees us for moral action and goes beyond mere cognitive experience.
Whether new and challenging advances will ever give rise to consciousness is as unclear as the mystery of why humans have it in the first place, Arana continues. He therefore warns of the need to stop and think about scientific advances and their consequences. In doing so, he refers specifically to conscience, arguing that it is an ability that cannot be reduced to technical information systems: we will still have to think for ourselves, and about our own work, in both our professional and personal lives.
Help?
As you may not have noticed, this article was written using ChatGPT. The goal was twofold: to test the model and to invite the reader to ask a few questions.
We originally used ChatGPT by asking it what ethical dilemmas its own use might cause. It immediately gave us an outline of the three problems presented above.
Then we wanted to delve into one of them, and this led us to one of the cited authors. Finally, we used its own answers as references in some parts of the article.
We can say that the test was a success, but we could not help asking ourselves the following questions, which we think go beyond the academic world: Should we resort to these applications to meet the growing demands of our work? And what would be the limits of this use, if adopted?
Intelligence with or without consciousness
According to the philosopher François Vallee, we should ask ourselves what the ideal combination of a mind without conscience (like ChatGPT) and a mind with conscience (ours) would be. Without taking a stand on the issues raised, he argues that it is necessary to reflect on their use, usefulness and implications.
According to Vallee, we cannot ignore tools that help us simplify certain tasks. But their use must be complemented by the personal and empathic experience of the author of the work. It is this contribution that will allow us to deepen dialogue and critical reflection.
It is about finding, among all of us, a balance and possible solutions: answers that go beyond simplistic conclusions. Only in this way will we preserve what is different, what is surprising, what is non-trivial and true to the human condition.
Javier Morales Mediano, Co-Director of the DBA in Management and Technology and Professor of Marketing at the Pontifical University of Comillas, and Diana Loyola Chavez, Professor of Philosophy, Applied Ethics and Practical Philosophy, Pontifical University of Comillas.
This article was originally published on The Conversation. Read the original.
We recommend METADATA, RPP's technology podcast: news, analysis, reviews, recommendations and everything you need to know about the world of technology.
Source: RPP
