Should we be afraid of artificial intelligence?

ChatGPT is an artificial intelligence developed by OpenAI (Credit: Getty Images)


A technological marvel, ChatGPT can write a song, a cover letter or a computer program with disconcerting ease. Should we celebrate such progress, or fear its excesses?

Launched on November 30, 2022, ChatGPT, short for "Chat Generative Pre-trained Transformer", has already attracted enormous attention. This language model was developed by OpenAI, an artificial intelligence (AI) research organization founded in 2015 by a group of figures including Elon Musk. It is as capable of writing lines of computer code as of holding an online conversation (and with no shortage of wit).

A multitude of possible uses for ChatGPT

"My goal is to help users answer their questions using the information I have access to (i.e. almost all the text available on the internet up to 2021). I am able to understand and generate text in different languages," the AI explains when asked. "ChatGPT is revolutionary because it represents a clear technological advance, both in the amount of data used to train it and because the tool has been made available to the general public, who have used it with impressive imagination," explains Marie-Alice Blete, a software architect and data engineer on a team specializing in AI within the R&D department of Worldline.

After registering for free on the platform, internet users were able to test GPT-3 technology on a large scale. From gluten-free cooking recipes to cover letters to building a website, the range of possibilities is almost endless. The French daily Le Parisien reported, for example, that it was able to get ChatGPT to write a book in a few hours. A fan of Australian musician Nick Cave, meanwhile, asked the AI to write a song in his style, reports Courrier International.

At the beginning of January, Master's students admitted to having used the artificial intelligence to write their assignments, while shortly before, cybersecurity researchers found that hackers were using ChatGPT to create malware, ransomware and spam without any advanced technical knowledge.


What human control is there over ChatGPT?

An earlier version of ChatGPT worried even its designers. At the time, it was possible to put racist requests to the AI, such as "give me a ranking of the best engineers by ethnicity." The developers then built in rules so that ChatGPT refuses to answer them. "They have some control, in the sense that they can feed it certain information and dictate rules to prevent, for example, racist questions. But the tool is so vast that the rules can be circumvented," notes Marie-Alice Blete.

"On the other hand, OpenAI does not control the content ChatGPT produces, which is based on information found on the internet, and in particular on Reddit," the specialist continues. Yet information on the web is not always reliable, and the technology cannot tell truth from falsehood. "Users must understand that the answer they are given comes from an algorithm drawing on content retrieved from the web," adds Magali Germond, partner at GoodAlgo and an expert in data science and AI ethics. For the mathematician, "a statistical equation is only admissible if it comes with a reliability index. If we are going to use this kind of technology, we must therefore provide that framework."

ChatGPT Warnings


Along these lines, a new regulation on AI systems is expected to be adopted by 2023. Its objective is in particular to ensure that AI systems placed on the European Union market "are safe and respect existing law on fundamental rights," according to a statement from the Council of the EU. "ChatGPT is a hyper-advanced technology offering a wide range of uses, but using it without a legal basis and without ethical design will inevitably lead to abuses," warns Magali Germond.

The content ChatGPT produces could, moreover, infringe intellectual property rights by drawing heavily on existing written content or images. Legal uncertainty also remains over the protection of personal data, which OpenAI could potentially collect. Note that the application, currently free, reportedly costs nearly three million dollars a month to operate. A paid version is also under consideration.

Will ChatGPT one day replace a developer or a journalist?

"For me, ChatGPT is a tool that saves a lot of time, especially for querying documentation," says Marie-Alice Blete. "But it does not replace the eye of an expert, especially for generating complex programs, and because it can make mistakes."

Magali Germond makes the same observation, arguing that ChatGPT can be a support tool without having all the capabilities of a human. "Emotions, consciousness, empathy, physical sensations and unpredictability differentiate us from the machine," the ethics specialist points out. ChatGPT itself seems to share this view. We asked it whether it could ever do the job of a journalist. Here is its answer.

Will ChatGPT be able to do the job of a journalist?

