OpenAI, an artificial intelligence research laboratory, had already announced the creation of GPT-2 in February: an algorithm able to write paragraphs of text with impressive coherence. But instead of publishing the AI tool in its entirety, the team withheld the full version of the program and shared only a smaller model, for fear that the most powerful version would be used for malicious purposes, for example to produce fake news articles or spam.

Since then, OpenAI has monitored how the published versions were received in order to detect problematic uses, but it has “seen no strong evidence of misuse so far,” according to a blog post published Tuesday. As a result, the company, whose stated goal is to develop AI that benefits all of humanity, has decided to publish the model in its entirety. GPT-2 is part of a new generation of text-generation systems that have impressed experts with their ability to generate coherent text from minimal prompts. The system was trained on eight million text documents collected from the Web, and it responds to text samples provided by users by producing a complete article.
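
To make this concrete, here is a minimal sketch of prompt-based generation with a released GPT-2 checkpoint. It assumes the Hugging Face transformers library, whose model names (“gpt2” for the small model, “gpt2-xl” for the full 1.5B version) are conventions of that library and not part of the original article:

```python
# Minimal sketch: continuing a user-supplied prompt with GPT-2.
# Assumption: the released weights are loaded via the Hugging Face
# `transformers` library; "gpt2" is its small checkpoint and "gpt2-xl"
# would be the full 1.5B model discussed in the article.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Scientists announced on Tuesday that"
# GPT-2 extends the prompt token by token, sampling each next token
# from its learned distribution over likely continuations.
result = generator(prompt, max_new_tokens=50, do_sample=True)
print(result[0]["generated_text"])
```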

The OpenAI text-generation algorithm stands out for its quality. The model was trained in a way that gives it broader knowledge of how to understand written text. GPT-2 is much more general-purpose than previous text models. By analyzing the text it is given, it can perform tasks such as translation and summarization, and pass simple reading-comprehension tests, often as well as or better than AI tools designed specifically for those tasks.
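
This zero-shot behaviour is elicited purely through the prompt. As a hedged illustration, the sketch below uses the “TL;DR:” cue described in OpenAI’s GPT-2 paper to coax a summary out of the model without any task-specific training (the library and model name are the same assumptions as in the previous sketch):

```python
# Sketch of zero-shot summarization: GPT-2 was never trained on a
# summarization dataset, but appending the "TL;DR:" cue (used in the
# GPT-2 paper) nudges it toward summarizing the preceding passage.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

article = (
    "OpenAI trained a large language model on eight million web documents "
    "and found it could translate, summarize, and answer questions about "
    "text without being trained on any of those tasks specifically."
)
prompt = article + "\nTL;DR:"
output = generator(prompt, max_new_tokens=40)[0]["generated_text"]
print(output[len(prompt):])  # keep only the generated summary
```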

The GPT-2 model produces renderings that humans find very convincing. OpenAI’s partners at Cornell University surveyed people, asking them to assign the text produced by GPT-2 a credibility score for each model size. According to the research lab’s blog post, people gave the 1.5B model a “credibility score” of 6.91 out of 10. This is slightly higher than the score of the 774M model (6.72) and significantly higher than that of the medium 355M model (6.07). “These results give us more incentive to release the 1.5B model, as the incremental increase in human-perceived credibility relative to 774M seems low,” the OpenAI team wrote.

In February, The Guardian tested the OpenAI algorithm. GPT-2 is a text generator that must be fed text, from a few words to a whole page, and is then able to write the next sentences based on its predictions of what should come next. The Guardian concluded at the time that the system pushes the boundaries of text generation beyond what was thought possible, both in the quality of its output and in the wide variety of possible uses. The model is able to write plausible passages that match what it is given in both style and subject, even if it sometimes produces text that drifts out of context.

According to experts, GPT-2 could be used for malicious purposes. This concern shaped OpenAI’s approach to releasing the tool, starting with a reduced version. Experts have also pointed out that easy access to state-of-the-art AI tools can allow malicious actors to take advantage of them. Partners at the Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism (CTEC) found that extremist groups could misuse GPT-2, in particular by fine-tuning GPT-2 models on four ideological positions: white supremacy, Marxism, jihadist Islamism, and anarchism. CTEC demonstrated that it is possible to create models that generate synthetic propaganda for these ideologies, according to OpenAI.

OpenAI limited the publication of its model because of this concern. However, not everyone agrees with the laboratory’s approach. Many experts criticized the decision, saying it limited the research others could do to mitigate the model’s drawbacks and created unnecessary hype around the dangers of artificial intelligence.

Source: OpenAI
