
Dall-E, an automated image-generation tool, is now open to everyone

OpenAI is throwing open the doors. The American company specializing in artificial intelligence now allows any Internet user to use its impressive Dall-E tool. This deep neural network can generate high-resolution images in seconds from a text query such as “two people talking on the terrace of a Parisian cafe, in autumn, in the style of Gauguin”.
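For readers who want to try it programmatically, OpenAI also exposes Dall-E through a paid API. Below is a minimal sketch, assuming the `openai` Python package of that era and an API key exported in the `OPENAI_API_KEY` environment variable; method names may differ in later versions of the library.

```python
import os

import openai

# Authenticate with the API key (assumed to be set in the environment).
openai.api_key = os.environ["OPENAI_API_KEY"]

# Request images for the article's example prompt.
response = openai.Image.create(
    prompt="two people talking on the terrace of a Parisian cafe, "
           "autumn, Gauguin style",
    n=2,               # number of images to generate
    size="1024x1024",  # other supported sizes: 256x256, 512x512
)

# Each generated image comes back as a short-lived URL.
for image in response["data"]:
    print(image["url"])
```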

Until now, access to Dall-E was reserved for professionals, who had to register on a waiting list. Even before this opening to the general public, 1.5 million people were already creating 2 million images every day on the OpenAI website.

Three images created by Dall-E with the prompt “two people talking on the terrace of a Parisian cafe, autumn, Gauguin style”. Credit: OpenAI

Unlock the AI

The general public already got a taste of image-generating AI this summer thanks to free sites like Craiyon, often with hilarious results. But Dall-E’s neural network is more powerful and capable of producing professional-looking visuals, some of which have already made magazine covers.

Filters to prevent abuses

OpenAI has placed limits on the use of Dall-E. New users get 50 free query credits after signing up, then 15 free text queries per month. Beyond that, you have to pay: a flat rate, with no volume discount, of $15 for 115 credits (about 13 cents per query).

The US company initially took a cautious approach, restricting access to Dall-E for fear that the tool could be used by malicious actors for disinformation, propaganda or cyberattacks. “This gradual opening has enabled us to strengthen our security systems. Our filters have become more robust and reject requests to create visuals that are violent, sexual, or that violate our content policy,” explains OpenAI. For example, it is impossible to create images about the war in Ukraine or of public figures.

OpenAI is also trying to correct the biases of its artificial intelligence, which reproduces the clichés it has absorbed by “digesting” huge visual databases. For example, if a user does not specify the gender or ethnicity of the people they want to see in an image, OpenAI will add terms such as “female,” “Black,” or “Asian” to diversify the results.
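OpenAI has not published the exact mechanism, but the behavior described amounts to augmenting the prompt before generation. The sketch below is purely illustrative: the trigger heuristic and the term list are assumptions, not OpenAI's actual implementation.

```python
import random

# Illustrative terms only; OpenAI's real list and selection logic are not public.
DIVERSITY_TERMS = ["female", "male", "Black", "Asian", "Hispanic"]

# Words suggesting the prompt already pins down gender or ethnicity.
SPECIFIC_ATTRIBUTES = {"man", "woman", "female", "male",
                       "black", "asian", "white", "hispanic"}


def augment_prompt(prompt: str) -> str:
    """Append a randomly chosen diversity term when a prompt mentions a
    person without specifying gender or ethnicity (simplified heuristic)."""
    words = set(prompt.lower().replace(",", " ").split())
    mentions_person = bool(words & {"person", "people", "portrait"})
    already_specific = bool(words & SPECIFIC_ATTRIBUTES)
    if mentions_person and not already_specific:
        return f"{prompt}, {random.choice(DIVERSITY_TERMS)}"
    return prompt


print(augment_prompt("a photo of a person reading in a library"))
# e.g. "a photo of a person reading in a library, Asian"
```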

Source: Le Figaro
