Artificial intelligence image generators have become a digital trend embraced by several technology companies. Now, alongside the already familiar Craiyon (previously DALL-E mini), OpenAI's DALL-E, and Google's Imagen, Meta – Facebook's parent company – has joined this branch of technological art with its own version, which it calls Make-A-Scene.
As stated in a post on its official blog, the firm hopes to use the new tool on the path toward building immersive worlds in the metaverse and to contribute to the creation of high-quality digital art.
By simply typing a word or phrase, the user starts a process in which the text passes through a transformer model and then on to a neural network that analyzes it to develop a contextual understanding of the relationships between words. Having captured the essence of what the user describes, the artificial intelligence synthesizes an image using generative adversarial networks (GANs).
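The flow described above can be hard to picture in the abstract. The following is a minimal, purely illustrative PyTorch sketch of that idea (prompt tokens → transformer encoding → GAN-style conditional generator); every module, size and name here is a hypothetical placeholder, not the actual architecture of any of the systems mentioned in this article.

```python
# Illustrative sketch only: text -> transformer encoding -> conditional generator.
import torch
import torch.nn as nn

class TextEncoder(nn.Module):
    """Turns token ids into a single conditioning vector (the "essence" of the prompt)."""
    def __init__(self, vocab_size=10000, dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)

    def forward(self, token_ids):
        hidden = self.encoder(self.embed(token_ids))
        return hidden.mean(dim=1)  # pooled summary of the prompt

class ConditionalGenerator(nn.Module):
    """GAN-style generator: noise + text condition -> small RGB image."""
    def __init__(self, dim=256, noise_dim=64, image_size=64):
        super().__init__()
        self.image_size = image_size
        self.net = nn.Sequential(
            nn.Linear(dim + noise_dim, 1024), nn.ReLU(),
            nn.Linear(1024, 3 * image_size * image_size), nn.Tanh(),
        )

    def forward(self, text_vec, noise):
        flat = self.net(torch.cat([text_vec, noise], dim=1))
        return flat.view(-1, 3, self.image_size, self.image_size)

# Toy usage: a batch containing one "prompt" of five token ids.
tokens = torch.randint(0, 10000, (1, 5))
condition = TextEncoder()(tokens)
image = ConditionalGenerator()(condition, torch.randn(1, 64))
print(image.shape)  # torch.Size([1, 3, 64, 64])
```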
Rapidly developing technology
Thanks to ongoing efforts to train artificial intelligence models on ever-growing sets of high-definition images paired with carefully chosen text descriptions, the most advanced generators can now create photorealistic images of pretty much anything you ask for. However, the process varies depending on the AI in question.
Google's Imagen uses a diffusion model, which “learns to convert a set of random dots into images, starting with low-resolution figures and gradually increasing the resolution.” Google's Parti, by contrast, “first converts a collection of images into a sequence of code entries, similar to puzzle pieces. A given text prompt is then translated into these code entries and a new image is created.”
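To make the contrast more concrete, here is a hedged, schematic sketch of the two sampling strategies as they are described above; denoise_step, upscale, text_to_codes and codes_to_image are hypothetical placeholders, not Google's published APIs.

```python
import torch

def diffusion_style_sample(denoise_step, upscale, steps=50):
    """Imagen-like idea: start from random noise and repeatedly denoise it,
    then raise the resolution in stages (e.g. 64x64 -> 256x256 -> 1024x1024)."""
    image = torch.randn(1, 3, 64, 64)      # the "set of random dots"
    for t in reversed(range(steps)):
        image = denoise_step(image, t)     # gradually turn noise into a picture
    return upscale(image)                  # super-resolution stages

def parti_style_sample(text_to_codes, codes_to_image, prompt):
    """Parti-like idea: translate the prompt into a sequence of discrete
    image codes (the "puzzle pieces"), then decode those codes into pixels."""
    codes = text_to_codes(prompt)          # sequential code prediction from text
    return codes_to_image(codes)           # reconstruct the final image
```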
Meta’s contribution to AI image generators
As Mark Zuckerberg noted in the Make-A-Scene post on Meta's blog, while the aforementioned systems can render just about anything, the user has little real control over how the final image turns out. “To harness the potential of AI for creative expression, people should be able to shape and control the content that the system generates,” said the company's CEO.
That is why Make-A-Scene lets users feed their own sketches into the system, which then produces a 2048×2048-pixel image. With this combination, users can describe what they want in the image and, in addition, control its overall composition.
“Make-A-Scene demonstrates how people can use both text and simple drawings to convey their vision with greater specificity, using a variety of elements, shapes, arrangements, depth, compositions and structures,” says Mark Zuckerberg.
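Meta has not published code or a public API for Make-A-Scene, but the combination Zuckerberg describes can be sketched schematically; everything below (the fuse and decode functions, the tensor shapes) is a hypothetical illustration of conditioning generation on a text prompt plus a user-drawn layout.

```python
import torch

def text_plus_sketch_generation(prompt_embedding, user_sketch, fuse, decode):
    """prompt_embedding: (1, D) vector summarizing the text description
       user_sketch:      (1, 1, H, W) rough layout drawn by the user
       The sketch constrains the composition; the text fills in the content."""
    condition = fuse(prompt_embedding, user_sketch)  # joint conditioning signal
    return decode(condition)                         # e.g. a 2048x2048 RGB image
```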
In tests of Make-A-Scene, groups of human evaluators preferred the text-plus-sketch system over a text-only one, judging its output better aligned with the original description 66% of the time and with the original sketch 99.54% of the time. For now, however, the company has not said when it will be available to the public.
Source: RPP
