Pixel Studio’s issues reflect the risks of AI image generation

If you’re not tech-savvy, you might feel as if you woke up one day and started seeing AI everywhere. In recent years, artificial intelligence has taken the world by storm, and Google is one of the leading names. Pixel phones have always boasted brilliant cameras, but they are no longer limited to taking the best possible photos: they also let you generate images from scratch using AI. However, this has implications for a technology whose regulation is still a work in progress.

The Pixel 9 series introduced Pixel Studio

Recently, Google introduced the Pixel 9 series. Since the launch of the Pixel 6 series, the company has been betting heavily on AI-powered features. What was once an implementation of machine learning processes for better photos has become an almost fully generative experience. One of Pixel phones’ most powerful tools, Magic Editor, proves this. The company went a step further in the Pixel 9 series with Pixel Studio, a feature that goes beyond Gemini’s image-generating capabilities.

Like other similar tools, Pixel Studio can generate images from a text prompt. It also lets you refine the prompt so the result matches your original idea. Several other possibilities are inherited from Magic Editor, such as object removal, sticker creation, and more. Such a powerful feature will surely bring out the creative side of Pixel 9 users. However, it is also surprisingly easy to generate images in Pixel Studio that cross certain sensitive lines.

Pixel Studio can easily generate offensive or sensitive images

Currently, Gemini and Pixel Studio do not use the same image generation model. The former uses Imagen 2, while the latter relies on Imagen 3, Google’s latest text-to-image AI model. If you don’t have a Pixel 9 device, you can also try out Imagen 3’s capabilities through ImageFX, part of AI Test Kitchen, Google’s set of experimental AI-powered tools that anyone can use.

With that in mind, you’d think Pixel Studio and ImageFX would offer similar results, right? Interestingly, that’s not the case: several of the restrictions in place in ImageFX are missing from Pixel Studio. The same prompt box can give you a realistic picture of a flower standing out in a garden or an artwork of Mickey Mouse as a slave owner.

For unknown reasons, prompts that trigger an error message in ImageFX and Gemini work perfectly in Pixel Studio. Although these tools should share a policy, Pixel Studio seems to be “on its own.” According to Digital Trends’ testing, it is not difficult to create images featuring well-known characters in inappropriate and even offensive settings. An image of SpongeBob doing cocaine? Or an image of Elmo pointing a gun? Almost anything is possible in Pixel Studio. What’s more, the prompts that produce this type of image can be explicit; you don’t have to try to “cheat” the system.

Some possible Pixel Studio outputs (source: Digital Trends)

Image generation with people is not yet available

So, in its current state, Pixel Studio offers a degree of freedom that can even be risky. One of the tool’s strengths is the realism of its results. In theory, you could create convincing fake images tied to sensitive contexts or situations. At that point, we are no longer talking about simply adding realistic flowers to a scene, but about bolstering a particular political position with fabricated images, for example.

Currently, Pixel Studio cannot generate images of people. However, this looks like a temporary limitation that will be lifted later rather than a safeguard against misuse. Its support page says that “Pixel Studio is working on the ability to create human images, along with the ability to edit existing images with people.”

The potential role of AI in disinformation campaigns is significant

It’s not difficult to find people who have fallen victim to misinformation online. Often, a claim doesn’t even need a reliable source to be believed. This is partly down to confirmation bias: people tend to accept information that matches their existing views, even without solid evidence to back it up. Now imagine interested parties generating hyper-realistic images that lend credibility to false information in order to provoke some kind of (potentially dangerous) reaction among the population.

Pixel Studio certainly sounds like a dream tool for political representatives. People today need to be more aware of what they may find online. However, many do not know that tools exist to check whether a particular image is real or fake. Others know about those tools but are unwilling to use them when the information reinforces their way of thinking. It is a delicate situation, with a lot of study and work still ahead.

Potentially malicious uses do not even require Pixel Studio to produce realistic images. The “freedom” it offers to generate images of popular characters in sensitive situations is another avenue for abuse. It doesn’t take much imagination to picture all kinds of offensive images targeting certain groups of people.

Google has taken some measures against malicious use of Pixel Studio’s AI images

According to a Google spokesperson, the company is taking action against malicious use of Pixel Studio’s AI images. The statement says, “all fully synthetic, text-to-image generated images in Pixel Studio use SynthID watermarking, so these images can be tracked — even if they’re shared or edited.”

The statement also says that Google has “disabled human generation and added many safety checks to our servers to balance our bold — yet responsible — approach to AI.” Interestingly, this suggests that the inability to generate images of people is actually a measure against malicious use of the tool. However, that contradicts the support page quoted above, which says the company is working on adding the option.

Pixel Studio’s support page contradicts an official statement

It seems that Google is tweaking Pixel Studio’s policies as it receives feedback from users. For instance, after Digital Trends reached out, certain prompts stopped producing results. Even so, countless similar prompts still work. Perhaps this explains why the tool is currently available in only eight countries: the rollout to more regions may begin once the company has all the necessary rules and policies in place. However, that is just speculation; there is nothing official yet.

In another official statement, Google’s Communications Manager Alex Moriconi said that Pixel’s AI image-generation tools are designed “to respect the intent of user prompts, and that means they may create content that may offend when instructed by the user to do so.” Regarding such offensive content, Moriconi clarified that “some prompts can challenge these tools’ guardrails and we remain committed to continually enhancing and refining the safeguards we have in place.”

Seeking the ideal coexistence with AI

As the tech industry advances, users receive increasingly powerful tools that make everyday life easier. Generating a hyper-realistic image of a specific situation, or an artwork featuring animated characters, from just a few lines of text on our phones was unthinkable until very recently. As these features become more prevalent, so should our awareness of their implications. Only then can we coexist with them safely and responsibly.

Of course, big AI companies also have a duty to implement safeguards and provide tools that make it easier to detect fake or AI-manipulated images, like those from Pixel Studio.


