When it comes to next-level image editing, wouldn’t it be great if you could take a decent photo with your smartphone, tell your device to make this photo look like a pro took it, and seconds later end up with something you can frame, hang on the wall and make everyone who sees it think you’re a visual genius?
That’s where Adobe is headed with its latest addition: Photoshop’s Generative Fill, powered by Firefly AI.
Firefly, Adobe’s new AI factory, launched six weeks ago. It is now bringing new features to Photoshop, one of the most popular graphics and photo-editing applications on the market.
I took a tour of the new AI tools in Photoshop just before their recent launch. The standout feature for me: highlight or lasso an area in a photo, click the new Generate icon, type a few keywords describing exactly what you want to replace your selection, and watch what feels like magic happen.
How to use Adobe Firefly
For example, take a look at the photo below of a reindeer in a forest.
With the new Generative Fill integration, you can select the reindeer, tap Generate, and type a prompt like “wet alley at night.” You can even layer a vintage red arrow sign over it for added drama. This is just one example that John Metzger, Adobe’s director of product management for Photoshop, demonstrated on the video call.
Another new Generative Fill trick gives you a way to remove objects without weird glitches in the result. Previously, you could erase a person or, in the example below, a surfboard, but you were left with odd blobs to fix where that person or object used to be. In the beach example below, the Firefly AI removes the board and fills in the railing, sky, sand, and even shadows in ways that make the edits impossible to spot. This is a tool most content creators use on a daily basis, and I’ve never worked with one this quick and easy before.
Another standout feature is how well this new Generative Fill matches perspective, lighting, and image style, better than we’ve seen from a plethora of other tools. It makes layered changes, so you’re not altering your original or committing to an edit you might want to undo later.
I’ve been playing around with similar beta AI features in Canva and other apps, like Magic Eraser Background, and as you might expect, they don’t hold up against a heavyweight like Photoshop. You can try these new features free for seven days online, then pay $21 a month for Photoshop or $55 a month for Adobe’s entire suite of Creative Cloud apps. You can also try it for free using the Firefly beta.
Don’t expect Photoshop perfection just yet. As mentioned, this Generative Fill tool is in beta testing in both Firefly and Photoshop. It struggles with human hands and facial features, the same issues we’ve seen in all AI image generators, including Midjourney, DALL-E 2 and others. That’s no surprise, since Adobe uses a similar AI image generation technique called diffusion, which DALL-E, Midjourney, Stable Diffusion, and Google Imagen also use.
While Adobe hasn’t said when, it plans deeper AI integration across its other apps, like Illustrator and Premiere, as well. Yes, that means it’s coming to video editing, which is equal parts exciting and terrifying.
Fake images generated by artificial intelligence circulate every day on social media and so-called news sites. Remember the Pope in a puffy jacket? Or the fake image in May that allegedly showed an explosion near the Pentagon and spread across Twitter?
How to detect deep fake images
Ashley Still, senior vice president of Digital Media at Adobe, told me the company has built in a number of safeguards to make sure people aren’t using Firefly AI to create deepfakes. Adobe now embeds metadata into every creation, including attribution, provenance, and the role AI played in crafting the final image.
“We think of it as a nutrition label for (content),” Still explained. This kind of transparent metadata applied to each file, she added, is the only way to trust or even understand the superpower of AI’s generative fill and what it enables.
Adobe’s Content Authenticity Initiative outlines its transparency tools and offers a verification tool you can use to check whether a photo is fake.
Can artificial intelligence steal from artists?
The other big problem with AI artwork is how often it is trained on images scraped from the internet without attribution, leaving photographers, designers, and other creatives empty-handed when someone else profits from their work. Adobe says it thought of that, too.
“Our models only train on IP that we have explicit permission to use,” Still said. “We use content from hundreds of millions of professional-grade licensed images from Adobe Stock. We do not use other people’s intellectual property or brands.” Additionally, Adobe is developing a compensation model for Adobe Stock contributors and will share details once it (Adobe Firefly) exits beta.
How does artificial intelligence change photography?
When it comes to next-level image editing, I want a tech tool that can take any number of my near-but-not-quite-perfect iPhone photos and instantly make them look as if a professional shot and edited them.
Firefly’s AI isn’t quite there yet, nor is that its purpose. It’s a tool to make the process faster and more intuitive, and to let more people like me dabble in creativity. The real secret sauce behind the pictures I’ve framed on my walls is the human touch, however, and that’s not going away anytime soon.
Jennifer Jolly is an Emmy Award-winning consumer technology columnist. The views and opinions expressed in this column are those of the author and do not necessarily reflect those of USA TODAY.