AI • NOVEMBER 18, 2025 • 6 min read

I Generated Product Photos With AI in 15 Minutes

I generated lifestyle photos of a vintage Pan Am bag with AI in 15 minutes. The results looked convincing. Here's what this means for e-commerce creative workflows.

I spent about 15 minutes generating lifestyle photos of a vintage Pan Am bag with AI.

The results were realistic enough that when I showed them around, people didn't question their authenticity. They just asked where I found them. It's that reaction that made me realize something about how creative workflows in e-commerce are shifting.

The Starting Point

I came across this tweet showing a vintage Pan Am airline bag. What caught my eye wasn't just the nostalgia factor. It was that the product had enough visual detail to be interesting, without being so complex it would break an AI model.

I'd seen people online generate lifestyle shots from product images before, and I was impressed. But there's a difference between seeing someone else's results and actually doing it yourself. So I decided to test it with this Pan Am bag.

The Process: Seven Prompts, 15 Minutes

I used Gemini's image generation model (nicknamed Nano Banana, officially Gemini 2.5 Flash Image) to generate the lifestyle shots. I uploaded the Pan Am bag image from the tweet, and the model maintained product consistency across every iteration. The bag looked identical across different scenes, which is exactly what you need for lifestyle product photography.

The process was simple: start with a basic prompt and the original product picture, then iterate to refine the scene and aesthetic. I went from a straightforward "lifestyle photo of a blonde woman at an evening event, shot on Kodak film" to increasingly specific directions about the nightclub vibe:

A 35mm analog film photo shot in a nightclub at night. Direct harsh flash, deep blacks, high contrast, visible 35mm grain, slight motion blur, sweaty skin highlights, strong specular reflections, and authentic point-and-shoot imperfection. Slightly washed colors with red/green cast from club lights. The authentic feel of a candid disposable camera photo from the late 90s–early 2000s.

The model nailed it.

Then I completely switched contexts with a simple prompt: "ok now let's do a business man going into a plane with this bag too"

A few adjustments to get the composition right, and done.

Seven prompts total, each building on the previous result.
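The loop itself is simple enough to sketch. Below is a minimal Python illustration of the iterate-and-refine pattern; `generate` is a stand-in for whatever image-model call you use (for example, Gemini 2.5 Flash Image via Google's SDK), and the function name and signature are assumptions for illustration, not a real API.

```python
def refine_scene(generate, product_image, prompts):
    """Iteratively refine a lifestyle shot.

    Each call sees the original product image plus the latest render,
    which is what keeps the product consistent across scenes.

    `generate` is a hypothetical stand-in for your model call: it takes
    (reference_images, prompt) and returns the new render.
    """
    latest = product_image
    renders = []
    for prompt in prompts:
        # Pass both the original product photo and the previous result,
        # so each iteration builds on the last one.
        latest = generate([product_image, latest], prompt)
        renders.append(latest)
    return renders
```

The key design choice is feeding the original product image back in on every iteration rather than only chaining renders, which limits how far the product can drift from the source photo.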

The Results (And The Limits)

What impressed me most wasn't just the visual quality; it was how real the scenes felt. The nightclub shots had exactly the vibe I was going for, and the airport shot captured that clean, modern terminal atmosphere with just the right vintage feel from the Pan Am bag.

But AI isn't perfect, and the imperfections are worth noting:

  • Text rendering issues: On the nightclub shot, there's a passport visible in the bag with slightly blurry, bleeding text
  • Hand anatomy: Some fingers look a bit off (a common AI tell)
  • Model resistance: Sometimes Gemini's model pushes back on simple changes, requiring you to start from scratch (though this wasn't an issue in this test)
The interesting part? My girlfriend didn't notice any of these issues at first glance. And honestly, in the context of browsing an online store, most customers probably wouldn't either.

That's the key insight here. If the quality is good enough to pass casual inspection, it changes what's possible for e-commerce creative workflows.

Where This Actually Fits in E-commerce

Here's what this means for e-commerce teams:

This isn't about replacing photographers or creative professionals. It's about changing how we approach the creative process.

To understand the shift, look at a traditional product photography workflow: you need to plan the shoot, book talent, rent locations, arrange props, shoot multiple angles, and post-process everything. This takes time and budget. For every concept you execute, there are probably five others you couldn't afford to test.

AI-generated lifestyle images change that equation. You can now generate multiple concepts in minutes to test which visual direction resonates with your brand before committing to a full shoot. Show your creative team several different scenarios (urban street style, corporate setting, vacation vibe) and make informed decisions about which ones deserve a real shoot.

With the same budget, you can test more concepts or reduce costs on simpler shots while investing more in complex, high-value photography. Launch a new product with AI-generated lifestyle shots to test market response, then commit to professional photography once you've validated demand.

The creative professionals who understand composition, lighting, emotion, and storytelling aren't going anywhere. They're going to be more valuable than ever. What changes is their toolkit and their efficiency. They'll spend less time on concept testing and more time on the work that truly requires their expertise.

Will AI eventually generate images indistinguishable from professional shoots? Maybe. The models have improved dramatically over the past year. But even then, you'll still need people who know what to create, why to create it, and how to make it align with a brand's vision.

These tools are ready to be part of the creative workflow, especially in the ideation and testing phases. What I learned from this experiment is that the question isn't whether to use AI, it's where in your process it adds the most value.

Written by Thomas Tastet. I'm a Founder & CTO who builds products and companies. Find me on X/Twitter or LinkedIn.