
Recent releases made the shift in design impossible to ignore. Google DeepMind’s Nano Banana Pro showed how far image generation has moved toward precise, controllable editing, with tools that let creators adjust camera angle, focus, depth, and color treatment. For video, Seedance 2.0 combined audio-visual generation with much more direct control over performance, lighting, shadow, and camera movement.
These tools are turning design into a controllable production system, and the designer's role is shifting toward that of a systems architect, says Aleksandr Loginov, a product designer and creative leader who combines broadcast visual craft, technical fluency, and product thinking. As Chief Design Officer at Prequel, a consumer photo and video editing app company whose four apps repeatedly reached No. 1 in the App Store's Photo & Video category in markets including the US, the UK, France, and Canada, he helped shape the strategy behind the company's rapid expansion. Before moving into product and AI design, Aleksandr was a broadcast designer at STS, a popular Russian entertainment television channel, where he led his team to a Silver PromaxBDA award in the UK in 2015 for high-level work in TV promotion and broadcast design. He has now joined Lazarev Agency as Art Director for agent-based AI product interfaces, moving into an award-winning B2B design company with more than 600 shipped products, focused on complex, data-heavy platforms such as AI copilots, decision engines, and vertical SaaS.
Across all those roles, Aleksandr observed that as AI absorbs more of the manual craft, the real competitive edge is shifting elsewhere: toward judgment, system design, and making complex tools usable.
To understand the shift in design, start with the stack itself. Creative teams are no longer using isolated tools. They are assembling a production engine. As Aleksandr notes, Nano Banana Pro is especially strong when the goal is a polished image with better lighting, composition, localized edits, and cinematic texture. But consistency of faces is not its main advantage. That is where Seedream is stronger. Right now, its clearest edge is identity transfer: keeping faces recognizable and consistent across outputs better than any other model in the stack. Kling and Seedance add the cinematography layer, making it possible to generate video with synchronized audio, controlled motion, and more coherent shot sequences. ElevenLabs adds the voice layer, giving visuals a believable multilingual narrative.
“I have already noticed that even a small amount of coding knowledge is now becoming essential for designers. Not to turn them into engineers, but to help them connect models in the right order, speed up iteration, and work with far less dependence on long engineering cycles,” Aleksandr says. Once the stack can provide photorealistic visuals, identity consistency, motion, and voice, the advantage is the ability to turn those capabilities into a dependable pipeline.
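The layered stack he describes, where an image model, an identity layer, a motion layer, and a voice layer each hand their output to the next, can be sketched as a simple chained pipeline. This is a minimal illustration only: the stage functions below are hypothetical stand-ins, not real API calls to any of the models named in the article.

```python
from functools import reduce

# Hypothetical stand-ins for each layer of the stack. In a real pipeline,
# each function would call the corresponding model API (image generation,
# identity transfer, video animation, voice synthesis).
def polish_image(asset):  return {**asset, "image": "polished"}
def lock_identity(asset): return {**asset, "identity": "consistent"}
def animate(asset):       return {**asset, "motion": "generated"}
def add_voice(asset):     return {**asset, "voice": "narrated"}

# The order matters: identity must be locked before motion, motion before voice.
PIPELINE = [polish_image, lock_identity, animate, add_voice]

def run_pipeline(brief):
    # Each stage receives the asset produced so far and enriches it.
    return reduce(lambda asset, stage: stage(asset), PIPELINE, {"brief": brief})

result = run_pipeline("product teaser, 15s")
```

The point of expressing the stack this way is that reordering, swapping, or removing a model becomes a one-line change to `PIPELINE`, which is the kind of "connecting models in the right order" the quote refers to.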
That shift becomes easier to recognize when you have had to lead products at scale. At Prequel, where Aleksandr served as Chief Design Officer, he was responsible not just for visual quality, but for the workflow behind image, video, and audio technologies across R&D, Data Science, Art, and key parts of Mobile and Backend. Part of the job was to improve quality, speed, cost, and time to market at the same time. One result, as he describes it, was a workflow that eventually cut the release cycle for AI features from roughly three months to 30 minutes, giving the company a much faster way to respond to signals from marketing.
The manual labor of design is being automated into oblivion. If your value was based on how fast you could mask an image or navigate a complex software menu, the market is shrinking.
Aleksandr has witnessed this shift while building the kinds of systems that are redefining the designer’s role. In a multi-agent workflow for marketing, he did not focus on producing each asset by hand. He defined the creative logic, structured the sequence of models, and decided where human judgment needed to stay in the loop. Instead of scaling output by hiring dozens of designers, Aleksandr and his team built a system around Gemini and Nano Banana in which the designer began by describing the image and the criteria it had to meet. The model then generated 10 to 20 options. A separate vision-language model reviewed those outputs, identified the ones that matched the original brief most closely, and surfaced the strongest candidates for the designer to evaluate.
Aleksandr shaped the next stage of the workflow in the same spirit. After the designer made a selection, the team animated the chosen images in Kling and assembled them into a single creative or a broader pack of creatives. They then tested that set either in Facebook ad accounts or through SplitMetrics to see which approaches attracted users most effectively. Aleksandr treated that stage not as a final checkpoint, but as part of the system itself: the team fed the performance data back into the workflow so the next round of creatives could build on what had already proven effective.
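The generate-review-select loop described above (a model produces 10 to 20 options, a vision-language model scores them against the brief, and only the strongest surface for human review) can be sketched as a ranking pipeline. Everything here is a hypothetical stand-in: `generate_candidates` and `score_against_brief` simulate the model calls with deterministic placeholders rather than real Gemini or VLM APIs.

```python
import random
from dataclasses import dataclass

@dataclass
class Candidate:
    prompt: str
    seed: int
    score: float = 0.0

def generate_candidates(brief, n, rng):
    # Hypothetical stand-in for an image-model call: each candidate is the
    # brief plus a generation seed that would produce a distinct image.
    return [Candidate(prompt=brief, seed=rng.randrange(10**6)) for _ in range(n)]

def score_against_brief(candidate, brief, rng):
    # Hypothetical stand-in for a vision-language model rating how closely
    # a generated image matches the original brief (0.0 to 1.0).
    return rng.random()

def shortlist(brief, n_candidates=15, top_k=3, rng_seed=0):
    rng = random.Random(rng_seed)
    candidates = generate_candidates(brief, n_candidates, rng)
    for c in candidates:
        c.score = score_against_brief(c, brief, rng)
    # Surface only the strongest matches for the designer to evaluate.
    return sorted(candidates, key=lambda c: c.score, reverse=True)[:top_k]

picks = shortlist("sunset city timelapse, warm tones", n_candidates=20, top_k=3)
```

The design choice worth noticing is that the human stays at the selection step: the system narrows 20 options to 3, but the final judgment call, and the criteria encoded in the scoring step, remain the designer's.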
In practice, that workflow increased creative output many times over while sharply reducing the designer’s manual workload. Under Aleksandr’s leadership, the work that remained essential sat at a higher level: setting intent, defining quality, evaluating outputs, and steering the system as it iterated. For him, that is where the profession is moving. The designer’s value no longer lies mainly in making each asset by hand but in shaping the process that can produce strong creative results at scale.
He argues that this is also why consistency is becoming one of the hardest requirements in AI design:
“When a system produces many versions of the same person, the question is not whether it can generate an image, but whether it can preserve identity, recognizability, and stability across outputs. That is where the designer’s role changes most. The job is no longer just to make things look good, but to define the process, control the edge cases, and make sure the system produces results that are consistent enough to trust and ship,” he says.
For years, the ideal creative professional was T-shaped: broad across disciplines, with one deep specialty. In generative design, that model is starting to loosen. The role is becoming more fluid. A designer may move from visual direction to product logic, from interface structure to content behavior, depending on what the system needs at that moment. The craft does not disappear, but it stops living in one fixed place.
Aleksandr’s own career helps explain the shift. He began in television, a medium where images had to register at once, with precision, clarity, and emotional force, and that work led his team to a Silver PromaxBDA award in the UK. Later, at Prequel, he was no longer focused only on frames or campaigns. He concentrated on product systems that had to hold up across millions of user interactions while remaining intuitive enough to help the company’s apps repeatedly rise to the top of the App Store’s Photo & Video category in major markets. The role had expanded from making images to defining how creativity operates inside the product.
As Art Director for agent-based AI product interfaces at Lazarev Agency, he is not confined to one design lane. One week, the work is about understanding what AI capabilities can realistically support in a product. The next step is about shaping those capabilities into a usable flow with the right controls, review points, and product logic. Then the focus moves back to creative direction: defining what quality should look like when images, video, and audio are generated at scale. That is the new reality of generative design teams. Depth still matters, but it now means the ability to shape, connect, and govern systems across disciplines, not just master one static craft.
The next shift in design, Aleksandr argues, is not just better media but a different kind of interface.
One direction is generative UX. Instead of designing fixed pages, designers will increasingly define rules, states, and priorities. The system will assemble the right interface in real time based on the user’s intent and context. In that model, software becomes less like a set of screens and more like a temporary control surface that appears when needed.
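The idea of defining rules, states, and priorities rather than fixed pages can be made concrete with a small sketch. The rule table, control names, and `assemble_surface` function below are all hypothetical illustrations; in a real product the rules would be authored by designers and resolved against richer context than a single intent string.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Control:
    name: str
    priority: int               # lower values are shown first
    needs_confirm: bool = False  # high-stakes actions ask before executing

# Hypothetical rule table mapping a user's intent to candidate controls.
RULES = {
    "edit_photo": [
        Control("crop", 1),
        Control("color", 2),
        Control("delete_original", 9, needs_confirm=True),
    ],
    "share": [
        Control("export", 1),
        Control("post_public", 5, needs_confirm=True),
    ],
}

def assemble_surface(intent, max_controls=3):
    # Assemble a temporary control surface for the current intent:
    # select matching controls, order by priority, show only the top few.
    controls = sorted(RULES.get(intent, []), key=lambda c: c.priority)
    return controls[:max_controls]

surface = assemble_surface("edit_photo")
```

Note how the designer's work in this model lives in the rule table and the confirmation flags, not in any fixed screen: the "page" is assembled at the moment of use and disappears afterward.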
Aleksandr has already seen this logic in product work built around ordinary users, not specialists. One of the central ideas he pushed at Prequel was that editing should help people express the feeling of a moment without forcing them to master the mechanics behind it. That same principle, he argues, can shape the next generation of interfaces:
“When a complex capability is reduced to a simple action, adoption improves because users do not have to learn the system first. The same principle can shape the next generation of products: interfaces that infer intent, surface the right controls at the right moment, and ask for confirmation only when the stakes are high,” he says.
Further ahead, the profession may change again. Neural interfaces could make it possible to sketch ideas directly from thought into digital space. At the same time, fully human-made design may gain premium value as a mark of authorship and authenticity.
AI is not eliminating designers. It is stripping value from the most repeatable parts of the craft. What remains valuable is judgment: the ability to structure workflows, preserve coherence, define limits, and steer a product when the model becomes unstable. Aleksandr has moved in exactly that direction. He began by making visuals himself. Now he works on systems that determine how creative work gets produced, scaled, and experienced. That is also the direction he is choosing deliberately: building tools that let people without design training create strong content, while giving experienced creators a way to move faster and produce far more. For him, the point is not automation for its own sake. It is to make creative expression more accessible on one side and more powerful on the other.