Paz Perez’s “Model Designer” offers a clear and accessible take on the current wave of AI product development.
It makes a compelling case for why designers should step beyond the interface and help shape the behavior of AI agents, an argument I fully support. The call for designers to “get a seat at the table” in model development is both timely and necessary if we are to help steer this major societal shift.
Still, as I reflect with pen in hand, or rather with keyboard under my fingers, I find myself reaching for a wider, more systemic perspective on its argument. This is a complex journey in which we are not only shaping new roles, but also creating new ways to think about design and language in this emerging and exciting period.
Throughout the article, Perez encourages designers to play an active role in shaping both the interface and the underlying model, arguing that this dual focus is essential to making AI products that truly meet people’s needs. The approach is a timely reminder that the future of design extends beyond screens; it is about shaping the intelligence that powers our digital experiences.
The article focuses on how we should craft excellent prompts, develop strong prompt-writing skills, and align LLM behavior with user intent. The author rightly emphasizes the importance of feedback loops with engineers to improve collaboration and agent performance. This can be done in many ways, and we, as a community, are only beginning to discover and understand them.
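To make the idea of a feedback loop concrete, here is a minimal, purely illustrative sketch (not taken from Perez’s article) of how a design team might log user ratings against prompt versions, so designers and engineers can compare how prompt changes land with real users. The prompt names and the 1–5 rating scale are assumptions for the example.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical log of (prompt_version, user_rating) pairs collected from
# in-product feedback widgets; names and the 1-5 scale are assumptions.
feedback_log = [
    ("support_prompt_v1", 3),
    ("support_prompt_v1", 2),
    ("support_prompt_v2", 4),
    ("support_prompt_v2", 5),
]

def summarize_feedback(log):
    """Group ratings by prompt version so designers and engineers can
    review prompt iterations against real user reactions."""
    ratings = defaultdict(list)
    for version, rating in log:
        ratings[version].append(rating)
    return {version: mean(scores) for version, scores in ratings.items()}

if __name__ == "__main__":
    for version, avg in summarize_feedback(feedback_log).items():
        print(f"{version}: average rating {avg:.1f}")
```

Even a lightweight shared artifact like this gives designers a seat in the prompt-iteration conversation rather than leaving it entirely to engineering.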
However, in this story the model sits at the center, almost in isolation: an intelligent mind, perhaps, but without a body or an environment. While Perez guides us toward improving the AI’s knowledge and reasoning, in my view a narrow focus on this approach can tempt us to ignore important system dynamics.
Consider customer service AI agents. Designers often focus on improving response tone and problem-solving ability, but we sometimes overlook critical system integrations. The AI agent needs smooth access to customer data systems, seamless handoffs to human agents, and the ability to cope with both support requirements and fluctuations in volume. As Yang et al. (2020) note, these system elements significantly affect the user’s experience, regardless of how well the prompt is crafted.
AI agents, like any product, exist within an ecosystem; they are actors in complex, evolving landscapes. User workflows, organizational processes, and even social norms shape the output of these models. Think about healthcare, for example: a medical AI deployed in a hospital never provides clinical recommendations in isolation. Its results are shaped by cultural attitudes (patients differ in what they expect and prefer), documentation requirements (driven by billing and legal standards), and patient autonomy (what treatments are actually offered varies with local medical practice norms).
The same model installed in different hospitals can produce different recommendations, not because of its underlying capabilities alone, but because of how the surrounding ecosystem frames the questions it is asked and how its outputs are interpreted and acted upon.
Their impact is not limited to how accurate their responses are or how well their prompts are engineered; it extends to how they reshape work, affect trust, and introduce new ethical dilemmas. If we focus solely on the model, we risk overlooking problems that only appear once the AI operates in a real-world context. Sometimes business priorities push against user needs, producing decisions the user cannot understand or even question. The risk of creating adverse effects that were never identified during testing is enormous.
When we talk about “designing for the entire system, not just the model,” we are advocating a holistic, end-to-end approach to AI product design: one that accounts for user experience, trust, and long-term product value at every stage of the AI lifecycle.
This mentality draws on systems thinking, which encourages us to look beyond isolated components (like the AI model) and instead see the interconnected web of data, processes, people, and policies that produce the final product. This systems view aligns with the argument of Rahwan et al. (2019), who call for understanding AI ecologically rather than as a purely technical artifact. Their research shows that we cannot understand an AI’s behavior in isolation from the social, organizational, and physical environments in which it operates. In other words, machines now act in the same environments as humans, and understanding their behavior in those environments is essential.
For designers, this means moving toward an ecosystem-based approach. It is worth acknowledging that designers are sometimes boxed into interface work, with little influence at the system level.
As Yang et al. (2023) note in their comprehensive study of AI design practice, “designers often encounter organizational barriers that limit their ability to affect algorithmic decisions despite being uniquely positioned to advocate for consumer needs in technical systems.” Even when designers are limited to interface-level work, we can still apply the systems thinking that Diet et al. (2022) call interface-mediated advocacy.
For example, designers can document the friction points users experience, trace the connections between interface decisions and wider organizational processes, and advocate around the “systemic touchpoints” that Liao et al. (2024) identify: the key moments where users experience the consequences of upstream AI decisions. Designers who persistently frame interface challenges as system-level concerns gradually extend their influence beyond traditional UX boundaries, even in engineering-driven organizations.
I can’t help but wonder: what if, as designers, we created a dedicated mapping method for AI interaction lifecycles? Perhaps something that visualizes both the flow of data and the flow of meaning: as information moves between models, interfaces, and the user’s context, how are interpretations formed and decisions made? How do users perceive the output, and how does it change their behavior? Such maps could surface friction points and ethical blind spots that engineering alone cannot resolve.
Imagine deploying AI-powered feedback for students. If you design only the model, you might optimize for classification accuracy. But if you map the whole system, you will consider questions like the following (a minimal sketch of such a map appears after this list):
- How is student data collected, and is it representative?
- How is feedback presented so that students understand and trust it?
- What happens if a student disagrees with the AI’s assessment?
- How is the system monitored for drift or bias as new cohorts of students use it?
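As promised above, here is a minimal, illustrative sketch in Python of how such a system map might be captured as data so a team can review, annotate, and discuss it alongside the interface designs. The touchpoint names, owners, and questions are hypothetical examples, not a prescribed taxonomy.

```python
from dataclasses import dataclass, field

@dataclass
class Touchpoint:
    """One node in the system map: a place where data or meaning is
    produced, transformed, or experienced by someone."""
    name: str
    owner: str                       # e.g. "data team", "design", "teaching staff"
    open_questions: list = field(default_factory=list)

# A hypothetical map for the student-feedback example above.
system_map = [
    Touchpoint("student data collection", "data team",
               ["Is the data representative of all student groups?"]),
    Touchpoint("model scoring", "ML engineering",
               ["How is drift or bias monitored as new cohorts arrive?"]),
    Touchpoint("feedback presentation", "design",
               ["Do students understand and trust the explanation?"]),
    Touchpoint("dispute and appeal", "teaching staff",
               ["What happens when a student disagrees with the assessment?"]),
]

for tp in system_map:
    print(f"{tp.name} (owned by {tp.owner})")
    for question in tp.open_questions:
        print(f"  - {question}")
```

The value here is less the code than the conversation it forces: every touchpoint has an owner and an unresolved question, which makes gaps in accountability visible early.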
With so many questions and factors to consider, where do we start? Drawing on my ongoing research on this topic, I want to share a few recommendations for product designers who want to put this systems view of AI design into practice.
1. Adopt a “first principles” perspective on AI design
This means breaking problems down to their most basic elements and questioning every assumption, especially about who benefits from automation and why.
Instead of accepting the status quo or relying on existing models, designers should start by examining the basic requirements and values at play: who benefits from automating this process? Kuang and Fabricant (2019) describe this well.
In addition, designers should consider transparency at different levels: what an expert user needs to understand about the AI’s reasoning can be very different from what a novice needs.
2. Create system visualizations
Create simple visual diagrams that show the boundaries of the AI system and how it connects with users and other systems. These visuals help everyone understand what the AI is responsible for and what lies outside its control. It is also important to highlight the areas where the AI’s behavior may be unexpected or uncertain. Sharing these diagrams with stakeholders makes a complex AI system easier to understand, encourages open conversations, and helps the team agree on where human oversight is needed. This approach builds shared understanding and leads to better, more reliable AI products.
3. Practice “temporal design”
As designers, we need to consider how relationships with AI develop over time. Unlike static products, AI systems change through use, and that calls for design patterns that anticipate and guide this evolution.
For example, how can the interface reflect the system’s growing understanding of user preferences without creating unsettling experiences? How do we design for the changing nature of trust as users become more familiar with the AI’s capabilities and limits?
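As a purely illustrative sketch of temporal design (the thresholds, labels, and copy below are assumptions, not a validated pattern), a team might vary how much explanation the interface offers as a user’s familiarity with the AI grows:

```python
def explanation_level(sessions_completed: int) -> str:
    """Choose how much the interface explains the AI's suggestions,
    using completed sessions as a rough proxy for familiarity.
    Thresholds are illustrative assumptions only."""
    if sessions_completed < 3:
        return "full"      # walk through why each suggestion was made
    if sessions_completed < 20:
        return "summary"   # short rationale, with details on demand
    return "minimal"       # surface confidence cues only, on request

for sessions in (1, 10, 50):
    print(sessions, "sessions ->", explanation_level(sessions))
```

Whether explanation should ever fade this far is itself a design question; the point is that the interface’s behavior is planned as a function of the relationship, not fixed at launch.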
Are you designing for a model, or are you designing for a system? In the end, users do not experience models; they experience systems. And it is the quality of that system, not the model’s capabilities alone, that will determine whether your AI product thrives or fails.
If we want to create AI that truly serves people, let’s design not only for the intelligence of the agent, but also for the complexity of the world it inhabits.
References:
1. Kuang, C., and Fabricant, R. (2019). User Friendly: How the Hidden Rules of Design Are Changing the Way We Live, Work, and Play. Macmillan.
2. Zhang, H., et al. (2025). If Multi-Agent Debate Is the Answer, What Is the Question?
3. Gray, C. M. (2016). “It’s More of a Mindset Than a Method”: UX Practitioners’ Conception of Design Methods. Proceedings of the CHI Conference on Human Factors in Computing Systems.
4. Yang, Q., Steinfeld, A., Rosé, C., and Zimmerman, J. (2020). Re-examining Whether, Why, and How Human-AI Interaction Is Uniquely Difficult to Design. Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI ’20), Honolulu, HI, USA. ACM. https://doi.org/10.1145/3313831.3376301
5. Holmlid, S. (2009). Participative, Co-operative, Emancipatory: From Participatory Design to Service Design. First Nordic Conference on Service Design and Service Innovation.
6. The Role of Design Thinking in AI Implementation: A Case Study Analysis. International Journal of Design.
7. Perez, P. Designers: AI needs context. How UX teams should embrace data… UX Collective.
8. Clement, T. AI Product Design: Identifying the skill gap and how to close it. UX Collective.