The Future of User Interfaces: AI at the Forefront
Examining the current and future possibilities of recommendation AI, predictive AI, and adaptive AI.
Have you ever found yourself getting frustrated with a website or app because it just doesn’t seem to understand what you want? The good news is that we may not be far from a future where all experiences are dynamic and tailored perfectly for each unique user. We’ve already been using AI to generate dynamic content and product recommendations. But with recent advancements in AI, I thought it’d be worth exploring the current state and possible future of using AI to create truly dynamic user experiences.
AI models rely on ‘inputs’ to produce an ‘output.’ For our purposes, ‘inputs’ refer to user behavior within a product (tracked through events like clicks, scrolls, time delays, and conversions). ‘Outputs’ refer to how the experience changes for each unique user. The challenge is determining which events and metrics to track and how to interpret them. With the right inputs, AI can enhance the user experience in several ways, including 1) recommendation AI, 2) predictive AI, and 3) adaptive AI.
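To make the input/output framing concrete, here's a minimal sketch of how raw behavioral events might be aggregated into features a model can consume. The event names and fields are purely illustrative, not tied to any particular analytics platform.

```python
from dataclasses import dataclass

# Hypothetical event schema: the names and fields are illustrative.
@dataclass
class UserEvent:
    user_id: str
    event_type: str     # e.g. "click", "scroll", "dwell", "conversion"
    target: str         # the UI element or content item acted on
    value: float = 1.0  # e.g. dwell time in seconds, scroll depth

def to_feature_vector(events: list[UserEvent]) -> dict[str, float]:
    """Aggregate raw events into per-type totals a model can consume."""
    features: dict[str, float] = {}
    for e in events:
        features[e.event_type] = features.get(e.event_type, 0.0) + e.value
    return features

events = [
    UserEvent("u1", "click", "product_42"),
    UserEvent("u1", "dwell", "product_42", value=12.5),
    UserEvent("u1", "click", "product_7"),
]
print(to_feature_vector(events))  # {'click': 2.0, 'dwell': 12.5}
```

The hard design work is in the middle step: deciding which events are worth counting and what each one actually says about the user.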
The most widespread implementation of AI is generating recommendations and curating content. Whether users are shopping or browsing a home feed, ranking and recommendation algorithms surface the most relevant and engaging content for each individual user.
Ranking algorithms use AI to determine the order of content we see (and sometimes its timing). Products with a home feed typically start with a chronological ranking (most recent posts first) and eventually switch to an algorithmic feed. However, algorithmic feeds have been criticized for favoring businesses and ads over content from the accounts a user actually follows. After shifting to an algorithmic feed in 2016, Instagram eventually released an option to switch back to a “chronological feed” in response to the backlash.
On the flip side, TikTok has proven how effective a truly accurate recommendation AI is at engaging and retaining users. So why can’t other platforms clone TikTok’s feed ranking and tailored content strategies? The main problem lies in how poorly pre-TikTok content feeds were designed to capture user sentiment. Consider a standard social feed unit: there’s the main content (image, video, text), some information about the profile posting it, some basic metrics (comments, likes, shares, etc.), and some user actions (like, favorite, share, comment). In this design, the only sentiment signals we can capture are explicit “actions” on each feed unit. Explicit actions require deliberate user input, which means we can’t gather a holistic picture of sentiment, since most users never perform those actions.
On the other hand, TikTok can capture sentiment signals that reflect most (if not all) of its users. This is because TikTok’s feed is full-screen, showing one piece of content at a time. As a result, every sentiment signal maps to a single piece of content instead of the 3–4 pieces visible in a traditional feed.
Everything you do is a (mostly implicit) sentiment signal. Positive sentiment signals include: watching most or all of the video (implicit), letting the video loop (implicit), sharing the video (explicit), clicking on the profile or music tag (explicit), and following the creator (explicit). Negative sentiment signals include swiping up to skip to the next video (while this is explicit, it is also a mandatory user behavior). Based on this structure, every user provides some implicit sentiment signals on top of the standard explicit signals other platforms have. This makes TikTok’s recommendation AI much more powerful: it has an order of magnitude more ‘inputs’ than its competitors.
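One way to picture how these signals combine is a simple weighted score per video. The signal names and weights below are invented for illustration; a real ranking system would learn these weights from data rather than hand-tune them.

```python
# Illustrative weights for implicit and explicit sentiment signals on a
# single full-screen video. These numbers are made up for the sketch.
SIGNAL_WEIGHTS = {
    "watch_ratio": 2.0,    # implicit: fraction of the video watched
    "loops": 1.5,          # implicit: times the video replayed
    "share": 3.0,          # explicit
    "profile_click": 1.0,  # explicit
    "follow": 4.0,         # explicit
    "skip": -2.0,          # the one mandatory (negative) signal
}

def sentiment_score(signals: dict[str, float]) -> float:
    """Combine observed signals into a single per-video score."""
    return sum(SIGNAL_WEIGHTS.get(name, 0.0) * value
               for name, value in signals.items())

# A viewer who watched 90% of a video and let it loop once, but never
# tapped anything, still produces a usable (positive) signal:
print(sentiment_score({"watch_ratio": 0.9, "loops": 1, "skip": 1}))
```

The key point is the first example: even a completely passive viewer generates inputs, which is exactly what traditional explicit-action feeds miss.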
There are other creative uses of recommendation AI, such as Netflix’s dynamic thumbnails. In 2014, Netflix conducted consumer research indicating that the video thumbnail was the most important factor in a user’s decision to watch something. Since each user has unique preferences, it doesn’t make sense to show the same thumbnail to everyone if we want to maximize conversion and watch time. Their brilliant solution was a recommendation algorithm that takes in user ‘inputs’ like current location and watch history to select the best thumbnail (output).
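A toy version of this input-to-output loop can be sketched as an epsilon-greedy bandit over thumbnail variants. Netflix’s actual system is far more sophisticated (contextual models over per-user history); this only shows the mechanic of learning which output converts best.

```python
import random

# Toy epsilon-greedy bandit for choosing among thumbnail variants.
# Variant names and the epsilon value are illustrative.
class ThumbnailBandit:
    def __init__(self, variants, epsilon=0.1):
        self.epsilon = epsilon
        self.stats = {v: {"shown": 0, "clicked": 0} for v in variants}

    def choose(self):
        # Explore occasionally; otherwise exploit the best CTR so far.
        if random.random() < self.epsilon:
            return random.choice(list(self.stats))
        return max(self.stats,
                   key=lambda v: self.stats[v]["clicked"]
                   / max(self.stats[v]["shown"], 1))

    def record(self, variant, clicked):
        self.stats[variant]["shown"] += 1
        self.stats[variant]["clicked"] += int(clicked)

bandit = ThumbnailBandit(["action_shot", "romance_shot"], epsilon=0.0)
bandit.record("action_shot", clicked=True)
bandit.record("romance_shot", clicked=False)
print(bandit.choose())  # action_shot
```

With exploration disabled (epsilon=0), the bandit always serves the variant with the best click-through rate observed so far.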
As we move toward more AI-powered experiences, considering how algorithms “see” your users becomes more and more critical. Designers of the future will have a much more important role in creating interfaces that help algorithms effectively capture user behavior (inputs) and in identifying opportunities for AI-tailored experiences (outputs).
The growth of AI has opened up new opportunities for enhancing the user’s experience through predictive AI. By anticipating what users might require or desire in the future, products can display relevant interfaces that correspond to their current tasks.
One of the earliest forms of predictive (non-AI) user experience is predictive search and autocomplete. When a user begins to type an input, the interface suggests potential endings and presents them as options that the user can choose from. Although this is a simple example, it’s the best way to illustrate how predictive AI should function: invisible until needed.
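As a baseline, classic autocomplete needs no AI at all: a sorted vocabulary and a prefix search are enough. This sketch is a deliberately minimal, non-personalized version; real systems rank candidates by popularity and user history.

```python
import bisect

# Minimal prefix autocomplete over a vocabulary: the non-AI ancestor
# of predictive text. Real systems rank matches; this only finds them.
def autocomplete(vocab: list[str], prefix: str, limit: int = 3) -> list[str]:
    words = sorted(vocab)
    # Binary-search to the first word that could match the prefix.
    start = bisect.bisect_left(words, prefix)
    matches = []
    for word in words[start:]:
        if not word.startswith(prefix):
            break  # sorted order: no later word can match either
        matches.append(word)
        if len(matches) == limit:
            break
    return matches

print(autocomplete(["predict", "prefix", "press", "print"], "pre"))
# ['predict', 'prefix', 'press']
```

The interface lesson carries over unchanged to the AI versions: suggestions appear only once the user starts typing, and never block the user from ignoring them.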
For instance, Gmail’s Smart Compose feature is a great example of predictive AI in action. When composing an email, users can press the Tab key to accept a suggested completion of the sentence they are writing. This significantly reduces the time and effort of finishing sentences manually, which adds up to substantial savings given how much text you’d otherwise type yourself.
As predictive AI continues to advance, we can expect to see significant improvements in Conversational User Interfaces (CUI). A CUI is a type of interface that enables computers to interact with users through natural language, such as Siri or automated chatbots. One of the primary benefits of CUI is that there is no learning curve for users and no navigational difficulties.
However, a common issue with CUIs is the interaction cost of forcing users to formulate their own questions and responses. Traditional chatbots have some basic logic for responding to user inputs, but they are still limited in their ability to hold natural, non-scripted conversations. With the growth of predictive AI, we can expect CUIs that allow for far more seamless and natural interactions.
CUIs powered by predictive AI would be able to proactively resolve issues or provide contextual actions to users. Instead of waiting for users to encounter a problem and then seek support, the system could anticipate potential friction points and address them before they arise. For example, a predictive AI-powered chatbot could suggest solutions to a much wider range of problems, not just the scripted ones.
Last of all, and perhaps the most ambitious and future-facing application, is adaptive user interfaces. To some extent, this already happens with location and user persona-based interfaces. For example, most companies already design separate websites and apps for different locales based on the significant differences in market preferences across countries and cultures.
For instance, Chinese users tend to prefer dense ‘super-apps’ (i.e., WeChat), whereas Western audiences prefer minimalist single-use-case apps. There are also differences in how languages are rendered in interfaces: Latin-script languages read left to right, Arabic reads right to left, and German text tends to be 35 percent longer than English text. Without AI, the best we can do is create adaptive experiences for large, distinct markets (i.e., different countries, kids’ versions, different consumer demographics, etc.). With AI, we have the potential to create an interface unique to each user (output) that evolves based on their in-product behavior (input).
The earliest non-AI example of adaptive UI I’ve found is the concept of “progressive reduction.” Progressive reduction is based on the premise that a user’s understanding of your product’s interface improves over time, so your application’s interface should adapt along with them.
In the example above, the ‘Signpost’ button has three versions: 1) beginner (button with icon and label), 2) intermediate (icon button, no label), and 3) advanced (icon only). If a user becomes proficient enough with the Signpost feature (based on usage), they start seeing the intermediate versions of various UI elements. If the user has a long period of dormancy or starts to struggle with the feature, their UI regresses back to the beginner version.
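The promotion-and-regression logic above can be sketched in a few lines. The usage thresholds and dormancy window here are invented for the example; a real product would tune them per feature.

```python
from datetime import datetime, timedelta

# Sketch of progressive reduction: pick a UI variant for a feature from
# usage count and recency. Thresholds are invented for illustration.
def ui_variant(usage_count: int, last_used: datetime,
               now: datetime, dormancy_days: int = 60) -> str:
    # Dormant users regress to the beginner version.
    if now - last_used > timedelta(days=dormancy_days):
        return "beginner"
    if usage_count >= 50:
        return "advanced"
    if usage_count >= 10:
        return "intermediate"
    return "beginner"

now = datetime(2024, 1, 1)
print(ui_variant(100, now, now))                          # advanced
print(ui_variant(100, now - timedelta(days=90), now))     # beginner
```

Note that the dormancy check runs first: even a former power user drops back to the labeled version after a long absence, which matches the regression behavior progressive reduction calls for.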
The notion of creating ‘tiered complexity’ versions of each component is interesting, as we already do this to some extent: many complex, technical products have ‘power user features’ that eventually unlock or get surfaced. However, what if AI could take this a step further? For example, what does your email inbox look like? Do you have thousands of unread messages, or do you keep your inbox tidy? Let’s say that, based on your inbox maintenance behavior, we gathered inputs about your personality, specifically your ‘conscientiousness’ and ‘neuroticism’ levels.
Once we gather enough of these behavioral signals, we can then show users a unique version of the interface based on their personality. Let’s use this notification badge as an example. Version A with a blue pulse dot indicates that there are new unread messages without showing the exact number. This pattern might be better for users who have high ‘neuroticism’ and might be more anxious seeing the exact number count on a red background. On the other hand, someone with more ‘conscientiousness’ might appreciate seeing the exact number count as they prefer being more disciplined and structured.
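A sketch of that trait-to-variant mapping might look like the function below. To be clear, the trait scores, thresholds, and variant properties are all hypothetical; this is not a validated psychological model, just an illustration of how behavioral inputs could select among design-system variants.

```python
# Hypothetical mapping from inferred personality traits (scored 0.0-1.0)
# to a notification-badge variant. Thresholds are purely illustrative.
def badge_variant(neuroticism: float, conscientiousness: float) -> dict:
    if neuroticism > 0.7:
        # Calm pulse dot: signals "something new" without an
        # anxiety-inducing unread count.
        return {"style": "pulse_dot", "color": "blue", "show_count": False}
    if conscientiousness > 0.7:
        # Exact count for users who methodically clear their queue.
        return {"style": "count_badge", "color": "red", "show_count": True}
    # Neutral default for everyone else.
    return {"style": "count_badge", "color": "gray", "show_count": True}

print(badge_variant(neuroticism=0.9, conscientiousness=0.3))
```

Each branch returns a variant that a design system could already express today; the only new ingredient is letting inferred behavior, rather than a product manager, pick which one a given user sees.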
The future of design systems could include complexity ‘levels’ and ‘personality variants’ for components. There are probably limitless dimensions designers could explore based on user behavior. So, while current design systems focus on consistency and raising the minimum UI quality bar, the future of design systems should be about providing constrained options that correspond to specific user behaviors. With the help of AI, the possibilities are endless and the future of user interfaces is sure to be exciting and truly personalized.
In conclusion, the future of AI in creating dynamic user experiences is a promising one, with many new and exciting possibilities to be unlocked. As AI models continue to improve, they will be able to provide a more seamless and tailored experience to users by interpreting user behavior in real time.
Recommendation AI, as seen on TikTok, is already demonstrating the benefits of having accurate, comprehensive sentiment signals to power engaging content. Predictive AI, such as Gmail’s Smart Compose, can save users significant time by augmenting their abilities, and CUIs can cover the neglected edge cases of an experience. As AI continues to advance, adaptive AI can surface custom interfaces uniquely tailored to each user. However, it’s critical to remember that AI is only as good as the data it’s fed, and designers will play a crucial role in capturing user ‘inputs’ and translating them into dynamic ‘outputs’.
Ready to level up your design skills and reach your full potential? Subscribe to “The Ambitious Designer” newsletter for weekly doses of product thinking, design concepts & frameworks, and career insights.
I also have a YouTube channel helping designers with career coaching and interview prep. If you’d like to schedule a 1–1 coaching session with me, you can book an appointment.