Tyk Keys and Policies Journey

2019 - 2021
Figma | Miro

Tyk is an API Management solution, enabling development teams to build a secure, stable and scalable API-led business.

This case study looks into how the keys and policies journey within the Tyk dashboard was implemented. I was the sole UX resource, responsible for driving the project. The released outcomes are documented in the following videos for Keys and Policies.

Screenshot of policies


  • What are common problem themes?

  • What are our competitors doing?

  • Do things align with our corporate and product strategies?

We saw many customer requests expressing confusion about setting up keys and policies. One reason was a series of previous feature-rich releases: multiple features were added without consideration of how they work cohesively or fit into the user’s workflow.

Competitors don’t fare better than Tyk in their equivalents of the policies + keys implementation, which gives us an opportunity to stand out in the space. We want to make our platforms easy to access and use by applying design thinking to create a positive user experience for everyone.


  • Do we know who the users are?

  • Do we understand their problems?

  • Do we know what their goals are?

Understand: our users

Before diving into the wireframes for the policies + keys journey, it’s vital to understand our users; otherwise, how can we create something that works for them? By making assumptions, we create something that works for ourselves, not necessarily for our current and future users.

Our UX researcher undertook persona research for the entire Tyk product by gathering insights from: current Tyk users, GitHub and Freshdesk tickets, HotJar feedback, Google Analytics, and persona workshops.

These insights came together to form our Persona Spectrum and indicate users who would interact with the dashboard.

Persona insights

Understand: our users’ problems

Let’s start getting specific. The Persona research is an excellent overview of Tyk users, but what about our users’ problems during the policies + keys journey?

Using insights from Freshdesk tickets, the community forum, Stack Overflow, user interviews, industry standards, and client feature requests, we created a list of difficulties users were facing during the policies + keys journey.

User insights

Understand: our users’ goals

What tasks do our users need to complete, and can they complete them efficiently? The overarching goals are similar: create, edit, delete or view a policy or key. But, within each of these goals are smaller components that need to be considered.

For example, you don’t just create a policy. You have to name it, set a trial period, decide on the policy state, add access rights, path-based permissions, rate limits and quota. Will the API inherit global settings, or will they be set per API? Am I going to partition my policy?

As you can see, it’s not a linear journey, and we have to account for all possible tasks the user wants to complete.

User goals documented
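To give a sense of how many decisions stack up during "create a policy", here is a minimal sketch in Python of the kind of policy object a user effectively assembles. The field names loosely mirror Tyk’s policy format, but treat them as illustrative assumptions rather than the exact schema:

```python
# Illustrative sketch of the decisions behind "create a policy".
# Field names loosely follow Tyk's policy object; treat them as
# assumptions, not the exact schema.

policy = {
    "name": "Mobile partner tier",
    "active": True,                 # policy state
    "trial_period": 30,             # days before keys under this policy expire
    "rate": 100, "per": 60,         # rate limit: 100 requests per 60 seconds
    "quota_max": 10000,             # total requests allowed per quota period
    "quota_renewal_rate": 3600,     # quota period in seconds
    "access_rights": {
        "orders-api": {
            "versions": ["Default"],
            # path-based permissions: which URLs/methods this policy allows
            "allowed_urls": [{"url": "/orders", "methods": ["GET", "POST"]}],
        }
    },
    # partitioning: apply only some sections of this policy to a key
    "partitions": {"rate_limit": True, "quota": False, "acl": False},
}

# The sort of sanity check the dashboard form has to enforce:
assert policy["rate"] > 0 and policy["quota_max"] != 0
assert policy["access_rights"], "a policy needs at least one API"
```

Every field here is a decision point the form has to surface, which is why the journey can’t be a single linear flow.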


  • What journeys have been identified?

  • What collective needs should be focussed on?

  • Which areas should we prioritise, and which can we solve?

Define: the user journeys

What is the current journey our users go through with policies + keys? We know that it’s not a linear journey, so we spent some time mapping the existing dashboard and identifying every user’s touchpoint with either policies or keys.

We used the touchpoint map to cross-reference our users’ goals and ensure they could be met. It showed that most goals could be completed, but a couple of micro-interactions were missing - e.g. tags and meta-data could only be added or deleted, not edited.

We used this map to highlight overlaps between both policies + keys and the screen flows and micro-interactions to consider.

User journey map

Define: the focus of needs

A significant advantage of having a UX team that isn’t super technical is that we ask all the basic questions. Often, we don’t understand how something works - and that could be because our technical knowledge is lacking - or it could be that a specific feature isn’t explained or presented clearly within the dashboard.

So we questioned everything.

Why would you choose to authenticate with Mutual TLS? What is the benefit of enabling throttling? Why would you use an alias? Why would I partition my policy? Add path-based permission to my policy? Etc etc. We worked with the development team to help answer some of these.

By breaking down every feature within the policies + keys journeys, it became clear which areas of user need to focus on. For example, when creating a key, the user ticks checkboxes depending on the auth type of the API. We can optimise this by having the back-end detect the auth type of the policy or API and show only the relevant field in the UI instead of showing them all.

Question understanding
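As a sketch of that optimisation, the key form could branch on the detected auth type rather than showing every field at once. The auth type names and field lists below are illustrative assumptions, not Tyk’s actual configuration:

```python
# Hypothetical sketch: render only the key-form fields relevant to the
# detected auth type, instead of every checkbox at once.
# Auth type names and field lists are illustrative assumptions.

AUTH_FIELDS = {
    "auth_token": ["token"],
    "basic_auth": ["username", "password"],
    "mutual_tls": ["client_certificate"],
    "hmac": ["hmac_secret"],
}

def fields_for_key_form(api_auth_type: str) -> list[str]:
    """Return the form fields to render for the given API auth type."""
    try:
        return AUTH_FIELDS[api_auth_type]
    except KeyError:
        # Unknown auth type: fall back to showing everything,
        # as the old UI did.
        return sorted({f for fields in AUTH_FIELDS.values() for f in fields})

print(fields_for_key_form("basic_auth"))  # ['username', 'password']
```

The fallback branch preserves the old behaviour, so an undetected auth type degrades gracefully rather than hiding fields the user needs.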

Define: areas of priority that are solvable

Release timelines mean that improvements have to be realistic. There are many ‘nice to haves’, e.g. GitHub-style search and error management, that would significantly improve the product. However, their absence isn’t detrimental to a first release, and they can be added later, leaving the more essential items to be developed now. This puts less strain on teams and allows us to start planning features for later stages.


  • Are the problems broad enough for innovation?

  • But narrow enough to control?

As important as it is to research and discover, we can’t do that forever - a plan of action is needed. From examining the policies + keys journey, we identified problematic areas and synthesised these into common themes. Next to each theme, we recorded the best course of action to solve it and the expected outcome.

Solvable areas


  • What ideas could solve our problems?

  • How can we take it to the next level?

Ideate: ways to solve the problem

Workshops are a great way to ideate. These are primarily run using Miro, where we can all collaborate. For the next project, it would be great to run workshops with more teams to gather a wider range of ideas and push innovation.

One of the main areas we concentrated on was searching/filtering/selecting. We broke this down by taking screenshots of the current dashboard and discussed with the front-end team:

  • the current implementation in the dashboard

  • if there was anything that could be added to improve that interaction

  • the goals of the user for the section

  • ideas for improvement

Workshop work

Our other aim was to improve the overall journey experience by putting things into a logical and expected order to make completing a task second nature. This was planned out by creating revised journey maps for both policies + keys.

Journey map


  • Can the idea be visualised?

  • Does the proposed solution work?

  • How can we test the idea and validate the concept?

Iterate: ideas visually

Now that we have solutions, we can create a wireframe and send it to developers to build, right? If only we lived in that perfect world! With the policies + keys journeys, we went through 4 major wireframe iterations plus countless minor changes to get it to the position it’s in today.

Design thinking is not a linear process. The solutions we come up with during the ideation phase aren’t set in stone. It’s important to explore each solution because often, working on them presents a new insight into a more efficient solution. And we can’t just look at wireframes screen by screen - we have to link them together as a prototype to understand if the flow is correct as well.

Here is an excellent example of how the ‘Add Policy’ screen started and how it evolved.

First Iteration:

The layout was kept similar to the original, but we looked to make the flow more straightforward. This meant leading the user to choose access rights first - triggering an overlay to set the limits, quota and path-based permissions, which was then added to the policy.

We realised that introducing the overlays was overcomplicating things, and staying within the screen would be better. This layout also didn’t solve the issue of keeping the form short and in a logical order.

Iteration 1

Second Iteration:

The introduction of tabs. The API Designer uses tabs to order its information logically, so we tried it with policies - separating the access rights from all the other details. This meant information wasn’t getting lost. We also changed adding the access rights to keep it on-screen rather than overlays. This made the steps easy to follow and grouped things by API rather than the type of information.

While this layout was more successful than the previous one, it didn’t demonstrate how the API inherited global values or how policy partitioning would work - it needed to be clearer.

Iteration 2

Third Iteration:

We separated the global settings from within each API to sit above all APIs. This kept the global settings in one place, so you’d know a change was affecting all related APIs. It also solved the problem of how to display policy partitioning.

Iteration 3
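The inheritance model from the third iteration can be pictured as a simple resolution rule: each API either uses the global values or its own override. The names and structure below are illustrative, not Tyk’s actual schema:

```python
# Sketch of the "global settings vs per-API override" model from
# the third iteration. Names are illustrative, not Tyk's schema.

GLOBAL = {"rate": 100, "per": 60}

apis = {
    "orders-api":  {"override": None},                     # inherits global
    "billing-api": {"override": {"rate": 10, "per": 60}},  # its own limits
}

def effective_limits(api_id: str) -> dict:
    """Per-API override wins; otherwise fall back to the global block."""
    return apis[api_id]["override"] or GLOBAL

print(effective_limits("orders-api"))   # {'rate': 100, 'per': 60}
```

Placing the global block above all APIs in the UI mirrors this rule: edit it once, and every API without an override picks up the change.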

Iterate: and test

One of the improvements we need to work on is user testing. The policies and keys wireframes were tested with engineering teams to ensure the logic is possible and the build time realistic. In the future, it would be great to have more internal stakeholders, and a more extensive selection of our current clients, test the prototypes and ideas.

Timeframes also came into play - this project was much bigger than expected, which limited testing time. Another challenge we face is getting accurate testing. Our wireframe prototypes in Figma are essentially clickable images, which are great for testing flow. However, we can’t place dynamic content within them that makes sense to the user. This means we can’t test a real scenario until the first builds are created - which can surface problems too late in the process.


With the wireframes and workflows in place, I used Figma to build the UI on top, respecting our new brand guidelines. I worked with the developers during the build to ensure the workflows were accurately translated and to make any amendments that arose along the way. Since releasing the keys and policies update, we have had excellent feedback from our clients. Ross from RBFCU commented that:

"he and his team are very happy with the 3.0 release, specifically with the UI improvements in the Dashboard. In addition to the new look and feel he commented that the policies and keys user experience changes are making it easier for their team members to understand the process flow when setting these up and they're also helping them see the bigger picture of how these pieces integrate into the larger whole."

The full walkthrough explaining the UX/UI changes can be viewed for both policies and keys.

policy screen

UX Key Takeaways

  • Not fully understanding the technology is an issue. In future projects, we need someone technical to explain how something currently works so we can identify ways to improve it. Technical consults need to happen in the very early phases of a project.

  • We need to get a complete understanding of real-life use cases so we can cater to them. E.g. We understood the use case for policies but didn’t for the parts within it, like how people use tags, meta-data etc.

  • Back-end teams need to be involved earlier - they need to understand the use case of what we are doing so they know why they are coding something and how it’s expected to work, but also we need to understand how time-intensive a task is.

  • Docs need to be considered - with changes to flow and layout, the docs need to reflect this. We need to ensure that enough time is given so docs can be correctly updated at the same time as a new release.


  • The most interesting/scary thing that we found out was the lack of understanding of how something worked. You could ask three different engineers what the expected behaviour should be, and you could get three completely different answers. There needs to be some single source of truth and mutual understanding.

  • The keys journey was the most difficult to define - especially in the cases of policy partitioning. We are relatively flexible with what policies we can combine when creating a key - however, we don’t know what combinations our users are doing. It will be a massive advantage once we can obtain tracking data to understand these sorts of interactions. There was also difficulty understanding how combining partitioned policies worked, which isn’t documented anywhere.

  • The docs provide technical documentation but often don’t say what something is or why it’s used with the dashboard - could it be that it’s expected in the industry to know that info already? It may be interesting to add use cases to docs or create other resources people could use.