We have heard it repeatedly: data insights should always trigger an action. As a general rule, this should obviously be the goal. Too much time is wasted on analysis that will only be used if it confirms what we were already planning to do, and on dashboards whose only purpose is to support your data-driven image whenever you bring guests to the office. When someone asks me to analyze something, I like to ask in advance what the result of the analysis would need to be to sway the decision in the other direction, and a lot of the time I don’t get an answer. Often, the data is not swaying or driving anything.
While this is a daily struggle for many, the opposite case can lead to problems of its own. Because what do we do when we don’t want people to take action? That is the challenge for automation products.
The Unnecessary Human in the Loop
Automation products are the definition of data-driven decision-making. Whether rule-based, personalized, or AI-driven, the point is to solve the problem automatically based on the input data. There should be no manual decisions, and there should be no manual actions.
While this sounds nice, most managers will still want to see that the automated process is working. We can’t blame them: there are a lot of bad tech solutions out there, and in the end the responsibility lies with the manager, automated or not. This brings us to the challenge: how do we present insights that provide this overview without encouraging an action?
This might not sound hard, but it is. When you put actionable insights in front of people who are used to making decisions, it can be very hard for them not to act. I was once part of a team that developed an A/B-testing tool for news article titles. Editors could add a few titles, the tool would run an A/B test on them, and once it had collected enough data it would pick the winning title and apply it to all users.[1] However, because we also provided the editors with a dashboard where they could see the results of the test, it turned into a manual process: they would use the tool to run the test, the tool would pick a winner, and they would then turn the tool off and manually assign the title to all users. There was no need for manual intervention in this process, but we had provided them with actionable insights, and they wanted to be in charge of taking clever actions.
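The automated decision step described above can be sketched roughly as follows. This is a minimal illustration, not the real tool’s logic: the function names, the impression threshold, and the plain highest-click-rate rule are all assumptions for the example.

```python
# Hypothetical sketch of the winner-picking step an A/B title test automates.
# Threshold and selection rule are illustrative, not the real tool's logic.

def pick_winner(results, min_impressions=1000):
    """results maps title -> (impressions, clicks).
    Returns the winning title once every variant has enough data,
    otherwise None (keep the test running)."""
    if any(imps < min_impressions for imps, _ in results.values()):
        return None  # not enough data yet
    return max(results, key=lambda t: results[t][1] / results[t][0])

winner = pick_winner({
    "Title A": (1200, 96),   # 8.0% click rate
    "Title B": (1100, 121),  # 11.0% click rate
})
# winner == "Title B"; the tool then applies it to all users automatically
```

The whole point of the product is that nothing after this call requires a human: the winner is applied, not reported for approval.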
This could have been just an interesting observation of human behavior, but it became a real problem: costs grew because of unnecessary data processing, and angry emails were fired at the Director of Data[2] for depriving editors of “critical tools” when the dashboard (not the tool) stopped working. Frustrations grew on multiple ends, over a product that was doing just fine on its own.
Providing Insights Without Encouraging Action
When you provide insights as part of an automation product, you should try to keep them non-actionable. An automation product that triggers a manual process is not achieving what it is supposed to. Nothing is as non-actionable as actions already taken, so a good approach is to remove metrics tied to the evaluation part of the process and instead provide data about the decision that was made as a result of that evaluation.
In the example of the A/B-testing tool, this could have been achieved by not displaying continuous click rates for the individual titles, and instead showing how many users the winning title was shown to after the test ended. We don’t need to provide metrics to interpret and evaluate, because the evaluation has already happened, and this makes it clear that the issue has already been taken care of. While this can’t always be achieved in a dashboard, it works well for many SaaS automation products where you have access to customized visualizations.
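The contrast between the two kinds of display can be made concrete. In this hypothetical sketch (field names and formats are made up for the example), the first view hands the reader live per-variant metrics to evaluate, while the second reports only the action already taken and its reach:

```python
# Illustrative contrast: the same finished A/B test, shown two ways.
# Field names ("variants", "winner", "reach", "ended") are invented here.

def actionable_view(test):
    # Live per-variant click rates: invites the reader to re-evaluate
    # a decision the system has already made.
    return {title: f"{clicks / imps:.1%}"
            for title, (imps, clicks) in test["variants"].items()}

def non_actionable_view(test):
    # Reports only the decision taken and who it applies to.
    return (f"Test ended {test['ended']}: winning title "
            f"\"{test['winner']}\" now shown to {test['reach']:,} users.")

test = {
    "variants": {"Old headline": (1200, 96), "New headline": (1100, 121)},
    "winner": "New headline",
    "reach": 54321,
    "ended": "May 12",
}
summary = non_actionable_view(test)
# 'Test ended May 12: winning title "New headline" now shown to 54,321 users.'
```

Both views are built from the same underlying data; the difference is purely in what the reader is invited to do with it.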
If your product is any good, another set of non-actionable insights you could bring to the table is the one focused on how the product works. The reason to provide insights in the first place is the assurance that your product is managing what it is supposed to manage. Data is core to automation products, and finding ways to visualize how data is used to make intelligent decisions will increase trust. It also separates you from your competitors, who are all bragging about some “customized algorithm” without any evidence that it actually works (or exists, for that matter).
A Case From Pistachio: Defining What Matters
I currently work at Pistachio, a cybersecurity startup that automates awareness training. Automation is core to Pistachio’s brand, which made the focus on keeping it non-actionable all the more important when developing the new performance page we’ll be releasing this month.
We had two main goals for our new performance page. The first was to communicate that the product was working. The second was to communicate that the product was superior to competing products in the market. To start off the process, I thought about what a customer would need to see to feel assured that Pistachio is managing the assignment properly. I landed on three factors:
Frequency: Staying aware means continuously being reminded that you are exposed to the threat. If the threat feels too far away, people will lower their guards. Metrics for frequency should show customers that their employees receive content often enough to stay aware.
Variety: Cybersecurity threats come in all sorts of shapes and sizes. Receiving different types of simulations increases the ability to identify the threat. Displaying content variety reassures the customer that their employees are familiar with the different methods used to steal information.
Difficulty: As the sophistication of social engineering attacks increases, customers should want to know that their employees have experience with threats that match this level of sophistication. After all, in cybersecurity, you are only as strong as your weakest link.
Some of the data on these topics I decided to visualize as text. I wanted to mimic the reporting from a manager to a senior manager, as the role of the product is to manage the area of cybersecurity awareness training. When something is managed for you by others, you don’t necessarily have time for all the details. What you need is a brief explanation of what’s going on, and the assurance that it is being handled. For this visualization we are using an LLM to generate short reports about what’s going on, based on the customer’s data and the setup of our systems. The AI explains to our customers what the data is telling us and how our systems will adjust the training in response.
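One way to wire up that kind of summary is to fold the customer’s metrics into a prompt that explicitly frames the output as a status report rather than a to-do list. This is a hedged sketch, not Pistachio’s implementation: the prompt wording, metric names, and the idea of passing the result to a generic LLM client are all assumptions.

```python
# Hypothetical sketch of turning customer metrics into a narrative report.
# Prompt text and metric names are placeholders, not Pistachio's actual code.

def build_report_prompt(metrics):
    lines = "\n".join(f"- {name}: {value}" for name, value in metrics.items())
    return (
        "You are reporting to a busy senior manager. In two or three "
        "sentences, summarize the awareness-training data below and state "
        "how the automated system will adjust training in response. Do not "
        "suggest manual actions; the system handles them.\n" + lines
    )

prompt = build_report_prompt({
    "simulations sent in the last 30 days": 42,
    "distinct simulation types used": 6,
    "average difficulty tier": "3 of 5",
})
# pass `prompt` to your LLM client of choice to generate the report text
```

Note that the instruction “do not suggest manual actions” does the same job as the dashboard design choices above: the generated text describes what is being handled, not what the reader should go do.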
Aligning Metrics with Product Strategy
Not everyone will be happy with my choices on this, I’m aware of that. People like metrics, even the ones they don’t actually use for anything. It’s a different approach than the one taken by many competitors, but it is also a different product. Pistachio is an automation product, and the metrics have to align with the product strategy. Automation products shouldn’t provide actionable insights, unless you don’t really care how your product is used. This probably applies to a lot of SaaS products, but it’s easy to get carried away in dashboard land.
While we don’t want our customers to feel like we are gatekeeping data from them, we also don’t want to be a company that displays metrics for the purpose of displaying metrics. We don’t want our customers spending their valuable time on data we didn’t put any thought into. We are choosing to display the information we think you should care about, and we are displaying it in a way that tells you we are already taking care of it.