Preface
Unlike commercial products, internal tools and platforms rarely have clear, direct business value; their efficiency value has to be measured through quantifiable indicators. This article attempts to establish a directly applicable framework of data indicators, making the value of internal tools and platforms visible and explainable.
I. Analyze Core Elements of Production Activities
From an object-oriented perspective, frontend engineering is the relationships and interaction behaviors between objects.
(From "Frontend Engineering System from an Object-Oriented Perspective")
Among them, objects fall into two categories, subjects (actors) and objects (things acted upon):
Objects are abstractions of the various entities in frontend application production activities. Some are subjects (such as people in different roles), while others are objects (such as tools, platforms, and other concrete things). Together, they complete the development and delivery of frontend applications through a series of interaction behaviors.
That is, people and tools are the core elements directly related to productivity:

The more powerful and intelligent the tool, the higher the user's operating efficiency and the smaller their mental burden.
P.S. Mental model refers to people's methods and habits of understanding things, which affect how users perceive the surrounding world and how they take action. It depends on the user's cognitive context, memory, channels and methods of active and passive learning, role-based habits carried over from competing products, and so on. See Four-Quadrant Model for Experience Measurement of Tool Products (1) for details.
II. Identify Key Goals of Tools
For tools, balancing efficiency and experience is an unchanging goal, but different tools may have different focuses, for example:
- Underlying tools not directly facing users: such as build modules, release modules, etc. Efficiency is relatively more important; experience is secondary.
- Upper-level tools that users directly interact with: such as debuggers, release platforms, etc. These focus more on experience, although efficiency is equally important.
On the other hand, tools are always born to solve problems. Choosing a tool boils down to 4 situations:
- Irreplaceable: the only tool that can solve the target problem; with no alternative, it must be used regardless of experience or efficiency.
- Best experience: the tool with the best usage experience among similar tools; it precisely meets needs, with no obvious efficiency gap compared to alternatives.
- Highest efficiency: the most efficient tool among similar tools; it solves problems quickly, obviously much faster than alternatives.
- Decent experience, passable efficiency: a tool that balances experience and efficiency among similar tools; no obvious shortcomings, it solves the problem adequately and is not too troublesome to use.
Excluding the no-choice situation: when there is no obvious efficiency gap, the tool with the better experience is more popular; and a tool with an obvious efficiency advantage and no experience deal-breakers will certainly be very popular.
However, it needs to be noted that if the optimal options in terms of experience and efficiency both have obvious shortcomings, users are more inclined to choose a mediocre alternative tool rather than endure the shortcomings long-term:
Ah.. yes.. I just don't want to use xxx anymore
III. Establish Efficiency Value Measurement Model
After determining key goals, the next question is how to quantify efficiency and experience to make them measurable.
Measuring Efficiency
Analogous to the work efficiency calculation formula:
Work Efficiency = Work Volume / Work Time
Tool efficiency can be defined as:
Tool Efficiency = Problem Scale / Operation Time
Problem scale is still hard to quantify directly, so it is further concretized as time cost:
Tool Efficiency = Time Cost (without using this tool to solve) / Time Cost (with using this tool to solve)
Then, there are 3 situations:
- Ratio equals 1: using or not using the tool is the same; the tool brings no efficiency improvement
- Ratio less than 1: better not to use the tool, because using it takes more time
- Ratio greater than 1: using the tool is more efficient; the larger the value, the more obvious the efficiency improvement the tool brings
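As a minimal sketch of this ratio and its three cases (the function names and English labels are my own, not from the article):

```python
def tool_efficiency(baseline_hours: float, tool_hours: float) -> float:
    """Ratio of time cost without the tool to time cost with it."""
    if tool_hours <= 0:
        raise ValueError("tool_hours must be positive")
    return baseline_hours / tool_hours

def interpret(ratio: float) -> str:
    """Map the ratio onto the three cases described above."""
    if ratio > 1:
        return "tool improves efficiency"
    if ratio < 1:
        return "better not to use the tool"
    return "no difference"

# A task that takes 1.5 hours by hand but 0.5 hours with the tool:
ratio = tool_efficiency(1.5, 0.5)
print(ratio, interpret(ratio))  # 3.0 tool improves efficiency
```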
Measuring Experience
Experience cannot be calculated to accurate numerical values through unified rules like efficiency, but a measurement model can still be established:

Experience is the degree of overlap between the product and the user's mental model (the mental-model row in the figure above). The closer the tool's functionality and performance are to user expectations, the higher the experience rating, reflected in:
- Ease of use: the mapping from the user's mental model to product functionality; ultimate ease of use is intuitive, ready to use out of the box
- Stability: the mapping from the user's mental model to product performance; ultimate stability is complete trust, never doubting that the tool will have problems
That is:
Tool Experience = Ease of Use * Stability
That is, tool experience is the product of ease of use and stability; even a slight weakness in either one makes the overall experience drop sharply.
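A hypothetical sketch of this multiplicative model (the 0-to-1 scoring scale is my assumption; the article does not prescribe one):

```python
def tool_experience(ease_of_use: float, stability: float) -> float:
    """Experience as the product of two factors scored on [0, 1]."""
    if not (0 <= ease_of_use <= 1 and 0 <= stability <= 1):
        raise ValueError("factors must be in [0, 1]")
    return ease_of_use * stability

# Modest weaknesses in both factors compound into a much lower score:
print(round(tool_experience(1.0, 1.0), 2))  # 1.0
print(round(tool_experience(0.8, 0.8), 2))  # 0.64
```

Because the factors multiply rather than add, the model captures the article's point: one weak dimension drags down the whole experience.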
Measuring Efficiency Value
In summary, the value a tool brings is reflected in 2 aspects:
Tool Value = Efficiency Value * Experience Factor
Among them:
- Efficiency Value: reduces the user's time cost to solve problems, letting users solve problems more quickly
- Experience Factor: reduces the user's mental burden, letting users solve problems more easily and pleasantly
The two complement each other; experience upgrades may improve efficiency, and efficiency improvements may also drive experience.
Therefore, with experience guaranteed, efficiency can simply be used as the measurement standard for efficiency value, and an accurate ratio can quantify efficiency value.
IV. Choose Appropriate Data Indicators
With the measurement model, next is to frame specific data indicators.
Time Cost
Based on the above analysis, (when experience is guaranteed) the direct manifestation of efficiency benefits is the time cost that the tool can save, closely related to user volume, usage frequency, usage duration, etc.:
- User volume: cumulative users; daily/weekly/monthly UV; daily new users; daily/weekly/monthly active users (users who performed core operations during the period)
- Usage frequency: daily/weekly/monthly PV; function usage rate; core operation count; average daily usage count
- Usage duration: core operation duration
P.S. Function Usage Rate = Number of users using a certain function / Total users, can also be used to measure the contribution of different functions to the whole.
For example:
Time cost saved per day = Daily users * Daily function usage rate * (Time without using this tool to solve - Operation time)
= 100 * 35% * (1.5 person-days - 0.8 person-days)
= 24.5 person-days
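The worked example above, as a runnable sketch (the variable names are mine, chosen to mirror the formula):

```python
daily_users = 100
function_usage_rate = 0.35      # share of daily users who use this function
baseline_person_days = 1.5      # time to solve without the tool
tool_person_days = 0.8          # operation time with the tool

saved_per_day = daily_users * function_usage_rate * (
    baseline_person_days - tool_person_days
)
print(round(saved_per_day, 2), "person-days saved per day")  # 24.5
```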
In addition, some side data can also reflect efficiency value:
- User distribution: target users, user penetration rate, proportion of users by attribute, penetration rate of users by attribute
- Output distribution: quantity, importance, average time, and proportion of output results by attribute
P.S. User penetration rate can be simply understood as User Penetration Rate = Existing users / Target users
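Both rates are simple quotients and can be computed the same way; the sample counts below are made up for illustration:

```python
def penetration_rate(existing_users: int, target_users: int) -> float:
    """Share of the target audience already using the tool."""
    return existing_users / target_users

def function_usage_rate(function_users: int, total_users: int) -> float:
    """Share of users who use a given function."""
    return function_users / total_users

print(f"penetration: {penetration_rate(200, 300):.0%}")   # 67%
print(f"usage:       {function_usage_rate(35, 100):.0%}")  # 35%
```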
For example:
Covers 2/3 of target users, including over 60% of frontline developers, 10% of testers
Covers 8 major product lines, supports over 40 projects in half a year, including the xx key project with excellent results
Ease of Use
Ease of use can also be measured through some numerical values:
- User satisfaction: user complaint/inquiry count and rate, sample-survey satisfaction
- Operation difficulty: misoperation count
- Mental burden: help-document word count, number of explanatory notes
In addition, a requirement-gathering method commonly used by product managers is to observe real users' actual operations and record the frustrations they run into. During the process, don't interrupt or rush to help; this often pinpoints usability problems precisely.
Stability
Stability can be reflected from anomaly indicators, for example:
- Crash rate
- Bug count
- Operation failure count
Among them, operation failure is a loose category that includes runtime errors, service interface errors, searches returning no results, and so on. Stability issues easily destroy the usage experience and, in turn, significantly reduce efficiency. For example, if the tool crashes constantly and is almost unusable, there is no efficiency value to speak of.
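One way to derive a per-operation failure rate from event logs, as a sketch (the log format here is hypothetical):

```python
from collections import Counter

# Hypothetical event log: (operation name, whether it succeeded).
events = [
    ("build", True), ("build", False), ("build", True),
    ("deploy", True), ("deploy", True),
]

totals, failures = Counter(), Counter()
for op, ok in events:
    totals[op] += 1
    if not ok:
        failures[op] += 1

for op in sorted(totals):
    rate = failures[op] / totals[op]
    print(f"{op}: {rate:.0%} failure rate")
```

In practice the events would come from the tool's own telemetry rather than a hardcoded list.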
V. Let Data Speak When There's Data
The reason for establishing quantifiable data indicators is to let data speak, verify some previous assumptions, and provide guidance direction for tool iteration and optimization:
- Did the new feature win user support? How is its function usage rate? Did the promotion channels have an effect?
- Are user operations smooth? Is there a big gap between actual time spent and expectations?
- How are the output results? Is the ROI high enough? Is it worth continuing?
Do things with a PM's mature methodology