Project Background

This project centers on an automation tool integrated within a project management platform, aimed at streamlining workflows and boosting productivity. The tool offers a range of features designed to minimize manual effort, including advanced task management and automated notifications.

Despite its robust functionality, initial analysis reveals inconsistent levels of adoption and engagement across different features. The objective of this project is to map the user journey, uncover challenges in feature adoption, and propose actionable improvements to enhance utilization and user satisfaction.

In this discussion, we will focus exclusively on the problem space, not the solution space. This means identifying and understanding the challenges users face without yet exploring potential fixes or improvements.

Overview of Product Features

The automation tool consists of seven distinct features:

    1. Task Automation: Automate actions triggered by task-related events, such as task creation, status changes, or deadlines.
    2. Reminder Automation: Automate time-based notifications to remind users of overdue tasks, upcoming deadlines, or scheduled activities.
    3. Workflow Triggers: Automate responses to task or workflow changes, such as status updates, task completion, or dependency resolution.
    4. Recurring Task Creation: Automatically generate tasks on a recurring schedule, such as weekly meetings or monthly reports.
    5. Report Automation: Generate and schedule reports automatically based on predefined metrics, formats, or timeframes.
    6. Dependency Management: Automate notifications or actions based on task dependencies, such as blocked tasks, delays, or resolved dependencies.
    7. Escalation Automation: Automate escalation of critical tasks to higher management when predefined conditions, such as deadlines or priorities, are unmet.

Understanding the Feature Adoption Funnel

The first step is to analyze the Feature Adoption Funnel to determine which stage is experiencing the most significant drop-offs or challenges. For that, we need to understand the various stages of the Feature Adoption Funnel and the metrics to track at each stage.

The four stages of the Feature Adoption Funnel are:

1. Awareness

This stage identifies users who become aware of the automation tool and its features. It measures the discovery rate, which helps assess the tool’s visibility and initial awareness among users.

At this stage, we track the metric “Feature Awareness Rate”.

Feature Awareness Rate (%) = (Number of users exposed to the feature)/(Total number of users of the product) x 100

To count the number of users who become aware, several interactions can be tracked, such as:

    • Views or interactions with popups introducing the feature
    • Clicks on in-app spotlights or banners
    • Participation in product walkthroughs or tutorials
    • Navigation to the feature page from the main menu
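As an illustration, here is a minimal sketch of how aware users might be counted from such interactions, assuming a simple event log; the event names, fields, and numbers are hypothetical:

    # Hypothetical event log entries; event names and fields are illustrative only.
    events = [
        {"user_id": "u1", "event": "feature_popup_viewed"},
        {"user_id": "u1", "event": "feature_page_opened"},
        {"user_id": "u2", "event": "walkthrough_completed"},
        {"user_id": "u3", "event": "task_created"},  # not an awareness event
    ]

    AWARENESS_EVENTS = {
        "feature_popup_viewed",
        "spotlight_clicked",
        "walkthrough_completed",
        "feature_page_opened",
    }

    # A user counts as "aware" once, no matter how many awareness events they trigger.
    aware_users = {e["user_id"] for e in events if e["event"] in AWARENESS_EVENTS}

    total_users = 3  # total product users in the period (hypothetical)
    feature_awareness_rate = len(aware_users) / total_users * 100
    print(f"Feature Awareness Rate: {feature_awareness_rate:.1f}%")  # 66.7%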

2. Activation

Identifies users who take the first step toward using the feature by clicking on the feature button.

At this stage, we track the metric “Feature Activation Rate”:

Feature Activation Rate (%) = (Number of users who start using the feature)/(Number of aware users) x 100

3. Adoption

Measures the number of users who successfully complete a workflow and use the feature as intended.

There are several useful metrics to track at this stage, such as “Feature Adoption Rate”, “Feature Success Rate” (or alternatively the “Feature Drop-Off Rate”), and the “First-Time Completion Rate”:

Feature Adoption Rate (%) = (Number of users who successfully used the feature)/(Number of activated users) x 100

Feature Success Rate (%) = (Number of times feature used successfully)/(Total number of times feature activated) x 100

Alternatively, instead of Feature Success Rate we can track the Feature Drop-Off Rate, which is defined as:

Feature Drop-Off Rate (%) = 100 – Feature Success Rate

First-Time Completion Rate (%) = (Number of users successfully using the feature on first attempt)/(Number of activated users) x 100

4. Retention

Tracks users who continue to use the feature over time. This stage helps assess product-market fit, as a high retention rate indicates that the feature effectively meets real customer needs and is backed by a sound user experience.

At this stage, we track the metrics “Feature Retention Rate” (or alternatively the “Feature Churn Rate”) and the “Feature Usage Rate”:

Feature Retention Rate (%) = (Number of returning users)/(Number of adopted users) x 100

Alternatively, instead of Feature Retention Rate we can track the Feature Churn Rate, which is defined as:

Feature Churn Rate (%) = 100 – Feature Retention Rate

Feature Usage Rate (%) = (Number of tasks completed using automation)/(Total tasks completed (automation + manual)) x 100
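To tie the formulas above together, here is a minimal sketch that computes the funnel metrics from per-stage user counts; the counts are hypothetical and the helper is ours, not part of the product:

    def pct(numerator: int, denominator: int) -> float:
        """Return a percentage, guarding against division by zero."""
        return numerator / denominator * 100 if denominator else 0.0

    # Hypothetical counts for one feature over one month.
    total_users     = 10_000
    aware_users     = 6_500   # exposed to the feature
    activated_users = 3_200   # clicked into the feature
    adopted_users   = 1_400   # completed a workflow successfully
    returning_users = 450     # came back and reused the feature

    feature_awareness_rate  = pct(aware_users, total_users)        # 65.0
    feature_activation_rate = pct(activated_users, aware_users)    # ~49.2
    feature_adoption_rate   = pct(adopted_users, activated_users)  # 43.75
    feature_retention_rate  = pct(returning_users, adopted_users)  # ~32.1
    feature_churn_rate      = 100 - feature_retention_rate         # ~67.9

    for name, value in [
        ("Awareness", feature_awareness_rate),
        ("Activation", feature_activation_rate),
        ("Adoption", feature_adoption_rate),
        ("Retention", feature_retention_rate),
        ("Churn", feature_churn_rate),
    ]:
        print(f"{name}: {value:.1f}%")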

Example: Task Automation Feature

Let us review sample usage metrics for one of the features: Task Automation.

Stage 1: Awareness

The Feature Awareness Rate, along with its target values, is presented for the first three months. The data shows that the Feature Awareness Rate consistently meets the target, indicating that users are successfully discovering the task automation feature. This suggests that our onboarding mechanisms, such as popups, in-app highlights, and tutorials, are performing as expected.

Stage 2: Activation

A similar trend is observed for the Feature Activation Rate, which also meets its target. This indicates that users are engaging with the feature as anticipated.

Stage 3: Adoption

However, the data for the Feature Adoption Rate, which reflects the percentage of users successfully using the task automation feature, reveals a concerning pattern. The adoption rate falls significantly below the target, highlighting potential issues with feature usability or workflow complexity. This trend persists consistently across all three months, emphasizing the need for further investigation and improvement.

The Feature Success Rate for the task automation feature follows a comparable trend. Even though this metric includes users who have used the feature at least once and are presumably familiar with it, the success rate remains at approximately 50%. This highlights persistent challenges in achieving desired outcomes, even among repeat users, indicating potential issues with the feature’s design, functionality, or user support.

The First-Time Completion Rate presents an even more concerning trend. Despite setting a modest target of 50% for users successfully utilizing the task automation feature on their first attempt, the actual data falls to half of that benchmark. This indicates significant challenges in the onboarding process or usability of the feature, requiring immediate attention to improve user experience and ease of adoption.

These three metrics collectively point to challenges within the Adoption stage of the Feature Adoption Funnel.

Stage 4: Retention

To analyze the Feature Retention Rate, users are segmented into monthly cohorts. For instance, the January cohort comprises users who activated the feature during January. We will track how many users from this cohort successfully reuse the feature after one month, two months, and three months. A similar analysis will be conducted for the February cohort (with data available for the first two months) and the March cohort (with data for the first month only).

The data reveals a significant decline in retention for the January cohort over three months. Only 23% of users reuse the feature in the first month, dropping to 17% in the second month and 13% in the third month.

The February cohort shows an even sharper decline, while the March cohort begins with only 19% of users reusing the feature in the first month.
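A minimal sketch of how such a cohort table could be derived, assuming we know each user's activation month and the months in which they reused the feature (the data and field names are hypothetical):

    from collections import defaultdict

    # Hypothetical per-user data: activation month and months with repeat usage.
    users = [
        {"id": "u1", "activated": "2024-01", "reused_in": ["2024-02", "2024-03"]},
        {"id": "u2", "activated": "2024-01", "reused_in": []},
        {"id": "u3", "activated": "2024-02", "reused_in": ["2024-03"]},
    ]

    def month_offset(start: str, end: str) -> int:
        """Number of months between two YYYY-MM strings."""
        start_year, start_month = map(int, start.split("-"))
        end_year, end_month = map(int, end.split("-"))
        return (end_year - start_year) * 12 + (end_month - start_month)

    cohort_size = defaultdict(int)                    # activation month -> number of users
    retained = defaultdict(lambda: defaultdict(int))  # activation month -> offset -> users

    for user in users:
        cohort = user["activated"]
        cohort_size[cohort] += 1
        for month in user["reused_in"]:
            retained[cohort][month_offset(cohort, month)] += 1

    for cohort in sorted(cohort_size):
        rates = [retained[cohort][offset] / cohort_size[cohort] * 100 for offset in (1, 2, 3)]
        print(cohort, [f"{rate:.0f}%" for rate in rates])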

This indicates a lack of sustained engagement and potential issues with the feature’s long-term usability, requiring deeper investigation into user behavior and barriers to retention.

The data shows an increase in the Feature Usage Rate for both the January and February cohorts.

Combined with the retention metric, this suggests that although the majority of users are not utilizing the feature, those who do adopt it find it valuable and continue to use it. In other words, the feature demonstrates clear value for users who successfully adopt it, but significant usability or workflow challenges prevent broader usage.

Summary of Results

The following diagram summarizes the conversion across various stages for the first three months. The Conversion Funnel Chart shows stage-over-stage percentages, focusing on the conversion rate between consecutive stages. The Drop-Off Funnel Chart shows percentages relative to the initial user base and highlights the overall retention at each stage.

In this example, we have conducted the analysis for just one feature. In a real-world scenario, a similar analysis would be performed for all seven features to identify the stages of the Feature Adoption Funnel where users encounter pain points for each feature.

Feature Prioritization Process

Having identified the stages of concern in the funnel, the next step is to pinpoint the exact problem and address it. However, given that our automation tool has seven distinct features, it is crucial to prioritize them so that we focus our efforts on the areas that provide the maximum impact. For this, we use the Importance vs. Satisfaction matrix to guide our decision-making.

Assessing Feature Importance

Before developing the features, we conducted surveys to rank them based on their importance to our users. Additionally, we collected user feedback through in-app surveys and feedback forms integrated into the application. These tools allowed users to rate the importance of each feature on a scale and share their satisfaction levels based on their experience. This data was then aggregated to evaluate trends across the user base.

Sample questions for assessing feature importance (using Task Automation as an example) are as follows; the satisfaction question is covered in the next section:

    1. How important is it for you to have a feature that automates task assignment and management?
    2. How important is task automation for managing larger teams or projects efficiently?
    3. How important is it for you to save time by automating repetitive task-related actions?

We use a five-point response scale:

    1. Not at all important
    2. Slightly important
    3. Moderately important
    4. Very important
    5. Extremely important

We calculate the average score across all responses as the importance rating and map this five-point scale to a 0–100 scale for easier interpretation. Similar questions are asked for each feature to assess their importance.
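For instance, assuming a simple linear mapping, an average score s on the five-point scale can be converted as:

Importance (0–100) = (s – 1)/4 x 100

so an average importance score of 4.2 maps to (4.2 – 1)/4 x 100 = 80.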

Evaluating User Satisfaction

For satisfaction, we ask: “How satisfied are you with the process of creating and managing automated tasks while using the task automation feature?”

We use a seven-point response scale:

    1. Completely dissatisfied
    2. Mostly dissatisfied
    3. Somewhat dissatisfied
    4. Neither satisfied nor dissatisfied
    5. Somewhat satisfied
    6. Mostly satisfied
    7. Completely satisfied

We average the satisfaction scores and map them to a scale from 0 to 100 for easier interpretation. This process is repeated for all features.
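Under the same assumed linear mapping, an average score s on the seven-point scale converts as:

Satisfaction (0–100) = (s – 1)/6 x 100

so an average satisfaction score of 4.6 maps to (4.6 – 1)/6 x 100 = 60.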

Mapping Importance vs. Satisfaction

Next, we plot these scores on the Importance vs. Satisfaction Matrix to visually compare the relative performance of all features and identify areas for improvement.

To focus on high-importance features for initial improvement, we first establish a threshold for importance. Features falling below this threshold will not be prioritized for improvement. We set the threshold at 60%, which eliminates three features from consideration.

Identifying Opportunities for Customer Value

Next, we evaluate the Opportunity to Add Customer Value, identifying features with the greatest potential for increasing customer satisfaction. This is calculated using the formula:

Opportunity to Add Value = Importance × (100 – Satisfaction)

Based on this analysis, we find that the Task Automation feature offers the greatest opportunity to add customer value. This outcome is logical, as the Task Automation feature scored high on importance but relatively low on satisfaction, indicating significant potential for improvement.
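As a minimal sketch of this scoring step, using hypothetical importance and satisfaction values on the 0–100 scale for the features that cleared the importance threshold:

    # Hypothetical importance/satisfaction scores (0-100) for the shortlisted features.
    features = {
        "Task Automation":         {"importance": 85, "satisfaction": 45},
        "Reminder Automation":     {"importance": 70, "satisfaction": 72},
        "Workflow Triggers":       {"importance": 65, "satisfaction": 60},
        "Recurring Task Creation": {"importance": 62, "satisfaction": 68},
    }

    # Opportunity to Add Value = Importance x (100 - Satisfaction)
    for name, scores in features.items():
        scores["opportunity"] = scores["importance"] * (100 - scores["satisfaction"])

    ranked = sorted(features.items(), key=lambda item: item[1]["opportunity"], reverse=True)
    for name, scores in ranked:
        print(f"{name}: {scores['opportunity']}")
    # Task Automation ranks first: 85 x (100 - 45) = 4675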

Incorporating ROI in Prioritization

We have prioritized features based on the potential customer value each feature can create. However, we have not yet accounted for the resources required to implement these improvements.

To address this, the usual process begins with brainstorming feature improvement ideas. This involves:

    1. Divergent Thinking: Generating as many ideas as possible without judgment or evaluation.
    2. Convergent Thinking: Evaluating these ideas to identify the most promising ones. In agile methodology, these selected high-level ideas are often documented as “epics.”

The next step is to break down these epics into smaller, manageable pieces of functionality. Each piece is assigned story points to estimate the effort required to build it.

We then calculate the ROI for each feature improvement using the formula:

ROI = Return/Investment

For our example, we can use the estimated customer value the feature will provide as the Return, and the estimated effort, represented by story points or developer-weeks, as the Investment.

The features are then rank-ordered by their ROI to prioritize development efforts.
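Continuing with hypothetical numbers, the ranking step might look like this (the return values reuse the illustrative opportunity scores above, and the effort figures are made up):

    # Hypothetical return (opportunity score) and investment (story points) per improvement.
    improvements = {
        "Task Automation":         {"return": 4675, "effort": 40},
        "Workflow Triggers":       {"return": 2600, "effort": 25},
        "Recurring Task Creation": {"return": 1984, "effort": 30},
        "Reminder Automation":     {"return": 1960, "effort": 20},
    }

    # ROI = Return / Investment
    for name, data in improvements.items():
        data["roi"] = data["return"] / data["effort"]

    for name, data in sorted(improvements.items(), key=lambda item: item[1]["roi"], reverse=True):
        print(f"{name}: ROI = {data['roi']:.1f}")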

In our case, we are skipping this step as the Task Automation feature has already been identified as the highest priority due to its significant potential to deliver maximum customer value. Additionally, automation features are a crucial component of our product strategy, making their improvement essential for enhancing overall user satisfaction and achieving business objectives. Since these features have already been developed, we have a clear understanding of the associated resource requirements, allowing us to proceed directly to implementation without extensive effort estimation.

Analyzing the Selected Feature

Now that we have identified the Task Automation feature as the priority for improvement, we need to understand its user flow, analyze usage metrics, and prioritize a stage for improvement.

Visualizing the User Flow

We start by mapping the user flow for this feature. This will help us identify potential pain points, areas of friction, and opportunities to enhance the user experience.

As we can see, the user flow for the Task Automation feature consists of six distinct stages:

    1. Setting Trigger Conditions
    2. Setting Filters
    3. Setting Assignment Logic
    4. Configuring Notifications
    5. Updating Task Attributes
    6. Defining Escalation Rules

Of these stages, only Stage 1 (Setting Trigger Conditions) is mandatory, while the others are optional.

Usage Data Analysis

To identify stages of concern, we analyze usage data to track user drop-offs and engagement. For this purpose, I have created a Sankey Diagram to visually represent the user flow, highlighting user behavior and drop-off rates at each stage.

Several key observations can be made:

    1. Stages 1 and 2 contribute the most to drop-offs in absolute numbers.
    2. The overall drop-off rate is significantly high at 56.5%, indicating a critical issue in user retention.
    3. In terms of the percentage of users entering a stage and then dropping off, Stages 4 and 6 also contribute notably, despite being optional.
    4. While Stages 2 to 6 are optional, the majority of users follow a linear path from Stage 1 to Stage 6. This could indicate that users find all stages valuable, or alternatively, that they are unaware of the option to skip stages and proceed directly to subsequent steps.

Given that our user flow consists of six stages, we aim for a drop-off rate of less than 5% per stage as a starting goal. Even with a 5% drop-off rate at each stage, the overall success rate would be:

0.95^6 ≈ 73.5%

This calculation assumes a linear progression through all stages. A drop-off rate higher than 5% per stage would significantly impact the overall success rate.
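For comparison, a 10% drop-off at each of the six stages would cut the overall success rate to 0.9^6 ≈ 53%, illustrating how quickly per-stage losses compound.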

Targeting a Stage for Optimization

Given the high drop-off rate from Stage 1 (Setting Trigger Conditions) and the fact that users who drop off at this initial stage are significantly less likely to return, we have prioritized improving this stage. Additionally, Stage 1 is mandatory, meaning it acts as the gateway for all subsequent stages. Any issues at this point have a cascading effect, limiting user engagement with the rest of the workflow.

We now focus on the usage data for the first stage of the Task Automation feature. The following key observations can be made:

    1. Low Usage of Certain Options: New Task Creation, Tag-Based, and Category-Based options show extremely low usage, along with Task Status Changes to some extent. This raises questions about the usefulness of these options and whether they result in a more complicated user experience.
    2. High Usage and Success for Time-Based Option: The Time-Based option has high usage and a very low drop-off rate, indicating both the utility and the intuitive design of this feature.
    3. Priority-Based and Dependency-Based Options: Both options exhibit high usage. However, the Priority-Based option also shows a high drop-off rate, suggesting a potential issue in its user flow or design complexity.
    4. Task Status Changes: Although usage is relatively low, its drop-off rate is disproportionately high. Given that its user flow is similar to the Priority-Based option, this points to a likely shared problem affecting both options.

Investigating the Root Causes

While we have identified where the problem lies, the underlying reasons remain unclear. To uncover these root causes, we will employ several techniques:

    1. Check Error Logs: Analyze system logs to identify recurring errors or validation failures, which may highlight technical issues or UX-related challenges.
    2. Analyze Session Recordings: Review recorded user sessions to observe user interactions, detect usability issues, and identify points of friction.
    3. Click Path Analysis: Map user navigation patterns to uncover bottlenecks, abandoned tasks, or inefficiencies in workflow progression.
    4. Heatmaps: Use visual representations of user interactions—such as clicks, scrolls, and hovers—to pinpoint areas of high engagement or neglect.
    5. User Surveys: Collect structured feedback from a broader user base to identify common pain points, preferences, and overall perceptions of the feature.
    6. User Interviews: Conduct in-depth, one-on-one discussions to gain deeper insights into user experiences, motivations, and specific challenges.
    7. Usability Testing: Facilitate controlled, real-time tests where users interact with the feature to observe their behaviors, frustrations, and areas of confusion.

We will now delve into the actionable insights derived from these techniques. While these methods provided a wealth of information about user pain points and preferences, we will focus on the insights that directly informed feature improvements implemented in the first iteration.

Error Logs

While analyzing error logs for the Task Automation feature, we identified a recurring issue in the Task Status Changes and Priority workflows. Both workflows require users to select initial and final statuses or priorities as mandatory fields. However, error logs revealed that many users attempt to proceed without completing these fields, suggesting that they may not always find both fields relevant.

The error logs captured multiple validation errors, such as:

    • ValidationError: Missing initial status for Task Status Change rule.
    • ValidationError: Missing final priority for Priority-based rule.

These errors highlight that users frequently face challenges completing these two workflows due to the rigid requirement for both fields.
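One simple way to quantify how often each validation error occurs is to aggregate log entries by error message; the sketch below assumes a hypothetical plain-text log format, not the product's actual schema:

    from collections import Counter

    # Hypothetical raw log lines; the format is illustrative only.
    log_lines = [
        "2024-03-02 10:14:07 ValidationError: Missing initial status for Task Status Change rule.",
        "2024-03-02 10:15:31 ValidationError: Missing final priority for Priority-based rule.",
        "2024-03-02 11:02:45 ValidationError: Missing initial status for Task Status Change rule.",
    ]

    validation_errors = Counter(
        line.split("ValidationError: ", 1)[1]
        for line in log_lines
        if "ValidationError: " in line
    )

    for message, count in validation_errors.most_common():
        print(f"{count}x {message}")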

Session Recordings

To further analyze usability issues, we reviewed session recordings of users interacting with the Task Automation feature. These recordings revealed critical points of friction and areas for improvement:

  1. Mandatory Selection of Initial/Final Status or Priority
      • Users often hesitated or abandoned the workflow when required to select both initial and final status/priority fields. The recordings showed repeated attempts to proceed without completing these fields, highlighting a misalignment between system requirements and user expectations.
  2. Confusion Between “Priority Set for First Time” and “Priority Change”
      • Users frequently toggled between these two options or abandoned the workflow after interacting with them. The recordings suggested that the distinction between these options was unclear, as users struggled to match their real-world needs with the feature’s structure.
  3. Lack of Flexibility for Multiple Conditions
      • Recordings showed users repeatedly creating duplicate rules to simulate complex logic, such as combining multiple conditions for initial or final status. This behavior pointed to the need for more flexible configuration options, such as allowing multiple conditions within a single rule.

Click Path Analysis

Click path analysis confirmed that users frequently toggled between the "Priority Set for First Time" and "Priority Change" options. The analysis revealed that users struggled to understand the distinction, leading to repeated navigation and incomplete workflows.

Heatmaps

Dense click activity was observed around “Initial Status” and “Final Priority,” indicating user frustration with the rigid requirement to complete both fields.

User Surveys

In addition to the satisfaction surveys discussed earlier, we also conducted in-app surveys to gather broader insights into user challenges and perceptions. For example, users configuring task rules were prompted with short survey questions such as:

    • “Was it easy to set up your task automation rules? If not, what did you find difficult?”
    • “What specific improvements would make the Task Automation feature more useful for you?”
    • “If you encountered any errors while setting up a rule, did the error message clearly explain how to fix the issue?”

Findings:

    1. Mandatory Fields: A significant number of users expressed frustration with being required to select both Initial and Final Status or Priority. Many reported that these fields were not always relevant to their use case.
    2. Feature Clarity: Users frequently mentioned confusion between the options “Priority Set for First Time” and “Priority Change.” Responses indicated that some users would find the feature useful irrespective of whether there was a priority change or priority set for first time.
    3. Flexibility Issues: Survey feedback highlighted a demand for more advanced rule configurations, such as the ability to combine multiple conditions (e.g., Initial Status = A OR B) within a single rule.

User Interviews

To gain deeper insights into user challenges with the Task Automation feature, we conducted in-depth user interviews. These one-on-one discussions allowed us to explore specific pain points and understand user motivations. The findings corroborated what we had already observed through the methods described above.

  1. Mandatory Fields:
      • Many users expressed frustration with being required to select both Initial and Final Status or Priority. They indicated that for some workflows, only one field was relevant, making the requirement unnecessarily rigid.
  2. Unclear Feature Distinctions:
      • Users were often confused about the difference between “Priority Set for First Time” and “Priority Change.” Some stated that they expected a single option that encompassed both scenarios, as their use cases did not differentiate between these triggers.
  3. Demand for Flexible Conditions:
      • Users frequently mentioned that creating multiple rules for similar conditions (e.g., Initial Status = A OR B) was cumbersome. They expressed a strong need for a more streamlined way to configure complex logic within a single rule.

Usability Testing

We conducted usability tests to observe how users interacted with the Task Automation feature in real-time. Participants were given specific tasks, such as configuring automation rules with triggers, conditions, and actions, to identify usability challenges.

  1. Mandatory Fields:
      • Test participants often attempted to proceed without completing both Initial and Final Status or Priority fields. This led to frequent errors, revealing that the mandatory requirement was misaligned with user expectations.
  2. Unclear Workflow for “Priority Set for First Time” and “Priority Change”:
      • During the tests, participants hesitated or sought clarification when deciding between these two options. This highlighted the need for clearer labeling or tooltips to explain their distinction.
  3. Redundant Actions:
      • Participants who needed complex logic conditions struggled to configure them efficiently. They spent time duplicating rules, which they found frustrating and time-consuming.

Feature Enhancements and Workflow Improvements

Based on the insights gained from the methods discussed above, we identified key pain points in the Task Automation feature and implemented the following changes to improve usability and user satisfaction:

1. Simplifying the “Task Status Changes” and “Priority” Workflows

Both “Initial Status/Priority” and “Final Status/Priority” fields were made optional, significantly streamlining the workflow:

  • Both Initial and Final Fields Set:
    • This retains the existing workflow:
      • Specific Status/Priority Change → Select Initial Status/Priority (Mandatory) → Select Final Status/Priority (Mandatory)
    • This option remains unchanged for users requiring specific transitions.
  • Only One Field Set:
    • A new workflow was introduced:
      • Tasks are triggered when either the initial or final status/priority matches the user-defined condition.
    • This adds flexibility for users who only need to track a single condition.
  • Neither Field Set:
    • This aligns with the existing “Any Status Change” or “Any Priority Change” workflows:
      • Task Status Changes → Any Status Changes

These changes also eliminated the need for the workflow: Priority → Priority Set for the First Time.
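To illustrate the revised behaviour, here is a minimal sketch of how trigger matching could handle the optional initial/final values; the function and field names are hypothetical, not the product's actual implementation:

    from typing import Optional

    def rule_matches(initial_rule: Optional[str], final_rule: Optional[str],
                     old_value: str, new_value: str) -> bool:
        """Evaluate a status/priority-change rule with optional initial and final conditions."""
        if initial_rule is not None and final_rule is not None:
            # Both fields set: the specific transition must match (existing workflow).
            return old_value == initial_rule and new_value == final_rule
        if initial_rule is not None:
            # Only the initial field set: trigger on any change away from that value.
            return old_value == initial_rule
        if final_rule is not None:
            # Only the final field set: trigger on any change to that value.
            return new_value == final_rule
        # Neither field set: behaves like "Any Status Change" / "Any Priority Change".
        return True

    # Example: a rule with only a final status of "Done" fires on any transition into "Done".
    print(rule_matches(None, "Done", "In Progress", "Done"))   # True
    print(rule_matches(None, "Done", "To Do", "In Progress"))  # False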