Optimizing efficiency in AWS IoT FleetWise console through usability evaluation and experience design
The scenario// Current Landscape
AWS IoT FleetWise is a managed service for seamlessly collecting, transforming, and transmitting vehicle data to the cloud in near real-time. It supports various protocols and formats, converting raw data into readable values while standardizing formats for cloud analysis. Users can define data collection campaigns to control what data is gathered and when it’s sent to the cloud.
Following the launch in September 2022, the FleetWise team actively identified usability gaps in the console experience. These gaps surfaced through acknowledged trade-offs, invaluable customer feedback, and direct observation.
Recognizing how critical it was to understand users' current perceptions of and interactions with the console, the primary objective was to learn how users evaluate the existing console experience. Central to our investigation was one pivotal question:
Who are our customers?
The challenge// Rediscovering the magic in 80 days
Working backward from my 12-week internship, I spent the first three weeks creating a project plan. From weeks three through eight, my focus shifted to executing the research strategies and immersing myself in the wealth of available data. The final four weeks were dedicated to synthesizing the collected insights, formulating comprehensive design recommendations, and writing the research report.
Despite a few adjustments and overlaps along the way, I delivered a result that closely matched what I had envisioned from the beginning.
My fixed end date, combined with the need to navigate ambiguity, created an intense environment that demanded significant coordination, time, and openness to constructive challenge. I had to understand complex workflows, get comfortable with not knowing everything, make assumptions, and narrow the problem space given the limited time. I also had to conduct research within constraints, since we lacked direct access to customers.
The approach// Good.Fast.Frugal
To craft an efficient and economical process, I selected methods deliberately. It began with a deep dive into the service alongside experts and stakeholders. I then investigated the information architecture, conducted heuristic analyses, examined user journeys, and monitored console analytics data. During testing, I applied PURE (Practical Usability Rating by Experts) analysis, collected user feedback, and assessed the FleetWise console experience for ease of use, effectiveness, efficiency, and satisfaction. In the validation phase, I synthesized discoveries, formulated design enhancements, and tested assumptions. Together, these methods let us evaluate the console's usability from different perspectives and gather both objective and subjective insights into the user experience.
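In PURE analysis, experts break a task into steps and rate each step from 1 (easy) to 3 (hard), then sum the ratings into a per-task friction score. The sketch below illustrates the mechanics only; the task names and ratings are hypothetical, not actual study data.

```python
def pure_score(step_ratings):
    """Sum per-step expert ratings: 1 = easy, 2 = some friction, 3 = hard."""
    if any(r not in (1, 2, 3) for r in step_ratings):
        raise ValueError("PURE step ratings must be 1, 2, or 3")
    return sum(step_ratings)

# Illustrative tasks with one expert rating per workflow step.
tasks = {
    "Create vehicle model": [1, 2, 3, 2],
    "Create campaign": [1, 1, 2],
}

for task, ratings in tasks.items():
    print(f"{task}: PURE score {pure_score(ratings)}")
```

Lower totals indicate a smoother task; step-level ratings also show exactly where friction concentrates.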
The discovery// Picking up the pieces
The analysis indicates that AWS IoT FleetWise falls short of customer requirements: the service lacks flexibility in the core tasks needed to onboard and use FleetWise toward customer goals. We recommend prioritizing the changes below to meet customer needs and support the health of the business.
Users face limitations in modifying workflows once created, leading to a lack of flexibility and potential workflow disruptions
Users currently cannot edit Vehicle Models, Decoder Manifests, Vehicles, or Campaigns after creation, because the console provides no way to modify a workflow once it exists. As a workaround, users duplicate workflows and modify the copies for new processes. The inability to edit existing workflows directly restricts user control and hampers efficiency, forcing redundant duplication instead of straightforward modification.
(…)So the objective would be to get these all updated and result in a new campaign that would go out to (…) the vehicles and (…) update them. (…) the fact that I cannot take, vehicle models, (…) [Decoder Manifest] off of, active to just add new signals to them. (…) But yeah, I would say that taking it back to draft in modifying it would be already a big improvement.– Customer feedback
Users’ inability to seamlessly manage signals, and the lack of visibility within the Signal Catalog, result in a fragmented workflow and reduced efficiency.
A signal is a crucial entity associated with other workflows on the console. Presently, after the initial import as a new user, the console does not allow users to import additional signals directly from the Signal Catalog. This significantly limits users’ ability to add signals individually and map them to other workflows. Additionally, the Signal Catalog offers no visibility into the associated Vehicle Model, Decoder Manifest, and Campaigns, which prevents users from obtaining a comprehensive overview.
So yeah, providing an opportunity, for example, to either input the delta or input the new (…) DBC file and add the new signals or map the new signals in (…) the [console] interface. That (…)would be good too.– Customer feedback
Inadequate context within the descriptive text leads to confusion as users struggle to understand system feedback and make informed decisions effectively.
Although the console provides proactive text in many places, it does not consistently deliver accurate, contextually relevant text aligned with the user’s task. The absence of precise contextual information leads to errors and underscores the need for better guidance and clarity. For instance, Step 3 of the Vehicle Model creation process gives no indication that signals must be selected after uploading a file. The console lets users proceed without making a selection, producing an ambiguous error message at the end of the workflow. The message does not explain how to rectify the issue, creating uncertainty and possible abandonment of the workflow.
“There wasn’t any information about what would happen after you upload the file or it wasn’t easy to tell that you would have to select the specific signals, or which ones to select, or by default all will be selected. I was unclear whether signal information was editable (…) I though[t] it was a few steps within one step which was [a] little unclear”– P1, User Experience Designer
To effectively serve customer and business needs, FleetWise must improve its customer insight mechanisms for internal evaluation and gain deeper insight into console analytics.
Insufficient data on user behavior, a lack of customer feedback and satisfaction data, and gaps in analysis of overall console health restrict a comprehensive understanding of the customer experience and console performance.
To better understand user behavior, we need to collect data for granular analysis and improve telemetry monitoring techniques.
- Analyzing key customer flows through funnels yields deeper insight into workflow engagement and completion, along with operational metrics. However, the FleetWise console lacks effective tagging, which hinders collection of the data needed to measure UX performance and leaves no detailed view of workflow abandonment or user preferences within steps.
- The Customer Satisfaction (CSAT) score measures how satisfied customers are with the service on a scale of 1 to 5. There are currently too few CSAT responses to reliably assess how satisfied customers are with the FleetWise console experience.
- Free-form text feedback — general feedback, feature requests, and service-related issues — is also scarce. To obtain actionable feedback, we must gather it at multiple points throughout the console user journey.
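The two analyses above are straightforward to compute once the console is properly tagged: funnel completion is the ratio of users reaching each step relative to the previous one, and CSAT is the mean of 1–5 ratings. A minimal sketch, using hypothetical step names and counts rather than FleetWise data:

```python
def funnel_completion(step_counts):
    """Return (step, completion rate vs. previous step) for each step after the first."""
    return [
        (name, curr / prev)
        for (_, prev), (name, curr) in zip(step_counts, step_counts[1:])
    ]

def csat(responses):
    """Mean of 1-5 satisfaction ratings."""
    return sum(responses) / len(responses)

# Hypothetical campaign-creation funnel: users reaching each step.
campaign_funnel = [
    ("Start campaign", 1000),
    ("Select signals", 640),
    ("Review", 420),
    ("Create", 390),
]

for step, rate in funnel_completion(campaign_funnel):
    print(f"{step}: {rate:.0%} of previous step")

print("CSAT:", csat([5, 4, 3, 5, 4]))
```

The steep drop at a given step (here, an illustrative 64% reaching signal selection) is exactly the abandonment signal that effective tagging would surface.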
The absence of consistent monitoring, along with the lack of systematic usability checks, represents a critical gap in generating usability insights.
Establishing a monthly collaboration among the PM, UX, and Engineering teams is essential for consistently monitoring impact, efficiency, and satisfaction, and for identifying and prioritizing unmet needs and issues every 12 weeks. Additionally, implementing a rolling usability program with end users is recommended to gain deeper insight into customer issues and requirements for future enhancements.
Note: Research artifacts, additional insights, recommendations and designs are confidential and cannot be shared under the terms of the non-disclosure agreement.
The impact// A Good Start… However, It’s Still Day One
For users, our research identified pain points, streamlined workflows for efficiency, and provided contextually relevant information. Strategically, we harnessed cross-functional collaboration with UX, advocated usability program initiatives, and instituted regular monthly analytics reviews within our team. From a business perspective, I firmly expect this research to enhance user engagement and drive up conversion rates as we deliver a seamless experience, yielding a lasting return on investment through elevated customer satisfaction.
The realization// Reflecting back as an Amazonian
Throughout my term, I took proactive ownership of projects from the beginning and pushed myself outside my area of responsibility. I identified areas for improvement, simplified designs for enhanced customer experiences, and introduced process enhancements. My dedication to customer satisfaction drove me to thoroughly understand customer feedback and usability issues, collaborating across cross-functional teams to deliver impactful results. My commitment to Amazon’s core principles of ownership, customer obsession, and innovation consistently guided my actions and contributions.