Tracking interactions to improve the user experience

Measuring the performance of various aspects of a solution is how we determine the quality of the experience users have when interacting with it. Those aspects range from load times and acquisition source viability to building affinity through increased loyalty, and there are plenty of articles out there identifying the most valuable data points and how best to segment and interpret what I refer to as big-picture indicators. Where I find a significant gap in measuring experiences, however, is in the attention given to the micro interactions users have with our solutions.

The smallest units of interaction between the user and the solution often have the largest impact on the experience, sometimes even leading to abandonment. Think how an entire journey may be degraded by the user's frustration with one simple (yet very annoying) password character requirements dialog, or by a misalignment in expectations when an image looks like it should be clickable (to zoom) but isn't.

Typical performance reports tend to focus on larger-impact indicators such as overall conversion rate, but the individual interactions that compose a conversion are often overlooked, leaving a lot of room for speculation about causes. What follows is an overview of considerations for tracking, measuring and optimizing different types of individual interactions, as well as how to quantify their impact on the user experience and therefore on the solution's performance.

Observe the users

For solution designers and researchers alike, it is very humbling to look over a user's shoulder as they interact with the solution and have their assumptions about it either validated or crushed. Either way, those observations are valuable because they provide feedback on how we may further optimize the solution.

Methods for obtaining user feedback include traditional research studies (controlled environment, contextual interviewing etc.), remote research (usertesting.com, loop11 etc.) and session recording (Tealeaf, SessionCam etc.).
These methods are only viable for entire use cases, as it would be hard to justify assembling a study, and then reviewing the data, just to determine whether users are executing one particular interaction as expected. Ironically, though, the most valuable findings often come in the form of one particular element of the solution not being aligned with user expectations.

Another method (the one we will review here), sometimes perceived as more ambiguous (even though it isn't at all), is to rely entirely on data points to determine the experience of a given interaction. This often presents a barrier, as those working on the experience architecture prefer seeing the problem visually rather than having it interpreted as a set of numbers. Fear not, we will get to that too.

Tracking with events

For the purposes of illustrating the interactions we will be using Google Analytics (Universal) events, but you may adapt the methodology to whatever reporting system you have integrated.

Before starting, the value of any given data point must be determined. How useful the information about a particular interaction is must be appraised against the effort of implementing and reviewing the data, as well as the impact on the solution's performance, since each event fired takes up processing power. With that in mind, the following scenarios, where applicable, may carry different significance, which is up to you to determine.

Content consumption

Knowing which content users interact with, or are merely aware of, on their journey to a conversion can significantly help us prioritize it. We can safely assume that users see the first few paragraphs of any page, and perhaps the most prominent call to action.

But what if we wanted to determine the difference in conversions between users who finished reading the entire page and those who didn't? This can be achieved simply by firing an event when a particular element, such as the bottom of the page (the footer element, perhaps), comes into view:

$('footer').isInViewport(function() {
  ga('send', {
    'hitType': 'event',
    'eventCategory': 'Viewed section',
    'eventAction': 'Section viewed',
    'eventLabel': 'Scrolled to bottom',
  });
});

using isInViewport.js.
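If you would rather not depend on a plugin, the same signal can be derived from scroll position. Below is a minimal sketch: `scrollDepth` and `maybeTrackBottom` are hypothetical helpers (not part of isInViewport.js or analytics.js) that compute how far down the page the user has scrolled and fire the event once, the first time the bottom is reached.

```javascript
// Sketch: compute scroll depth as a percentage of the scrollable distance.
// Hypothetical helper, assuming the usual scrollTop / viewport / document
// height measurements available in the browser.
function scrollDepth(scrollTop, viewportHeight, documentHeight) {
  var scrollable = documentHeight - viewportHeight;
  // Pages shorter than the viewport are fully visible from the start.
  if (scrollable <= 0) return 100;
  return Math.min(100, Math.round((scrollTop / scrollable) * 100));
}

// Fire the GA event only once per page view, even if the user
// scrolls to the bottom repeatedly.
var bottomSeen = false;
function maybeTrackBottom(depth, send) {
  if (depth >= 100 && !bottomSeen) {
    bottomSeen = true;
    send({
      hitType: 'event',
      eventCategory: 'Viewed section',
      eventAction: 'Section viewed',
      eventLabel: 'Scrolled to bottom'
    });
  }
}
```

In the browser you would wire this to a (throttled) scroll handler, e.g. `$(window).on('scroll', function () { maybeTrackBottom(scrollDepth($(window).scrollTop(), $(window).height(), $(document).height()), function (hit) { ga('send', hit); }); });`.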

Similarly, if we wanted to know if the user saw content which is not shown by default (accordions, sliders, tabs etc.) we could:

$('.tab').click(function() {
  ga('send', {
    'hitType': 'event',
    'eventCategory': 'Tab',
    'eventAction': 'Opened tab',
    'eventLabel': $(this).text(),
  });
});

provided .tab is the bottommost element containing the title; otherwise use $('.tab a') or similar.
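One caveat with hidden-content events: a user toggling the same tab five times would inflate the count fivefold. A small deduplicating wrapper keeps the numbers honest. `makeTrackOnce` below is a hypothetical helper; `send` stands in for `ga('send', …)`.

```javascript
// Sketch: fire a given category/action/label combination at most once per
// page view, so repeated toggling of the same tab is counted only once.
function makeTrackOnce(send) {
  var seen = {};
  return function (category, action, label) {
    var key = category + '|' + action + '|' + label;
    if (seen[key]) return false; // already reported this page view
    seen[key] = true;
    send({
      hitType: 'event',
      eventCategory: category,
      eventAction: action,
      eventLabel: label
    });
    return true;
  };
}
```

Usage would look like: `var trackOnce = makeTrackOnce(function (hit) { ga('send', hit); }); $('.tab').click(function () { trackOnce('Tab', 'Opened tab', $(this).text()); });`.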

Non-interactions

In some circumstances users expect an interaction where there isn't one, for example when clicking on non-clickable elements. Identifying this misalignment of expectations can often lead to significantly improved experiences and, therefore, higher returns.

Let’s say you are running an ecommerce site with pictures of the products being sold (or a portfolio with screenshots of projects), but you haven’t integrated click-to-zoom (or hover-to-zoom) functionality on those pictures. By tracking users who click on the pictures you can justify the need to build that functionality, as the gap may be directly tied to users leaving the site, as we will shortly review.

One option is to track absolutely every click, and there are solutions out there which enable you to do so:

Crazy Egg

However, limitations in performance and in the ability to easily integrate, review and segment the data present themselves immediately.

The approach I recommend is to form intelligent assumptions about what users may expect, for example to click on headings, prices, images etc. to get additional information. This may be easily implemented by categorizing non-interactions as such:

$('.non-event').click(function() {
  ga('send', {
    'hitType': 'event',
    'eventCategory': 'Non Event',
    'eventAction': 'Clicked on',
    'eventLabel': $(this).attr('id'),
  });
});

provided an id is specified for each element.
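If ids cannot be guaranteed on every element, a fallback chain keeps the event labels meaningful. `nonEventLabel` below is a hypothetical helper that prefers an id, then a `data-track` attribute, then trimmed text content; it works on any element-like object exposing `id`, `dataset` and `textContent`.

```javascript
// Sketch: pick a stable, readable event label for a clicked
// non-interactive element, falling back gracefully when no id exists.
function nonEventLabel(el) {
  if (el.id) return el.id;
  if (el.dataset && el.dataset.track) return el.dataset.track;
  var text = (el.textContent || '').trim();
  // Cap long text so labels stay legible in reports.
  return text.length > 40 ? text.slice(0, 40) : text;
}
```

In the click handler above, `$(this).attr('id')` would then become `nonEventLabel(this)`.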

Error tracking

User input validation, timeouts, 404s etc. are where the most friction is created in the experience.

Solutions usually (hopefully) have integrated methods for handling errors, so appending events to those can be an effortless way to integrate tracking. For example, if we wanted to track required fields users did not fill out:

$('.required').on('blur', function() {
  if ($(this).val() === '') {
    $(this).addClass('required-field');
    $(this).attr('placeholder', 'This is a required field');
    ga('send', {
      'hitType': 'event',
      'eventCategory': 'Missed required',
      'eventAction': 'Skipped field',
      'eventLabel': $(this).attr('name'),
    });
  }
});
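The same check can also run once at submit time, reporting every missed field in one pass rather than per blur. `missedRequired` below is a hypothetical pure helper over `{name, value}` pairs, which keeps the validation logic testable apart from the DOM.

```javascript
// Sketch: collect the names of required fields left empty (whitespace-only
// counts as empty), so one event per field can be sent on form submit.
function missedRequired(fields) {
  return fields
    .filter(function (f) { return f.value.trim() === ''; })
    .map(function (f) { return f.name; });
}
```

With jQuery you would build the input as `$('.required').map(function () { return { name: $(this).attr('name'), value: $(this).val() }; }).get()` and fire one 'Missed required' event per returned name.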

Or track the missing pages users ended up on for one reason or another:

if ($('#404').length > 0) {
  ga('send', {
    'hitType': 'event',
    'eventCategory': 'Page not found',
    'eventAction': 'Landed on 404',
    'eventLabel': window.location.pathname,
  });
}

provided the 404 id is unique to your 404 template.

Navigations

Even though Analytics provides comprehensive mechanisms for page-to-page navigation through its various flow reports, a shifting paradigm toward both one-page solutions (think parallax and dialog websites) and in-page navigation (scroll to section) is becoming more prominent.

The tracking code is similar to the examples above, but given the variance in implementations I’ll just list some variables for your consideration:

  • $(this).attr('href'); gets the destination
  • $(this).attr('data-whatever'); allows for tracking custom or segmentation values
  • window.location.pathname provides the current location
  • … you get the idea
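Those variables can feed a small classifier that labels each clicked link as in-page, internal or outbound, which makes the resulting events easy to segment. `classifyLink` is a hypothetical helper; in the browser, `currentHost` would come from `window.location.hostname`.

```javascript
// Sketch: classify a link's destination so navigation events can be
// segmented into in-page (scroll-to-section), internal and outbound.
function classifyLink(href, currentHost) {
  // Hash-only hrefs are in-page (scroll-to-section) navigation.
  if (!href || href.charAt(0) === '#') return 'in-page';
  var match = href.match(/^https?:\/\/([^\/]+)/);
  // Relative URLs stay on the current site.
  if (!match) return 'internal';
  return match[1] === currentHost ? 'internal' : 'outbound';
}
```

A delegated handler such as `$(document).on('click', 'a', function () { var kind = classifyLink($(this).attr('href'), window.location.hostname); /* fire an event per kind */ });` would then cover all three navigation types at once.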

That is enough code for now; let’s examine how to apply the extracted data.

Integrate the data

Now that the events are firing and user interactions are being tracked, data is collected and ready for us to integrate across our performance and experience reports.

It is important to note that events by themselves provide very little indication of how users are interacting with the solution. If we take the content engagement example from above and plot:

[Figure: events plotted by volume of visits]

…we may (wrongly) assume a correlation between the number of events and how users are engaging with content. Examining the graph above, one may rush to say that the “Our positions” accordion is what people interact with the most, and hence that we should prioritize it over everything else.

However, as soon as we start comparing against independent engagement metrics such as pages per visit and visit duration, the chart almost flips around entirely:

[Figure: events plotted against pages/visit, colored by visit duration]

This leads to the (correct) conclusion that the “Energy/Utility” accordion is engaged with by more contextually relevant audiences, providing justification for prioritization where appropriate and yielding more relevant experiences. This becomes especially true if you are running corresponding campaigns or have enough data to segment the audiences with statistical significance.

Find the cause

Continuing with our goal of providing better experiences through increasing relevancy, we might find ourselves looking at exit pages (where users leave the site), so that we may identify why they are doing so, and how we can make them stay.

Say the biggest drop-off surprisingly occurs on the careers page.

[Figure: drop-off on the careers page]

We might conclude that users are not finding what they expected to find and are therefore abandoning the site. We would once again be dead wrong. Just by switching over to the events for the same page…

[Figure: external link click events on the careers page]

… we notice that the External Link event category has seen a significant uptick, and that is a good thing, since users are following a link titled “See our latest job openings” – a profile on a job posting site. Luckily we have set up tracking for all external click interactions as events, so we may use the data to offset outbound clicks against exits as needed. For details on how to do this I recommend reviewing Outbound Link Tracking With Google Universal Analytics.
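Offsetting outbound clicks against exits can be reduced to simple arithmetic. `adjustedExitRate` below is a hypothetical helper, and the numbers in the usage note are illustrative, not taken from the report above.

```javascript
// Sketch: discount exits that were actually outbound-link clicks (a good
// outcome, like following "See our latest job openings") from a page's
// raw exit rate, leaving only genuine abandonments.
function adjustedExitRate(exits, outboundClickExits, pageviews) {
  if (pageviews === 0) return 0;
  var genuineExits = Math.max(0, exits - outboundClickExits);
  // Return a percentage rounded to one decimal place.
  return +(genuineExits / pageviews * 100).toFixed(1);
}
```

For example, 200 exits on 500 pageviews reads as a 40% exit rate, but if 150 of those exits were tracked outbound clicks, only 10% of views ended in a genuine abandonment.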

Inverting the indicators

We may also invert the approach and use interactions to segment the visits.

[Figure: visits segmented by tracked interactions]

By comparing established interaction segments when reviewing solution performance, we can determine indicators and triggers for targeting, providing still further relevance.

The value of tracking user interactions with events may also grow exponentially in the following circumstances:

  • Using them with goal appraisals, e.g. conversions with and without interactions
  • Pitting the Event Flow against the Behavior Flow for user expectation alignment

The value of having the user interactions available for integrating with other performance indicators is obvious and the potential for what you can do with the data is only limited by your imagination.

Optimize the solution

Data serves little purpose unless we make it actionable. We do so by translating the gathered information from tracking the interactions into solution experience hypotheses.

A common pitfall when forming assumptions to be tested is that they often encompass too large of a change to understand specifically why a particular variation resulted in gain or loss. Luckily we are micro focusing on very particular interactions and our tests should reflect that in order to provide proportionally deliberate results.

[Figure: mapping layout interactions]

Which is not to say that tracked interactions have less value in larger endeavours. For instance, you may use the same technique for tracking outbound links to determine the on-page performance of content sections. Mapping comparative interactions that resulted in conversions against the layout, to determine priority and value in the hierarchy of elements, is a great way to justify a need for optimization.

In the provided illustration, we have determined that the topmost prioritized content is not performing as well as content in the middle of the page (26 < 88), and that the marquee element is significantly underperforming (15). That is all the justification needed to form a hypothesis stating why reordering said elements will match user intent and provide higher returns.
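Ranking the sections by the conversions attributed to them makes the reordering argument concrete. `rankSections` below is a hypothetical helper; the section names and the 26 / 88 / 15 counts mirror the illustration discussed above.

```javascript
// Sketch: order page sections by conversions attributed to interactions
// with each, to justify reordering the layout hierarchy.
function rankSections(sections) {
  return sections
    .slice() // don't mutate the caller's array
    .sort(function (a, b) { return b.conversions - a.conversions; })
    .map(function (s) { return s.name; });
}
```

With the illustration's data, `rankSections([{name: 'top section', conversions: 26}, {name: 'middle section', conversions: 88}, {name: 'marquee', conversions: 15}])` puts the middle section first and the marquee last, which is exactly the reordering hypothesis.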

Prioritize accordingly

The capacity of reporting mechanisms to forecast the impact on solution performance is crucial in determining how individual tasks may be prioritized.

  • If your solution’s engagement model is gradual and users drop out at each step, targeting issues higher up in the conversion funnel will yield a larger impact on the goals.
  • Low-hanging fruit and crowd-pleasers are a great opportunity to validate initial assumptions and gain buy-in for further optimization. Just because it’s easy doesn’t mean it’s not valuable; the opposite is usually true.
  • Know thy audience. If influencers, power users and other referrers are among your archetypes, prioritizing their experiences has impact beyond the noticeable or even the traceable, and may yield unimaginable returns.

Note that data does not replace the intrinsic intuitions we have about experiences; it merely challenges them. Performance reports are not guidelines to follow blindly, but a knowledge resource we leverage to make the most informed decisions about how to further optimize solution performance and the user experience.