How Do You Evaluate Your Digital Analytics Team's Success?

There’s certainly a school of thought that ties digital analytics team success to the bottom line…did you make money, did you save money?

And then there is the customer satisfaction perspective…do your stakeholders think that you provided good service?

Or you can make it personal, and create performance metrics to be measured against during annual reviews.

Is there a right answer?  

Nearly 75 percent of this year's EY Digital Analytics Benchmarking Survey respondents indicated that revenue generated from the digital channel was the primary method of evaluation.

These numbers may surprise you based on what you experience daily, as well as what you see and hear from your industry peers. We know it can be challenging to keep tabs on all of the executives, managers and subject matter experts using your reports, analyses and recommendations, and to determine whether their efforts resulted in a net revenue gain.

Do slower digital adopters hold digital analytics to a higher standard?

In the first look at the survey, we divided groups into Engagers and Explorers. Engagers received 50 percent or more of their overall organization revenue from the digital channel; Explorers received less than 50 percent of their overall revenue from digital. 

You would think that Engagers would be more apt to appraise digital analytics' effectiveness against revenue, but this isn't the case: Explorers indicated 78 percent usage of this metric versus 72 percent for Engagers.

As Explorers are newer to the digital transformation model, perhaps it makes sense that digital analytics is held to a more accountable standard: being newer to digital and taking a more conservative approach, Explorers may want to see the risk/reward in tangible results.

Do different verticals have different expectations about digital analytics performance?

There are some differences between industry verticals in their use of revenue metrics as a primary metric for evaluating digital analytics program performance. However, it is in the secondary evaluation metrics that we see a real disparity.

Consumer Products and Retail, Media and Entertainment, Technology, and Travel and Telecom, all sectors that we associate with leading the way in digital product and services development, show a 77 percent adoption of revenue generation as the leading evaluation metric. We'll call this group the Experienced segment.

Automotive and Transportation, Financial Services, Life Sciences, Healthcare, Non-Profit, Higher Ed and Government, sectors that we generally consider to be followers in digital transformation, use revenue generation as the primary evaluation metric 68 percent of the time. Not a huge difference. We'll call this group the Learning segment.

Interestingly, relative to the other evaluation metrics, revenue appears to receive a higher weighting in the Learning group than in the Experienced group.

For Experienced companies, stakeholder feedback follows revenue performance at 68 percent, with projects completed at 52 percent and cost savings impact at 23 percent. In the Learning group, stakeholder feedback as a secondary metric lags by 25 percentage points at 43 percent, and project completion trails by 13 percentage points at 39 percent. Cost savings calculation is roughly even at 25 percent.

Projects completed appears to be a somewhat weak evaluation metric, as it measures quantity rather than quality. It is more of a legacy metric that analytics programs use to prove value in environments focused on implementation and the production of dashboards, reports and ad hoc analysis requests.

Stakeholder satisfaction, on the other hand, is an important metric: it speaks directly to the challenge of analytics adoption, an opportunity identified by 86 percent of survey respondents.

Is the focus on digital analytics contribution to revenue too broad or too narrow?

What was the differentiating factor between the Experienced and Learning groups in the metrics that get the most recognition? The presence of a digital governance council and/or processes to operationalize the digital channel and create a more formalized approach to evaluating digital transformation programs.

Roughly 43 percent of the Experienced companies indicated that there is a digital governance council or body that operationalizes digital strategy. Only 19 percent of the Learning organizations have this in place. 

In her recently published primer on digital governance, “Managing Chaos,” Lisa Welchman makes the case for creating metrics to track progress of all digital initiatives. Evaluating a digital analytics program through multiple metrics rather than a single metric suggests an effort to understand performance against target and performance improvement.

Both are newer concepts in the field of digital analytics as evidenced by overall low positive scoring in the survey results. Many survey respondents skipped this question altogether, suggesting that they had no answer to the performance question. 

Says one manager, “Digital analytics … is considered table stakes, so it just supports existing digital projects. At the project level we use analytics to work out incremental revenues, but the discipline itself doesn’t have any targets.”

While it is a good thing to tie digital analytics to true contribution to digital revenue, the concept bears more scrutiny within your organization, as it can yield vastly different results depending on how it is measured.

Title image by billsoPHOTO (Creative Commons Attribution-Share Alike 2.0 Generic License).