Organizations, People, Systems

Key insights:

  • Benchmarking employee engagement, survey data, and spans of control is not strategic
  • You have to embed those data in the larger context of work design and what drives organizational performance to understand what’s actionable and where you need more information to drive change in the right direction

Leaders love to benchmark, which is how they evaluate operational performance. So benchmark data play a central role in a lot of analytics carried out both in the business and by HR.

Benchmark data on quality, margins, market share, customer satisfaction, and other areas are essential for measuring strategy execution. The 360 data that inform leadership competency models can be benchmarked against other organizations and roles. The time delivery people spend driving versus in store is a benchmark for go-to-market system performance. The number of support function employees in headquarters versus business unit roles provides benchmark information on organization design. Turnover rates can be benchmarked against industry competitors. And so on. Each of these types of benchmark data provides insight into whether the organization structure is right and whether behaviors and processes are consistent with strategic objectives.

While benchmarking data play an important role, they are also often misused when viewed out of context. There are limits to how reliably any one benchmark can measure strategy execution, because there is a cost-benefit consideration to making improvements along any one dimension. At some point diminishing marginal returns dictate that you stop pushing for more.

In addition, organizational design choices make direct comparison of even “objective” data—headcount ratios, time to complete tasks, and others—hard to interpret out of context. For example, you may have more HR people per employee than your competitors, but if your HR people are more directly engaged in supporting competitive advantage, the extra expense could be well worth it. Alternatively, they might be doing little to add value to the bottom line, in which case the higher headcount might not be worth it.

Employee survey data. Benchmarking on employee attitudes is always a good thing, right? Unfortunately, not necessarily. Benchmarking employee survey responses against external data is at best mildly informative. At worst it can lead to misdirection and wasted effort that undermine employee engagement – the opposite of the outcome we want. Internal comparisons often lead to deeper and more actionable insights.

Employee psychology is a tricky and multifaceted thing. A large number of factors combine to create satisfaction or dissatisfaction at work. Differences in compensation design, internal career paths, management processes, supervisor quality, development opportunities, and more combine to create unique organizational cultures and work experiences across organizations. Oracle, Microsoft, Google, IBM, and Apple are all in the tech industry and have employees who work in both hardware and software. Yet their cultures, employee value propositions, and internal career paths are quite distinct. Comparing the answers to similarly worded questions across these organizations without any regard for the different contexts of the employees who answered them is misguided. This is why benchmarking survey items across organizations, even within the same industry, usually is not very useful. It can help overcome complacency if used to encourage leaders to address issues that are festering unattended. But it can be counterproductive if used as a rallying cry to “beat” your competitors with a goal of attaining higher levels of agreement with specific survey items.

The deepest insights from your employee survey come from understanding how the various parts of what you offer employees combine as a package. You may be low relative to your competitors on pay satisfaction, but higher on opportunities for development and supervisor support. Faced with benchmark data like this, should you try to close the gap on pay satisfaction, or double down and pay even more attention to development and managerial behaviors? The answer is: you cannot know what to act on simply by comparing average responses across survey items. Instead, use your data to model the drivers of employee satisfaction, turnover intention, and other outcomes, and compare how important each element is using statistical analysis. These models are standard in social science research, and many HR analytics experts, whether in your organization or employed by your survey vendor, are well equipped to apply them to the data you already have in hand. All you have to do is ask.

If you take this approach, you will get deeper insights into the real drivers of employee attitudes and inoculate your organization against a common misguided practice: setting arbitrary targets (percent agree) on specific survey questions as measures of whether things are going well. Leaders and consultants love to use a stop light analogy to create red/yellow/green indicators for survey responses as a way of focusing attention on areas that score relatively low: red for areas of urgent need, yellow for areas to be addressed but not as urgently, and green for areas that do not need to be addressed. Yet usually there is no scientific justification for classifying a survey item as red/yellow/green simply because it might have a lower percentage of people who agree – unless that conclusion is tied to a specific statistical model showing that the item in question is important for driving employee attitudes.
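One way to make the red/yellow/green exercise defensible is to condition the color on the driver model rather than on percent agree alone. The sketch below is purely illustrative: the item names, percent-agree figures, driver weights, and thresholds are all hypothetical, and the weights would come from whatever statistical model your analysts fit:

```python
# Hypothetical survey summary: percent agree per item, plus a driver
# weight from a statistical model (e.g. a standardized coefficient).
items = {
    "pay_satisfaction":   {"pct_agree": 58, "driver_weight": 0.05},
    "development_opps":   {"pct_agree": 55, "driver_weight": 0.40},
    "supervisor_support": {"pct_agree": 74, "driver_weight": 0.35},
    "office_amenities":   {"pct_agree": 49, "driver_weight": 0.02},
}

def rag_status(pct_agree, driver_weight,
               low=60, high=75, important=0.20):
    """Flag an item only if it both scores low AND matters in the model."""
    if driver_weight < important:
        return "green"  # weak driver: a low score is not urgent
    if pct_agree < low:
        return "red"
    if pct_agree < high:
        return "yellow"
    return "green"

for name, stats in items.items():
    status = rag_status(stats["pct_agree"], stats["driver_weight"])
    print(f"{name:18s} {stats['pct_agree']:3d}% agree -> {status}")
```

Note how the lowest-scoring item (office amenities, 49% agree) comes out green because the model says it does not drive outcomes, while development opportunities turns red despite a higher score.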

For example, pay satisfaction scores are typically low for all employees. Consider two jobs – machine operator and senior executive – and a model of turnover intention. The percent agree for pay satisfaction is 78% for the machine operators and 66% for the senior executives. Does this mean that dissatisfaction with pay is more likely to drive senior executives to leave than machine operators? Not necessarily. In fact, because frontline employees like machine operators are paid at substantially lower levels than senior executives, differences among them in pay and pay satisfaction can be more important drivers of retention than for senior executives. For the senior executives, in contrast, their power and status in the organization may be more than enough to get them to stay, even if they would like greater pay. For both groups, the only way to be certain how important pay is relative to other parts of the job and opportunities at the organization is to run the statistical model separately for each group.

Headcount ratios and spans of control. Information on managerial spans of control can be explosive when benchmarked against other organizations. If your company comes up with narrower spans of control – fewer direct reports per manager – than others in your industry, it is easy to jump to the conclusion that everyone is fat and happy: people aren’t working hard enough because they have too few direct reports. The solution? Cut out a bunch of managers and make the organization leaner and meaner.

Of course that’s too simplistic a conclusion to reach just from looking at benchmark data. And most leaders wouldn’t jump immediately to the action of cutting managerial headcount just to increase spans. But many leaders are quick to put the burden of proof on HR to show that people shouldn’t be cut, and that’s a hard place to start, like having two strikes against you in a baseball game. (For the non-baseball fans among you, it’s three strikes and you’re out in baseball.)

The answer is to look at the larger picture of how work gets done and the roles each person plays. If managers in your organization have greater responsibility for decision making, client interaction, seeing projects through to completion, and the like, they may be doing more independent work and less supervising than managers in other organizations. If that’s the case, then they should have smaller spans of control; otherwise they would be stretched too thin and could not do the critical work supervisors are supposed to do: providing feedback and coaching, holding people accountable, and managing performance.

Speed of decision making. A while back, a company used a benchmarking service provided by a consulting firm to survey its employees about a number of organizational processes.

The benchmarking data raised concern about decision making speed. Being in the pharmaceutical/biotech/medical devices industry, the company knew the value of being careful and deliberate in making key product decisions. If more time and data were needed to make sure a product was safe, the company would readily do what was needed. In that sense, taking a long time to make certain business decisions was a good thing, and leadership knew it.

What concerned them was that they rated low relative to their peers in the industry on decision making speed. They also knew that their consensus-based culture could slow things down. These two reasons together caused them to question whether decision making was too slow.

At an earlier point in time they had tried a process-based solution to increase decision making speed: they rolled out a set of meeting tools, including RACI charts, in the hope that meeting time would be spent more effectively. But that had zero impact on decision making speed. Acting on the benchmark data alone turned out to be the wrong answer.

So the second time around they took a more systematic look at decision making throughout the hierarchy in the organization. Applying a systems analysis led to the root cause of slow decision making—unclear decision rights. It turned out that the benchmark data alone were insufficient to get to the root cause. Additional data on the organization structure and processes were needed to make the definitive determination of why decision making was so slow.

Cost-benefit tradeoffs in organizational effectiveness mean that benchmark data on organizational design and people processes are rarely useful when considered in isolation. What is the right level of turnover? Are there optimal headcount ratios? Is there such a thing as “enough” of a leadership behavior? When are spans of control too narrow or too broad? And so on. In each case, the answer is that it depends on the other objectives you are trying to accomplish. Reaching the right conclusion requires additional data and deep knowledge of the context. You have to look at the bigger picture of what’s going on in the system. Acting solely on the basis of analyzing benchmark data can quickly turn into a fool’s errand.
