
Performance reviews: What it means to untie them from compensation decisions

Francisco Homem de Mello


I’ve been talking to many executives and HR professionals who’ve heard about “untying performance reviews and compensation” but who don’t quite get what it means. They’re even a bit skeptical: how can you untie two such seemingly inseparable things?

One of these was the CEO of a startup, who wrote me the following:

We’re researching how to do annual performance reviews. My assumption had been that immediately after conducting those reviews we would conduct reviews of compensation as well. We plan to give most employees standard, reasonable annual raises each year, with exceptional cases receiving much higher (or lower) raises, based on the results of their performance reviews.

However, we surveyed how a number of companies do this, and at 100% of the companies we looked at that’s not quite how it works. The compensation reviews are time-shifted and held ~3 months after the performance reviews. I’m told there’s a real reason for this, which is that it’s supposed to reduce anxiety amongst employees. Especially when they are asked to rate their peers. They feel less like they are affecting someone’s comp directly. Some companies even go so far as to say “Compensation reviews are not tied to the performance reviews. They are separate processes.” Which is of course a bit silly. The results of the performance review do feed into the comp reviews, it’s just apparently less psychologically stressful for employees if there is this 3 month delay.

He’s confused, and I don’t blame him. HR people (and I consider myself one) are not doing their job of explaining what this means, and maybe that stems from a lack of knowledge (I’ll leave that for another time). Anyway, I’ll attempt to explain here what it means to “untie performance reviews and compensation.”

What are performance reviews?

Performance reviews are a structured process, usually spearheaded by HR, where individual performance is both assessed – that is, measured – and coached for improvement. They are the end of a performance management cycle that starts with setting expectations, like goals and behaviors, and ends with the review, which assesses performance against those expectations so that a new cycle can start.

Performance at a job has two sides to it: the “how”, which is usually measured via competencies and behaviors (think “teamwork” or “works well with the team”, respectively), and the “what”, which is usually a mix of results-based goals and deliverables like projects and tasks. Together, these two groups make up the criteria against which employees are held accountable. Performance reviews usually follow this two-pronged approach.

The process usually comprises a mix of reviews done by the manager and by the employee being reviewed, but in some cases a group of peers and internal customers is also chosen to assess performance; this more comprehensive process is called a 360-degree review. In the end, the reviewee gets a meeting with her manager and a report that discloses, to varying degrees, the assessments she received.

Why do performance reviews exist?

Companies deploy performance management tools to improve individual performance. By improving individual performance, companies hope to improve team performance and business performance, as measured by financial results, innovation, etc. So the reasoning goes (and I believe it’s sound logic).

We’ll call that first objective “performance development.”

But companies also use performance reviews as a major input for other HR processes, like compensation management (how much people are compensated for their work) and talent management (how we make sure we retain our top talent and keep up with future people needs throughout the organization).

The fact that performance management, and performance reviews in particular, serve two masters is the root of why people and companies feel so unsatisfied with the process.

Performance development and decision making are too different

We believe performance reviews’ two goals – performance development and decision making – shouldn’t be pursued through the same process if a company wishes to extract maximum return on its investment. So why can’t these two goals coexist?

The first reason is that performance development and decision making are so different they need their own processes to be most productive. Trying to bundle them together reduces the effectiveness of both of them against their goals.

Performance development requires coaching, meaningful conversations, and narrative in order to understand and improve performance. If a manager has a start/stop/continue [1] conversation every month or two with her team, this goal will probably be achieved at optimal ROI.

Making decisions, on the other hand, is a quantitative process of comparing people on a number of criteria, of which performance is only one. It’s more about ratings, and less about narrative and coaching [2].

The premise is that no company has enough goodies to hand out to everybody. Resources such as opportunities, promotions, raises, and performance incentives are scarce, and therefore not available to everyone at all times. Also, resource allocation is not solely a function of performance, so performance reviews are poorly suited as the sole input for people decisions. Other criteria need to be taken into account.

Let’s say your company does AI tech and has one employee who is extremely hard to replace and critical to the development of your product. Aren’t you going to allocate scarce resources to “make her stay”? Or let’s say you’ve got two top performers, but one already makes top-of-market comp, while the other is slightly below market. Which one are you likelier to lose? To which one, then, are you likelier to allocate more resources?

Making people decisions, be they compensation changes or succession planning, is about retaining the employees who bring the most ROI to the company or who can most negatively impact company performance if lost.

Decision making overshadows performance development

So we’ve talked about how the optimal processes for decisions and development are too different to be bundled. That was the first reason for untying them. The second one is that when they’re bundled, usually performance development gets crushed by decision making.

Performance development is based on narrative and coaching. And narrative and coaching are hard. It takes a lot of energy to properly put past performance into a cohesive form. It takes even more energy to think of how a person can improve in concrete, actionable terms.

Rating people on five-point scales, on the other hand, can be quite a mindless experience. Reviewers push buttons, give way to gut feelings, and worry more about relative ratings (how an employee fares relative to others) and final ratings (the overall message the ratings will send this person) than about the actual criteria being evaluated. A common example is the tendency most people have to fill out these ratings one by one and then go back to check whether the “big picture” is coherent. That is, whether the specific ratings on each competency, for example, don’t contradict the overall impression they hold of the person being reviewed.

When that happens, the performance development goal is far from achieved, because the output of such a mixed process is of terrible quality for improvement purposes.

Another negative spillover of bundling the two is that the anxiety of being rated closes people’s ears to coaching and feedback. They just want to know the number, and nothing else (I’ve been there myself, on the receiving end of such reviews, and can attest to it).

What should you do, then?

By now I hope you’re convinced why performance development and decision making shouldn’t be bundled into a single “performance review” process. So what should you do instead?

First of all, development and decisions should be tended to on different cadences. Performance development should be much more frequent than decision making. We suggest our customers do a bi-monthly or quarterly performance development check-in and maybe a talent management review twice a year.

The performance development check-in is composed of a self-reflection form filled out by the employee, a manager reflection filled out by the manager, and a conversation where agreements are made about the next cycle. 360-degree feedback may be collected before the check-in, and its content may or may not be available to managers [3]. The output of the process is a set of actionable feedback for the employee, as well as a renewed set of goals (both business and development goals) and expectations.

The talent management review, on the other hand, is a highly quantitative process where a manager rates her team on a handful of criteria that serve as inputs for short-term talent decisions, such as promotions (in comp and responsibilities), long-term incentives (equity and retention bonuses), career changes (when someone moves from software engineering to marketing, for example), and so on. Manager assessments are then calibrated with other managers: they all get into a room and try to sort out whether any manager’s assessments are off relative to the others’. Finally, the only thing communicated back to the employee is a decision, if and only if there is one. No ratings whatsoever are given back to the employee.

Closing remarks

By now I hope you’ve understood what it means to “untie performance reviews and compensation”, the main reasons for doing it, and what should be done instead. We at Qulture.Rocks help customers set up performance management programs on a daily basis, and it would be a pleasure to discuss the subject further with you – we love talking about it. So feel free to ping us at growth@qulturerocks.com.

By the way, if you want to learn more about the subject, we highly suggest you download our performance management ebooks.

————-

[1] The “Netflix feedback model”. Employees give each other feedback on what they think the others need to start doing, stop doing, and keep on doing. A great way to focus feedback on what the receiver can actually do with it. You may call this “feedforward”.

[2] I will use the term “coaching” with a simple denotation: to explain to people how they can do better. I won’t get into the mentoring vs. coaching debate, which is fruitless for our discussion.

[3] We suggest companies start doing “manager-blind” 360-degree feedback before performance development check-ins. In these, managers have no access to the actual content of the feedback received by their direct reports. This reinforces that the tool exists solely so the direct report can seek feedback from peers and internal customers, and not, in any way, to influence a manager’s outlook on one of her direct reports.