Data Envelopment Analysis software

Benchmarking team and individual performance in R&D Laboratories

Measuring and rewarding individual and group performance is a nagging problem for many organisations. Individuals wish to be rewarded for their specific contribution to the business. However, in many cases they work in teams, and team members feel, quite rightly, that they contribute to each other’s success. So how do you differentiate one from the other, if that is what you want to do? A second issue is measuring the comparative performance of different teams within a business unit, where each team has a distinct and valuable contribution to make to the success of the organisation. A further complication arises when the work of one team can help or hinder the performance of another, so reward strategies have to encourage cross-functional or cross-team working without detracting from getting the best out of each team. These were the sorts of questions that a team of managers was faced with and partially solved using internal benchmarking with DEA as the measurement mechanism.

This study was undertaken by Dr. Mahen Tampoe, an independent management consultant, former director of operations for ICL’s Application Systems Division and former managing director of the ICL Information Technology Centre in Dublin. Mahen has published many articles on project management, managing knowledge workers and the practical application of the theory of core competencies. His consultancy work on organisational transformation and strategic change management has led to a broad range of clients in the utilities, leisure, petroleum and financial sectors, and in local and central government, both in the UK and overseas.

The problem

In this particular case, the parent company had made the decision to introduce pay for performance as the major plank of its reward policies. This was acceptable to those employees whose contribution could be directly measured. But there were many pockets of the organisation where such a clear demarcation of effort and outcomes was hard to identify, the R&D function being one of them.

Structural/organisational response

Tackling the problem required that the R&D function be structured and organised to measure performance at team and individual level. It also meant changing some of the traditional management methods that took a global rather than a team-centred approach to managing performance.

The first step was to clearly identify the different teams and to define the outcomes expected over the timescale of assessment. Traditional project management techniques were used to define the outcomes and the time, cost and quality criteria attached to them. The idea was to focus on what the teams were expected to do rather than how they should achieve their objectives. In other words, the work of each team was broken down into objectives or goals, and each goal had time, cost and quality criteria assigned to it. This was quite hard to do, particularly in areas of work that are traditionally considered to be free-flowing and inspirational. But some hard thinking produced measures which identified outcomes that could be judged and validated.

The next challenge was to do the same for each team member. Here a further criterion of ‘dependency’ was also assigned to each task undertaken. What this meant was that each team member knew not only what was expected of him or her but also what impact their failure to deliver would have on the other members of the team.
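As a purely hypothetical illustration, the sketch below shows how an individual’s task might be recorded so that it carries the time, cost and quality criteria from the team plan together with the extra ‘dependency’ information. The field names and values are invented for this sketch and are not taken from the study.

```python
# Hypothetical task record: what is expected, by when, at what cost,
# how it will be judged, and who depends on it.
from dataclasses import dataclass
from datetime import date

@dataclass
class Task:
    owner: str            # team member responsible
    outcome: str          # what is expected, not how to achieve it
    due: date             # time criterion
    budget_days: float    # cost criterion
    quality_check: str    # how the outcome will be judged and validated
    dependents: list      # colleagues whose work is held up by a miss

example = Task(
    owner="analyst_1",
    outcome="Prototype test harness handed over",
    due=date(2024, 3, 1),
    budget_days=5.0,
    quality_check="Passes the agreed acceptance script",
    dependents=["developer_2"],
)
```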

At this stage a monitoring and review mechanism had to be devised that enabled team members to ‘call foul’ if their colleagues were hindering their progress. This meant adopting a self-reviewing process in which the team met once a week to review progress. Work which was claimed to have been successfully transferred had to be confirmed by the recipient before the individual could take credit for it. Failures and successes were documented and suitable scores allocated according to a scale agreed in advance as part of the whole process. If arbitration was needed, the problem was referred upwards to the manager concerned. By this means the performance of each team member, and of the team as a whole, was monitored and reviewed weekly by the team and then confirmed by higher management once a month as part of the team review.
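The weekly record-keeping could be captured in something as simple as the following sketch, in which a hand-over only earns credit once the recipient confirms it and unconfirmed claims are set aside for the manager to arbitrate. The scoring scale and field names are assumptions made for illustration, not the team’s actual scheme.

```python
# Hypothetical weekly-review bookkeeping: confirmed hand-overs earn credit
# on a pre-agreed scale; disputed ones are escalated to the manager.
from dataclasses import dataclass

SCALE = {"delivered": 2, "late": 1, "missed": 0}   # assumed scoring scale

@dataclass
class Handover:
    giver: str        # team member claiming the delivery
    recipient: str    # colleague who depends on it
    status: str       # "delivered", "late" or "missed"
    confirmed: bool   # recipient must confirm before credit is given

def weekly_scores(handovers):
    """Credit each member only for confirmed deliveries; log disputes."""
    scores, disputes = {}, []
    for h in handovers:
        if h.confirmed:
            scores[h.giver] = scores.get(h.giver, 0) + SCALE[h.status]
        else:
            disputes.append(h)   # referred upwards at the monthly review
    return scores, disputes

reviews = [
    Handover("analyst_1", "developer_2", "delivered", confirmed=True),
    Handover("developer_2", "tester_3", "late", confirmed=False),
]
scores, disputes = weekly_scores(reviews)   # {'analyst_1': 2}, one dispute
```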

Measuring comparative performance

What was needed was an easy means of collecting and analysing data so that comparative performance could be measured. Frontier Analysis proved to be an easy-to-implement tool and was chosen over other tools or techniques which would have required considerable programming effort, delay and associated costs. The value of easy-to-use software should not be underestimated: it took only a few days to progress from idea to the first pilot. The delay is usually in formulating the model and satisfying all those concerned that the logic and rationale are sound. Being able to demonstrate the concept quickly with ‘dummy’ values was a significant contributor to getting agreement from senior management to try the method. Finally, knowing that the technique had academic rigour and had been proven in practical applications by others provided a comfort factor for both managers and staff.

The variables used within the model covered four main areas. These were:

  • Effectiveness of the individual – keeping commitments, meeting quality standards, meeting commitments to colleagues, completing personal development activities and so on.
  • Inter-personal skills – working with others, helping others, managing their managers and influencing colleagues in other parts of the organisation.
  • Task skills – report writing, presentation of technical information to non-technical colleagues and time management.
  • Specialist skills – the ability to learn new skills, to think creatively and apply that creativity, and to apply the specialist skills for which a person was employed.
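Behind a tool such as Frontier Analysis sits a standard DEA calculation: each person or team is treated as a decision-making unit whose outputs (scores on measures like those above) are compared with the inputs consumed, and a linear programme works out how efficiently each unit converts inputs into outputs relative to the best performers. The sketch below shows the input-oriented CCR envelopment model solved with SciPy; the units, inputs, outputs and figures are invented for illustration and are not the model used in this study.

```python
# Minimal DEA sketch (input-oriented CCR envelopment model), not the
# Frontier Analysis implementation; data below are invented.
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(inputs, outputs):
    """Return an efficiency score in (0, 1] for every decision-making unit."""
    X, Y = np.asarray(inputs, float), np.asarray(outputs, float)
    n, m = X.shape            # units, input measures
    s = Y.shape[1]            # output measures
    scores = []
    for o in range(n):
        # Decision variables: [theta, lambda_1 ... lambda_n]; minimise theta.
        c = np.r_[1.0, np.zeros(n)]
        # Inputs:  sum_j lambda_j * x_ij <= theta * x_io
        A_in = np.hstack([-X[o].reshape(-1, 1), X.T])
        # Outputs: sum_j lambda_j * y_rj >= y_ro
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])
        res = linprog(c,
                      A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(m), -Y[o]],
                      bounds=[(0, None)] * (n + 1),
                      method="highs")
        scores.append(res.x[0])
    return scores

# Hypothetical example: three team members, one input (effort in hours) and
# two outputs (commitments met, peer-review score from the weekly meetings).
effort  = [[40], [45], [38]]
results = [[9, 7], [8, 9], [6, 6]]
for name, score in zip(["A", "B", "C"], dea_ccr_input(effort, results)):
    print(f"Unit {name}: efficiency = {score:.2f}")
```

A score of 1.0 places a unit on the efficient frontier; lower scores show how far a unit could, in principle, reduce its inputs while keeping its outputs, which is the kind of comparative measure described above.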

Benefits and opinions

So, how successful was the process? Overall, the ability to create comparative data was seen as having value. There was much discussion about the fairness of the measures and of the measurement process itself. This, however, is inevitable in a pioneering venture of this kind: disputes about fairness always arise when people are measured against targets (sales staff always seem to complain that someone else has an easier patch). Disputes also arose about the impact that one person’s delay had on the work of another, but these disagreements, when well handled, resulted in solutions that smoothed the channels of communication and increased team working. There is a long way to go before the approach becomes part of the accepted way of measuring performance. However, it has helped highlight the need for measures and demonstrated the validity of the method used. The easy availability of software that allowed an internal benchmarking exercise to be tried furthered the cause of fairness in bonus allocation and, for the first time, was seen to remove, quite substantially, managerial ‘guesstimating’ and the perceived arbitrariness in allocating each individual’s share of the bonus pool. The method also enabled the staff affected to know how their performance was measured.
With thanks to Mahen Tampoe of M2L Consultancy for his time and co-operation in preparing this case study.