This post is fulfilling a number of functions:
- summarising a discussion at the Heads of Educational Development Group (HEDG) on Monday about the value of KPIs
- my notes from Barbara Dexter’s session at the SEDA conference the following day on “Targets and performance measures in educational development: how helpful are they?” and David Baume’s session on “How do educational developers make an impact – and how do they know they have done?”. Both part of the Evaluating Impact and Value for Money strand at the SEDA conference
- my presentation as part of the same strand on “Making drinking the coffee into KPIs – managing impact measures within an educational development setting”. I’ll post up the activity I did for this another time
As ever, I am pressure prompted, so I was sorting out the presentation the night before; but in fairness I did have to go to the other strand sessions to work out what I would do. I am hoping this post will help me clarify my thoughts.
So, why am I interested in this topic?
Well, for the past two years, as head of a largeish educational development team, I have been consistently saying that we need to demonstrate impact – I did write “value for money” but I’m not sure what this means in this context. I get quite cross with some of the (perceived) complacency I hear around this issue: that what we do is too subtle to measure, that a measurement culture is tantamount to some kind of management control and heresy, that it is too reductionist, that it is obvious that we are necessary, that the measures are useless. Whilst I would admit that there is some truth in some of these statements, if we do nothing, then unsuitable measures will be imposed upon us. And we should not forget that we are paid to do a job and therefore should be able to demonstrate ON SOME LEVEL that what we do is having an effect. Otherwise, shouldn’t we all just go home?
I felt very strongly when I moved into my current role that it was important to consider fully whether the work we do is relevant, of value, appropriate and, most importantly, making a difference. Leading staff who on the whole feel passionately about what they do and have moved into this area because they want to make a difference, I felt it was important to give them the opportunity to evaluate how they make that difference. I also thought that the culture at my institution was such that we would be asked to demonstrate, in terms that senior management could understand, what we did and why we did it. Furthermore, in setting up a new service/department, we needed to ensure that we remained committed to what staff required from us. The caveat of course is that there is a balance between leading the Schools and being led by them, and this is a fine balance to strike. So, one of my mantras over the past two years has been “we need to demonstrate the value of what we do”, with the other being “if we are not of use to the University, then we may as well go home”.
So, that explains my interest in this, which I guess you could describe as threefold:
- wishing to evaluate the work of the team and the usefulness of the service
- path for new developments and innovation
- enable others to understand what we do (from senior management to institutional stakeholders)
At both HEDG and SEDA this week there has been a lot of discussion about KPIs and other measures. The HEDG discussions were prompted by the Australian Council of Academic Development’s work on performance measures – see http://www.catl.uwa.edu.au/projects/tqi – and HEDG then also had a discussion on its mailing list about possible KPIs. At the meeting, concern was raised about the Australian work as it was very focused on senior management satisfaction, although people felt that the eight overall areas (eg strategy, policy and governance; scholarship of learning and teaching) could be useful in providing comparative measures. There was a feeling that if HEDG didn’t take up the mantle on some of this work then KPIs and other measures could be “foisted” on the sector. And there was an acknowledgement of the importance of “playing the game” in terms of being seen to measure in order to please senior management. The mailing list discussion was more problematic, as many of the proposed measures were around things such as NSS scores and student retention, and most of these are institutional measures. Whilst, one would hope, educational development units do play a role here, such outcomes cannot be attributed to the educational development unit alone. I have personally always found such measures problematic: our role is that of a conduit and a facilitator, with someone (ie the academics) standing between us and the students, and it would therefore not be appropriate to measure us on these scores.
Furthermore, I think many of the measures on the HEDG list were confusing measurement of the impact of the service with measurement of learning and teaching generally. So, for example, accreditation, audit reports, student evaluations, retention, distance learning programmes etc cannot be measures of the work of educational development because you cannot attribute cause and effect. Or, in other words, you cannot measure it. Other measures just beg the “so what?” question: take-up of new technologies, the number of NTFs, the number of HEA Fellows etc may all be things we think we want to measure, but why? What difference does this make?
In my team we have been struggling with this for two years. Whilst I have acknowledged the external expectation around KPIs and measurement, I would not say this is the key driver. It may sound paranoid to the team, but really I have wanted to introduce more of a culture of measurement and evaluation to improve our own motivation and work, as well as to ensure we meet what is required of us. I prefer to think of us as being agile: finding measures we are happy with, that we think have meaning, but that can be adeptly handled to give others information they can understand. And, without wishing to sound patronising, I say this deliberately, as I think we often get too close to our work and are then unable to articulate clearly to staff, whether in senior roles or not, the breadth and extent of our work. This is something else we have struggled with.
We have tried to tackle this issue in a number of ways. During the first year of the team we did two things. The first was to run a service development workshop with key staff across the institution to ascertain what shape they would like the service to take, what value they saw in the staff, and which areas needed development. We then commissioned a consultant to work with us on designing some KPIs. The consultant built on the work from the workshop and spent a long time talking to key stakeholders formally and then chatting to other staff informally. She did a very sensitive and thoughtful piece of evaluation which outlined our core “audiences” and how we could reach them, as well as identifying the types of things we wanted to measure and why. Her work made me realise, firstly, that we needed to select a few measures and stick to these – the keep-it-simple approach – and, secondly, that we could use a variety of methods in measuring the same thing. So we could mix and match approaches, which ties into my comments about agility above. This work formed the basis of a full team away day where we spent a day considering the different areas that we wanted to measure and how. Key to this work were the following criteria:
- The area had to be measurable, and preferably in more than one way
- It had to be something that we were used to measuring, otherwise we wouldn’t do it
- The measuring and recording had to fit with our vision and values
- We had to produce meaningful data
- We had to demonstrate impact
Some of these areas were easier to address than others. Additionally, there was some resistance from the team. Some of the team felt that they would spend all their time completing spreadsheets without purpose. Others could not commit to a helpful measure or were not convinced the data would be useful. And I think some (although they probably didn’t say it to me!) felt that this was not appropriate to the academic nature of the work. On a more positive note, however, there was a general acceptance of the notion of evaluation and continuous improvement, even though some of the tactics were more problematic.
At the end of the day, though, we had come up with a set of seven areas that we felt we could comfortably measure. We did have a problem with one area, the “coffee conversations”, and discussed various ways we could address this. We decided, though, that we could start on those areas we were more comfortable with, and that this would help convince the team of the value of what we did; other measures would then fall into place. Also, now that all the team had started to think about evaluation, impact and data collection, we were starting to change the culture of the team.
Six months after we had put this in place, I felt it was appropriate to review the team objectives and relate these more fully to the KPIs, or critical success factors as we were calling them. Although we were more on track in some of the evaluation areas, in others I was not convinced we were demonstrating impact. At David Baume’s workshop on evaluation at SEDA we looked at Kirkpatrick’s levels of evaluation (1994), which are, as paraphrased by Baume:
- Did people like it?
- What have they learned from it?
- Have they applied what they learned to their practice?
- Have results improved?
This reminded me of where we were earlier this year. As Baume notes, we spend a lot of time on levels 1 and 2 – the kind of “happy sheet” culture. This may not be a problem, but we need more than this. I had sat as an external member on a review of a team similar to mine at another University, and the “so what” question begged by data from levels 1 and 2 struck home for my own team. Do we know what the “so” is?
In order to address this we spent a few mornings as a team, over a two-month period, trying to work out what the essence of the team was, how we could couch this in some clear objectives and then how we would know we were successful. What did success look like? How could we get there? And how would we know what to do to get there? Although at times tortuous, most of these sessions were positive in refocusing the team, who by our own admission have a tendency to take too much on and say yes to everyone, and in getting them to think about how we could talk clearly about what we did. At the end of the two months, we agreed the following objectives:
- Recognition: Collaborate with staff and students to celebrate and publicise successes
- Expertise: Demonstrate expertise through thought leadership, developing practice and/or research informed practice
- Development opportunities: Create and promote opportunities for staff to engage with new learning and teaching techniques and dialogue around their practice
- Team: Exemplify good practice by actively participating in knowledge sharing and cross-skilling within the LDC team and collaboratively contributing to the LDC environment
I know many of you may be thinking this sounds awful and managerial, but the way we did this was fun: it acted as a reward for the team, united us in common thinking and ensured that we were able to describe the work of the team to others clearly, without falling back on a couple of tried and tested activities (usually the MA in Academic Practice and our work moving to Moodle, both of which are great in themselves but don’t demonstrate the breadth of what we do). It also enabled everyone to see what they gave to the team and to pitch their work in relation to these broad, common purposes.
But the key question still was – what are our success factors, and how do we capture much of the intangible work we do: the serendipitous meetings, the coffees that I am always telling people to go and have, the vital trust-building 1-2-1s, the emotional intelligence and shoulder-to-cry-on work? We have always held these kinds of interactions dear to the work that we do, but nothing, seemingly, could help us to measure them.
This was where our consultant told us of a genius idea which has truly helped us embed the culture of evaluation and measurement in our work. Lego. Or, to be more precise, Lego timesheets. The idea came from a developer who wanted an easy way to quantify his work on different activities. You can read about his thought process here. We suddenly realised that this could be a low-maintenance, easy way to measure those serendipitous meetings and “intangibles” with Schools. By attributing a different colour to each thematic area of our work (determined in our KPIs), and a green Lego base to each School, we could easily see how much time we were spending in each School and on what activity. Each block represents half an hour, and staff from across the team add blocks to their Schools on a daily or weekly basis – whenever they can. The results are then recorded each month.
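For anyone who wants the arithmetic spelled out, here is a minimal sketch of how the monthly block counts can be totted up into hours per School and per thematic area. It is purely illustrative: the School names, themes and figures are invented, and all it shows is the half-hour-per-block sum, expressed here in Python.

```python
from collections import defaultdict

HOURS_PER_BLOCK = 0.5  # each Lego block stands for half an hour

# Hypothetical monthly counts: (school, thematic area) -> number of blocks.
# Names and figures are invented purely for illustration.
block_counts = {
    ("School A", "coffee conversations"): 14,
    ("School A", "workshops"): 6,
    ("School B", "coffee conversations"): 4,
    ("School B", "curriculum development"): 10,
}

hours_by_school = defaultdict(float)
hours_by_theme = defaultdict(float)

# Convert block counts into hours and total them both ways.
for (school, theme), blocks in block_counts.items():
    hours = blocks * HOURS_PER_BLOCK
    hours_by_school[school] += hours
    hours_by_theme[theme] += hours

print("Hours per School this month:")
for school, hours in sorted(hours_by_school.items()):
    print(f"  {school}: {hours:.1f}")

print("Hours per thematic area this month:")
for theme, hours in sorted(hours_by_theme.items()):
    print(f"  {theme}: {hours:.1f}")
```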
The Lego idea has worked for the following reasons:
- It is a quick, simple and easy way of recording things that usually cannot be recorded
- It is fun – no really – people have built all sorts
- People like doing it and can immediately see how much time they have been spending with Schools and on what activities
- Staff from the Schools come into the office and notice the timesheets – they prompt comment and discussion. Some Schools have even been shocked at their figures, so much so that they go and work to get the sheets better populated
- It keeps evaluation and measurement in our minds as we can all see it in the office and so is helping to change the culture
The downside, of course, is that it still does not work on those third and fourth levels. But what we have been able to do, by encouraging staff to think differently and to engage in something more creative, is free their time from mechanistic measurement so that they can capture, elsewhere, good case studies or stories of where these conversations or activities have influenced practice. We now need to find more ways of getting this data, but having a more organised way of capturing the “coffee conversations” has been massively beneficial.
But…. so what? What have we done with all this information?
Well, a number of things. Firstly, we were easily able to create a “10 things about the LDC” document that puts the data we have collected at our fingertips. It is a snapshot of our work and attempts to get at the impact question. The support team within the team were able this year, for the first time, to create a report which comprehensively captured the range of what we do as a team – you can see our Critical Success Factors report online. This not only brought the work of that team to the forefront but also made all staff in the team see how important this work is. This report has been disseminated within the institution and was given to our new VC. It contains a range of data and measures and attempts to capture the “so what”.
We are also building on this by collecting a set of case studies and examples of the impact of our work on practice. Now that we are more focused on this activity we are more attuned to the possibilities. This is an activity that we would do anyway, but we are now focusing on how we can use it in multiple ways. Our case studies about Moodle adoption are at http://www.city.ac.uk/ldc/learning-technologies/moodle/Moodle%20Case%20Studies.html
We have also changed our services and offerings. Based on evaluation we have made some changes to our professional development programme and will make more next year. We have gone for a more bespoke model, with some central activities, and have built in follow-ups and sessions with attendees to ascertain what changes they have made to their practice and why. The question that we now feel more comfortable asking, frequently, is: ok, but “so what”?
On a strategic (or tactical, depending on how you look at it!) level, the University has also undertaken a zero-based review of all services to ascertain staffing levels and service offerings. In contributing to this activity we were able to provide data that we were comfortable with and had measured ourselves. Having a comprehensive set of metrics, agreed KPIs, and an awareness and practice of an evaluation and measurement culture was enormously helpful when meeting with the ZBR team to describe what we did and why. It also helped us determine, in those conversations, where our remit started and stopped in relation to other areas.
We are planning a longer activity around user journeys as part of an extended user needs analysis piece of work to ascertain the real impact of what we do on a few staff members, so we can see in practice their challenges and needs and how we can better support them.
In all appraisals, the four objectives were matched with appropriate objectives for each staff member, and everyone had to complete an extended development plan that included how they would measure success and how they would know they had been successful.
We’re still learning with this. It is not perfect and I think at times we try to make it too complicated. It will be interesting to see what happens this year in terms of how we evaluate and measure success. We also need to ensure that our measuring does not become too insular, marrying it with the drive to collect examples, case stories and data. We need to continue to resist a culture that is mechanistic in terms of measurement and just measures for measuring’s sake, but we need to remain mindful of our obligations to provide evidence for the value of our work, in ways which are required by others but are still in line with our values. We also need to still trust our gut instinct – measurement data may not give us the entire picture and is only one way of supporting our work. Although we can continue to question and interrogate, we should feel more motivated and confident in our work and our abilities if we know we can demonstrate that we are giving value and making a difference. And ultimately that is all any of us can hope for.
Interesting post – I thought there was some very sensible and thoughtful stuff in there, and it’s pitching a good middle line between imposed (and sometimes insane) measures and just trying to opt out.
The thing that particularly resonated for me was the idea:
“We need to continue to resist a culture that is mechanistic in terms of measurement and just measures for measuring’s sake, but we need to remain mindful of our obligations to provide evidence for the value of our work, in ways which are required by others but are still in line with our values. We also need to still trust our gut instinct – measurement data may not give us the entire picture and is only one way of supporting our work.”
This, I thought, tied back to one of the opening comments:
“So, that explains my interest in this, which I guess you could describe as threefold:
* wishing to evaluate the work of the team and the usefulness of the service
* path for new developments and innovation
* enable others to understand what we do (from senior management to institutional stakeholders)”
I suspect the problem with coffee conversations is that they need to be explained under (3) rather than (1). Having a coffee conversation is like using Moodle, in your example of pointless measures – great if it’s productive, but otherwise it’s just a meaningless tick in a box. What you really need are stories that show the value of the work and how the coffee conversations were necessary and useful parts of the process. This may just be part of the endeavour that needs to be explained and tolerated as part of professional trust, rather than undertaken to targets.
(That said, if I was set a target for coffee and cake consumption, I wouldn’t argue…)