A few people have asked for details of the exercise I did at the SEDA conference around KPIs and measuring impact, so here it is…
The exercise relates to my other post on coffee, lego and KPIs, about measuring the impact of educational development initiatives. I had about 20–30 minutes to run it from start to finish, but it could definitely have lasted longer. For the accompanying presentation I used a Prezi.
Step 1: Take some rather lovely coloured, postcard-sized, rectangular card (I used some rather fluorescent card from Rymans – it was a bit lurid but did the trick) and write one educational development activity on each card. My list of “typical” educational development centre activities was as follows:
Journal; Invited speakers; NTF; Workshops; Seminar series; Educational technologies; Institutional research; Professional development (ad hoc) ; 1-2-1s ; Awards; Educational strategy; Programme team “interventions”; Conference; Innovation; Peer review; PG Cert (Academic programmes)
There are others I’m sure but this gives a good flavour and a range of activities.
Step 2: Find some other shaped, coloured card. I used stars and circles (again from Rymans), again in lurid fluorescent colours. There was something about the shaped card, and the circular shapes in particular, that worked well – some people commented on this in the session. Use these for the evaluation measures. Write one measure on each shape; you can repeat measures across shapes. I tried to group the measures into different types of evaluation, each denoted by a different colour of card.
The measures I used are given below:
“Counting measures”: no. of attendees, no. of users, no. of applications, no. of awards, other numeric data
More complex “counting” measures: NSS scores, input into curriculum changes, no. of networks created, policies produced, committees attended, projects completed, satisfaction scores, hours spent with schools/faculties
Qualitative measures: Focus groups, case studies, interviews
More complex qualitative measures: User stories, examples of changed practice, testimonials, emotional engagement
Research measures: Published research, external/research funding, other research output, % time spent on research
Excuse the rather crude names for the types of measures – this was quick and dirty, so it does need some finessing, and I’d love to hear from you if you do finesse it. I’m sure there are other measures, but these are the ones I came up with as starters for ten; do let me know if you have any more ideas.
Step 3: Split your “audience” into three or so groups of about 4–5 people each. Depending on time, give each group 3 or 4 different activities – we had about 20 minutes, so I gave them 3 each. Then randomly give each group a selection of measures drawn from all of the categories.
Step 4: In groups, working on a flip chart page, each group identifies the measures that could be used to evaluate each of its activities. Usually there is some overlap and quite a lot of discussion.
Step 5: At the end of the allotted time, each group feeds back to the others on why they chose those measures, any problems they had, and so on.
If I were running this again I think I would be more scientific about the categories and also group the activities by colour, but these are minor tweaks.
I’d also like longer – probably an hour to an hour and a half would allow more discussion and more time to cover all the activities, as well as letting groups contribute their own. It worked well though, and was a good way of getting people to think about the range of measures you could use, and about where one “measure” gives you lots of data. We also talked about how some measures may be relatively meaningless in themselves but are still important for certain audiences.
Thanks to the LDC team and those who attended my session at the SEDA conference 2010 in Chester. Also to:
Lesley Salem from Cultivation, who worked with us on the initial evaluation at City, identified some of the range of measures, and was instrumental in making me think differently about KPIs.
Dr Antonia Ward, who helped me think about impact and put me onto the fabulous lego timesheets.