George "Alyn" Kinney
Feb 1, 2021


Training evaluation lessons learned

“This looks cherry-picked. I don’t think your project is responsible for these results.” That was the response I got years ago after I did my first end-to-end design project. I couldn’t believe it. I had followed ADDIE and set up the project following the Kirkpatrick model.

My first reaction was that perhaps I hadn’t followed the model well enough. I looked for feedback from my fellow designers and re-read decks from ASTD conferences. Everything looked right. My fellow designers backed me up, but looking back, I think we were wrong.

I made a few key mistakes. One was that I was reporting on my own success. I had worked with our data team to put all the measures in place, and everything surfaced in a dashboard. I took the data from the dashboard, along with qualitative observations, and put it into a slideshow that I presented back to stakeholders. I’ve since learned that everyone expects a degree of bias in anything self-reported, especially when it’s good news. I would have done better to at least co-present the findings with the data team.

I was very confident in my report because everything lined up. Learning objectives lined up with quiz scores, which lined up with quality observations, which lined up with compliance and quality business results. What I didn’t understand is that I was presenting a theory about where the results were coming from, and that theory, as it turned out, conflicted with competing theories, including my stakeholders’ theories. Management, coaches, frontline leadership, heck, even individual employees had their own stories about how those numbers were achieved. One strategy I’ve used since then is to rely on just part of the story above. Maybe I can’t claim all the quality results are mine, but a theory that the curriculum led to better test scores around quality, and then to better individual quality results, might be easier for stakeholders to swallow. Alternatively, working with the data team to evaluate all the efforts going on at the time could also have worked. Management loves to know what their ‘levers’ are for impacting business results.

Lastly, the feeling that my results looked cherry-picked really bothered me because I had gone to lengths to report only on quantifiable data. I made a point of excluding splashy learner quotes about how great my class was. It was an onboarding class, and I knew from experience even then that employees were generally pretty happy, so ‘L1’, or reaction, data wasn’t very reliable, especially when the facilitator was good. The timing and purpose of an evaluation need to be established up front, before any evaluation starts. From my perspective, I was waiting until I had statistical significance and reporting back as soon as I knew what was going on. From my stakeholders’ perspective, I was reporting mid-quarter, months after my class had already launched. I had already gotten my fanfare, so why was I trying to steal the spotlight from other ongoing efforts? It seemed as though I had cherry-picked the timing to coincide with performance reviews. Again, it was the implied bias of reporting on my own performance that bothered stakeholders.

I learned a lot from that project. The biggest lesson for me was that cause and effect is messy. We can hardly predict what the weather is going to do, much less a complex group of people. Demonstrating a training program’s value isn’t about lining up a grand theory or gathering more data in just the right way; sometimes it’s about showing that the program helped where it was supposed to help, in the context of a team effort.



George "Alyn" Kinney

I’m a Learning Experience Design Manager at Google. I like to write about education, user experience and philosophy.