Monitor and Evaluate

#9 of 9 in our weekly succession planning blog post series:

Our guest blogger, Paul Riley, is a life-long learner of Organizational Leadership and Change who applies systems thinking and community development principles to help people work more effectively together within the complex human systems we create.

This week’s blog post focuses on the last of the 7 principles of successful Succession Planning: #7: Monitor and Evaluate. Succession planning and leadership development programs should be continuously monitored and evaluated to help stakeholders understand what works, why it works, and what impact it’s having on the organization’s leadership pipeline. People often think of evaluation as an activity done at the end of a program. However, it’s important that evaluation play an integral role in the process from the beginning, during program planning and implementation, with a focus on long-term outcomes and continuous improvement.

Program evaluation starts with the end in mind. In other words, you must identify the goals and long-term outcomes of the program to understand what you’re evaluating. I like to start by establishing an explicit program theory to describe how and why a set of activities is expected to lead to anticipated outcomes and impacts. I often use a logic model with the organizations I work with to show the chain of reasoning, connecting the program’s parts with “if...then” statements to illustrate a sequence of causes and effects. The planning process begins with a discussion among stakeholders about strategies that will contribute to the program’s desired results. In essence, this conversation is about the program’s theory.

The logic model I mentioned above is an iterative tool that provides a simple framework, one that is revisited throughout the program planning, implementation, and evaluation phases. The terms ‘logic model’ and ‘program theory’ are often used interchangeably because the model describes how, and to what extent, a program works. The W.K. Kellogg Foundation provides a very useful Logic Model Development Guide, which was developed for the non-profit sector but is particularly useful for evaluating programs designed for organizational and behavioural change, regardless of sector.

Although a logic model provides a useful framework for establishing and presenting the program’s theory, the framework doesn’t provide much detail about how to select indicators. So for this, I recommend incorporating frameworks into the evaluation program that are designed for evaluating training, succession planning, and leadership development programs. For instance, Bennett’s Hierarchy describes seven successive levels to evaluate training and development programs. The hierarchy starts with inputs and activities at the bottom, which Bennett asserts are the simplest level of evaluation that provide the least value in determining whether a program is effective. At the top of the hierarchy are social, economic, and environmental outcomes, which Bennett believes represent the highest aim for educational programs and are often the most complex to measure. Kirkpatrick also provides a model to evaluate training programs, which includes four levels: (1) participant reaction, (2) learning, (3) behaviour change, and (4) organizational results. William Rothwell, author of Effective Succession Planning, proposes an adaptation of Kirkpatrick’s four-level model to evaluate succession planning programs, which includes: (1) customer satisfaction, (2) program progress, (3) effective placements, and (4) organizational outcomes.

Combining the frameworks proposed by Bennett, Kirkpatrick, Rothwell, and others provides different lenses through which to look at the various aspects of the program’s theory. While the logic model provides a general framework to guide program planning, implementation, and evaluation, these other models offer a more targeted focus on establishing indicators to measure program outcomes and impacts. Incorporating multiple evaluation methods is likely to offset the weaknesses and complement the strengths of different models, and it allows evaluators to confirm results, which enhances the integrity of program evaluation by producing more accurate measurements. Mixed-method evaluation programs are also more likely to reflect the needs of program participants and stakeholders by looking at things from a variety of perspectives, which is likely to produce better evaluation designs and more targeted recommendations.

One of the main challenges I encounter when establishing an evaluation program is that people in the organization often feel they don’t have the time or the resources to devote to evaluation. They are too busy delivering succession planning and leadership development programs to reflect on whether what they’re doing is working. So, I recommend enlisting the help of program participants. Participative processes, such as empowerment evaluation, increase the likelihood that evaluation will happen, because users who are actively involved are more likely to understand the process and feel ownership. Furthermore, you can kill two birds with one stone by achieving program outcomes while facilitating data collection and analysis.

Creating flexibility in the evaluation process might also help to increase participation. For instance, I often work with organizations to develop a small “menu” from which users can select indicators for evaluation. This allows stakeholders to establish measurements that reflect their concerns, whereas an exhaustive list of indicators may be perceived as cumbersome and unrealistic in terms of data collection. An evaluation process that’s both flexible and participative will help to accommodate the many different contexts, goals and outcomes within the organization, and facilitate learning.

Stakeholders must be engaged in the monitoring and evaluation process from the beginning and throughout the life of the program to ensure indicators measure what is important to the organization, rather than focusing only on what is easily measured. Without clear, timely, accurate, and visible indicators, stakeholders will struggle to work toward the program’s goals, because they won’t have a clear understanding of what impact activities, outputs, and outcomes are having in building a leadership pipeline. Active participation ensures that assessment is rooted in the direct experiences of the organization and grounded in the organization’s vision, values, goals, and objectives.

Be sure to check out our other Succession Planning blog posts in this series:

What’s so important about Succession Planning? 

The 7 principles for successful Succession Planning

Aligning Succession Planning programs with the organization’s strategy

Combine Succession Planning and Leadership Development

Include all levels of the organization

Provide opportunities for practice, feedback, and reflection

Promote Openness and Transparency

Develop Simple, Flexible, and Decentralized Processes and Tools

Does the speed at which technology is changing and the pace of our modern world mean it’s okay to be a sloppy writer?

I’ve been thinking about this question lately, and a book I was reading on my Kindle prompted it all. I’m a voracious reader, and my choices run the gamut from serious tomes to what I call “candy” reading. I was recently reading a book that admittedly leaned toward the latter category, one I’d chosen because it was receiving some acclaim for huge sales and a strong following even though it was only available for e-reader devices.

After the first couple of chapters it didn’t seem that promising, but I pressed on to see if it would improve. In the end I put it down around the halfway point. It wasn’t due to the quality of the story – the reason I had to stop reading was that the number of typos, grammatical mistakes and punctuation errors was so distracting that I could no longer focus on what was happening in the story. I was amazed at this sloppy writing, and thought that I couldn’t be the only person to notice – we’re not talking about an occasional misplaced apostrophe; this was rampant. Scanning the Amazon reviews, I saw only a couple of comments about the editing amongst hundreds of four- and five-star reviews. I started wondering if people generally thought this was acceptable because it’s an e-book, but then I remembered a series from a well-established author and a major publishing house that started to drive me crazy because of the mistakes in the printed versions.

It’s all around us, too – just look at your local newspaper’s headlines and I bet you’ll find something, and prepare to have a good laugh if you read the real estate listings! Regardless of whether these things bother me personally, I’m interested in whether it’s a sign that people in general just don't care so much any more. Is it more important to get the latest information as quickly as possible on the device that’s most convenient? I think the general population would say for some types of information, like urgent news, yes. Could acceptance elsewhere be an evolution of language? I guess that part will only be revealed through time.

And how does this all relate back to learning? Over my years in the learning profession I’ve really come to appreciate the input of a good editor, and it’s something we like to factor into every project at Limestone. No matter how good the instructional designers are, the value of fresh eyes on course materials, a report, or other words we use is that they see what you have missed because you’re so absorbed in the content. The editor is often unfamiliar with the subject matter as well, so they can provide useful feedback on the clarity of communication.

Timelines are always tight, though, and the day or two it takes to turn around the edit could be really valuable elsewhere, or mean an earlier delivery. When people seem to have so much tolerance for a lack of editing, could we cut it?

Talking to colleagues and associates has affirmed my answer of no. Learning materials do need to be held to a higher standard. People will excuse the occasional typo, but ongoing errors reflect badly on the content itself, leading people to think it’s of poor quality and possibly inaccurate. Also, mistakes can lead to confusion and misinterpretation, which does not support a positive learning experience! There are places where it’s okay to relax standards, but when you really need to get the message across and the face time with your audience is limited – whether onscreen or in a classroom – editing your material thoroughly is worth the effort.

In writing this post I decided to take a look back at the page for that e-book on Amazon. I see that an announcement was made a couple of weeks ago that the book is “NOW PROFESSIONALLY EDITED”. Maybe there’s hope for editing yet!