How to Use Brinkerhoff’s Success Case Method in Workplace Learning

There are a lot of different frameworks that organisations use to evaluate the impact of their workplace learning initiatives. The Kirkpatrick model and the Phillips ROI model are perhaps the most common ones. While Brinkerhoff’s Success Case Method is less well known, it too can provide value when used correctly. In this post, we’ve compiled a quick overview of the method and how to use it to support L&D decisions in your organisation.

What is Brinkerhoff’s Success Case Method?

The method is the brainchild of Dr. Robert Brinkerhoff. While many of its original applications relate to organisational learning and human resources development, the method is applicable to a variety of business situations. The aim is to understand impact by answering the following four questions:

  • What’s really happening?
  • What results, if any, is the program helping to produce?
  • What is the value of the results?
  • How could the initiative be improved?

As you may guess from the questions, the Success Case Method focuses on qualitative analysis and on learning from both successes and failures at the program level to improve for the future. On one hand, you’ll be answering what enabled the most successful learners to succeed; on the other, what prevented the least successful from getting there.

How to use the Brinkerhoff Method in L&D?

As mentioned, the focus of the method is on qualitative analysis. Therefore, instead of using large-scale analytics, the process involves surveys and individual learner interviews. By design, the method is not concerned with measuring “averages” either. Rather, the aim is to learn from the most resounding successes and the worst performances, and then either replicate or redesign based on that information.

So ideally, you’ll want to find just a handful of individuals from both ends of the spectrum. Well-designed assessments or learning analytics can naturally help you identify those individuals. When interviewing people, you’ll want to make sure that their view of what’s really happening can be backed with evidence. It’s important to keep in mind that not every interview will produce a “success case”, one reason being a lack of evidence. After all, you’ll be using the information derived with this method to support your decision-making, so you want good information.
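To make the shortlisting concrete, here’s a minimal sketch in Python. It assumes a hypothetical CSV export of assessment scores (the file name, column names and cut-off are ours for illustration, not part of the method):

```python
import csv

# Hypothetical export of assessment results with columns "learner_id" and "score"
with open("assessment_scores.csv", newline="") as f:
    rows = [(row["learner_id"], float(row["score"])) for row in csv.DictReader(f)]

# Sort by score so the extremes sit at both ends of the list
rows.sort(key=lambda r: r[1])

n = 5  # a handful of learners from each end of the spectrum
least_successful = [learner for learner, _ in rows[:n]]
most_successful = [learner for learner, _ in rows[-n:]]

print("Interview candidates (least successful):", least_successful)
print("Interview candidates (most successful):", most_successful)
```

The point is not the code itself but the selection logic: you interview the extremes, not the average.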

Once you’ve established the evidence, you can start looking at results. How are people applying what they’ve newly learnt? What kind of results are they seeing? This phase requires great openness. Every kind of outcome is valuable for the analysis, and they are not always the outcomes you expected when creating the program. Training activities often have unintended application opportunities that only the people on the job can see.

When should you consider using Brinkerhoff’s Success Case Method?

It’s important to acknowledge that while the method doesn’t work for everything, there are probably more potential use cases than we can list. But the following situations are ones that, in our experience, benefit from this kind of qualitative analysis.

  • When introducing a new learning initiative or a pilot. It’s always good to understand early on where a particular learning activity might be successful and where it might not. This lets you make changes, improvements and even pivots early on.
  • When time is of the essence. Quantitative data and insights take time to compile (even if you already have the necessary infrastructure in place). Sometimes we need to prove impact fast. In such cases, using the Brinkerhoff method to extract stories from real learners helps to communicate impact.
  • Whenever you want to understand the impact of existing programs on a deeper level. You may already be collecting a lot of data. Perhaps you’re already using statistical methods and tools to illustrate impact on a larger scale. However, because correlation doesn’t imply causation, it’s sometimes important to engage in qualitative analysis as well.

Final thoughts

Overall, Brinkerhoff’s Success Case Method is a good addition to any L&D professional’s toolbox. It’s a great tool for extracting stories of impact, passing them on and learning from past successes and failures. But naturally, there should be other tools in the toolbox too. Quantitative analysis is equally important, and should be “played” in unison with the qualitative. Especially nowadays, when the L&D function is getting increased access to powerful analytics, it’s important to keep exploring beyond the surface level to make decisions that are as informed as possible in support of the business.

If you are struggling to capture or demonstrate the impact of your learning initiatives, or if you’d like to start doing L&D in a more agile manner, let us know. We can help you implement agile learning design methods as well as analytical tools and processes to support the business.


How to Write Good eLearning Questions?

Wherever there’s learning, we often need some kind of assessment. While learning analytics have evolved considerably over the past few years, the easiest way to try to capture learning is often by asking questions. However, it’s good to keep in mind other formative assessment methods that might be better at evaluating long-term learning outcomes. Regardless, there’s a craft to asking questions as well. Naturally, you’ll want to be sure that you’re evaluating learning, not just the ability to regurgitate facts or recall statistics. Thus, we put together a quick guide on how to write good eLearning questions. Here you go!

1. Align your questions with the learning objectives

Whenever you’re writing questions, you should keep in mind what the learning objectives of the activity are. When going through subject matter and material, it’s easy to pick out certain things (especially facts, figures and numbers) in the hope that they would make good questions. However, these questions often don’t go beyond the trivial level, and thus don’t support the learning goals either. Overall, we should focus on the use of knowledge rather than the ability to recall content. Hence, you should focus on writing eLearning questions that require understanding the concepts and ideas, as well as their practical applications.

2. Use a variety of question types

Simple multiple or single choice questions are probably the most used ones. However, there’s no reason to limit yourself to those. Question types like drag-and-drop, fill-in-the-blanks, sorting activities and open-ended questions all work well and are easy to execute. The added variety has two benefits. Firstly, it can help with engagement. Instead of mindlessly clicking through alternatives, learners have to focus on the question type first, and then the content. When you get people to focus, they are more careful, which means you’ll get better answers. Secondly, using multiple different eLearning question types enables you to ask about the same thing from different perspectives and in different ways. This helps you understand whether the learners have truly grasped the concept or are just working with surface-level knowledge.

3. Keep the questions clear and concise, and avoid negative phrasing

The aim of assessment should naturally be to test whether someone has understood your content. Now, if your learners already have trouble understanding the questions, you’ll just make everyone frustrated. The learners struggle to answer, and you can’t be sure whether it was the content or the question that wasn’t understood. So, keep your eLearning questions clear and concise. Avoid ambiguity, “circling around” and unnecessary detail, and be direct.

Also, you should try to avoid negative phrasing of questions wherever possible. Studies show that negatively phrased questions are more difficult to understand and thus result in more frequent mistakes.

4. Provide valid answer options without free clues

This is probably the part where it’s easiest to cut corners when you’re under time pressure. When designing the alternatives that the learner is supposed to pick from (in e.g. a multiple choice question), you’ll naturally already have the question and the right answer ready. It’s tempting to just come up with random options for the wrong answers, which are also referred to as distractors. But you really shouldn’t do that.

Good assessment tries to eliminate the possibility of guessing. We often say that “it’s not the correct but the incorrect answers that determine real knowledge”. By providing “bad” alternatives or silly distractors, you’re effectively making it a whole lot easier to pick the right answer from the rest. So, ensure that all the options could at least seem plausible to someone who hasn’t learned the topic. Also, make sure that all your alternatives are roughly the same length and use the same phrasing. We human beings instinctively look for visual cues when trying to solve problems. By keeping things uniform, you’re not giving away free clues.
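If you want a quick sanity check for that last point, here’s a small, purely illustrative Python sketch (the function name and threshold are ours, not part of any authoring tool) that flags questions where one option stands out by length:

```python
def has_length_cue(options, max_ratio=1.5):
    """Flag a set of answer options where one option stands out by length.

    A noticeably longer or shorter option can act as a visual cue, so this
    rough check warns when the longest option exceeds max_ratio times the
    length of the shortest one (measured in words).
    """
    lengths = [len(option.split()) for option in options]
    return max(lengths) > max_ratio * max(min(lengths), 1)

# Hypothetical multiple-choice options for review
options = [
    "Because correlation does not imply causation",
    "Because the sample size was too small",
    "Because it rained",
    "Because the survey used a five-point rating scale for every single question",
]
print(has_length_cue(options))  # True: the last option gives itself away
```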

Final words

Overall, writing good eLearning questions is not rocket science by any measure. A good rule of thumb that encapsulates much of the above would be “keep it clear and don’t try to trick the learner”. It’s very easy to sabotage your own “data set” by asking silly questions, but that only comes back to haunt you as an L&D professional, as you won’t get an accurate picture of the knowledge and skill levels in your organisation. So, the next time you’re designing an eLearning quiz, keep these four points in mind!


5 Quick Tips on Giving Effective Learning Feedback

Feedback is an integral part of any learning process, whether instructor-led or self-paced. With effective learning feedback, you can increase engagement, motivation and growth in your learners. With the plethora of digital tools available today for seamlessly giving feedback, there’s no excuse for not doing so. Furthermore, feedback is not difficult to incorporate into eLearning courses either. Most content authoring software comes with easy tools for feedback. Also, modern digital learning environments increasingly support creative forms of feedback, such as gamification. However, even with all these tools, it’s important to remember what constitutes good learning feedback. Here are 5 quick tips.

1. Feedback needs to be continuous, but not interfering

Ideally, every learning activity, whether a video, storyboard or a classroom session, should have feedback. Continuity in giving learning feedback helps to guide the learning process. However, you should give feedback at natural milestones, such as the end of an activity. If you start giving feedback midway, you risk interfering with the employee’s learning flow.

2. Learning feedback must be about the activity and performance

Naturally, when giving feedback, you should focus on the activity and performance, not the learner as an individual. This is more of a problem in instructor-led sessions, where the instructor may fall subject to attribution bias. Understand that everyone can improve through effort, and performance improvement is what matters.

3. Use Effort Praise in your learning feedback

Effort praise vs. intelligence praise is a Growth Mindset concept. By verbally structuring your feedback around effort (e.g. “You worked hard, but it wasn’t quite enough yet. Could you find another way to do this?”) instead of intelligence (e.g. “Perfect. You’re the best in the group”), you are developing a mindset that embraces challenges and risk and is creative and innovative.

4. Provide reasoning and guidance, not only scoring

When designing learning feedback loops, it’s important to explain the reasoning for a particular type of feedback. Instead of just telling the learner whether they got it right or not, explain why. Why was the answer wrong? Why was the solution to the problem not appropriate? In fact, it’s often good to explain even why the answer was right! From the reasoning, you can also move forward to guiding the learners to try again with a different approach.
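As a simple illustration of reasoning and guidance alongside the score, here’s a minimal sketch of a feedback object; the structure and field names are made up for this example and not tied to any specific authoring tool:

```python
# Illustrative feedback for one answer: score plus reasoning plus next step
feedback = {
    "correct": False,
    "score": 0,
    "reasoning": (
        "Option B only restates the definition; the scenario asked you to "
        "apply the concept to a new customer case."
    ),
    "guidance": "Revisit the worked example in the previous module and try a different approach.",
}

# Show the learner the why, not just the what
print("Correct!" if feedback["correct"] else "Not quite.")
print(feedback["reasoning"])
print(feedback["guidance"])
```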

5. Embrace making mistakes

Embracing mistakes is another important concept from the realm of developing a growth mindset. Mistakes are a natural part of learning, and we learn through them. Hence, you shouldn’t punish your learners for making mistakes. Learning activities should be a risk-free platform where they can make those mistakes. Furthermore, consider that others in the organisation may learn from someone else’s mistakes too – so share them!

Are you supporting your learners through adequate feedback in classroom sessions as well as eLearning? If you need help getting started, just drop us a note.


Digital Training Evaluation – Using the Kirkpatrick Model and Learning Analytics

Ask an L&D professional about how they measure training effectiveness and learning. The likely answer is that they are using the Kirkpatrick 4-level evaluation model. The model has been a staple in the L&D professionals’ toolbox for a long time. However, if you dig deeper, you’ll find that many organisations are only able to assess levels 1 & 2 of the model. While these levels do constitute valuable information, they help very little in determining the true ROI of learning. Luckily, thanks to technological development, we nowadays have the capability to do digital training evaluation on all 4 levels. And here are some best practices on how to do it.

Level 1: Reaction – Use quick feedback and rating tools to monitor engagement

The first level of Kirkpatrick is very easy to implement across all learning activities. You should use digital tools to collect quick feedback on all activities. That can be in the form of likes, star ratings, scoring or Likert scales. Three questions should be enough to cover the ground.

  1. How did you like the training?
  2. How would you rate the value-add of the training?
  3. Was the training relevant to your job?

Generally, scale- or rating-based feedback works best for level 1. Verbal feedback requires too much effort to analyse effectively.
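For instance, here is a minimal sketch of how those three ratings could be aggregated (the field names and numbers are hypothetical):

```python
from statistics import mean

# Hypothetical level-1 responses on a 1-5 scale, one dict per learner
responses = [
    {"liked": 4, "value_add": 3, "relevance": 5},
    {"liked": 5, "value_add": 4, "relevance": 4},
    {"liked": 3, "value_add": 3, "relevance": 2},
]

# Average each question across learners to monitor reaction over time
for question in ("liked", "value_add", "relevance"):
    print(question, round(mean(r[question] for r in responses), 2))
```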

Level 2: Learning – Use digital training evaluation to get multiple data points

For level 2, it all starts with the learning objectives. Learning objectives should be very specific and tied to specific business outcomes (we’ll explain why in level 4). Once you have defined them, it’s relatively easy to build assessment around them. Naturally, we are measuring the increase in knowledge rather than just the knowledge itself. Therefore, it is vital to record at least two data points throughout the learning journey. A handy way to go about this is to design pre-learning and post-learning assessments. The former captures the knowledge and skill level of the employee before starting the training. Comparing that with the latter, we can comfortably identify the increase in knowledge. You can easily do this kind of assessment with interactive quizzes and short tests.

“If you’re measuring only once, it’s almost as good as not measuring at all”
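As a rough sketch of that pre/post comparison (the names and scores below are made up for illustration):

```python
# Hypothetical pre- and post-training assessment scores (0-100) per learner
pre_scores = {"anna": 55, "ben": 70, "carla": 40}
post_scores = {"anna": 80, "ben": 75, "carla": 65}

# The level-2 signal is the change in knowledge, not a single score
gains = {name: post_scores[name] - pre_scores[name] for name in pre_scores}
average_gain = sum(gains.values()) / len(gains)

print(gains)                   # {'anna': 25, 'ben': 5, 'carla': 25}
print(round(average_gain, 1))  # 18.3 points of average improvement
```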

Level 3: Behaviour – Confirm behavioural change through data and analytics

Level 3, measuring behaviour, delves into somewhat uncharted territory. There are a couple of different angles for digital training evaluation here.

First, you could engage the learners in self-assessment. Since self-assessment is often highly biased, two questions should be enough. If no behavioural change is reported, the second question captures the reason behind it, and L&D can intervene accordingly.

  1. Have you applied the skills learnt? (linking to specific learning, can be a yes/no question)
  2. If not, why not?

Secondly, since self-assessment is often highly biased, it’s not necessarily meaningful to collect more data directly from the learners themselves. However, to really get factual insight into level 3, you should be using data and analytics. On the business level, we record a lot of data on a daily basis. Just think about all the information that is collected or fed into the systems we use every day. Thus, you should combine the data from these systems with the self-assessment to get confirmed insight into the reported behavioural change. For instance, a salesperson could see an increase in calls made after the training. A marketing person could see an increase in the number of social media posts they put out. The organisation already has all the necessary data – it’s just a matter of tapping into it.
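Here is a minimal sketch of that kind of confirmation, using hypothetical weekly call counts (the numbers and the idea of a CRM export are illustrative assumptions):

```python
from statistics import mean

# Hypothetical weekly call counts for one salesperson around the training date
calls_before = [18, 21, 17, 20]   # four weeks before the training
calls_after = [26, 24, 28, 27]    # four weeks after the training

change = mean(calls_after) - mean(calls_before)
print(f"Average weekly calls: {mean(calls_before)} -> {mean(calls_after)}")
print(f"Change after training: {change:+.2f} calls per week")
```

Paired with the learner’s own “yes, I have applied the skills” answer, a shift like this gives far stronger evidence of behavioural change than either source alone.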

Level 4: Results – Combining Learning Analytics and Business Analytics

Finally, level 4 evaluation is the pot of gold for L&D professionals. This is where you link the learning to business performance and demonstrate ROI through business impact. With modern digital training evaluation, you can eliminate the guesswork and deliver facts.

Note that the evaluation levels are not standalone: level 4 is linked to levels 2 and 3. If there was no increase in knowledge, or behavioural change did not happen, there’s no business impact. You might see a positive change in results, but you should not mistake it for the product of learning if the previous levels have not checked out. Once levels 2 and 3 have come out positive, you can look at the bigger picture.

Firstly, you should look back at the learning objectives, especially the business outcomes they were tied to. If your aim with the sales training was to increase the number of calls made, it’s important to look at what happened to that specific metric. If you see a change, then you can look at the business outcomes. How much additional revenue would those extra sales calls produce? The results can also be changes in production, costs, customer satisfaction, employee engagement and so on. In any business, you should be able to assign a dollar value to most, if not all, of these metrics. Once you have the dollar value, it’s simple math to figure out the ROI.
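For illustration, that “simple math” with made-up numbers (the classic calculation: net benefit divided by programme cost):

```python
# Illustrative ROI calculation with hypothetical figures
programme_cost = 20_000   # design, delivery and learner time
added_value = 50_000      # dollar value attributed to the extra sales calls

net_benefit = added_value - programme_cost
roi_percent = net_benefit / programme_cost * 100

print(f"ROI: {roi_percent:.0f}%")  # (50,000 - 20,000) / 20,000 = 150%
```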

All in all, there’s really no excuse for not dealing with levels 3 and 4 of Kirkpatrick. You can manage digital training evaluation and learning analytics even on a limited budget. It’s just a matter of embracing data and the benefits of data-driven decision making.

Want to start evaluating your learning on all levels? Click here to start.