Knowledge Assessment in Corporate Learning – 5 Methods

Whenever we do training, it’s generally a good idea to include some kind of assessment. Proper knowledge assessment enables organisations to track employee development and analyse instructional efficacy. While it’s important to go beyond this level of assessment to capture real organisational impact, it’s vital to get the basics right. A common challenge in corporate learning is that assessment is often too immediate, too intimidating and ultimately ineffective. Here are 5 methods that not only address those problems, but can also make testing more fun!

Continuous assessment with low-key quizzes

One of the challenges of assessment is that it’s often only administered after the fact. However, good evaluation should be continuous. Therefore, instead of saving the quizzes and tests until the end of the course or activity, distribute them throughout. This also helps you as the evaluator to spot learning challenges early and intervene accordingly. Furthermore, instead of a daunting battery of never-ending questions, use small sets of questions embedded in the content. This makes the whole thing more approachable, as continuous questioning feels more like exercises than formal testing.

Constant tracking of activities

Another, less quiz-focused way of assessing knowledge is seamless tracking. The idea is to use comprehensive data collection tools, such as xAPI, to continuously collect engagement data on digital learning experiences. Formal testing is replaced by benchmark measures for learner inputs and outputs, against which the analytics track each learner. For instance, those who watch a training video to the end receive a “higher score” than those who don’t. Alternatively, those who contribute reflections about the learning on the organisation’s social learning platforms receive higher marks than the rest. These are just a few examples, but the goal is to make evaluation as seamless and automatic as possible.
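To make the xAPI idea concrete, here is a minimal sketch (in Python) of recording a video-viewing event as an xAPI statement and posting it to a Learning Record Store. The LRS endpoint, credentials, activity ID and the watched-ratio extension key are placeholder assumptions, and the “watched in full = completed” rule is just one possible benchmark.

```python
import requests  # assumes the 'requests' package is available

LRS_ENDPOINT = "https://lrs.example.com/xapi/statements"  # placeholder LRS URL
LRS_AUTH = ("lrs_user", "lrs_password")                   # placeholder credentials

def build_video_statement(email, name, video_id, watched_ratio):
    """Build a basic xAPI statement; 'completed' only if the whole video was watched."""
    verb = ("http://adlnet.gov/expapi/verbs/completed"
            if watched_ratio >= 1.0
            else "http://adlnet.gov/expapi/verbs/experienced")
    return {
        "actor": {"mbox": f"mailto:{email}", "name": name},
        "verb": {"id": verb},
        "object": {"id": video_id, "objectType": "Activity"},
        "result": {"extensions": {
            # hypothetical extension key for the share of the video watched
            "https://example.com/xapi/extensions/watched-ratio": watched_ratio
        }},
    }

statement = build_video_statement(
    "jane.doe@example.com", "Jane Doe",
    "https://example.com/activities/onboarding-video-01", 1.0)

# Post the statement to the LRS; the version header is required by the xAPI spec.
response = requests.post(LRS_ENDPOINT, json=statement, auth=LRS_AUTH,
                         headers={"X-Experience-API-Version": "1.0.3"})
print(response.status_code)
```

On the analytics side, statements like these can then be aggregated per learner and compared against whichever benchmarks you have chosen.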

Scenario-based simulations as knowledge assessment tools

Training simulations are not only good for practising real-life scenarios; they can also be used as highly practice-oriented assessment. This form of evaluation models the real-life situations and application contexts of the content. Therefore, instead of just answering abstract questions, learners apply the knowledge in a virtual environment. Depending on the training topic, you can assess multiple variables, e.g. speed, accuracy and confidence. The great thing about these simulations is that they can also make learners more confident about applying the skills in the real job environment, as they’ve got some practice under their belts.

Social analytics for social learners

If you’ve already implemented social learning tools in your organisation, there’s an interesting alternative to conventional quizzing. Relying on the notion that reflection is one of the most important parts of learning, social analytics can help us analyse interactions and provide a novel way of assessing knowledge. If you’ve implemented e.g. discussion boards, you could use analytics tools to evaluate learners based on the quantity and quality of discussion they bring in. For instance, simple counters can capture the number of comments by a particular learner. Similarly, other algorithms can estimate the quality of those comments – whether they contribute to the discussion or not. If you already have a good learning culture, this could present an interesting alternative to some forms of assessment.
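As a toy illustration of the idea, the sketch below counts comments per learner and applies a deliberately crude quality heuristic (whether a comment exceeds a minimum length). The data shape is an assumption about a hypothetical discussion-board export, and a real quality model would of course use much richer signals.

```python
from collections import defaultdict

# Assumed export format from a discussion board: (learner_id, comment_text) pairs.
comments = [
    ("anna",  "I tried this with my team and the handover checklist caught two gaps."),
    ("ben",   "Great post!"),
    ("anna",  "One caveat: the process breaks down when two regions share a client."),
    ("carla", "Agreed."),
]

MIN_SUBSTANTIVE_LENGTH = 40  # crude stand-in for a real comment-quality model

def score_discussion(comments):
    """Return per-learner comment counts and a naive 'substantive comment' count."""
    quantity = defaultdict(int)
    quality = defaultdict(int)
    for learner, text in comments:
        quantity[learner] += 1
        if len(text) >= MIN_SUBSTANTIVE_LENGTH:
            quality[learner] += 1
    return quantity, quality

quantity, quality = score_discussion(comments)
for learner in quantity:
    print(f"{learner}: {quantity[learner]} comments, {quality[learner]} substantive")
```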

Before-, after- and long-after quizzes

Finally, if nothing else, you should at least provide a knowledge assessment opportunity before and after a learning activity. This helps you gain insight into the development that happens. Furthermore, pre-tests can also serve as valuable data sources that instructors and designers can use to personalise the learning content. An interesting addition, however, would be “long-after quizzes”. The problem with most post-training tests is that they’re too immediate. They tend to capture short-term recall rather than real learning. As the forgetting curve tells us, people tend to forget a lot over time. Therefore, introducing quizzes some time after the training serves the meaningful purpose of capturing how much knowledge really stuck.
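As a minimal sketch of the before/after/long-after idea, assuming all three quizzes are scored as percentages on comparable content, you could report both the immediate gain and how much of it survived until the delayed quiz:

```python
def assessment_summary(pre: float, post: float, delayed: float) -> dict:
    """Compare immediate gain with long-term retention.

    All scores are assumed to be percentages on the same quiz (or equivalent forms).
    """
    immediate_gain = post - pre
    retained_gain = delayed - pre
    # Share of the immediate gain still present at the delayed quiz.
    retention_rate = retained_gain / immediate_gain if immediate_gain > 0 else 0.0
    return {
        "immediate_gain": immediate_gain,
        "retained_gain": retained_gain,
        "retention_rate": round(retention_rate, 2),
    }

# Example: 40% before training, 90% right after, 75% three months later.
print(assessment_summary(pre=40, post=90, delayed=75))
# -> {'immediate_gain': 50, 'retained_gain': 35, 'retention_rate': 0.7}
```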

Final words

Overall, good assessment is an art form of sorts. There’s no single right answer to what works best. As long as you’re working towards more formative assessment, you’re on the right track. Getting the basics right by asking good eLearning questions also helps a lot. However, this kind of knowledge assessment is only the beginning. We still need to understand how learning translates into action, and how action translates into performance. And it’s the latter two that pose the real challenge in corporate learning. If you need help solving those challenges, or just building better corporate learning assessment, we’re happy to help. Just drop us a note here and tell us about your challenge.

Compliance Training – Is There a Smarter Way to Do It?

With increasing regulation and complexity, compliance training is something that many companies must conduct. While the intention of regulators and compliance enforcers may be good, the practice needs improvement. The usual ways of doing the training don’t produce value beyond ticking a few boxes. Employees generally dislike it, and real learning is a rare occurrence – it’s just a matter of getting it over with in the least amount of time possible!

But does this work? Sure, as long as you get your “completions” and “passes”, you can show that you’ve covered your own behind. But wouldn’t there be value in educating people in a proactive way that could perhaps reduce risky behaviour in the first place? Or if that sounds alien, how about not having to force employees to sit through the same material year after year? Let’s explore two small things we could do to make compliance training work just a little bit better.

Proving knowledge through mastery vs. a few correctly guessed questions

Arguably, the usual ways of conducting compliance training have very little actual learning value. The compliance training just acts as a tool to shift blame: you’ve “trained” the individual, so you can wash your hands of the matter. Yet the actual risks of harmful acts are not necessarily materially reduced, and they do still materialise. And as an organisation you’ll be on the hook – both financially and reputationally. Wouldn’t it make sense to be a bit more proactive and try to reduce risky behaviours through learning in the first place?

Another problem is that the way compliance training is often assessed is quite limited. You’ll have your course, followed by a test that pulls its questions from a larger question bank. So even with a 100% score, there’s still a lot that a learner could potentially not know. Additionally, it’s worth acknowledging that many learners just skip through the material and guess answers until they get a passing mark.

So, what if the learners actually learned the concepts and proved it through a mastery-based approach? In a mastery-based approach, you’re essentially testing everything, from multiple angles and at different points in time. Learners reach mastery when they can consistently answer correctly and confidently, without guessing or cheating – both of which algorithms can detect. At that point, you can also be fairly confident that they’ve learned what they had to.
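What such a mastery rule might look like in code is sketched below. The thresholds, the confidence signal and the “streak of recent attempts” criterion are illustrative assumptions, not a definitive implementation.

```python
from dataclasses import dataclass

@dataclass
class Attempt:
    correct: bool
    confidence: float  # self-reported or inferred confidence, 0.0 to 1.0

REQUIRED_STREAK = 5         # consecutive correct, confident answers needed
CONFIDENCE_THRESHOLD = 0.8  # below this, a correct answer may just be a guess

def has_mastered(attempts: list[Attempt]) -> bool:
    """A concept counts as mastered once the most recent attempts form a long
    enough streak of correct answers given with high confidence."""
    streak = 0
    for attempt in reversed(attempts):  # walk backwards from the latest attempt
        if attempt.correct and attempt.confidence >= CONFIDENCE_THRESHOLD:
            streak += 1
            if streak >= REQUIRED_STREAK:
                return True
        else:
            break  # streak broken by a miss or a likely guess
    return False

history = [Attempt(True, 0.9), Attempt(False, 0.4),
           Attempt(True, 0.85), Attempt(True, 0.9), Attempt(True, 0.95),
           Attempt(True, 0.9), Attempt(True, 0.88)]
print(has_mastered(history))  # True: the last five attempts are correct and confident
```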

In practice, such an approach doesn’t have to be a burdensome one either. By switching some of the focus from content to testing and instant feedback, you can also keep the required time investment in check. Furthermore, the learners can keep developing their mastery in short bursts over time, instead of having to spend a lot of time at once. Consequently, this also improves the learning results.

Enabling employees to test out of material

Even if you don’t buy the value of a more proactive approach just yet, you’ll probably agree that the time spent on compliance training is time away from productive work. Naturally, we’ll want to keep that time to a minimum.

As mentioned, we often tend to build compliance training so that learners go through material and then test themselves. However, this kind of approach wastes a lot of time. It doesn’t really take learners’ existing knowledge into account, and it forces them through mundane tasks. Consequently, the learners will look for ways to minimise their time investment and start skipping through. Hence, it’s easy for updates and revisions to go unnoticed – no one really engages.

So, at the very least, it would probably make sense to do this the other way around. Why not test the learners before letting them into the material? If they score high enough, exempt them from the compliance training altogether – they already know the stuff. This saves their time – time which makes you money.

Final words

All in all, the usual ways of doing compliance training are not particularly smart. If we want to see real learning impact, we have to move away from the prevalent tick-the-box culture. Mastery-based approaches, or simply getting practical and eliminating useless training, could be steps in the right direction. If you’d like to explore those steps further and find better ways of doing things, feel free to initiate a discussion with us. We rarely do anything related to compliance training, given the sheer lack of imagination and ambition in the field, but we do entertain interesting ideas.

Kaufman’s Learning Evaluation Model – Quick Overview

The field of corporate learning has a lot of different frameworks for evaluation. While not all of them are good or even necessary, some still provide useful points of consideration and models for organising information. For instance, last week we took a look at the Success Case Method, which works best for capturing qualitative insights. This week, we decided to take a quick look at Kaufman’s learning evaluation model and see whether it still provides valid contributions.

Kaufman’s Learning Evaluation Model briefly explained

Instead of providing an entirely new framework, Kaufman’s model aims to improve the commonly used Kirkpatrick 4 levels. The allegedly improved version introduces some additional considerations, essentially by dividing Kirkpatrick’s level 1 into two and adding a fifth level. The levels, and the respective questions and considerations for modern L&D professionals, go as follows:

  1. Input – what kind of resources and learning materials do we have at our disposal that we can use to support the learning experience?
  2. Process – how’s the delivery of the learning experience? Is it accepted? How are people responding to it?
  3. Micro level results – Did the learner or the learning group acquire the knowledge? Did they apply it in their jobs?
  4. Macro level results – Did performance improve due to this learning and the application of new knowledge in the workplace? What kind of benefits arose from the learning on an organisational level?
  5. Mega level impact – What kind of impact did the learning have on society or larger external stakeholder groups?

Reflection on the Kaufman model

As the original author proposed the model as an improvement over Kirkpatrick’s, we’ll make the comparison accordingly. The separation of input and process might be a good one to make. Nowadays, we have access to vast pools of digital resources, both in the public domain and sitting in corporate information systems. There are a lot of situations where organisations could leverage this information and these resources. For instance, curation-based learning content strategies might make more sense for some organisations. Hence, introducing inputs as a separate consideration might be a helpful framework-level change for some.

Conversely, Kaufman groups Kirkpatrick’s levels 2 and 3 together. While these are largely semantic changes, it’s within this area that many organisations face their L&D challenges. Often, learning itself is not the problem, and people may retain newly learnt material quite well. The problem comes in application, or learning transfer, as people fail to use the new skills or practices back in their daily jobs. Consequently, that’s something modern L&D professionals should also focus more on.

Finally, Kaufman’s learning evaluation model introduces the “mega level”, or societal impact. While it may be a valid consideration for a select few, this impact would presumably go hand in hand with the business results analysed at the “macro level”. And if not, we still run into the immense difficulty of evaluating impact on external entities.

What’s in it for the L&D professional?

Like with any of the prevalent frameworks or models of evaluating learning at the workplace, it’s important not to take things too seriously. These models do provide a good basis for structuring one’s approach to evaluation, but L&D professionals should still adjust them to fit the context of their particular organisation. It’s also noteworthy that all these models were built on the conception of formal learning. Hence they may fail to address some more informal workplace learning. Regardless, the key takeaway from Kaufman’s learning evaluation model could be the notion of existing resources that can contribute to learning experiences. It’s not always necessary to reinvent the wheel after all!

If you’re looking for new ways of evaluating learning, especially learning transfer or business impact, drop us a note. We’d be happy to help you co-engineer evaluation methods that can actually demonstrate L&D’s value to the business.

How to Use Brinkerhoff’s Success Case Method in Workplace Learning

There are a lot of different frameworks that organisations use to evaluate the impact of their workplace learning initiatives. The Kirkpatrick model and the Phillips ROI model may be the most common ones. While Brinkerhoff’s Success Case Method is perhaps less known, it too can provide value when used correctly. In this post, we’ve compiled a quick overview of the method and how to use it to support L&D decisions in your organisation.

What’s Brinkerhoff’s Success Case Method?

The method is the brainchild of Dr. Robert Brinkerhoff. While many of its original applications relate to organisational learning and human resources development, the method is applicable to a variety of business situations. The aim is to understand impact by answering the following four questions:

  • What’s really happening?
  • What results, if any, is the program helping to produce?
  • What is the value of the results?
  • How could the initiative be improved?

As you may guess from the questions, the Success Case Method focuses on qualitative analysis and on learning from both successes and failures at a program level in order to improve for the future. On the one hand, you’ll be answering what enabled the most successful to succeed; on the other, what barred the worst performers from being successful.

How to use the Brinkerhoff Method in L&D?

As mentioned, the focus of the method is on qualitative analysis. Therefore, instead of using large-scale analytics, the process involves surveys and individual learner interviews. By design, the method is not concerned with measuring “averages” either. Rather, the aim is to learn from the most resounding successes and the worst performances, and then either replicate or redesign based on that information.

So ideally, you’ll want to find just a handful of individuals from both ends of the spectrum. Well-designed assessment or learning analytics can naturally help you in identifying those individuals. When interviewing people, you’ll want to make sure that their view on what’s really happening can be backed with evidence. It’s important to keep in mind that not every interview will produce a “success case”, one reason being the lack of evidence. After all, you are going to be using the information derived with this method to support your decision making, so you’ll want to get good information.
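If you do have assessment or analytics scores at hand, shortlisting those extreme cases can be as simple as sorting. Here is a minimal sketch, assuming a list of (learner, score) pairs:

```python
def pick_interview_candidates(scores, n=3):
    """Return the top-n and bottom-n learners by score as interview candidates.

    `scores` is assumed to be a list of (learner_id, score) pairs, e.g. from
    post-training assessment or application metrics.
    """
    ranked = sorted(scores, key=lambda pair: pair[1])
    return {"lowest": ranked[:n], "highest": ranked[-n:]}

scores = [("anna", 92), ("ben", 48), ("carla", 77), ("dev", 95),
          ("emma", 35), ("filip", 81), ("gita", 58), ("hugo", 88)]
print(pick_interview_candidates(scores, n=2))
# {'lowest': [('emma', 35), ('ben', 48)], 'highest': [('anna', 92), ('dev', 95)]}
```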

Once you’ve established the evidence, you can start looking at results. How are people applying what they’ve newly learnt? What kind of results are they seeing? This phase requires great openness. Every kind of outcome and result is valuable for the sake of analysis, and they are not always the outcomes you expected when creating the program. Training activities often have unintended application opportunities that only the people on the job can see.

When should you consider using Brinkerhoff’s Success Case Method?

It’s important to acknowledge that while the method doesn’t work for everything, there are still probably more potential use cases than we can list. But the following situations are ones that, in our experience, benefit from this kind of qualitative analysis.

  • When introducing a new learning initiative or a pilot. It’s always good to understand early on where a particular learning activity might be successful and where not. This lets you make changes, improvements and even pivots early on.
  • When time is of the essence. Quantitative data and insights take time to compile (even assuming you have the necessary infrastructure in place). Sometimes we need to prove impact fast. In such cases, using the Brinkerhoff method to extract stories from real learners helps to communicate impact.
  • Whenever you want to understand the impact of existing programs on a deeper level. You may already be collecting a lot of data. Perhaps you’re already using statistical methods and tools to illustrate impact on a larger scale. However, for the simple fact that correlation doesn’t mean causation, it’s sometimes important to engage in qualitative analysis.

Final thoughts

Overall, Brinkerhoff’s Success Case Method is a good addition to any L&D professional’s toolbox. It’s a great tool for extracting stories of impact, telling them forward and learning from past successes and failures. But naturally, there should be other things in the toolbox too. Quantitative analysis is equally important and should be “played” in unison with the qualitative. Especially nowadays, when the L&D function is getting increased access to powerful analytics, it’s important to keep exploring beyond the surface level and make the most informed decisions possible in support of the business.

If you are struggling to capture or demonstrate the impact of your learning initiatives, or if you’d like to start doing L&D in a more agile manner, let us know. We can help you implement agile learning design methods as well as analytical tools and processes to support the business.

How to Write Good eLearning Questions?

Wherever there’s learning, we often need some kind of assessment. While learning analytics have evolved considerably over the past few years, the easiest way to try to capture learning is often to ask questions. However, it’s good to keep in mind other formative assessment methods, which might be better at evaluating long-term learning outcomes. Regardless, there’s a craft to asking questions as well. Naturally, you’ll want to be sure that you’re evaluating learning, not just the ability to regurgitate facts or recall statistics. Thus, we put together a quick guide on how to write good eLearning questions. Here you go!

1. Align your questions with the learning objectives

Whenever you’re writing questions, you should keep in mind what the learning objectives of the activity are. When going through the subject matter and material, it’s easy to pick out certain things (especially facts, figures and numbers) in the hope that they will make good questions. However, these questions often don’t go beyond the trivial level, and thus don’t support the learning goals either. Overall, we should focus on the use of knowledge rather than the ability to recall content. Hence, you should focus on writing eLearning questions that require understanding the concepts and ideas, as well as their practical applications.

2. Use a variety of question types

Simple multiple- or single-choice questions are probably the most used ones. However, there’s no reason to limit yourself to those. Question types like drag-and-drop, fill-in-the-blanks, sorting activities and open-ended questions all work well and are easy to execute. The added variety has two benefits. Firstly, it may help with engagement. Instead of mindlessly clicking through alternatives, learners have to focus on the question type first, and then on the content. When you get people to focus, they are more careful, which means you’ll get better answers. Secondly, using multiple different eLearning question types enables you to ask about the same thing from different perspectives and in different ways. This helps you understand whether the learners truly grasped the concept or are just working with surface-level knowledge.

3. Keep the questions clear and concise, and avoid negative phrasing

The aim of assessment should naturally be to test whether someone has understood your content. Now, if your learners already have trouble understanding the questions, you’ll just make everyone frustrated. The learners struggle to answer, and you can’t be sure whether it was the content or the question that wasn’t understood. So, keep your eLearning questions clear and concise. Avoid ambiguity, “circling around” and unnecessary detail, and be direct.

Also, try to avoid negative phrasing of questions wherever possible. Studies show that negatively phrased questions are more difficult to understand and thus result in more frequent mistakes.

4. Provide valid answer options without free clues

This is probably the part where it’s easiest to cut corners when you’re under time pressure. When designing the alternatives the learner is supposed to pick from (e.g. in a multiple-choice question), you’ll naturally already have the question and the right answer ready. It’s tempting to just come up with random options for the wrong answers, which are also referred to as distractors. But you really shouldn’t do that.

Good assessment tries to eliminate the possibility of guessing. We often say that “it’s not the correct but the incorrect answers that determine real knowledge”. By providing “bad” alternatives or silly distractors, you’re effectively making it a whole lot easier to pick the right answer from the rest. So, ensure that all the options would at least seem plausible to someone who hasn’t learned the topic. Also, make sure that all your alternatives are roughly the same length and use similar phrasing. We human beings instinctively look for visual cues when trying to solve problems. By keeping things uniform, you’re not giving away free clues.

Final words

Overall, writing good eLearning questions is not rocket science by any measure. A good rule of thumb that encapsulates a lot of the previously said would be “keep it clear and don’t try to trick the learner”. It’s very easy to sabotage one’s own “data set” by asking silly questions, but that only comes back to haunt you as an L&D professional, as you won’t get an accurate picture of the knowledge and skill levels in your organisation. So, the next time you’re designing an eLearning quiz, keep these 4 points in mind!

5 Quick Tips on Giving Effective Learning Feedback

Feedback is an integral part of any learning process, whether instructor-led or self-paced. With effective learning feedback, you can increase engagement, motivation and growth in your learners. With the plethora of digital tools available today for seamlessly giving feedback, there’s no excuse not to. Furthermore, feedback is not difficult to incorporate into eLearning courses either. Most content authoring software comes with easy tools for feedback. Also, modern digital learning environments increasingly support creative forms of feedback, such as gamification. However, even with all these tools, it’s important to remember what constitutes good learning feedback. Here are 5 quick tips.

1. Feedback needs to be continuous, but not interfering

Ideally, every learning activity, whether a video, storyboard or classroom session, should have feedback. Continuity in giving learning feedback helps to guide the learning process. However, you should give feedback at natural milestones, such as the end of an activity. If you start giving out feedback midway, you risk interfering with the employee’s learning flow.

2. Learning feedback must be about the activity and performance

Naturally, when giving feedback, you should focus on the activity and performance, not on the learner as an individual. This is more of a problem in instructor-led sessions, where the instructor may fall subject to attribution bias. Understand that everyone can improve through effort, and performance improvement is the thing that matters.

3. Use Effort Praise in your learning feedback

Effort praise vs. intelligence praise is a Growth Mindset concept. By structuring your feedback around effort (e.g. “You worked hard, but it wasn’t quite enough yet. Could you find another way to do this?”) instead of intelligence (e.g. “Perfect. You’re the best in the group”), you are developing a mindset that embraces challenges and risk and that is creative and innovative.

4. Provide reasoning and guidance, not only scoring

When designing learning feedback loops, it’s important to explain the reasoning for a particular type of feedback. Instead of just telling the learner whether they got it right or not, explain why. Why was the answer wrong? Why was the solution to the problem not appropriate? In fact, it’s often good to explain even why the answer was right! From the reasoning, you can also move forward to guiding the learners to try again with a different approach.

5. Embrace making mistakes

Embracing mistakes, another concept from the realm of growth mindset, is important in learning feedback too. Mistakes are a natural phenomenon, and we learn through them. Hence, you shouldn’t punish your learners for making mistakes. Learning activities should be the de facto risk-free platform where they can make those mistakes. Furthermore, consider that others in the organisation may learn from someone else’s mistakes too – so share them!

Are you supporting your learners through adequate feedback in classroom sessions as well as eLearning? If you need help getting started, just drop us a note.

Digital Training Evaluation – Using the Kirkpatrick Model and Learning Analytics

Ask an L&D professional about how they measure training effectiveness and learning. The likely answer is that they are using the Kirkpatrick 4-level evaluation model. The model has been a staple in the L&D professionals’ toolbox for a long time. However, if you dig deeper, you’ll find that many organisations are only able to assess levels 1 & 2 of the model. While these levels do constitute valuable information, they help very little in determining the true ROI of learning. Luckily, thanks to technological development, we nowadays have the capability to do digital training evaluation on all 4 levels. And here are some best practices on how to do it.

Level 1: Reaction – Use quick feedback and rating tools to monitor engagement

The first level of Kirkpatrick is very easy to implement across all learning activities. You should use digital tools to collect quick feedback on every activity. That can be in the form of likes, star ratings, scoring or Likert scales. Three questions should be enough to cover the ground.

  1. How did you like the training?
  2. How do you consider the value-add of the training?
  3. Was the training relevant to your job?

Generally, scale- or rating-based feedback is best for level 1. Verbal (open-ended) feedback requires too much effort to analyse effectively.

Level 2: Learning – Use digital training evaluation to get multiple data points

For level 2, it all starts with the learning objectives. Learning objectives should be very specific and tied to specific business outcomes (we’ll explain why under level 4). Once you have defined them, it’s relatively easy to build assessment around them. Naturally, we are measuring the increase in knowledge rather than just the knowledge itself. Therefore, it is vital to record at least two data points throughout the learning journey. A handy way to go about this is to design pre-learning and post-learning assessments. The former captures the knowledge and skill level of the employee before starting the training. Comparing that with the latter, we can comfortably identify the increase in knowledge. You can easily do this kind of assessment with interactive quizzes and short tests.
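As a small sketch of that comparison, assuming both quizzes are scored as percentages, you could report the absolute gain alongside a normalised gain (the share of the possible improvement actually achieved). The normalised-gain formula is one common convention, not part of the Kirkpatrick model itself.

```python
def knowledge_gain(pre_score: float, post_score: float) -> dict:
    """Compare pre- and post-training quiz scores (both as percentages, 0-100)."""
    absolute_gain = post_score - pre_score
    # Normalised gain: share of the possible improvement the learner achieved.
    headroom = 100 - pre_score
    normalised_gain = absolute_gain / headroom if headroom > 0 else 0.0
    return {"absolute_gain": absolute_gain,
            "normalised_gain": round(normalised_gain, 2)}

# Example: a learner scores 55% before the training and 85% after it.
print(knowledge_gain(55, 85))  # {'absolute_gain': 30, 'normalised_gain': 0.67}
```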

“If you’re measuring only once, it’s almost as good as not measuring at all”

Level 3: Behaviour – Confirm behavioural change through data and analytics

Level 3, measuring behaviour, delves into somewhat uncharted territory. There are a couple of different angles for digital training evaluation here.

First, you could engage the learners in self-assessment. Since self-assessment is often highly biased, two questions should be enough. If no behavioural change is reported, the second question captures the reason behind it, and L&D can intervene accordingly.

  1. Have you applied the skills learnt? (linking to specific learning, can be a yes/no question)
  2. If not, why not?

Secondly, since self-assessment is often highly biased, it’s not necessarily meaningful to collect more data directly from the learners themselves. To really get factual insight into level 3, you should be using data and analytics. On the business level, we record a lot of data on a daily basis. Just think about all the information that is collected by, or fed into, the systems we use every day. Thus, you should combine the data from these systems with the self-assessment to get confirmed insight into the reported behavioural change. For instance, a salesperson could see an increase in calls made post-training. A marketing person could see an increase in the number of social media posts they put out. The organisation has all the necessary data already – it’s just a matter of tapping into it.
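A minimal sketch of that cross-check is shown below, assuming you can pull a per-learner activity metric (e.g. weekly CRM-logged calls) for the periods before and after the training, alongside the self-reported answer. The data shape and the 10% uplift threshold are assumptions.

```python
UPLIFT_THRESHOLD = 0.10  # require at least a 10% increase to count as confirmed change

def confirm_behaviour_change(self_reported: bool, before: float, after: float) -> str:
    """Cross-check a learner's self-reported behaviour change against system data."""
    uplift = (after - before) / before if before > 0 else 0.0
    data_confirms = uplift >= UPLIFT_THRESHOLD
    if self_reported and data_confirms:
        return "confirmed: self-report backed by data"
    if self_reported and not data_confirms:
        return "unconfirmed: self-report not visible in the data"
    if not self_reported and data_confirms:
        return "hidden change: data shows uplift despite negative self-report"
    return "no change: follow up on the reported barrier"

# Example: weekly sales calls logged in the CRM before vs. after the training.
print(confirm_behaviour_change(self_reported=True, before=20, after=26))
# -> confirmed: self-report backed by data (a 30% increase in calls)
```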

Level 4: Results – Combining Learning Analytics and Business Analytics

Finally, level 4 evaluation is the pot of gold for L&D professionals. This is where you link the learning to business performance and demonstrate ROI through business impact. With modern ways of digital training evaluation, you can eliminate the guesswork and deliver facts.

It is highly important to understand that the evaluation levels are not standalone. Level 4 is linked to levels 2 and 3. If there was no increase in knowledge, or behavioural change did not happen, there’s no business impact to attribute to the learning. You might see a positive change in results, but you should not mistake it for the product of learning if the previous levels have not checked out. But once levels 2 and 3 have come out positive, you can look at the bigger picture.

Firstly, you should look back at the learning objectives, especially the business outcomes they were tied to. If your aim with the sales training was to increase the number of calls made, it’s important to look at what happened in that specific metric. If you see a change, then you can look at the business outcomes. How much additional revenue would those extra sales calls have produced? The results can also be changes in production, costs, customer satisfaction, employee engagement and so on. In any business, you should be able to assign a dollar value to most, if not all, of these metrics. Once you have the dollar value, it’s simple math to figure out the ROI.
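As a worked example of that “simple math” (with purely illustrative figures), the common net-benefit-over-cost ROI formula looks like this:

```python
def training_roi(monetary_benefit: float, training_cost: float) -> float:
    """Return training ROI as a percentage: ((benefit - cost) / cost) * 100."""
    return (monetary_benefit - training_cost) / training_cost * 100

# Illustrative figures: extra sales calls attributed to the training are estimated
# to be worth 60,000 in additional margin; the programme cost 25,000 to run.
print(f"ROI: {training_roi(60_000, 25_000):.0f}%")  # ROI: 140%
```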

All in all, there’s really no excuse for not dealing with levels 3 and 4 of Kirkpatrick. You can manage digital training evaluation and learning analytics even with a limited budget. It’s just a matter of embracing data and the benefits of data-driven decision making.

Want to start evaluating your learning on all levels? Click here to start.
