How to Set Better Corporate Learning Objectives?

When designing learning activities, one of the first things to consider is what you want to accomplish with the training. Without proper goals, you can't really know what to measure, let alone demonstrate the effects of the learning. While all L&D departments probably do set goals, not all of those goals are very meaningful. Specifically, learning professionals tend to set knowledge-based goals (e.g. "after this training, the participant will know good practices of leadership in the workplace"). However, the accumulation of knowledge, while a noble goal, doesn't really provide any value to the business. It's the enactment of new, desired behaviours, i.e. implementing the learning on the job, that determines the value-add. Thus, to effectively demonstrate the value of learning in our organisations, we need to set our corporate learning objectives in another way. Here's a 3-step process to do that.

1. Define the workplace behaviours that you want to affect with training

First, you need to determine the specific behaviours you’d like to affect through training. And really, it means getting specific (you’ll run into trouble in #2 if you don’t). To continue with the leadership example: “we want our managers to become better leaders”. Bad. “We want our managers to have more frequent conversations with their direct reports”. Better.

The behaviours will naturally vary by topic, and some are easier to drill down to than others. However, "loose" learning objectives masked as "performance objectives", like the first example above, will turn out to be nearly impossible to measure.

2. Figure out what to measure and how. Don’t rely on self-reported data

If the first step is critical, the what and how of measurement is often the decisive one in the context of corporate learning objectives. When trying to assess behavioural change (i.e. the impact of said learning) in organisations, there are two major mistakes that happen across the board.

First, not understanding what to measure. In similar fashion to setting the learning objectives, the 'what' is often too vague. If you're doing sales training, measuring sales growth directly is too broad: you're cutting a lot of corners and making dangerous assumptions. Sales may increase, but the increase may have no connection to the training; the effect could be due to the external environment, team relationships, incentives, seasonality, etc. Therefore, you need to drill down deeper. In sales training, for example, a proper level would be individual metrics, such as conversion ratios or time on calls. These may or may not result in performance improvement, but that's for you to find out without making ill-founded assumptions.

Second, the 'how' part of measurement is often lacking as well. If you really want to make an impact through better corporate learning objectives, it's important to get this right. First, never rely on self-reported results. People lie, exaggerate, underestimate and aim to please, and even anonymity doesn't remove the barrier to giving honest answers. Rather, you should always use hard data. If the data is not readily available through non-learning channels (e.g. HR systems, performance management systems, ERPs, CRMs, etc.), find a way to capture the needed information.
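To make this concrete, here's a minimal sketch in Python of what working with hard data could look like for the sales example. It assumes you can export call logs from your CRM into a CSV; the file name and column names are made up for illustration.

    # Compare per-rep call conversion ratios before and after the training.
    # Assumes a hypothetical CRM export with columns:
    #   rep_id, period ("pre" or "post"), converted (1 or 0)
    import pandas as pd

    calls = pd.read_csv("crm_call_log.csv")

    # Conversion ratio = converted calls / all calls, per rep and period
    conversion = (
        calls.groupby(["rep_id", "period"])["converted"]
        .mean()
        .unstack("period")
    )
    conversion["change"] = conversion["post"] - conversion["pre"]
    print(conversion.sort_values("change", ascending=False))

The point is not the code itself, but that the metric comes from the system of record rather than from what the learners say about themselves.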

3. Quantify your corporate learning objectives

The good news is that once you really drill down on the specific behaviours and get objective data sources, quantifying your learning objectives becomes much easier. In functions like sales, finance, marketing or operations, this is naturally easier. But even in the earlier leadership example, there's quite a large difference between "we want our managers to be 50% better leaders" and "we want our managers to have 50% more conversations with their direct reports". The first is impossible to measure accurately, so the quantification is moot. The second can be measured, e.g. through internal network analysis, communication metadata and even calendar appointments.

Furthermore, once you quantify the learning objectives, you're setting a transparent set of expectations. Consequently, you'll have a much easier job selling the idea to your management and subsequently reporting the results. Once we analyse things a bit more deeply, we can assign "dollar values" to the changes in workplace behaviour. The value of sales staff converting 10% more of their calls is real and tangible, and it's easy to track whether the learning investment is paying off. When the behaviours are less tangible (e.g. those leadership practices), you should agree with the business heads on what the value of those behaviours is to the business. For something like learning company values it might seem silly, but you should consider doing it nonetheless to enable transparency in assessment and reporting. Of course, as you probably haven't measured learning this way before, it's important to acknowledge that in the beginning. So don't punish yourself if you don't "hit the target" right away.
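To illustrate the "dollar value" point above, here is a deliberately simple, made-up calculation. Every figure is a placeholder you would replace with numbers agreed with the business.

    # Made-up example: value of a 10% lift in call conversion for a sales team.
    calls_per_month = 2000        # team-wide calls per month (placeholder)
    baseline_conversion = 0.15    # 15% of calls convert before the training
    relative_lift = 0.10          # 10% relative improvement target
    avg_deal_value = 500          # average value of a converted call, in dollars

    extra_deals = calls_per_month * baseline_conversion * relative_lift
    monthly_value = extra_deals * avg_deal_value
    print(f"Extra deals per month: {extra_deals:.0f}")        # 30
    print(f"Estimated monthly value: ${monthly_value:,.0f}")  # $15,000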

Final words

By using this simple 3-step approach to setting corporate learning objectives, understanding the link between learning, impact and performance becomes a lot less burdensome. On an important note, once you’ve put this in place, you really need to actually measure things and commit to using the data. Collecting the data and insights, even if done properly, is itself a bad investment if you or your management still resort to making assumptions rather than trusting hard facts.

If you need help in understanding your organisation’s learning on a deeper level or to develop a data-driven learning strategy, contact us. We’ll walk you through what it takes.

How to Use Formative Assessment in Corporate Learning?

Wherever there’s learning, there should always be assessment. Assessment naturally comes in many types and formats, but generally a good distinction to draw is that between summative assessment and formative assessment.

In simplified terms, summative assessment focuses on gauging the learning outcomes at the end of the learning activity. Students may be compared against each other, and the assessment is a "high stakes" one. Formative assessment, on the other hand, attempts to measure learning throughout the learning activities. Formative evaluation can be considered less competitive – as the evaluation is based on set criteria – and relatively "low stakes".

How does formative assessment benefit corporate learning?

In our experience, a lot of corporate learning assessment is summative. L&D practitioners may dread the extra effort or may not even be familiar with formative practices. Furthermore, the prevalent tendency to develop slide-based courses with an exam at the end feeds into this behaviour. While building formative evaluation does require a bit more effort, the benefits tend to far outweigh the time investment.

Here are some of the benefits of formative assessment in corporate learning:

  • Trainers and L&D are able to recognise learning problems and skill gaps more effectively – on both individual and group levels
  • Learners are able to identify their own problem areas, self-correct and monitor their own progress
  • It provides valuable feedback to L&D to improve learning experiences and activities
  • It promotes active learning on the employees’ part
  • The focus shifts from achieving arbitrary outcomes (test scores, tick-box compliance etc.) to the learning process itself

In general, a well thought-out formative assessment approach helps all the stakeholders – trainers, learners and managers alike.

How to use formative assessment in practice?

Now that you’ve considered the benefits, here are some practical and highly manageable ways to improve your assessments.

The tools for formative assessment are plentiful, and the benefits are not limited to evaluation either. By replacing purely summative assessment with formative techniques, you'll also be creating much more engaging and learner-centric experiences. Furthermore, the approach is more data-driven by nature, helping you to make more informed L&D decisions. So start investing the time into it!

If you need help on designing digitally enabled assessments to support your learning strategy, we are happy to help. Just contact us.

Leveraging Learning Content Analytics for Better Learning Experiences

We published this article first on eLearning Industry, the largest online community of eLearning professionals. You may find the original article here.

An area where Learning and Development professionals could learn a lot from other functions, such as marketing, is content analytics. Whereas marketing has embraced the need to constantly iterate and redevelop content based on real-time campaign analytics, learning professionals tend to take the easier route. Once an eLearning activity is produced and published, it's easy to just leave it there and be done with it. But at that point the work is really only halfway done. How do you find out whether the content resonated with the audience? If it didn't, how do you figure out what the problem areas are? This is where learning content analytics come in handy.

Example Of Learning Content Analytics On A Training Video

When analysing the effectiveness of eLearning content, you should pay attention to what kind of metrics you are tracking. For instance, in the case of a training video, traditional metrics like how many times the video was opened don’t necessarily carry a lot of value. Instead, we should be looking at the content consumption behaviour on a wider scale, throughout the content and the learning journey. Let’s take a look at an analytical view of a training video.

(Figure: learning content analytics on a training video – with learning content analytics, you can easily capture where your learners lose interest and drop off.)

In this example, you can see the users’ behaviour at various stages of the training video. As usual, you see a slump immediately in the beginning, followed by another bigger slump later on. We’ve coloured the 2 main points of interest to break them down.

1. Initial Attrition

You are always bound to lose some learners in the beginning due to a plethora of reasons. However, if you constantly see big drops starting from 0 seconds, you might want to double-check e.g. the loading times of the content, to make sure your learners are not quitting because they could not access the material in a timely manner.

2. Learning Content Engagement Failure

Going further in the video, we see another big slump where we lose around 40% of the remaining learners in just 30 seconds. Clearly, this represents a learning engagement failure. Something is not right there. Learners are likely dropping off because the content is not engaging, relevant or presented in an appealing way.
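If your video platform lets you export viewer counts over the duration of the video, you can flag these kinds of slumps automatically. Here's a rough sketch; the file and column names are made up, and the 30% threshold is just an example.

    # Flag steep drop-offs in a training video's audience retention curve.
    # Assumes a hypothetical export with one row per second: second, viewers
    import pandas as pd

    retention = pd.read_csv("video_retention.csv").sort_values("second")

    # Share of viewers lost over each rolling 30-second window
    window = 30
    retention["lost_share"] = (
        retention["viewers"].diff(periods=window).abs()
        / retention["viewers"].shift(periods=window)
    )

    # Anything above e.g. 30% lost within 30 seconds deserves a closer look
    slumps = retention[retention["lost_share"] > 0.30]
    print(slumps[["second", "viewers", "lost_share"]])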

How Should I Incorporate Content Analytics In The eLearning Development Process?

The above-mentioned video analytics is just a single example of how you can use content analytics to support your learning. Ideally, you should be running this kind of analytics across all your learning content. xAPI tracking capabilities give a lot of possibilities in this regard (a sketch of a raw xAPI statement is shown at the end of this section). Once you're collecting the data and running the analytics, this is how you could build the use of analytics into your eLearning development process:

  1. Develop an initial version of eLearning materials
  2. Roll it out to a test group of learners, monitor the analytics
  3. Identify potential learning engagement failures and re-iterate content accordingly
  4. Mass roll-out to a wider audience
  5. Revisit the content analytics at regular milestones (e.g. when a new group of learners is assigned the content) to ensure continued relevance and engagement

This type of approach helps to ensure that the learning activities you provide and invest money in perform at their best at all times.
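For the curious, here is roughly what the raw data behind such analytics can look like: a single xAPI statement sent to a Learning Record Store. This is only a sketch – the LRS address, credentials, activity ID and progress extension below are illustrative placeholders, and a real implementation would typically follow the xAPI video profile for played/paused/seeked events.

    # Send one xAPI statement ("Jane experienced 45% of a training video") to an LRS.
    # The LRS URL, credentials, activity ID and extension URI are placeholders.
    import requests

    statement = {
        "actor": {"mbox": "mailto:jane.doe@example.com", "name": "Jane Doe"},
        "verb": {
            "id": "http://adlnet.gov/expapi/verbs/experienced",
            "display": {"en-US": "experienced"},
        },
        "object": {
            "id": "https://example.com/xapi/activities/onboarding-video",
            "definition": {"name": {"en-US": "Onboarding training video"}},
        },
        "result": {
            "extensions": {"https://example.com/xapi/extensions/progress": 0.45}
        },
    }

    response = requests.post(
        "https://lrs.example.com/xapi/statements",
        json=statement,
        auth=("lrs_user", "lrs_password"),
        headers={"X-Experience-API-Version": "1.0.3"},
    )
    response.raise_for_status()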

How Can I Use Learning Content Analytics To Provide Better Learning Experiences?

By now, you’ve surely developed many use cases for content analytics. To summarise, here’s how you could provide a better learning experience through data-driven insights:

1. Identify The Types Of Content Your Learners Like

In the case of videos, you could benchmark the performance of different types of videos (e.g. talking heads, animations, storytelling videos) against each other and see what type of content keeps your learners engaged the best.

2. Develop Engaging Content

With the power of analytics, you'll be able to develop better learning. You are able to find out immediately what works and what doesn't. No need to run extensive surveys. The behaviour of the learners is the best feedback.

3. Personalise Learning Experiences

You can naturally run analytics for individuals and defined groups, in addition to the whole mass of learners. This helps you personalise the learning experiences according to e.g. skill levels, seniority, experience, previous learning history, etc.

All in all, learning content analytics provide a powerful tool for increased transparency and visibility into the performance of your eLearning. As learning moves towards more on-demand and just-in-time delivery, analytics help to ensure that you're delivering the right content to the right audience.

Are you interested in developing more analytical, data-driven approaches to your L&D? Or want to know more about different content analytics possibilities? Just drop us a note, and we’ll get back to you. 

Digital Training Evaluation – Using the Kirkpatrick Model and Learning Analytics

Ask an L&D professional about how they measure training effectiveness and learning. The likely answer is that they are using the Kirkpatrick 4-level evaluation model. The model has been a staple in the L&D professionals’ toolbox for a long time. However, if you dig deeper, you’ll find that many organisations are only able to assess levels 1 & 2 of the model. While these levels do constitute valuable information, they help very little in determining the true ROI of learning. Luckily, thanks to technological development, we nowadays have the capability to do digital training evaluation on all 4 levels. And here are some best practices on how to do it.

Level 1: Reaction – Use quick feedback and rating tools to monitor engagement

The first level of Kirkpatrick is very easy to implement across all learning activities. You should use digital tools to collect quick feedback on all activities, whether in the form of likes, star ratings, scoring or Likert scales. Three questions should be enough to cover the ground:

  1. How did you like the training?
  2. How do you consider the value-add of the training?
  3. Was the training relevant to your job?

Generally, scale- or rating-based feedback works best for level 1. Verbal feedback requires too much effort to analyse effectively.
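If your feedback tool can export the raw responses, keeping an eye on level 1 across many trainings is only a few lines of work. Here's a minimal sketch, assuming a hypothetical export with the activity name, the question and a 1–5 rating per response.

    # Average level 1 ratings per training activity and question.
    # Assumes a hypothetical export with columns: activity, question, rating (1-5)
    import pandas as pd

    feedback = pd.read_csv("level1_feedback.csv")
    summary = feedback.pivot_table(
        index="activity", columns="question", values="rating", aggfunc="mean"
    )

    # Flag activities where average relevance drops below e.g. 3.5 out of 5
    # (the question text must match the wording used in your survey)
    print(summary[summary["Was the training relevant to your job?"] < 3.5])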

Level 2: Learning – Use digital training evaluation to get multiple data points

For level 2, it all starts with the learning objectives. Learning objectives should be very specific and tied to specific business outcomes (we'll explain why under level 4). Once you have defined them, it's relatively easy to build assessment around them. Naturally, we are measuring the increase in knowledge rather than just the knowledge itself. Therefore, it is vital to record at least two data points throughout the learning journey. A handy way to go about this is to design pre-learning and post-learning assessments. The former captures the knowledge and skill level of the employee before starting the training. Comparing that with the latter, we can comfortably identify the increase in knowledge. You can easily do this kind of assessment with interactive quizzes and short tests.

“If you’re measuring only once, it’s almost as good as not measuring at all”
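Assuming the pre- and post-assessments produce a per-learner score (the file and column names below are made up), identifying the increase in knowledge is then a simple comparison of the two data points:

    # Level 2: knowledge increase as the difference between pre- and post-assessment scores.
    # Assumes a hypothetical export with columns: learner_id, stage ("pre"/"post"), score
    import pandas as pd

    scores = pd.read_csv("assessment_scores.csv")
    by_learner = scores.pivot_table(index="learner_id", columns="stage", values="score")
    by_learner["gain"] = by_learner["post"] - by_learner["pre"]

    print(by_learner["gain"].describe())        # group-level view of the knowledge gain
    print(by_learner[by_learner["gain"] <= 0])  # learners with no measurable gain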

Level 3: Behaviour – Confirm behavioural change through data and analytics

Level 3, measuring behaviour, delves into somewhat uncharted territory. There are a couple of different angles for digital training evaluation here.

First, you could engage the learners in self-assessment. For this often highly biased method, two questions should be enough. If no behavioural change is reported, the second question captures the reason behind it, and L&D can intervene accordingly.

  1. Have you applied the skills learnt? (linking to specific learning, can be a yes/no question)
  2. If not, why not?

Secondly, since self-assessment is often highly biased, it's not necessarily meaningful to collect more data directly from the learners themselves. To really get factual insight into level 3, you should be using data and analytics. On the business level, we record a lot of data on a daily basis – just think about all the information that is collected by or fed into the systems we use every day. Thus, you should be using the data from these systems alongside the self-assessment to confirm the reported behavioural change. For instance, a salesperson could show an increase in calls made post-training, or a marketing person an increase in the number of social media posts they put out. The organisation has all the necessary data already – it's just a matter of tapping into it.
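As a sketch of that cross-check, assume you can join the self-assessment answers to e.g. CRM activity counts by employee ID (all file and column names below are hypothetical):

    # Level 3: cross-check self-reported application of skills against system data.
    # Assumes hypothetical exports:
    #   self_assessment.csv: employee_id, applied ("yes"/"no")
    #   crm_activity.csv:    employee_id, calls_before, calls_after
    import pandas as pd

    reported = pd.read_csv("self_assessment.csv")
    activity = pd.read_csv("crm_activity.csv")

    merged = reported.merge(activity, on="employee_id")
    merged["observed_change"] = merged["calls_after"] > merged["calls_before"]

    # Cases where the self-report and the hard data disagree deserve a closer look
    mismatch = merged[(merged["applied"] == "yes") != merged["observed_change"]]
    print(mismatch)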

Level 4: Results – Combining Learning Analytics and Business Analytics

Finally, level 4 evaluation is the pot of gold for L&D professionals. This is where you link the learning to business performance and demonstrate the ROI through business impact. With modern ways of digital training evaluation, you can eliminate the guesswork and deliver facts.

It is important to understand that the evaluation levels are not standalone: level 4 is linked to levels 2 and 3. If there was no increase in knowledge, or behavioural change did not happen, there's no business impact. You might see a positive change in results, but you should not mistake it for a product of the learning if the previous levels have not checked out. Once levels 2 and 3 have come out positive, however, you can look at the bigger picture.

Firstly, you should look back at the learning objectives, especially the business outcomes they were tied to. If your aim with the sales training was to increase the number of calls made, it's important to look at what happened in that specific metric. If you see a change, then you can look at the business outcomes: how much additional revenue would those extra sales calls have produced? The results can also be changes in production, costs, customer satisfaction, employee engagement, etc. In any business, you should be able to assign a dollar value to most if not all of these metrics. Once you have the dollar value, it's simple maths to figure out the ROI.
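Continuing the sales-call example with made-up figures, the level 4 arithmetic could look like this once the business has agreed on a value per extra call:

    # Level 4: simple training ROI based on an agreed value per extra sales call.
    # All figures are placeholders to illustrate the arithmetic.
    extra_calls_per_rep = 12    # confirmed behavioural change from level 3
    reps_trained = 40
    value_per_call = 80         # dollar value agreed with the business
    training_cost = 25_000      # total cost of the programme

    benefit = extra_calls_per_rep * reps_trained * value_per_call
    roi = (benefit - training_cost) / training_cost
    print(f"Benefit: ${benefit:,.0f}, ROI: {roi:.0%}")  # Benefit: $38,400, ROI: 54%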

All in all, there's really no excuse for not dealing with levels 3 and 4 of Kirkpatrick. You can manage digital training evaluation and learning analytics even on a limited budget. It's just a matter of embracing data and the benefits of data-driven decision making.

Want to start evaluating your learning on all levels? Click here to start.

 

 

Seamless Learning Tracking vs. Formal Assessments

Previously, we had very limited tools for tracking corporate learning activities – namely, formal assessments. Formal assessments come in a variety of forms: tests, learning surveys, self-assessments, performance evaluations, etc. The common denominator is that they are relatively hard to administer and produce vague results. Luckily, thanks to advances in learning technology, we can now do all this much more efficiently. Let's look at how we can use seamless learning tracking to make our learning assessments much more efficient.

First, let's consider two traditional assessment tools used in corporate L&D: tests and learning evaluations.

Seamless Learning Tracking vs. Conventional Tests

The first problem with formal tests is that they often become the end instead of a means to an end. Tests should be a tool for learning, not the reason we are learning. Tests are also quite dreadful to the learners: many are subject to stress and anxiety when taking examinations, and hence do not perform at their best in terms of problem solving or creativity. Frankly, formal tests and scores are also subject to misinformation caused by cheating, etc.

By utilising modern tracking methods instead of archaic testing, we can extract far more information without subjecting our learners to these adverse effects. With modern digital learning environments, we can track everything the learner does. We can follow every individual learner's journey: how many times did they watch a training video? How much time did they spend on topic X? How many times did they try a quiz before passing? When we have access to learning data like this, old formal test scores become practically meaningless. Rather than assessing our learners bi-annually or at defined intervals, we can track them continuously and tackle problems in real time, rather than six months later.
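Most modern platforms let you export this activity stream (an LRS, for instance, stores it as xAPI statements). Here's a minimal sketch of turning such an event log into per-learner engagement metrics, with hypothetical file and column names.

    # Turn a raw learning event log into per-learner engagement metrics.
    # Assumes a hypothetical export with columns:
    #   learner_id, event ("video_view", "quiz_attempt", ...), topic, duration_sec
    import pandas as pd

    events = pd.read_csv("learning_events.csv")

    metrics = events.groupby("learner_id").agg(
        video_views=("event", lambda e: (e == "video_view").sum()),
        quiz_attempts=("event", lambda e: (e == "quiz_attempt").sum()),
        total_time_min=("duration_sec", lambda s: s.sum() / 60),
    )
    print(metrics.sort_values("total_time_min", ascending=False))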

Seamless Learning Tracking vs. Learning Evaluations

Other things we could do more reliably with modern technology are feedback and learning evaluations.

For feedback, the fundamental problem is the one that all marketing research people know – people lie. In organisations, especially hierarchical ones, it's often near impossible to extract genuine feedback. Instead of being truthful, learners give mid-range positive feedback to avoid confrontation – perhaps in the fear that even anonymous feedback may be traced back to them. And if we are not getting honest feedback, we'll shoot ourselves in the foot trying to improve the learning. However, by enabling comprehensive learning tracking, we can let behaviour be our best feedback. We can accurately pinpoint the learning materials that are not engaging or effective and work to improve them.

For learning evaluations, we can pull information from a whole history of learning, rather than just a formal test here and there. This helps us provide much more personalised feedback to the learners. Instead of focusing on the what ("you scored 88/100 in test X"), we can focus on the how ("you took twice as long to complete the training as your peers"), and most importantly, the why ("could another style of learning work better for you?"). This gives us a much more comprehensive view of our people, their skill sets and capabilities than we could ever achieve with traditional, formal assessments.
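A small sketch of that kind of peer comparison – again with made-up file and column names – flags learners who take much longer than the median for the same course:

    # Compare each learner's completion time to the peer median for the same course.
    # Assumes a hypothetical export with columns: learner_id, course, completion_hours
    import pandas as pd

    progress = pd.read_csv("course_progress.csv")
    median_per_course = progress.groupby("course")["completion_hours"].transform("median")
    progress["vs_peers"] = progress["completion_hours"] / median_per_course

    # e.g. "you took twice as long to complete the training as your peers"
    print(progress[progress["vs_peers"] >= 2.0])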

Are you using advanced learning tracking methods in your organisation? Would you like to take a more data-driven approach to your HR? We can help, just leave us a note here.  
