Kaufman’s Learning Evaluation Model – Quick Overview

The field of corporate learning has a lot of different frameworks for evaluation. While not all of them are good or even necessary, some still provide useful points of consideration and models for organising information. For instance, last week we took a look at the Success Case Method, which works best for capturing qualitative insights. This week, we take a quick look at Kaufman’s learning evaluation model to see whether it still has valid contributions to make.

Kaufman’s Learning Evaluation Model briefly explained

Instead of providing an entirely new framework, Kaufman’s model aims to improve on the commonly used Kirkpatrick four-level model. The allegedly improved version introduces additional considerations by, in effect, dividing Kirkpatrick’s level 1 into two and adding a fifth level. The levels, with their respective questions and considerations for modern L&D professionals, are as follows:

  1. Input – what kind of resources and learning materials do we have at our disposal that we can use to support the learning experience?
  2. Process – how’s the delivery of the learning experience? Is it accepted? How are people responding to it?
  3. Micro level results – Did the learner or the learning group acquire the knowledge? Did they apply it in their jobs?
  4. Macro level results – Did performance improve due to this learning and the application of new skills in the workplace? What kind of benefits arose from the learning on an organisational level?
  5. Mega level impact – What kind of impact did the learning have on society or larger external stakeholder groups?

Reflection on the Kaufman model

As the original author proposed the model as an improvement over Kirkpatrick’s, we’ll make the comparison accordingly. The separation of input and process might be a good one to make. Nowadays, we have access to vast pools of digital resources, both in the public domain and sitting in corporate information systems. There are a lot of situations where organisations could leverage this information and these resources. For instance, curation-based learning content strategies might make more sense for some organisations. Hence, introducing inputs as a separate consideration might be a helpful change at the framework level.

Conversely, Kaufman groups Kirkpatrick’s levels 2 and 3 together. While these are largely semantic changes, it’s within this area that organisations face their real L&D challenges. Often, learning itself is not the problem, and people may retain the newly learnt material quite well. The problem tends to come in application, or learning transfer, as people fail to use the new skills or practices back in their daily jobs. That’s something modern L&D professionals should focus on more.

Finally, Kaufman’s learning evaluation model introduces the “mega level”, or societal impact. While it may be a valid consideration for a select few, this impact would presumably go hand-in-hand with the business results analysed at the “macro level”. And if not, we still run into the immense difficulty of evaluating impact on external entities.

What’s in it for the L&D professional?

Like with any of the prevalent frameworks or models for evaluating learning at the workplace, it’s important not to take things too seriously. These models do provide a good basis for structuring one’s approach to evaluation, but L&D professionals should still adjust them to fit the context of their particular organisation. It’s also noteworthy that all these models were built around formal learning, so they may fail to address more informal workplace learning. Regardless, the key takeaway from Kaufman’s learning evaluation model could be the notion of existing resources that can contribute to learning experiences. It’s not always necessary to reinvent the wheel, after all!

If you’re looking for new ways of evaluating learning, especially learning transfer or business impact, drop us a note. We’d be happy to help you co-engineer evaluation methods that can actually demonstrate L&D’s value to the business.

How to Use Brinkerhoff’s Success Case Method in Workplace Learning

There are a lot of different frameworks that organisations use to evaluate the impact of their workplace learning initiatives. The Kirkpatrick model and the Phillips ROI model may be the most common ones. While Brinkerhoff’s Success Case Method is perhaps less known, it too can provide value when used correctly. In this post, we’ve compiled a quick overview of the method and how to use it to support L&D decisions in your organisation.

What’s Brinkerhoff’s Success Case Method?

The method is the brainchild of Dr. Robert Brinkerhoff. While many of its original applications relate to organisational learning and human resources development, the method is applicable to a variety of business situations. The aim is to understand impact by answering the following four questions:

  • What’s really happening?
  • What results, if any, is the program helping to produce?
  • What is the value of the results?
  • How could the initiative be improved?

As you may guess from the questions, the Success Case Method’s focus is on qualitative analysis and on learning from both successes and failures at the program level to improve for the future. On one hand, you’ll be answering what enabled the most successful to succeed, and on the other, what barred the worst performers from success.

How to use the Brinkerhoff Method in L&D?

As mentioned, the focus of the method is on qualitative analysis. Therefore, instead of using large-scale analytics, the process involves surveys and individual learner interviews. By design, the method is not concerned with measuring “averages” either. Rather, the aim is to learn from the most resounding successes and the worst performances, and then either replicate or redesign based on that information.

So ideally, you’ll want to find just a handful of individuals from both ends of the spectrum. Well-designed assessment or learning analytics can naturally help you in identifying those individuals. When interviewing people, you’ll want to make sure that their view on what’s really happening can be backed with evidence. It’s important to keep in mind that not every interview will produce a “success case”, one reason being the lack of evidence. After all, you are going to be using the information derived with this method to support your decision making, so you’ll want to get good information.
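
To make the selection concrete, here’s a minimal Python sketch of how you might shortlist interview candidates from assessment scores. The data and the `shortlist_candidates` helper are purely illustrative, not part of Brinkerhoff’s method itself.

```python
# A minimal sketch: shortlisting interview candidates for the Success Case
# Method from per-learner outcome scores. Data and names are illustrative.

def shortlist_candidates(scores, n=5):
    """Return the n lowest and n highest scorers as interview candidates."""
    ranked = sorted(scores, key=lambda pair: pair[1])
    return {
        "potential_success_cases": ranked[-n:],   # best performers
        "potential_non_successes": ranked[:n],    # worst performers
    }

scores = [("alice", 92), ("bob", 41), ("carol", 78), ("dan", 55), ("erin", 88)]
print(shortlist_candidates(scores, n=2))
```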

Once you’ve established the evidence, you can start looking at results. How are people applying what they’ve newly learnt? What kind of results are they seeing? This phase requires great openness. Every kind of outcome and result is valuable for the sake of analysis, and they are not always the outcomes you expected when creating the program. Often, training activities have unintended application opportunities that only the people on the job can see.

When should you consider using Brinkerhoff’s Success Case Method?

It’s important to acknowledge that while the method doesn’t work for everything, there are still probably more potential use cases than we can list. But the few situations below are ones that, in our experience, benefit from this kind of qualitative analysis.

  • When introducing a new learning initiative or a pilot. It’s good to understand early on where a particular learning activity might be successful and where not, so you can make changes, improvements and even pivots in time.
  • When time is of the essence. Quantitative data and insights take time to compile (even assuming you have the necessary infrastructure in place). Sometimes we need to prove impact fast. In such cases, using the Brinkerhoff method to extract stories from real learners helps to communicate impact.
  • Whenever you want to understand the impact of existing programs on a deeper level. You may already be collecting a lot of data. Perhaps you’re already using statistical methods and tools to illustrate impact on a larger scale. However, for the simple fact that correlation doesn’t imply causation, it’s sometimes important to engage in qualitative analysis as well.

Final thoughts

Overall, Brinkerhoff’s Success Case Method is a good addition to any L&D professional’s toolbox. It’s a great tool for extracting stories of impact, passing them on and learning from past successes and failures. But naturally, there should be other things in the toolbox too. Quantitative analysis is equally important and should be “played” in unison with the qualitative. Especially nowadays, when the L&D function is gaining access to increasingly powerful analytics, it’s important to keep exploring beyond the surface level to make the most informed decisions possible to support the business.

If you are struggling to capture or demonstrate the impact of your learning initiatives, or if you’d like to start doing L&D in a more agile manner, let us know. We can help you implement agile learning design methods as well as analytical tools and processes to support the business.

How to Use Data to Support Face-to-face Training?

Organisational learning and development is becoming increasingly data-driven. This is fuelled by the need to demonstrate impact, be more effective and direct resources more efficiently. With the advent of new learning technologies and platforms – many of which come with built-in analytics capabilities – we are increasingly better equipped to measure all kinds of learning in a meaningful way. However, for the most part, the collection and especially the use of this data has been limited to only digital learning experiences. But there’s no reason to draw that kind of limitation. In fact, traditional face-to-face training could benefit greatly from having access to data and analytics. So, let’s explore how we could support face-to-face training with data!

Current challenges with face-to-face training

Face-to-face training has its fair share of challenges. On one hand, it’s rather expensive once you factor in all the lost productivity and indirect costs. However, cost becomes less of an issue as long as you can demonstrate impact and value; that’s perhaps more of a business challenge. The real learning challenges, on the other hand, relate to delivery.

Overall, face-to-face learning is not particularly personalised. Trainers are often not aware of the existing knowledge of the participants, let alone their personal context: jobs, tasks, challenges, problems, difficulties, team dynamics etc. Hence, the training – especially in subject-matter-intensive topics – often results in a more or less one-size-fits-all approach: the trainer goes through the slide deck, perhaps with a few participatory activities and some feedback at the end. Even if you’re an experienced trainer, it’s difficult to improvise and go off-course in the heat of the moment to pursue the emerging (personal) needs of the learners.

So, wouldn’t it make sense to put that information to good use and start supporting face-to-face training with data? Yes, it would. Here are two easy ways to get a lot more out of your “classroom” sessions.

1. Determining existing knowledge and skill level with pre-work

One of the simplest things you can do to get more value out of your face-to-face training is to start using pre-work. Have your learners go through digital learning materials before coming to the session. Build in some seamless assessment and collect information in the form of user submissions and feedback. With good design and proper use of learning analytics, this already gives you a lot of valuable information.

As a trainer, you can then check e.g. what your learners already know and what they are having difficulties with. It probably doesn’t make sense to spend a lot of classroom time on things they already know. Rather, you’re better off using the time to address problem areas, challenges and personal experiences that have come up during the pre-work. Or, if you want to make things even more impactful, try an approach like flipped learning, where you use digital materials to deliver the knowledge while focusing the classroom time solely on discussions, practice and hands-on activities.
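
As a rough illustration of the idea, here’s a minimal Python sketch that aggregates hypothetical pre-work quiz responses per topic and flags where classroom time is best spent. The records and the 0.8 mastery threshold are assumptions for the example.

```python
# A minimal sketch: aggregating hypothetical pre-work quiz responses per topic
# to decide where to spend classroom time. The 0.8 threshold is an assumption.
from collections import defaultdict

responses = [  # (learner, topic, answered_correctly)
    ("alice", "gdpr_basics", True), ("bob", "gdpr_basics", True),
    ("alice", "data_breach_response", False), ("bob", "data_breach_response", True),
]

totals = defaultdict(lambda: [0, 0])  # topic -> [correct, attempts]
for _, topic, correct in responses:
    totals[topic][0] += int(correct)
    totals[topic][1] += 1

for topic, (correct, attempts) in totals.items():
    mastery = correct / attempts
    plan = "skim quickly" if mastery >= 0.8 else "focus classroom time here"
    print(f"{topic}: {mastery:.0%} mastery -> {plan}")
```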

2. Using learning records history to understand the people you’re training

Another area we could do better in is understanding the people we train. As digital platforms compile more and more data about our learning experiences, these records may, at their best, provide a whole history of learning, and it would be beneficial to let trainers access that as well. By understanding prior experiences, the trainer can create scaffolding – building on what the employees already know from before, even when that knowledge is totally unrelated to the current topic.

Furthermore, having access to an “HR” history of the employees might be beneficial too, especially in large organisations where the trainer doesn’t necessarily know the people personally. For instance, what are the attendees’ jobs? Where do they work? Where have they worked before, and in what kinds of roles? Information like this brings additional data points to personalise the learning experience on. In some cases, you might even find there’s a subject matter expert in the group, or someone who has dealt in practice with the issues the training covers. These could be assets you can leverage – ones you perhaps wouldn’t even know about without the data.

Final thoughts

All in all, there’s a whole lot that data and analytics can offer to “traditional” training. The need for personalisation is real, and smart use of learning data helps to cater to that need. Of course, you can use data to support face-to-face training in many more ways; these are just two examples. For instance, post-session feedback is much handier to collect digitally. That feedback can then be used to improve future sessions on the same topic (or with the same participants).

If you feel you could do more with data and smart learning design, don’t hesitate to reach out. We can help you design blended learning experiences that deliver impact and value.

Personal Learning Analytics – Helping Learners with Their Own Data

Generally, the use of data and analytics in workplace learning is reserved for a small group of senior people. Naturally, learning analytics can provide a lot of value for that group. For instance, data-driven approaches to training needs analysis and measuring corporate learning are quickly gaining ground out of the need to prove the impact of workplace learning initiatives. However, there could be further use cases for those analytical powers. One of them is helping the learners themselves with personal learning analytics.

What is ‘personal learning analytics’?

As the title may give away, personal learning analytics is just that: individualised information made available to the learner. The major difference from conventional “managerial” analytics is that most of the information is about the learner in question. Whenever that’s not the case, the information about others is anonymised; a few exceptions could include e.g. gamification elements that display user names and achievements. So, effectively, it’s all about giving users access to their own data and anonymised “averages”.

How can we use personal analytics to help learners?

One of the challenges in conventional approaches to workplace learning is that the process is not very transparent. Often, the organisation controls the information, and the learners may not even gain access. However, a lot of this information could help the learners. Here are a few examples.

  • Comparing performance against others. While cutthroat competition is probably not a good idea, and learners don’t necessarily want others to know how they fared, they can still benefit from being able to compare their performance against the group’s. That way, they’ll know if they’re falling behind and can adjust their effort or seek new approaches (see the sketch after this list).
  • Understanding the individual learning process. All of us would benefit greatly from information about how we learn: for instance, how we have progressed and developed, as well as how and when we engage with learning. Luckily, personal learning analytics can tell us about all of that. The former helps to keep us motivated, while the latter helps us identify patterns and build habits out of existing behaviour.
  • Access to one’s learning history. We are learning all the time, and all kinds of things. However, we are not necessarily very good at keeping track ourselves. If we could just pull all that data into one place, we would have a real-time view of what we have learned in the past. Potentially, this could enable us to identify new skills and capabilities – something the organisation would likely be interested in too.
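
As a toy example of the first point above, here’s a minimal Python sketch of a personal view that compares one learner’s score against an anonymised group distribution. All the numbers are illustrative.

```python
# A minimal sketch: one learner's score against an anonymised group
# distribution. All numbers are illustrative.
from statistics import mean

group_scores = [55, 61, 70, 72, 78, 81, 88, 90]  # anonymised cohort scores
my_score = 72

share_below = sum(s < my_score for s in group_scores) / len(group_scores)
print(f"Your score: {my_score} (group average {mean(group_scores):.1f})")
print(f"You scored higher than {share_below:.0%} of the group")
```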

Towards self-regulated learning

Across the globe, organisations are striving to become more agile in their learning. One key success factor for such transformation is the move towards more self-regulated learning. However, achieving that is going to be difficult without slightly more democratised information.

If the learners don’t know how they are doing, they cannot really self-regulate effectively. And no, test scores, completion statistics and annual performance reviews are not enough. Learning is happening on a daily basis and the flow of information and feedback should be continuous. Thankfully, the technology to provide this sort of individual learning analytics and personalised dashboards is already available. For instance, xAPI and Learning Record Stores (LRS) enable us to store and retrieve this type of “big learning data” and make it available to the learners. Some tools even provide handy out-of-the-box dashboards.
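
For a flavour of what this looks like in practice, here’s a minimal Python sketch that records a learning event as an xAPI statement in an LRS. The actor/verb/object structure follows the xAPI specification, while the LRS URL and credentials are placeholders.

```python
# A minimal sketch: recording a learning event as an xAPI statement in an LRS.
# The actor/verb/object structure follows the xAPI spec; the LRS URL and
# credentials below are placeholders.
import requests

statement = {
    "actor": {"mbox": "mailto:learner@example.com", "name": "Learner"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/completed",
             "display": {"en-US": "completed"}},
    "object": {"id": "https://example.com/courses/feedback-skills",
               "definition": {"name": {"en-US": "Feedback Skills"}}},
}

resp = requests.post(
    "https://lrs.example.com/xapi/statements",  # placeholder LRS endpoint
    json=statement,
    headers={"X-Experience-API-Version": "1.0.3"},
    auth=("lrs_key", "lrs_secret"),             # placeholder credentials
)
resp.raise_for_status()
```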

On a final note, we do acknowledge that the immediate applications of “managerial” learning analytics likely provide greater initial value to any given organisation. And if you’re not already employing learning analytics to support your L&D decision making, you should start. However, once we go beyond that stage, providing access to personal learning analytics may be a good next step that also helps to facilitate a more modern learning culture in the organisation.

If you’re keen on learning analytics, whether on an organisational or a personal level, but think you need help figuring out what to do, we can help. Just drop us a note here, and let’s solve problems together.

Quantitative vs. Qualitative Data in Learning – Where’s the Value?

Corporate learning and development is becoming increasingly data-driven. On one hand, learning teams need to be able to track and assess all the different learning experiences. On the other hand, they also need to demonstrate business value. This requires smart use of data collection and analytics. While all efforts towards more data-driven strategies are a step in the right direction, we’ve noticed a strong bias towards quantitative data. While quantitative data is important, it’s not quite enough to effectively evaluate learning as a process. To achieve that, we’ll also need to pay attention to qualitative data in learning. To help clear up some of the ambiguity, let’s look at each of the two and how to use them smartly.

What is quantitative data in learning?

Quantitative data, by definition, is something that can be expressed in numerical terms. This type of learning data is used to answer questions such as “how many” or “how often”. In learning, organisations often use quantitative data in the form of tracking:

  • Enrolment rates
  • Completion rates
  • Time spent on learning
  • Quiz scores

Unfortunately, this is often where most organisations stop with their learning analytics capabilities. However, the problem is that this type of data tells us absolutely nothing about the learning efficacy or the process itself. While we can always employ better quantitative metrics, such as engagement rates, that will never be enough. In any case, we are always better off with access to qualitative data as well.

What is qualitative data in learning?

There are a lot of questions we cannot effectively answer with numbers; hence, we need qualitative data. Qualitative data, by definition, is non-numerical and used to answer questions such as “what”, “who” or “why”. Examples of qualitative data in learning could include:

  • How the learners progressed through the activities
  • The nature and topics of discussion between the learners
  • How employees accessed the learning
  • How the employees applied the learning on the job

It’s quite evident that these types of questions go a lot further in understanding behaviours, efficacy and the learning process as a whole. Without this kind of data, you effectively have no idea what works and what doesn’t. Or you may be able to see the effect (e.g. low completion or engagement rates) but have no idea of the underlying cause (e.g. irrelevant content, bad user experience). From a learning design perspective, these types of data points are immensely valuable.

How to use quantitative and qualitative learning data meaningfully?

So, in general, there’s a lot of untapped value in qualitative data and understanding the learning process on a holistic level. But of course, that doesn’t mean that you should forget about quantitative data either. Instead, you should always aim to validate ideas and insights derived from qualitative learning data through the use of quantitative data. How else would you know the impact of things at scale?

For instance, we think there’s a lot of value in employee discussions and sharing. These provide a great opportunity for organisations to source knowledge and transfer information between their layers. It often happens that employees in these learning discussions bring up their own best practices and work methods (qualitative) that even the L&D team is not aware of. However, to understand whether a practice can be applied across the organisation, we may need to run a survey or a poll to gauge the magnitude of the idea (quantitative).
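
To illustrate the quantitative validation step, here’s a minimal Python sketch that estimates how widespread a qualitatively discovered practice is from a simple yes/no poll, using a normal-approximation confidence interval. The numbers are made up.

```python
# A minimal sketch: estimating how widespread a qualitatively discovered
# practice is from a yes/no poll, with a normal-approximation 95% CI.
# All numbers are illustrative.
from math import sqrt

respondents = 240
said_yes = 87  # "I already use this work method"

p = said_yes / respondents
se = sqrt(p * (1 - p) / respondents)
low, high = p - 1.96 * se, p + 1.96 * se
print(f"Estimated adoption: {p:.1%} (95% CI {low:.1%} to {high:.1%})")
```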

Final words

Overall, we believe that a lot of the traditional “LMS metrics” are quite useless for anything other than compliance purposes (and even for that, there are better ways…). To really deliver great learning experiences, organisations need to understand learning as a process and not strip it down to simple numbers of how many people took part. In essence, companies need to focus more on the quality of their data and the ability to quantify the impact of insights derived from qualitative data in learning.

This often requires technical capabilities, such as the xAPI, but once again, buying technology is not enough. Rather, organisations have to understand the meaningful things to measure and cut through the noise. If your organisation needs help in that, or in crafting more data-driven learning strategies in general, we are happy to help. Just drop us a note here and tell us about your problem.

How to Use Social Analytics in Organisational Learning?

Nowadays, the HR and L&D functions of organisations are increasingly data-driven. Many employ analytics to aid in decision making processes and to try to analyse e.g. the effectiveness of learning initiatives. While there’s a lot of ways to use learning analytics, we found that organisations are underutilising a particular type of data. While digital learning platforms increasingly come with social features (walls, blogs, news feeds, etc.), not many are yet paying attention to how people use these social elements, and the potential implications for the organisation. Thus, here are three cases for using social analytics in organisational learning.

1. Measuring interactions between learners

If we want to understand learning on a holistic level, it’s important to also understand it granularly. Hence, one good use of social analytics is to analyse interactions between learners. Some example data points for these interactions could be:

  • How many times was a particular piece of content or user submission liked/shared?
  • The number of comments that a post or a piece of content attracted
  • How often/for how long are users interacting with each other?

The first two examples above could help you understand what kind of content works best or sparks the most discussion. The third could help in understanding how people collaborate with each other.

2. Measuring the quality of interactions and organisational influence

Naturally, quantitative data only gets us so far, and it’s important to understand the quality of the “social” as well. Empty comments that don’t contribute to the discussion are not likely to create value. Hence, organisations could consider using semantic analysis, powered by NLP algorithms, to gauge what is being talked about and whether the social discourse consists of real contributions or mere commenting. The benefits of semantic analysis are two-fold. It may, again, help you to spot problem areas in your content (e.g. when learners need to clarify concepts to each other). But perhaps more importantly, it can tell you who the “contributors” in your organisation are.
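
Real semantic analysis would involve proper NLP tooling, but as a crude stand-in, here’s a minimal Python sketch that scores comments by their TF-IDF similarity to the course content and labels them as contributions or mere commenting. The texts and the 0.2 threshold are illustrative assumptions.

```python
# A crude stand-in for semantic analysis: score comments by TF-IDF similarity
# to the course content and label them. Texts and the 0.2 threshold are
# illustrative; a real system would use proper NLP.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

course_text = "handling customer complaints with active listening and follow-up"
comments = [
    "Active listening really changed how I handle complaint calls",
    "Nice!",
    "We added a follow-up step to our complaint process after this module",
]

vec = TfidfVectorizer().fit([course_text] + comments)
course_vec = vec.transform([course_text])
for comment in comments:
    score = cosine_similarity(course_vec, vec.transform([comment]))[0, 0]
    label = "contribution" if score > 0.2 else "mere commenting"
    print(f"{score:.2f} {label}: {comment}")
```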

It’s also important to understand who is interacting and how. This level of analysis could be helpful in determining organisational influence. Who are the individuals with networks across the organisation, liked by their peers, or helping everyone? These people might go unnoticed if not for the social analytics, yet they could be among the future leadership potential in the organisation. Even if not, there’s a good chance they are local opinion leaders whom you could engage to execute your strategy in the future.
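
One simple way to approximate this kind of influence analysis is degree centrality over an interaction graph. Here’s a minimal sketch using the networkx library; the interaction pairs are illustrative and might in practice come from comments, likes or replies.

```python
# A minimal sketch: spotting potential opinion leaders with degree centrality
# over an interaction graph. The interaction pairs are illustrative and could
# come from comments, likes or replies between users.
import networkx as nx

interactions = [  # (from_user, to_user), e.g. "alice commented on bob's post"
    ("alice", "bob"), ("carol", "bob"), ("dan", "bob"),
    ("alice", "carol"), ("erin", "alice"),
]

graph = nx.Graph(interactions)
for user, score in sorted(nx.degree_centrality(graph).items(), key=lambda kv: -kv[1]):
    print(f"{user}: {score:.2f}")
```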

3. Sourcing ideas and innovation from the ground up

Finally, a potentially highly impactful application of social analytics is in sourcing information, ideas and innovation from within your own organisation. Often, the people doing a particular job have a lot of ideas on how to improve it. It’s just that these ideas rarely reach the top, due to organisational layers, bureaucracy, culture etc. Could social analytics help with that?

With social analytics, you could effectively set up a hidden knowledge collection tool. By analysing discussions, sharing and likes around content or user submissions, you could establish a direct flow of information from the front line all the way to the decision makers in the upper echelons of the organisation. The decision makers would see what kinds of work practices, ideas and methods gain the most traction, and then find ways of replicating them across the organisation. On a technical level, such flows are not hard to set up. Mostly, you just need quantitative data, or a combination of quantitative data and semantic analysis, depending on the case.

Final words

All in all, there’s a lot of under-utilised value in social analytics for workplace learning and organisational development purposes. As learning is fundamentally a social experience, this data helps in understanding the learning that is taking place. So, as you get deeper into the world of learning data, don’t just focus on traditional metrics like course completions. A more social data set might provide much better insights. And if you need help with social learning or hidden knowledge collection, we can help. Just contact us here.

How to Leverage Data in Training Needs Analysis?

The training needs analysis is a real staple in the corporate L&D field. Everyone does it, yet the real value-add is often ambiguous. The traditional ways of doing it are not employee-centric, which results in learning activities that are irrelevant and inconvenient rather than enabling. While extensive use of data to support the analysis is clearly the best direction to take, organisations often don’t understand how. Thus, we wanted to explain a few different ways you could leverage data in your training needs analysis.

Using search data to understand what your learners really need

One of the biggest problems in training needs analysis is that the people doing it often don’t really talk to the end users of the training. And naturally, they don’t have the time either: while it would be nice to sit down for a 1-on-1 with each learner, that’s often neither practical nor feasible. But what if we could have the learners talk to us anyway? That’s where data collection comes in handy.

Monitoring what your employees search for during their work can be a really good indicator of the types of things they need to learn. As much of workplace learning happens that way – employees searching for quick performance support resources – you should really aim to understand that behaviour. So, why not start playing Google? You should already have the capability to track search history on company devices or within your learning systems. These searches are highly contextual, as they happen within the direct context of learning or work. It’s just a matter of compiling this data and using it to support your training decisions.
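
As a small illustration, here’s a minimal Python sketch that counts recurring terms in hypothetical search logs to surface candidate training needs. The queries and stopword list are made up for the example.

```python
# A minimal sketch: counting recurring terms in search logs to surface
# candidate training needs. Queries and the stopword list are made up.
from collections import Counter

queries = [
    "how to export pivot table", "pivot table excel", "expense claim form",
    "pivot table tutorial", "gdpr data retention", "expense claim deadline",
]

stopwords = {"how", "to", "the", "a", "for"}
terms = Counter(
    word for q in queries for word in q.lower().split() if word not in stopwords
)
print(terms.most_common(5))  # recurring themes hint at training needs
```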

Using real-time learning data to identify organisational skill gaps

Another stream of data that you should be looking into when doing training needs analysis comes directly from the learning activities themselves. First of all, you should make sure that the learning data you collect is relevant and actually gives an accurate representation of learning. If you’re not yet using xAPI, start now. You’ll unlock a whole new level of analytical power.

Once you’ve got that covered, you should track that data across the board. This gives you access to individual-, group- and subject-matter-level insights. For subject matter (i.e. training topics), you’re better off tagging all your learning content appropriately. By keeping an up-to-date mapping of which learning experiences relate to which topics or competencies, you enable quick glances into your organisation’s learning. For instance, a skills heat map might aggregate this “tagging” data and learning data to give you a visual representation of the areas where your learners are lacking competence. Then you can start drilling down to the group and individual levels to determine why some are succeeding and some are not. This helps you craft better and much more personalised training activities and learning solutions.
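
Here’s a minimal sketch of the heat-map idea using pandas: tagged, per-group learning scores pivoted into a topic-by-group table. The records and tags are illustrative assumptions.

```python
# A minimal sketch of the heat-map idea: tagged, per-group learning scores
# pivoted into a topic-by-group table. Records and tags are illustrative.
import pandas as pd

records = pd.DataFrame([
    {"group": "sales", "topic": "negotiation", "score": 0.82},
    {"group": "sales", "topic": "crm_usage", "score": 0.41},
    {"group": "support", "topic": "negotiation", "score": 0.64},
    {"group": "support", "topic": "crm_usage", "score": 0.77},
])

heatmap = records.pivot_table(index="group", columns="topic", values="score")
print(heatmap)  # low cells flag competence gaps worth drilling into
```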

Using performance data to understand the business needs

Naturally, organisational learning should always support the business rather than inconvenience it. Therefore, it’s important to measure and understand performance. If you don’t keep track of performance, it’s impossible to measure real learning impact and consequently do effective training needs analysis. Performance data is everywhere, often scattered across the business in various systems and silos. Different departments might have their own data and some of it may be centralised. But whether it’s sales, marketing, customer facing staff, operations, finance or HR, the data is often there already. And it’s incredibly important to tap into this data, regardless of where it is.

However, one extremely important thing to note is not to use performance data in isolation. Rather, you should always compare it with your learning data. For instance, looking at performance data alone, you might see that the performance of department X is lacking. The easy answer would be to “assign” more training. However, looking at learning data could reveal that training has not solved the problem before, and that you should thus be looking at completely different solutions. Furthermore, you should always be careful about jumping to conclusions when linking learning to performance impact. The L&D department might see a performance improvement as a quick win, but a deeper cross-analysis with learning data could reveal that the improvement wasn’t actually caused by the training.

Final words

Overall, there are tremendous amounts and types of both learning- and non-learning data we can leverage in training needs analysis. The above provides just a few examples. With better analysis we can provide better learning experiences and positively impact business performance. To not leverage the vast amounts of data available to do that is simply foolish.

If you need help in crafting more data-driven learning strategies or adopting technology to do so, let’s talk. Just drop us a note here.

How to Use Learning Analytics? 3 Value-add Cases

As corporations become more data-driven in their decision making, learning & development has to follow suit. To make better decisions, you naturally need to collect a lot more learning data. But that alone isn’t enough. You also need the capabilities to analyse the data and understand what it means. While there’s a lot of ambiguity around corporate training analytics, and some organisations intentionally try to make it sound extremely difficult, it doesn’t have to be. To clear up some of that ambiguity, here are 3 different use cases for learning analytics that are applicable to organisations of all sizes.

1. How to use learning analytics to increase engagement?

One of the bottleneck issues in corporate learning today is engagement. It’s not always an easy task to put out learning experiences that resonate with the learners and keep them engaged. Naturally, your content has to be of good quality, and you should likely use a fair bit of interactivity. But once all that is said and done, you should unleash the analytics.

Through learning content analytics, we can get a much better understanding of our users. We can see which pieces of content are used the most or the least. We can also get an understanding of when and where learners tend to drop off, which then enables us to start figuring out why. Furthermore, we can drill down to each interaction between the learner and the content, instructors or other learners to really understand what is working and what is not. All of this (and a fair bit more!) enables us to constantly develop our learning experiences based on real information instead of gut feel and opinions. And when we make our content more relevant and to-the-point, a lot of the engagement tends to come naturally.
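
As a tiny example of drop-off analysis, here’s a minimal Python sketch that computes where learners fall out of a module sequence. The module names and completion counts are made up.

```python
# A minimal sketch: locating where learners drop off in a module sequence.
# Module names and completion counts are made up.
modules = ["intro", "case_study", "simulation", "final_quiz"]
started = {"intro": 200, "case_study": 180, "simulation": 95, "final_quiz": 90}

for prev, curr in zip(modules, modules[1:]):
    drop = 1 - started[curr] / started[prev]
    print(f"{prev} -> {curr}: {drop:.0%} drop-off")
# A spike (here before "simulation") shows where to start figuring out why.
```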

2. How to use learning analytics to personalise learning experiences?

Our professional learners – the employees – come with various skills, degrees of experience, education and backgrounds. As they certainly don’t represent a one-size sample, we shouldn’t be putting them through a one-size-fits-all learning experience either. As organisations have understood this, the hype around personalised learning has grown significantly over the past few years. But it’s not just hype; there’s real value to personalisation that learning analytics can help us unlock.

First of all, learning analytics helps us understand the different individuals and groups of learners in our organisation. By being able to drill down all the way to the level of an individual’s interactions, we can understand our learners’ needs and challenges much better. This enables us to cater to their various strengths, diverse learning histories and varying interests. Instead of providing a simple one-size-fits-all learning experience, we can use this information to design personalised learning paths for different groups, or even at the individual level. These learning paths can branch out and reconnect based on difficulty of content, experience, current job and various other factors. The learning experience thus becomes a spider’s web instead of a straight line, and you’ll be able to catch many more of your learners.

3. How to use learning analytics to prove the impact of learning?

Proving the impact or the ROI of learning is something that L&D professionals often struggle with. One of the reasons for the struggle is not using learning analytics. For learning results in terms of knowledge acquisition, a data-driven approach beats traditional multiple-choice testing or feedback forms by a long shot. Furthermore, it enables a much more formative way of assessment, thanks to all the data points collected and available.

But knowledge acquisition alone isn’t enough to demonstrate corporate learning impact. After all, what’s the learning good for if no one applies it? Thus, it’s imperative that we combine learning analytics with performance metrics and indicators. By doing this, we’ll get a lot closer to real learning results. For example, how did the sales training affect the sales staff’s routines, behaviours and performance? How much risky behaviour did the compliance training help to eliminate? Is our training on team management actually resulting in teams being managed better? By enabling this level of analytics, you can answer a lot more questions. Furthermore, you can start asking questions you were not even aware of before.
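
To sketch the combination of learning and performance data, here’s a minimal Python example that correlates hypothetical training scores with a performance indicator. As the text itself cautions, correlation alone doesn’t establish that the training caused the change.

```python
# A minimal sketch: correlating hypothetical training scores with a
# performance indicator. Correlation alone doesn't prove the training
# caused the change. Requires Python 3.10+ for statistics.correlation.
from statistics import correlation

training_score = [55, 60, 72, 78, 85, 90]          # per salesperson
sales_growth_pct = [1.0, 0.5, 2.1, 2.4, 3.0, 2.8]  # same people, same order

print(f"correlation: {correlation(training_score, sales_growth_pct):.2f}")
```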

In our work, learning analytics and data-driven approaches play a big part. While technology matters, there’s obviously more to it. For instance, you want to be sure you’re setting your corporate learning objectives in a way that enables this. If you’re looking to move towards more data-driven learning strategies or to understand your training impact better, we can probably help you. Just reach out to us here.

How to Use Formative Assessment in Corporate Learning?

Wherever there’s learning, there should always be assessment. Assessment naturally comes in many types and formats, but generally a good distinction to draw is that between summative assessment and formative assessment.

In simplified terms, summative assessment focuses on trying to gauge the learning outcomes at the end of the learning activity. Students may be compared against each other, and the assessment is a “high stakes” one. Formative assessment, on the other hand, attempts to measure learning throughout the learning activities. Formative evaluation can be considered less competitive – as the evaluation is based on criteria – and relatively “low stakes”.

How does formative assessment benefit corporate learning?

In our experience, a lot of corporate learning assessment is summative. L&D practitioners may dread the extra effort or may not even be familiar with formative practices. Furthermore, the prevalent tendency to develop slide-based courses with an exam at the end feeds into this behaviour. While building formative evaluation does require a bit more effort, the benefits tend to far outweigh the time investment.

Here are some of the benefits of formative assessment in corporate learning:

  • Trainers and L&D are able to recognise learning problems and skill gaps more effectively – on both individual and group levels
  • Learners are able to identify their own problem areas, self-correct and monitor their own progress
  • It provides valuable feedback to L&D to improve learning experiences and activities
  • It promotes active learning on the employees’ part
  • The focus shifts from achieving arbitrary outcomes (test scores, tick-box compliance etc.) to the learning process itself

In general, a well thought-out formative assessment approach helps all the stakeholders – trainers, learners and managers alike.

How to use formative assessment in practice?

Now that you’ve considered the benefits, here are some practical and highly manageable ways to improve your assessments.

The tools for formative assessment are plentiful, and the benefits are not limited to evaluation either. By replacing purely summative assessment with formative approaches, you’ll also be creating much more engaging and learner-centric experiences. Furthermore, the approach is more data-driven by nature, helping you to make more informed L&D decisions. So start investing the time in it!

If you need help on designing digitally enabled assessments to support your learning strategy, we are happy to help. Just contact us.

Omnichannel Learning – Steps Towards Unified Experiences

The concept of omnichannel comes from the retail sector, where retailers strive to provide a seamless, unified and personalised shopping experience across different channels, such as online, mobile and physical stores. Organisations that fail to utilise some of the individual channels, or to integrate them seamlessly, seem to struggle in business because of low customer engagement. While omnichannel is not much of a buzzword in the learning and development space, we should adopt the same ideology. After all, learning engagement, as well as tracking learning across different channels, is a challenge for many organisations. Here’s how we could move towards an omnichannel learning approach to tackle these problems.

Omnichannel learning starts with cross-platform functionality

We live in the era of learning apps. For almost every need, there’s an app. On top of that, you have your corporate LXP (or LMS) systems, learning portals, intranets and co-working platforms. The problem is that these systems often don’t communicate very well with each other. Your learner may complete a learning activity in a dedicated application, but that completion is in no way reflected in the content that e.g. your LMS pushes to him/her. Running multiple platforms easily results in an incredible amount of duplicate work and activities. Furthermore, it tends to hide information in silos within the confines of each platform.

The aim of successful omnichannel learning is to abolish the boundaries of individual platforms. While running a single learning platform for all learning needs would be ideal from a systems management standpoint, it’s often not feasible in reality. Hence, when you’re looking at “yet another app” to solve your learning challenges, you should pay attention to its interoperability possibilities with your existing infrastructure. An important aspect of that is the Application Programming Interfaces (APIs) the systems can use to fetch and receive information from each other.
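
As a rough sketch of what such interoperability might look like, here’s a minimal Python example that pulls completions from a stand-alone learning app’s API and mirrors them into an LMS. Both endpoints and payload fields are hypothetical; real systems each define their own APIs.

```python
# A minimal sketch: mirroring completions from a stand-alone learning app
# into an LMS over HTTP APIs. Both endpoints and payload fields are
# hypothetical; real systems each define their own APIs.
import requests

# 1. Fetch recent completions from the dedicated learning app
completions = requests.get(
    "https://learning-app.example.com/api/completions",  # hypothetical API
    params={"since": "2024-01-01"},
    timeout=10,
).json()

# 2. Mirror each completion into the LMS
for c in completions:
    requests.post(
        "https://lms.example.com/api/records",  # hypothetical API
        json={"user": c["user_id"], "activity": c["activity_id"],
              "status": "completed"},
        timeout=10,
    ).raise_for_status()
```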

Omnichannel learning should aim for a unified user experience

Another omnichannel feature that may be equally challenging to create is a unified user experience across platforms. If we use a retail analogy, the aim is not only for the mobile app to match the design of the responsive website/web application, but the physical environment (the retail stores) to match it as well. A seamless transition between online and offline will be key to deliver a great user experience and sustain engagement. Interestingly, the online to offline is a particular challenge in learning as well (more on that later).

This area of omnichannel learning is the one where running multiple platforms usually kills the game. However, with a bit of effort on visual and functional design, we can do quite a lot. Naturally, visual design, colour schemes etc. should match across platforms, as this is a low-effort, high-return situation. In terms of functionality, you’re better off if your applications follow a similar logic for accessing and consuming learning. Furthermore, you shouldn’t unreasonably restrict functionality on mobile platforms; otherwise you may lose a lot of engagement.

How do we collect uniform learning data from all the different channels – even offline?

First of all, to understand and further develop omnichannel learning experiences, we need comprehensive learning data. As we want to eliminate unnecessary overlaps in delivery, we need to grasp how the different channels work together. While each app or learning tool may very well have its own analytics, these don’t necessarily help with the bigger picture. A further major challenge is bringing offline (face-to-face) activities into the mix and collecting data from them. Thus, we need a unified framework for recording all the different learning activities, whether mobile, online or classroom-based.

Luckily, we already have the technological answer to the problem: the Experience API (xAPI). The xAPI specification enables us to track and collect uniform data from all learning activities, even offline ones, and pass it on to a single data store (a Learning Record Store) for analysis. It helps not only with learning analytics, but also enables a better understanding of content engagement and learner-centric design.

What about content development for omnichannel?

Finally, content development is an important topic in an omnichannel approach to learning. Naturally, all digital content should be fully responsive, so it can be accessed via a browser on all devices and wrapped into mobile applications for native use. Interoperability and accessibility are imperative, as the concept of omnichannel expands the “mobile learning paradigm” of “anytime, anywhere” to “any content, anytime, anywhere”.

Integrating this mode of operation into offline activities is again the biggest challenge. The approach requires a degree of flexibility from the trainers, coaches and mentors. They need to adapt their classroom content to form a natural continuum from the prior (digital) learning experiences. But thanks to xAPI and learning analytics, they nowadays have the power to understand each learner on a very individual level.

Are you delivering seamless and unified learning experiences across different channels? If you want to move away from siloed learning approaches, we can help. Our advisory services cover both technology implementations and strategic learning consulting. Just contact us.
