Knowledge Assessment in Corporate Learning – 5 Methods

Whenever we do training, it’s generally a good idea to include some kind of assessment. For organisations, proper knowledge assessment enables tracking of employee development and analysis of instructional efficacy. While it’s important to go beyond this level of assessment to capture real organisational impact, it’s vital to get the basics right. A challenge in corporate learning is that evaluation is often too immediate, intimidating and ineffective. Here are 5 methods that not only help in those aspects, but can also make testing more fun!

Continuous assessment with low-key quizzes

One of the challenges of assessment is that it’s often only administered after the fact. However, good evaluation should be continuous. Therefore, instead of saving the quizzes and tests until the end of the course or activity, distribute them throughout. This also helps you as the evaluator to spot learning challenges early and intervene accordingly. Furthermore, instead of a daunting battery of never-ending questions, use questions in small sets embedded in the content. This makes the whole thing a little more approachable, as continuous questioning feels more like exercises than formal testing.

Constant tracking of activities

Another, less quizzing-focused way of knowledge assessment is seamless tracking. The idea is to use comprehensive data collection tools, such as xAPI, to continuously collect engagement data on digital learning experiences. Formal testing is replaced by benchmark measures for user inputs and outputs, against which the analytics track learners. For instance, those who engage with a training video for its full length receive a “higher score” than those who don’t. Alternatively, those who have made contributions or reflections about the learning on the organisation’s social learning platforms receive higher marks than the rest. These are just a few examples, but the goal is to make evaluation as seamless and automatic as possible.
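
To make the idea concrete, here is a minimal Python sketch of the kind of benchmark scoring described above. The event records and point values are illustrative assumptions, not a real xAPI statement format:

```python
# Minimal sketch: scoring learners against engagement benchmarks.
# Event shapes and point values are illustrative assumptions.
events = [
    {"learner": "anna", "verb": "watched", "progress": 1.0},
    {"learner": "ben",  "verb": "watched", "progress": 0.4},
    {"learner": "anna", "verb": "commented"},
]

def engagement_score(learner: str) -> int:
    """Award points for full video views and social contributions."""
    score = 0
    for e in events:
        if e["learner"] != learner:
            continue
        if e["verb"] == "watched" and e.get("progress", 0) >= 1.0:
            score += 10  # engaged with the video for its full length
        elif e["verb"] == "commented":
            score += 5   # contributed on the social learning platform
    return score

print(engagement_score("anna"))  # 15
print(engagement_score("ben"))   # 0
```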

Scenario-based simulations as knowledge assessment tools

Training simulations are not only good for simulating real-life scenarios; they can also be used in highly practice-oriented assessment. This form of evaluation models real-life situations and application contexts of the content. Therefore, instead of just answering abstract questions, learners are able to apply the knowledge in a virtual environment. Depending on the training topic, you can assess multiple variables, e.g. speed, accuracy and confidence. The great thing about these simulations is that they can also make learners more confident in applying the skills in the real job environment, as they’ve got some practice under their belts.

Social analytics for social learners

If you’ve already implemented social learning tools in your organisation, there’s an interesting alternative to conventional quizzing. Relying on the notion that reflection is one of the most important parts of learning, social analytics can help us analyse interactions and provide a novel way of knowledge assessment. If you’ve implemented e.g. discussion boards, you could use analytics tools to evaluate learners based on the quantity and quality of discussion they bring in. For instance, simple counters can collect the number of comments by a particular learner. Similarly, other algorithms can determine the quality of those comments – whether they contribute to the discussion or not. If you already have a good learning culture, this could present an interesting alternative to some conventional assessment.
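
As a rough illustration, here is a sketch of quantity-plus-quality scoring for discussion comments. The word-count heuristic is a deliberately crude stand-in for a real NLP quality model, and the data is made up:

```python
# Minimal sketch: scoring discussion-board activity by quantity and a
# crude quality heuristic (a stand-in for a real NLP quality model).
comments = [
    {"author": "anna", "text": "Great post!"},
    {"author": "anna", "text": "We solved this by batching the reports weekly, which cut rework."},
    {"author": "ben",  "text": "+1"},
]

def quality(text: str) -> float:
    # Heuristic assumption: longer, more substantive comments score higher.
    return min(len(text.split()) / 10, 1.0)

scores = {}
for c in comments:
    s = scores.setdefault(c["author"], {"count": 0, "quality": 0.0})
    s["count"] += 1
    s["quality"] += quality(c["text"])

print(scores)  # anna: 2 comments, higher cumulative quality than ben
```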

Before-, after- and long-after quizzes

Finally, if nothing else, you should at least provide a knowledge assessment opportunity before and after a learning activity. This helps you gain insights into the development that happens. Furthermore, pre-tests can also serve as valuable data sources for instructors and designers, based on which to personalise the learning content. An interesting addition, however, is the “long-after quiz”. The problem with most post-training tests is that they’re too immediate: they tend to capture short-term recall rather than real learning. As the forgetting curve tells us, people tend to forget a lot over time. Therefore, introducing quizzes some time after the training serves the meaningful purpose of capturing the amount of knowledge that really stuck.
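
A quick sketch of the arithmetic this enables, with illustrative scores: a pre-test, an immediate post-test and a delayed “long-after” quiz let you separate the immediate gain from what actually stuck:

```python
# Minimal sketch: separating short-term recall from retained knowledge.
# Scores are fractions of the maximum; the numbers are illustrative.
def learning_metrics(pre: float, post: float, delayed: float) -> dict:
    return {
        "gain": round(post - pre, 2),          # immediate learning gain
        "retention": round(delayed - pre, 2),  # what actually stuck long-term
        "forgetting": round(post - delayed, 2) # lost since the post-test
    }

# Before, right after, and e.g. 8 weeks after the training.
print(learning_metrics(pre=0.35, post=0.85, delayed=0.60))
# {'gain': 0.5, 'retention': 0.25, 'forgetting': 0.25}
```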

Final words

Overall, good assessment is an art form of sorts. There’s no single right answer to what works best. As long as you’re working towards more formative assessment, you’re on the right track. Getting the basics right by asking good eLearning questions also helps a lot. However, this kind of knowledge assessment is only the beginning. We still need to understand how learning translates into action, and how action translates into performance. And it’s the latter two that pose the real challenge in corporate learning. In case you need help solving those challenges, or just building better corporate learning assessment, we’re happy to help. Just drop us a note here and tell us about your challenge.

Learning Technology Integrations – A Quick Guide

Often, a challenge in using information systems in complex organisations is that the systems don’t talk to each other. Information is scattered and outdated, transitioning between different systems is not easy, and it’s hard to get a unified view of what’s going on as data is spread across multiple silos in different formats. Hence, system integrations have become essential, and as more technologies emerge in L&D, the topic matters here too. Therefore, we put together a quick guide on the most relevant learning technology integrations you should know. Take a look!

Single Sign-on Integration

Single sign-on (SSO) is a basic learning technology integration, but a handy one. With SSO, your users are able to log in to the different learning technology systems using their existing company accounts. For instance, say you have Microsoft accounts that employees use to identify themselves. Instead of having to remember a new set of login credentials, employees are able to log in to other systems with those same accounts.

The benefits of SSO integration include improved user experience and security. Moving between different systems is much easier when you don’t have to log in separately. Also, fewer credentials mean better security. Furthermore, as the company controls the original credentials, security interventions can be swift: as soon as an employee’s account gets terminated, they lose access to all the other systems too.
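
For a flavour of what the service-provider side of SSO can look like, here is a minimal sketch using OpenID Connect ID tokens and the PyJWT library. The issuer, audience and key handling are assumptions; a real deployment would fetch signing keys from the identity provider’s JWKS endpoint:

```python
# Minimal sketch of the service-provider side of SSO with OpenID Connect.
# Issuer, audience and key handling are hypothetical placeholders.
import jwt  # PyJWT

def verify_sso_token(id_token: str, signing_key: str) -> dict:
    """Validate an ID token issued by the company identity provider."""
    claims = jwt.decode(
        id_token,
        signing_key,
        algorithms=["RS256"],
        audience="learning-platform",             # this app's client ID (assumed)
        issuer="https://login.example-corp.com",  # company IdP (assumed)
    )
    # If we get here, the token is valid: log the user in with these claims.
    return {"user": claims["sub"], "email": claims.get("email")}
```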

HR system integrations

If you’re using learning technologies, you most likely also have some kind of HR system. Another important learning technology integration happens between the two. The goal of such an integration is to update information at both ends automatically. For instance, the learning tool pulls personnel data from the HR system and assigns the user learning based on that information. Thus, whenever there’s a role change, you don’t need to manually assign new learning tasks. The learning technology tool can also push information back to the HR system – for instance, whenever an employee finishes a learning path, the tool sends that information to the HR system.

The benefit of this type of learning technology integration is the elimination of manual administrative tasks. There’s no longer a need to retrieve and upload e.g. Excel files between different systems. Furthermore, with good initial configuration, employees can automatically get access to learning resources based on their role, seniority, business unit, geography etc.
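
A minimal sketch of such a sync, assuming hypothetical REST APIs on both the HR system and the learning platform (real vendors expose their own, different endpoints):

```python
# Minimal sketch: assigning learning paths from HR data.
# The endpoints, payloads and role mapping are hypothetical.
import requests

ROLE_TO_PATHS = {
    "sales":   ["sales-onboarding", "crm-basics"],
    "finance": ["finance-onboarding", "compliance-2024"],
}

def sync_learner(employee_id: str) -> None:
    # Pull personnel data from the (assumed) HR system API.
    person = requests.get(
        f"https://hr.example-corp.com/api/employees/{employee_id}"
    ).json()
    # Assign learning based on role; no manual work on role changes.
    for path in ROLE_TO_PATHS.get(person["role"], []):
        requests.post(
            "https://lms.example-corp.com/api/assignments",
            json={"employee": employee_id, "learning_path": path},
        )
```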

LRS Integrations

A Learning Record Store (LRS) is a powerful tool based on the xAPI framework. It enables the collection of data from multiple systems under the same roof. For instance, you may have multiple LMS systems that all feed into this same data archive. Or you might feed in face-to-face training records, mobile app data and performance support tool data. While it may require some data operations, it’s also possible to pull data from non-learning systems, such as a performance management system or that HR system, into an LRS.

With this kind of learning technology integration, you can have all your training-related data, and much more, in the same format and in the same location. This makes effective learning analytics a lot easier. Hence, you’ll be able to get a better understanding and a bird’s-eye view of what’s happening in the entire organisation. Many LRS tools also come equipped with powerful dashboards and data tools.
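
For illustration, here is roughly what feeding a record into an LRS looks like. The statement structure follows the xAPI specification, while the LRS URL and credentials are placeholders:

```python
# Minimal sketch: sending an xAPI statement to an LRS over its REST API.
import requests

statement = {
    "actor":  {"mbox": "mailto:anna@example-corp.com", "name": "Anna"},
    "verb":   {"id": "http://adlnet.gov/expapi/verbs/completed",
               "display": {"en-US": "completed"}},
    "object": {"id": "https://lms.example-corp.com/courses/safety-101",
               "definition": {"name": {"en-US": "Safety 101"}}},
}

requests.post(
    "https://lrs.example-corp.com/xapi/statements",  # placeholder LRS endpoint
    json=statement,
    headers={"X-Experience-API-Version": "1.0.3"},
    auth=("lrs_key", "lrs_secret"),                  # placeholder credentials
)
```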

Webhook Integrations

Finally, webhooks are a type of integration that can sometimes prove handy. The fundamental idea is that a webhook notifies you when something happens in a system, and you can then create an automated response. In the context of learning technology integrations, there are several use cases. For instance, whenever a learner does something in App 1, do something in App 2. Or, when a group of learners has finished a learning experience, send an automatic report to their line manager.
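
A minimal sketch of the receiving end, using Flask; the payload shape and the downstream “App 2” endpoint are assumptions for illustration:

```python
# Minimal sketch of a webhook receiver: when "App 1" reports that a group
# finished a learning experience, notify the line manager via "App 2".
import requests
from flask import Flask, request

app = Flask(__name__)

@app.route("/webhooks/learning-completed", methods=["POST"])
def on_learning_completed():
    event = request.get_json()
    if event["type"] == "group_completed":
        # Automated response: send a report to the group's line manager.
        requests.post(
            "https://reports.example-corp.com/api/notify",  # hypothetical App 2
            json={"manager": event["manager"], "group": event["group"]},
        )
    return "", 204
```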

Webhooks are a good way of connecting systems and automating workflows. When running multiple systems and platforms, it’s easy to get lost in the administrative work. Designing these types of integrations and reactions in a smart way enables you to decrease that workload.

Final words

Overall, the future of learning is integrated. The different tools we use have to talk to each other. Otherwise, it all quickly becomes inefficient and redundant. Learning technology integrations are an important thing to consider whenever bringing new technology into the fold. Good integrations and automation protocols can significantly reduce the administrative workload that goes into managing learning tools or other systems.

Kaufman’s Learning Evaluation Model – Quick Overview

The field of corporate learning has a lot of different frameworks for evaluation. While not all of them are good or even necessary, some frameworks still provide good points of consideration and models for organising information. For instance, last week we took a look at the Success Case Method, which works best for capturing qualitative insights. This week, we decided to take a quick look at Kaufman’s learning evaluation model and see if it still provides valid contributions.

Kaufman’s Learning Evaluation Model briefly explained

Instead of providing an entirely new framework, Kaufman’s model aims to improve the commonly used Kirkpatrick’s 4 levels. The allegedly improved version introduces additional considerations by dividing Kirkpatrick’s level 1 into two and adding a fifth level. The levels, and the respective questions and considerations for modern L&D professionals, go as follows:

  1. Input – what kind of resources and learning materials do we have at our disposal that we can use to support the learning experience?
  2. Process – how’s the delivery of the learning experience? Is it accepted? How are people responding to it?
  3. Micro level results – Did the learner or the learning group acquire the knowledge? Did they apply it on their jobs?
  4. Macro level results – Did performance improve due to this learning and the application of the newly learnt in the workplace? What kind of benefits arose from the learning on an organisational level?
  5. Mega level impact – What kind of impact did the learning have on society or larger external stakeholder groups?

Reflection on the Kaufman model

As the original author proposed the model as an improvement over Kirkpatrick’s, we’ll make the comparison accordingly. The separation of input and process might be a good one to make. Nowadays, we have access to vast pools of digital resources, both in the public domain and sitting in corporate information systems. There are a lot of situations where organisations could leverage this information and these resources. For instance, curation-based learning content strategies might make more sense for some organisations. Hence, the introduction of inputs as a separate consideration might be a helpful change to some at the framework level.

Conversely, Kaufman groups Kirkpatrick’s levels 2 and 3 together. While these are just semantic changes, it’s within this section that organisations have their L&D challenges. Often, learning is not the problem, and people may retain the newly learnt quite well. But the problem often comes in application, or learning transfer, as people fail to use these new skills or practices back at their daily jobs. Consequently, that’s something that modern L&D professionals should also focus more on.

Finally, Kaufman’s learning evaluation model introduces the “mega level”, or societal impact. While it may be a valid consideration for a select few, presumably this impact would go hand-in-hand with the business results analysed at the “macro level”. And if not, we nevertheless encounter the immense difficulty of evaluating impact on external entities.

What’s in it for the L&D professional?

As with any of the prevalent frameworks or models for evaluating learning at the workplace, it’s important not to take things too seriously. These models do provide a good basis for structuring one’s approach to evaluation, but L&D professionals should still adjust them to fit the context of their particular organisation. It’s also noteworthy that all these models were built on a conception of formal learning; hence, they may fail to address more informal workplace learning. Regardless, the key takeaway from Kaufman’s learning evaluation model could be the notion of existing resources that can contribute to learning experiences. It’s not always necessary to reinvent the wheel after all!

If you’re looking for new ways of evaluating learning, especially learning transfer or business impact, drop us a note. We’d be happy to help you co-engineer evaluation methods that can actually demonstrate L&D’s value to the business.

How to Use Brinkerhoff’s Success Case Method in Workplace Learning

There are a lot of different frameworks that organisations use to evaluate the impact of their workplace learning initiatives. The Kirkpatrick model and the Phillips ROI model may be the most common ones. While Brinkerhoff’s Success Case Method is perhaps less known, it too can provide value when used correctly. In this post, we’ve compiled a quick overview of the method and how to use it to support L&D decisions in your organisation.

What’s Brinkerhoff’s Success Case Method?

The method is the brainchild of Dr. Robert Brinkerhoff. While many of its original applications relate to organisational learning and human resources development, the method is applicable to a variety of business situations. The aim is to understand impact by answering the following four questions:

  • What’s really happening?
  • What results, if any, is the program helping to produce?
  • What is the value of the results?
  • How could the initiative be improved?

As you may guess from the questions, the Success Case Method’s focus is on qualitative analysis and on learning from both successes and failures on a program level to improve for the future. On one hand, you’ll be answering what enabled the most successful to succeed, and on the other, what barred the worst performers from succeeding.

How to use the Brinkerhoff Method in L&D?

As mentioned, the focus of the method is on qualitative analysis. Therefore, instead of using large-scale analytics, the process involves surveys and individual learner interviews. By design, the method is not concerned with measuring “averages” either. Rather, the aim is to learn from the most resounding successes and the worst performances, and then either replicate or redesign based on that information.

So ideally, you’ll want to find just a handful of individuals from both ends of the spectrum. Well-designed assessment or learning analytics can naturally help you in identifying those individuals. When interviewing people, you’ll want to make sure that their view on what’s really happening can be backed with evidence. It’s important to keep in mind that not every interview will produce a “success case”, one reason being the lack of evidence. After all, you are going to be using the information derived with this method to support your decision making, so you’ll want to get good information.

Once you’ve established the evidence, you can start looking at results. How are people applying the newly learnt? What kind of results are they seeing? This phase requires great openness. Every kind of outcome and result is a valuable one for the sake of analysis, and they are not always the outcomes that you expected when creating the program. Often training activities may have unintended application opportunities that only the people on the job can see.

When should you consider using Brinkerhoff’s Success Case Method?

It’s important to acknowledge that while the method doesn’t work for everything, there are still probably more potential use cases than we can list. But the following situations are ones that, in our experience, benefit from such qualitative analysis.

  • When introducing a new learning initiative or a pilot. It’s always good to understand early on where a particular learning activity might be successful and where not. This lets you make changes, improvements and even pivots early on.
  • When time is of the essence. Quantitative data and insights take time to compile (assuming you have the necessary infrastructure already in place). Sometimes we need to prove impact fast. In such cases, using the Brinkerhoff method to extract stories from real learners helps to communicate impact.
  • Whenever you want to understand the impact of existing programs on a deeper level. You may already be collecting a lot of data. Perhaps you’re already using statistical methods and tools to illustrate impact on a larger scale. However, for the simple fact that correlation doesn’t mean causation, it’s sometimes important to engage in qualitative analysis.

Final thoughts

Overall, Brinkerhoff’s Success Case Method is a good addition to any L&D professional’s toolbox. It’s a great tool for extracting stories of impact, telling them forward and learning from past successes and failures. But naturally, there should be other things in the toolbox too. Quantitative analysis is equally important, and should be “played” in unison with the qualitative. Especially nowadays, when the L&D function is getting increased access to powerful analytics, it’s important to keep exploring beyond the surface level to make decisions as informed as possible to support the business.

If you are struggling to capture or demonstrate the impact of your learning initiatives, or if you’d like to start doing L&D in a more agile manner, let us know. We can help you implement agile learning design methods as well as analytical tools and processes to support the business.

How to Use Data to Support Face-to-face Training?

Organisational learning and development is becoming increasingly data-driven. This is fuelled by the need to demonstrate impact, be more effective and direct resources more efficiently. With the advent of new learning technologies and platforms – many of which come with built-in analytics capabilities – we are increasingly better equipped to measure all kinds of learning in a meaningful way. However, for the most part, the collection and especially the use of this data has been limited to only digital learning experiences. But there’s no reason to draw that kind of limitation. In fact, traditional face-to-face training could benefit greatly from having access to data and analytics. So, let’s explore how we could support face-to-face training with data!

Current challenges with face-to-face training

Face-to-face training has its fair share of challenges. On one hand, it’s rather expensive once you factor in all of the lost productivity and indirect costs. However, cost becomes less of an issue as long as you can demonstrate impact and value – that’s perhaps more of a business challenge. The real learning challenges, on the other hand, are related to the delivery.

Overall, face-to-face learning is not particularly personalised. Trainers are often not aware of the existing knowledge of the participants, let alone their personal context: jobs, tasks, challenges, problems, difficulties, team dynamics etc. Hence, the training – especially in subject-matter-intensive topics – often results in a more or less one-size-fits-all approach: the trainer goes through the slide deck, perhaps with a few participatory activities and some feedback at the end. Even if you’re an experienced trainer, it’s difficult to improvise and go off-course in the heat of the moment to pursue the emerging (personal) needs of the learners.

So, wouldn’t it make sense to put that information to good use and start supporting face-to-face training with data? Yes, it would. Here are two easy ways you can get a lot more out of your “classroom” sessions.

1. Determining existing knowledge and skill level with pre-work

One of the simplest things you can do to get more value out of your face-to-face training is to start using pre-work. Have your learners go through digital learning materials before coming to the session. Build in some seamless assessment and collect information in the form of user submissions and feedback. With good design and proper use of learning analytics, this already gives you a lot of valuable information.

As a trainer, you can then check e.g. what your learners already know and what they are having difficulties with. It probably doesn’t make sense to spend a lot of time in the classroom on things they already know. Rather, you’re better off using the time on addressing problem areas, challenges and personal experiences that have come out during the pre-work. Or if you want to explore making things even more impactful, try an approach like flipped learning. In flipped learning, you use digital to deliver the knowledge while focusing the classroom time solely on discussions, practice and hands-on activities.
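
As a simple illustration of how pre-work data can shape the session plan, here is a sketch that splits topics into “cover briefly” and “spend time on” buckets based on average pre-work scores (the data and threshold are made up):

```python
# Minimal sketch: turning pre-work quiz results into a classroom plan.
# Scores are per-topic averages across participants (illustrative data).
pre_work_scores = {
    "terminology":  0.92,
    "core process": 0.81,
    "edge cases":   0.44,
    "tooling":      0.38,
}

THRESHOLD = 0.7  # assumed cut-off for "already known"

skip  = [t for t, s in pre_work_scores.items() if s >= THRESHOLD]
focus = [t for t, s in pre_work_scores.items() if s < THRESHOLD]

print("Cover briefly:", skip)   # topics the group already knows
print("Spend time on:", focus)  # problem areas surfaced by the pre-work
```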

2. Using learning records history to understand the people you’re training

Another area where we could do better is understanding the people we deal with. As digital platforms compile more and more data about our learning experiences, it would be beneficial to let trainers access that data as well. At their best, these records provide a whole history of learning. By understanding prior experiences, the trainer can create scaffolding – build on what the employees already know from before. This prior knowledge might even be totally unrelated to the current topic.

Furthermore, having access to an “HR” history of the employees might be beneficial too, especially in large organisations where the trainer doesn’t necessarily know the people personally. For instance, what are the attendees’ jobs? Where do they work? Where have they worked before? In what kinds of roles? Information like this brings additional data points on which to personalise the learning experience. In some cases, you might even find that there’s a subject matter expert in the group, or someone who has dealt in practice with the issues the training covers. These could be assets to leverage that you might not even know about without the data.

Final thoughts

All in all, there’s a whole lot that data and analytics can offer to “traditional” training. The need for personalisation is real, and smart use of learning data helps to cater to that need. Of course, you can use data to support face-to-face training in many more ways; these are just two examples. For instance, post-session feedback is much handier to collect digitally. This feedback can then be used to improve future sessions on the same topic (or with the same participants).

If you feel you could do more with data and smart learning design, don’t hesitate to reach out. We can help you design blended learning experiences that deliver impact and value.

Personal Learning Analytics – Helping Learners with Their Own Data

Generally, the use of data and analytics in workplace learning is reserved for a small group of senior people. Naturally, learning analytics can provide a lot of value for that group. For instance, data-driven approaches to training needs analysis and measuring corporate learning are quickly gaining ground out of the need to prove the impact of workplace learning initiatives. However, there could be further use cases for those analytical powers. One of them is helping the learners themselves with personal learning analytics.

What is ‘personal learning analytics’?

As the title may give away, personal learning analytics is just that: individualised information made available to the learner. The major difference from conventional “managerial” analytics is that most of the information is about the learner in question. Whenever that’s not the case, the information of others is always anonymised. A few exceptions could include e.g. gamification elements which display user names and achievements. So, effectively, it’s all about giving users access to their own data and anonymised “averages”.

How can we use personal analytics to help learners?

One of the challenges in conventional approaches to workplace learning is that the process is not very transparent. Often, the organisation controls the information, and the learners may not even gain access. However, a lot of this information could help the learners. Here are a few examples.

  • Comparing performance against others. While cutthroat competition is probably not a good idea, and learners don’t necessarily want others to know how they fared, they can still benefit from being able to compare their performance against the group’s. That way, they’ll know if they’re falling behind and can adjust their effort or seek new approaches.
  • Understanding the individual learning process. All of us would benefit greatly from information about how we learn: how we have progressed, how we are developing, and how and when we engage with learning. Personal learning analytics can tell us about all of that. Progress information helps to keep us motivated, while engagement patterns help us to identify and build on existing behaviour.
  • Access to one’s learning history. We are learning all the time, and all kinds of things. However, we are not necessarily very good at keeping track ourselves. If we could just pull all that data into one place, we would have a real-time view into what we have learned in the past. Potentially, this could enable us to identify new skills and capabilities – something that the organisation would likely be interested in too.

Towards self-regulated learning

Across the globe, organisations are striving to become more agile in their learning. One key success factor for such transformation is the move towards more self-regulated learning. However, achieving that is going to be difficult without slightly more democratised information.

If the learners don’t know how they are doing, they cannot really self-regulate effectively. And no, test scores, completion statistics and annual performance reviews are not enough. Learning is happening on a daily basis and the flow of information and feedback should be continuous. Thankfully, the technology to provide this sort of individual learning analytics and personalised dashboards is already available. For instance, xAPI and Learning Record Stores (LRS) enable us to store and retrieve this type of “big learning data” and make it available to the learners. Some tools even provide handy out-of-the-box dashboards.
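
As a small illustration of the data behind such a dashboard, here is a sketch that computes a learner’s own averages next to an anonymised group benchmark (illustrative data, not a real LRS query):

```python
# Minimal sketch: the data behind a personal learning dashboard — the
# learner's own numbers next to an anonymised group average.
from statistics import mean

group_scores = {"anna": [0.8, 0.9, 0.7], "ben": [0.5, 0.6], "cara": [0.9, 0.95]}

def personal_view(learner: str) -> dict:
    own = group_scores[learner]
    # Anonymised benchmark: pooled scores of everyone else, no names exposed.
    others = [s for name, scores in group_scores.items()
              if name != learner for s in scores]
    return {
        "my_average": round(mean(own), 2),
        "group_average": round(mean(others), 2),
        "activities_completed": len(own),
    }

print(personal_view("ben"))  # shows ben he's below the anonymised average
```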

On a final note, we do acknowledge that the immediate applications of “managerial” learning analytics likely provide greater initial value to any given organisation. And if you’re not already employing learning analytics to support your L&D decision making, you should start. However, once we go beyond that stage, providing access to personal learning analytics may be a good next step that also helps to facilitate a more modern learning culture in the organisation.

If you’re eager about learning analytics, whether on an organisational or personal level, but think you need help in figuring out what to do, we can help. Just drop us a note here, and let’s solve problems together.

Quantitative vs. Qualitative Data in Learning – Where’s the Value?

Corporate learning and development is becoming increasingly data-driven. On one hand, learning teams need to be able to track and assess all the different learning experiences. On the other hand, they also need to demonstrate business value. This requires smart use of data collection and analytics. While all efforts towards more data-driven strategies are a step for the better, we’ve noticed a strong bias towards quantitative data. While quantitative data is important, it’s not quite enough to effectively evaluate learning as a process. To achieve that, we’ll need to also pay attention to qualitative data in learning. To help clear up some of the ambiguity, let’s look at each of the two and how to use them smartly.

What is quantitative data in learning?

Quantitative data, by definition, is something that can be expressed in numerical terms. This type of learning data is used to answer questions such as “how many” or “how often”. In learning, organisations often use quantitative data in the form of tracking:

  • Enrolment rates
  • Completion rates
  • Time spent on learning
  • Quiz scores

Unfortunately, this is where most organisations stop with their learning analytics capabilities. However, the problem is that this type of data tells us next to nothing about learning efficacy or the process itself. While we can always employ better quantitative metrics, such as engagement rates, those will never be enough. In any case, we are always better off with access to qualitative data as well.
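
For reference, these standard quantitative metrics are trivial to compute from an event log, which is part of why organisations stop there. A minimal sketch with made-up records:

```python
# Minimal sketch: the usual quantitative metrics from a simple event log.
events = [
    {"user": "anna", "event": "enrolled"}, {"user": "anna", "event": "completed"},
    {"user": "ben",  "event": "enrolled"},
    {"user": "cara", "event": "enrolled"}, {"user": "cara", "event": "completed"},
]

enrolled  = {e["user"] for e in events if e["event"] == "enrolled"}
completed = {e["user"] for e in events if e["event"] == "completed"}

print(f"Completion rate: {len(completed) / len(enrolled):.0%}")  # 67%
# Note: this tells us *how many* finished, but nothing about *why* ben didn't.
```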

What is qualitative data in learning?

There are a lot of questions that we cannot effectively answer with numbers; hence we need qualitative data. Qualitative data, by definition, is non-numerical and used to answer questions such as “what”, “who” or “why”. Examples of qualitative data in learning could include:

  • How the learners progressed through the activities
  • The nature and topics of discussion between the learners
  • How employees accessed the learning
  • How the employees applied the learning on the job

It’s quite evident that these types of questions go a lot further in understanding behaviours, efficacy and the learning process as a whole. Without this kind of data, you effectively have no idea what works and what doesn’t. Or you may be able to see the effect (e.g. low completion or engagement rates) but have no idea of the underlying cause (e.g. irrelevant content, bad user experience). From a learning design perspective, these types of data points are immensely valuable.

How to use quantitative and qualitative learning data meaningfully?

So, in general, there’s a lot of untapped value in qualitative data and understanding the learning process on a holistic level. But of course, that doesn’t mean that you should forget about quantitative data either. Instead, you should always aim to validate ideas and insights derived from qualitative learning data through the use of quantitative data. How else would you know the impact of things at scale?

For instance, we think there’s a lot of value in employee discussions and sharing. These provide a great opportunity for organisations to source knowledge and transfer information between its layers. It often happens that employees in these learning discussions bring up their own best practices and work methods (qualitative), that even the L&D team is not aware of. However, to understand if the practice can be applied across the organisation, we may need to do a survey or a poll to understand the magnitude of the idea (quantitative).

Final words

Overall, we believe that a lot of the traditional “LMS metrics” are quite useless for anything other than compliance purposes (and even for that, there are better ways…). To really deliver great learning experiences, organisations need to understand learning as a process and not strip it down to simple numbers of how many people took part. In essence, companies need to focus more on the quality of their data and the ability to quantify the impact of insights derived from qualitative data in learning.

This often requires technical capabilities, such as the xAPI, but once again, buying technology is not enough. Rather, organisations have to understand the meaningful things to measure and cut through the noise. If your organisation needs help in that, or in crafting more data-driven learning strategies in general, we are happy to help. Just drop us a note here and tell us about your problem.

How to Use Social Analytics in Organisational Learning?

Nowadays, the HR and L&D functions of organisations are increasingly data-driven. Many employ analytics to aid decision-making processes and to analyse e.g. the effectiveness of learning initiatives. While there are a lot of ways to use learning analytics, we’ve found that organisations are underutilising a particular type of data. Although digital learning platforms increasingly come with social features (walls, blogs, news feeds, etc.), not many are yet paying attention to how people use these social elements, and to the potential implications for the organisation. Thus, here are three cases for using social analytics in organisational learning.

1. Measuring interactions between learners

If we want to understand learning on a holistic level, it’s important to also understand it granularly. Hence, one good use of social analytics is to analyse interaction between the learners. Some example data points for these interactions could be:

  • How many times was a particular piece of content or user submission liked/shared?
  • The number of comments that a post or a piece of content attracted
  • How often/for how long are users interacting with each other?

The first two examples above could help you to understand what kind of content works the best or sparks the most discussion. The latter one could help in understanding how people collaborate with each other.

2. Measuring the quality of interactions and organisational influence

Naturally, quantitative data only gets us so far, and it’s important to understand the quality of the “social” as well. Empty comments that don’t contribute to the discussion are not likely to create value. Hence, organisations could consider using semantic analysis, powered by NLP algorithms, to gauge “what” is being talked about and whether the social discourse consists of contributions or mere commenting. The benefits of semantic analysis are two-fold. It may, again, help you to spot problem areas in your content (e.g. when learners need to clarify concepts to each other). But perhaps more importantly, it can tell you who the “contributors” in your organisation are.

It’s also important to understand “who” is interacting and “how” they interact. This level of analysis could be helpful in determining organisational influence. Who are the individuals with networks across the organisation, who are liked by their peers, or who help everyone? These people might go unnoticed if not for the social analytics, yet they could be among the future leadership potential in the organisation. Even if not, there’s a good chance that they are local opinion leaders whom you could enlist to execute your strategy in the future.
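
One hedged illustration of this kind of influence analysis: treating interactions as a graph and ranking people by degree centrality with networkx. The interaction pairs are made up, and a real analysis would use richer centrality measures:

```python
# Minimal sketch: spotting organisational influencers from interaction data.
import networkx as nx

interactions = [  # (commenter, author) pairs from the social platform
    ("ben", "anna"), ("cara", "anna"), ("dave", "anna"),
    ("anna", "ben"), ("cara", "dave"),
]

G = nx.Graph()
G.add_edges_from(interactions)

# Degree centrality: who is connected to the largest share of peers?
influence = nx.degree_centrality(G)
for person, score in sorted(influence.items(), key=lambda kv: -kv[1]):
    print(f"{person}: {score:.2f}")  # anna tops the list in this toy data
```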

3. Sourcing ideas and innovation from the ground up

Finally, a potentially highly impactful application of social analytics is in sourcing information, ideas and innovation from within your own organisation. Often, the people doing a particular job have a lot of ideas on how to improve. It’s just that these ideas rarely reach the top, due to organisational layers, bureaucracy, culture etc. Could social analytics help with that?

With social analytics, you could effectively set up a hidden knowledge collection tool. By analysing discussions, sharing and likes around content or user submissions, you could establish a direct flow of information from the line of duty all the way to the decision makers in the upper echelons of the organisation. The decision makers would see what kinds of work practices, ideas and methods gain the most traction, and then find ways of replicating them across the organisation. On a technical level, such flows are not hard to set up. Mostly, you just need quantitative data, or a combination of quantitative data and semantics, depending on the case.

Final words

All in all, there’s a lot of under-utilised value in social analytics for workplace learning and organisational development purposes. As learning is fundamentally a social experience, this data helps in understanding the learning that is taking place. So, as you get deeper into the world of learning data, don’t just focus on traditional metrics like course completions. A more social data set might provide much better insights. And if you need help with social learning or hidden knowledge collection, we can help. Just contact us here.

How to Leverage Data in Training Needs Analysis?

The training needs analysis is a real staple in the corporate L&D field. Everyone does it, yet the real value-add is ambiguous. The traditional ways of doing it are not employee-centric, which results in learning activities that inconvenience rather than enable. While extensive use of data to support that analysis is clearly the best direction to take, organisations often don’t understand how. Thus, we wanted to explain a few different ways you could leverage data in your training needs analysis.

Using search data to understand what your learners really need

One of the biggest problems in training needs analysis is that the people doing it don’t often really talk to the end users of the training. And naturally, they don’t have the time either. While it would be nice to sit down for a 1-on-1 with each learner, that’s often neither practical nor feasible. But what if we could have the learners talk to us anyway? That’s where data collection comes in handy.

Monitoring e.g. what your employees search for during their work can be a really good indicator of the types of things they need to learn. As most workplace learning happens that way – employees searching for quick performance support resources – you should really aim to understand that behaviour. So, why don’t you start playing Google? You should already have the capability to track search history on company devices or within your learning systems. These searches are highly contextual, as they happen within the direct context of learning or work. It’s just a matter of compiling this data and using it to support your training decisions.
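
A minimal sketch of the idea: aggregating a search log to surface recurring needs. The log and the crude term counting are illustrative; a real pipeline would normalise and cluster queries:

```python
# Minimal sketch: "playing Google" — mining workplace search queries for
# training needs. The query log is illustrative.
from collections import Counter

search_log = [
    "how to export pivot table",
    "pivot table refresh error",
    "crm duplicate contact merge",
    "pivot table from multiple sheets",
]

# Count recurring terms as a rough proxy for recurring needs.
terms = Counter(word for query in search_log for word in query.split()
                if len(word) > 3)
print(terms.most_common(3))  # 'pivot' and 'table' point to a training need
```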

Using real-time learning data to identify organisational skill gaps

Another stream of data that you should be looking into when doing training needs analysis comes directly from the learning activities themselves. First of all, you should make sure that the learning data you collect is relevant and actually gives an accurate representation of learning. If you’re not yet using xAPI, start now. You’ll unlock a whole new level of analytical power.

Once you’ve got that covered, you should track that data across the board. This gives you access to individual-, group- and subject-matter-level insights. For subject matter (i.e. training topics), you’re better off tagging all your learning content appropriately. By having an up-to-date representation of which learning experiences relate to which topics or competencies, you enable quick glances into your organisation’s learning. For instance, a skills heat map might aggregate this tagging data and learning data to give you a visual representation of the areas in which your learners lack competence. Then, you can start drilling down to the group and individual levels to determine why some are succeeding and some are not. This helps you to craft better and much more personalised training activities and learning solutions.
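
As a sketch of the heat-map aggregation, assuming records that combine a content tag with an assessment score (illustrative data, using pandas):

```python
# Minimal sketch: a skills heat map from tagged learning data.
import pandas as pd

records = pd.DataFrame([
    {"team": "sales",   "topic": "negotiation", "score": 0.82},
    {"team": "sales",   "topic": "compliance",  "score": 0.41},
    {"team": "support", "topic": "negotiation", "score": 0.55},
    {"team": "support", "topic": "compliance",  "score": 0.77},
])

# Rows = teams, columns = tagged topics, cells = average competence.
heatmap = records.pivot_table(index="team", columns="topic",
                              values="score", aggfunc="mean")
print(heatmap)  # low cells (e.g. sales x compliance) flag skill gaps
```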

Using performance data to understand the business needs

Naturally, organisational learning should always support the business rather than inconvenience it. Therefore, it’s important to measure and understand performance. If you don’t keep track of performance, it’s impossible to measure real learning impact and consequently do effective training needs analysis. Performance data is everywhere, often scattered across the business in various systems and silos. Different departments might have their own data and some of it may be centralised. But whether it’s sales, marketing, customer facing staff, operations, finance or HR, the data is often there already. And it’s incredibly important to tap into this data, regardless of where it is.

However, one extremely important thing to note is not to use performance data in isolation. Rather, you should always compare it with your learning data. For instance, looking at performance data alone, you might see that the performance of department X is lacking. The easy answer would be to “assign” more training. However, looking at learning data could reveal that training has not solved the problem before, and thus you should be looking at completely different solutions. Furthermore, you should always be careful about jumping to conclusions when linking learning to performance impact. Again, the L&D department might claim a performance improvement as a quick win, but a deeper cross-analysis with learning data could reveal that the improvement wasn’t actually caused by the training.

Final words

Overall, there are tremendous amounts and types of both learning- and non-learning data we can leverage in training needs analysis. The above provides just a few examples. With better analysis we can provide better learning experiences and positively impact business performance. To not leverage the vast amounts of data available to do that is simply foolish.

If you need help in crafting more data-driven learning strategies or adopting technology to do so, let’s talk. Just drop us a note here.

How to Use Learning Analytics? 3 Value-add Cases

As corporations become more data-driven in their decision making, learning & development has to follow suit. To make better decisions, you naturally need to collect a lot more learning data. But that alone isn’t enough: you also need the capabilities to analyse the data and understand what it means. While there’s a lot of ambiguity about corporate training analytics, and some organisations intentionally try to make it sound extremely difficult, it doesn’t have to be. To clear up some of that ambiguity, here are 3 different use cases for learning analytics that are applicable to organisations of all sizes.

1. How to use learning analytics to increase engagement?

One of the bottleneck issues in corporate learning today is engagement. It’s not always an easy task to put out learning experiences that resonate with the learners and keep them engaged. Naturally, your content has to be of good quality, and you should likely use a fair bit of interactivity. But once all that is said and done, you should unleash the analytics.

Through learning content analytics, we can get a much better understanding of our users. We can see which pieces of content are used the most or the least. We can also get an understanding of ‘when’ and ‘where’ learners tend to drop off, which then enables us to start figuring out ‘why’. Furthermore, we can drill down to each interaction between the learner and content, instructors or other learners to really understand what is working and what is not. All of this (and a fair bit more!) enables us to constantly develop our learning experiences based on real information instead of gut feel and opinions. And when we make our content more relevant and to-the-point, a lot of the engagement tends to come naturally.
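
To illustrate the drop-off part, here is a minimal sketch that turns “furthest step reached” data into a simple funnel (the module steps and learner data are made up):

```python
# Minimal sketch: locating where learners drop off in a module.
steps = ["intro", "video", "exercise", "quiz"]
furthest_step = {"anna": "quiz", "ben": "video", "cara": "video", "dave": "exercise"}

reached = {step: 0 for step in steps}
for step in furthest_step.values():
    # A learner who reached a step also passed every earlier step.
    for s in steps[: steps.index(step) + 1]:
        reached[s] += 1

for step in steps:
    print(f"{step}: {reached[step]}/{len(furthest_step)} learners")
# The sharp drop after 'video' tells us where to start asking 'why'.
```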

2. How to use learning analytics to personalise learning experiences?

Our professional learners – the employees – come with various skills, degrees of experience, education and backgrounds. As they certainly don’t represent a one-size sample, we shouldn’t be putting them through a one-size-fits-all learning experience either. As organisations have understood this, the hype around personalised learning has grown significantly over the past few years. But it’s not just hype; there’s real value to personalisation that learning analytics can help us unlock.

First of all, learning analytics help us to understand the different individuals and groups of learners in our organisation. By being able to drill down all the way to the level of an individual’s interactions, we can understand our learners’ needs and challenges much better. This enables us to cater to their various strengths, diverse learning histories and varying interests. Instead of providing a simple one-size-fits-all learning experience, we can use this information to design personalised learning paths for different groups, or even down to the individual level. These learning paths can branch out and reconnect based on difficulty of content, experience, current job and various other factors. The learning experience thus becomes a spider’s web instead of a straight line, and you’ll be able to catch much more of your learners.
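
A minimal sketch of what such branching can look like as rules, with made-up attributes, thresholds and path names:

```python
# Minimal sketch: branching learning paths from learner attributes.
# The rules and path names are illustrative, not a prescribed design.
def assign_path(learner: dict) -> list[str]:
    path = []
    if learner["prior_score"] < 0.5:
        path.append("foundations")  # branch out for novices
    path.append("core-module")      # everyone reconnects here
    if learner["role"] == "manager":
        path.append("applying-it-with-your-team")
    return path

print(assign_path({"prior_score": 0.3, "role": "manager"}))
# ['foundations', 'core-module', 'applying-it-with-your-team']
print(assign_path({"prior_score": 0.9, "role": "analyst"}))
# ['core-module']
```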

3. How to use learning analytics to prove the impact of learning?

Proving the impact or ROI of learning is something that L&D professionals often struggle with. One of the reasons for the struggle is not using learning analytics. For learning results in terms of knowledge acquisition, a data-driven approach beats the traditional multiple-choice testing or feedback forms by a long shot. Furthermore, it enables a much more formative way of assessment, thanks to all the data points collected and available.

But knowledge acquisition alone isn’t enough to demonstrate corporate learning impact. After all, what’s the learning good for if no one applies it? Thus, it’s imperative that we combine learning analytics with performance metrics and indicators. By doing this, we’ll get a lot closer to real learning results. E.g. how did the sales training affect the sales staff’s routines, behaviours and performance? How much of the risky behaviour did the compliance training help to eliminate? Is our training on team management actually resulting in teams being managed better? By enabling this level of analytics, you can answer a lot more questions. Furthermore, you can also start asking questions that you were not even aware of.

In our work, learning analytics and data-driven approaches play a big part. But while technology is important, there’s obviously more to it. For instance, you want to be sure that you’re setting your corporate learning objectives in a way that enables this. If you’re looking to move to more data-driven learning strategies or to understand your training impact better, we can probably help you. Just reach out to us here.
