4 Levels of Analytics in L&D and How They Create Value

The learning analytics landscape is buzzing. Thanks to digital tools and technologies, the capabilities to track, evaluate and assess the impact of learning have multiplied. This enables organisations to understand not only the learning process, but also the impact of learning on the business itself. However, there are also a lot of misconceptions around analytics. We’ve seen a worrying tendency to paint a picture of deep analytics where the real capabilities don’t extend beyond rudimentary statistics. To clarify some of the possibilities in this space, we put together this look at the different levels of analytics in L&D. Let’s take a look!

1. Descriptive analytics: ‘what’

Descriptive analytics, by definition, focus on “what happened”. While there’s a lot of hype in the space, most “analytics” still constitute just this type. One could argue that a lot of descriptive analytics is not actually analytics, but rather simple statistics. These, of course, are usually displayed in a visual and digestible dashboard format, reinforcing the perception of analytical power.

As mentioned, the focus is on phenomena and their magnitude. Typical examples of descriptive analytics include how many employees completed training, how long it took them and how they engaged with the learning resources. Although the analysis part is limited, there’s still value to be had in this kind of analytics in L&D. A lot of these figures provide a good basis for reporting, and engagement statistics can even help to improve the quality of learning resources. However, as this data is mostly quantitative, you shouldn’t forget to complement it with qualitative insights to get the complete picture.

2. Diagnostic analytics: ‘why’

Whereas descriptive analytics paint a part of the picture, diagnostic analytics help to complete it. In general, this type of analytics aims to answer the question “why did it happen”. The focus, therefore, is on the underlying reasons behind the phenomena described above.

Overall, there can be incredible value in understanding the ‘why’. For instance, why did the learners pass on an activity? Why did the learning not translate into action? Why is a particular learning experience successful? While descriptive information is important, it’s often useless unless we understand the why. By understanding the relationships between different factors, we can make better learning – and business – decisions.

3. Predictive analytics in L&D

While the segment of predictive analytics is not entirely black and white (diagnostics, for instance, may contain generic predictive elements), we’ll deal with it as one segment. As the name gives away, predictive analytics deal with the future. In general, the aim is to answer the question: “what will happen”. This focus makes it a powerful decision-making support tool not only for L&D teams, but for the business as a whole.

For instance, predictive analytics in L&D can provide valuable insights on the expected outcome of training, i.e. what kind of effect or impact we can expect. It’s also possible to predict trends, e.g. which departments are on the rise and which are regressing. On a more granular level, it can also help trainers and L&D professionals to determine which learners may be at risk and intervene early, rather than after the fact. Another interesting value scenario could be to predict individuals’ potential in relation to their performance in learning, something one could use e.g. in leadership pipeline planning.
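
To make this concrete, here’s a minimal sketch of what such at-risk prediction could look like, using a simple logistic regression in Python. The engagement features, training data and risk threshold are hypothetical assumptions for illustration, not a prescribed model:

```python
# A minimal sketch: flagging at-risk learners with logistic regression.
# The features (weekly logins, quiz average, days inactive) and all data
# are hypothetical assumptions for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical learners: [weekly_logins, avg_quiz_score, days_inactive]
X_train = np.array([
    [5, 0.85, 1],
    [4, 0.78, 2],
    [1, 0.55, 14],
    [0, 0.40, 30],
    [3, 0.70, 5],
    [0, 0.35, 21],
])
# 1 = failed to complete the programme, 0 = completed
y_train = np.array([0, 0, 1, 1, 0, 1])

model = LogisticRegression().fit(X_train, y_train)

# Score current learners and flag anyone above a chosen risk threshold
current = np.array([[2, 0.60, 10], [5, 0.90, 0]])
risk = model.predict_proba(current)[:, 1]
for learner, p in zip(["Learner A", "Learner B"], risk):
    flag = "at risk" if p > 0.5 else "on track"
    print(f"{learner}: {p:.0%} dropout risk ({flag})")
```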

4. Prescriptive analytics in L&D

Finally, the fourth level of learning analytics in this mapping of ours is prescriptive analytics. Whereas predictive analytics focus on what the future is likely to look like, prescriptive analytics focus on “how to make it happen”. Similar to the previous level, these analytical tools tend to offer businesses significant support in their decision making. Just like a doctor, the algorithms prescribe a particular course of action to fulfil a given goal.

In the realm of L&D, prescriptive analytics can come in handy on many fronts. One application is to provide recommendations on learning interventions. For instance, the algorithms can calculate the optimal learning paths for different groups or individuals, and identify suitable resources or courses for them to take to progress. Furthermore, these tools also enable scenario analysis, e.g. how to best roll out certain programs. Overall, the goal is optimisation across the board, and the analytics provide the recommended courses of action to achieve that.

Final words

Overall, all the different levels of analytics can provide value to learning organisations, although the value tends to increase with the sophistication of the analytics. The development in the space is rapid, and we are constantly finding new ways of capturing learning impact and delivering value through learning. Tools like learning big data and artificial intelligence are necessary pieces of the puzzle nowadays, enabling us to constantly develop even smarter solutions. If you’re looking to get your L&D analytics strategy up to speed and visualise the real impact of learning in your organisation, don’t hesitate to drop us a note. Let’s take on the future of learning together.

Artificial Intelligence Tools for L&D – Examples & Uses

The advent of artificial intelligence brings significant analytical power that corporate L&D can take advantage of. While AI technologies are generally nothing new, the significant increase in computing power has made the rapid development of recent years possible. And although strong, all-powerful AI remains a dream, there are plenty of practical applications for the technology. Here are three examples and specific use cases for different AI tools in L&D.

Recommendation engines & algorithms

One of the most commonly implemented AI tools in L&D is the recommendation engine. Most often used for recommending content, the engine analyses the context of an individual learner and aims to offer a personalised curation of learning resources based on the materials available. However, it’s worth noting that these types of recommendation engines have existed for a long time, even without AI.

Whereas content recommendation works on a relatively micro level, it’s possible to use the same principles on a wider spectrum. Some of the more advanced recommendation algorithms and AI tools don’t just recommend content, but can also extend to recommending different interventions and courses of action for the L&D team. For instance, the algorithms can provide suggestions on learning paths for different groups.
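
To illustrate the core mechanism, here’s a minimal collaborative-filtering sketch in Python: courses completed by similar learners drive the recommendation. The completion matrix and course names are made-up assumptions:

```python
# A minimal sketch of collaborative filtering for course recommendations.
# Rows are learners, columns are courses; 1 = completed. All data is
# hypothetical, for illustration only.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

courses = ["Excel", "Negotiation", "Python", "Leadership"]
completions = np.array([
    [1, 1, 0, 0],   # learner 0
    [1, 1, 1, 0],   # learner 1 (similar to 0, also took Python)
    [0, 0, 1, 1],   # learner 2
])

target = 0
sim = cosine_similarity(completions)[target]   # similarity to everyone
sim[target] = 0                                # ignore self-similarity

# Score each course by similarity-weighted completions of other learners
scores = sim @ completions
scores[completions[target] == 1] = 0           # don't re-recommend

best = np.argmax(scores)
print(f"Recommend '{courses[best]}' to learner {target}")
# -> Recommend 'Python' to learner 0
```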

Grouping algorithms

Another great example of AI tools suitable for L&D is grouping algorithms. While they constitute a very basic form of machine learning, these algorithms can be a powerful tool. Essentially, the algorithms analyse different individuals or groups (e.g. business units, departments, locations) and their attributes. For instance, the algorithms could detect groups with similar recommended learning paths. Consequently, L&D could use these groups that cut across the organisation as a basis for organising learning, rather than arbitrary divisions.

Furthermore, another use case is to use similar grouping algorithms to group people based on their ability. This type of use would detect individuals’ and groups’ common existing capabilities, and propose reorganisations based on that. In practice, this would enable further personalisation of learning by dividing the organisation into groups and offering each group content of the optimal difficulty and depth.
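
As a minimal sketch, this kind of grouping can be done with a standard clustering algorithm such as k-means. The skill-score features and the number of groups below are illustrative assumptions:

```python
# A minimal sketch of grouping learners with k-means clustering.
# The skill-score features and group count are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

# Each row is a learner: [technical_score, communication_score]
learners = np.array([
    [0.9, 0.3], [0.8, 0.2],   # strong technical, weaker communication
    [0.2, 0.9], [0.3, 0.8],   # the opposite profile
    [0.5, 0.5], [0.6, 0.4],   # balanced
])

groups = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(learners)
for learner, group in enumerate(groups):
    print(f"Learner {learner} -> group {group}")
# Each group could then receive its own learning path and difficulty level.
```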

Predictive analytics and modelling

Another great use of AI tools in L&D is on the analytics front. While there are several uses for learning analytics, AI makes possible much more than we are used to. Instead of simply reporting descriptive analytics, AI enables us to get into diagnostic, predictive and prescriptive analytics. Diagnostics generally aim to answer why certain things happened (e.g. why did learning results drop?). While that in itself is incredibly valuable information from an organisational development perspective, there’s still more to unlock.

Predictive analytics enable us to answer questions about potential impact (e.g. “what will happen if we get learning engagement to increase by 20%?”). This enables organisations to run “what if” analyses and supports them in identifying the areas of L&D where it’s possible to make the most impact. Prescriptive analytics, on the other hand, do a similar thing centring around inputs (e.g. “what do we need to do to raise learning engagement by 20%?”). While these kinds of analytical capabilities require significant commitment to measurement and defining relevant parameters, they provide a tool for L&D to demonstrate its impact to the business that hasn’t existed before.
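
A “what if” analysis can start as simply as fitting a model to historical data and querying it with a hypothetical input. Here’s a minimal sketch; the figures and the assumed linear relationship between engagement and performance are illustrative only:

```python
# A minimal sketch of a "what if" analysis: a simple linear model of
# performance as a function of learning engagement. The data and the
# assumed linear relationship are illustrative only.
import numpy as np
from sklearn.linear_model import LinearRegression

# Historical monthly figures: engagement rate -> team performance index
engagement = np.array([[0.40], [0.45], [0.50], [0.55], [0.60]])
performance = np.array([68, 71, 73, 76, 79])

model = LinearRegression().fit(engagement, performance)

current = 0.50
scenario = current * 1.2   # "what if engagement increases by 20%?"
predicted = model.predict([[scenario]])[0]
print(f"Engagement {scenario:.0%} -> predicted performance {predicted:.1f}")
```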

Final words

While AI is currently suffering from a degree of inflation thanks to its buzzword status, there are a lot of great AI tools for L&D out there and in the making. These tools not only enable learning professionals to offer better learning experiences, but also to understand the impact of learning. There’s also big potential in automating a lot of the conventional information gathering. This, in turn, should enable L&D teams to focus on their core competence: delivering great learning. If you’re interested in the different possibilities AI can offer and how to use AI in organisational development, contact us here. We’d be happy to share some of our experiences, examples and research.

Knowledge Assessment in Corporate Learning – 5 Methods

Whenever we do training, it’s generally a good idea to include some kind of assessment. For organisations, proper knowledge assessment enables tracking employee development and analysing instructional efficacy. While it’s important to go beyond this level of assessment to capture real organisational impact, it’s vital to get the basics right. A challenge in corporate learning is that evaluation is often too immediate, intimidating and ineffective. Here are five methods that not only help on those fronts, but can also make testing more fun!

Continuous assessment with low-key quizzes

One of the challenges of assessment is that it’s often administered only after the fact. However, good evaluation should be continuous. Therefore, instead of saving the quizzes and tests until the end of the course or activity, distribute them throughout. This also helps you as the evaluator to spot learning challenges early and intervene accordingly. Furthermore, instead of a daunting battery of never-ending questions, use small sets of questions embedded in the content. This makes the whole thing more approachable, as continuous questioning feels more like exercises than formal testing.

Constant tracking of activities

Another, less quiz-focused way of doing knowledge assessment is seamless tracking. The idea is to use comprehensive data collection tools, such as xAPI, to continuously collect engagement data on digital learning experiences. Formal testing is replaced by benchmark measures for user inputs and outputs, against which the analytics track learners. For instance, those who engage with a training video for its full length receive a “higher score” than those who don’t. Alternatively, those who have contributed reflections about the learning on the organisation’s social learning platforms receive higher marks than the rest. These are just a few examples, but the goal is to make evaluation as seamless and automatic as possible.
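
In practice, events like these are recorded as xAPI statements and sent to a Learning Record Store (LRS). Here’s a minimal sketch in Python; the LRS endpoint, credentials and activity IDs are placeholder assumptions:

```python
# A minimal sketch of recording a "watched the full video" event as an
# xAPI statement. The LRS endpoint, credentials and activity IDs are
# placeholder assumptions.
import requests

LRS_URL = "https://lrs.example.com/xapi/statements"   # hypothetical LRS
AUTH = ("lrs_user", "lrs_password")                   # placeholder credentials

statement = {
    "actor": {
        "mbox": "mailto:jane.doe@example.com",
        "name": "Jane Doe",
        "objectType": "Agent",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "https://example.com/activities/safety-video",
        "definition": {"name": {"en-US": "Safety training video"}},
    },
    "result": {"duration": "PT4M30S"},  # watched the full 4.5-minute video
}

response = requests.post(
    LRS_URL,
    json=statement,
    auth=AUTH,
    headers={"X-Experience-API-Version": "1.0.3"},
)
print(response.status_code)  # 200 with the new statement ID on success
```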

Scenario-based simulations as knowledge assessment tools

Training simulations are not only good for simulating real-life scenarios; they can also be used for highly practice-oriented assessment. This form of evaluation models real-life situations and the application contexts of the content. Therefore, instead of just answering abstract questions, the learners are able to apply the knowledge in a virtual environment. Depending on the training topic, you can assess multiple variables, e.g. speed, accuracy and confidence. The great thing about these simulations is that they can also make learners more confident in applying the skills in the real job environment, as they’ve got some practice under their belts.

Social analytics for social learners

In case you’ve already implemented social learning tools in your organisation, there’s an interesting alternative to conventional quizzing. Relying on the notion that reflection is one of the most important parts of learning, social analytics can help us to analyse interactions and provide a novel way of knowledge assessment. If you’ve implemented e.g. discussion boards, you could use analytics tools to evaluate learners based on the quantity and quality of discussion they bring in. For instance, simple counters can track the quantity of comments by a particular learner, while other algorithms can determine the quality of those comments: whether they contribute to the discussion or not. If you already have a good learning culture, this could present an interesting alternative to some forms of assessment.

Before-, after- and long-after quizzes

Finally, if nothing else, you should at least provide a knowledge assessment opportunity before and after a learning activity. This helps you gain insights into the development that happens. Furthermore, pre-tests can also serve as valuable data sources that instructors and designers can use to personalise the learning content. An interesting addition is the “long-after quiz”. The problem with most post-training tests is that they’re too immediate: they tend to capture short-term recall rather than real learning. As the forgetting curve tells us, people tend to forget a lot over time. Therefore, introducing quizzes some time after the training serves the meaningful purpose of capturing the amount of knowledge that really stuck.
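
As a minimal sketch, comparing the averages of the three quiz rounds already yields a rough retention figure; the scores below are hypothetical:

```python
# A minimal sketch of comparing before, after and long-after quiz
# averages to estimate how much knowledge actually stuck. All scores
# are hypothetical.
scores = {
    "before":     [0.35, 0.40, 0.30],
    "after":      [0.85, 0.80, 0.90],
    "long_after": [0.70, 0.65, 0.75],
}

avg = {phase: sum(vals) / len(vals) for phase, vals in scores.items()}
gain = avg["after"] - avg["before"]
retained = (avg["long_after"] - avg["before"]) / gain

print(f"Immediate gain: {gain:.0%}")
print(f"Knowledge retained weeks later: {retained:.0%} of the gain")
```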

Final words

Overall, good assessment is an art form of sorts. There’s no single right answer to what works best, but as long as you’re working towards more formative assessment, you’re on the right track. Getting the basics right by asking good eLearning questions also helps a lot. However, this kind of knowledge assessment is only the beginning. We still need to understand how learning translates into action, and how action translates into performance. And it’s the latter two that pose the real challenge in corporate learning. In case you need help solving those challenges, or just in building better corporate learning assessment, we’re happy to help. Just drop us a note here and tell us about your challenge.

Learning Technology Integrations – A Quick Guide

A common challenge in using information systems in complex organisations is that the systems don’t talk to each other. Information is scattered and outdated, transitioning between different systems is not easy, and it’s hard to get a unified view of what’s going on as data is spread across multiple silos in different formats. Hence, system integrations have become essential. As more technologies emerge in L&D, the topic matters here too. Therefore, we put together a quick guide on the most relevant learning technology integrations you should know. Take a look!

Single Sign-on Integration

Single sign-on (SSO) is a basic but handy learning technology integration. With SSO, your users are able to log in to the different learning technology systems using their existing company accounts. For instance, say you have Microsoft accounts that employees use to identify themselves. Instead of having to remember a new set of login credentials, employees are able to log in to other systems with them.

The benefits of SSO integration include better user experience and security. Moving between different systems is much easier when you don’t have to log in separately. Also, fewer credentials mean better security. Furthermore, as the company controls the original credentials, security interventions can be swift: as soon as an employee’s account gets terminated, they lose access to all the other systems too.

HR system integrations

If you’re using learning technologies, you most likely also have some kind of HR system. Another important learning technology integration happens between that and the learning technologies. The goal of such an integration is to update information at both ends automatically. For instance, the learning tool pulls personnel data from the HR system and assigns the user learning based on that information. Thus, whenever there’s a role change, you don’t need to manually assign new learning tasks. The learning technology tool can also push information back to the HR system. For instance, whenever an employee finishes a learning path, the tool sends that information to the HR system.

The benefit of this type of learning technology integration is the elimination of manual administrative tasks. There’s no longer a need to retrieve and upload e.g. Excel files between different systems. Furthermore, with good initial configuration, employees can automatically get access to learning resources based on their role, seniority, business unit, geography etc.
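
As a minimal sketch, a sync job of this kind could look like the following. Both REST endpoints and all field names are hypothetical, as every HR system and LMS exposes its own API:

```python
# A minimal sketch of an HR-to-LMS sync job. Both REST endpoints and
# all field names are hypothetical; real systems each have their own APIs.
import requests

HR_API = "https://hr.example.com/api/employees"        # hypothetical
LMS_API = "https://lms.example.com/api/assignments"    # hypothetical
AUTH = ("integration_user", "secret")                  # placeholder

# Assumed mapping from HR role to a learning path in the LMS
ROLE_TO_PATH = {
    "sales": "sales-onboarding",
    "engineer": "engineering-basics",
}

def sync_assignments():
    """Pull personnel data from HR and assign role-based learning."""
    employees = requests.get(HR_API, auth=AUTH).json()
    for emp in employees:
        path = ROLE_TO_PATH.get(emp["role"])
        if path is None:
            continue  # no learning path defined for this role
        requests.post(
            LMS_API,
            json={"employee_id": emp["id"], "learning_path": path},
            auth=AUTH,
        )

if __name__ == "__main__":
    sync_assignments()
```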

LRS Integrations

A Learning Record Store (LRS) is a powerful tool based on the xAPI framework. It enables the collection of data from multiple systems under the same roof. For instance, you may have multiple LMS systems that all feed into this same data archive. Or you might feed in face-to-face training records, mobile app data and performance support tool data. While it may require some data operations, it’s also possible to pull data from non-learning systems, such as a performance management or HR system, into an LRS.

With this kind of learning technology integration, you can have all your training-related data, and much more, in the same format and in the same location. This makes effective learning analytics a lot easier. Hence, you’ll be able to get a better understanding and a bird’s-eye view of what’s happening in the entire organisation. Many LRS tools also come equipped with powerful dashboards and data tools.
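
Because the data sits behind the standard xAPI interface, pulling records back out is straightforward. Here’s a minimal sketch of querying one learner’s completions; the endpoint and credentials are placeholder assumptions:

```python
# A minimal sketch of pulling one learner's records out of an LRS via
# the standard xAPI query interface. The endpoint and credentials are
# placeholder assumptions.
import json
import requests

LRS_URL = "https://lrs.example.com/xapi/statements"   # hypothetical LRS
AUTH = ("lrs_user", "lrs_password")                   # placeholder

params = {
    # xAPI filters statements by agent (as a JSON string) and verb IRI
    "agent": json.dumps({"mbox": "mailto:jane.doe@example.com"}),
    "verb": "http://adlnet.gov/expapi/verbs/completed",
    "since": "2019-01-01T00:00:00Z",
}

response = requests.get(
    LRS_URL,
    params=params,
    auth=AUTH,
    headers={"X-Experience-API-Version": "1.0.3"},
)
for stmt in response.json()["statements"]:
    print(stmt["object"]["id"], stmt["timestamp"])
```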

Webhook Integrations

Finally, webhooks are a type of integration that can sometimes prove handy. The fundamental idea is that a webhook notifies you when something happens in a system, to which you can then create an automated response. In the context of learning technology integrations, there are several use cases. For instance, whenever a learner does something in App 1, do something in App 2. Or, when a group of learners has finished a learning experience, send an automatic report to their line manager.
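
As a minimal sketch, the receiving end of that last example could be a small web service like the one below; the payload fields and the notification mechanism are hypothetical assumptions:

```python
# A minimal sketch of a webhook receiver: when the learning platform
# reports a completed learning experience, notify the line manager.
# The payload fields and the notify function are hypothetical.
from flask import Flask, request

app = Flask(__name__)

def notify_manager(manager_email: str, message: str) -> None:
    """Placeholder: in practice this could send an email or chat message."""
    print(f"To {manager_email}: {message}")

@app.route("/webhooks/learning-completed", methods=["POST"])
def learning_completed():
    event = request.get_json()
    notify_manager(
        event["manager_email"],
        f"{event['learner_name']} completed '{event['course_name']}'.",
    )
    return "", 204  # acknowledge receipt with no content

if __name__ == "__main__":
    app.run(port=5000)
```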

Webhooks are a good way of integrating certain things and automating workflows. When running multiple systems and platforms, it’s easy to get lost in the administrative work. Designing these types of integrations and reactions in a smart way enables you to decrease that workload.

Final words

Overall, the future of learning is integrated. The different tools we use have to talk to each other. Otherwise, it all quickly becomes inefficient and redundant. Learning technology integrations are an important thing to consider whenever bringing new technology into the fold. Good integrations and automation protocols can significantly reduce the administrative workload that goes into managing learning tools or other systems.

Kaufman’s Learning Evaluation Model – Quick Overview

The field of corporate learning has a lot of different frameworks for evaluation. While not all of them are good or even necessary, some still provide good points of consideration and models for organising information. For instance, last week we took a look at the Success Case Method, which works best for capturing qualitative insights. This week, we decided to take a quick look at Kaufman’s learning evaluation model and see if it still provides valid contributions.

Kaufman’s Learning Evaluation Model briefly explained

Instead of providing an entirely new framework, Kaufman’s model aims to improve the commonly used Kirkpatrick’s four levels. The allegedly improved version introduces additional considerations by dividing Kirkpatrick’s level 1 into two and adding a fifth level. The levels, and the respective questions and considerations for modern L&D professionals, are as follows:

  1. Input – what kind of resources and learning materials do we have at our disposal that we can use to support the learning experience?
  2. Process – how’s the delivery of the learning experience? Is it accepted? How are people responding to it?
  3. Micro level results – Did the learner or the learning group acquire the knowledge? Did they apply it in their jobs?
  4. Macro level results – Did performance improve due to this learning and the application of new skills in the workplace? What kind of benefits arose from the learning on an organisational level?
  5. Mega level impact – What kind of impact did the learning have on society or larger external stakeholder groups?

Reflection on the Kaufman model

As the original author proposed the model as an improvement over Kirkpatrick’s, we’ll make the comparison accordingly. The separation of input and process might be a good one to make. Nowadays, we have access to vast pools of digital resources, both in the public domain and sitting in corporate information systems. There are a lot of situations where organisations could leverage this information and these resources. For instance, curation-based learning content strategies might make more sense for some organisations. Hence, the introduction of inputs as a separate consideration might be a helpful change on the framework level.

Conversely, Kaufman groups Kirkpatrick’s levels 2 and 3 together. While these are just semantic changes, it’s within this section that organisations face their L&D challenges. Often, learning is not the problem, and people may retain the newly learnt material quite well. The problem comes in application, or learning transfer, as people fail to use the new skills or practices back in their daily jobs. Consequently, that’s something modern L&D professionals should also focus more on.

Finally, Kaufman’s learning evaluation model introduces the “mega level”, or societal impact. While it may be a valid consideration for a select few, presumably this impact would go hand in hand with the business results analysed at the “macro level”. And if not, we nevertheless encounter the immense difficulty of evaluating impact on external entities.

What’s in it for the L&D professional?

As with any of the prevalent frameworks or models for evaluating learning at the workplace, it’s important not to take things too seriously. These models provide a good basis for structuring one’s approach to evaluation, but L&D professionals should still adjust them to fit the context of their particular organisation. It’s also noteworthy that all these models were built on the conception of formal learning; hence, they may fail to address more informal workplace learning. Regardless, the key takeaway from Kaufman’s learning evaluation model could be the notion of existing resources that can contribute to learning experiences. It’s not always necessary to reinvent the wheel, after all!

If you’re looking for new ways of evaluating learning, especially learning transfer or business impact, drop us a note. We’d be happy to help you co-engineer evaluation methods that can actually demonstrate L&D’s value to the business.

How to Use Brinkerhoff’s Success Case Method in Workplace Learning

There are a lot of different frameworks that organisations use to evaluate the impact of their workplace learning initiatives. The Kirkpatrick model and the Phillips ROI model may be the most common ones. While Brinkerhoff’s Success Case Method is perhaps less known, it too can provide value when used correctly. In this post, we’ve compiled a quick overview of the method and how to use it to support L&D decisions in your organisation.

What’s Brinkerhoff’s Success Case Method?

The method is the brainchild of Dr. Robert Brinkerhoff. While many of its original applications relate to organisational learning and human resources development, the method is applicable to a variety of business situations. The aim is to understand impact by answering the following four questions:

  • What’s really happening?
  • What results, if any, is the program helping to produce?
  • What is the value of the results?
  • How could the initiative be improved?

As you may guess from the questions, the Success Case Method’s focus is on qualitative analysis, learning from both successes and failures on a program level to improve for the future. On one hand, you’ll be answering what enabled the most successful to succeed, and on the other, what prevented the worst performers from succeeding.

How to use the Brinkerhoff Method in L&D?

As mentioned, the focus of the method is on qualitative analysis. Therefore, instead of using large-scale analytics, the process involves surveys and individual learner interviews. By design, the method is not concerned with measuring “averages” either. Rather, the aim is to learn from the most resounding successes and the worst performances, and then either replicate or redesign based on that information.

So ideally, you’ll want to find just a handful of individuals from both ends of the spectrum. Well-designed assessment or learning analytics can naturally help you identify those individuals. When interviewing people, you’ll want to make sure that their view of what’s really happening can be backed with evidence. It’s important to keep in mind that not every interview will produce a “success case”, one reason being a lack of evidence. After all, you are going to use the information derived with this method to support your decision making, so you’ll want good information.

Once you’ve established the evidence, you can start looking at results. How are people applying the newly learnt knowledge? What kind of results are they seeing? This phase requires great openness. Every kind of outcome and result is valuable for the sake of analysis, and they are not always the outcomes you expected when creating the program. Training activities often have unintended application opportunities that only the people on the job can see.

When should you consider using Brinkerhoff’s Success Case Method?

It’s important to acknowledge that while the method doesn’t work for everything, there are probably more potential use cases than we can list. The following are situations that, in our experience, benefit from this kind of qualitative analysis.

  • When introducing a new learning initiative or a pilot. It’s always good to understand early on where a particular learning activity might be successful and where not. This lets you make changes, improvements and even pivots early on.
  • When time is of the essence. Quantitative data and insights take time to compile (even assuming you have the necessary infrastructure in place). Sometimes we need to prove impact fast. In such cases, using the Brinkerhoff method to extract stories from real learners helps to communicate impact.
  • Whenever you want to understand the impact of existing programs on a deeper level. You may already be collecting a lot of data. Perhaps you’re already using statistical methods and tools to illustrate impact on a larger scale. However, for the simple fact that correlation doesn’t mean causation, it’s sometimes important to engage in qualitative analysis.

Final thoughts

Overall, Brinkerhoff’s Success Case Method is a good addition to any L&D professional’s toolbox. It’s a great tool for extracting stories of impact, telling them forward and learning from past successes and failures. But naturally, there should be other things in the toolbox too. Quantitative analysis is equally important and should be “played” in unison with the qualitative. Especially nowadays, when the L&D function is getting increased access to powerful analytics, it’s important to keep exploring beyond the surface level to make decisions as informed as possible to support the business.

If you are struggling to capture or demonstrate the impact of your learning initiatives, or if you’d like to start doing L&D in a more agile manner, let us know. We can help you implement agile learning design methods as well as analytical tools and processes to support the business.

How to Use Data to Support Face-to-face Training?

Organisational learning and development is becoming increasingly data-driven. This is fuelled by the need to demonstrate impact, be more effective and direct resources more efficiently. With the advent of new learning technologies and platforms – many of which come with built-in analytics capabilities – we are increasingly well equipped to measure all kinds of learning in a meaningful way. However, for the most part, the collection and especially the use of this data has been limited to digital learning experiences. But there’s no reason to draw that kind of limit. In fact, traditional face-to-face training could benefit greatly from access to data and analytics. So, let’s explore how to support face-to-face training with data!

Current challenges with face-to-face training

Face-to-face training has its fair share of challenges. On one hand, it’s rather expensive once you factor in all the lost productivity and indirect costs. However, cost becomes less of an issue as long as you can demonstrate impact and value, so that’s perhaps a business challenge. The real learning challenges, on the other hand, are related to the delivery.

Overall, face-to-face learning is not particularly personalised. Trainers are often not aware of the existing knowledge of the participants, let alone their personal context: jobs, tasks, challenges, problems, team dynamics etc. Hence, the training, especially in subject-matter-intensive topics, often results in a more or less one-size-fits-all approach: the trainer goes through the slide deck, perhaps with a few participatory activities and some feedback at the end. Even if you’re an experienced trainer, it’s difficult to improvise and go off-course in the heat of the moment to pursue the emerging (personal) needs of the learners.

So, wouldn’t it be beneficial to put that information to good use and start supporting face-to-face training with data? Yes, it would. Here are two easy ways to get a lot more out of your “classroom” sessions.

1. Determining existing knowledge and skill level with pre-work

One of the simplest things you can do to get more value out of your face-to-face training is to start using pre-work. Have your learners go through digital learning materials before coming to the session. Build in some seamless assessment and collect information in the form of user submissions and feedback. With good design and proper use of learning analytics, this already gives you a lot of valuable information.

As a trainer, you can then check e.g. what your learners already know and what they are having difficulties with. It probably doesn’t make sense to spend a lot of classroom time on things they already know. Rather, you’re better off using the time to address the problem areas, challenges and personal experiences that have come up during the pre-work. Or, if you want to make things even more impactful, try an approach like flipped learning, where you use digital materials to deliver the knowledge and focus the classroom time solely on discussions, practice and hands-on activities.

2. Using learning records history to understand the people you’re training

Another thing we could do better at is understanding the people we train. As digital platforms compile more and more data about our learning experiences, it would be beneficial to let the trainers access that as well. At their best, these learning records may provide a whole history of learning. By understanding prior experiences, the trainer can create scaffolding and build on what the employees already know from before, even when that knowledge is unrelated to the current topic.

Furthermore, having access to the employees’ HR history might be beneficial too, especially in large organisations where the trainer doesn’t necessarily know the people personally. For instance, what are the attendees’ jobs? Where do they work? Where have they worked before, and in what kind of roles? Information like this brings additional data points on which to personalise the learning experience. In some cases, you might even find that there’s a subject matter expert in the group, or someone who has dealt in practice with the issues of the ongoing training. These could be assets you can leverage, which you perhaps wouldn’t even know about without the data.

Final thoughts

All in all, there’s a whole lot that data and analytics can offer to “traditional” training. The need for personalisation is real, and smart use of learning data helps to cater to that need. Of course, you can use data to support face-to-face training in many more ways; these are just two examples. For instance, post-session feedback is much handier to collect digitally, and this feedback can then be used to improve future sessions on the same topic (or with the same participants).

If you feel you could do more with data and smart learning design, don’t hesitate to reach out. We can help you design blended learning experiences that deliver impact and value.

Personal Learning Analytics – Helping Learners with Their Own Data

Generally, the use of data and analytics in workplace learning is reserved for a small group of senior people. Naturally, learning analytics can provide a lot of value for that group. For instance, data-driven approaches to training needs analysis and measuring corporate learning are quickly gaining ground out of the need to prove the impact of workplace learning initiatives. However, there could be further use cases for those analytical powers. One of them is helping the learners themselves with personal learning analytics.

What is ‘personal learning analytics’?

As the title may give away, personal learning analytics is just that: individualised information made available to the learner. The major difference from conventional “managerial” analytics is that most of the information is about the learner in question. Whenever that’s not the case, the information about others is anonymised. A few exceptions could include e.g. gamification elements which display user names and achievements. So, effectively, it’s all about giving users access to their own data and anonymised “averages”.

How can we use personal analytics to help learners?

One of the challenges in conventional approaches to workplace learning is that the process is not very transparent. Often, the organisation controls the information, and the learners may not even gain access. However, a lot of this information could help the learners. Here are a few examples.

  • Comparing performance against others. While cutthroat competition is probably not a good idea, and learners don’t necessarily want others to know how they fared, they can still benefit from being able to compare their performance against the group’s. That way, they’ll know if they’re falling behind and can adjust their effort or seek new approaches.
  • Understanding the individual learning process. All of us would benefit greatly from information about how we learn: how we have progressed, how we are developing, and how and when we engage with learning. Personal learning analytics could tell us about all of that. Information on progress helps to keep us motivated, while engagement patterns help us to identify and build new habits on existing behaviour.
  • Access to one’s learning history. We are learning all the time, and all kinds of things. However, we are not necessarily very good at keeping track ourselves. If we could just pull all that data into one place, we could have a real-time view of what we have learned in the past. Potentially, this could enable us to identify new skills and capabilities – something the organisation would likely be interested in too.

Towards self-regulated learning

Across the globe, organisations are striving to become more agile in their learning. One key success factor for such transformation is the move towards more self-regulated learning. However, achieving that is going to be difficult without slightly more democratised information.

If the learners don’t know how they are doing, they cannot really self-regulate effectively. And no, test scores, completion statistics and annual performance reviews are not enough. Learning happens on a daily basis, and the flow of information and feedback should be continuous. Thankfully, the technology to provide this sort of individual learning analytics and personalised dashboards is already available. For instance, xAPI and Learning Record Stores (LRS) enable us to store and retrieve this type of “big learning data” and make it available to the learners. Some tools even provide handy out-of-the-box dashboards.

On a final note, we do acknowledge that the immediate applications of “managerial” learning analytics likely provide greater initial value to any given organisation. And if you’re not already employing learning analytics to support your L&D decision making, you should start. However, once you’re beyond that stage, providing access to personal learning analytics may be a good next step that also helps to facilitate a more modern learning culture in the organisation.

If you’re eager about learning analytics, whether on an organisational or personal level, but think you need help in figuring out what to do, we can help. Just drop us a note here, and let’s solve problems together.

Quantitative vs. Qualitative Data in Learning – Where’s the Value?

Corporate learning and development is becoming increasingly data-driven. On one hand, learning teams need to be able to track and assess all the different learning experiences. On the other hand, they also need to demonstrate business value. This requires smart use of data collection and analytics. While all efforts towards more data-driven strategies are a step for the better, we’ve noticed a strong bias towards quantitative data. While quantitative data is important, it’s not quite enough to effectively evaluate learning as a process. To achieve that, we also need to pay attention to qualitative data in learning. To help clear up some of the ambiguity, let’s look at each of the two and how to use them smartly.

What is quantitative data in learning?

Quantitative data, by definition, is something that can be expressed in numerical terms. This type of learning data is used to answer questions such as “how many” or “how often”. In learning, organisations often use quantitative data by tracking metrics like the following (a minimal computation sketch follows the list):

  • Enrolment rates
  • Completion rates
  • Time spent on learning
  • Quiz scores
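
As a quick sketch of how simple these metrics are to compute, here’s a minimal example using pandas on made-up enrolment records:

```python
# A minimal sketch of computing the descriptive metrics above from raw
# enrolment records. The data is made up for illustration.
import pandas as pd

records = pd.DataFrame({
    "learner":    ["ann", "ben", "cho", "dia"],
    "completed":  [True, True, False, True],
    "minutes":    [45, 60, 10, 50],
    "quiz_score": [0.8, 0.9, None, 0.7],  # no score for the non-completer
})

print(f"Completion rate: {records['completed'].mean():.0%}")
print(f"Avg time spent:  {records['minutes'].mean():.0f} min")
print(f"Avg quiz score:  {records['quiz_score'].mean():.0%}")
```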

Unfortunately, this is often where most organisations stop with their learning analytics capabilities. The problem is that this type of data tells us very little about learning efficacy or the process itself. While we can always employ better quantitative metrics, such as engagement rates, that will never be enough. In any case, we are always better off with access to qualitative data as well.

What is qualitative data in learning?

There are a lot of questions that we cannot effectively answer with numbers; hence, we need qualitative data. Qualitative data, by definition, is non-numerical and used to answer questions such as “what”, “who” or “why”. Examples of qualitative data in learning could include:

  • How the learners progressed through the activities
  • The nature and topics of discussion between the learners
  • How employees accessed the learning
  • How the employees applied the learning on the job

It’s quite evident that these types of questions go a lot further in understanding behaviours, efficacy and the learning process as a whole. Without this kind of data, you effectively have no idea what works and what doesn’t. Or, you may be able to see the effect (e.g. low completion or engagement rates) but have no idea of the underlying cause (e.g. irrelevant content, bad user experience). From a learning design perspective, these types of data points are immensely valuable.

How to use quantitative and qualitative learning data meaningfully?

So, in general, there’s a lot of untapped value in qualitative data and in understanding the learning process on a holistic level. But of course, that doesn’t mean you should forget about quantitative data either. Instead, you should always aim to validate ideas and insights derived from qualitative learning data through the use of quantitative data. How else would you know the impact of things at scale?

For instance, we think there’s a lot of value in employee discussions and sharing. These provide a great opportunity for organisations to source knowledge and transfer information between their layers. It often happens that employees in these learning discussions bring up their own best practices and work methods (qualitative) that even the L&D team is not aware of. However, to understand whether a practice can be applied across the organisation, we may need to run a survey or a poll to understand the magnitude of the idea (quantitative).

Final words

Overall, we believe that a lot of the traditional “LMS metrics” are quite useless for anything other than compliance purposes (and even for that, there are better ways…). To really deliver great learning experiences, organisations need to understand learning as a process and not strip it down to simple numbers of how many people took part. In essence, companies need to focus more on the quality of their data and the ability to quantify the impact of insights derived from qualitative data in learning.

This often requires technical capabilities, such as xAPI, but once again, buying technology is not enough. Rather, organisations have to understand the meaningful things to measure and cut through the noise. If your organisation needs help with that, or with crafting more data-driven learning strategies in general, we are happy to help. Just drop us a note here and tell us about your problem.

How to Use Social Analytics in Organisational Learning?

Nowadays, the HR and L&D functions of organisations are increasingly data-driven. Many employ analytics to aid decision-making processes and to analyse e.g. the effectiveness of learning initiatives. While there are many ways to use learning analytics, we’ve found that organisations are underutilising a particular type of data. Digital learning platforms increasingly come with social features (walls, blogs, news feeds etc.), yet not many are paying attention to how people use these social elements and the potential implications for the organisation. Thus, here are three cases for using social analytics in organisational learning.

1. Measuring interactions between learners

If we want to understand learning on a holistic level, it’s important to also understand it granularly. Hence, one good use of social analytics is to analyse interactions between the learners. Some example data points for these interactions could be:

  • How many times was a particular piece of content or user submission liked/shared?
  • The number of comments that a post or a piece of content attracted
  • How often/for how long are users interacting with each other?

The first two examples above could help you understand what kind of content works best or sparks the most discussion. The last one could help in understanding how people collaborate with each other.

2. Measuring the quality of interactions and organisational influence

Naturally, quantitative data only gets us so far, and it’s important to understand the quality of the “social” as well. Empty comments that don’t contribute to the discussion are not likely to create value. Hence, organisations could consider using semantic analysis, powered by NLP algorithms, to gauge what is being talked about and whether the social discourse consists of contributions or mere commenting. The benefits of semantic analysis are twofold. It may, again, help you to spot problem areas in your content (e.g. when learners need to clarify concepts to each other). But perhaps more importantly, it can tell you who the “contributors” in your organisation are.
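
As a deliberately naive sketch of the idea, the example below separates substantive comments from empty ones with a crude word-count filter and then surfaces the characteristic terms of the discussion. Real semantic analysis would use proper NLP models; the data and threshold are illustrative assumptions:

```python
# A deliberately naive sketch of separating substantive contributions
# from empty comments and surfacing discussion topics. Real semantic
# analysis would use proper NLP models; the data and the word-count
# threshold are illustrative assumptions.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

comments = [
    "Great post!",
    "We solved this by batching the safety checks before each shift.",
    "Nice.",
    "Sharing our checklist template helped cut onboarding time in half.",
]

# Crude contribution filter: keep comments with enough distinct words
substantive = [c for c in comments if len(set(c.lower().split())) > 5]

# Surface the most characteristic terms of the substantive discussion
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(substantive)
terms = vectorizer.get_feature_names_out()
weights = np.asarray(tfidf.sum(axis=0)).ravel()  # total weight per term
top = sorted(zip(weights, terms), reverse=True)[:5]

print("Substantive comments:", len(substantive))
print("Top discussion terms:", [t for _, t in top])
```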

It’s also important to understand who is interacting and how. This level of analysis could be helpful in determining organisational influence. Who are the individuals with networks across the organisation, who are liked by their peers, or who help everyone? These people might go unnoticed if not for the social analytics, yet they could be among the future leadership potential in the organisation. Even if not, there’s a good chance that they are local opinion leaders you could enlist to execute your strategy in the future.

3. Sourcing ideas and innovation from the ground up

Finally, a potentially highly impactful application of social analytics is in sourcing information, ideas and innovation from within your own organisation. Often, the people doing a particular job have a lot of ideas on how to improve it. It’s just that these ideas rarely reach the top, due to organisational layers, bureaucracy, culture etc. Could we help with that?

With social analytics, you could effectively set up a hidden knowledge collection tool. By analysing discussions, sharing and likes around content or user submissions, you could establish a direct flow of information from the line of duty all the way to the decision makers in the upper echelons of the organisation. The decision makers would see what kind of work practices, ideas and methods gain the most traction, and then find ways of replicating them across the organisation. On a technical level, such flows are not hard to set up. Mostly, you just need quantitative data, or a combination of quantitative data and semantics, depending on the case.

Final words

All in all, there’s a lot of under-utilised value in social analytics for workplace learning and organisational development purposes. As learning is fundamentally a social experience, this data helps in understanding the learning that is taking place. So, as you get deeper into the world of learning data, don’t just focus on traditional metrics like course completions. A more social data set might provide much better insights. And if you need help with social learning or hidden knowledge collection, we can help. Just contact us here.