We talked again with Patrick Lambe, this time focused on his new book, Principles of Knowledge Auditing: Foundations for Knowledge Management Implementation. We discussed knowledge audits, the importance of context, and organizational change. Patrick is also author of Organizing Knowledge (2007) and co-author of The Knowledge Manager’s Handbook (2019).
What have you been working on since our last interview?
PL: Recently we have been working with inter-governmental organisations on knowledge audits. Working with these projects has reminded me of the importance of organizational culture in knowledge management. Since the pandemic, the projects have been getting smaller. We aren’t seeing the big projects anymore. Organizations are seeking to adjust and refocus.
We find that with inter-governmental organizations the strategy is determined by the governments or their representatives, member states, and stakeholders. This means strategy and objectives are highly negotiated, and the downside can be slowness to adopt change. It feels as though these organizations are structured to minimise the effects of change. This is an interesting challenge in knowledge management, because the objective of most knowledge management activity is change.
Individuals respond to change if they can see the purpose and are supported throughout the process. Too often we design change and don’t understand the context or communicate “the change in context” well. People who are expected to change their behaviours are often uninvolved in the design of the change. They have competing demands coming at them from different directions.
Tell us about how this new book on knowledge auditing evolved.
PL: After my first book, Organizing Knowledge, I thought it would be a good idea to write a similar book on practical methods for knowledge auditing. I thought I had a fairly robust methodology, derived from the classic information auditing process, but extended to knowledge management.
The more I researched for this new book the more I realized that the information audit tradition was just one practice of auditing relating to knowledge use in organisations.
There are several traditions which date back several decades. Communication audits started in the 1950s. Communication at that time was about the flow of information to decision makers. Then information audits emerged in the 1970s and 1980s. Then knowledge audits in the 1990s and 2000s. Each of these had developed more or less independently of each other. They had their own traditions, their own principles, frameworks and methodologies. What I imagined as a simple practice was in fact a whole landscape of competing definitions and different assumptions about what was meant. I realised it was just not possible to write that book without first making sense of that landscape. Which is why this latest book is about Principles of Knowledge Auditing. The original book idea is still on my agenda but I’m about 80% done on another book focused on knowledge mapping before I can get to the original book idea. Knowledge mapping is a whole other practice area within knowledge auditing.
“Individuals will change if they can see the purpose and are supported throughout the process. Too often we design change and don’t understand the context or don’t communicate ‘the change in context’ well.”
You reference several types of audits in the book. Can you describe what a knowledge audit is?
PL: A knowledge audit can be many things. In the book I describe several different forms. Each is guided by its purpose, and together they help you take stock of your environment, whether that means knowledge, resources, or activities.
You can approach this exercise in a number of different ways. The simplest, or foundational, way is to do an inventory audit of your content. There are limitations with this approach – it will only cover what’s visible. You don’t get a clear sense of which content is most current, or of the intangible knowledge that people use to perform their work. When we’re doing inventory audits, we do knowledge mapping, which covers both explicit and tacit knowledge, and mapping starts with establishing the context of knowledge use.
What’s the activity you’re performing and what are the knowledge resources that you use to do that?
What is being performed, who is performing it, what knowledge are they relying on to perform those tasks?
If you want to understand how well you are doing in terms of your current knowledge and information flows, you might want to do a diagnostic of the pain points that people uncover, including:
- cultural behaviours
- knowledge management processes
- methods for encouraging sharing
These are evaluative types of audits. There are also discovery review audits, which capture what you’re doing, how you’re doing it, and, based on your needs and goals, what you should do next.
Then there are more formal audits where you are auditing to a standard or benchmarking against a set of external practices. Here you may be looking at how you extract value from your knowledge resources. You can use these distinct audit types individually or in combination. It all depends on the purpose and goal for why you are doing the audit in the first place.
For example, a large organization looking for recognition for their knowledge management program may opt for a standards-based audit such as an ISO 30401 management systems audit. On the other end of the spectrum, you may be new to KM and not sure where to start. In this case you might combine an inventory audit and a discovery review audit, just to take stock of what you have and where your opportunities for improvement are.
It’s a good idea to use a combination of audits when you want to bring about change. The most interesting elements of knowledge work in organizations are often not easily observed. You need different methodologies to get different perspectives and look for the common patterns. It’s a kind of triangulation technique. A single audit instrument will not tell you everything that you need to know if your objective is to bring about real and useful change.
“You need distinct methodologies to obtain diverse perspectives and look for the common patterns. A single audit instrument will not tell you everything that you need to know if the objective is to bring about real and useful change.”
How can organizations use the knowledge audit tools and methodologies you described to improve their taxonomies and ontologies?
PL: Taxonomy is still perceived by some as a technical discipline about defining terms and relationships. Understanding the context in which the taxonomy will be used has received relatively little attention in the past.
Testing mechanisms, use cases, and well-developed scenarios representing real people undertaking real work are not always in place. They are not driving the design or validation of the taxonomy.
Building a sense of context can come directly out of a knowledge audit. Building a rich sense of context is exactly what a knowledge audit is intended to achieve. Knowledge mapping as part of an inventory audit also builds out contemporaneous descriptions of the key knowledge resources used to perform key activities. They form a rich evidence base for taxonomy design. This is why much of our taxonomy work is a follow-through from a knowledge audit. Our clients benefit from three outputs: a knowledge management strategy to direct the purpose of the taxonomy, the evidence base for the taxonomy design, and the context descriptions to build use-case scenarios for testing.
What did you learn from your research that holds relevance for the practice of knowledge management today?
PL: I learned a lot from the practice of communication audits in the 1950s, 60s and 70s. There was some really fascinating work on methodologies for understanding communication and knowledge flows in organisations that has been lost. These methods are not widely known or circulated in the knowledge management space today.
In knowledge audits we rely too much on surveys and interviews. Surveys are not great methods for understanding the particularities of an organization’s working context. A survey can only ask questions that you already predict are going to be relevant. You’re not going to discover anything surprising or new.
Interviews are typically with senior managers who have their own, not necessarily well-informed opinions on what should happen. These opinions may compete with the opinions of other stakeholders. You have no basis for resolving those into a common picture. These methods provide the backbone for a lot of knowledge audit practice, but they are not systematic or evidence based.
What I learned most from studying earlier forms of communication and information audits was the range of available group-based methodologies for developing a well-founded understanding not just of the organization’s contexts but also its needs and opportunities.
These are participative methods, which means they involve the people who do the actual work in representing their work and their needs, and who then collaborate with us in designing the change. They can tell us about the pain points, what’s working and what’s not working. This is a much stronger basis for identifying commonalities and recommendations that might work. Then the individuals who helped you build that picture are going to recognise the expected change when it comes along. It will make sense to them.
In the book you describe methods and case studies using questions. What makes a good investigative question?
PL: Technically, a good investigative question is one that tells you something useful that you didn’t already know or couldn’t necessarily predict. “Why?” is a very good question in the right context. You’re going to be asking questions like:
- How do you do this?
- Why do you do it?
- What follows from that?
- Who or what else depends on this process?
You need to ask broad questions about the activity that you’re meant to be supporting in order to understand it. Use the types of questions that help build that out: What? When? Where? Why? How? You can then use fact-based survey questions to validate and understand this in depth, but the real goal is to understand the context of work in new ways.
What challenges in KM are shared by Taxonomists?
PL: Like knowledge managers, taxonomists must figure out how to use technology to help individuals and organizations gain access to knowledge and information. A major issue, and an interesting area in knowledge management, is the work around intangible knowledge. This is also often difficult to represent through taxonomies: representing and making accessible the knowledge that people use in informal and undocumented interactions.
Another area shared between taxonomy and knowledge management is the context sensitivity problem, especially when the rules of the game change. This can be the operating context, the people themselves, their roles, or the activity that they’re deployed in. Both disciplines can do a lot of running around without making progress when the context hasn’t been fully understood. There might be critical elements that haven’t been seen or taken into account.
“Taxonomy technologies can enable various aspects of the work of the knowledge manager. Taxonomy is particularly useful at managing information. It’s good at helping people communicate and collaborate with each other.”
How can ontologists use these techniques in developing, improving and understanding their work?
PL: Start with the knowledge mapping process. When you start with an activity, you then ask: What knowledge do you need? What resources are needed to perform this activity? Who else do you interact with to perform it? This approach gives you:
- An agreed context from two or three people who perform the same activity, giving a shared, well-founded understanding of it.
- How that activity relates to other knowledge resources.
- The language that is used to describe those resources.

Together, these provide the raw material for the taxonomy or ontology.
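As an illustration, one entry in a knowledge map from this activity-first process could be captured as a simple record. This is a minimal sketch; the class and field names are hypothetical, not a standard schema from the book, and it shows only how the working language around resources can be pooled as raw material for a draft taxonomy.

```python
from dataclasses import dataclass, field

# Hypothetical record for one activity captured during knowledge mapping.
@dataclass
class KnowledgeMapEntry:
    activity: str                                          # what is being performed
    performers: list[str]                                  # who performs it
    resources: list[str] = field(default_factory=list)     # knowledge relied on
    interactions: list[str] = field(default_factory=list)  # who else is involved

def candidate_terms(entries: list[KnowledgeMapEntry]) -> set[str]:
    """Pool the language used to describe knowledge resources --
    raw material for a draft taxonomy or ontology."""
    terms: set[str] = set()
    for entry in entries:
        terms.update(entry.resources)
    return terms

entries = [
    KnowledgeMapEntry("Grant reporting", ["programme officer"],
                      resources=["donor guidelines", "results framework"]),
    KnowledgeMapEntry("Project evaluation", ["evaluation team"],
                      resources=["results framework", "lessons-learned reports"]),
]
print(sorted(candidate_terms(entries)))
# → ['donor guidelines', 'lessons-learned reports', 'results framework']
```

Resources that recur across activities (here, the hypothetical "results framework") surface naturally, which is one way mapping evidence can ground taxonomy decisions rather than opinion.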
How can technology enable the work of knowledge managers?
PL: We need to recognise that technology both enables and disables. There is a flawed habit of investing in technology, throwing it at the organization, and expecting everyone and everything to adapt around it. Sometimes it just doesn’t take, sometimes it’s useful, and sometimes it disrupts.
Knowledge management practice tends to be led by the technology. We know that it’s hard to develop a good taxonomy for an organization using SharePoint. SharePoint does not accommodate all the useful features of a standards-based taxonomy. It doesn’t handle synonyms well, allow you to map related-term relationships between terms, or represent polyhierarchy. Yet there it is; it’s a fact of life we have to deal with.
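The three features mentioned here can be made concrete with a small sketch in the spirit of the SKOS vocabulary model (preferred and alternative labels, broader and related links). The class and field names below are illustrative assumptions, not Synaptica’s or SharePoint’s data model.

```python
from dataclasses import dataclass, field

# Illustrative taxonomy term, loosely modelled on SKOS concepts.
@dataclass
class Term:
    pref_label: str
    alt_labels: set[str] = field(default_factory=set)  # synonyms
    broader: set[str] = field(default_factory=set)     # >1 parent = polyhierarchy
    related: set[str] = field(default_factory=set)     # related-term relationships

term = Term(
    pref_label="Knowledge audit",
    alt_labels={"KM audit"},                           # synonym control
    broader={"Knowledge management", "Auditing"},      # two parents: polyhierarchy
    related={"Knowledge mapping"},                     # associative link
)
# A flat term store or strict single-parent tree cannot express
# this combination of synonyms, multiple parents, and related terms.
```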
In principle, taxonomy technologies can enable various aspects of the work of the knowledge manager. Taxonomy is particularly useful at managing information. It’s good at helping people communicate and collaborate with each other. It can be useful in identifying pockets of expertise in the organization. However, this is limited because the technology relies heavily on explicit knowledge activity. It doesn’t capture implicit, non-visible information. We think we can see everything through the system, but the reality is we can’t.
Technology can be useful in the learning and improving strand of knowledge management. There are text analytics tools available now capable of crawling through project reports, evaluation reports, and identifying common lessons and patterns. These enable analysts to do a preliminary gathering of content and figure out common patterns that the organisation needs to learn from. Technology can be useful in offering different ideas and options, e.g., in looking at your knowledge risks, maintaining critical knowledge, learning and improving, supporting innovation and change.
When you select a technology tool, have clear use cases and assess the technology against those examples. Will it perform to the organization’s requirements? Is it actually doing what you want it to do and what you’re expecting? We use Synaptica software exclusively when working on taxonomies in the context of knowledge management.
“When you select a technology tool, have clear use cases and assess the technology against those examples. Will it perform to the organization’s requirements? Is it actually doing what you want it to do and what you’re expecting?”
Synaptica Insights is our popular series of use cases sharing stories, news, and learning from our customers, partners, influencers, and colleagues. You can review the full list of Insight interviews online including our recent interview with Helen Lippell.