Above and Beyond KM

A discussion of knowledge management that goes above and beyond technology.


Disclaimer

This publication contains my personal views and not necessarily those of my clients. Since I am a lawyer, I do need to tell you that this publication is not intended as legal advice or as an advertisement for legal services.
  • What dataset informs your mindset? That’s the question that Dr. Hans Rosling would ask you if he could. When he probed this issue with his university students in Sweden, he discovered that some of their views in the 21st century were based on a dataset that reflected the reality of … the 1950s.  In fact, their responses to his questions were so bad that he said that chimpanzees could do better. (Apparently chimps are able to get the answer right 50% of the time.)

    Dr. Rosling is a Swedish professor of public health who has become famous for his ability to take dry statistics and convey them in a clear and compelling fashion.  Along the way, he has been dispelling many of the myths that inform our mindset.  He challenged a US State Department audience in 2009 with the following words:  “Does your mindset correspond to my dataset? If not, one or the other needs upgrading….” The unspoken premise was that his dataset should trump the flawed mindset of anyone who does not have a fact-based view of the world.

    Building a Fact-Based Worldview

    If you go to the website of Gapminder, the organization Dr. Rosling co-founded, you’ll find the following appeal:

    Gapminder is a non-profit foundation based in Stockholm. Our goal is to replace devastating myths with a fact-based worldview. Our method is to make data easy to understand. We are dedicated to innovate and spread new methods to make global development understandable, free of charge, without advertising. We want to let teachers, journalists and everyone else continue to freely use our tools, videos and presentations.

    Your contribution will help us in our efforts to explain how the world is changing. Your generosity will strengthen our independence.

    Help us achieve a fact-based understanding of the world. Support our work by making a donation today.

    As I read the appeal, I found myself wishing that the legal industry had a Gapminder-like organization to help us move from myth to a fact-based worldview. What data is your firm collecting? Do the data have integrity? Do you have capable people who can analyze that data and communicate what’s meaningful? Or are your firm leaders making decisions that reflect their favorite myths?

    Ron Friedmann has a recommendation for law firms intent on developing a fact-based worldview:  “Law firms should collect data to measure the multiple aspects of ‘service delivery’ and the ‘client experience’.” If you were to follow Ron’s recommendation, what would that mean for your firm?  What would you count? What would matter? I suspect you’re going to have to look far past billable hours and realization rates to examine the profitability of matters and individual lawyers. What about measuring the rate at which lawyers of your firm innovate? Or the rate at which they convert business development opportunities into sustainable income streams? How do you measure client engagement and client satisfaction? How do you measure the contributions of law firm administrative departments? (In terms of dollars under budget? Or in terms of value delivered to clients?) And, how do you measure the contribution of each person in your firm towards the health and welfare of the firm?

    There are many opportunities for us to learn more about our business through the careful gathering and analysis of data. However, I don’t mean to minimize the challenge.  Most folks in law firms are not trained statisticians. We don’t always know what to count or understand the problems implicit in how we collect and analyze what little data we have. This is an area in which our entire industry could benefit from some training and some standardized approaches.

    What dataset informs the mindset of your law firm leaders? That’s the question Dr. Hans Rosling would ask them if he could.  But, since he can’t, shouldn’t you?

    [Photo Credit: Tom Woodward]

  • Here are my notes from the third session of the Enterprise 2.0 Black Belt Workshop:  Measuring Success and Business Value – Metrics and Analysis

    Speakers:

    • Ted Hopton, Wiki Community Manager, United Business Media
    • Donna Cuomo, Chief Information Architect, the MITRE Corporation

    Background:

    [These are my quick notes, complete with  (what I hope is no more than) the occasional typo and grammatical error.  Please excuse those. Thanks!

    From time to time, I'll insert my own editorial comments - exercising the prerogatives of the blogger.  I'll show those in brackets. ]

    Notes:

    Ted Hopton:

    • His company organized the Enterprise 2.0 conference
    • They use Jive software for their enterprise 2.0 platform
    • Focus first on participation
      • Use the analytics module of your enterprise 2.0 tool to see who is visiting the site, where the activity is taking place, who is creating and viewing content, etc.
      • Analyze active members by level of activity
    • Problem: Make sure the metrics tie back to your project goals
    • Use qualitative measures to improve your understanding
      • Use a survey – ask how and how often people use the community
      • List possible positive outcomes and ask users which of these outcomes they have experienced
      • Ask why they don’t use the tool more
        • Use blunt, negative statements
        • Encourage them to tell you exactly how they feel
      • Use this information to benchmark (and draw out the venom – otherwise it festers)
      • Net Promoter Score – ask users, on a 0-10 scale, how likely they are to recommend your work. Respondents who answer 9 or 10 are promoters, 0 through 6 are detractors, and 7 or 8 are passives. Subtract the percentage of detractors from the percentage of promoters to get your “Net Promoter Score.”  Obviously, the higher the better. (A minimal calculation sketch follows this list.)
      • Track your success stories and share them
    • Lessons Learned
      • While it’s good to have consistent metrics, be aware that metrics evolve and your methods should evolve too
      • Beware of benchmarks (e.g., the 90-9-1 standard of participation). Make sure the benchmark you are using really applies to E2.0 projects.
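
    To make that arithmetic concrete, here is a minimal sketch of the Net Promoter Score calculation described above; the survey responses and the function name are invented for illustration, not from the workshop:

    ```python
    def net_promoter_score(scores):
        """Compute a Net Promoter Score from 0-10 survey responses.

        Promoters answer 9-10, detractors 0-6; passives (7-8) count
        only in the denominator. NPS = % promoters - % detractors.
        """
        promoters = sum(1 for s in scores if s >= 9)
        detractors = sum(1 for s in scores if s <= 6)
        return 100.0 * (promoters - detractors) / len(scores)

    # Invented responses from ten users
    scores = [10, 9, 9, 8, 7, 7, 6, 5, 9, 10]
    print(f"NPS: {net_promoter_score(scores):+.0f}")  # 5 promoters, 2 detractors -> +30
    ```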

    Donna Cuomo:

    • MITRE runs four different federally funded programs (including for the Dept. of Homeland Security and the Dept. of Defense)
    • Use Case 1:  Improve MITRE’s Research Program Selection Process
      • They used Spigit as their “innovation management tool”
      • They wanted to codify their research competition process
      • They wanted to stop people further down the food chain from weeding out ideas too early
      • They wanted to encourage broader participation (from a review perspective)
      • They created an “Idea Market” based on a SharePoint wiki
      • Their first-year metrics indicated broad participation
      • They were able to create widespread transparency
      • They used surveys to compare the new tools (and user satisfaction) against the old tools/methodologies
    • Use Case 2: Social Bookmarking
      • Hypothesized that social bookmarking would improve resource sharing, leveraging the research of others across teams and the corporation
      • They also thought the tagging would help identify experts within the organization
      • They used a tool similar to Delicious
      • Bookmarks helped create a lightweight newsletter (this was an unexpected benefit)
      • You don’t need many participants in order to provide real value to the entire organization
    • Use Case 3: Babson SNA Study
      • They identified super users of their internal social networks and social media (brokers) and then interviewed their colleagues
      • They discovered that these super users tended to be innovative and provide huge value to their networks
      • Frequency of interactions was not as important as the number of unique connections each broker had (indicative of their ability to have an impact on a wider range of people).
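
    As a rough illustration of that last point – the Babson study’s distinction between interaction frequency and unique connections – here is a sketch; the interaction log and names are invented, not MITRE data:

    ```python
    from collections import defaultdict

    # Invented interaction log: (broker, colleague) pairs; repeats mean repeated contact
    interactions = [
        ("ana", "bo"), ("ana", "bo"), ("ana", "bo"), ("ana", "cy"),
        ("dee", "bo"), ("dee", "cy"), ("dee", "ed"), ("dee", "flo"),
    ]

    frequency = defaultdict(int)  # total interactions per broker
    contacts = defaultdict(set)   # unique colleagues per broker
    for broker, colleague in interactions:
        frequency[broker] += 1
        contacts[broker].add(colleague)

    for broker in sorted(contacts):
        print(f"{broker}: {frequency[broker]} interactions, "
              f"{len(contacts[broker])} unique connections")
    # Both brokers log 4 interactions, but dee reaches 4 unique people to ana's 2 --
    # on the study's reasoning, dee is positioned to have the wider impact.
    ```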

    Exercise:

    • What are the most important things you are NOW measuring?
      • Number of communities
      • Number of community members
      • Percentage of contributors versus consumers
      • Usage across geographies, business units, etc.
      • Number of visits
      • Dwell time (how long is each visit)
      • Number of concurrent users at any one time
      • Number of people editing (indicates collaboration)
      • Number (and identity) of lurkers
      • Measuring conversion of lurkers to active participants
      • Participation in community activities (who is sharing, who is editing, who is tagging, etc.)
      • Utilization of the various social tools
      • Success stories
    • What are the most important things you should be measuring?
      • Abandonment rate – when do visits/activity drop off (see the sketch after this list)
      • Tracking against business goals
      • Net Promoter Score
      • Day/time of highest activity
      • First and last page viewed
      • Business improvement metrics
        • Correlation of usage to operating metrics
        • Correlation of usage to improved business processes
      • Measuring cross-fertilization (the extent to which people choose to go outside their community for information)
      • Number of new ideas/ rate of innovation
      • What’s the reduction in other forms of overhead activities (e.g., now that the subject matter expert is posting answers on a social platform, what is the resulting decline in repetitive e-mail requests?)
      • Percentage of profile completion
      • Rating content
      • Ability to determine a dollar value to participation
      • Where was the content reused, how was it reused, and what were the results of the reuse (e.g., cost savings, process improvement, etc.)
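
    Several of these “should measure” items boil down to simple checks on exported activity data. As one hedged example, here is a sketch of the abandonment-rate idea from the list above; the visit counts and the half-of-peak cutoff are invented for illustration:

    ```python
    # Invented weekly visit counts exported from a community analytics module
    weekly_visits = [120, 150, 180, 175, 90, 60, 55, 50]

    peak = max(weekly_visits)
    threshold = 0.5 * peak  # flag weeks below half of peak activity (arbitrary cutoff)

    for week, visits in enumerate(weekly_visits, start=1):
        if visits < threshold:
            print(f"Activity dropped off in week {week}: "
                  f"{visits} visits vs. peak of {peak}")
            break
    ```
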
    • Presentations:  www.e2conf.com/boston/2010/presentations/workshop
      • User name: Workshop
      • Password: Boston
    • Presentations also on Slideshare: http://slideshare.net/20adoption
  • Am I creating value? That’s the key question to start and end every working day.

    For knowledge management professionals, it can be a tough one to answer honestly. Why? Because many of us struggle with proving the value of knowledge management efforts. We know that we’ve helped individuals, but we are often hard pressed to explain how much we have in fact helped.  For example, you might truly believe that the enterprise search engine you’ve painstakingly implemented will save lawyers time and effort, ultimately saving clients money.  But do you have any metrics to prove it?  Unlikely.  So how do you know that your search engine project creates value?

    One approach is to sit next to your colleagues with a stopwatch, measuring the time spent on searches before and after your enterprise search engine is implemented. Then you should have the data necessary to prove value numerically.  But how do you measure user satisfaction? You could ask users to complete a survey.  With tools like Zoomerang or SurveyMonkey, it’s almost too easy to do this.  However, the real challenge lies in how the survey is constructed and interpreted.  An additional problem is that it can be hard to coax busy lawyers to complete your survey.
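
    If you do run that stopwatch experiment, summarizing the before-and-after numbers is straightforward. Here is a minimal sketch; the timings, the searches-per-year figure, and the hourly rate are all invented assumptions, not measurements from any firm:

    ```python
    from statistics import mean, stdev

    # Invented paired timings (minutes per search) for the same six lawyers
    before = [12.0, 9.5, 15.0, 11.0, 13.5, 10.0]
    after = [4.0, 5.5, 6.0, 3.5, 5.0, 4.5]

    savings = [b - a for b, a in zip(before, after)]
    print(f"Mean time saved per search: {mean(savings):.1f} minutes "
          f"(std dev {stdev(savings):.1f})")

    # A rough annualized figure, using assumed values for searches per year
    # and a blended hourly rate -- both numbers are placeholders, not benchmarks.
    searches_per_year = 200
    hourly_rate = 400
    value = mean(savings) / 60 * hourly_rate * searches_per_year
    print(f"Implied value per lawyer per year: ${value:,.0f}")
    ```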

    If you’re looking for ways to show how much value you’ve created, consider the example of Morrison & Foerster.  On a page whimsically entitled “Geek Power,” the firm makes the following claims about their knowledge management program:

    In order to take maximum advantage of the collective experience of our lawyers, we have developed a number of important knowledge management systems and tools.  These systems improve our efficiency.

    AnswerBase. AnswerBase is our award-winning enterprise search engine.  The search engine enables us to access the firm’s best and most pertinent practice materials, internal research, attorney experience, client and matter information and other important firm information.  AnswerBase has won a number of awards, including an award from Law Technology News (“Best Collaboration in Implementing Enterprise Search”) and Citytech Global Tech Leaders Top 100 (“Law Firm Project of the Year”).

    Knowledge Exchange. Our Knowledge Exchange database makes documents, forms, templates, precedent, briefs, practice materials and internal research available to all attorneys.

    They back this up with an exercise they undertook in 2006 to prove value.  Specifically, they retained Bruce MacEwen of Adam Smith, Esq. to talk to MoFo attorneys about their experiences before and after AnswerBase.  According to Bruce MacEwen:

    I was retained by Morrison & Foerster to lead an analysis and review of AnswerBase vis-a-vis its predecessor Knowledge Management system during last summer and fall, and reached the resounding conclusion that AnswerBase was strongly superior to the firm’s legacy systems, by providing highly relevant documents and discovering genuine subject-matter experts within the firm with impressive accuracy.   By interviewing a broad cross-section of lawyers at the firm’s New York offices, I was able to determine that the design and functionality of AnswerBase essentially replicate, as I put it in my report, “the way lawyers think” rather than reflecting technical considerations or limitations.

    Admittedly, hiring someone of Bruce MacEwen’s caliber will be hard to justify for every small project on your to-do list.  However, I’ve recounted this story to remind you (and me) that sometimes it makes a lot of sense to bring in an impartial third party to help you and your colleagues see what is right in front of you.  And if in the process you manage to demonstrate that your KM efforts have created value, that’s all the better.

    [Photo Credit: Dave Elmore]

  • I made a mistake the other day.  As I was leaving for work, I checked the weather report to see how warmly I needed to dress.  The forecast said 40 Fahrenheit.  So, my brain went through the following fairly logical steps:

    1. On the Fahrenheit scale, freezing occurs at 32 degrees.
    2. Today’s temperature is only 8 measly degrees above freezing.
    3. Therefore, it is practically freezing and I should dress warmly to avoid practically freezing myself.

    So I put on my winter coat and walked out the door.  Moments later, it was clear that I had misunderstood the data.  I saw people walking in light jackets and, in a couple of slightly crazy cases, in shirtsleeves.  Where did I go wrong?  While 40 is fairly close to freezing, in New York City in January it can feel balmy — especially if it comes on the heels of a cold snap.  If you doubt this, think about how you  dress in the autumn in New York City as the temperature is plummeting towards winter.  Warmly, right?  To be specific, would you wear a sweater if it were 50 degrees Fahrenheit in September?  Yes, most probably.  Now think about a 50 degree day in March.  In New York City, you’re likely to see folks wearing shorts and T-shirts.

    What is critical to this analysis is knowing that we’re talking about New York City rather than Miami AND we’re talking about specific times of year.  Both elements of context have a huge impact on how we interpret the bald data of temperature.  It is no different when thinking about the metrics you’ve so carefully collected (I hope!) to help understand the efficacy of your Enterprise 2.0 or knowledge management project.   Knowing that activity levels have risen may be interesting, but knowing that happened against a backdrop of falling business levels makes for interesting analysis.  What’s going on?  Why?  The metrics by themselves don’t tell the complete story.  They need faithful, honest interpreters who can place them in their correct context and draw appropriate conclusions.  We need to be those faithful, honest interpreters.

    By the way, it’s snowing heavily in New York City as I write. I’ll be dressing warmly.

    [Photo Credit: Qiao-Da-Ye]

  • Recently I had the interesting experience of reading survey results relating to a subject I actually knew something about. At first blush, the numbers were quite impressive. And then I read a little more closely and discovered that the presentation gave the impression of results that were better than warranted by reality. Since just the “bare numbers” had been reported, important context and nuance were lost. As a result, the story the numbers told was a little misleading.

    So how do we restore context, nuance and meaning? And, more importantly, how do we help initiate needed change within our organizations? According to the folks at Anecdote, the answer lies in telling good stories and then listening properly to those stories:

    Surveys and metrics can uncover trouble in an organisation, but they usually don’t help you identify the reasons for dysfunctions, let alone generate the resolve to springboard people into action. Instead, learn to use stories as listening posts and tap into the emotion to spark action. From time immemorial, stories have contained collective lessons in condensed form. When gathered and examined, stories that are told in your organisation reveal important themes and patterns that in turn indicate effective solutions.

    To be clear, I’m not trying to trash quantitative analysis. However, I do believe there are some things that can be communicated best by numbers and other things that can be communicated accurately only through narrative. Be very sure that when you make your choices about what to measure, how to measure and how to report the results, you choose the right tools and methods. If you cut corners here you will compromise your project and, possibly, your credibility. Why risk it?

    [Thanks to Stan Garfield for pointing out the Anecdote post.]

  • You get what you measure. This isn’t news — first you decide what you want to achieve and then you design your metrics to let you know when you’ve arrived. That’s good practice and it’s the message of my earlier post, The Metrics Mess. Simple stuff, right? Wrong. You’d be amazed how often folks misunderstand where true success lies and, therefore, collect metrics that drive them in the wrong direction.

    Let’s take the example of the typical law firm. How does it define success? Profits per partner? Long-term client relationships? Employee attrition? Recruiting rates? The reality is that there are many bases on which to judge success. So, what do firms typically choose to track? Billable hours. When you track hours, you send the unmistakable signal that you are interested in time — lots of time. After all, time spent equals money. However, where in that equation is the notion that time spent well is worth more than money? At the end of the day, you know the cost of the time spent. But, do you know the value to the firm or, more importantly, to the client?

    If we defined success as delivering high-value services to clients, what would we track? If we defined success as building value within the firm as an institution, what would we track?

    For law firm knowledge management, the issue of metrics is a persistent problem. We’ve chased various ways of trying to prove return on investment, but with little success. What should we track to show how our efforts provide value to clients and to the firm itself? Until we’ve conquered this challenge, we can’t expect to achieve any real measure of permanence within a law firm. And, that’s a problem when the economy is heading south.

  • I recently saw the perfect illustration of how we can get ourselves completely tangled up in unproductive activity by measuring the wrong thing. In this case, it was someone on Twitter who thought they had hit the jackpot because they had hundreds of followers. Further, this person was offering advice on how to increase the number of followers his readers had. This struck me as misguided at best. To be honest, there are folks I follow whom I’m sure don’t realize I exist. Equally, there are folks who follow me, but I’m largely oblivious to them because our paths don’t cross very often. So the numbers alone don’t tell the whole story and may, in fact, tell a misleading story.

    The real issue isn’t size of following as much as it is scope of impact. How many of these folks are really paying attention to you? How many do you actually affect? Unless you know this, you don’t have a good understanding of your interaction with Twitter. Admittedly, there are Twitter stars whom everyone likes to follow. And, assuming we follow because of their established reputations, we’re more likely to pay attention to what those Twitter stars say. For the rest of us in the Twitter mob, however, the number of our followers is a poor (and possibly inaccurate) proxy for our impact.

    Coming back to law firm knowledge management, take a moment to consider whether your efforts to measure the wrong thing are leading you into unproductive activity. Don’t focus on bulk — focus on impact. For example, counting how many times a particular document is opened via your portal or document management system may be interesting but not helpful. What you really want to know is how many times was it opened and actually used? And, how often was it exactly the thing the user was searching for? In the latter two cases, you learn much more about the quality of your content and the quality of your search engine.

    Consider the following: a document was opened 10 times and used each time, but then opened 20 more times and discarded because it was not on point. Someone looking at bulk alone would say the document was opened 30 times, declare victory and go home. However, someone measuring impact would say it was used only 10 times out of 30, and then would ask why. When you ask that question you create the possibility of learning and insight. That’s when you know you’re on the path to using metrics intelligently.
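
    To see how different the two stories look in practice, here is a hedged sketch of that opens-versus-uses arithmetic; the event log is invented, and most document management systems would need extra instrumentation to record whether a document was actually “used”:

    ```python
    # Invented event log mirroring the example above: 30 opens, 10 actual uses
    events = [("doc-1", "opened")] * 30 + [("doc-1", "used")] * 10

    opens = sum(1 for _, action in events if action == "opened")
    uses = sum(1 for _, action in events if action == "used")

    print(f"Opened {opens} times, actually used {uses} times")
    print(f"Effective use rate: {uses / opens:.0%}")  # 33% -- a very different story from bulk
    ```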


    [Permission to use granted under a Creative Commons license]
