AI Practice and Ethics for KM #KMWorld

Speakers: Dave Snowden (Chief Scientific Officer, The Cynefin Co) and Phaedra Boinodiris (Trustworthy AI leader, IBM)

Session Description: KMWorld has been exploring the future of AI and other emerging tech as it transforms knowledge sharing, collaboration, and innovation in our organizations. Responsible AI, machine learning, ethics, and knowledge management definitely intersect and are rooted in culture change and business transformation. Our knowledgeable speakers consider the basic differences between human and machine intelligence; the impact of creating training datasets, especially in light of the recent Google controversies; how to assess models; and more as they look at what’s next for AI, KM, and our world in 2022 and beyond.

[These are my notes from the KMWorld Connect 2021 Conference. I’m publishing them as soon as possible after the end of a session so they may contain the occasional typographical or grammatical error. Please excuse those. To the extent I’ve made any editorial comments, I’ve shown those in brackets.]

Notes:

Dave Snowden

  • AI Causes Loss of Human Capability
    • There is a story of someone who blindly followed their satnav (GPS) and inadvertently drove off a pier because the GPS indicated that the recommended route crossed a bridge.
    • If humans don’t use a capability, they lose that capability within a single generation
    • Will we all be looked after by “machines of loving grace”?
  • Technology is creating a dangerous unbuffered feedback loop
    • There is a lack of human judgment and empathy in algorithms
    • What you have in Facebook is an unbuffered feedback loop that elevates extremism.
    • Snowden believes that this is not a problem we can rely on the free market to solve. Rather, it is government’s responsibility to fix.
    • Snowden advocates “human buffering.” This means interposing human judgment and human empathy in the feedback loop.
    • We need to increase the amount of empathy between humans so that they don’t leap to negative judgments and action.
  • How to increase empathy between humans and across human systems?
    • You “entangle” people so that empathy jumps across systems rapidly. [This means that you deliberately create situations in which different people meet and interact. Then you switch the members of the teams around so that each person meets and works with a wide variety of people. This expands their exposure, experience, and empathy.]
  • The lack of ethical training in software developers.
    • This is a criminal omission for which we are paying the price now and will continue to pay for the foreseeable future
  • Improve the training sets for algorithms
    • This is critical for ameliorating the negative effects of algorithms run wild.
    • If the training sets are biased, the algorithm results will be biased.
  • Star Trek Lessons:
    • Horta (The Devil in the Dark) episode: “Silicon-based life forms are not carbon-based life forms.” AI is silicon-based.
    • Darmok episode: they meet a species that speaks only in metaphors derived from experience. (Remember that art comes before language. Art allows us to think abstractly first and then bring it down to the concrete level through language.)
  • KM should focus more on judgment than on information.
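
Snowden’s point that biased training sets produce biased results can be illustrated with a deliberately trivial, hypothetical sketch (not from the talk): a “model” that does nothing more than memorize the most common label per group in its training data, and therefore faithfully reproduces whatever skew that data contains.

```python
# Toy illustration: a "model" that learns label frequencies from its
# training data. If the training set over-represents one group's
# outcome, the model reproduces that skew at prediction time.
from collections import Counter

def train(examples):
    """For each group, learn the most common label seen in training."""
    by_group = {}
    for group, label in examples:
        by_group.setdefault(group, Counter())[label] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

# Biased training set: group "B" is mostly labeled "reject" in the
# historical data, even if both groups actually qualify at the same rate.
biased_training = (
    [("A", "approve")] * 90 + [("A", "reject")] * 10 +
    [("B", "approve")] * 40 + [("B", "reject")] * 60
)

model = train(biased_training)
print(model)  # {'A': 'approve', 'B': 'reject'} — the bias is baked in
```

Real models are vastly more complex, but the mechanism is the same: no amount of clever modeling downstream can recover information that the training data systematically misrepresents, which is why both speakers stress auditing the data before auditing the algorithm.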

Phaedra Boinodiris

  • Trust in AI
    • What does it take to create trust in AI?
      • This is not a technological challenge. It is a sociological challenge.
  • Diversity improves AI models
    • Without true diversity in your development teams, it is extremely hard to avoid bias in your AI models.
    • Without a wide range of people involved, it is difficult to create “explainable AI” that works for and can be explained to a variety of populations.
  • How to help people think about empathy systemically?
    • They have tweaked design thinking to ensure that, at the planning stage, they consider the primary, secondary, and tertiary effects of an AI model. It is the tertiary effects that pose the biggest problems because they are usually unintended and potentially harmful. Once these tertiary effects are identified, her colleagues engage in ethical hacking to remove the negative effects and replace them with positive effects.
  • Scaled Data Science Method
    • This is a collaborative platform that reminds participants, at specific milestones in the AI development life cycle, that they should bring in people with complementary expertise in psychology, sociology, etc.
    • This ensures that more perspectives and expertise are brought to bear during the development process. They are no longer relying entirely on developers to ensure the integrity and positive impact of the AI models.
  • Teach Ethics Early and Often
    • We should be teaching Ethics in Data Science and Ethics in AI to more people early and often — we should be teaching this as early as high school.
    • Snowden Response: it’s not enough to teach ethics. We also need to create processes that train people to BE ETHICAL.

Conversation

  • Snowden:
    • They are doing child ethnography
      • This is about seeing the world through the eyes of kids
        • In Wales, there is legislation that requires every new project to be evaluated from the perspective of the children who will have to live with the consequences.
      • By asking children to tell their own stories and to gather stories from their own communities, Snowden’s team are able to generate valuable abstracted data overlaid with human-applied meaning.
      • They are working through schools, churches, and sports clubs to recruit and train the child participants.
      • This child ethnography approach helps them see patterns across communities and allows them to actually consult with children to get real-time data as situations arise.
  • The Deficits in our Education Systems
    • Snowden: Our current problem is that education is privileging STEM over the humanities. We should teach empathy, design thinking, art, etc. We can teach students coding later.
      • Because of the bias in the education system that favors expertise over general knowledge, there are relatively few generalists under the age of 60. This means there are insufficient people able to connect the dots across domains of expertise.
      • Instead, we are taking an engineering approach to human education: learn a module, then take a test. Pass the test, then tackle the next module. This rigid approach does not leave enough space and time for curiosity or creativity.
    • Boinodiris: It would be wonderful to involve school-age children in societal decision making but some significant problems in our public education system make that difficult. Because children today are being taught in silos, they are less able to make wise decisions.
  • How to Improve Diversity and Inclusion
    • Boinodiris: rethink the role of the chief DEI officer. In addition to recruiting a wider variety of people, they should ensure that the organization has true diversity in a variety of functions ranging from development to the governing ethics board and beyond.
    • Snowden: we have got the diversity agenda wrong. We’ve been focused on visible diversity while ignoring attitudinal diversity. (Attitude is a leading indicator but practice is a lagging indicator.)


E2.0 Stag Party

An Enterprise 2.0 project is sufficiently different from a traditional knowledge management or IT project that it can be a little disconcerting at first. Some experts recommend what seems like a 1960s free love approach — anything goes and, by the way, I’m ok and you’re ok. At the other extreme are the traditionalists who believe that introducing any innovation within an organization requires lots of constraints to ensure safety.

If you’re starting a new E2.0 project, which approach do you take? Neither.

I’d like to commend to you the “stag party” approach described by Ron Donaldson in his post Lines in the Sand.  He starts with the following statement:

A complex system requires boundary conditions, not too tight that they constrain and not too loose as they allow unacceptable behaviours.

He then goes on to list the rules his son’s friends agreed on to govern their stag weekend.  You should read these rules.  They are both funny and intensely pragmatic. Ron Donaldson called them “[b]rilliant, self organising, self regulating and in everyone’s best interests.”

Coming back to your E2.0 deployment, can you reduce your concerns  to a small handful of rules? What minimums does Mum (or, as is most likely in your case, senior management) require?  Do these rules protect the most vulnerable and valuable while still permitting sufficient flexibility for learning, growth and enjoyment?

From his postmortem of the stag weekend, it’s clear that the rules worked. Everyone behaved appropriately for the context and, while there were some perfectly predictable after-effects, everyone survived and even enjoyed the experience.

It seems to me that if we can achieve the same with our E2.0 deployments, we’ll have done pretty well. What do you think?

[Photo Credit: Jack Spellingbacon]
