Keynote: Responsible AI – Ethics and Inclusive Design #KMWorld

Speakers: Jean-Claude Monney, Board of Directors Member, Keeeb; Phaedra Boinodiris, IBM Academy of Technology, Executive Consultant, Trustworthy and Responsible Systems; and Steve Sweetman, Customer & Strategy Lead, Ethics & Society Engineering, Microsoft

Session Description: Join our exploration into the future of AI and other emerging tech as it transforms knowledge sharing, collaboration, and innovation in our organizations. Responsible AI, ethics, and knowledge management definitely intersect and are rooted in culture change and business transformation. Our experts share a lively discussion with the audience and will leave you thinking about what’s next for AI, KM, and our world in 2021 and beyond!

[These are my notes from the KMWorld Connect 2020 Conference. Since I’m publishing them as soon as possible after the end of a session, they may contain the occasional typographical or grammatical error. Please excuse those. To the extent I’ve made any editorial comments, I’ve shown those in brackets.]

Intro

  • Why are these speakers here today?
    • Phaedra has been interested for a long time in both technology and social justice. Her new role at IBM is to work in the trust and AI practice. She is focused on how to reduce bias and increase trust in tech systems.
    • Steve had an aha moment on March 23, 2016, when Microsoft’s friendly chat bot Tay was poisoned by hackers and turned into a racist and vicious bot. This taught him that ethics were no longer academic: Microsoft needed to make ethics real in its tools so that it could build responsible AI.
    • JCM taught students in the Columbia M.S. in Information & Knowledge Strategy about ethics in connection with digital transformation. The students quickly realized how critical this issue is.

Technology

  • KM’s basic concept is to provide relevant information for reuse. When this is enabled by AI, where is the bias in the system? (See below!)
  • For the last 20 years, we’ve been teaching people how to enter data into computers and then work with that data. With the advent of AI, we are teaching computers how to consume data and work with it. But the great dilemma of AI is that we don’t understand how the system reaches a specific conclusion. So how do we trust it?
  • 4 questions to ask before purchasing an AI system
    • What are the intended uses the system was built and trained for?
    • What are the unintended uses it was not built or trained for?
    • What makes it perform? What makes it perform well?
    • What are its limitations?
  • Other questions you should also be asking:
    • Is it fair? Is it biased?
    • Is it easy to understand and explain to non-technical stakeholders, users or administrators?
    • Is it tamper-proof?
    • Is it accountable? Does it have acceptable governance standards?
  • How can organizations mitigate bias?
    • There are a lot of tools. For example, IBM has donated AI Fairness 360 to the Linux Foundation (see the sketch after this list).
    • Culture is a big issue. How are teams made up? Consider employing red team vs. blue team tactics (borrowed from the cybersecurity world).
    • Governance: make sure you have published standards that explain your company’s position to the market and to your employees. Do you have a diverse, inclusive AI ethics board? Do employees have a way to submit their ethics concerns anonymously?
  • Education is a big challenge
    • Why are we not teaching AI and ethics in high school and even middle school?
    • Current leaders in organizations and government do not seem to understand AI. So they cannot understand the true impact of this technology. All leaders should be at least fluent in AI because it will affect every part of their organizations.
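
[Editorial note: the AI Fairness 360 toolkit mentioned above ships as the open source aif360 Python package. Below is a minimal sketch, not a definitive recipe, of the kind of bias check the panel described; the hiring data, column names, and numbers are hypothetical, invented purely for illustration.]

```python
# A minimal sketch using IBM's open source AI Fairness 360 toolkit
# (pip install aif360). The hiring data below is invented for illustration.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy audit data: 'sex' is the protected attribute (1 = privileged group),
# 'hired' is the favorable outcome we want to check for bias.
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
    "score": [80, 75, 60, 90, 85, 70, 88, 92, 65, 78, 81, 73],
    "hired": [1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0],
})

dataset = BinaryLabelDataset(
    favorable_label=1.0,
    unfavorable_label=0.0,
    df=df,
    label_names=["hired"],
    protected_attribute_names=["sex"],
)

metric = BinaryLabelDatasetMetric(
    dataset,
    unprivileged_groups=[{"sex": 0}],
    privileged_groups=[{"sex": 1}],
)

# Disparate impact: ratio of favorable-outcome rates, unprivileged / privileged.
# A common rule of thumb (the "four-fifths rule") flags anything below 0.8.
print("disparate impact:             ", metric.disparate_impact())

# Statistical parity difference: the same comparison as a difference; 0 is parity.
print("statistical parity difference:", metric.statistical_parity_difference())
```

On this made-up data the privileged group’s hire rate is 5/6 and the unprivileged group’s is 2/6, so disparate impact comes out at 0.4, well below the 0.8 rule of thumb, which is exactly the kind of signal that should prompt a closer look before a system trained on such data goes anywhere near production.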

Ethics

  • Recommended reading: Brad Smith, Tools and Weapons
  • It is not a question of IF we have bias in our systems. It is a fact that we DO have bias in our systems.
    • Bias comes from the lack of diversity among developers and executives.
  • Do NOT attempt to determine AI ethics alone. It is not something for data scientists to do by themselves. You must involve different stakeholders who bring different points of view to the discussion.
  • The diversity prediction theorem = the more diverse and inclusive the crowd, the closer its collective answer gets to ground truth (see the worked example after this list).
  • Warning signal: major lack of diversity leads to diminished fairness in AI systems
  • Forensic technology = the tools you can use to create responsible systems. They help address fairness, explainability, and transparency.
  • How to find bias in your AI systems?
    • Ask whether your models would keep you from offering the same service to some people. Do you discriminate on a false basis?
    • Do you have fair representations of the services you are recommending? Do different people get the same outcomes?
    • Are we stereotyping? Are we using labels, for example, that reflect inherent bias?
  • How to mitigate bias once you find it? (See the Reweighing sketch after this list.)
    • Correct the existing data
    • Collect more representative data
    • Look at all the models across your systems — work to improve all of them and track your progress
  • How to address bias in AI used for hiring and promoting?
    • It is rare to find a bias-free system. Be hyper-aware of hidden bias. There are many types of bias beyond race and gender.
    • Pay attention to the training data set. There may be bias in that set — for example, if successful people in a specific job were historically white and male, then the historic data used to train the AI will be biased in favor of white males.
  • Microsoft has a sensitive use protocol. Not all AI systems have the same impact. When AI systems could have a disproportionate impact on people’s lives, you need to slow down development to ensure they are fair, safe, and trustworthy. Examples of high-impact systems:
    • hiring, lending, admission to school
    • systems whose misfires could result in injury to someone
    • systems that could diminish a person’s human rights
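
[Editorial note: the diversity prediction theorem is Scott Page’s identity: collective error = average individual error - prediction diversity. If n people guess s_1, …, s_n at a true value θ and c is the average of the guesses, then (c - θ)² = (1/n)Σ(s_i - θ)² - (1/n)Σ(s_i - c)². A quick numeric check, with numbers made up for illustration:]

```python
# Numeric check of the diversity prediction theorem:
#   collective error = average individual error - prediction diversity
# The predictions and the "truth" below are made up for illustration.
predictions = [48.0, 55.0, 60.0, 41.0, 66.0]    # five individual estimates
truth = 52.0                                    # the value being estimated

n = len(predictions)
crowd = sum(predictions) / n                    # the crowd's collective answer: 54.0

collective_error = (crowd - truth) ** 2
avg_individual_error = sum((s - truth) ** 2 for s in predictions) / n
diversity = sum((s - crowd) ** 2 for s in predictions) / n

print(collective_error)                         # 4.0
print(avg_individual_error - diversity)         # 81.2 - 77.2 = 4.0, identical, always
```

Because the diversity term is subtracted, a more diverse set of estimates (a bigger spread around the crowd’s answer) mechanically reduces the crowd’s collective error for the same average individual error, which is the formal version of the speakers’ point.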
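
[Editorial note: “correct the existing data” can be partly automated. What follows is a minimal sketch, again assuming the open source aif360 package and made-up historical hiring data, of its Reweighing pre-processing algorithm, which adjusts per-record training weights so that a model fit on the history does not simply inherit its favoritism.]

```python
# A sketch of one way to "correct the existing data": the Reweighing algorithm
# from IBM's open source AI Fairness 360 (pip install aif360).
# The historical hiring data below is made up for illustration.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Biased history: the privileged group (sex=1) was hired far more often.
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
    "hired": [1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0],
})
dataset = BinaryLabelDataset(
    favorable_label=1.0, unfavorable_label=0.0,
    df=df, label_names=["hired"], protected_attribute_names=["sex"],
)
groups = dict(unprivileged_groups=[{"sex": 0}], privileged_groups=[{"sex": 1}])

# mean_difference = unprivileged hire rate minus privileged hire rate; 0 is parity.
print("before:", BinaryLabelDatasetMetric(dataset, **groups).mean_difference())

# Reweighing leaves every record intact but assigns per-row instance weights
# chosen so that group membership and outcome become statistically independent.
reweighted = Reweighing(**groups).fit_transform(dataset)

print("after: ", BinaryLabelDatasetMetric(reweighted, **groups).mean_difference())
```

Reweighing only changes how much each record counts during training, which makes it a comparatively gentle way to act on the “correct the existing data” advice above; collecting more representative data, as the speakers note, is still the better long-term fix.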

Culture

  • At Microsoft
    • You need to create internal standards that you will live by: put ethics and fairness at the same level as security and innovation.
    • Ensure diversity of teams at every level, from ideation to design to development to market delivery systems
  • At IBM
    • Culture — big focus on diversity and inclusivity; advocacy for ethics in technology
    • Forensic Technology — donating tools to the open source community to tackle fairness, explainability, transparency
    • Governance — shaping global standards on technology governance
