In the midst of a lively, thoughtful discussion, one of my friends and colleagues asked for a moment’s silence to take note of the fact that Mary Abraham had just endorsed automation over human action. This led to gales of laughter. Why? Because over the years I’ve become reasonably well-known in legal knowledge management circles for repeatedly reminding people that technology won’t solve every problem (note the banner of this blog) and that we might get further if we spent at least as much time and attention on people and processes as we do on the technology.
That remains my position, but with time and experience it has become slightly more nuanced. While I still don’t think that technology is the silver bullet, I also don’t believe that simply throwing more people at a problem is the best path to a solution either. Further, given the advances in technology today, it could arguably be abusive to humans NOT to adopt appropriate technology.
Not convinced? Think about the many processes within law firms that, to this day, still are not automated. They haven’t been studied, standardized, or streamlined to improve efficiency and efficacy. Rather, they depend on a variety of people operating consistently at their personal best to ensure good results. In fairness, these folks have probably been doing a good job for many years. But what if someone becomes ill or disengaged? What if they retire? Where’s the safety in this system? There’s also the problem that you’re asking human beings to do work that a properly equipped machine could do. How demoralizing is that?
In the 1940s, Isaac Asimov introduced the Three Laws of Robotics (see video below). The first of these laws was:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
It’s helpful that he identified this way to reduce the likelihood that a robot might harm a human. However, that still leaves the human race very much at risk of harm from members of its own species. With this in mind, consider what would change if law firm IT departments and KM departments adopted the following variant of the first law of robotics:
An IT department or KM department may not injure a human being or, through inaction, allow a human being to come to harm.
What would the practical implications of this be?
- We would have to spend much more time upfront considering user interface and user experience.
- We would have to pay closer attention to HelpDesk inquiries and customer complaints — what keeps going wrong?
- We would have to think harder about the “unintended consequences” (or, as Bruce MacEwen, writing at Adam Smith, Esq., states more accurately, the “unanticipated consequences”) of the innovations we introduce.
- We would have to stop asking our colleagues and our own staff to do things that more properly should be done by machines.
- We would have to be willing to review and revise what we’re doing to ensure the humans we serve are not harmed.
As you think about your work and its consequences, can you honestly say that it does not harm humans? If not, what will you change?
[Hat tip to Michael Mills of Neota Logic for reminding me of Asimov’s Three Laws.]
[Photo Credit: Chelsea Wa]