AI Lacks Empathy, Needs Humans in the Middle
September 29, 2022
Artificial intelligence is designed to assist with decision-making when the data, parameters, and variables involved are beyond human comprehension. But it fails to capture or respond to the intangible human factors that go into real-life decision-making. The question, then, is whether AI can incorporate subjective experience, feeling, and empathy. Business and technical leaders need to ensure that their AI systems have the necessary checks and balances, as well as consistent human oversight, to keep AI ethical and moral. To build greater humanity into these systems, leaders need to foster an organizational culture and training that promote ethics in AI decisions, remove bias from the data, keep humans in the loop, validate algorithms in real-world scenarios, and teach the systems human values.
New AI systems such as DALL-E, language transformers, and vision/deep-learning models are coming close to matching human abilities, yet AI still has a long way to go. We still need humans in the middle. The bottom line: AI is based on algorithms that respond to models and data. It isn't ready to assume human qualities such as empathy, ethics, and morality.