Artificial intelligence (AI) is the ultimate disruptive technology: it has the potential to transform how we work but could also displace many jobs. There’s also a need to guard against mistakes, bias, malicious activity and other unintended consequences.
Organisations that are adopting AI are taking steps to ensure responsible use of the technology, according to research by SAS, Accenture Applied Intelligence, Intel and Forbes Insights.
The study, AI Momentum, Maturity and Models for Success, found that most AI adopters conduct ethics training for their technologists (70%) and have ethics committees in place to review the use of AI (63%).
Among AI leaders (those that rate their deployment of AI as ‘successful’ or ‘highly successful’), almost all (92%) train their technologists in ethics.
AI leaders also recognise that the technology should not operate independently of human intervention. Nearly three-quarters (74%) reported careful oversight with at least weekly review or evaluation of outcomes. And 43% of AI leaders said their organisation has a process for augmenting or overriding results deemed questionable during review.
However, the report notes that oversight processes have a long way to go before they catch up with advances in AI technology.
“The ability to understand how AI makes decisions builds trust and enables effective human oversight,” said Yinyin Liu, head of data science for Intel’s AI Products Group. “For developers and customers deploying AI, algorithm transparency and accountability, as well as having AI systems signal that they are not human, will go a long way toward developing the trust needed for widespread adoption.”
Overall, the research found that 72% of organisations globally are now using AI in one or more business areas.
More than half (51%) said their deployment of AI has been a real success, citing more accurate forecasting and decision-making, improved customer acquisition and increased organisational productivity as the key benefits.
Tags: AI, artificial intelligence, ethics