AI has led to “ethical issues” for 90% of companies
A new report from Capgemini has found that 90% of organizations are aware of at least one instance where an AI system has caused ethical issues for their business.
The report, entitled “AI and the ethical conundrum: How organizations can build ethically robust AI systems and gain trust”, found that while digital and AI-based interactions with customers are on the rise as customers seek contactless interfaces amid the COVID-19 pandemic, systems are still being designed without regard for ethical issues.
While around two-thirds (68%) of consumers expect AI models to be fair and free from bias, Capgemini's results show that only 53% of organizations have a leader responsible for the ethics of AI systems, such as an ethics officer, and only 46% independently audit the ethical implications of their AI systems.
Additionally, 60% of organizations have come under legal scrutiny and 22% have faced customer backlash because of decisions made by AI systems.
The slow implementation of ethical AI comes amid increased regulatory oversight. The European Commission has issued guidelines on the key ethical principles that should be used in the design of AI applications, and the US Federal Trade Commission (FTC) called in early 2020 for “transparent AI”: when an AI-enabled system makes an unfavorable decision, such as denying a credit card application, the organization should show the affected consumer the key data points used to reach that decision and give them the right to correct any inaccurate information.
However, while 73% of organizations globally were educating users on how AI decisions might affect them in 2019, that number has since fallen to 59%.
Anne-Laure Thieullent, head of the Artificial Intelligence and Analytics group offer at Capgemini, comments: “Given its potential, the ethical use of AI should of course ensure no harm to humans, and full human responsibility and accountability when things go wrong. But beyond that, there is a real opportunity to proactively pursue environmental and social well-being.”
“Instead of fearing the impacts of AI on humans and society, it is quite possible to orient AI towards actively combating bias against minorities, or even correcting the human biases that exist in our societies today.”
The report highlights seven key actions that organizations need to take in order to build an ethically sound AI system: clearly defining the intended purpose of AI systems and assessing their overall potential impact; proactively deploying AI for the benefit of society and the environment; embedding the principles of diversity and inclusion throughout the lifecycle of AI systems; improving transparency with the help of technological tools; humanizing the AI experience and ensuring human oversight of AI systems; ensuring the technological robustness of AI systems; and protecting people’s privacy by empowering them and putting them in charge of their interactions with AI.