Fujitsu Announces Ethical Toolkit
Fujitsu has announced that it is releasing a toolkit offering developers guidance for understanding the risks and ethical impact of AI systems. The toolkit is in response to the 2021 draft from the European Commission calling for “...a comprehensive ethical response for AI system developers, users, and stakeholders in response to increasing concerns surrounding algorithmic bias and discriminatory decision-making in AI and machine learning applications.”
Although many governments and research institutions now have ethical guidelines for AI research, Fujitsu’s toolkit attempts to bridge possible misunderstandings or misinterpretations of the guidelines, allowing developers a chance for “thoroughly identifying and preventing possible ethical issues early in the development process in keeping with international best practices.”
The toolkit consists of:
- Whitepaper: A general overview of methodology
- AI ethical impact assessment procedure manual: An AI system diagram, the procedure for preparing the AI ethical model, and guidance on responding to identified issues
- AI ethical model: An AI ethical model based on AI ethical guidelines published by the European Commission (created by Fujitsu)
- AI ethics analysis case studies: Analyses of major AI ethics issues drawn from the AI Incident Database of the Partnership on AI (six cases as of February 21, with more to be added over time)
As an example, the press release describes the scenario of an AI tool designed to evaluate individuals for bank loans, with the goal of ensuring that the AI model does not project or amplify cultural bias that could lead to unfair decisions.
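The press release does not detail how Fujitsu's assessment procedure works, but one common check for the kind of bias described, a loan model approving different demographic groups at different rates, is a demographic-parity comparison. A minimal sketch in Python (the data, group names, and tolerance threshold here are all hypothetical, not from the toolkit):

```python
# Hypothetical illustration of a demographic-parity check for a loan model.
# This is not Fujitsu's method; the data, groups, and threshold are invented.

def approval_rate(decisions):
    """Fraction of applicants approved (decisions are 0/1 flags)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in approval rates across demographic groups."""
    rates = [approval_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Toy model outputs: 1 = loan approved, 0 = denied.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],   # 75% approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],   # 37.5% approved
}

gap = demographic_parity_gap(decisions)
print(f"Approval-rate gap: {gap:.3f}")

# A gap well above a chosen tolerance flags the model for review.
if gap > 0.1:
    print("Potential disparate impact: review model and training data.")
```

A large gap alone does not prove discrimination, but it is the sort of early signal an ethical impact assessment would surface before deployment.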
See the announcement on the Fujitsu website for more information. The toolkit could emerge as an important resource for AI developers, but the only version currently available is in Japanese; the company says an English edition is coming soon.