The fully automated software service explains decision-making and detects bias in AI models at run time, capturing potentially unfair outcomes as decisions are made. IBM Services will also work with businesses to help them harness the new software service.
IBM has launched software to analyze how and why algorithms make decisions, as well as detect bias and recommend changes. IBM's goal for the release is to encourage researchers to integrate bias detection as they build AI models.
"IBM led the industry in establishing Trust and Transparency principles for the development of new AI technologies". "It's created to translate algorithmic research from the lab into the actual practice of domains as wide-ranging as finance, human capital management, healthcare, and education", they outlined.More news: PAOK Salonika vs. Chelsea - Football Match Report
In an IBM blog post, IBM developers Animesh Singh and Michael Hind stated: "As AI becomes more common, powerful, and able to make critical decisions in areas such as criminal justice and hiring, there's a growing demand for AI to be fair, transparent, and accountable for everyone".
Recent research from IBM indicates that 82 percent of enterprises are considering deploying AI, but 60 percent are concerned about liability, and 63 percent do not have the necessary in-house talent to manage the technology.
The good news is that IBM has now added bias detection to its IBM Cloud solution.
The software service can also be programmed to monitor the unique decision factors of any business workflow, enabling it to be customized to a specific organization's use. "It will also detect bias that may come into decisions on account of multiple reasons," he said.
IBM says that the explanations for how an AI is making decisions are provided in easy-to-understand terms. A final check will be carried out too, and the systems will be tracked for accuracy, performance, and overall fairness over time.
According to IBM, AI Fairness 360 (AIF360) is "a comprehensive open-source toolkit of metrics to check for unwanted bias in datasets and machine learning models, and state-of-the-art algorithms to mitigate such bias". "AIF360 is a bit different from currently available open-source efforts due to its focus on bias mitigation (as opposed to simply on metrics), its focus on industrial usability, and its software engineering," wrote Kush Varshney, principal research staff member and manager at IBM Research.
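To make the idea of a fairness metric concrete, here is a minimal sketch in plain Python of two group-fairness measures of the kind such a toolkit checks: statistical parity difference and disparate impact. This is an illustration only, not the AIF360 API; the hiring data and group labels are hypothetical.

```python
# Illustrative group-fairness metrics (hypothetical data, not the AIF360 API).

def favorable_rate(outcomes, groups, group):
    """Fraction of the given group that received the favorable outcome (1)."""
    members = [o for o, g in zip(outcomes, groups) if g == group]
    return sum(members) / len(members)

def statistical_parity_difference(outcomes, groups, privileged, unprivileged):
    """P(favorable | unprivileged) - P(favorable | privileged); 0 means parity."""
    return (favorable_rate(outcomes, groups, unprivileged)
            - favorable_rate(outcomes, groups, privileged))

def disparate_impact(outcomes, groups, privileged, unprivileged):
    """Ratio of favorable rates between groups; 1 means parity."""
    return (favorable_rate(outcomes, groups, unprivileged)
            / favorable_rate(outcomes, groups, privileged))

# Hypothetical hiring decisions: 1 = hired, 0 = rejected.
outcomes = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups   = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

print(statistical_parity_difference(outcomes, groups, "a", "b"))  # -0.6
print(disparate_impact(outcomes, groups, "a", "b"))               # 0.25
```

A large gap on either measure is the kind of signal a bias-detection service would surface for review; the real toolkit also ships algorithms that adjust the data or model to shrink that gap.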