The fully automated software service explains decision-making and detects bias in AI models at run time, capturing potentially unfair outcomes as they occur. IBM Services will also work with businesses to help them harness the new software service.
IBM has launched software to analyze how and why algorithms make decisions, as well as detect bias and recommend changes. IBM's goal for the release is to encourage researchers to integrate bias detection as they build AI models.
"IBM led the industry in establishing Trust and Transparency principles for the development of new AI technologies", the company said. "It's created to translate algorithmic research from the lab into the actual practice of domains as wide-ranging as finance, human capital management, healthcare, and education", they outlined.
In an IBM blog post, IBM developers Animesh Singh and Michael Hind stated: "As AI becomes more common, powerful, and able to make critical decisions in areas such as criminal justice and hiring, there's a growing demand for AI to be fair, transparent, and accountable for everyone".
Recent research from IBM indicates that 82 percent of enterprises are considering deploying AI, but 60 percent are concerned about liability, and 63 percent do not have the necessary in-house talent to manage the technology.
The good news is that IBM has now added bias detection to its IBM Cloud solution.
The software service can also be programmed to monitor the unique decision factors of any business workflow, enabling it to be customized to a specific organizational use. "It will also detect bias that may come into decisions on account of multiple reasons", the company said.
IBM says that the explanations for how an AI makes its decisions are provided in easy-to-understand terms. The systems will also be tracked over time for accuracy, performance, and overall fairness.
According to IBM, AI Fairness 360 is "a comprehensive open-source toolkit of metrics to check for unwanted bias in datasets and machine learning models, and state-of-the-art algorithms to mitigate such bias". "AIF360 is a bit different from currently available open source efforts due to its focus on bias mitigation (as opposed to simply on metrics), its focus on industrial usability, and its software engineering", wrote Kush Varshney, principal research staff member and manager at IBM Research.
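To give a flavor of the kind of group-fairness metric such a toolkit computes, the sketch below implements disparate impact, the ratio of favorable-outcome rates between an unprivileged and a privileged group. This is a minimal illustration in plain Python, not AIF360's actual API; the function name, data, and group labels are hypothetical.

```python
# Minimal sketch of a group-fairness metric of the sort AIF360 provides.
# Illustrative only; does not use AIF360's actual API.

def disparate_impact(outcomes, groups, privileged):
    """Ratio of favorable-outcome rates: unprivileged / privileged.

    outcomes:   list of 0/1 labels (1 = favorable decision)
    groups:     list of group labels, aligned with outcomes
    privileged: the group label treated as privileged
    A ratio well below 1.0 (a common rule of thumb is 0.8)
    suggests the unprivileged group is being disadvantaged.
    """
    priv = [o for o, g in zip(outcomes, groups) if g == privileged]
    unpriv = [o for o, g in zip(outcomes, groups) if g != privileged]
    return (sum(unpriv) / len(unpriv)) / (sum(priv) / len(priv))

# Hypothetical example: 8 loan decisions across two groups.
outcomes = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(disparate_impact(outcomes, groups, privileged="A"))  # 0.25 / 0.75 = 0.333...
```

A toolkit like AIF360 pairs metrics of this kind with mitigation algorithms that rebalance the data or model so the ratio moves back toward 1.0.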