The Problem With Algorithms in Artificial Intelligence and How They Can Be Improved

By Gerald Trites

At their most basic level, algorithms are simply sets of instructions for how to perform a particular task or solve a particular problem. They are conceptual and may or may not be embodied in computer programs. Programs, by contrast, are sets of instructions for a computer to follow to achieve a particular goal or complete a process or set of processes. They must be written in a language the computer can understand.
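The distinction can be made concrete with a small illustration: the same algorithm written first as conceptual steps (in comments), then as a runnable program. The example task (finding the largest number in a list) is ours, chosen only for illustration.

```python
# The algorithm -- conceptual steps, independent of any language:
#   1. Assume the first number is the largest seen so far.
#   2. Compare each remaining number to the current largest.
#   3. If a number is bigger, it becomes the new largest.
#   4. When no numbers remain, report the largest.

# The program -- the same algorithm expressed so a computer can run it:
def largest(numbers):
    best = numbers[0]
    for n in numbers[1:]:
        if n > best:
            best = n
    return best

print(largest([3, 41, 12, 9, 74, 15]))  # prints 74
```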

There is nothing new about algorithms and yet we hear a lot more about them now. That’s because they are being used more often for a variety of purposes and are being included in Artificial Intelligence programs.

Algorithms have attracted attention, for example, for their use in screening people in matters like hiring, lending, grading tests, and home buying. They have sometimes been found to be flawed, largely because they develop and "learn from" correlations of facts in data; correlations that may be nonsensical, meaningless, or inappropriate.

For example, correlations between hiring and body type or race or sex can lead to incorrect and inappropriate conclusions and even lead to violations of human rights. For this reason, it’s essential for people to have a better understanding of how AI and algorithms work in order to hold their users accountable.

New York’s City Council recently adopted a law requiring audits of algorithms used by employers in hiring or promotion. The law, the first of its kind in the nation, requires employers to bring in outsiders to assess whether an algorithm exhibits bias based on sex, race, or ethnicity. Employers must also tell job applicants who live in New York when artificial intelligence plays a role in deciding who gets hired or promoted. In Washington, a bill is being drafted by members of Congress to require businesses to evaluate automated decision-making systems used in areas such as health care, housing, employment, or education, and to report the findings to the Federal Trade Commission.

An AI Bill of Rights has been proposed by the White House, which calls for disclosure when AI makes decisions that affect a person’s civil rights. It also calls for auditing AI systems for accuracy and bias, among other things. Similar laws are being considered in Europe. Some proponents of greater scrutiny favor mandatory audits of algorithms, similar to the audits of companies’ financial statements. Others prefer “impact assessments” akin to environmental impact reports.

If audits are required, there is a need for standards to guide and support the audits. These standards could be promulgated by a board such as the Financial Accounting Standards Board or the Sustainability Accounting Standards Board. Without standards, the audits could vary in reliability and quality.

Artificial Intelligence systems are increasingly used for making decisions, and the use of algorithms in AI systems is fundamental to their design. These systems use algorithms differently than earlier systems did. Rather than following only explicitly programmed instructions, some computer algorithms are designed to allow computers to learn on their own (i.e., to facilitate machine learning). Uses for machine learning include data mining and pattern recognition (a process not unlike finding correlations).
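A minimal sketch can show what "learning from data" rather than following explicit rules looks like: a one-nearest-neighbour classifier, which labels a new case by finding the most similar past case. The feature values and labels below are made up purely for illustration; this is a toy, not a production method.

```python
# One-nearest-neighbour: the "rule" is never written down explicitly;
# the system generalizes from examples it has seen.

def nearest_neighbour(train, point):
    """Return the label of the training example closest to `point`."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest = min(train, key=lambda example: sq_dist(example[0], point))
    return closest[1]

# Training data: (features, label) pairs -- hypothetical values.
train = [((1.0, 1.0), "A"), ((1.2, 0.9), "A"),
         ((5.0, 5.1), "B"), ((4.8, 5.3), "B")]

print(nearest_neighbour(train, (1.1, 1.0)))  # prints A
print(nearest_neighbour(train, (5.2, 4.9)))  # prints B
```

The point of the sketch is that the system's behaviour is driven entirely by whatever patterns happen to be in the training data, which is exactly why flawed or biased data produces flawed or biased decisions.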

The issue goes beyond the actual functionality of the algorithms. It extends into the use and misuse of data, which is being gathered and analyzed in unprecedented volumes. We have laws and rules governing the use of data, and there must be a link between those rules and the algorithms.

“Correlation … is different than causality.” [“Big Data Uncovers Some Weird Correlations,” The Wall Street Journal, 23 March 2014]. “Finding surprising correlations has never been easier, thanks to the flood of data that’s now available.” Deborah Gage reports that one “company found that deals closed during a new moon are, on average, 43% bigger than when the moon is full.” Other weird correlations that have been discovered include, “People answer the phone more often when it’s snowy, cold or very humid; when it’s sunny or less humid they respond more to email. A preliminary analysis shows that they also buy more when it’s sunny, although certain people buy more when it’s overcast.

“Are sales deals affected by the cycles of the moon? Is it possible to determine credit risk by the way a person types? Fast new data-crunching software combined with a flood of public and private data is allowing companies to test these and other seemingly far-fetched theories, asking questions that few people would have thought to ask before. By combining human and artificial intelligence, they seek to uncover clever insights and make predictions that could give businesses an advantage in an increasingly competitive marketplace.”
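The "weird correlations" above are exactly what one expects when enough noisy data is searched: given many unrelated variables, some will correlate strongly with any target purely by chance. The sketch below demonstrates this with nothing but random numbers; the sample sizes and trial count are arbitrary choices for illustration.

```python
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(1)
target = [random.random() for _ in range(20)]  # e.g. 20 months of "sales"

# 1,000 unrelated random "signals": by chance alone, at least one will
# correlate strongly with the target, despite there being no connection.
best = max(
    ([random.random() for _ in range(20)] for _ in range(1000)),
    key=lambda signal: abs(pearson(signal, target)),
)
print(abs(pearson(best, target)))  # a large correlation from pure noise
```

This is why a surprising correlation dug out of a large dataset is, by itself, weak evidence of anything.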

Machine learning isn’t replacing people. Part of the problem is that most machine learning systems don’t combine reasoning with calculation. They simply spit out correlations, whether or not those correlations make sense. Adding reasoning to machine learning systems makes correlations and insights much more useful.
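One simple way to picture "adding reasoning to calculation" is a toy pipeline in which discovered correlations are kept only if they pass a plausibility check supplied by a human expert. The feature names, correlation values, and approved list below are all hypothetical, invented for this sketch.

```python
# Expert-approved drivers that could plausibly cause sales to move
# (a hypothetical list -- in practice this reasoning step is the hard part).
PLAUSIBLE_DRIVERS = {"price", "ad_spend", "season"}

def screen(correlations, plausible=PLAUSIBLE_DRIVERS):
    """Keep only correlations whose feature passes the plausibility check."""
    return {f: r for f, r in correlations.items() if f in plausible}

# Correlations a mining step might report (hypothetical values).
found = {"price": -0.62, "moon_phase": 0.43, "ad_spend": 0.55}
print(screen(found))  # moon_phase is discarded as implausible
```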

AI researchers are trying to address this by introducing common-sense reasoning into their systems. “Common-sense reasoning is a field of artificial intelligence that aims to help computers understand and interact with people more naturally by finding ways to collect these assumptions and teach them to computers. Common-sense reasoning has been most successful in the field of natural language processing (NLP), though notable work has been done in other areas.”

The technology of common sense reasoning is finding its way into commercial products. It will continue to evolve in the coming years to stabilize AI and make it much more sensible. Without common sense, it will be difficult to rely on AI systems in an increasingly digital and mobile world.

