
AI for the Common Good, Part Two

Rory Donovan
January 20, 2023

Overview

In our last blog we explored the power of AI for the Common Good, spotlighting a couple of initiatives to find workable solutions to promote fair and unbiased AI systems. The greatest challenge is connecting people and spreading the wealth of knowledge and resources to those without internet access – some 19 million Americans, according to the FCC.

Not far behind the issue of accessibility is the need to reskill those whose jobs have been replaced by technology. For these reasons and more, developers and managers must understand and apply AI principles for the common good, without exploitation.

1. Improving Accountability Mechanisms

Scrutinizing AI systems in ways that foster accountability and ensure the technology executes as intended must be our priority. Rather than tolerating obscurity, developers and data scientists should collaborate to make algorithms more transparent. We can do this through a strategy that involves:

A. Qualitative analysis and research
B. Black box testing
C. Training data review
D. Code analysis
E. Journalism

A. Qualitative Analysis and Research

Qualitative research employs formal, rigorous methods to analyze how algorithms function and where they require further scrutiny. Methods like ethnography also enable researchers to identify a system's assumptions, its purpose, any implicit policies its methods advocate, and the various parts that comprise its makeup.

B. Black Box Testing

An integral first step is gaining direct access to a system's inputs and resulting outputs. With that access, researchers can attempt to create emulating systems that approximate the underlying model.
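
To make this concrete, here is a minimal sketch in Python of how such emulation might look. It assumes a scikit-learn-style black box exposing only a predict method; opaque_model, probe_black_box, and fit_surrogate are hypothetical names used for illustration, not part of any real system.

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

def probe_black_box(opaque_model, n_samples=5000, n_features=10, seed=0):
    # Generate probe inputs and record the black box's outputs.
    # Only input/output access is assumed -- no view of the internals.
    rng = np.random.default_rng(seed)
    X_probe = rng.normal(size=(n_samples, n_features))
    y_probe = opaque_model.predict(X_probe)
    return X_probe, y_probe

def fit_surrogate(X_probe, y_probe):
    # Fit an interpretable surrogate that approximates the observed behavior.
    surrogate = DecisionTreeClassifier(max_depth=4)
    surrogate.fit(X_probe, y_probe)
    fidelity = accuracy_score(y_probe, surrogate.predict(X_probe))
    print(f"Surrogate agrees with the black box on {fidelity:.1%} of probes")
    return surrogate

The higher the surrogate's agreement with the black box, the more confidently a reviewer can use the surrogate's simple structure to reason about the opaque system's behavior.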

C. Data Review

A significant aspect of scrutiny is the ability to examine the training data and understand its properties, such as its size, origin, and features – as well as any changes to these over the period of its use. Access to this data allows those examining the algorithm to determine whether the data is appropriate for its intended use and if enough of it exists – and in the right proportions – to support the prediction algorithms required.
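
As a rough illustration of what such a review might cover, here is a short Python sketch using pandas. The file name training_data.csv and the label column are hypothetical placeholders, not a prescribed format.

import pandas as pd

def review_training_data(path="training_data.csv", label_col="label"):
    df = pd.read_csv(path)

    # Size and features: how much data exists and what it contains.
    print(f"Rows: {len(df)}, Columns: {df.shape[1]}")
    print("Feature types:")
    print(df.dtypes)

    # Missing values hint at quality or provenance problems.
    print("Missing values per column:")
    print(df.isna().sum())

    # Class proportions: is there enough data, in the right proportions,
    # to support the predictions the algorithm is asked to make?
    print("Label proportions:")
    print(df[label_col].value_counts(normalize=True))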

D. Code Analysis

A careful analysis of the source code or statistical model grounding the algorithm – known as white-box testing – is vital to improving accountability mechanisms. In addition to identifying system errors, code review can be used to analyze the overall behavior of the system, including the data it has accessed, its inputs and their weights, the calculations it has made, the progress of its decision trees, and its errors.
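
By way of illustration only, the following Python sketch shows the kind of internals that white-box access exposes, using a small scikit-learn decision tree trained on a public dataset as a stand-in for the system under review.

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Train a small, transparent stand-in model on a public dataset.
data = load_iris()
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

# Feature importances approximate the weight each input carries.
for name, weight in zip(data.feature_names, model.feature_importances_):
    print(f"{name}: {weight:.3f}")

# export_text prints every branch of the decision tree, making the
# "progress of its decision trees" explicit and reviewable.
print(export_text(model, feature_names=data.feature_names))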

E. Journalism

By highlighting the adverse effects opaque algorithms may have on society, journalists can significantly raise awareness about the need for fairness with regulators and policymakers. Investigative journalism can spotlight algorithm systems of interest, uncover the purpose of those algorithms, analyze the subsystems that constitute the algorithms, and advocate for policy-making that governs how algorithms are designed and implemented.

Even with full access to the source code or statistical model, reviewers might still have difficulty understanding it completely. The reason is that “even seasoned experts can miss simple problems buried in complicated code.” In fact, “it may be necessary to see a program run in the wild, with real users and data to understand its effects.”

2. Benefits of LAWS (Lethal Autonomous Weapon Systems)

Perhaps ironically, AI for the common good also has a role on the battlefield. The U.S. Navy and U.S. Air Force often place unmanned aerial and naval vehicles (drones) in the air and at sea. Tasks such as aerial surveillance, intelligence collection and analysis, and the detection of biological and chemical weapons are well suited to AI-powered machines because those machines replace humans in dangerous warfare situations.

Similarly, the U.S. Army seeks to apply AI and robotics to field operations over terrain in which these technologies have never operated and to which their creators have had no access. By adding estimates of collateral damage to these AI systems' lists of objectives, these branches of the military hope to increase the benefits of AI in warfare.

Yet problems still arise from the unintended consequences of placing such power in the control of algorithms, including the potential for collateral damage that humans cannot predict. A multitude of ethical questions also arise, such as whether malware introduced into LAWS could cause premature or incorrect deployment, and whether it is possible to hold autonomous AI accountable for actions or misdeeds performed while prosecuting a war.

Just as AI can be a tool for the common good, it can also be used to harm. Bottom line – AI should not be used in LAWS without full transparency and accountability. There is simply too much at stake.

Interested in learning more about how to develop ethical AI? Our firm can help you put best practices in place to better serve your customers. Contact us! Quickly develop ethical AI that is explainable, equitable, and reliable with help from our complete AI IaaS. Sign up for FREE diagnostics.
