Campbell Law Observer writer explores lack of governance over AI technology

Photo of a man holding a yellow sticky note with the letters A.I. written on it

In both the government and private sectors, technology using artificial intelligence (AI) is everywhere. It has been incorporated across a variety of industries and has become an essential part of daily life for many. AI wields an immensely powerful influence over people today, shaping spending decisions on travel, entertainment, food, and personal purchases such as clothing. Yet while AI continues to advance rapidly in complexity, the same cannot be said for the regulations, standards, and guidelines that should be implemented alongside it. These standards and guidelines are critical to ensuring fairness, transparency, and, more importantly, accountability when AI goes wrong and causes harm. The lack of standards and guidelines is troubling, but the lack of accountability and oversight is even more so.

Although there are multiple definitions of AI, the most prevalent refers to a machine capable of performing tasks that typically require human intelligence. Common examples of technology incorporating AI include a computer that plays chess, Facebook's newsfeed, Google's search results, Amazon's Echo, better known as "Alexa," and smartphone personal assistants such as "Siri" and "Google Assistant." The tasks performed by these machines are considered intelligent because they typically require some higher level of cognitive functioning, such as reasoning or judgment.

The processes a machine uses to learn are often opaque, even to the people who build it.

While AI dates back to the 1950s and the famous Dartmouth Conference, it has grown considerably more complex within the last ten years thanks to recent developments in machine learning and neural networks.

To read the full story, follow the link to the Campbell Law Observer.


Contributors

Deb Shartle '21
