
Responsible AI is More Established Than You Think

Here’s how this critical field has evolved, and what you can learn from its history.

Katie Trauth Taylor

June 26, 2024

While generative AI adoption is growing faster than ever, most organizations still approach it with an abundance of caution, and rightfully so. Inaccuracy remains the most concerning risk among organizations globally, and issues like IP infringement, security, compliance, personal privacy, bias, and environmental and physical harm to humans remain top of mind. McKinsey recently went so far as to suggest that in the era of genAI, every professional in an organization will need to become risk savvy. And as companies continue to participate in genAI innovation, they are paying attention not only to how they govern the technology’s use, but also to the foundational technological and ethical principles that promote the responsible build and use of this technology.

Responsible AI is a movement of data scientists, scholars, businesspeople, and activists seeking to develop AI systems that operate in a trustworthy, responsible manner. Specifically, responsible AI often refers to a framework to guide ethical and responsible AI use and development. The past few years have seen a surge of efforts from governments and industries alike to articulate responsible AI frameworks that ensure systems are built and used safely.

At Narratize, we have always taken responsible AI seriously. In our work with the organization Cincinnati AI Catalyst, we define responsible AI as:

the practice of designing, developing, and deploying AI with good intention to empower employees and businesses, and fairly impact customers and society, allowing companies to engender trust and scale AI with confidence.

But scrutiny over the responsible development and use of AI is not new; in fact, it has coevolved with the technology itself. Responsible AI – as a unique and substantial field of study – has been around since the inception of artificial intelligence technologies. Read on for a (brief) journey through the history of AI more broadly to understand what responsible AI has looked like over the past decades and how our current understanding of, and need for, responsible AI emerged. We’ll cover:

  • A very, very brief history of AI and how we got to where we are today
  • The origins of Responsible AI and its foundational parameters
  • Some of the earliest AI governance guidelines - and their influence on organizations today 

AI: A Very Brief History

While the idea of machine intelligence is far older than this starting point, the origins of our common understanding of AI can be traced back to the Dartmouth Summer Research Project on Artificial Intelligence, held in New Hampshire in the summer of 1956, where researchers met to better understand AI. At the time, they focused heavily on trying to define “intelligence,” arguing both that it could be described precisely and that, because it could, machines could be made to simulate it. In a sense, “intelligence” was something that could be specifically described and replicated in machines. The researchers – John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude E. Shannon – proposed to study several of the innovations that form the foundation of today’s technology: automatic computing, language learning, neural nets (now known as neural networks), autonomous learning, and even abstract reasoning and creativity.

This idea echoes Alan Turing, who, some 75 years ago, suggested that the goal of designing an intelligent system is to approximate human behavior, such that someone could not tell whether they were interacting with a human or a machine. In fact, the Turing Test (or imitation game) was one of the first guidelines shaping how AI is evaluated; in this case, a guideline to measure whether an AI system can imitate human behavior. Today, CAPTCHAs, a descendant of the Turing Test, make just that distinction, serving as a security measure to protect against spam and systems pretending to be human.

Following a brief AI “winter” in the 1970s, when development stagnated (attributed largely to a lack of computing capability), AI development shifted in the 1980s and 1990s toward systems designed for rational behavior, a step beyond simply imitating human behavior, because humans do not always act in the “optimal” way in every scenario. The idea of a rational system helped give rise to expert systems, or systems that use techniques a human “expert” might use to solve problems. This concept of machine rationality became so consequential that some have described it as an important part of the “fourth industrial revolution.”

Rational systems “collected” human insights, which gave way to machine learning and deep learning techniques that use large sets of data to recognize patterns and achieve goals. Machine learning caused an explosion of new techniques for artificial intelligence, branching off into areas like image recognition, face recognition, and navigation between 2000 and the late 2010s. The emergence of systems reliant on large amounts of data, however, created the need to generate and store enormous quantities of information. That means personal data could become available to AI developers as well as others, including big companies like Apple, Facebook, and Amazon.

The design of systems that could analyze such large sets of data to solve problems or make decisions raised ethical concerns about human judgment. Where is the human needed when AI systems appear to mimic human behavior so adequately? What are the implications of “not needing” human presence? Those who study AI ethics have asked whether ethical systems designed by humans (e.g., utilitarian ethics) are applicable to AI machines, how to design systems with such values, and what the ethical implications are of “designing” a system around one system of ethics over another.

What this (brief) history of AI shows is a significant change in how AI systems work and why they work the way they do. They’ve changed from imitating human behavior to leveraging enormous amounts of data to process information in a way a single human simply could not. This expansion of AI, almost inevitably, has led to questions about how to control it and address the myriad risks and challenges that have emerged as AI systems have evolved. 

Governing AI: Origins of responsible AI

As AI continued to grow in both use and complexity between 2000 and the late 2010s, various industries and governments started to take notice, recognizing both the risks associated with AI and the powerful benefits these systems offered: a high-risk, high-reward scenario that required the adoption of clear frameworks for using AI effectively to avoid harm to individuals and even maximize the public good.

Questions about physical safety (autonomous cars, for example, were in their nascent stage, but the concern was real) as well as ethics and bias began to grow. AI, after all, was built on certain types of data and used algorithms created with specific goals to analyze that data, both of which have been shown to perpetuate bias and discrimination.

Calls for “trustworthy,” “ethical,” “beneficial,” and “responsible” AI began to emerge. But there was also a concurrent notion that there could be enormous benefits to using AI, and that AI could be used “for good,” a framing associated with the United Nations International Telecommunication Union as it highlighted ways AI could help accomplish the Sustainable Development Goals.

But by and large, many in recent years have looked at how the presence of AI necessitates clear frameworks to guide how it is used. These terms – ethical, beneficial, responsible – all signal an interest in using AI well. Particularly because AI is being used more often and for more complex tasks, industry and governments alike have noted potential harms to society, including bias, threats to physical safety, privacy concerns (both widespread data use and surveillance), and even job displacement.

Earliest modern Responsible AI guidelines

To that end, between 2018 and 2019, the European Commission was one of the first government entities to underscore the need to build and use “trustworthy” AI, establishing the High-Level Expert Group on AI (AI HLEG). Their work would become the cornerstone of how AI would be governed in the EU. They ultimately outlined three key features of “trustworthy” AI:

  1. AI must be lawful (complying with all applicable local and national regulations).
  2. AI must be ethical (in this case, adhering to the Ethics Guidelines for Trustworthy AI created by this same group).
  3. AI must be robust, both technically and socially: it should function accurately and safely, and it should also operate robustly within society and support societal needs.

Today, many industries are defining responsible AI policies for themselves that focus on several key concepts, including fairness, transparency, accountability, privacy, and safety. In other words, the lineage of responsible AI today is rooted in this historical movement, and guided by a multidisciplinary, cross-industry coalition of AI advocates and critics alike. 

Building AI that serves humans is at the heart of what we do at Narratize. Learn more about what companies can look for in a safe, ethical AI product – and about our approach to building responsible AI – at Night Sky, your source of inspiration and guidance for all things GenAI transformation and innovation. 

Leave no great idea untold.

Sign up to learn how to accelerate time-to-market for your enterprise’s best, most brilliant ideas.


