

What Is ‘Responsible AI’ And Why Is Big Tech Investing Billions In It?


The boom in artificial intelligence (AI) and increasingly capable computation has taken the world by storm. Pundits are calling the AI revolution a “generational event,” one that will change the world of technology, information exchange and connectivity forever.

Generative AI in particular has reset the benchmark for success and progress in the field, creating new opportunities across sectors ranging from medicine to manufacturing. Paired with deep learning models, generative AI makes it possible to produce text, images and other media from raw data and prompts. The technology relies heavily on self-supervised machine learning over large data sets, meaning these systems expand their repertoire and become more adaptable and appropriately responsive as they are fed more data.

Kevin Scott, Chief Technology Officer of Microsoft, writes about how AI will change the world, explaining that generative AI will help unleash humanity’s creativity, provide new ways to “unlock faster iteration” and create new opportunities for productivity: “The applications are potentially endless, limited only by one’s ability to imagine scenarios in which productivity-assisting software could be applied to complex cognitive work, whether that be editing videos, writing scripts, designing new molecules for medicines, or creating manufacturing recipes from 3D models.”

Both Microsoft and Google are at the forefront of this development and have made incredible strides in AI technology in the last year. Microsoft has integrated the technology seamlessly into its search functions, in addition to creating platforms for developers to innovate in other useful areas. Google has also progressed significantly on this front, showing immense promise with its Bard platform and PaLM API.

However, the promise of endless possibilities brings with it immense responsibility.

In particular, the advent of generative AI has raised serious concerns about how best to develop these platforms in a fair, equitable and safe manner.

One of the primary concerns is building systems that produce equitable and appropriate results. A few years ago, Amazon disbanded an artificial intelligence system the company had been trialing to streamline recruitment. In an attempt to automate parts of hiring, the company built an AI system that sorted candidates’ resumes and helped identify top talent based on historical hiring data. A significant issue emerged, however: because the system learned patterns from historical data, and because the tech industry has historically been dominated by men, the system disproportionately advanced male candidates in the recruitment process. Although Amazon recruiters used the system only for recommendations and made final decisions themselves, the company scrapped the entire program to ensure transparency and fairness in the process going forward.

This incident highlighted a defining issue for developers: AI systems are only as good as the data they are trained on.
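That lesson lends itself to a concrete illustration. The sketch below (Python, standard library only) shows one way a team might screen historical hiring data for a selection-rate skew before training any model on it. The records, the field names (“gender”, “advanced”) and the 80% “four-fifths” threshold are hypothetical assumptions for illustration, not Amazon’s actual data or method.

```python
# Hypothetical sketch: measuring selection-rate disparity in historical hiring
# data before it is used to train a screening model. All records, field names
# and thresholds are illustrative assumptions.
from collections import defaultdict

def selection_rates(records, group_field="gender", outcome_field="advanced"):
    """Fraction of candidates advanced, grouped by a sensitive attribute."""
    advanced = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        total[r[group_field]] += 1
        advanced[r[group_field]] += int(r[outcome_field])
    return {group: advanced[group] / total[group] for group in total}

historical = [
    {"gender": "male", "advanced": True},
    {"gender": "male", "advanced": True},
    {"gender": "male", "advanced": False},
    {"gender": "female", "advanced": True},
    {"gender": "female", "advanced": False},
    {"gender": "female", "advanced": False},
]

rates = selection_rates(historical)
highest = max(rates.values())
for group, rate in rates.items():
    # "Four-fifths" heuristic: flag any group whose selection rate falls below
    # 80% of the best-off group's rate -- a skew a model trained on this data
    # would simply learn and reproduce.
    if rate < 0.8 * highest:
        print(f"Potential disparity: {group} advances at {rate:.0%} vs. {highest:.0%}")
```

A check like this does not fix biased data, but it makes the skew visible before a model quietly reproduces it.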

Recognizing the potential for such problems, Google has been incredibly proactive in its approach to development. Earlier this month at Google’s annual developer conference, executives dedicated an entire portion of the keynote to “responsible AI,” reassuring the audience that it is a key priority for the company.

In fact, Google is striving to be transparent about its safety measures, explaining key issues in developing AI responsibly: “The development of AI has created new opportunities to improve the lives of people around the world, from business to healthcare to education. It has also raised new questions about the best way to build fairness, interpretability, privacy, and safety into these systems.” Echoing the conundrum Amazon faced, Google discusses the importance of data integrity and of the inputs and models used to train AI systems: “ML models will reflect the data they are trained on, so analyze your raw data carefully to ensure you understand it. In cases where this is not possible, e.g., with sensitive raw data, understand your input data as much as possible while respecting privacy; for example by computing aggregate, anonymized summaries.” The company also emphasizes that developers must understand the limitations of their data and models, test systems repeatedly, and closely monitor results for signs of bias or error.
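Google’s suggestion of “aggregate, anonymized summaries” for sensitive raw data can be sketched in a few lines. The example below is an illustrative assumption of one simple approach (a minimum-group-size rule in the spirit of k-anonymity), not Google’s actual tooling; the records, field name and threshold are hypothetical.

```python
# Hypothetical sketch: summarize sensitive records as per-group counts and
# suppress any group too small to report, instead of inspecting raw rows.
from collections import Counter

K_MIN = 5  # minimum group size before a count may be reported (assumed threshold)

def aggregate_summary(records, field, k_min=K_MIN):
    """Return per-group counts, omitting groups smaller than k_min."""
    counts = Counter(r[field] for r in records)
    return {value: count for value, count in counts.items() if count >= k_min}

raw = [{"region": "north"}] * 12 + [{"region": "south"}] * 7 + [{"region": "east"}] * 2

# The two-record "east" group is suppressed; only safely large groups appear.
print(aggregate_summary(raw, "region"))  # {'north': 12, 'south': 7}
```

Summaries like this let a team “understand your input data as much as possible while respecting privacy,” which is exactly the trade-off Google describes.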

Similarly, Microsoft has invested a significant amount of effort in upholding responsible AI standards: “We are putting our principles into practice by taking a people-centered approach to the research, development, and deployment of AI. To achieve this, we embrace diverse perspectives, continuous learning, and agile responsiveness as AI technology evolves.” Overall, the company states that its goal for AI technology is to create lasting and positive impact to address society’s greatest challenges, and to innovate in a way that is useful and safe.

Other companies innovating in this arena must be equally invested in developing these systems responsibly. The development of, and commitment to, “responsible AI” will undoubtedly cost tech companies billions of dollars a year as they iterate and re-iterate to create systems that are equitable and reliable. Although this may seem like a high cost, it is a necessary one. AI is a new yet extraordinarily powerful technology, and it will inevitably upend many industries in the years to come. The foundation for the technology must therefore be strong. Companies must build these systems in a way that fosters deep user trust and genuinely moves society forward. Only then will the technology’s true potential be unlocked, making it a boon rather than a bane to society.

