The ethics of innovation in generative AI and the future of humanity

Artificial intelligence has the potential to change the social, cultural and economic fabric of the world. Just as television, mobile and the internet have spurred mass transformation, generative AI developments such as ChatGPT will create new opportunities that humanity has yet to imagine.

However, with great power comes great risk. It’s no secret that generative AI has raised new questions about ethics and privacy, and one of the biggest risks is that society will use this technology irresponsibly. To avoid that outcome, it is crucial that innovation does not outpace responsibility. New regulatory guidelines need to be developed at the same pace that major tech players roll out new AI applications.

To fully understand the moral conundrums around generative AI and their potential impact on the future of the global population, we need to take a step back and understand these large language models, how they can create positive change, and where they may fall short.

The challenges of generative AI

Humans answer questions based on our genetic makeup (nature), education, self-learning and observation (nurture). A machine like ChatGPT, on the other hand, has the world’s data at its fingertips. Just as human biases influence our responses, AI’s output is shaped by the data used to train it. Because that data is often comprehensive and contains many perspectives, the answer generative AI gives depends on how you ask the question.


AI has access to trillions of terabytes of data, allowing users to focus its attention through prompt engineering or programming to make the output more precise. There is nothing wrong with using the technology to suggest actions, but the reality is that generative AI can also be used to make decisions that affect human lives.
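
To make that concrete, here is a minimal sketch of prompt engineering, assuming the pre-1.0 OpenAI Python client and an API key in the environment; the model name and prompts are illustrative choices, not drawn from this article:

```python
# A minimal sketch of prompt engineering: the same underlying question,
# framed two ways, steers the model toward different answers.
# Assumes the pre-1.0 openai package (pip install "openai<1.0") and an
# OPENAI_API_KEY environment variable; model and prompts are illustrative.
import openai

def ask(prompt: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# A broad question invites a broad answer averaged over many perspectives.
print(ask("Is nuclear energy a good idea?"))

# A narrow framing focuses the model on one slice of its training data.
print(ask("List three safety arguments critics raise against nuclear energy."))
```

Nothing about the model changes between the two calls; only the question does. That is why responsibility for what comes back rests, in part, with the human doing the asking.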

For example, when using a navigation system, a human specifies the destination and the machine calculates the fastest route based on factors like road traffic data. But if the navigation system were asked to determine the destination itself, would its action match the outcome the human wanted? And what if a human could not intervene while the system decided, on its own, to drive a route other than the one suggested? Generative AI is designed to simulate thoughts in human language from patterns it has already seen, not to create new knowledge or make decisions. Using the technology for that kind of use case is what raises legal and ethical concerns.

Use cases in action

Low-risk applications

Low-risk, ethically sound applications will almost always take an assistive approach with a human in the loop, where the human retains responsibility.

For example, if ChatGPT is used in a college literature class, a professor could draw on the technology’s knowledge of the material to help students discuss the topics at hand and to test their understanding. Here, AI successfully supports creative thinking and expands students’ perspectives as a supplementary educational tool, provided the students have read the material and can weigh the AI’s simulated ideas against their own.

Medium-risk applications

Some applications carry medium risk and warrant added scrutiny under regulation, but the benefits may outweigh the risks when the technology is used correctly. For example, AI can recommend medical treatments and procedures based on a patient’s medical history and the patterns it identifies in similar patients. However, a patient acting on such a recommendation without consulting a human medical expert could face dire consequences. Ultimately, the decision, and how their medical data is used, is up to the patient, but generative AI should not be used to create a care plan without proper checks and balances.

High-risk applications

High-risk applications are characterized by a lack of human accountability and by autonomous, AI-driven decisions. For example, an AI judge presiding over a courtroom is unthinkable under our laws. Judges and attorneys can use AI to do their research and to suggest a course of action for the defense or prosecution, but when the technology slips into playing the role of judge, it poses a different threat. Judges are stewards of the rule of law, bound by the law and by their own conscience, which AI does not have. There may be ways in the future for AI to treat people fairly and without bias, but for now only humans can be held accountable for their actions.

Immediate steps towards accountability

We have entered a crucial phase in the regulatory process for generative AI, one in which applications like these need to be considered in practice. There is no easy answer as we continue to research AI behavior and develop guidelines, but there are four steps we can take now to minimize immediate risk:

  1. Self-governance: Every organization should adopt a framework for the ethical and responsible use of AI within its company. Before regulation is drafted and becomes law, self-governance can show what works and what doesn’t.
  2. Testing: A comprehensive testing framework is essential, one that follows fundamental rules of data consistency, such as detecting bias in data, requiring sufficient data for all demographics and groups, and verifying the truthfulness of data. Testing for these biases and inconsistencies ensures that disclaimers and caveats are applied to the final output, just as a prescription drug lists all potential side effects. Testing should be ongoing, not a one-time step before a feature is released; the first sketch after this list shows one such check.
  3. Responsible action: Human oversight matters no matter how intelligent generative AI becomes. By ensuring that AI-driven actions pass through a human filter, we can keep the use of AI responsible and confirm that practices are human-controlled and properly governed from the outset; the second sketch after this list shows one way to wire in such a filter.
  4. Continuous risk assessment: Determining whether a use case falls into the low-, medium- or high-risk category, which can be complex, helps set the appropriate guidelines and the right level of governance. A one-size-fits-all approach will not lead to effective governance.
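
To make the testing step concrete, here is a minimal sketch of one check such a framework might run; the data, threshold and field names are entirely hypothetical:

```python
# A minimal sketch (hypothetical data, threshold and field names) of one
# check a testing framework might run: flag demographic groups that are
# underrepresented in the training data and attach a caveat to the output,
# much like a prescription drug lists its potential side effects.
from collections import Counter

MIN_SHARE = 0.15  # illustrative threshold: every group should be >= 15% of records

def representation_caveats(records):
    """Return one caveat string per underrepresented demographic group."""
    counts = Counter(r["group"] for r in records)
    total = sum(counts.values())
    return [
        f"Caveat: group '{group}' is only {count / total:.0%} of the training data; "
        "outputs may be less reliable for this group."
        for group, count in counts.items()
        if count / total < MIN_SHARE
    ]

# Toy training set: the 51+ group is 1 record out of 10 (10%), below threshold.
training_sample = [
    {"group": "18-30"}, {"group": "18-30"}, {"group": "18-30"}, {"group": "18-30"},
    {"group": "31-50"}, {"group": "31-50"}, {"group": "31-50"},
    {"group": "31-50"}, {"group": "31-50"},
    {"group": "51+"},
]
for caveat in representation_caveats(training_sample):
    print(caveat)
```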
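
And here is a minimal sketch, again with hypothetical names and types, of how steps 3 and 4 might combine: every AI-suggested action carries a risk category, high-risk decisions are refused outright, and medium-risk suggestions must clear a human reviewer before anything executes.

```python
# A minimal sketch (hypothetical names and types) of a human filter gated by
# risk category: high-risk decisions stay with humans, medium-risk suggestions
# need explicit human sign-off, and low-risk assistive output passes through.
from dataclasses import dataclass
from enum import Enum

class Risk(Enum):
    LOW = "low"        # assistive output; the human is already in the loop
    MEDIUM = "medium"  # useful recommendation, but it needs expert sign-off
    HIGH = "high"      # autonomous decision-making; do not automate

@dataclass
class Suggestion:
    action: str
    risk: Risk

def human_approves(suggestion):
    """Stand-in for a real review step, e.g. a clinician confirming a care plan."""
    answer = input(f"Approve '{suggestion.action}'? [y/N] ")
    return answer.strip().lower() == "y"

def execute(suggestion):
    if suggestion.risk is Risk.HIGH:
        raise PermissionError("High-risk decisions must remain with humans.")
    if suggestion.risk is Risk.MEDIUM and not human_approves(suggestion):
        print("Rejected by the human reviewer; nothing executed.")
        return
    print(f"Executing: {suggestion.action}")

execute(Suggestion("Recommend physical therapy twice a week", Risk.MEDIUM))
```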

ChatGPT is just the tip of the iceberg for generative AI. The technology is advancing at breakneck speed, and taking responsibility for it now will determine how AI innovations affect the global economy, among many other outcomes. We are at an interesting point in human history, when our humanity is being challenged by a technology trying to replicate us.

A bold new world awaits and we must collectively be prepared for it.

Rolf Schwartzmann, Ph.D., serves on the Information Security Advisory Board of Icertis.

Monish Darda is the co-founder and chief technology officer of Icertis.
