The introduction of generative AI systems into the public domain exposed people all over the world to new technological possibilities, implications, and even consequences many had yet to consider. Thanks to systems like ChatGPT, just about anyone can now use advanced AI models that are not only capable of detecting patterns, analyzing data, and making recommendations as earlier generations of AI did, but also of moving beyond that to create new content, develop original chat responses, and more.
A turning point for AI
When ethically designed and responsibly brought to market, generative AI capabilities support unprecedented opportunities to benefit business and society. They can help create better customer service and improve healthcare systems and legal services. They also can support and augment human creativity, expedite scientific discoveries, and mobilize more effective ways to address climate challenges.
We are at a critical inflection point in the development, deployment, and use of AI, and in its potential to accelerate human progress. However, this huge potential comes with risks, such as the generation of fake content and harmful text, possible privacy leaks, amplification of bias, and a profound lack of transparency into how these systems operate. It is critical, therefore, that we question what AI could mean for the future of the workforce, democracy, creativity, and the overall well-being of humans and our planet.
The need for new AI ethics standards
Some tech leaders recently called, in an open letter, for a six-month pause in the training of more powerful AI systems to allow for the creation of new ethics standards. While the letter's intentions and motivations were undoubtedly good, it misses a fundamental point: these systems are within our control today, as are the solutions.
Responsible training, together with an ethics by design approach over the whole AI pipeline, supported by a multi-stakeholder collaboration around AI, can make these systems better, not worse. AI is an ever-evolving technology. Therefore, for both the systems in use today and the systems coming online tomorrow, training must be part of a responsible approach to building AI. We don’t need a pause to prioritize responsible AI.
It’s time to get serious about the AI ethics standards and guardrails all of us must continue adopting and refining. IBM, for its part, established one of the industry’s first AI Ethics Boards years ago, along with a company-wide AI ethics framework. We constantly strive to strengthen and improve this framework by taking stock of the current and future technological landscape – from our position in industry as well as through a multi-stakeholder approach that prioritizes collaboration with others.
Our Board provides a responsible and centralized governance structure that sets clear policies and drives accountability throughout the AI lifecycle, while remaining nimble and flexible enough to support IBM’s business needs. This is critical, and something we have been doing for both traditional and more advanced AI systems. Again, we cannot focus only on the risks of future AI systems and ignore the current ones. Value alignment and AI ethics activities are needed now, and they need to continuously evolve as AI evolves.
Alongside collaboration and oversight, the technical approach to building these systems should also be shaped from the outset by ethical considerations. For example, concerns around AI often stem from a lack of understanding of what happens inside the “black box.” That is why IBM developed a governance platform that monitors models for fairness and bias, captures the origins of the data used, and can ultimately provide a more transparent, explainable, and reliable AI management process. Additionally, IBM’s AI for Enterprises strategy centers on an approach that embeds trust throughout the entire AI lifecycle. This begins with the creation of the models themselves, extends to the data we train the systems on, and ultimately covers the application of these models in specific business domains, rather than open domains.
All this said – what needs to happen?
First, we urge others across the private sector to put ethics and responsibility at the forefront of their AI agendas. A blanket pause on AI’s training, together with existing trends that seem to be de-prioritizing investment in industry AI ethics efforts, will only lead to additional harm and setbacks.
Second, governments should avoid broadly regulating AI at the technology level. Otherwise, we’ll end up with a whack-a-mole approach that hampers beneficial innovation and is not future-proof. We urge lawmakers worldwide to instead adopt smart, precision regulation that applies the strongest regulatory controls to AI use cases with the highest risk of societal harm.
Finally, there still is not enough transparency around how companies are protecting the privacy of data that interacts with their AI systems. That’s why we need a consistent, national privacy law in the U.S. An individual’s privacy protections shouldn’t change just because they cross a state line.
The recent focus on AI in our society is a reminder of the old line that with any great power comes great responsibility. Instead of a blanket pause on the development of AI systems, let’s continue to break down barriers to collaboration and work together on advancing responsible AI—from an idea born in a meeting room all the way to its training, development, and deployment in the real world. The stakes are simply too high, and our society deserves nothing less.
Read “A Policymaker’s Guide to Foundation Models”