ChatGPT is a popular artificial intelligence (AI) chatbot that can answer complex questions much as a human might. Estimates suggest the chatbot reached more than 100 million users in January, just two months after its launch. That makes ChatGPT the fastest-growing app in history, easily surpassing TikTok and Instagram.
But as ChatGPT grows in popularity and other tech companies develop similar products, questions about how to regulate these AI products will emerge. As it happens, one of the people behind ChatGPT believes those conversations should already be taking place, and that they have to include governments.
Even setting aside the 100 million monthly users milestone, there's another way to quantify ChatGPT's impact: Google's reaction to its emergence.
Google is reportedly scrambling to respond to ChatGPT, which has the potential to grow into a serious Google Search rival. It's not a reliable Google Search alternative yet, though, and that's something ChatGPT users need to remember.
Google is in “code red” mode. The company has ramped up work on its own chatbots in recent weeks, and it's preparing a launch event that might deliver the first ChatGPT-like Google products.
Microsoft, meanwhile, has poured billions into OpenAI, the company that created ChatGPT, and is already using ChatGPT in its Bing service.
As we've witnessed in the past few months, the uses of ChatGPT are far-ranging. They can include malicious activities, like using ChatGPT to cheat on school tests.
OpenAI chief technology officer Mira Murati discussed ChatGPT with Time magazine and addressed the need to regulate AI.
[Image: A potentially new version of Microsoft Bing. Image source: Owen Yin]
“When we’re developing these technologies, we’re really pushing toward general intelligence, general capabilities with high reliability—and doing so safely,” the exec said, answering a question about schools banning ChatGPT.
She continued, “But when you open it up to as many people as possible with different backgrounds and domain expertise, you’ll definitely get surprised by the kinds of things that they do with the technology, both on the positive front and on the negative front.”
As for the dangers of AI, Murati acknowledged, “there are a lot of hard problems to figure out.”
“How do you get the model to do the thing that you want it to do, and how you make sure it’s aligned with human intention and ultimately in service of humanity?” she said. “There are also a ton of questions around societal impact, and there are a lot of ethical and philosophical questions that we need to consider. And it’s important that we bring in different voices, like philosophers, social scientists, artists, and people from the humanities.”
Furthermore, Murati said AI could be misused or used by bad actors. “So, then there are questions about how you govern the use of this technology globally. How do you govern the use of AI in a way that’s aligned with human values?”
The exec said companies like OpenAI should “bring this into the public consciousness in a way that’s controlled and responsible.”
“But we’re a small group of people and we need a ton more input in this system and a lot more input that goes beyond the technologies—definitely regulators and governments and everyone else,” Murati said. She added that it’s not too early for regulators to start getting involved, considering the impact AI tech will inevitably have on society.