ChatGPT Could Be an Effective AI Tool. So How Do We Regulate It?
ChatGPT is only two months old, but in that time we've already debated how powerful it truly is, and how we should regulate it.
Many people use the artificial intelligence chatbot to help with research, messaging matches on dating apps, writing code, brainstorming ideas for work, and other tasks.
Just because something can be beneficial does not mean it cannot also be harmful: students can use it to produce essays for them, and malicious actors can use it to create malware. Even when users have no malevolent intent, it can provide inaccurate information, reflect biases, generate objectionable content, store sensitive information, and, some fear, erode everyone's critical thinking skills through over-reliance. Then there's the perennial (though perhaps unjustified) fear that robots are taking over the world.
And ChatGPT can do all of this with little to no scrutiny from the US authorities.
That's not because ChatGPT, or AI chatbots in general, are intrinsically harmful, according to Nathan E. Sanders, a data scientist at Harvard University's Berkman Klein Center. "There are a lot of excellent, supportive applications for them in the democratic sphere that would improve our society," Sanders remarked. It's not that AI or ChatGPT shouldn't be used; rather, we must ensure it is used appropriately. "In an ideal world, we would protect vulnerable communities. In that process, we want to defend the interests of minority groups so that the richest and most powerful interests do not dominate."
Regulating ChatGPT is critical because this type of AI can show disregard for individual rights such as privacy, and can reinforce systemic biases based on race, gender, ethnicity, age, and other factors. We also don't yet know where risk and liability lie when using the tool.
"We can harness and control AI to create a more utopian society," Democratic California Rep. Ted Lieu wrote in a New York Times op-ed last week. He also introduced to Congress a resolution authored entirely by ChatGPT that directs the House of Representatives to embrace AI regulation. He responded to the prompt: "You are Ted Lieu, Congressman. Create a comprehensive congressional resolution expressing broad support for Congress's focus on AI."
All of this points to an uncertain future for regulating AI chatbots like ChatGPT, though some places have already moved to restrict the tool. Massachusetts State Senator Barry Finegold introduced legislation that would require companies using AI chatbots, such as ChatGPT, to conduct risk assessments, implement security measures, and disclose to the government how their algorithms work. To curb plagiarism, the bill would also require these tools to watermark their output.
"This is such a powerful tool that regulations are required," Finegold told Axios.
There are already some general AI regulations in place. The White House issued an "AI Bill of Rights" that essentially explains how existing legal protections, such as civil rights, civil liberties, and privacy, apply to AI. The EEOC is scrutinizing AI-based hiring tools for potential discrimination against protected classes. In Illinois, employers that use AI in recruiting must let the government check the tools for racial bias. Several states, including Vermont, Alabama, and Illinois, have commissions tasked with ensuring that artificial intelligence is used ethically. Colorado passed legislation prohibiting insurers from using AI to collect data that unfairly discriminates against protected groups. And the EU is well ahead of the US on AI legislation, having advanced its proposed Artificial Intelligence Act in December. None of these rules, however, is specific to ChatGPT or other AI chatbots.
While some states have AI rules, nothing targets chatbots like ChatGPT specifically, at either the state or federal level. The National Institute of Standards and Technology, part of the Department of Commerce, has issued an AI framework meant to guide organizations in using, building, and deploying AI systems, but it is only that: a voluntary framework, with no penalty for failing to follow it. Looking ahead, the Federal Trade Commission appears to be developing new guidelines for companies that create and deploy AI systems.
"Will the federal government create regulations or pass legislation to oversee this? That, in my opinion, is quite unlikely "According to Dan Schwartz, an intellectual property associate at Nixon Peabody. "It is unlikely that any federal regulation will be implemented anytime soon." Schwartz expects that the government will look into restricting ownership of what ChatGPT produces in 2023. For example, if you ask the tool to generate code for you, do you own that code or does OpenAI?
A second sort of regulation is likely to be private, in academia for example. Noam Chomsky compares ChatGPT's contributions to education to "high-tech plagiarism," and since plagiarism in school can lead to expulsion, that is how private regulation might function here as well.