OpenAI, the company behind the popular chatbot ChatGPT, has said it will not allow the platform to be used for political campaigns, even as it launches tools to fight disinformation ahead of a series of elections this year.
The development comes amid concerns expressed by the World Economic Forum that AI technology could disrupt several elections being held in different countries this year.
Major elections are scheduled this year in the United States, the United Kingdom, and India, as well as across the European Union.
While noting that it is still working to understand how effective ChatGPT might be for personalized persuasion, OpenAI said it would not allow people to use it for political campaigns and lobbying “until we know more.”
Tackling election misinformation
Highlighting how the company is guarding against the misuse of ChatGPT ahead of the 2024 elections, OpenAI said in a blog post:
- “We expect and aim for people to use our tools safely and responsibly, and elections are no different. We work to anticipate and prevent relevant abuse—such as misleading “deep fakes”, scaled influence operations, or chatbots impersonating candidates.
- “Before releasing new systems, we red-team them, engage users and external partners for feedback, and build safety mitigations to reduce the potential for harm. For years, we’ve been iterating on tools to improve factual accuracy, reduce bias, and decline certain requests. These tools provide a strong foundation for our work around election integrity. For instance, DALL·E has guardrails to decline requests that ask for image generation of real people, including candidates.
- “People want to know and trust that they are interacting with a real person, business, or government. For that reason, we don’t allow builders to create chatbots that pretend to be real people (e.g., candidates) or institutions (e.g., local government).
- “We don’t allow applications that deter people from participation in democratic processes—for example, misrepresenting voting processes and qualifications (e.g., when, where, or who is eligible to vote) or that discourage voting (e.g., claiming a vote is meaningless).”
AI misinformation as a global risk
Nairametrics had earlier reported that the World Economic Forum (WEF) listed AI-generated misinformation/disinformation and cyberattacks as some of the top risks that countries globally will face this year.
The Forum disclosed this in its recently released Global Risks Report 2024. According to the WEF, a “global risk” is defined as the possibility of the occurrence of an event or condition which, if it occurs, would negatively impact a significant proportion of global GDP, population, or natural resources.
The report, which details the findings of the Global Risks Perception Survey (GRPS), revealed that advances in AI technology now make it easier for people to create and spread misinformation.
According to the report, 53% of respondents identified AI-generated misinformation as a major global risk in 2024, placing it second among the top 10 risks for the year, behind extreme weather, which topped the table.