ChatGPT shows geographic biases on environmental justice issues: Report

Virginia Tech, a United States university, has published a report outlining potential biases in the artificial intelligence (AI) tool ChatGPT, suggesting that its outputs on environmental justice issues vary across different counties.

In a recent report, Virginia Tech researchers alleged that ChatGPT has limitations in delivering area-specific information on environmental justice issues.

The study identified a trend: information was more readily available in larger, more densely populated states.

"In states with larger urban populations, such as Delaware or California, less than 1 percent of the population lived in counties that cannot receive specific information."

Meanwhile, regions with smaller populations lacked equivalent access.

"In rural states such as Idaho and New Hampshire, more than 90 percent of the population lived in counties that could not receive specific local information," the report states.

Additionally, the report cited Kim, a professor in Virginia Tech's Department of Geography, who stressed the need for further research as biases are discovered.

"While more studies are needed, our findings reveal that geographic biases currently exist in the ChatGPT model," stated Kim.

The research paper also included a map illustrating the extent of the American population without access to location-specific information on environmental justice issues.

A map of the United States showing areas where residents can see (blue) or cannot see (red) specific local information on environmental justice issues. Source: Virginia Tech

Related: ChatGPT passes neurology exam for the first time

This follows recent news of academics uncovering possible political biases exhibited by ChatGPT.

On August 25, Cointelegraph reported that researchers from the United Kingdom and Brazil had published a study declaring that large language models (LLMs) like ChatGPT output text containing errors and biases that could mislead readers and promote the political biases presented by traditional media.

Magazine: Deepfake K-Pop porn, woke Grok, 'OpenAI has a problem', Fetch.AI: AI Eye