Opinion

Is Brazil going too far to curb AI-generated disinformation?

Brazil's top electoral court has approved a resolution mandating swift action from platforms, imposing a level of oversight that seems unrealistic

Brazil’s top electoral court is taking a proactive approach to AI-generated disinformation. Photo: Alejandro Zambrana/Secom/TSE

On February 27, Brazil’s Superior Electoral Court approved 12 new resolutions to guide the October municipal elections — including provisions on the use of AI by candidates for public office.

To mitigate the risks of AI-generated disinformation, the court took a preemptive measure that bypassed traditional democratic processes. The move, likely driven by expediency, raises concerns about both the process and the implications of such rapid action.

In doing so, the court sidestepped the legislative process best suited to address these concerns adequately: one that consults society and the economic groups the regulation will affect.

Resolution number 23,732 holds social media platforms jointly liable if they fail to “immediately make unavailable” content or accounts that pose “risk cases” to electoral integrity, such as disinformation and anti-democratic acts.

The resolution clashes with Brazil’s Internet Bill of Rights (the so-called Marco Civil da Internet), particularly with the article that requires a court order before platforms can be held liable for user content. By deviating from this established law, it raises concerns about compatibility with the existing framework and about the potential impact on free speech.

As a result, it leaves room to suppress valid social media posts and hands private platforms excessive power over content moderation. Facing liability without a prior judicial ruling, platforms are likely to err on the side of caution and remove legitimate content to avoid legal exposure.

The resolution calls for the “immediate” removal of illicit content. But immediate relative to what? The moment of publication? The moment the platform’s own monitoring detects the content? Each time a user reports it?

The range of possibilities is broad, suggesting the electoral authority may have acted on a premise that is not aligned with reality. 

Platforms are tasked with identifying disinformation, as they already do with images of child sexual abuse or copyright violations. Yet these are fundamentally different challenges: known abusive images and copyrighted works can be matched against databases of previously identified material, while electoral disinformation can only be recognized by analyzing context.

During the January 8, 2023 riots, extremists used the codename “Selma’s Party” on social media to plan acts that undermined democracy. “Selma” is a play on the word selva, Portuguese for jungle. The word is used by the Brazilian military as a war cry.
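To see why context matters, consider a minimal sketch in Python, with hypothetical hash values and example posts, contrasting the two moderation problems: known abusive or pirated files can be flagged by a simple database lookup, while a coded phrase like “Selma’s Party” is indistinguishable from an ordinary birthday invitation without context.

```python
import hashlib

# Known-content detection (child sexual abuse imagery, copyrighted
# files): hash the upload and look it up in a database of material
# that has already been identified. No interpretation is required.
KNOWN_ILLEGAL_HASHES = {
    # Hypothetical placeholder entry; real systems consult shared
    # industry hash lists of previously identified material.
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def flag_known_content(upload: bytes) -> bool:
    """Return True if the upload matches previously identified material."""
    return hashlib.sha256(upload).hexdigest() in KNOWN_ILLEGAL_HASHES

# Electoral disinformation: there is no database to consult. The same
# literal phrase can be harmless or a coded call to action, and only
# context (who posts it, when, and to whom) distinguishes the two.
innocuous = "Come to Selma's birthday party this Saturday!"
coded = "Selma's Party begins in the capital on January 8."

for post in (innocuous, coded):
    # A lookup-based filter sees no difference between these posts.
    print(flag_known_content(post.encode("utf-8")))  # False for both
```

A keyword filter fares no better: matching the literal words would sweep up every genuine birthday invitation while missing the next code name that extremists invent.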

The reliance on flawed algorithms, prone to bias against marginalized groups, further complicates this issue. 

Given the complexity of legal interpretations, expecting error-free moderation, whether from automated systems or human oversight, is unrealistic.

Power abhors a vacuum.

In the absence of action by Congress, the Superior Electoral Court has stepped in to explore new legislative approaches and liability models. In interviews, justices on the court referred to the resolutions as a form of “test drive” during the electoral period, one whose reach could later extend to the broader internet.

This scenario might lead someone unfamiliar with Brazil’s power dynamics to wonder whether the Superior Electoral Court is experimenting with a new role, as an innovation lab or a proxy for the legislature, testing policies before comprehensive implementation.

Over 50 civil society and academic organizations in Brazil have come together to raise concerns about the new internet electoral advertising rules, arguing that they ignore public feedback. They also note that an amendment to electoral legislation should have gone through legislative approval.

Some would say this is outrageous. But outrage often overlooks the complexities of political issues, opting instead for exaggerated misrepresentation and predictions of disaster.

That is not reasoned debate but a verbal contest, a political performance in which points are scored. We must move beyond it to a scenario where lawmakers and regulators worldwide swiftly address both the risks and the potential of AI. These measures must be evidence-based and processed through democratic avenues that maintain checks and balances, ensuring the fairness and transparency of AI governance.

The urgency to understand and regulate AI is palpable worldwide. 

Brazilian President Luiz Inácio Lula da Silva has tasked the Science and Technology Ministry with drafting a comprehensive plan on AI usage to be presented to the United Nations General Assembly in September. 

The U.S. has rolled out voluntary guidelines and policies to strengthen its AI industry and compete with China’s AI ambitions. 

A significant legal dispute arose when The New York Times sued OpenAI and Microsoft, alleging copyright infringement in the training of AI models, highlighting the need to reassess how copyright law applies to AI.

Additionally, AI discussions took center stage at major international forums, including the World Economic Forum, with nations from Saudi Arabia to France ramping up their national AI strategies. In business, substantial investment in AI initiatives boosted Nvidia’s market value to USD 2 trillion, driving American stock indices to new heights.

The European Parliament is also advancing the AI Act, a set of regulations aimed at governing AI use, which now awaits adoption by the European Council and comes with detailed implementation timelines.

The approach of election seasons has heightened worries over AI’s role in spreading disinformation and creating deepfakes. Globally, politicians are grappling with how to regulate AI effectively, especially in the electoral context.

Since the launch of ChatGPT in November 2022, the discourse on AI safety has broadened from existential risks to include a more comprehensive array of concerns, such as content moderation. This expansion is beneficial but raises concerns that political considerations might overshadow essential, non-political aspects of AI safety, as seen in debates around initiatives like Google’s Gemini.

Aside from the immediate concerns around electoral risks, there’s a deeper, more structural debate regarding the opportunities AI presents and the dangers of overlooking them. Most of the discourse around AI and regulation focuses on risks, which is undeniably crucial. 

However, there’s also room to explore regulation as a means to seize opportunities. A comprehensive approach to AI regulation must consider this, alongside the costs of neglect: delays in AI adoption caused by a lack of investment or education can be very expensive.

This conversation transcends borders, shaping dialogues among Brasília, Geneva, and Washington. Last week, the Wilson Center Brazil Institute hosted a private briefing for Brazilian legislators, diving into the complexities of AI regulation and governance.

The session featured experts like Kellee Wicker, Prem M. Trivedi, Gary Corn, and Gordon LaForge, alongside a prominent group of representatives from Brazil’s Congress, executive branch, and private sector. 

The delegation, organized by the Movimento Brasil Competitivo, included members of the Brazilian Senate as well as representatives from the private sector and civil society, underscoring the broad interest in AI governance and regulatory frameworks.

The discussion centered on the diverse approaches governments are adopting for AI governance. Although experts agree that fully autonomous AI is more speculative than immediate, the undeniably rapid progress of the technology has shifted the debate from whether society will embrace AI to how it will be implemented.

Regulators and policymakers acknowledge that containing AI altogether is unrealistic. Efforts at the national and international levels are instead geared towards establishing governance frameworks that can preemptively address AI’s evolution. Yet regulation notably lags behind AI’s rapid advancement, which highlights the critical role of scholars and researchers in bridging this gap.

Their contributions are vital in formulating policies that facilitate safe, creative, and stable incorporation of AI into society.

Gary Corn, who brought an extensive military background to the discussion, particularly in the context of national security, stressed that AI’s complex nature demands nuanced regulation. He emphasized the importance of adhering to value-based principles in AI governance, cautioning against the risks of hampering innovation with excessive regulation. 

Moreover, he highlighted the significance of maintaining data integrity within AI models and the pivotal role of AI in enhancing cybersecurity, pointing to the intricate relationship between technological progress and societal well-being.

The discourse around AI regulation and its societal impact is multifaceted. It isn’t just about tech companies; it’s about how AI will reshape agriculture, healthcare, and education. Recognizing this broad impact is crucial to engaging the whole of society in the debate, balancing the need to mitigate risks against the need to seize opportunities.

Reflecting on the discussions held in Washington, D.C. with the Brazilian delegation, after visits to the Wilson Center, the White House, Congress, private companies, and startups, it’s evident that a nuanced approach is essential to address the complexities of AI.

Such an approach includes comprehensive legislation, public AI resources for widespread access, continuous technological advancement, and an international governance framework that emphasizes the common good.

The common ground among the stakeholders visited is a commitment to a policy framework built on investment in infrastructure and innovation, on developing human capital for an AI-empowered workforce, and on making AI universally accessible to address societal and economic challenges, all supported by regulation that fosters a robust AI ecosystem.

A key takeaway is that successfully navigating the intertwined future of AI and society demands innovative policymaking and a steadfast commitment to upholding democratic values.