Podcast

Explaining Brazil #288: Elon Musk has beef in Brazil

Musk has stated that Brazil is on the verge of becoming an Orwellian dystopia in which one man — Supreme Court Justice Alexandre de Moraes — decides what can be said online

Elon Musk, the CEO of Tesla, Starlink, and SpaceX and the owner of X, the social media platform we used to call Twitter, is no stranger to controversy or conspiracy theories.

There is even a Wikipedia page dedicated exclusively to his wacky worldviews. 

Earlier this year, the American press reported that Mr. Musk had started to echo several of Donald Trump’s claims about the American voting system, putting forth distorted and false notions that American elections were vulnerable to fraud and illegal voting by noncitizens.

He has now pointed his cannons at Brazil. 

Just like he did with Mr. Trump, Mr. Musk has echoed the theories of another far-right figurehead: Brazil’s former President Jair Bolsonaro. Mr. Musk has stated that Brazil is on the verge of becoming an Orwellian dystopia in which one man — Supreme Court Justice Alexandre de Moraes — decides what can be said online.

Justice Moraes took the bait and put Mr. Musk under investigation for obstruction of justice, inciting crime, and the “willful criminal instrumentalization” of X.

This week, we will discuss the issue of content moderation in Brazil and the beef between Elon Musk and Alexandre de Moraes.

Listen and subscribe to our podcast from your mobile device:

Spotify, Apple Podcasts, Google Podcasts, Deezer

This episode used music from Uppbeat and Envato. License codes: Sci-Fi Suspense Intro by GentleJammers: LR57Q6X9ZP, Aspire by Pryces: B6TUQLVYOWVKY02S, Dark Sci-Fi Suspenseful Bass by AudioZen: HQKRFPXAYN, and Future Hip-Hop by raspberrymusic: VSUW3JTMD7.

In this episode:

  • Daniel Castro is vice president of the Information Technology and Innovation Foundation and director of the foundation’s Center for Data Innovation.


Do you have a suggestion for our next Explaining Brazil podcast? Drop us a line at [email protected]

Don’t forget to follow us on X and Facebook.

Transcript of this episode (with Cockatoo)

Elon Musk, the CEO of Tesla, Starlink and SpaceX, and the owner of X, the social media platform we used to call Twitter, is no stranger to controversy or conspiracy theories. There is even a Wikipedia page dedicated exclusively to his wacky worldviews. Earlier this year, the American press reported that Musk had started to echo several of Donald Trump’s claims about the American voting system, putting forth distorted and false notions that American elections were vulnerable to fraud and illegal voting by non-US citizens. He has now pointed his cannons at Brazil. Just like he did with Trump, Musk has parroted the theories of another far-right figurehead, Brazil’s

former President Jair Bolsonaro. Musk has stated that Brazil is on the verge of becoming an Orwellian dystopia in which one man, Supreme Court Justice Alexandre de Moraes, decides what can be said online. Justice Moraes took the bait and put Musk under investigation for obstruction of justice, inciting criminal activity, and the willful criminal instrumentalization of X. This week we will talk about the issue of content moderation in Brazil and about the beef between Elon Musk and Alexandre de Moraes. My name is Gustavo Ribeiro, I’m the editor-in-chief of The Brazilian Report. This is Explaining Brazil.

If you like Explaining Brazil, you should subscribe to our website,

which is the journalistic engine behind this podcast. And you can also go the extra mile and make a donation to our newsroom by buying a coffee for one of our journalists. And God knows, reporters live off of coffee. Our biggest enthusiast is definitely a reader called Carson Allen, who has made multiple donations of dozens of cups of coffee at a time, and recently donated 25 of them to us. Carson, thank you so much.

And for everyone else listening to this podcast: be more like Carson. You can also subscribe to our Buy Me a Coffee fan page, pledging a monthly subscription to our newsroom in exchange for exclusive content that you will not find anywhere else. Our Buy Me a Coffee subscribers are, besides of course Carson Allen, Jaceada de Oliveira, Gabriel Luca, Andrey Novoseltsev, Pen Ludvig, Leslie Sio, Mark Hillary, Luis Hentz, Erwan Menezes, Aaron Berger, Karas Vrezvec, Alasdair Townsend, Miller Renacido, Peter Suffring, Andersona Silva and someone who chose to remain anonymous. If you are like them and believe in the importance of independent journalism and want to hear

your name on our podcast, go to buymeacoffee.com/brazilianreport and subscribe to one of the membership tiers. Click on buymeacoffee.com/brazilianreport to find out more.

Since taking control of Twitter, Elon Musk has not only changed the social network’s name,

he has also dismantled the platform’s system for flagging false election content, arguing it amounted to election interference. Now Musk has taken the route many Brazilian ultraconservatives have taken in recent years, and for which they have been punished. He called Justice Alexandre de Moraes a dictator, claiming Moraes interfered with the 2022 election to help President Luiz Inácio Lula da Silva beat former President Jair Bolsonaro, and called for Moraes to be deposed. Musk has threatened to disobey Brazilian court decisions and lift all restrictions on accounts banned in Brazil

for spreading electoral disinformation in public discourse. He has yet to fulfill that promise, though. For more detail on the investigation Elon Musk faces in Brazil, I suggest you go to The Brazilian Report’s website, because in this podcast we want to talk more about online speech regulation and how Elon Musk’s charges against the Brazilian Supreme Court may alter the debates around the issue. And we welcome Daniel Castro, Vice President of the Information Technology and Innovation Foundation and Director of the foundation’s Center for Data Innovation. Daniel, thanks for joining us. Freedom of speech, whether online or offline,

is a bit of a different discussion in Brazil compared to the US, right? It’s not an absolute thing here. For instance, racist or homophobic comments constitute crimes in Brazil, they’re not just frowned upon. To what extent is this Musk-Moraes dispute an attempt from the X owner to make US concepts of freedom of speech apply to a foreign country?

Oh, it’s a great question. You know, Elon Musk has, of course, notoriously said he’s a free speech absolutist, but the reality is he’s taken different approaches to free speech in different countries. So, for example, in India, there have been government requests for takedowns of politically sensitive content and other objectionable content, and Elon Musk has taken it down. Twitter, or as it’s now known, X, has its own set of principles and rules for what is allowed on the platform. And there are exceptions to what’s allowed on that platform that cover even speech that is legal in the United States.

So, there’s a lot of nuance here. I think what’s happening in Brazil is that Elon Musk has said he disagrees with these specific takedown requests. And he’s basically said he will not abide by them. Now, you know, the problem is no company can unilaterally ignore court orders and get away with it. No individual can unilaterally ignore what a democratically elected government requires a lawful company to do. And so that’s where I think there’s significant tension here. I think it’s less about applying America’s vision

of free speech and more about applying Elon Musk’s vision of free speech. In this current dispute, we’ve got one group, represented largely by Elon Musk and Brazil’s right wing, crying censorship. The other, backed by Supreme Court Justice Alexandre de Moraes and Brazil’s left, calling it content moderation. When does one become the other? And where do court orders that suspend social media accounts for fake news and anti-democratic speech lie in this discussion?

More toward moderation or more toward censorship? That’s a great question. So this is a challenge that I think many countries wrestle with: where do you draw the line on free speech? And of course, doing it online and on social media is sometimes different than doing it in public spaces, in the offline world. And the reason it’s become different is because sometimes governments are holding these companies liable for basically making the wrong choice in terms of whether they leave third-party content on their platforms or not. And so, you know, there are multiple dimensions to these arguments about censorship and the impact of various

policies in this space. You know, one scenario is, you must remove this account or you must remove this post. If those are lawful orders, if they’re orders where there’s a chance for judicial review and that adhere to national law, domestic law, those are fairly straightforward. I think what happens is that you have these other types of requests sometimes, where they are either incredibly broad, so they’re not about specific accounts, but they’re about a large amount of information

that they want to restrict. Maybe they want to block an entire hashtag. Maybe they want to block thousands of accounts without clear evidence of wrongdoing. Or sometimes they’re asking for these platforms to turn over information about the users on their platform. Again, sometimes that is completely valid,

but sometimes it’s not, maybe it doesn’t adhere to specific laws. What we’ve seen is that many of these platforms are willing to challenge these orders, especially when they receive an order that they do not believe is lawful. Now, that is their right to challenge it. Typically, under these laws, they can go to a court,

they can appeal. But after that appeal, if they lose, then they have to take these things down. And where I think we start to get into these claims of censorship more recently is not just because they’ve been ordered to take something down, but because they’ve been threatened directly for maybe not moving fast enough, for appealing some of these takedown requests. And then I think the bigger fear that a lot of these platforms have is that this creates a chilling effect

on all of these platforms. And so in the future, especially if they see aggressive legal action taken against an individual within the country who works for one of these platforms, and sometimes individuals have even been jailed for these types of resistance, that will create the chilling effect

where these platforms will comply with any request from the government, even unlawful ones. Typically, in the offline world, what happens is that if somebody’s speech is being challenged and they have a right to challenge that, that individual can go to the court and challenge it. But what happens in the online space is it’s no longer the individual that gets to challenge it.

It’s really these platforms. And so people are depending on the platforms to protect their free speech interests, but the government is also depending on the platforms to act on lawful orders. And that’s when these platforms get caught in the middle. And many of them try to do the right thing. But as we see in some of these cases, that’s not always the case. Or sometimes the leaders of these platforms may have different views,

and they may inject their views into the decisions that their companies take in these controversial cases.

Now, regarding content moderation online and stepping away from the Musk-Moraes controversy, can there ever be a Goldilocks solution? Something that freedom of speech advocates can accept and which can still curtail potentially harmful content online, something that everyone can be happy with: not too hot, not too cold. I think it depends on what you mean by everyone.

You know, as you pointed out, there’s no global consensus on what free speech should be. Free speech in the United States is different than in Brazil. It’s different in France. It’s different in India. It’s different in many countries. And that is okay. That’s why we have different countries. And democratically elected governments should have that right to set those different lines and draw those different lines. So I don’t think the goal is to have one set of free speech rules for the entire internet. What we have to be concerned about, though, is when one country tries to impose its views on the rest of the world. And so, you know, where that’s most

problematic is where we’ve seen content being taken down, not just within a country, but globally. Or we’ve seen requests or orders that a platform block a user, not just within their country, but globally. That’s where I think we have to be very careful that we don’t allow a kind of race to the bottom in terms of free speech, or in terms of lawful requests for user data, including for any type of legal investigation, including when there are arguments that somebody has gone online and said something unlawful. And this is where, for example, the OECD came together in 2022, and there was an intergovernmental agreement on common approaches for safeguarding

user data with respect to law enforcement or national security requests. Basically, some of these principles outline things like: you want to make sure these are lawful requests. There need to be certain safeguards in place to make sure that you’re protecting user data. And there was a whole set of principles

that many countries agreed to. And that’s what you want to see. You don’t want to have these companies, which are global in nature, having to go to every different country and figure out: if we receive an order, how do we determine if it’s lawful? There’s so much that goes into that. You want to have a good global standard of what that baseline expectation is.

It should come with judicial oversight. It should specify specific accounts. It shouldn’t be too broad. That’s, I think, where we should work on agreement. That way, these platforms aren’t thrust into the position of trying to be both judge and jury when they receive these requests.

How much do you think political point scoring harms the regulation discussion in Brazil and elsewhere?

Oh, I think the politics of content moderation are the defining issue for how most legislators and policymakers look at this issue, right? There’s always content that someone will find objectionable, and there are claims made on both sides, right? Claims that too much content is being taken down, and claims that objectionable content is being left up. And that has basically put most of these platforms into the very unenviable position of having all sides of the political spectrum angry at them and telling them they’re either doing too much or they’re doing too little. And there is no Goldilocks approach that they can take where everyone thinks they’re doing the right thing. And so, you know, these tech companies have become political punching bags and, you know,

very popular ones because of that fact. And the problem with that is that it doesn’t allow policymakers to actually advance reasonable policies that say: okay, these are the types of lawful requests you’re going to get. These are the types that, if you receive a request, you can ignore. These are the types that we would like you to be more proactive on addressing yourself

and kind of working hand in hand on. There are many areas where there is going to be speech that is what they call awful but lawful. That’s speech that probably many of these platforms want to take down. We don’t want to get into a situation where they’re only removing illegal content. That’s probably not the type of platform that most users want. But when you get into this kind of political hot potato of who’s going to make the decision of what content comes down, that’s pretty problematic when you put all of that on the platform. But you certainly don’t want to be in the position either of the government

saying certain content is illegal, but the platform saying, well, we’re going to leave it up anyway. That’s also not very tenable. And so, hopefully, coming out of these disputes that we’re seeing, we’ll see both sides look for a little more moderation in their approach. So in terms of government, what we would hopefully want to see is government being very specific about the types of content it’s going to require platforms to take down, and maybe not being too aggressive in going after individuals and companies if they’re just trying to obey the law. But at the same time, these companies need to be following lawful requests. And that’s the type of balance that hopefully will lead to a little bit of cooling

down of the temperature in the space.

A 2022 report by the NGO Freedom House called Brazil, India, and Nigeria swing states on which the future of internet freedoms hinges, due to their potential regional or global influence on the future of internet governance. What do you make of that assessment? And from what we’ve seen over the past week, do you think Elon Musk shares that view with regard to Brazil?

I don’t think there can be any global multinational tech company that doesn’t have a presence in Brazil. So I think Elon Musk’s threat to abandon the Brazilian market is unlikely to occur; it would be very surprising, at least, if he follows through on that threat. We’ve seen India, Brazil, and Nigeria be probably at the forefront of some of the various internet shutdowns and takedowns of content in the past. We’ve seen judicial orders that go pretty far in some of these countries, and I think that’s probably what Freedom House is referring to in previous reports. At the same time, these are, first of all, especially in terms of

India and Brazil, some of the largest bases of internet users. And they also reflect different norms and expectations of what is going to be allowed online. And this is something that I think any platform working in this space is going to have to deal with: they’re going to have to find solutions that work in different countries. And that won’t always be the same solution, but it should be something that takes into account both the fact that they are global platforms and the fact that they have to respect domestic laws.

How far along is Brazil on the path towards social media regulation? And can the world learn from Brazil’s social media regulation push?

It’s a great question. So, you know, I’ve just worked on a report with the Brazil Institute at the Wilson Center where we were looking specifically at innovation and liability for online intermediaries in Brazil and some of these regulations around social media. And, you know, the United States was one of the first countries to really put forward an early law in this space, Section 230 of the Communications Decency Act, which said that intermediaries online are not responsible, are not liable, for the speech of their users. And that law has had a tremendously positive effect in terms of allowing all sorts of social media sites and other types of services to flourish online.

Because an email provider, for example, is not liable if somebody sends a threatening message through that service. And a dating app is not necessarily responsible for what its users do on these services. That’s been really useful, but obviously we’ve seen some opposition to that in recent years, concerns that social media companies, for example, are not doing enough to police bad behavior. What’s interesting is that Brazil has also looked at extending Article 19 of the Civil Rights Framework for the Internet to do similar things, to put in place a liability framework that would both protect third parties and put in place specific guidelines on how the judiciary should notify platforms and remove content.

That’s the type of development that I think a lot of countries are going through right now. They’re asking these questions of: okay, we want to see these online companies be responsive, but we don’t want to go too far. We don’t want to create a regime that could be abused. We don’t want to create laws that will lead to censorship, or that will go too far in the other direction and lead to too much disinformation

and misinformation online, which can also be incredibly problematic for society. And so, you know, I think there’s a lot of experimentation in this space. There’s going to be a lot of trial and error. But I think one of the most important principles that any country should take when they’re rewriting their laws or writing new laws about social media is trying to impose liability and responsibility as close to the actual speaker as possible. And so many times that will not be about imposing more liability on platforms. But there will potentially be a need to make sure that platforms are responsible for their own conduct, that they are responsible for their own speech, and that they also have certain obligations, perhaps around transparency,

so that we can see when they are taking things down, so that we can see if there is censorship occurring, whether it’s occurring from the platform or whether it’s occurring from governments, domestic or even foreign. I think that’s a kind of path forward that provides more information so that policymakers can make better decisions going forward, but without going so far that you end up with a bunch of elected officials, politicians, or bureaucrats

deciding what content stays up and what content goes down, what content is promoted and what content is not promoted. That is a world of censorship that, I think, is very far away from pretty much any democratic vision of what free speech should look like.

Daniel, fantastic stuff. Thank you very much for joining us. Daniel Castro is Vice President of the Information Technology and Innovation Foundation and Director of the foundation’s Center for Data Innovation. If you like Explaining Brazil, please give us a five-star rating wherever you get your podcasts, or better yet, subscribe to The Brazilian Report, the journalistic engine behind this podcast. We have a subscription-based business model, and your memberships fuel our journalism. Our work has been recognized for its quality, and we have won several international awards.

More recently, The Brazilian Report was nominated for Best News Website in the Americas by the World Association of News Publishers (WAN-IFRA), so fingers crossed for us. And to continue this work, we need your support. Go to brazilian.report/subscribe. I’m Gustavo Ribeiro. Thanks for listening to

Explaining Brazil. We’ll be back next week.

— Transcribed with Cockatoo