Moderating Content on Online Platforms. What Would You Do?/transcript

From IGF-USA Wiki

>> MODERATOR: Thank you for joining us. My name is Ashkhen Kazaryan, and I'm joined by Carl Szabo, general counsel with NetChoice and a law professor from time to time, and Kaitlin Sullivan, who is a manager for content policy with Facebook.

Today we're going to talk about platforms and how they moderate information and content created by third parties. A lot of problems and issues arise out of something some people would think is easy: moderating should be simple, but it's not. A lot of the questions are not just ethical questions; they are also about freedom of speech and the dissemination of information. Our panel will first discuss the current landscape of regulation and the laws that currently exist on the books. Kaitlin will tell us how Facebook moderates content right now and what process led them to these policies, and then we're going to turn it over to you: we're going to work through some scenarios that we wrote for you. Some of them are based on real life events, some are not, but they're definitely all hypothetical. We're going to give you a printout with the scenarios, they will also be up on the screen, and we're going to read each scenario, let you discuss amongst yourselves, and then ask you to choose. There will be four different choices of what to do with the content, and I'll just ask you to raise your hand. I know that's not very tech forward, but it's definitely very foolproof. And please feel free to ask us questions after we're done with our little panel segment, and then we can move on to the scenarios.

So Carl, let's start with you. Do you want to tell everyone what exactly is going on right now in the content regulation world?

>> CARL SZABO: Thanks. I actually put together a quick deck, so I'm going to put on my professor hat real quickly to kind of run through this. So, content moderation. I was talking to a colleague of mine last night from back where I live up in Maryland and telling him I was going to do this panel on content moderation. "Oh, great, you're going to talk about the First Amendment and stuff like that." Well, that's the first thing I want to address as part of this conversation: this isn't about the First Amendment, right? So let's look at what the First Amendment actually says. "Congress shall make no law" is the way it starts, and the reason I emphasize the word "Congress" is that First Amendment law, First Amendment protection, is about government regulation of free speech. What we're here to talk about today is content moderation by private entities on their own private platforms. A good example would be the IGF Wiki. We've all been to visit it. Maybe I'd love to put up whatever I want, but IGF at the end of the day has the power and the right, and arguably the responsibility, to moderate and keep an eye on some of the content that's up there.

So then you ask, OK, fine, it's not about the First Amendment, it's about free speech rights. I say something on a platform, let's just pick one, Twitter. I say something on Twitter; it's my freedom of speech to say whatever I want and do whatever I want on that platform all the time. But is it? Is it freedom of speech? What if, say, tomorrow Twitter stopped allowing all content and said, you know what, we're going to be Twitter for dogs, we're all about dog talk, dog pictures, dogs all the time. And, you know, my friend Kim, who is a cat person, bless her heart, you know, they exist, so Kim goes on to Twitter and posts a picture of her cat and how awesome her cat is, and Twitter for dogs says no, this is dog-only content, cats go elsewhere, and kicks it out, takes down the post, removes it.

Kim says, well, that's my freedom of speech, Twitter for dogs, I should be able to talk about cats. Well, maybe not; maybe it's the platform's choice to decide, right? And that gets into protecting the platform and protecting other users. So is takedown of content appropriate, when is it appropriate, when should it occur? I mean, if you go with some of the easy examples, right, death threats, threats of violence, most of us would say easy takedowns. And we're going to be talking about this a little bit later and going through some scenarios, but death threats, easy takedown. But what if the threat of violence is an uprising against an oppressive regime? A way to coordinate a civil movement that may become violent? Is it OK then? That's a question.

Let's look at some other examples. Bullying: at what point is it bullying, and what if I take down content that I consider to be bullying but someone else doesn't? Is that OK? We have a lot of countries out there with laws on the books that regulate what can or cannot be said online. India, for example, is very strict about statements that disparage the government or disparage the country. Likewise, Germany famously has laws that prohibit declaring the Holocaust didn't happen or advancing Nazism. Poland considered legislation that would make it a crime to suggest the government had connections to or assisted with the Holocaust. And we have lots of laws out there. Most platforms and policies out there say we will respect the laws of that country, but what happens when those laws conflict with my freedom of speech? It becomes really difficult to match those two together. And, of course, we've had all the talk about fake news and deep fakes and all this other content out there. When is it an opinion, when does it become fake news? Political statements: when is it a political statement, when is it just somebody expressing an opinion? "I don't like unions" would be a statement. Or is it advancing a political position? These are the types of content moderation challenges that are faced by platforms today.

Let's take a look at a couple of examples. OK, guess who this is. "We want people to use X," and X is the subject, I had to remove the name, "to express themselves and share content that is important to them," and so on, "you may not use our products to do or share anything that violates these terms or community standards and other terms and policies that apply to your use of X, that is unlawful, misleading, discriminatory or fraudulent."

So this is from Facebook's terms of service. And they actually have a hyperlink for community standards which goes into great detail what they do or do not cover and Kaitlin's going to talk about that in a minute.

Let's take a look at another. Guess who? "Debate but don't attack. In a community full of opinions and preferences, people don't always agree. X," once again the company, "encourages active discussion and welcomes heated debate on its services, but personal attacks and direct violations of these terms of service are grounds for immediate and permanent suspension of access to all or part of the service."

Then they've got a little statement at the bottom basically saying, in essence, we reserve the right to remove content that we don't like or that is copyrighted.

Now, these are actually the terms of service in use for the "New York Times." You may think, the "New York Times," why would they care about content moderation, what do they have to do with content moderation? One of the things that has become common with news websites is the comments section. I try never to read the comments on anything I've written, as a personal rule, because about half of them hate me and half of them like me, and regardless it's very damaging. But comments sections have become very popular with news websites, and the "New York Times" in particular is a very interesting one because they have the "New York Times" Picks. This is a curated subset where the "New York Times" has gone through with human beings who have actually looked at the comments and chosen the ones they like best to promote to the top as part of the "New York Times" Picks. So the "New York Times" is encouraging content creation, user-generated content, but at the same time is engaging in content moderation, because they don't want to contribute to fake news or to violent or disturbing debates.

Let's take a look at the second-to-last one here: when you share content with us, you will not disclose the affiliation, or share anything that contains harmful computer code, or is biased, threatening or harassing. Kind of similar to what we've seen previously, similar to the "New York Times," similar to Facebook in tone. This actually is from Best Buy. This is from bestbuy.com, and you may ask why Best Buy cares about content moderation. They have user reviews, user reviews of products and software. And just like the "New York Times," just like Facebook, just like Twitter, now bestbuy.com, a brick-and-mortar retail store with an online presence, is facing content moderation questions. They have to figure out: are these opinions about the product, or is somebody ranting because, let's say, the delivery window was missed? And then, last but not least: we reserve the right in our sole discretion to discontinue, change or improve anything and suspend or deny access to the website. These are terms that everybody should be familiar with, because they are from NetChoice.org; if you haven't visited it, you should.

OK, Section 230 of the Communications Decency Act. Now, I'm doing a real blitz through a lot of stuff; I usually break this out into multiple classes. There is a class on Internet privacy law at George Mason University, I teach it, great program. The Communications Decency Act will come up in this conversation and in other conversations throughout the day; there's a session in particular where I suspect some of this will come up. Section 230 of the Communications Decency Act, at a high level, says that a platform is not responsible for the content posted by its users. Let's use Best Buy, for example: if I say this laptop is terrible, don't buy it, it's a piece of trash, the manufacturer can sue Carl Szabo if it's a libelous statement, but it cannot go after bestbuy.com. Then there's what I call the Good Samaritan carveout, which applies to content moderation. It says no action can be taken against a service provider that removes or restricts access to content the provider or user considers to be obscene, lewd, lascivious, filthy, or otherwise objectionable, whether or not such material is constitutionally protected. And that's what we're going to be talking a lot about here today: content moderation, and figuring out where platforms, and ultimately you, the audience, consider the line to fall on what should or should not be left up or taken down.

And so here's a screenshot, and it's incredibly long, I didn't have time to count up the number of words, and this is just a snapshot of the community standards that Facebook has developed. This was that hyperlink I mentioned earlier on, and Kaitlin's going to talk about the community standards they've developed and how they help address a lot of the content moderation challenges that they face.

>> KAITLIN SULLIVAN: Thank you, and thank you, Carl, that was a really great overview of all the various issues, and thank you for setting this up. I am Kaitlin, I work on our content policy team at Facebook, and we are the team that gets to write the rules for what's allowed and not allowed on Facebook. There is a set of global standards that you can see on the screen, and we are a global team: we have offices across three continents right now, and our policies are global in nature and apply across the platform. We are a team that tries to hire from really diverse backgrounds because these issues are really complex. My background is in women's safety issues; I worked as an advocate and counselor for years before joining Facebook. We have former prosecutors, former teachers, former business operations specialists, people who come from civil society, all kinds of backgrounds, because these issues are really complex, and because of the intermediary liability protection we really get to make the decisions about what's appropriate on our platform given the mission of our company and our community. So, as Carl said, we have our community standards, which are super detailed, and we'll go into that a bit. But they really start from thinking about what's right for the people on our platform; we start with our values. And the first value in our community standards is, first and foremost, safety. We believe we have an obligation to make sure people on our platform are safe, and that Facebook is not participating in making things less safe. Frankly, the mission of our platform is to help people build community and communicate more meaningfully, and that's not something someone can do unless they feel a baseline level of safety.

The second value that we really espouse with these standards is the idea that expression is good. That's why we exist, that's why many platforms on the Internet exist: to help people connect with each other and share things that are meaningful to themselves and to others. What that means is that we need a good reason to remove speech. We think that expression and connection are a default good, and there are definitely limits to that, but the burden is on us to explain what harm we are preventing when we remove speech; the burden is not on the person in our community to prove that their speech is worthwhile. That's something that works well for our platform; it applies to everything we do, but it might look different for other platforms. The third value is fairness. As a global platform, as a very diverse platform, we want and need our policies to apply fully and equitably across the world, for such a diverse community, and to have a consistent impact, to the extent we can control it, on different people. So if there's a policy idea that works really well in North America but doesn't scale globally, that's not something that we can implement fairly.

>> I have a question.

>> KAITLIN SULLIVAN: Yes.

>> So what is the daily routine for a team that is involved in censoring content or deciding whether it stays up or comes down?

>> KAITLIN SULLIVAN: We're constantly looking at our policies and evaluating whether they're in the right place. We're getting feedback: my team writes the policies, and then there's an organization that we call community operations that works on enforcing the policies, especially in response to what people report to us. So we get feedback from them about what they're seeing in their areas all over the world. We look at trends in the real world, we look at changes to our product that change the risk assessment for different things, and we do a lot of talking, a lot of meetings with people internally. We are part of our public policy organization; we have colleagues who sit all over the world and can give us a global perspective on whatever the issue is. We talk to experts internally and externally, so if the issue is, say, self-harm, we'll talk to our internal team about that content and we'll talk to third parties all over the world, suicide hotlines, academics who specialize in this, and we'll get their input on what it is that we're balancing. We're balancing different competing interests; there's a tension we're trying to resolve in all of our policy issues. And then we'll do lots of iteration and testing, to the extent we can, trying to figure out what the impact of a policy might be before it goes live. And

>> MODERATOR: Sorry, in those groups are you testing the policies on some users and not others?

>> KAITLIN SULLIVAN: No, we don't do that with our policies. We will use a dark mode and see what the impact might be without the actions going live. That goes back to fairness and equity: you don't want to arbitrarily apply a policy to one group of people and not everyone else. That's interesting for products, but for policy it doesn't really make sense. We have a continual feedback process, and that's part of why we really worked on increasing our transparency. So the standards that Carl had up were released in April, and they really are our moderation guidelines. If you go to Facebook.com/communitystandards, there's a nice intro letter and then a dozen-plus sections that you can open and expand, and you can see how detailed things get. That is a lot for reasons of scale and a lot, again, for reasons of equity and fairness. So it may seem obvious to say we don't allow nudity on our platform, but what do you mean by nudity? How much of a person's body do you have to see? What if it's in a medical context? What if it's about breastfeeding? What if it's a child, if it's child nudity or a baby in a bathtub? We have to get that level of granularity to the teams that review content. Everything on Facebook is reportable, and we do most of our content moderation through community reports. Somebody reports something they think violates our standards; that's triaged, often by a machine, we do some machine learning. Machines are really good at identifying the super obvious things, well-known subsets of content; child exploitation imagery is a great example. There is, thank goodness, a relatively small world of that content, it's very obvious when you see it, and machines can learn how to see it, match it, and prevent it from ever being uploaded, so we don't have to have people look at it over and over again, which is great for resiliency as well. Machines can also help triage some issues: this is likely a suicidal post, you want to put it in front of someone first. But for the most part content is really complex, our policies are fairly complex, and we want humans who can take all of that context into account. So something will go to a person. But they aren't making a judgment call as to whether they like the post or not, whether they agree with the politics, whether they think the picture's pretty; they are applying our very detailed policies and asking, does it meet this threshold, does it meet this definition? If it does, they remove it; if it doesn't, they leave it up. It takes one report on any piece of content to get it in front of someone at Facebook, and a thousand reports doesn't change our mind. We will actually stop reviewing the same thing over and over and over again if it hasn't changed.
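What Kaitlin describes above is essentially a triage pipeline: known-bad media is matched and blocked automatically, likely-urgent items are prioritized, everything else goes to a human who applies the written policy, and duplicate reports on unchanged content are not re-reviewed. A minimal Python sketch of that flow follows; all names, thresholds, and helper functions are hypothetical illustrations, not Facebook's actual systems.

from dataclasses import dataclass

@dataclass
class Report:
    content_id: str
    text: str

KNOWN_BANNED_HASHES = set()   # hashes of known-violating media (e.g. child exploitation imagery)
PRIOR_DECISIONS = {}          # content_id -> earlier decision on the same, unchanged content

def content_hash(report):
    # Placeholder for a perceptual/media hash used to match known-bad content.
    return hash(report.text)

def likely_urgent(report):
    # Placeholder ML step: e.g. a post that suggests self-harm should jump the queue.
    return "suicide" in report.text.lower()

def triage(report):
    # 1. Known-bad content is blocked automatically; no one has to look at it again.
    if content_hash(report) in KNOWN_BANNED_HASHES:
        return "auto-removed"
    # 2. A thousand reports don't change a decision already made on unchanged content.
    if report.content_id in PRIOR_DECISIONS:
        return PRIOR_DECISIONS[report.content_id]
    # 3. Everything else goes to a human; urgent items are prioritized.
    return "priority review queue" if likely_urgent(report) else "standard review queue"

def human_review(report, meets_policy_threshold):
    # Reviewers apply the written policy definitions, not personal taste:
    # if the content meets the violation threshold it comes down, otherwise it stays up.
    decision = "removed" if meets_policy_threshold else "left up"
    PRIOR_DECISIONS[report.content_id] = decision
    return decision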

>> MODERATOR: How many people do you have working on this?

>> KAITLIN SULLIVAN: So that's a great question. We really view it as a whole team that works on safety and security issues: we have our reviewers and the engineers who support them, and we're currently 15,000 people, with a commitment to get to 20,000 by the end of the year, in total global coverage for this. I think our review team is close to around 7,000 people right now.

>> MODERATOR: So I guess if anyone is interested.

>> KAITLIN SULLIVAN: We're hiring.

>> MODERATOR: Talk to Kaitlin after this.

>> CARL SZABO: What's the workflow when one of your content moderators (?), how does it proceed?

>> KAITLIN SULLIVAN: A person reports a piece of content; it's triaged by machine to make sure it's not, you know, the obvious child exploitation imagery, stuff like that, and then it will go to our review team. Our reviewers are all over the world, we cover over 50 languages, and we hire people who are linguistically and culturally native to the areas of the world that they're supporting. So just being proficient in Spanish from an AP Spanish class is not sufficient to cover, you know, the Mexican market, nor is being from Mexico and culturally and linguistically native to Mexico sufficient to cover the market in Spain, because there are cultural differences, different slang, different terms in use now, and we all know how nuanced language is. Being able to have people evaluate the content who are from the places the content is coming from is really important to us. These reviewers get lots of training. Again, our policies are in-depth and detailed, so there's lots of training and extensive retraining as our policies change and update. And then we also have a really rigorous audit process to make sure that people are enforcing our policies accurately, so we can address any gaps either in our training or, frankly, in our policies. Sometimes if a policy's not being enforced accurately, it's because my team wasn't clear enough in outlining what the requirements for it were. So it's a really iterative process between the teams. And then the third team we work really closely with we call our community integrity team, and they're the team of engineers and scientists, technical folks, who are trying to make this all easier and more scalable. We talk a lot about having machines and computers do this work, but it's not an either/or where machines replace people and eventually AI will run everything. The technology is not as advanced as people think; we don't live in that world yet, it's not that easy, it's really complicated in many of these pieces. But what machines can do is, like I said, help us triage, help us get things to the right people, help us identify what language something is in and what issue it's about so we can send it to a specialist. They can't, at least not yet and probably never, do 100% of this work, because it's so contextual.
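The routing step described here, sending a report to a reviewer who is linguistically and culturally native to the content's market and then auditing a sample of decisions, could be sketched roughly as below. This is an assumed, simplified model for illustration only; the reviewer pool, market codes, and audit rate are invented.

import random
from dataclasses import dataclass

@dataclass
class Reviewer:
    name: str
    language: str   # e.g. "es"
    market: str     # e.g. "MX"; cultural nativeness matters, not just language

REVIEWERS = [
    Reviewer("ana", "es", "MX"),
    Reviewer("luis", "es", "ES"),
    Reviewer("dara", "en", "US"),
]

def route(report_language, report_market):
    # Prefer a reviewer native to both the language and the market;
    # classroom proficiency in the language alone is not enough.
    for r in REVIEWERS:
        if r.language == report_language and r.market == report_market:
            return r
    # Fall back to same-language reviewers and treat this as a coverage gap.
    same_language = [r for r in REVIEWERS if r.language == report_language]
    return same_language[0] if same_language else None

def audit_sample(decisions, rate=0.05):
    # Re-review a random sample of decisions to catch gaps in training,
    # or in how clearly the policy itself was written.
    if not decisions:
        return []
    k = max(1, int(len(decisions) * rate))
    return random.sample(decisions, k)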

>> I have a question. After Facebook acquired Instagram, did the content moderation synchronize? Did the teams stay separate, with different guidelines and different content that goes up?

>> KAITLIN SULLIVAN: That's a really great question. There are some differences in our policies between Facebook and Instagram. Like, for instance, Facebook requires you to use your real identity, to identify who you are, and it doesn't match exactly on Instagram (?), which is, obviously, much more photo heavy, much more video heavy. But by and large the standards are the same, 'cause we have the same principles across platforms, we're trying to protect against the same harms, and so there's a large overlap in the workforces for operations, partly to minimize duplication and partly to use the learning that Facebook had from the decade prior to Instagram.

>> MODERATOR: Before we move on to the scenarios and the fun part of you guys being in the role of the moderators, are there any questions for the panelists?

>> Do we have the microphone.

>> KAITLIN SULLIVAN: Sorry, I appropriated the microphone. It's limited resources.

>> Thank you so much.

>> MODERATOR: We'll just use this one.

>> My name's Michael (saying name), I'm a journalist. I don't know how many other journalists there are in this room, but it seems like a lot of the reason we're in what is a bit of a swamp is because we have three anodyne words: content, moderation, and platform. Whereas, and this was perpetuated initially by Zuckerberg and Facebook, the claim is, oh, we're just a platform. Isn't the reality that what we're dealing with is not a platform; you're publishing? And if you're publishing, you have a responsibility to edit. The "New York Times" has a couple of editors; you're now learning that you've got to edit, and you've got 20,000 editors because you have a lot bigger audience. So haven't we sort of missed the point here, or is the point that it's time for the so-called neutral platforms to deal with the reality that they're publishing companies and they have to deal with the consequences of that?

>> CARL SZABO: So, you know, it's an interesting analogy. The "New York Times," right, the one that (?), the "New York Times" has an editorial board, so they've got news articles and they've got the editorial section, but they also have the classifieds section, right? And usually the terms around the classifieds are that the classifieds section operates much more like a bulletin board: we don't want to assume editorial control over the content of the classifieds. We may choose to deny or remove classified ads that we find objectionable, but we're not going to assume full responsibility for every transaction that occurs through the classifieds. If someone sells a lemon of a car through the classifieds, should the "New York Times" assume that liability? Probably not. So for platforms, if we're using the "New York Times," let's use the whole newspaper, not just the content that gets written by the editorial board. Now let's move to the idea of a platform. Online platforms are, and this is channeling the creators of Section 230, like a library that houses content but should not be liable for that content. But the library may decide it doesn't like some of the content in that library. We may not like pornography in libraries; take that out and leave it aside. But just because a library assumes a moderator's control over some of the content that's within the library doesn't mean we're going to imbue the library with all the responsibility of reading every single page in every single book. And trying to suggest that platforms, just because they moderate and remove content, because it's objectionable, because it clearly violates their terms of service, because it violates community standards, or because they just want to create a safe space, are immediately transformed into a newspaper, which is designed around the idea of articles written by the people who work for the newspaper, editorial content written by an editorial board who are employees of the newspaper, that's not the right analogy. Platforms are not newspapers. So I'll disagree with the supposition.

>> MODERATOR: I'm going to take off the moderator hat; I'm involved with an NGO that works on legislation around platforms (?). What we forget about the digital age is how much power it gave back to the population, and how much reach platforms, or not platforms but their users, have when they use a platform and their voice and speech reach others. As Carl said, a platform can decide to only allow certain content, and I fully support that, by the way, and nothing is neutral about that. Platforms are still companies that have a business model. The social obligations they take on are their own choice, and it's our choice to either be members of those platforms or delete our accounts. For example, I've never been a member of the Pinterest community. I hear it's really fun and there's a lot of user-generated content, but it's not my cup of tea; the content that goes up is not as interesting to me, and I don't spend my time on it. There's another question. And unfortunately it's the last one before we go to the fun part of deciding on the scenarios, and then I'll let you go early to lunch.

>> To your point, isn't that less the case when you have a very small number of platforms with a virtual monopoly on social networking?

>> I can't hear you.

>> Sorry. So we have a very small number of platforms with a virtual monopoly on public

>> Have you been to Myspace?

>> I don't know if it's quite the same. I mean, doesn't being a monopoly come with more responsibility or liability?

>> MODERATOR: I think it's for the government to decide if a platform is a monopoly or not, or has a majority of the market at the moment. They definitely have more pressure on them, but if you remember, recently, after certain events, users have left platforms based on how they were being used (?). I know a lot of people who aren't on social media, they're not on Facebook, and it's really hard to keep track of them and know what's going on with them. Sometimes when I interview people, we interview for interns or for fellows, I'm trying to find out information about these people, and there are some people who don't use social media platforms, and it's their choice. Carl?

>> CARL SZABO: I'm going to channel my time at the Federal Trade Commission and my background in economics. So let's start by breaking down the word monopoly. What's the prefix? Mono, meaning one. We have to ask if there's only one market player, and if not, monopoly is not the right word. The first step in identifying a monopoly is asking what market you're working in; you have to define the market. This came up quite prominently when XM and Sirius were trying to merge. Some were saying the market should be defined as satellite radio only, two players merging into one. Others were saying the market should be defined as all terrestrial radio, with multiple players. Others were saying it should be broader: iPods, which were new at the time, CD players, cassette decks, which also are gone. So let's look at what we're considering the market for social networks. Are we saying it's only Facebook? If the market is only Facebook, then, yeah, maybe that's a monopoly; that's an argument. Or are we saying it should be broader, the ability to communicate between people, social networks generally? Then, OK, we've got LinkedIn, we've got Twitter, Snapchat, Facebook.

>> KAITLIN SULLIVAN: The comment section on our blog.

>> CARL SZABO: Yeah, suddenly the marketplace gets much, much, much bigger, and then the term monopoly starts to go away. Now, the counterpoint would then be, well, I've spent so much time on this platform, I have a history with this platform, I'm trapped, I can't escape, like being locked into my 30-year fixed mortgage, I think. Well, then we're getting into a totally different conversation about portability, the ability to download your data, and a lot of big platforms let you do that. So we need to take a step back before we start throwing out words like monopoly; we need to ask what the market is, what we're trying to achieve. Again, it's kind of like moderation: as somebody who tries to pitch op-eds all the time, and Rob, our contractor over there, is doing some right now, if one platform says no, then I go to another, if they say no, I go to another, and eventually I go to Medium or my own web page. As for stifling free speech, there are so many venues for speech that I don't think we should be scared when one platform decides what is right for its users.

>> MODERATOR: All right, so now we're going to go to the scenarios. Our wonderful volunteer is going to help us pull them up. You have the printouts in front of you; look at them. I'm going to read the facts, you can read the platform policy, and you're the moderator: you're a team at a hypothetical company, and each scenario has a different company with different rules. Heads up, some of these scenarios are very intense, they're based on real-life events, they are controversial, so be aware of that. So, yes, the first scenario is up on the screen (reading scenario).

I'll give you a moment and come back, and then you'll raise your hand based on your decision about what should happen to the content. Discussion amongst yourselves is encouraged. So please raise your hand if you would leave the content up. OK. Raise your hand. Seven people. Raise your hand if you would flag or escalate the content (?). All right. Raise your hand if you would take the content down. That's OK. So Carl, what do you think about this scenario?

>> CARL SZABO: This is definitely the challenge that Kaitlin and her team face, and that a lot of content moderation teams face. You've got to take off your personal hat, right; this is horrible speech, these are terrible things to say to a human being regardless of political ideology. So now we need to turn to the policy principles that we're advancing, and the policy principles talk about (?) that violates our community standards. So the question is not, is this free speech; the question is not, is this the First Amendment; the question is, does this violate our community standards, and is this a threat of violence? It is. I would say the comment goes: you take it down and you say this is a violation of our community standards.

>> KAITLIN SULLIVAN: Yeah, and I think what's definitely real with these scenarios is that there's always a valid argument on all sides, and it depends on the rules. When we look at speech, especially hate speech, we will remove hate speech used to attack someone, but we want to allow for the discussion of hate speech. So if you're discussing something that happened to you, you know, I walked down the street yesterday and someone called me (?), that's allowed; if you want to discuss whether (?) has too much power in our society, whether it should be used more openly, that's valid. The tricky thing is knowing that context and whether that context is clear enough in the content. Do you always say, this is what happened to me, I want to share it, or do you sometimes just share the screenshot and assume all of your friends will know this is something that happened to you? And what your friends might know, our moderators might not know if the context isn't explicit. Threats of violence are an interesting case, and I think that's a harder one: what should you be allowed to reshare, even if it's being condemned or discussed, because it could still be very specific, it could still provide instructions for a specific method. There are some types of speech that are still dangerous and hurtful however they're shared, and some threats of violence fit squarely in that, so that's a really interesting one.

>> CARL SZABO: So, I mean, this gets back to the point you were mentioning earlier about just having the computer do it all, right? A racial slur gets taken down, period. Well, maybe that doesn't fit with your angle, which is having a fruitful discussion of whether this is appropriate speech, let's-talk-about-this-as-a-society type content. And the same could be true with how you deal with the collision between newsworthiness and just kind of offensive content (?). It's kind of the same question, but it's a different (?).

>> KAITLIN SULLIVAN: Yeah, so I didn't hear your (?) totally right, but computers are really good at identifying slur words; they're really bad at the context in which they're shared. That's what we need people for. On the newsworthy one, that's honestly one of the hardest issues we face: how to balance reporting on something that is validly part of a political discussion with not allowing certain things. We, frankly, have, unsurprisingly, very detailed standards on this, and there are some things that are never newsworthy. We don't allow videos of beheadings on our platform, no matter whether it's the BBC reporting on what happened to a journalist who was kidnapped or whether someone is praising it and saying, yay, beheadings; not allowed on our platform. But there are other things, you know, photos of a genocide that include bodies stacked up, and those bodies happen to be naked; that is maybe a really good newsworthy reason to allow nude content, even though nude content generally violates our policies, because it is raising awareness (?)

>> CARL SZABO: Didn't mean to steal your scenario.

>> KAITLIN SULLIVAN: That's all right.

>> MODERATOR: Before we move on, as a retired law professor, I lasted only two years, the end of the platform policy gives the team that reviews content some discretion (?): if they think the spirit of a policy is more important than the letter, they can leave the content up. So all of you were right in your own way; all of this could have happened under the policy that applies here. Scenario number two: a photograph of a World War II concentration camp in Nazi Germany has been posted. Naked bodies are in the photograph. The platform prohibits (?) images as part of its community standards. Please discuss. This is a shorter scenario and policy, so let's vote. Raise your hand if you would leave the content up. One, two, three, four, five, six, seven, eight, nine, 10, 11, 12, 13. Raise your hand if you would flag the content and put a warning label on it. Escalate? Take down the content? All right. I'm just going to address this: unfortunately, the platform policy just asks if there (?); under this policy, the platform would probably take down the photograph. Now, it's your personal choice whether you would stay on the platform afterwards.

Let's move on to scenario number three: personalities (?) locally known as political commentators, through a series of posts, argue that teen survivors of a school shooting are actors or are financially backed by a powerful foundation that (?). Read the policy and discuss whether you would take these posts down. All right, guys. Raise your hand if you would leave the posts up. Raise your hand if you would flag their content. Raise your hand if you would escalate their posts. Raise your hand if you would take them down. I'm going to throw in a little audience participation: those of you who would flag the content or take it down, can you comment on your decision making, why you would take such action?

>> MODERATOR: This is live streamed; we want the audience online to hear what we say. Use the hashtag #IGFUSA.

>> Why do laws come into play here?

>> CARL SZABO: So the (?), so if I am posting something on a platform, Section 230, which I showed earlier, comes into play. If I posted on a platform that Rob dresses terribly, which if you look at him you can see is not the truth, Rob can sue Carl Szabo for libel, defamation of character, things like that, but he can't go after the platform. The policy that we see here does not address legality; a platform's policy usually says that it will comply with laws, so that would be a way to remove the content. But on the face of it, the policy says that if the content or visual image is unsafe to the community it shall be removed, and to show respect for others. I mean, this is a tough one, because I don't think the policy is narrowly written enough to give you the type of guidance I would want as a content moderator looking at this, because I could say, well, we're not impugning the other users of the platform, we're impugning people off platform, it's newsworthy, it's good discussion, we're not doing it to be offensive, we think this is a truthful statement. I don't know, I would go with the copout answer of escalating the issue and letting somebody else figure it out.

>> KAITLIN SULLIVAN: I was going to say, this is partly why our reviewer guidelines are far, far, far, far, far more detailed than these hypothetical ones; you need to give reviewers guidance on what to do in every situation. As Carl explained, though, platforms don't adjudicate libel laws, which are also very different globally, but we do have policies against harassment and bullying. For Facebook at least, not the hypothetical platform here, it's hard to say without seeing the exact piece of content, but this would likely count as attacking a survivor of a tragedy and violate our harassment policies.

>> MODERATOR: I think it's also important to be aware that, for example, Facebook has a huge team, and they spend a lot of days and nights writing those guidelines, working on them and improving them based on current events. When you're a smaller platform that has a comments section, or just a startup, you don't have lawyers; you maybe have someone right out of law school who decided to join a startup and live a little, so policy guidelines like these can be all they have, and they're very vague. Obviously, if you're right out of law school, you'd rather have something vague and then cop out than have a detailed policy that you have to be responsible for.

All right. Moving on to scenario number four, I'm going to read it out. Thank you so much to every volunteer, by the way, they're doing an amazing job; they were here earlier than we were. Someone posted that a genocide never happened, and supplemented it with photographs, letters and other documents that the poster claims are real but that are, in fact, faked. The policy is: we welcome open dialogue and conversation, even if contradictory; we do not allow false statements posted as facts. Please discuss.

>> MODERATOR: Full disclosure: you can probably guess where I stand based on my last name. What would you guys do, what would the audience do? Would you leave the content up? Raise your hand. All right. Would you flag the content? One person. Escalate the issue? Would you take the content down? All right. I would say that, for example, if this content was just a statement that a genocide didn't happen, the content would have stayed up based on these policies. However, the false materials attached to the post make it eligible for removal. There are countries, by the way... Robert? Yes.

>> So, yeah, I think maybe you were just about to say that there are countries out there that refuse to acknowledge the fact that a genocide happened. And you talked about hiring moderators who are sensitive to, you know, their local culture and language and all those sorts of things. If you exist in one of these countries, let's say this is Facebook's policy, I don't think it is, but let's say it is, and you are hiring someone in Turkey and this is someone in Turkey posting this, that becomes a very difficult game of what is the truth when internationally the truth is disputed.

>> MODERATOR: And I was also going to elaborate that, for example, in countries like France and Germany, denying the Armenian genocide is a crime; I think in France it involves jail time. Kaitlin, as someone who deals with conflicts of laws and historical interpretation, what would a platform do?

>> KAITLIN SULLIVAN: Yeah, so this is, again, an area where different platforms can take different stances. And I know there's a panel on misinformation a little bit later, so I don't want to scoop that one. Facebook's stance is generally not to police the truth, which sounds a little bit like a copout, but I think one of the reasons is the one Robert brought up: a lot of these things are internationally disputed, and very validly disputed. We have the U.N., we have other international bodies that declare whether or not something is considered a genocide, each country has its own standards for considering whether something is a genocide, and to have a private company make that determination instead of these internationally recognized bodies seems probably not the ideal outcome. And then there are also disputed ideas or determinations at different levels, so there are just things that Facebook can't know. What happened at the town council meeting in Bethlehem, Pennsylvania last Tuesday, and who said what, is definitely something someone could be misrepresenting on Facebook, but it's not something that we'd ever be able to know and enforce. So I think you have both the issue of actual acquisition of knowledge at the scale that we operate on, and then you have the issue of a private company making determinations that are probably better played out in other international forums, rather than being policed by a company that decides what's true and what's not true when a lot of these things are very validly disputed.

>> CARL SZABO: Yeah, and, you know, looking at the fake platform policy that we put up here, I would leave the content up, right? Because what the policy lacks is an intent element (?): what is the intent, what is the knowledge of the poster, right? The policy says we do not allow posting of false statements that are presented as facts. But what you're seeing here is the posting of statements that the poster believes are truthful, believes are fact, and the preamble to that is we want an open conversation, an open dialogue. So perhaps by leaving this up we're actually engendering an open conversation, an open dialogue about what occurred, and by having an open dialogue you could actually change somebody's mind, as opposed to just saying we're not going to talk about it at all.

>> KAITLIN SULLIVAN: If I lived in France I'd (?) and send it to the authorities.

>> I have a question for Facebook going back to this idea of not policing facts. So, for example, the vast majority of scientists believe that the earth is flat, there's an international consensus on that... sorry, is round.

[LAUGHTER]

>> KAITLIN SULLIVAN: I thought you were giving a hypothetical.

>> There are some avid believers that that is not a fact, but according to your, you know, criteria, there is an international consensus that the earth is round. So according to this platform policy, would it be removed because it is clearly a false statement?

>> KAITLIN SULLIVAN: So just to be very clear, this is not Facebook's platform policy. According to this platform policy, I would probably remove it, but that's not our platform policy; we will not censor someone on Facebook for saying the earth is flat. That seems like such a silly, silly statement, but if we go back over history, censoring people who believe something outside the norm is something that hindered the progress of science. When everyone thought the earth was the center of the solar system and the sun revolved around us, and this sounds a little silly to pick something out of history, Galileo said, nope, the sun is the center, not the earth. That obviously didn't work out very well for him, even though 99% of the people at that time believed the earth was the center and that that was fact; he thought something different. I don't think the earth is actually flat, and I don't think we're all going to change our minds on that, but we don't want to be the platform that comes in and censors someone. And also, to Carl's point, we don't want to punish someone for being wrong, for having a belief that they may genuinely and validly hold as true and that may have been what they were taught. It doesn't feel like an adequate thing to say, no, I'm going to censor you for being incorrect even though that's what you were taught in your family or your country.

>> MODERATOR: There is definitely also a value that comes with making a statement that might not be factually true, or that a lot of people wouldn't agree with, and then being part of a discussion. I've changed my mind multiple times throughout my life through engaging on these platforms. Not that I believe that the earth is flat. But I don't have a lot of knowledge in the scientific field because I've been a lawyer for way too long. So sometimes you ask a question, sometimes you put something up and say, you know, I've heard this on the news and I've heard this from this person, hey, scientists out there, if you have time please educate me, tell me if this is right. That value that comes from engaging online I think is huge. So let's move on to scenario number five. I'm not sure, is this on your printouts, do you see this scenario? OK, awesome. The law of country A makes it illegal to criticize its leader (I'm from Russia, by the way). A poster who is not in country A criticizes country A's leader for a statement made by the leader. The platform policy says it will comply with all applicable laws and supports robust discussion and freedom of speech. Please discuss.

>> MODERATOR: All right. So please raise your hand if you would leave the content up. One, two, three, four. Eight, nine people. Please raise your hand if you would flag the content or put a warning label on it. OK. Raise your hand if you would escalate it up the chain. Seven, eight people. Raise your hand if you would take the content down. Four. Five. All right. We have a question. So we'll go to the question before we discuss the scenario.

>> Jeff J. from (saying name). There's inadequate information here because you don't have the locality; that's critical to this and to the worldwide discussion. So it matters whether it occurs in that country. And the policy is incomplete here, because every policy I see would have the words "local laws" in there, so it's really hard to say.

>> MODERATOR: The Internet doesn't have borders, does it, Carl?

>> CARL SZABO: The question is right. And actually, I ended up writing this one, and I specifically made it vague because I think there's a really interesting way we can look at it. The first question is how you would respond if, I'm just going to say, the poster's in the United States. Then do we leave the content up? Because the law may apply regardless of where the statement was

>> (?)

>> CARL SZABO: But it's on the Internet, so it's where it was received. So if I'm the leader of country A.

>> I'm pissed.

>> CARL SZABO: I'm pissed, and I say my laws apply: this poster posted on the Internet, this content was read in my country by my people, therefore I have jurisdiction over it.

>> (Off mic).

>> CARL SZABO: But it's not that simple. If I'm an Internet company and I have employees in that country, my employees are getting thrown in prison. So, Jeff, you're absolutely right: theoretically, this should only apply if it's a country A resident making the statement about country A. But the way that we've seen it applied has been, OK, company, OK, platform, this was made on your platform, your platform's being viewed inside my country, I'm going to arrest your employees, and then we can have a discussion about whether the content should remain up or down.

>> (off mic).

>> KAITLIN SULLIVAN: Yeah, and I know there's a question to get back to, too. I think these are some of the toughest issues, and I think a lot of the questions you raised are right, because we may have different gut feelings about this: if it is Germany and a Holocaust denial law, a democratic country with a law that most people find principled, versus Thailand with a you-cannot-insult-the-king law, an undemocratic, you know, country with a law that, at least in the U.S., does not generally resonate as principled. So, Facebook is a company, and first of all, we are against government censorship, especially censorship that goes against internationally recognized human rights. That being said, we do respect local law, in part because local law is often democratic and is the law of the countries in which we operate. The way this plays out in practice is that any request to remove content that may be illegal in a certain jurisdiction must be very specific: specifically tailored to that jurisdiction and to the specific piece of content, it must cite the law, and it must demonstrate that this law has been enforced before, that it's a real, practicing law on the books. So we're pretty demanding on exactly how it comes in and the rules by which you have to play to get the request recognized, and we will push back if we think it's against international human rights norms. We're part of the Global Network Initiative, which brings a lot of companies together to do this work. Where we end up complying, we do it as narrowly as possible with respect to the content in question, so if it is a post we will comply with respect to the post but not the entire profile. We do it as narrowly as possible with respect to jurisdiction, so if the content is illegal in Turkey it will be inaccessible in Turkey but not in the rest of Europe or the rest of the world. And then we try to be as transparent as possible: we will notify the person who posted that their content was restricted because of a government request, not because of a Facebook policy, and we will publish that in our twice-yearly transparency report so that everyone can see which governments are doing this and what the nature of their requests is. And we do look to and work with the community of folks in civil society to be a voice for where those things are appropriate and where they're not.
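A hedged sketch of that narrow, jurisdiction-scoped handling of legal requests, restrict only the cited post, only in the requesting country, notify the author, and log the request for a transparency report, might look like the following. Field names and checks are illustrative assumptions, not Facebook's real process.

from dataclasses import dataclass, field

@dataclass
class LegalRequest:
    country: str               # requesting jurisdiction, e.g. "TR"
    post_id: str               # the specific piece of content cited
    law_cited: str             # the statute the request relies on
    previously_enforced: bool  # is this a real, practicing law on the books?

@dataclass
class Post:
    post_id: str
    author: str
    blocked_in: set = field(default_factory=set)

TRANSPARENCY_LOG = []  # published periodically so people can see which governments ask

def handle_request(req, post):
    # Push back on requests that are not specific or not backed by an enforced local law.
    if not req.law_cited or not req.previously_enforced or req.post_id != post.post_id:
        return "pushed back"
    # Comply as narrowly as possible: this post only, this jurisdiction only.
    post.blocked_in.add(req.country)
    # Log it for the transparency report and tell the author it was a government request.
    TRANSPARENCY_LOG.append({"country": req.country, "post": req.post_id, "law": req.law_cited})
    return f"notified {post.author}: restricted in {req.country} due to a government request"

def is_visible(post, viewer_country):
    # The content stays up everywhere except the requesting jurisdiction.
    return viewer_country not in post.blocked_in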

>> MODERATOR: We had another question? Awesome. All right. Well, let's move on to our last scenario; it's about a diner, and then we're going to move on to lunch after that. So, a diner gives a negative rating about a meal... do you want to do this one, Carl? You actually wrote it.

>> CARL SZABO: Sure. So a diner gives a negative rating about a meal they had under the awning of a Washington, D.C. restaurant last weekend. If everyone just rewinds, like, two days, it was really wet outside for a really long time and I'm still depressed about it, so that's what I was thinking about when I wrote this. Anyway, they give a negative review while dining outside at a restaurant last weekend, and for the folks at home, it was monsoon season last week. They're complaining about the humidity, the splashing of rain from the sidewalk, and passersby ducking under the awning. So they're complaining about the experience they had as a result of the humidity, the splashing of rain, and people trying to get out of the rain. The platform's policy, which somebody will be reviewing this against, is: please make sure your contributions are relevant and appropriate to the forum. For example, reviews aren't the place for rants about a business's employment practices, political ideology, extraordinary circumstances, or other matters that don't address the core of the consumer experience. So at the end of the day, should this content be left up when the complaint is about the weather? That's the question before you to consider and decide.

>> MODERATOR: All right. All right, guys, I know everyone is ready for lunch. I am. Raise your hand if you would leave the content up. Please wait a second, I can't count. Nine. I'm sorry, it's not because I'm a lawyer, it's just the way my brain works. Nine people. Raise your hand if you would flag the content. All right. Raise your hand if you would escalate this content up the chain. Raise your hand if you would take it down. Ten. I feel like there were a few people who didn't vote.

[LAUGHTER]

>> MODERATOR: C'mon. It's your civil liability, civic duty, sorry. All right. Carl, what would you, I honestly would leave this up, but what would you do?

>> CARL SZABO: You know, I look at reviews online a lot. In fact, just the other day I wrote a review for a product that died on me after a month. Those I find to be really valid reviews. You look at five-star reviews and one-star reviews, and sometimes the one-star reviews are really silly, like the Best Buy example I used earlier: the package didn't get there on time. Well, that doesn't tell me if it's a good TV or not. But this does feel like it's related to the experience at the restaurant. Maybe the tables were too close to the curb and that's why I'm getting splashed, and maybe they should have moved them back. Maybe when people were ducking under the awning they should have fenced off the awning to prevent that. If it's impinging on my dining experience, I think that's relevant to the idea of a review of the restaurant, because the restaurant's not just the food, it's the experience, and thus I would leave the content up.

>> MODERATOR: What if there was a part of the restaurant that didn't have a roof, it had one of those little gazebo situations? That would be even worse.

>> CARL SZABO: They shouldn't be outside.

>> KAITLIN SULLIVAN: This is interesting, and Facebook does have a reviews product, and I think determining what is validly part of a review is really hard. We were talking up here, as everyone else was talking, about how this policy is very clear that political ideology isn't relevant, but, of course, I live in D.C., and it's really interesting to consider whether the fact that a restaurant will or won't serve a certain person, or the fact that a cake baker won't bake a cake for a certain type of couple, may be relevant to your decision making, and whether a platform gets to decide if that's relevant to your decision making. I think it's a whole 'nother lens on content moderation that we don't talk about or explore as much, because the point of a review platform is different from the point of a kind of broad connection platform. It's there to serve a specific purpose, but there are still a lot of choices to be made about what benefits that purpose.

>> MODERATOR: So hypothetically, if you were evaluating the reviews of a restaurant that some consumers who haven't gone there are mistaking for another restaurant with the same name, would you take those down because they're reviews of the wrong place?

>> KAITLIN SULLIVAN: I am not an expert in our review policies, there's a different team that owns that, but I think personally that would be irrelevant content because it is about the wrong place. So if you have a policy that says you don't allow irrelevant reviews, you don't allow a review to talk about the fact that the sky is blue, then you probably shouldn't allow a review to talk about a different business.

>> CARL SZABO: Yeah, at the end of the day, if I'm a platform, I ask myself what the purpose of my reviews is. Is it to give somebody a place to rant, whether positive or negative, or is it to provide a service to other potential consumers? Is the purpose of my review section to give people actionable information as to whether they should visit this restaurant, buy this item or stay at this hotel? If it is, and an errant review gets placed that doesn't do what I think my review section should do, which is provide people useful information, then I'll probably try to take it down per my terms. That's why here I would leave the content up, because I think it's relevant to the user experience. But, and I keep going back to the example of somebody giving one star because the package got there late, things that have nothing to do with the underlying product are things I would probably consider trying to get out of the system.

>> MODERATOR: There are also bots now leaving reviews on Amazon and other websites, comments that aren't from real people but are filed by algorithms, and moderators have to look at them and evaluate whether something looks like a real review, whether the pattern is the pattern of a real human and not a bot located somewhere in Russia.
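A minimal sketch of the kind of pattern check being described, assuming invented field names and thresholds rather than any platform's real signals:

from dataclasses import dataclass

@dataclass
class Review:
    author_id: str
    text: str
    rating: int                     # 1-5 stars
    author_account_age_days: int    # how old the posting account is
    author_reviews_last_hour: int   # reviews posted by that account in the last hour

def looks_automated(review: Review) -> bool:
    """Return True if the review trips any of these simple, invented heuristics."""
    # Brand-new accounts posting in bursts are a common bot-like pattern.
    if review.author_account_age_days < 1 and review.author_reviews_last_hour > 5:
        return True
    # Very short, extreme-rating reviews carry little information and are often posted in bulk.
    if len(review.text.split()) < 3 and review.rating in (1, 5):
        return True
    return False

# Example: a terse one-star review from a day-old account posting in a burst gets flagged.
print(looks_automated(Review("u123", "bad", 1, 0, 12)))  # True

Real systems weigh many more signals than this, and a flag like this would typically route a review to human evaluation rather than remove it outright.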

So before we close out, I just wanted to ask our wonderful panelists to give your last thoughts in just a few sentences. What do you want our wonderful audience to take away, or what do you think is the future for platform moderation? There are more and more challenges to Section 230 coming up; this year Congress passed FOSTA-SESTA, which carves into Section 230 for the first time. So the question is, is this a slippery slope? Are we chipping away at the immunity that platforms have, or are we going to keep the immunity that created this place where free speech and commerce can flourish?

>> CARL SZABO: So content moderation is tough. It really is. I mean, the examples that we put together are, for the most part, some of the easier ones that a lot of sites and services face. Content moderation is really tough. And that's why we wanted to go through this exercise: to get you out of the mindset of, well, they took down that content, that's got to be a violation, and toward the idea of, well, it may be offensive to me, but it's not a violation of the policy. But as to your point on Section 230, one of the important prongs, and it's the one I mentioned earlier, is the Good Samaritan provision of Section 230 of the Communications Decency Act. If we want platforms to remove objectionable content, and encourage them to keep an eye out for objectionable content, then we need to give them permission to do so without assuming liability for all content. And that's one of the really important prongs that a lot of us sometimes forget when we're talking about treating platforms as a place where people can post and then saying the platform's not liable for those posts. If they want to monitor for offensive content and take it down, then you can't say, well, you missed this, therefore you're liable for it.

So content moderation is really hard, and we need to enable platforms to have the tools and the latitude to figure out what content is best for their users. That's kind of my two cents.

>> KAITLIN SULLIVAN: Yeah, I would echo all of that, and I think something both of the scenarios did really well is demonstrate the range of areas that content moderation affects. Sometimes our mental model is Facebook/Twitter/YouTube, but it's also about the "New York Times" or any blog that one of you might have. Content moderation is about a startup, it's about a restaurant review site, it's about so many other things, so I think our mental model needs to broaden when we think about it. Every platform does it a little bit differently based on the principles of that platform, and that diversity of places in the marketplace where you can go to share your content is an important thing, and is part of what the liability protections protect.

>> MODERATOR: Sure. Why don't we do questions before we wrap up. Really quick.

>> What happened to Section 230 in the recent legislation? What's the impact on Section 230's protections for content sites and the like?

>> MODERATOR: So the Stop Enabling Sex Traffickers Act and FOSTA now make platforms liable if sex trafficking happens on them and they don't cooperate with law enforcement. The problem is that before, platforms big and small would collaborate and give information freely to law enforcement. Now there is a knowledge standard that, for example, victims of sex trafficking or their families can sue a platform under. If someone uses my blog post's comment section and uses some kind of code to initiate a trafficking deal, and then I go to law enforcement because I catch it, then someone can sue me because they say you knew. You knew, hence you're liable.

>> KAITLIN SULLIVAN: Yeah, I think it's still kind of to be determined how it ends up playing out, but the intent of it, and I think a lot of Ashkhen's points go to whether the drafting made that intent clear enough, was to go after a website called Backpage, which lots of people believe had met a high standard of knowledge that this was happening on its platform and chose not to address it. It is meant to tackle that.

>> MODERATOR: As a result, Reddit and Craigslist and a bunch of other websites have shut down some parts of their sites, because they're afraid of having that liability on them without monitoring for, or having knowledge of, these awful, awful acts happening.

>> Thanks, Andrew Bridges of Fenwick & West in San Francisco. It seems to me what we're talking about is a classic problem of rigid law versus flexible equity, in that you're announcing all sorts of principles and rules that you're planning to follow. Looking at this earlier, I'm not sure anybody in this room could tell you whether the values, the community values, of Facebook and the "New York Times" and Best Buy differ at all, because all terms of service are litigation documents. They are meant to provide maximum flexibility and minimum obligations and liability. All terms of service, 'cause I write them as a litigator, all terms of service say exactly the following: mumble, mumble, mumble, we can do exactly what we want to do and you can't sue us. That's what they all say. So at the end of the day, and I get that it's a tough job, I worry that platforms are hurting themselves by pretending to have a great objective infrastructure for what is ultimately a subjective operation serving, one hopes, as diverse a set of constituencies as possible. It's not much different, I think, from saying we have a velvet rope, we will allow what we deem cool, and trust us, we're trying our best. Isn't that the more honest way of saying it?

>> KAITLIN SULLIVAN: So I'm not a lawyer and I did not write our terms of service, but I think that's one of the major reasons why our community standards are separate. Our terms of service reference them, and the terms are kind of the legally binding document, but we try very hard to write our community standards in as plain English as we can, even though when you print them out they turn out to be a 45-page document. And that's why we try to uplevel the principles and values and talk about the policy rationale behind each policy, so if you don't want to read the 20-page explanation of what exactly we mean by nudity, you can still understand what we're trying to cover. But you're right, those are standards that we hold ourselves to, and we hold ourselves very strongly to the idea of applying them equitably, and the news and the press and the public hold us to that. Even though they may not sue us for it, we're heavily scrutinized for it, and we believe it's the right thing to do. But the standards are separate from our terms of service, not written by lawyers, and written as a document that we apply to our own content.

>> CARL SZABO: I've written a number of op-eds on this. These are private platforms. As much as people think that using them is their fundamental right or anything like that, these are private places at the end of the day. So if I came in here and I wore something wholly inappropriate, you know, "IGF is bad," you know, Shane is going to turn me back at the door.

>> MODERATOR: Or if they don't like green, they turn you back at the door.

>> CARL SZABO: Exactly. It's a private space, first. Second, look, back in my old days I used to write terms of service and privacy policies. People complained that nobody reads them. Well, guess what, they're not for you; they're for other lawyers to read when platforms get sued, so they can say, well, here we laid it out. Now, what I was really impressed by, and I really recommend you check this out, is Facebook's community standards, because they actually make it pretty easy to figure out what's going on. It goes multiple layers deep, but there's a table of contents on the left, you can always uplevel, and it's subdivided. It's really impressive how they break it down.

>> KAITLIN SULLIVAN: It's searchable now.

>> CARL SZABO: Yeah. I was impressed by that. It's different from the terms of service or the privacy policies because it's actually written for an audience like, well, I'm a lawyer, but for my wife. So I think that's something to take notice of: terms of service and privacy policies are not for you. Facebook has the infrastructure and the size, and they stepped up and did this. So I think at the end of the day there's something to be done there.

>> MODERATOR: When writing the scenarios and putting together this panel, I had to push myself and put in things that make my blood boil and ideas that I don't agree with. And that was the purpose of this panel. We wanted to ask more questions than give you answers, and maybe encourage you to read more about this or to become more comfortable living in the gray area, not knowing the right answer right away. We are happy to continue the discussion after the panel is over. I am going to let you go a minute before the end. Thank you so much for participating and joining, and thank you to our speakers.

[APPLAUSE]