Facebook: 64% of the time users join extremist groups, the platform told them to (wired.com)
109 points by trackofalljades on Jan 8, 2021 | 49 comments


Editorializing titles like this is against the site guidelines and will cause you to lose submission privileges on HN.

https://news.ycombinator.com/newsguidelines.html

In this case it's tricky because the article's own title is baity, so the HN guidelines call for changing it: "Please use the original title, unless it is misleading or linkbait". But they add "don't editorialize". Cherry-picking a detail out of the article and making it the title is the leading form of editorializing.

When an original title is baity or misleading, the way to rewrite it is to find representative language from somewhere else in the article—something that accurately and neutrally represents what it is about. There's nearly always something there that's suitable: it might be in the HTML doc title, the URL, a subtitle, the first paragraph...sometimes even a photo caption.

Separately from the title, this article is an op-ed that doesn't add any significant information and is therefore not a good HN submission, especially not on a topic du jour like this one, which is generating hundreds of different articles.


I like these rules, and you do great work keeping things on track. This scenario got me thinking, though: what if all but one thing in an article falls into the "doesn't add any significant information and is therefore not a good HN submission" bucket? Is it better to move on without discussing it here because we only discuss complete works, or is there a mechanism to have this community discuss just one particular fact that unfortunately doesn't have a URL dedicated to it?

Actually, it would have a URL dedicated to it if the submitter used a `#:~:text=` fragment.
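For illustration, the text-fragment syntax mentioned above just percent-encodes the quoted phrase after `#:~:text=`; browsers that support text fragments then scroll to and highlight that phrase. A minimal sketch (the URL and helper name are hypothetical):

```python
from urllib.parse import quote

def text_fragment_url(page_url: str, phrase: str) -> str:
    """Build a deep link that scrolls to `phrase` on supporting browsers."""
    return f"{page_url}#:~:text={quote(phrase)}"

# Hypothetical example: deep-link to one sentence in an article
print(text_fragment_url("https://example.com/article", "64 percent of the time"))
# → https://example.com/article#:~:text=64%20percent%20of%20the%20time
```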


The standard mechanism you'll find in the dang works is: you submit the thing and write an (almost certainly first) comment about that one interesting thing. You get to steer the discussion somewhat but not frame it entirely.


I would say, submit the article with a neutralized title, then post a comment highlighting the interesting fact to start the discussion.

Edit: sorry pvg, your comment wasn't there when I started to post mine.


Good job barely hiding your political alignment. Wait, I mean bad job.


This accusation comes up whenever we make a call that isn't to someone's liking. It's a mechanical reflex, which explains why the "political alignment" we're accused of "not hiding" varies wildly with the observer.

I think we all know the feeling, but it's best to catch it internally first when it flares up, so one can ask oneself if it's really true. Lashing out reflexively doesn't help anything, including your own favored cause.

In this case, the reality is that I've posted things like the GP hundreds if not thousands of times, usually not on political topics (and scattered across all political alignments when they are). The principles are always the same, to the point of tedium. Anyone can see that for themselves if they want to bother:

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

https://hn.algolia.com/?query=%22significant%20new%20informa...


I still don’t know what to make of this information.

Assuming that Facebook isn’t an evil entity that favors chaos, all I see is plain numbers: People who like X also like this group.

I’m a reasonable user of Facebook groups and I sometimes join new groups. I’m currently part of about 20 groups but I rarely participate. Why hasn’t Facebook suggested extreme groups to me? All I get is travel and buy/sell/rent groups.

Is Facebook being good to me and bad to others? Or are bad/gullible people liking all kinds of stuff that leads them into the rabbit hole? Is it Facebook’s responsibility to curate all the content on the platform or are we just asking them to clean up some trash?

Is it possible to curate all of Facebook’s content? Would you like a platform where everything you share is curated? Would you use an email provider that blocked emails you might find interesting?

I’m not justifying what’s happening, but I don’t really understand what people expect of such a platform and whether they’d like it more if it just showed photos of puppies.


I think social media that prioritizes engagement just pushes people to whatever extreme they have a potential proclivity for.

Oh, you like x? How about you stay on our site and check out this whole series of content that people like you get really emotional about and can't stop clicking on.

It leads inexorably to extreme content.
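The dynamic described above can be sketched as a toy recommender that ranks purely by predicted engagement (all group names and scores below are hypothetical): nothing in the ranking considers what the content actually is, so the most emotionally charged items surface first.

```python
# Toy engagement-maximizing recommender (all names and scores are made up).
candidate_groups = [
    {"name": "Local Gardening Tips",      "predicted_engagement": 0.31},
    {"name": "Cute Puppy Photos",         "predicted_engagement": 0.42},
    {"name": "They're Hiding the Truth!", "predicted_engagement": 0.87},
]

def recommend(groups, k=1):
    """Return the k groups the user is predicted to engage with most.

    Note: content is never inspected; only predicted engagement matters.
    """
    return sorted(groups, key=lambda g: g["predicted_engagement"], reverse=True)[:k]

for g in recommend(candidate_groups, k=2):
    print(g["name"])
```

The outrage-bait group wins purely because it scores highest on engagement, which is the whole point of the criticism.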


Your last point rings true.

What’s the solution here? Prevent companies from using engagement as a metric? Do you want to be recommended random 13-view videos on YouTube? Because that’s what you get.

I don’t know if I’d like to lose good recommendation algorithms because of bad apples. However, I don’t like bad apples either; they rot the whole batch.


Sadly I don't know a good solution either, and I'm with you that it's not all bad. It's a complex problem for sure.


We might consider more or less eliminating automated recommendation systems on social media and return to, as you put it, "just showing photos of puppies", i.e. just showing posts from entities that you have explicitly friended or followed. This won't entirely solve the problem of people coalescing into radical groups, but it might help if learning about the group required someone posting it to their feed for everyone to see.


The flip side is when your friends on the platform don't share whatever interest you're trying to learn more about... You go browsing groups, and the list needs to be sorted/ordered/tailored/suggested somehow, unless you want something useless like alphabetical order. Should I join A1% Landscaping or Aaron's Aardvarks?


So a search engine?


Having the list be seeded by your own thoughts means you're still sort of in your bubble. I'm talking about the (important, IMHO) use case of wandering through the world seeing what presents itself. Perhaps ordering by popularity makes sense (simply skip to page nnnn to discover unpopular groups) but that feels like it would also have downsides.


I don’t think recommendations are the only thing at fault here. You just need your uncle Frank to share a chemtrail article and someone will like it and reshare it. Maybe someone will start a chemtrail group and invite your uncle.

I think the engine speeds up discovery even for people who don’t have many connections, but if you’re already googling chemtrails chances are you’ll find your group.

Let’s not forget that 4chan has no recommendations, yet...


Sure. There is no way to eliminate whacko conspiracy theorists and fringe forums. They existed before social media too. The idea is that interpersonal dynamics can help manage these things: if your uncle Frank posts about chemtrails, then hopefully someone he trusts can point him in a different direction in the comments, and if not, at least the people reading it will see the comments. If Facebook just surfaces this information, you get none of this: click and boom you're in an alternative reality full of lizard people and humans in cosmic battle.


> Is it possible to curate all of Facebook’s content?

Here's perhaps a better question:

If it's not feasible for a social space to be maintained in some semblance of good order because of its size, should that space even exist?

Do extremely horizontal social networks provide value to society proportional to the harms they seem to cause, and that the people who build them seem either unwilling or unable to stop?

I'd also pose this question about narrowly targeted advertising, which of course goes hand in hand with enabling these platforms to exist.

Things don't necessarily have to exist just because they can.


There are plenty of large groups that people derive social value from, maybe even most.

Unfortunately, Facebook mirrors human society and gives bad/profit-driven actors an easier avenue than traditional recruiting and distribution.

Most things are good/neutral, but at scale some things will go south.


> There are plenty of large groups that people derive social value from, maybe even most.

I'm not sure what you mean here. Do you mean groups on Facebook, or that there are other groups on the scale of Facebook (i.e. a user count in the billions) that people derive value from?

For the first, in the absence of facebook these groups would no doubt still exist. There were forums before facebook and some of them grew quite large. There were even regional social networks on smaller scales (I worked for one once).

If you mean the latter, well, I don't think that's really true. In so far as it is, there are only really a couple that aren't just owned by facebook to begin with.


> in the absence of facebook these groups would no doubt still exist.

I don’t think they would.

Creation, discovery, participation, following. None of these is particularly easy if you revert to forums and Google. I won’t join Cute Puppies Forum, but I wouldn’t mind following a group of the same name on Facebook for a little while; maybe I’ll answer some questions too.

The average Joe won’t care enough to join their little city group, but Facebook makes it so easy that you find all kinds of non-tech people on it, ready to answer your questions (and sometimes insult you, but I block those on sight).


I don't think we really know what kind of social spaces would have evolved in a world where something like facebook didn't or couldn't grow. I think there were some hints of it evolving in the early 2000s, but when facebook blew up it just sucked ALL the air out of the room and forums and blogs all got stuck in 2009.

Maybe I'm wrong, but again I have to ask the question: Are those conveniences actually worth the harm we see from aggressive narrow ad targeting/tracking, as well as the issues mentioned here around parasitic viral memes, that make things like facebook even viable as businesses?

This is an evaluation we are allowed to make, and it's not a foregone conclusion that just because people can build these things they have a place in our society.


I’d rather keep my Facebook groups and have Facebook manually quarantine misinformation groups and pages instead, in a way similar to how Reddit does it.

I don’t think we should reduce public liberties because some people take advantage of them. I’d rather punish/limit them instead.


Ok, so who's going to accurately quarantine misinformation among the trillions of posts being made by billions of people? How many moderators does it take to do that effectively? Likely a few orders of magnitude more than facebook employs.

Anyways, no "public liberties" are unlimited. All are constrained where leaving them open creates significant harm or burdens other people's liberties. Even free speech and freedom of association.

People understood that once. "All my freedoms are absolute and who cares if society falls down with them" is a modern construct and it seems pretty plain to me that it's resulted in a lot of problems.


Perhaps what it comes down to is a confidence interval, and the broad overconfidence of tech companies when their product meets the real world. There's nothing inherently wrong with saying "oh, you like X, maybe you should try Y".

The problem comes in when you're too focused on providing the Y suggestions, in my opinion. There's a general overconfidence in the tech industry, particularly in advertising, and even more so in social media targeted advertising, that the suggestions that they make are actually useful to society as well as individuals.

I'm not saying that they're wrong about it, but I am saying that just because the previous status quo was "no measurement, trust us, it works" in the TV/radio/billboards era, and now you're bringing in "look, we measured it!" doesn't inherently mean that you are correct about that measurement, or that it is the right thing to measure in the first place. Perhaps the old status quo factored in innumerable intangible social factors that could then somehow translate into tangible results that you're not capturing in your new metrics.

The economics of the social advertising industry are wholly untied from providing genuine value, and this manifests itself in myriad ways: you're only seeing a symptom of it here.


It’s pretty clear that the numbers show that Facebook directly facilitates extremism through their recommendation system.


I don’t know whether that’s been proven, but I’ve always assumed it’s because controversy and rage fuel interactions, therefore Facebook “likes” the content. There’s a difference between the numbers doing this on their own and Facebook themselves manually tweaking recommendations.


To be clear, I don’t think FB is using humans to manually facilitate this. They develop and own the algorithms that do. They are directly responsible for the negative effects of those algorithms, which facilitate extremism, as their own research presented here demonstrates.


A better way to put it is that Facebook let it happen rather than made it happen.

I don’t know whether Facebook is directly responsible any more than Johnnie Walker is directly responsible for car accidents. In both cases it’s the product’s troublesome users who are the awful ones.

Perhaps we just need legislation like we have for alcohol, which doesn’t eliminate the problem but certainly reduces it.


If I know someone has a heroin habit, and I know they thus would buy some heroin off me if I offer them some, is it responsible of me to offer them heroin, since they want it?

But, you see, it is not me who is offering the heroin, it is just this machine which guesses the user's preferences...


What you’re suggesting is moderation. Someone has to tell Facebook that heroin is bad.

From what I see Facebook already implements automated content detection but you know it’s not perfect, content slips through and it’s often in the gray area.

Would you like your account to be blocked because you joked about something? Bots don’t have a sense of humor.

Do you want some ibuprofen? Sorry, that’s a drug. Request dropped.

(Heck I’m even afraid this comment gets marked as spam because of the keywords)


I think the focus of the discussion is not deplatforming content, but tackling the extremism rabbit hole Facebook is causing people to fall into because of its suggestions system.

Of course I wouldn't like my account to be blocked because I posted a slightly edgy joke. But would we care if an edgy memes page got marked as unsuitable for recommendations? Probably not. Would it be a problem if Facebook blocked an ibuprofen ad to users deemed unsuitable? I wouldn't care, in all honesty.

If limiting the extremist slippery slope of targeted recommendations requires marking extremist content as unsuitable for recommendation, then I personally wouldn't mind. Facebook already does this to some extent, but its algorithms are either not good enough, or driven predominantly by other factors.


Stopping recommendations altogether is another option. The internet worked perfectly fine before algorithmic recommendation.


That's what is called the echo chamber... slowly sliding into it with Facebook's engineering help.

What it leads to, we were able to witness yesterday.


If Facebook did anything to contribute to the current unrest, it was supporting competing echo chambers that challenge the dominant one. The organic emergence of echo chambers is surely a problem, but let’s not pretend that the corporate-broadcaster world of the past was more virtuous.

I believe that information being freed from top-down control is a necessary process for the improvement of our civilization. The alternative just leads to more corruption, slavery, and death.


We've been witnessing it for at least a year.


For quite a bit longer, I think.

The book Weapons of Math Destruction was published in 2016, and the conclusions therein were not exactly new at the time either.[ß] The business model of ad-based social media is focused on generating engagement, above all else.

Nothing generates more engagement than outrage, fear, or playing to primal instincts. To satisfy that demand, content providers are encouraged to generate the maximal outrage and foment fear. Social media platforms are incentivised to direct people towards that content, because they make more money that way.

The end result: social media platforms automate pile-on and radicalisation at the speed of lies, at global scale. Not because they are inherently evil, but because they are making money off of people being inherently terrible.

ß: The real meat of the book is in the first two chapters, as far as I'm concerned. Everything else in there is just repeating the arguments.


But yesterday was enough for even those in denial to see it.


I guess that depends on what they were denying. The Wired article is entirely about Facebook, Twitter, YouTube, et al. facilitating extremism on the right, when certainly the reality is that they are facilitating it wherever they can get engagement, on either side of the political spectrum. I mean, since October they basically shut down a lot of content that was very much right-of-center. Alex Jones was deplatformed a long time ago.


The header makes it seem like Facebook said this themselves, but it doesn't seem to be the case.


The article states that the number comes from a study that Facebook themselves undertook.


The figure comes from research conducted by Facebook:

> Facebook’s own research revealed that 64 percent of the time a person joins an extremist Facebook Group, they do so because the platform recommended it.

> Facebook has also acknowledged that pages and groups associated with QAnon extremism had at least 3 million members, meaning Facebook helped radicalize 2 million people.


I know it’s just a quote but I have to point out that’s not how statistics work. You can’t say 64% of a group were radicalized because 64% of people will join a group suggested by Facebook.

In fact all 3 million of those users could have had the group suggested by Facebook.
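A quick back-of-the-envelope run with the article's own figures shows the conflation: 64% of 3 million members joining via a recommendation is not the same claim as 2 million people being radicalized by Facebook.

```python
# Figures from the article; the point is what they do and don't imply.
members = 3_000_000             # members of QAnon-associated pages/groups
joined_via_recommendation = 0.64  # share of joins where the platform recommended the group

recommended_joins = members * joined_via_recommendation
print(f"{recommended_joins:,.0f} joins attributable to recommendations")
# → 1,920,000 joins attributable to recommendations

# But "joined via a recommendation" != "radicalized by the recommendation":
# some of those people may have been radicalized already, and in principle
# all 3,000,000 could have had the group recommended to them.
```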


If you follow the links, the "64%" statistic is from data that is at least 5 years old. Are there more recent #s? A lot has changed in the last 5 years.

I'd also argue that "Facebook: We told them to join these groups" is a misleading take, since it's not a quote from Facebook and it's twisting their words. I guess "64% of Facebook users joined groups via recommendations" isn't sexy enough of a headline.

It's interesting that no one mentions the percentage for non-extremist groups. Assuming the omission is intentional, I wouldn't be surprised if the rate for extremist groups is actually lower than for non-extremist groups.


#AbolishCopyright. Allow people to freely share information with each other and you will allow decentralization to happen. You will stop subsidizing lies.

That is the answer. It really is as simple as that (not easy, but simple).


In other news: most of the time I joined non-extremist groups, it was also at Facebook's suggestion.

If I create an account and join 2 groups about dogs, facebook is going to start suggesting groups about dogs...


It's clear social media companies are in over their heads. I'm glad I don't use them; they need serious oversight. Just today Twitter allowed the promotion of genocide on what they would consider to be their tightly moderated platform: https://twitter.com/chineseembinus/status/134724760209453465...


Again, we should focus more on education than moderation.


Facebook built a radicalization engine, and they should be held accountable for the radicals it creates.

This isn't even new, Facebook facilitated a genocide in Myanmar that everyone seems to forget about.

What will it take for society to acknowledge that Facebook was a massive mistake run by amoral sociopaths?


Anyone have an unbiased, objective definition of extremist group?

I mean, if I join PETA, does that count? Or Black Israelites Facebook page?



