A new report by Jeff Horwitz and Deepa Seetharaman in the Wall Street Journal suggests that Facebook knew its algorithm was dividing people but did very little to address the problem. The report notes that an internal company presentation from 2018 illustrated how Facebook's algorithm aggravated polarizing behavior in some cases, warning that if these algorithms were left unchecked, they would feed users ever more divisive content.

According to the WSJ, Zuckerberg & Co. shelved this presentation and decided not to apply its findings to any of the social network's products. Joel Kaplan, Facebook's policy chief, reportedly worried that the proposed changes would have disproportionately affected conservative users and publications. In a statement, Facebook said it has learned a lot since 2016 and has built a robust integrity team to tackle such issues.

However, the WSJ's report notes that even before the company formed this team, Facebook researcher Monica Lee found in 2016 that "64% of all extremist group joins are due to our recommendation tools." Facebook did consider ideas to tackle the polarization problem, such as tweaking its algorithm and creating temporary sub-groups to host heated discussions. However, these concepts were shot down as "antigrowth." In the end, the social network didn't do much, in favor of upholding the principle of free speech, a value Zuckerberg has talked about a lot lately.

Earlier this month, Facebook named its Oversight Board, its Supreme Court, if you will, which can overrule the social network's decisions on content moderation. Hopefully, the company will be forthcoming in sharing its research and learnings with the board, rather than waiting for someone else to report glaring problems with its products first. You can read the WSJ's full report on Facebook's divisive algorithms and its internal studies here.
