Charlene Goldfield has posted “Recommendation Algorithms and Disinformation” on SSRN. Here is the abstract:
Recommendation algorithms deeply influence how humans interact with the internet. While their use may differ somewhat among social media and technology companies, most rely on “implicit feedback to track clicks, views, and other measurable user behaviors.” This information is then fed into the recommendation algorithm, “which has the ultimate power on deciding who sees what content and when.” This model is used most prominently by companies like Facebook. Overall, the algorithm is tweaked with the end goal of producing the most profit from the user’s behavior. Countless stories have been written on the impact of these algorithms and how they are used. More specifically, these algorithms produce a psychological effect on the end user. That is why more and more stories have focused on the link between recommendation algorithms and disinformation, or the promotion of “fake news”: the algorithm is tuned in a way that strips away the diversity of information users see and reinforces the confirmation bias fed by each user’s preferences.
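The feedback loop the abstract describes can be illustrated with a minimal sketch. This is a hypothetical toy model, not any company’s actual system: items are tagged by topic, a counter tracks the user’s clicks per topic (the “implicit feedback”), and the recommender keeps serving the most-clicked topic unless a small exploration rate injects variety. Set the exploration rate to zero and a single early click locks the user into one topic, which is the confirmation-bias dynamic at issue.

```python
import random
from collections import Counter

def recommend(click_counts, catalog, explore_rate=0.1):
    """Pick an item, exploiting the user's most-clicked topic.

    click_counts: Counter of topic -> number of past clicks (implicit feedback).
    explore_rate: probability of showing a random item instead (diversity).
    """
    if not click_counts or random.random() < explore_rate:
        return random.choice(catalog)
    # Exploit: serve only the topic the user has clicked most.
    top_topic, _ = click_counts.most_common(1)[0]
    matches = [item for item in catalog if item["topic"] == top_topic]
    return random.choice(matches)

# Toy catalog of items tagged by topic (hypothetical data).
catalog = [{"id": i, "topic": t} for i, t in
           enumerate(["politics", "sports", "politics", "cooking", "politics"])]

clicks = Counter()
clicks["politics"] += 1  # one early click on a political item...
shown = [recommend(clicks, catalog, explore_rate=0.0) for _ in range(5)]
# ...and with no exploration, every later recommendation is political:
# the filter bubble the abstract warns about.
```

With `explore_rate=0.0`, every item in `shown` shares the topic of that first click; raising the rate is the crude diversity lever a working group might scrutinize.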
This Paper proposes creating a voluntary working group of social media companies that will dissect and showcase how recommendation algorithms are used and how users interact with them. Through such a working group, discussions can examine how current legislation and regulations can help discourage the spread of disinformation while improving the end-user experience. The Paper will, in particular, explain the development of these algorithms and look at ways to solve this issue. It will survey the current regulatory landscape, including Section 230 and any First Amendment issues. Lastly, it will discuss the proposed working group and why it would be effective in combating disinformation.