In the land of the free, why are all of our newsfeeds on any particular platform controlled by the same algorithm?
We should each determine what news, posts, images, or blogs (collectively referred to as content) are brought to us. This inability isn’t just an annoyance; it also contributes to group-think, collective blind spots among the masses, and manipulation by foreign or domestic entities — including hostile foreign powers and domestic hackers — who “game” the system that shapes our personal interactions and media intake. Even when that system of influence is created by the governing body or the platform itself, it often lacks the ability to properly represent the values (which may change constantly) of each individual customer, or the capability to determine which pieces of information the user considers relevant.
If these platforms really care about improving the quality of their users’ lives and leaving them with memorable, favorable AND beneficial experiences, then they need to include an option for users to customize how their newsfeed is aggregated. In this blog post, we'll look into how enabling users to determine their own information suggestion algorithm can reduce gaming of the system by bad actors, group-think, tribalism, and accidental information suppression by the moderator.
Let’s be clear: many social media companies do allow any account to connect to their platform through programming interfaces that can extract information and post content — but most people can’t code. Although these interfaces are extremely useful to programmers like myself, they don’t change the experience or newsfeed you see when you open the app while you’re out and about, or when you’re just using the platform for fun and friends. Still, letting each user program their feed within the app, through a simpler mechanism, seems entirely within the realm of possibility, since the platforms essentially offer something similar already. Many of them also label posts that are displayed because they are ads, so letting people customize their feeds shouldn’t affect the ads that will be promoted regardless. Even if a platform only uses the user’s algorithm in combination with its primary algorithm, that will still help preserve intellectual diversity. The worst-case scenario is that most people choose the same settings — and in that case, when it comes to the question of manipulating newsfeeds and exposure, the responsibility ultimately rests with the collective users rather than the platform.
How does this impact us/U.S.?
This issue is important because of the need for diversity: more specifically, diversity of thought, diversity of knowledge, and diversity of perspectives. Many people get information from these platforms, and that information shapes our thoughts and views on topics ranging from what clothes to wear to a party to what laws should be passed. Allowing each user to customize their feed will make the flow of information on these platforms more dynamic. It will be harder for foreign or domestic influencers to push narrow-minded propaganda, and easier for naturally occurring, nuanced, and dynamic information to spread. Bad actors will face an additional obstacle: not only tailoring content convincingly to varying personality types, but also shaping it so that it gets picked up by these many diverse newsfeed algorithms.
Not having this capability is how Facebook’s newsfeed promoted the ice bucket challenge over the Ferguson protests. That sort of disconnect is what helps deepen division and collective blind spots. If people had had their own custom newsfeed algorithms within each social network, it would have increased the odds that at least one member of a user’s network saw the story on their feed and spread it to the rest of the network.
Within each user’s network there is a certain level of “research diversity.” That is, the sort of information one person may look for is not the same sort of information another person may look for. When the different members of the user’s network do finally communicate, they share some of this information, since sharing information or disinformation is the fundamental purpose of communication. This essentially funnels any important or relevant information through the network to the person who is thought to perceive it as important or relevant. And during the process of being confronted by a “trusted” person with new, conflicting information, the user HAS TO engage their critical thinking skills (which helps them be resistant to propaganda) to do any or all of the following:
-Assess if the information is truly important or relevant to their point
-Logically reason if the new information truly supports or negates their previously held position
-Check for mitigating or dismissive factors
-Search for other, contradictory information
-Finally accept or reject the information’s impact to their previously held beliefs
-Resolve any cognitive dissonance that may result from their newfound changed position and other previously held positions and/or beliefs
Yes, some of these steps may happen so quickly that they almost seem unconscious, but even then the user is still exercising those critical thinking skills. The important part is that this conflict is started by a trusted party, and thus will likely be seen as valuable enough to justify engaging in the critical thinking tasks. The other (and maybe more important) part is that this can happen with positions on topics users are not as “dug in” on. By exercising their humility muscle, users will hopefully become more open to the idea of being wrong about the more “tribal” issues.
If the overall diversity of information reaching anyone in the user’s network is reduced, they all essentially end up viewing the same content on their individual feeds. That is not what makes a well-informed and educated populace. Studies have shown that critical thinking skills are what make disinformation and propaganda less convincing.
How can non-programmers program?
So of course we have to wonder how people could customize their newsfeed in a way that’s less complex than using a programming language, but still offers as much flexibility as possible in determining which features of an image, tweet, or post get promoted. We’ll go over some possible user interfaces (UIs) — that is, ways users can interact with the code — in the next few sections.
Surveys of information (supervised learning) vs. likes, glances/views, and comments
The simplest way for the user and the platform to create custom algorithms would be surveys of what the user would like in their newsfeed. You may wonder how this differs from how many existing algorithms work. Well, many of those algorithms work by looking at what you like, view, and comment on. The issue with these metrics is that you may not “like” some posts because they carry bad news, and you may not comment on others because the topic is controversial — yet neither situation makes the topic any less newsworthy. Even views alone can be misleading: sometimes you view something because of a misleading or unclear headline, for fun instead of for information, or simply because your mouse slipped.
The alternate survey type could ask the user what sorts of topics they want to flood their feed, or to keep out of it. It could even ask about content the user has engaged with in the past, and whether that content should be used to judge what else is worth putting on their feed. Another option is a comparative test: asking the user a series of questions that compare the newsfeed value of one piece of content to another.
Having an explicit survey of what users do or don’t want in their newsfeed gives the platform guidelines for discriminating between content. If done regularly, it also lets the platform’s AI automatically search its database for content the user would likely approve of seeing, rather than simply showing whatever drives the most engagement with the platform.
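As a sketch of how explicit survey answers could drive a feed, imagine the survey producing a per-topic weight and each post carrying topic tags. Everything here — the Post structure, the topic names, the weights — is hypothetical, but it shows the basic mechanic: score content against the user’s declared preferences instead of their engagement history.

```python
# Minimal sketch: survey answers become per-topic weights, and posts are
# scored against them. All names and values here are invented examples.

from dataclasses import dataclass

@dataclass
class Post:
    title: str
    topics: list  # topic tags attached by the platform

# Survey answers: positive weight = "flood my feed", negative = "keep out".
survey_weights = {"local-news": 2.0, "science": 1.5, "celebrity-gossip": -3.0}

def score(post: Post) -> float:
    # Sum the user's declared preference for each topic the post carries.
    return sum(survey_weights.get(t, 0.0) for t in post.topics)

def build_feed(posts, threshold=0.0):
    # Keep only posts the survey says the user wants, best first.
    kept = [p for p in posts if score(p) > threshold]
    return sorted(kept, key=score, reverse=True)

posts = [
    Post("City council vote tonight", ["local-news"]),
    Post("New exoplanet found", ["science"]),
    Post("Star spotted at cafe", ["celebrity-gossip"]),
]
feed = build_feed(posts)
```

The threshold is the user’s own dial: raising it makes the feed stricter, and re-taking the survey updates the weights without the platform inferring anything from clicks.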
Block Models to Build Feeds
One way non-programmers have programmed in the past is by sequentially connecting blocks on a screen. Each block corresponds to a particular function (a type of filter, comparison, feature, outlier detector, etc.), and the connections between blocks determine the sequence in which the functions are executed.
For instance, a block could perform a transformation such as replacing a post’s raw view count with its rank by views (1st, 2nd, 3rd, and so on). The user could then filter out posts ranked below the thousandth place (say, out of 5,000 total posts), instead of using the actual number of views the thousandth-place post had as the cut-off.
Thus, each user could filter at different steps based on various features, transform those features (for instance, a feature could be the number of times a piece of content uses the word “bully,” and a transformation could be whether that count is above or below average), or even combine features to filter or sort the feed results. This gives users a way to create and execute flexible code in a way that capitalizes on visual representation and intuition.
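The block idea can be sketched in a few lines: treat each block as a small function over the list of posts and run them in order. The block names, post fields, and the “bully” filter below are invented for illustration, not any platform’s actual API.

```python
# Hypothetical sketch of the block model: each "block" is a function over
# a list of posts, and the feed is the blocks applied in sequence.

posts = [
    {"text": "bully bully stop", "views": 900},
    {"text": "nice day out", "views": 5000},
    {"text": "a bully again", "views": 120},
]

def rank_by_views(items):
    # Transformation block: replace raw views with a rank (1 = most viewed).
    ordered = sorted(items, key=lambda p: p["views"], reverse=True)
    for rank, p in enumerate(ordered, start=1):
        p["view_rank"] = rank
    return items

def keyword_count(word):
    # Feature block: count occurrences of a word in each post's text.
    def block(items):
        for p in items:
            p[f"count_{word}"] = p["text"].split().count(word)
        return items
    return block

def keep_if(predicate):
    # Filter block: keep only posts satisfying the predicate.
    def block(items):
        return [p for p in items if predicate(p)]
    return block

def run_pipeline(items, blocks):
    # The on-screen connections between blocks become a simple loop.
    for block in blocks:
        items = block(items)
    return items

feed = run_pipeline(posts, [
    rank_by_views,
    keyword_count("bully"),
    keep_if(lambda p: p["count_bully"] == 0),  # drop posts mentioning "bully"
])
```

In a visual editor the user would drag and wire these three blocks; the pipeline runner is all the platform would need underneath.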
Ask your computer to make the algorithm
Perhaps the most natural way for a user to customize their feed would be to ask their computer, in plain language, to filter the feed based on some criteria. Similar to speaking to Cortana, Siri, or Alexa, there are frameworks and techniques that enable a vocal interface to a database. Users would have to understand which features and other functions (similar to the blocks’ functions) can be used when speaking to their computer. These programs are usually a bit computationally intensive, so it may be impractical for a social media platform to implement and maintain such a feature. However, that doesn’t mean we should give up on the idea. The platform could have users download the program to run on their own computers, and once it outputs a file containing the newsfeed algorithm, they simply upload that file to the platform the way they upload pictures or videos.
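To make the idea concrete, here is a deliberately tiny sketch of turning a plain-language request into a filter rule. Real voice assistants involve speech recognition and far richer language understanding; the two-phrase grammar below (“hide posts about X”, “show more X”) is invented purely for illustration.

```python
# Toy sketch: map two fixed English phrasings onto feed rules.
# The phrase grammar and post fields here are hypothetical.

import re

def parse_request(utterance):
    """Turn one plain-English request into an (action, topic) rule."""
    hide = re.match(r"hide posts about (\w+)", utterance.lower())
    if hide:
        return ("hide", hide.group(1))
    boost = re.match(r"show more (\w+)", utterance.lower())
    if boost:
        return ("boost", boost.group(1))
    return None  # request not understood

def apply_rules(posts, rules):
    hidden = {topic for action, topic in rules if action == "hide"}
    boosted = {topic for action, topic in rules if action == "boost"}
    kept = [p for p in posts if not hidden & set(p["topics"])]
    # Boosted topics float to the top; order is otherwise preserved.
    return sorted(kept, key=lambda p: -len(boosted & set(p["topics"])))

rules = [parse_request(u) for u in
         ["Hide posts about gossip", "Show more science"]]
posts = [
    {"title": "Star spotted", "topics": ["gossip"]},
    {"title": "Quiet weekend", "topics": ["lifestyle"]},
    {"title": "New exoplanet", "topics": ["science"]},
]
feed = apply_rules(posts, rules)
```

The point of the download-and-upload scheme is that only the output — the rules — ever reaches the platform; the heavy language processing stays on the user’s machine.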
Swapping feed algorithms
This last one is a bit meta, but once some people have created feed algorithms, they could send them to others or upload them for sharing. This would create a sort of coding competition among users, and it would encourage new and different techniques that produce new and different information streams. These feed algorithms could become the new app store on these platforms.
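One way such swapping could work is to represent a feed algorithm as plain data rather than executable code, so it can be uploaded, inspected, and run safely by someone else. The rule format below (op/arg pairs serialized to JSON) is an assumption for illustration, not any platform’s real format.

```python
# Sketch: a shareable feed algorithm as an ordered list of rule dicts,
# exported as JSON so it travels like any other attachment.

import json

my_algorithm = [
    {"op": "drop_topic", "arg": "gossip"},
    {"op": "min_views", "arg": 100},
]

# Export for a friend...
shared = json.dumps(my_algorithm)

# ...who loads it and runs it against their own posts.
def run(posts, rules_json):
    rules = json.loads(rules_json)
    for rule in rules:
        if rule["op"] == "drop_topic":
            posts = [p for p in posts if rule["arg"] not in p["topics"]]
        elif rule["op"] == "min_views":
            posts = [p for p in posts if p["views"] >= rule["arg"]]
    return posts

friend_posts = [
    {"title": "Star spotted", "topics": ["gossip"], "views": 9000},
    {"title": "Bake sale", "topics": ["local"], "views": 50},
    {"title": "Council vote", "topics": ["local"], "views": 400},
]
feed = run(friend_posts, shared)
```

Because the shared file is data, not code, the recipient (or the platform) can read exactly what it does before applying it — a natural fit for an app-store-style review process.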
For these companies, having users generate their own newsfeed algorithm can help create a platform marketplace of newsfeed apps. It will also increase regular discussions on knowledge, data, and their relative value in the population, pushing our everyday citizens to the forefront of the information age. Some of these algorithms could even be repurposed for the platform.
Our feeds are a new part of life and should be respected for the impact they CAN and DO have on our lives. Now is the time to realize the feeds do more than simply drive engagement and ads. What we can do as consumers is talk about these concerns on social media so the platform owners hear us, and support companies that do enable user-customized newsfeeds. As consumers in any nation, we have to realize how these feeds are shaping our minds and relationships, which are the very fabric of civilization and even our humanity.