A bipartisan group of House lawmakers has introduced a new bill that would require services that use algorithms to serve content to offer a version that lets users turn that feature off.
Called the Filter Bubble Transparency Act, the bill would require services like Facebook and Instagram to offer a version of their platforms described as “input transparent” that doesn’t pull users’ data to generate algorithmic recommendations. The bill would exempt smaller companies: those with fewer than 500 employees, those with annual gross receipts under $50 million over the preceding three-year period, and those that gather data on fewer than one million users annually.
Axios reports that the bill would not do away with algorithm-based recommendation systems entirely; instead, it would require services to include a toggle that lets users voluntarily turn that function off. The bill also states that platforms that continue to use recommendation algorithms must explicitly inform users that recommendations are based on information gleaned from analyzing their personal data. This notice can take the form of a one-time notification, but it must be clearly presented, the bill stipulates.
A major sticking point in the saga of Meta and its platforms Facebook and Instagram is how well the company has, or has not, been looking out for the safety of its users. After a bombshell report revealed that Facebook, now Meta, was aware that its platforms were toxic for young people, the company has faced an onslaught of continued revelations about how little transparency it provides to its users.
At center stage in this conversation has been Facebook and Instagram’s insistence on using algorithms to generate feed content, an approach the company adopted in place of chronological timelines. Axios notes that the recent controversy surrounding Facebook has renewed interest in bills that would give people more say in how algorithms shape their online experiences, and that this bill puts action behind the anger over how platforms use algorithms to target users with specialized content.
“Its own research is showing that content that is hateful, that is divisive, that is polarizing, it’s easier to inspire people to anger than it is to other emotions,” Facebook whistleblower Frances Haugen has said. “Facebook has realized that if they change the algorithm to be safer, people will spend less time on the site, they’ll click on less ads, they’ll make less money.”
Facebook has repeatedly denied claims that it doesn’t do its best to protect its users, and specifically said that it continues to make “significant improvements” to tackle the spread of misinformation and harmful content on its platform.
One argument behind this newly proposed bill is that if the algorithm were removed entirely, the need to make improvements to it would be moot.