_Senate committee hears from tech policy chiefs on algorithm secrets_
There’s a hearing underway at the Senate Judiciary Committee into the algorithms tech giants use to decide what you see, and are steered towards, on social media.
Policy chiefs from Facebook, Twitter and YouTube are testifying. You can watch the hearing live here.
The Axios political news site has a lively and reader-friendly take on all of this. Here’s a chunk:
Tech platforms have built the heart of their businesses around secretive computer algorithms, and lawmakers and regulators now want to know just what’s inside those black boxes.
Why it matters: Algorithms, formulas for computer-based decision making, are responsible for what we get shown on Facebook, Twitter and YouTube — and, increasingly, for choices companies make about who gets a loan or parole or a spot at a college.
How it works: When posts “go viral,” algorithms are why. Often, they work by detecting small blips in user interest and amplifying them.
Algorithms’ complexity and obscurity have helped tech firms make the case that they are neutral platforms. They also allow companies to stand at one remove from responsibility for decisions about promoting and demoting content.
But users and critics, increasingly aware of the power of these systems, now want to hold companies more responsible for the outcomes their code produces.
Driving the news: At a hearing on “Algorithms and Amplification,” executives from YouTube, Twitter and Facebook, along with Harvard researcher Joan Donovan and ethicist Tristan Harris, will testify Tuesday before the Senate Judiciary Committee’s privacy, technology and law subcommittee.
The subcommittee is led by Sen. Chris Coons (D-Del.) and ranking member Sen. Ben Sasse (R-Neb.).
The big picture: Government agencies around the world are starting to take up issues related to algorithms and machine learning.
Our thought bubble: The conversation in policy circles has long concentrated on the outer limits of content decisions — decisions about what gets removed and who gets banned. Those are what software people call “edge cases.” What gets recommended, and why, is the center of the issue.
Between the lines: Platforms have long used their algorithms to boost business metrics, such as the amount of time spent on their site. Increasingly, though, they are also acknowledging and tapping the power of algorithms to limit the spread of misinformation or hate speech that doesn’t merit an outright ban.