In algos we trust. YouTube/Facebook’s ultimate solution: “cross your fingers and hope that AI will solve it” https://t.co/cxHJwqLUst #mscedc

The above quote comes from the article linked in the tweet.

To quote further:

“The problem is one of scale. YouTube didn’t grow to the size it is by manually checking every video, and it’s not about to start it now. For one thing, it would be hugely expensive: 300 hours of video are uploaded every minute. Even assuming staff members did nothing but watch videos for eight hours a day, it would take more than 50,000 full-time staff to manually moderate it.

So the company relies on tricks which do scale: algorithmically classifying videos, by scanning the titles and video content itself; relying on users to flag problematic uploads; and, in large part, by trusting creators themselves to correctly label their work. That trust is backed up by force, though, with YouTube reserving the right to pull channels entirely from the site if creators consistently miscategorise their work.

But those tricks are showing their limitations, now. It’s taken a while, but Google has waded into the same battlefield that Facebook’s been losing on for years. At a certain size, it’s impossible to run a censorship regime that won’t produce a steady stream of errors indefinitely.”
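As an aside, the "more than 50,000 full-time staff" figure is easy to reproduce from the numbers given in the quote itself. A quick back-of-envelope sketch (the 300-hours-per-minute upload rate and the eight-hour working day are taken straight from the article; everything else is just arithmetic):

```python
# Back-of-envelope check of the article's staffing estimate,
# using only the figures quoted above.
UPLOAD_HOURS_PER_MINUTE = 300        # "300 hours of video are uploaded every minute"
MINUTES_PER_DAY = 60 * 24
HOURS_WATCHED_PER_MODERATOR = 8      # "nothing but watch videos for eight hours a day"

# Hours of new video arriving each day: 300 * 1,440 = 432,000
daily_upload_hours = UPLOAD_HOURS_PER_MINUTE * MINUTES_PER_DAY

# Moderators needed if every hour of video must be watched once: 432,000 / 8 = 54,000
moderators_needed = daily_upload_hours / HOURS_WATCHED_PER_MODERATOR

print(f"Full-time moderators needed: {moderators_needed:,.0f}")
# Full-time moderators needed: 54,000  -- consistent with "more than 50,000"
```

The point of the arithmetic is the scale argument: even a conservative estimate puts purely manual moderation beyond anything the platform would staff, which is why the automated "tricks" described above carry so much weight.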

Here, in one piece, are many of the hopes and fears for algorithmic regulation – and regulation of algorithms.


from http://twitter.com/Digeded
via IFTTT