
Klaus Schwab’s World Economic Forum (WEF) has announced plans to moderate the internet using artificial intelligence (AI) that identifies “misinformation,” harmful content, or anything else Klaus Schwab decides should be censored.
The internet censorship proposal would require “subject matter experts” to provide training sets to the AI so it can learn to recognize and flag or restrict content that the WEF deems “dangerous.”
The WEF published an article Wednesday outlining a plan to overcome frequent instances of “child abuse, extremism, disinformation, hate speech and fraud” online, which the organization said cannot be handled by human “trust and safety teams” alone, according to ActiveFence Trust & Safety Vice President Inbal Goldberger, who authored the article.
The system works through “human-curated, multi-language, off-platform intelligence”: input provided by expert sources that is used to create “learning sets” for the AI.
“Supplementing this smarter automated detection with human expertise to review edge cases and identify false positives and negatives and then feeding those findings back into training sets will allow us to create AI with human intelligence baked in,” Goldberger stated.
Daily Caller report: In other words, trust and safety teams can help the AI with anomalous cases, allowing it to detect nuances in content that a purely automated system might otherwise miss or misinterpret, according to Goldberger.
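Neither the WEF article nor ActiveFence publishes an implementation, but the loop Goldberger describes is a standard human-in-the-loop moderation cycle. The sketch below is a minimal illustration under assumed details: a scikit-learn text classifier stands in for the detection model, and the confidence band, labels, and function names are all hypothetical, not anything disclosed by the WEF or ActiveFence.

```python
# Minimal human-in-the-loop moderation sketch: an automated classifier flags
# content, low-confidence "edge cases" are routed to human reviewers, and
# their verdicts (including false positives/negatives) are folded back into
# the training set before retraining. All names and thresholds are assumed.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Initial "learning set" -- in the article's framing, human-curated,
# off-platform intelligence supplied by subject-matter experts.
learning_set = [
    ("example of expert-labelled harmful text", 1),
    ("example of expert-labelled benign text", 0),
    # in practice, many more labelled examples per language and abuse area
]

REVIEW_BAND = (0.35, 0.65)  # confidence range escalated to humans (assumed)

def train(examples):
    """Fit a simple text classifier on the current learning set."""
    texts, labels = zip(*examples)
    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(texts, labels)
    return model

def moderate(model, post, human_review):
    """Decide on a post, escalating uncertain cases to a human reviewer."""
    p_harmful = model.predict_proba([post])[0][1]
    if REVIEW_BAND[0] <= p_harmful <= REVIEW_BAND[1]:
        # Edge case: a trust-and-safety reviewer decides, and the verdict is
        # banked so the next retraining pass "bakes in" the human judgment.
        verdict = human_review(post)
        learning_set.append((post, verdict))
        return "restrict" if verdict else "allow"
    return "restrict" if p_harmful > REVIEW_BAND[1] else "allow"

model = train(learning_set)
# e.g. moderate(model, "some user post", human_review=lambda p: 0)
# Periodically: model = train(learning_set)  # retrain with reviewer findings
```

The confidence band is the key design choice in such systems: everything outside it is handled automatically at scale, while only ambiguous content consumes scarce reviewer time.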
“A human moderator who is an expert in European white supremacy won’t necessarily be able to recognize harmful content in India or misinformation narratives in Kenya,” she explained. As the AI trains on more learning sets over time, it begins to identify the kinds of content moderation teams would find offensive, reaching “near-perfect detection” at a massive scale.
Goldberger said the system would protect against “increasingly advanced actors misusing platforms in unique ways.”
Trust and safety teams at online media platforms, such as Facebook and Twitter, bring a “nuanced comprehension of disinformation campaigns” that they apply to content moderation, said Goldberger.