Twitch is the go-to streaming platform for many content creators these days, thanks to the wealth of broadcast options, friendly engagement, and impressive reach it offers working streamers. The platform averages over 2.9 million viewers at any given time and hosted 9.5 million active streamers in February alone. Unfortunately, the app is beset by automated users, a.k.a. “bots,” with as much as three-quarters of an average Twitch user’s view count attributable to view-botting, defined by Twitch support as “the practice of artificially inflating a live view count, using illegitimate scripts or tools to make the channel appear to have more concurrent viewers than it actually does.” Follow-botting, which inflates a follower count with fake accounts, is even sketchier, and yet streams continue to attract plenty of bogus engagement every day.
Twitch finally cracked down on these problem users last week, banning over 7.5 million bot accounts in a bid to protect its real-life streamers. The fake accounts were identified and pinned down using “ongoing machine learning technology,” and their removal will leave many streamers’ follower counts sharply diminished. But it’s all for a better cause: the goal of every budding and established streamer is to draw in real patrons, and nobody wants fake views.
As both a commercial tool and a chat gimmick, bots are hardly unorthodox on the Internet. The rise of social media and platforms like Twitch has created greater corporate demand for audience engagement, with some businesses opting for fake accounts in the hopes of accumulating revenue, fast-tracking views, and attracting active participants. But it’s nothing more than play-acting: a manufactured reality meant to fool real users into publicly endorsing a seemingly legitimate entity. Navigating the web has become more complicated than ever, with all sorts of pseudo-content polluting the information stream. Computers have grown adept at mimicking human behavior, and it shows. Fake news and fake engagement belong to the same insidious digital beast.
Fake engagement is defined by Twitch support as “artificial inflation of channel statistics, such as views or follows, through coordination or 3rd party tools,” creating “incidental and duplicitous” responses. Spam, basically. Fake accounts are “controlled by a computer or script” and made to seem as real as possible. Follow-botting usually involves targeting real accounts “en masse,” with bots typically produced “in batches.” View-botting is slightly different and often manifests as chat viewing bots, built for the sole purpose of “imitating streamer/viewer interaction.” Botting may seem efficient, perhaps even futuristic, but it does nothing to contribute to the real growth of a channel. It undermines the site’s credibility and damages the streaming community as a whole. Twitch support writes: “False viewer growth is not conducive to establishing a career in broadcasting because the ‘viewers’ do not contribute to a healthy, highly engaged community.”
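Twitch has not published how its detection actually works beyond citing “ongoing machine learning technology,” but the core tell described above, lots of “viewers” with no genuine streamer/viewer interaction, can be sketched as a toy heuristic. The function names, thresholds, and figures below are hypothetical, invented purely for illustration:

```python
def chat_engagement_ratio(concurrent_viewers: int, active_chatters: int) -> float:
    """Fraction of concurrent viewers who have sent at least one chat message."""
    if concurrent_viewers == 0:
        return 0.0
    return active_chatters / concurrent_viewers


def looks_view_botted(concurrent_viewers: int, active_chatters: int,
                      min_viewers: int = 500, min_ratio: float = 0.01) -> bool:
    """Flag channels with a large audience but almost no chat activity.

    Real detection would weigh far richer signals (account age, viewing
    patterns, batch-created followers); this single-ratio rule is only
    a sketch of the underlying idea.
    """
    if concurrent_viewers < min_viewers:
        return False  # audience too small to judge reliably
    return chat_engagement_ratio(concurrent_viewers, active_chatters) < min_ratio


# 10,000 "viewers" but only 3 people ever chatting is suspicious...
print(looks_view_botted(10_000, 3))    # True
# ...while 10,000 viewers with 800 chatters looks organic.
print(looks_view_botted(10_000, 800))  # False
```

The point of the sketch is the asymmetry Twitch describes: bots can inflate a raw view count cheaply, but sustained, proportionate chat activity is much harder to fake.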
Fake engagement leading to “artificial inflation of channel statistics” is considered a major Twitch violation and may lead to an indefinite suspension. Coordinated botting, as in the case of Follow 4 Follow, Lurk 4 Lurk, and Host 4 Host, falls under the same category.