Itch.io, an independent game marketplace, says its domain was taken down over a bogus AI-driven phishing complaint.
Itch.io went offline unexpectedly early Monday, an outage the platform attributed to a misleading phishing complaint filed on behalf of Funko, the maker of collectible pop culture figures. The Verge reported on the incident earlier.
Itch.io is a popular platform for independent game creators, letting developers sell their work, such as games and zines, without surrendering a large share of their earnings. Developers set their own revenue split with the platform and can even waive Itch.io's commission entirely, a significant departure from platforms like Steam, which typically takes a 30% cut.
In a tweet, Itch.io said its domain was made inaccessible because Funko used a "spammy 'AI-assisted' brand protection tool" from a company called BrandShield. An Itch.io representative later clarified that Funko was likely not the primary offender; instead, they pointed the finger at BrandShield for using automated technology to flag unauthorized uses of the Funko trademark. Rather than following the standard DMCA process, which would have let Itch.io assess and remove the offending content without a site-wide outage, BrandShield took a more aggressive tack, filing "fraud and phishing" notifications with both Itch.io's domain registrar and its hosting provider.
The site was eventually restored after Itch.io contacted its domain registrar. The website itself apparently never went offline; rather, the registrar had temporarily disabled the domain.
BrandShield released a statement emphasizing its role as a trusted partner for numerous brands. According to the company, its AI-driven technology identified a potential risk from an Itch.io subdomain, triggering the temporary takedown; it stressed that the decision to act was made by the service providers, not by BrandShield itself.
The misuse of automated moderation systems has become a common problem online, with malicious users filing false copyright claims on sites like YouTube to harass creators over trivial grudges or disliked content. YouTube has taken legal action against such abusers in recent years; in one case, it identified an individual in the Minecraft community who had filed fraudulent DMCA claims against other creators in order to extort them.
Faced with a flood of requests, companies like YouTube have turned to automated systems to manage them. YouTube's Content ID, for instance, automatically identifies content uploaded without permission and gives copyright holders the option to monetize other users' videos containing their material rather than having the content taken down. These systems are not flawless, however, and platforms may remove reported content without thorough review in order to preserve their safe harbor protections under the DMCA, which shield them from copyright liability if they remove reported content promptly.
As platform management comes to rely more heavily on artificial intelligence, the Itch.io incident highlights the dangers of over-reliance on AI-driven brand protection tools, whose false flags can knock an entire site offline.