Facebook has debuted a “web-enabled simulation” in which a population of bots it has created based on real users can duke it out – supposedly to help the platform deal with bad actors. But it’s not totally isolated from reality.
The social media behemoth’s new playpen for malevolent bots and their simulated victims is described in a company paper released on Wednesday under the ultra-bland, ‘please, don’t read this, ordinary humans’ title of “WES: Agent-based User Interaction Simulation on Real Infrastructure.”
While the writers have cloaked their bots’ activities in several layers of academic language, the report reveals their creations are interacting through the real-life Facebook platform itself, not a walled-off simulation. The bots are set up to model various “negative” behaviors – scamming, phishing, posting ‘wrongthink’ – that Facebook wants to curtail, and the simulation lets Facebook tweak the control mechanisms it uses to suppress those behaviors.
Even though the bots technically operate on real-life Facebook, with only the thinnest veil of programming separating them from real-world users, the researchers seem convinced enough of their ability to keep fantasy and reality separate that they feel comfortable hinting in the paper at new and different ways of invading Facebook users’ privacy.