For the past week or so, reports have circulated on YouTube gaming channels, including this one from GhillieMaster, claiming that a third-party tool is being used by unnamed groups to get Xbox accounts temporarily or permanently banned.
The video claims that a person reportedly hit with this exploit received no explanation for the ban, whereas Microsoft would normally state a reason why an account was banned. The same video also includes a screenshot that reportedly shows the third-party tool being used.
However, The Verge contacted Microsoft about these reports and received a response from Kim Kunes, the general manager of trust and safety for the Xbox division. In short, Kunes says that whatever this tool is, it cannot cause bans on Xbox accounts by itself:
Third party apps or tools cannot impact player enforcements, and no volume of inaccurate reports result in an enforcement. . . Only reports that have been reviewed by the Xbox Safety Team and determined to be accurate and in violation of our Community Standards result in an enforcement action such as suspension or an account ban.
Microsoft last posted about its efforts to enforce its Xbox content moderation rules in May 2023, covering the period from July to December 2022. At that time, it said it made 10.19 million total enforcements during the second half of 2022, up from 7.31 million enforcements in the first half of the year.
Microsoft uses both human reviewers and automated tools to make its enforcement decisions. In that report, it said tools like Community Sift “work across text, video and images catching offensive content within milliseconds.”
Microsoft said that 7.51 million of the proactive enforcements made during July-December 2022 were against inauthentic accounts, including ones generated by bots.