Meta admits wrongly suspending Facebook Groups but denies wider problem

Graham Fraser and Imran Rahman-Jones, Technology reporters

Meta says it is "fixing" a problem which has led to Facebook Groups being wrongly suspended - but denies there is a wider problem with its platforms.

In online forums, Group administrators say they have received automated messages stating, incorrectly, that they had violated policies and that their Groups had therefore been deleted.

Some Instagram users have complained of similar problems with their own accounts, with many blaming Meta's artificial intelligence (AI) systems.

Meta has acknowledged a "technical error" with Facebook Groups, but says it has not seen evidence of a significant increase in incorrect enforcement of its rules on its platforms more widely.

One Facebook group, where users share memes about bugs, was told it did not follow standards on "dangerous organizations or individuals", according to a post by its founder. The group, which has more than 680,000 members, was removed but has now been restored.

Another admin, who runs an AI-focused group with 3.5 million members, posted on Reddit to say his group and his own account had been suspended for a few hours, with Meta telling him later: "Our technology made a mistake suspending your group."

Thousands of signatures

It comes as Meta faces questions from thousands of people over the mass banning or suspension of accounts on Facebook and Instagram.

A petition entitled "Meta wrongfully disabling accounts with no human customer support" had gathered almost 22,000 signatures on change.org at the time of writing.

Meanwhile, a Reddit thread dedicated to the issue features many people sharing their stories of being banned in recent months. Some have posted about losing access to pages with significant sentimental value, while others say they have lost accounts linked to their businesses.

There are even claims that users have been banned after being accused by Meta of breaching its policies on child sexual exploitation.

Users have blamed Meta's AI moderation tools, adding that it is almost impossible to speak to a person about their accounts after they have been suspended or banned.

BBC News has not independently verified those claims.

In a statement, Meta said: "We take action on accounts that violate our policies, and people can appeal if they think we've made a mistake."

It said it used a combination of people and technology to find and remove accounts that broke its rules, and was not aware of a spike in erroneous account suspensions.

Instagram states on its website that AI is "central to our content review process". It says AI can detect and remove content that goes against its community standards before anyone reports it, while some content is sent to human reviewers.

Meta adds that accounts may be disabled after one severe violation, such as posting child sexual exploitation content.