Twitter’s Blue Tick Meltdown – What can brands do to protect themselves?

It is just over three weeks since Elon Musk's purchase of social media giant Twitter for US$44 billion. Users of the site and followers of the story will be aware of various changes made in this time, not all of which have been welcomed. Here, James Corlett and Jack Kimberley explore how businesses can protect themselves from the risks involved with 'Twitter Blue'.

Beyond sacking his legal team and dismissing a large proportion of the moderation teams, one change that has hit the news is the introduction of the 'Twitter Blue' subscription service, allowing users to buy blue-tick verification for a monthly fee of $7.99 or £8.

The ability for users to purchase verified status with Twitter Blue raises concerns about the proliferation of accredited fake accounts and 'bots' on Twitter. Until now, the blue tick has been a useful tool allowing users to identify reliable information on the platform. Twitter Blue has already been exploited to impersonate other accounts, including major brands and current and former political leaders. A fake account impersonating the pharmaceutical giant Eli Lilly, which had become verified via the Twitter Blue programme, tweeted a false announcement, knocking billions off the company's market capitalisation at a stroke.

Only a few days after its launch, the option to join Twitter Blue has been removed and put on hold indefinitely. We now understand that a new, updated version of the platform, 'Twitter 2.0', incorporating a revised 'Twitter Blue', will be rolled out on 29 November. In the meantime, the platform has papered over the cracks with a grey 'official' label. It is hoped that this will stem the advertising exodus the platform has seen in the past month.

This, coupled with Twitter letting go of 50% of its staff since the Musk takeover, has led a large number of advertisers to pause or stop their ad spend on the platform: General Motors, United Airlines and Pfizer have all cut spending until they are sure Twitter remains an appropriate forum for their ads. Others have taken more drastic steps; Balenciaga went so far as to delete its profile from the platform entirely.


Other than taking the drastic step of leaving, what action can brands take? It is difficult to bring about meaningful and quick change to published tweets: screengrabs and the sheer volume of users make this almost impossible, which is why the change in approach to verification has caused so much consternation.

However, there are some tools available to you, provided you act quickly.

The first is to follow the platform's own reporting channels, including tweeting Twitter's support team directly (although, given the significant reduction in staff, this may be a slow process).

Secondly, you can report the matter to the police. The Fraud Act 2006 sets out a general offence of fraud which can be committed in three distinct ways, the most relevant here being fraud by false representation. By enabling users to purchase verified status, Twitter Blue makes it easier for accounts to defraud fellow platform users by intentionally posing as an established brand or trusted news source and trading on its reputation.

Under the Fraud Act 2006, if an account dishonestly makes a false representation with the intent of making a gain, or of causing loss to another or exposing another to a risk of loss, then (subject to the remainder of the Act) the person behind the account may have committed fraud by false representation. The ability to purchase a blue tick is therefore problematic: it acts as a catalyst, increasing the likelihood of fraud being committed.

We would recommend that you seek legal advice and preserve the evidence available to you; online software can be used to download entire Twitter accounts in a readable format. We would also recommend that you refrain from engaging with the offending account, although this point should be assessed on a case-by-case basis.

Online Safety Bill

Given the speed of these changes, the world is monitoring Twitter closely. In the UK, recent reports suggest that when the Online Safety Bill returns to Parliament its scope will be narrowed so that it no longer addresses abusive conduct which falls short of the criminal threshold (so-called 'lawful but awful' material).

The 'lawful but awful' or 'legal but harmful' provisions have been a sticking point in the Bill for some time. The Department for Digital, Culture, Media and Sport is rumoured to be dividing the 'legal but harmful' duties under the Bill to distinguish between content which is harmful to adults and content which is harmful to children, with the former allegedly being phased out or reduced whilst the latter becomes the Bill's focal point. This is yet to be confirmed, and MPs have declined to comment on recent progress.

Whilst the 'do it for the children' narrative garnered widespread support for the Bill, criticism has been raised over Ofcom's role and government supervision. Free speech advocates and civil liberties organisations have railed against the government's ability, acting through the Secretary of State, to indirectly establish the standards for permissible legal speech and to exercise indirect control over an independent agency so that those standards fit the agenda of government policy.

Given this legislative uncertainty and the potentially volatile developments at Twitter under its new ownership, users might rightly fear that such abusive conduct will go unmonitored on the platform.


[This blog is intended to give general information only and is not intended to apply to specific circumstances. The contents of this blog should not be regarded as legal advice and should not be relied upon as such. Readers are advised to seek specific legal advice.]


With thanks to Jack Kimberley for his contribution

By James Corlett and Jack Kimberley