Google is also in the process of verifying whether its AdSense platform supplied the ad unit cited in a tweet posted this morning by Lewis:
"Dear @SkyNews and @guardian while I appreciate your coverage of my suing Facebook over defamation for fake ads. Your own advertising algorithms have then published similar fake ads about me on the pages with those news stories. Please rectify this immediately"
His tweet was apparently prompted by Twitter followers mentioning they were being served fake ads that use his face and name to endorse suspicious financial products on the two platforms.
Neither the Guardian nor Sky Media was able to respond in time for this article.
However, the ad unit next to the Guardian article (as seen in the tweet below) is one supplied by Google's AdSense.
A Google spokesman added: "We have a set of guidelines which determine the ads that can and cannot run on our platform. When we become aware of ads that breach these guidelines, we quickly take the appropriate action."
Lewis is suing Facebook for failing to prevent, or swiftly remove, false advertising that is ruining his reputation and has lured vulnerable people into falling for costly scams.
When contacted for comment, a Facebook spokesman only issued the following statement:
"We do not allow ads which are misleading or false on Facebook and have explained to Martin Lewis that he should report any ads that infringe his rights and they will be removed. We are in direct contact with his team, offering to help and promptly investigating their requests, and only last week confirmed that several ads and accounts that violated our Advertising Policies had been taken down."
Facebook's policies state clearly that ads posted on its platform must not contain deceptive, false or misleading content, including deceptive claims, offers, or business practices.
The company would not comment on Lewis's accusations that it had been slow to respond to his complaints about ads that violate these policies.
Nice in theory, harder in practice
Lewis's suggestion that Facebook notifies well-known people each time their image is used in an advert is a "nice theory but in practice, much more difficult", Conor Lynch, senior media manager at We Are Social, commented.
"It relies on defining 'well known', examining context (people have less issue with their face on content with positive associations), that person's team being set up to handle these approvals, and a host of other variables," he added.
It is "logistically impossible" for Facebook to monitor the millions of sponsored content posts that are added every day to its platform, Dan Gilbert, founder and chief executive of BrainLabs, agreed.
"Machine learning is nowhere near advanced enough to be able to detect fake news, nor could Facebook hire enough human staff to thoroughly monitor its platform," he said.
Despite advances in AI-powered image recognition, the algorithms are not yet sophisticated enough to cope with a situation like this, he suggested.
"Facebook has an automated ad approval process, but unless there's an obvious red flag like nudity or swearing, content tends to pass. It will investigate issues with content if flagged by users – though Martin Lewis's complaint is that this process means relying on the individual to monitor and report," Lynch said.
It's not fair or reasonable to accuse Facebook of facilitating scams, Gilbert added. "All user-generated content platforms are liable to inappropriate and malicious actors; advertisers and users need to be realistic about the pros and cons of social media, and apply vigilance."