Google refused to share the percentage these bad ads represent against the total volume of ads served on its platform last year. However, a spokesman did say that these ads are a "small minority of our overall ads".
Ads that Google took action against last year included:
- Scraping: Google blocked more than 12,000 websites for "scraping", duplicating and copying content from other sites, up from 10,000 in 2016.
- Tabloid cloaking: more than 7,000 AdWords accounts were suspended for pretending to be news, up from 1,400 in 2016.
- Malicious activity: more than 130 million ads were removed last year for trying to abuse Google's ad network through malicious activity or by attempting to trick and circumvent the company's ad review processes.
- Malware: 79 million ads were blocked on Google's network for automatically sending people to malware-laden sites, and 400,000 of these unsafe sites were removed.
- Trick to click: Google blocked 66 million "trick to click" ads and 48 million ads that were attempting to get users to install unwanted software.
Google has also taken steps to remove the economic incentives for sites to create and spread fraudulent content online. Last year, it removed 320,000 bad publishers from its ad network and blocked nearly 90,000 websites and 700,000 mobile apps for policy violations.
In May last year, Google also rolled out tech on AdSense that would let it block specific pages that violated its rules but not necessarily the entire website. The company said that this new technology has removed, on average, more than 2 million URLs each month.
Besides stepping up the use of technology to tackle advertiser and publisher violations, Google has also introduced 28 new advertiser policies and 20 new publisher policies to combat new threats.
These include an expanded policy covering forms of discrimination and intolerance beyond hate speech protections; under this policy, Google removed ads from 8,700 pages.
Google has been under pressure from its users, governments and the ad industry to clean up its ecosystem, both by stopping extremist content and by removing content that appears to exploit children.
Yesterday, the Home Affairs Committee questioned YouTube about the continued availability on the platform of material related to the neo-Nazi organisation National Action, despite several undertakings given by Google senior management to the Committee over the past year that the content would be removed.
The European Commission, too, has given internet companies including Google, Facebook and Twitter two months to demonstrate progress in taking down extremist content, or face official legislation.
For 2018, Google plans to add new policies to address ads for unregulated and speculative financial products such as binary options, cryptocurrency, foreign exchange markets and contracts for difference.