Weaponizing Ads: How Governments Use Google Ads and Facebook Ads to Wage Propaganda Wars

Eslam Elsewedy

In late 2024, the head of the UN’s Gaza aid agency made a disturbing discovery: when people searched for his organization on Google, the top result wasn’t the agency’s own site. It was a paid ad placed by the Israeli government. The ad mimicked a UN website but actually linked to an Israeli government page accusing the UN Relief and Works Agency (UNRWA) of supporting terrorists (wired.com). “The spread of misinformation & disinformation continues to be used as a weapon in the war in Gaza,” UNRWA Commissioner-General Philippe Lazzarini warned, calling for investigations and stricter regulation of online propaganda (aljazeera.com).

His alarm highlights a troubling new reality: digital advertising platforms have become battlefields for influence, where governments and political groups pay to sway public opinion in wars and crises. Traditional propaganda (radio broadcasts, posters, state TV) has gone high-tech. Platforms like Google Ads and Facebook (Meta) Ads allow parties to target specific audiences with tailored messages at massive scale. In theory, these companies have policies against hate speech and blatant lies. In practice, recent case studies show that sophisticated misinformation campaigns can exploit loopholes and lax enforcement, reaching millions of people with government-funded narratives. From the Israel–Palestine conflict to Russian and domestic political meddling, paid ads are being weaponized to promote war efforts, demonize opponents, and even undermine institutions like the UN. This article examines how it’s happening, why the platforms permit it, and what ethical and policy questions arise.

Digital Propaganda via Paid Ads: A New Front in Information Warfare

Paid advertising on Google and Facebook has become a potent tool for political persuasion and manipulation. Unlike organic social media posts (which rely on shares or algorithms), ads can guarantee visibility: if you pay, you reach your target. And the targeting can be extremely granular. Google Ads lets advertisers bid on search keywords or place banner and video ads on websites and YouTube, often filtered by geography or audience interests (see the auction sketch below). Facebook/Meta’s ad system enables micro-targeting by demographics, location, and user interests, while requiring a “paid for by” disclaimer on political ads for transparency. In theory, this gives legitimate political campaigns a way to reach supporters, but it equally gives propagandists a direct channel to the eyeballs of a chosen populace. Researchers note that this capacity can be abused by partisan or state actors to “manipulate or distract citizens with misinformation and government propaganda,” posing serious challenges to democracy (academic.oup.com).

A notorious early example was Russia’s Internet Research Agency, which in 2016 created hundreds of fake Facebook accounts and purchased at least $100,000 worth of divisive ads to influence the U.S. election (abcnews.go.com). Many of those ads didn’t mention candidates directly; instead they amplified polarizing messages on issues like immigration and race to inflame social tensions. At the time, Facebook admitted that most of these propaganda ads “did not violate any company policies or laws,” underscoring how unprepared the platform’s rules were (abcnews.go.com).
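To see why paid placement is such a reliable channel, it helps to look at the mechanics of an ad auction. The sketch below is a deliberately simplified model of a ranking auction with quality scores, in the spirit of (but not identical to) how search ads are ranked; the advertiser names, numbers, and scoring formula are illustrative assumptions, not platform internals.

```python
from dataclasses import dataclass

@dataclass
class Bid:
    advertiser: str
    max_cpc: float        # maximum cost-per-click the advertiser will pay
    quality_score: float  # platform's relevance estimate (1-10)

def rank_ads(bids: list[Bid]) -> list[Bid]:
    """Rank ads by ad_rank = bid * quality score (simplified model).

    Real auctions add extension effects, thresholds, and context
    signals; this keeps only the core intuition that a deep-pocketed
    bidder can reliably buy the top slot.
    """
    return sorted(bids, key=lambda b: b.max_cpc * b.quality_score, reverse=True)

# Hypothetical head-to-head on a charity-related keyword:
auction = rank_ads([
    Bid("nonprofit", max_cpc=1.50, quality_score=8.0),   # ad_rank 12.0
    Bid("government", max_cpc=4.00, quality_score=6.0),  # ad_rank 24.0
])
for slot, bid in enumerate(auction, start=1):
    print(slot, bid.advertiser)  # government takes slot 1 despite lower relevance
```

With that pay-to-win mechanism in mind, the significance of the 2016 episode above, and of the bidding war against UNRWA described next, is easier to see.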
The incident sparked global awareness that paid media ads could be used as a propaganda weapon, and led to new transparency measures like Facebook’s Ads Library. Yet increased transparency hasn’t prevented the tactic from evolving. Recent conflicts show governments openly turning to ad campaigns as part of their information warfare strategy. Paid ads can be launched rapidly, scaled globally, and tailored to undermine an opponent or shape public perception of a war. Crucially, they also allow a state to influence foreign publics beyond its own borders, often skirting the line of platform policies and international norms. The sections below explore a timely case study and the broader implications.

Case Study: Israel’s Paid Propaganda Campaign in the Gaza War

One of the clearest illustrations of war propaganda via paid ads is the Israeli government’s online advertising blitz during the 2023–2025 Gaza war. Israel has long engaged in hasbara (Hebrew for “explanation”), a term for state public relations efforts or propaganda (smex.org). But since the war in Gaza, Israel’s use of digital ads has intensified to unprecedented levels (smex.org).

Targeting the UN: Google Ads to Discredit Humanitarians

In mid-January 2024, staff at UNRWA USA (a U.S.-based fundraising affiliate of the UN agency) noticed something strange: Google searches for “UNRWA” were yielding an ad that looked like it was from UNRWA but actually led to an Israeli government site (wired.com). Upon investigation, they discovered a months-long Google Ads campaign by the Israeli Government Advertising Agency to discredit UNRWA (wired.com). The Israeli ads, which were clearly labeled as such in Google’s transparency data, appeared on searches for over 300 related keywords, from “UNRWA” to “Gaza aid,” effectively hijacking traffic from people seeking the UN agency (wired.com). The content of the ads and the landing pages was unmistakably propagandistic: the Israeli site alleged that UNRWA was “inseparable from Hamas” and even employed terrorists (wired.com). One ad bluntly asked, “Paychecks for terrorists or humanitarian aid?”, suggesting money given to UNRWA would fund armed militants (abc.net.au).

The aim of this campaign was clear: to cut off support and donations to the UN’s relief agency in Gaza. It came at a critical time, as UNRWA was providing life-saving food, water, and medical care to Palestinians under siege. UNRWA’s chief, Philippe Lazzarini, blasted Israel’s actions as a “deliberate disinformation campaign” to “dismantle the agency,” warning that smearing a humanitarian organization not only hurts its reputation but “puts the lives of our colleagues on the frontline at serious risk” (abc.net.au). In essence, Israeli authorities were using Google’s advertising system to undermine a UN institution in the middle of a humanitarian crisis.

Google’s response was relatively hands-off. When UNRWA representatives appealed to Google to stop what they saw as a dangerous misinformation campaign, the company did not immediately take down the ads (wired.com). A Google spokesperson said that any government may run ads on Google as long as it adheres to the company’s policies, and that Google enforces those rules “consistently and without bias” (wired.com). In other words, because the Israeli ads did not overtly violate Google’s ad policies, they were allowed to run.
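Auditing this kind of keyword hijacking is conceptually simple, even though obtaining search-results data in practice requires a licensed SERP provider or manual checks in the Ads Transparency Center. In the sketch below, the `fetch_top_ad()` helper and its sample data are hypothetical stand-ins, not a real API; only the comparison logic is the point.

```python
# A minimal keyword-hijack audit sketch. SAMPLE_TOP_ADS and fetch_top_ad()
# are hypothetical stand-ins: real data would come from a licensed SERP
# provider or Google's Ads Transparency Center.

SAMPLE_TOP_ADS = {
    "unrwa": "gov.il",           # hypothetical observation
    "unrwa donate": "gov.il",    # hypothetical observation
    "gaza aid": "unrwausa.org",  # hypothetical observation
}

OFFICIAL_DOMAINS = {"unrwa.org", "unrwausa.org"}

def fetch_top_ad(keyword: str) -> str | None:
    """Return the advertiser domain of the top paid result (stubbed)."""
    return SAMPLE_TOP_ADS.get(keyword)

def audit(keywords: list[str]) -> list[str]:
    """Flag keywords whose top ad points somewhere other than the brand."""
    flagged = []
    for kw in keywords:
        domain = fetch_top_ad(kw)
        if domain and domain not in OFFICIAL_DOMAINS:
            flagged.append(f"{kw!r} -> {domain}")
    return flagged

print(audit(["unrwa", "unrwa donate", "gaza aid"]))
# ["'unrwa' -> gov.il", "'unrwa donate' -> gov.il"]
```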
Notably, Google’s ad policies do prohibit misrepresentation, but they do not impose a blanket ban on misinformation unless it relates to specific sensitive areas like election integrity (gomixte.com). As Wired reported, Google generally permits questionable claims in ads “unless it undermines participation or trust in an electoral process” (gomixte.com). This loophole meant that propaganda undermining a humanitarian agency, while arguably unethical, wasn’t against the rules.

The result: Israel’s anti-UNRWA ads often outcompeted the agency’s own Google ads. From May through July 2024, in head-to-head bidding, the Israeli ads won the top slot 44% of the time (versus 34% for UNRWA USA’s ads) (wired.com). UNRWA’s team had to spend tens of thousands of donor dollars trying to outbid Israel for visibility (wired.com). This “insidious” campaign, as UNRWA called it, exposed countless Americans and others to one-sided allegations just as they were searching for facts about Gaza relief (wired.com). “I want the public to know what’s happening,” said UNRWA USA director Mara Kronenfeld, “especially at a time when civilian lives are under attack in Gaza” (wired.com).

By late 2024, news investigations around the world had caught on. ABC News in Australia found that those same Israeli ads linking UNRWA to Hamas were appearing on major Australian news sites, served via Google’s display network (abc.net.au). The ads featured a masked militant wearing both the Hamas emblem and an UNRWA headband, visually equating the UN agency with a terror group (abc.net.au). Captions like “UNRWA has alternatives, it must be replaced” were shown next to news articles (abc.net.au). ABC confirmed at least eight such ad variants were targeted specifically at Australian audiences (in English) and noted the campaign ran in multiple languages, including German, Italian, French, and Spanish (abc.net.au). This truly was a global propaganda ad campaign, orchestrated by a government via Google’s platforms. UNRWA officials, upon learning of the global ads, reiterated that these tactics were part of “a wider disinformation campaign” by Israel to cripple the agency (abc.net.au). Despite criticism from UN allies (Australia’s Foreign Minister called Israel’s actions “reprehensible” and urged it to stop undermining UNRWA (abc.net.au)), the Israeli government showed no signs of relenting. It even escalated offline, banning UNRWA operations in Israel, as the online ad offensive continued (abc.net.au).

A coordinated ad push against the Hind Rajab Foundation (HRF)

Alongside the UNRWA campaign, the Israeli Government Advertising Agency has also targeted the Hind Rajab Foundation (HRF), an EU-based human-rights nonprofit founded in 2024 to pursue accountability for alleged war crimes in Gaza, named in honor of five-year-old Hind Rajab, whose death became emblematic of the conflict (Hind Rajab Foundation). In Google’s Ads Transparency Center, the agency’s account shows creatives such as “Hind Rajab Foundation — HRF’s disturbing reality” that click through to a government microsite titled “Unmasking the Hind Rajab Foundation,” which portrays HRF as a pseudo-legal front with “extremist” ties (Ads Transparency Center, 2025; Government of Israel). Reporting on Israel’s recent $45 million placement with Google further notes that several ads explicitly link to this “Unmasking” report, indicating a coordinated effort to discredit HRF across search and display (Drop Site News).
For context, HRF’s public materials frame the organization as a legal accountability initiative, while the story of Hind Rajab has gained global visibility through Kaouther Ben Hania’s Venice-premiering film The Voice of Hind Rajab, backed by high-profile producers including Brad Pitt, Joaquin Phoenix, Rooney Mara, and Jonathan Glazer, which has amplified awareness of the case well beyond activist circles (Reuters).

Flooding Social Media with War Narratives

The attack on UNRWA was just one facet of Israel’s broader digital PR war. Israeli agencies and allied groups pumped out hundreds of ads across Google/YouTube, Facebook/Instagram, and even children’s gaming apps to influence public opinion about the Gaza conflict. David Saranga, head of the Israeli Foreign Ministry’s digital bureau, confirmed that “the footage is part of a larger advocacy drive” in which the ministry spent $15 million on internet ads in just the first few weeks after October 7, 2023 (smex.org). Those ads often contained graphic and emotional imagery, for example violent scenes and frightened Israeli families, and even appeared as pop-ups in kids’ online games, where they left children “shocked and disturbed” (smex.org). Reuters journalists observed some of these graphic video ads playing in European video games used by children, raising serious questions about appropriateness and consent (business-humanrights.org).

On Meta’s platforms (Facebook and Instagram), pro-Israel advertising also spiked dramatically. One effort, led by a group calling itself “Facts for Peace,” spent over $370,000 on Facebook/Instagram ads in a single month (November 2023) to push viral videos framing all support for Palestine as support for Hamas (business-humanrights.org). These ads, which amassed over 21 million views, were crafted to equate the Palestinian cause with barbaric violence. Despite the misleading and inflammatory nature of such content, it spread widely: it was even promoted by right-wing influencers like Ben Shapiro and shared by official Israeli government social media accounts (e.g. the Israeli embassy in Chile) (business-humanrights.org). Meta’s Ad Library showed 213 ads from Facts for Peace, of which only 3 were taken down for policy violations (business-humanrights.org). Meta told investigators that the campaign did not technically break its transparency rules, since the ads carried a “Paid for by Facts for Peace” disclaimer and the page was authorized for political advertising. However, nowhere did the group disclose its true funding sources or organizers. (Journalists later linked it to a U.S. billionaire funding pro-Israel messaging (business-humanrights.org).) This case shows how easy it is for a new, opaque group to launch a massive political ad campaign and reach millions before anyone can fully scrutinize it.

Israeli government entities themselves also ran extensive campaigns on social media. The Israeli Ministry of Foreign Affairs launched at least 75 distinct ads on YouTube alone in the first months of the war (smex.org). Some of these ads featured what observers called incitement to violence. In one video ad, a menacing message declared: “Israel will take every measure necessary to protect our citizens against these barbaric terrorists,” which borders on a call to collective violence (smex.org). YouTube’s political ads policy forbids content that “encourages others to commit violent acts,” yet this ad apparently slipped through (smex.org).
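Tallies like the 213 Facts for Peace ads above come from Meta’s Ad Library, which exposes a public API for political and issue ads. The sketch below shows roughly how a researcher might pull such records with Python’s requests library; the endpoint and parameter names follow Meta’s documented Ad Library API, but the API version, token setup, and search term here are assumptions you would need to adapt.

```python
import requests

# Meta Ad Library API sketch: list political/issue ads matching a search
# term. Requires an access token from a verified Meta developer account.
ACCESS_TOKEN = "YOUR_TOKEN_HERE"  # placeholder
URL = "https://graph.facebook.com/v19.0/ads_archive"  # version is an assumption

params = {
    "access_token": ACCESS_TOKEN,
    "ad_type": "POLITICAL_AND_ISSUE_ADS",
    "ad_reached_countries": '["US"]',
    "search_terms": "Facts for Peace",  # illustrative query
    "fields": "page_name,bylines,ad_delivery_start_time,spend,impressions",
    "limit": 50,
}

resp = requests.get(URL, params=params, timeout=30)
resp.raise_for_status()
for ad in resp.json().get("data", []):
    # 'spend' and 'impressions' come back as broad ranges, not exact figures
    print(ad.get("page_name"), ad.get("bylines"), ad.get("spend"))
```

Even with the API, spend and impressions are disclosed only as ranges, one of the transparency limitations researchers criticize later in this piece.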
On X (formerly Twitter), Israeli government and military accounts also placed promoted posts with graphic images (e.g. charred buildings and victims), urging support for their military actions (smex.org). Such ads would seem to violate X’s pre-existing ban on state-affiliated media buying ads (a policy originally aimed at propaganda from outlets like Russia’s RT). Nonetheless, these promotions ran, and some critics questioned whether Elon Musk’s open support for Israel influenced X’s lax enforcement (smex.org).

In mid-2025, reports revealed that Israel had even formalized a multi-million-dollar contract with Google to sustain its global propaganda offensive. According to investigative reporting by Drop Site News (cited by outlets like TRT World), the Israeli Prime Minister’s Office signed a $45 million, six-month deal with Google in June 2025 to run a worldwide advertising blitz downplaying the humanitarian crisis in Gaza (trtworld.com). The campaign kicked off just after Israeli authorities imposed a siege cutting off food, fuel, and medicine to Gaza in March 2025, as officials worried about “the public relations fallout.” Leaked documents described Google as a “key entity” in Netanyahu’s PR strategy (trtworld.com). Sure enough, YouTube and display ads proclaiming “there is food in Gaza. Any other claim is a lie” soon flooded the internet, with one such video, produced by Israel’s Foreign Ministry, racking up over 6 million views via paid promotion (trtworld.com). In effect, Google was being paid directly to broadcast the message that reports of starvation in Gaza were false, despite widespread documentation of severe hunger by the UN and NGOs. Israeli officials openly characterized these efforts as hasbara in internal communications (trtworld.com). Alongside Google, Israel spent another $3 million on X (Twitter) ads and $2.1 million on other ad networks to bolster its narrative (trtworld.com).

The impact of Israel’s ad campaign is hard to quantify, but it undoubtedly reached broad swathes of the global public with the government’s perspective. By framing its military operations as justified self-defense and its critics (even humanitarian agencies) as terrorist sympathizers, Israel’s paid ads sought to shore up international support and neutralize opposition. Given that Israel’s security and billions in military aid depend on Western public opinion (smex.org), this digital influence strategy was a logical extension of the war itself. But it came at the cost of injecting misinformation and extreme bias into the information ecosystem, potentially skewing public perceptions and policy debates. As we’ll see next, this case also exposes worrying gaps in platform policies and ethical oversight.

Policy Gaps, Double Standards, and Ethical Dilemmas

The use of paid ads for propaganda raises a blunt question: are Google and Meta actually enforcing their own rules when it comes to wartime disinformation? The evidence from the above case suggests significant gaps and inconsistencies. Despite formal policies against harmful or misleading content, both companies’ systems enabled, and profited from, campaigns that arguably violate the spirit, if not the letter, of those rules.
Google’s laissez-faire approach to misinformation: Google’s advertising policies forbid certain categories of content (hate speech, explicit incitement to violence, etc.) and ban misrepresenting who you are. But they do not outright ban false or misleading claims in ads except in narrow contexts like election integrity or COVID-19 information (gomixte.com). This is why blatant propaganda, e.g. accusing a UN agency of abetting terror with scant evidence, can pass muster on Google Ads. Google relied on users or affected parties to report problematic ads, and stated it would “take swift action” if actual policy violations were found (wired.com). In practice, though, Google did not proactively vet the truth of Israeli government claims. The company was, after all, in a business relationship with Israel (not to mention the formal $45M deal) and may have been reluctant to police a powerful client’s messaging beyond basic checks. This hands-off stance highlights a moral gray area: allowing a paying customer to spread potentially dangerous misinformation because it doesn’t fit neatly into a prohibited category. It suggests that platform policy enforcement can be very literal and reactive, leaving ethical judgment by the wayside.
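To make the “literal and reactive” point concrete, here is a toy review filter of the kind just described: it checks ads only against enumerated prohibited categories, so a false but non-prohibited claim sails through. The category names and trigger phrases are invented for illustration; real systems use ML classifiers and human review, not substring checks.

```python
# Toy ad-review filter showing category-based (not truth-based) moderation.
# Categories and phrases are invented; production systems are far more complex.

PROHIBITED_PHRASES = {
    "violent_incitement": ["attack them", "burn it down"],
    "election_misinfo": ["vote by text", "polls moved to wednesday"],
    # hate-speech phrase lists omitted here for obvious reasons
}

def review(ad_text: str) -> str:
    text = ad_text.lower()
    for category, phrases in PROHIBITED_PHRASES.items():
        if any(phrase in text for phrase in phrases):
            return f"rejected ({category})"
    # Note what is missing: no general truthfulness check. A false claim
    # about a humanitarian agency matches no enumerated category.
    return "approved"

print(review("Paychecks for terrorists or humanitarian aid?"))  # -> approved
```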
Meta’s inconsistent enforcement and transparency issues: Facebook/Meta, on paper, has more expansive rules for political ads. It requires identity verification for political advertisers and mandates a “Paid for by [Name]” disclosure on each ad. Certain types of violent or hateful content are disallowed even in ads. In the Israel–Gaza context, however, these rules were enforced unevenly. Meta’s own Ads Library data shows a bias: when ads violated policies (e.g. contained hate speech or graphic violence), pro-Palestinian ads were taken down faster and more frequently than pro-Israeli ads (smex.org). SMEX’s analysis of thousands of ads found that Israeli war-cheerleading ads often remained live longer despite breaking rules, whereas ads calling for Gaza humanitarian aid were removed with stricter urgency (smex.org). For example, the “Facts for Peace” videos that implied all Palestinian supporters endorse terror arguably constituted hateful generalization, yet most stayed up until their paid run ended. Meanwhile, at least one civil society test in late 2023 found that Facebook approved ads containing explicit hate speech calling for violence against Palestinians, indicating failures in the automated moderation system (business-humanrights.org). Meta responded that it prioritizes transparency, pointing to the existence of the Ad Library, but researchers found the library frustratingly opaque. Some ads that clearly ran were missing from public search, and crucial data, like why an ad was removed or who exactly was targeted, were absent (smex.org). Meta also shut down useful tools like CrowdTangle (which helped monitor content virality) in 2024, hampering independent oversight (smex.org). All of this suggests that Meta’s professed neutrality masks a deeply flawed system, where enforcement can be influenced by political sensitivities or errors, and where the company’s profit motive in selling ads may conflict with taking decisive action against harmful content.

Double standards for different conflicts: A striking contrast can be seen in how platforms treated Russian state propaganda versus Israeli state propaganda. During Russia’s 2022 invasion of Ukraine, Western tech firms took an unusually hard line against Russia’s online influence operations. Meta banned Russian state media outlets from running ads or monetizing content on its platform and demoted their posts (smex.org). It also aggressively labeled or removed Russian disinformation about the war, under heavy pressure from governments and public opinion. Google similarly demonetized Russian state-affiliated channels (like RT) and limited their reach. These steps were lauded as Big Tech taking a stand against wartime disinformation. Yet when it came to Israel engaging in similar behavior, the response was far more lenient. No blanket bans or demonetization were applied to Israeli state entities pushing propaganda, even though some of their content arguably violated the same principles (for instance, denying documented human suffering in Gaza, or broadly using dehumanizing language about “barbaric terrorists”). Observers have pointed out this double standard, noting that platforms seemed willing to bend rules or look the other way for a U.S.-aligned government (smex.org).
This inconsistency not only undermines the credibility of platform policies but also raises geopolitical questions: are ethical standards being applied universally, or only when convenient?

Violations of international norms: Beyond platform rules, there’s an argument that some of these ad campaigns tread on international ethical standards or even laws. For example, deliberately spreading false information to obstruct humanitarian aid (as Israel’s anti-UNRWA ads aimed to do) could be seen as running counter to International Humanitarian Law, which seeks to protect aid efforts and civilians during conflict. Incitement to violence against a population is outlawed under the Genocide Convention and other treaties, and while the social media ads we’ve discussed stop short of explicit incitement, they foment hatred and misunderstanding that can fuel violence. The UN Secretary-General and other officials have repeatedly condemned the weaponization of disinformation in conflict settings, calling it “destructive” and urging tech companies to clamp down (wired.com; aljazeera.com). Allowing a state to pay to propagate one-sided or false narratives undermining a UN agency could be seen as abetting an attack on that international institution’s integrity. These are largely uncharted waters; there’s no clear international law for “information warfare,” but the ethical condemnation is growing. As Lazzarini implored, such campaigns “should stop and be investigated,” and social media firms must do more to combat disinformation and hate speech in war (aljazeera.com).

All these issues point back to the central dilemma: what responsibility do the tech giants have when their advertising tools are used to spread propaganda or inflame conflicts? If they act as neutral carriers, they risk facilitating harm; if they intervene, they become arbiters of truth in explosive political situations. The next section looks at how Google, Meta, and others have responded, or failed to respond, and what more could be done.

The Responsibility of Tech Companies in Wartime Propaganda
Major tech companies often insist they are platforms, not publishers: they provide the space, but aren’t responsible for every message that advertisers choose to push. However, the extreme examples we’ve explored put that claim to the test. When a company is paid millions to disseminate content that may be false or harmful, can it really wash its hands of accountability? Here are some considerations and responses from the companies and experts:

Official stances and defenses: Google’s official line, as noted, is that it applies its ad policies equally to all, including governments, and will remove ads that violate those policies (wired.com). Implicit in that statement is that Google does not see itself as the arbiter of factual truth in ads; unless a lie crosses certain predefined lines, Google will host it for a paying client. Meta, for its part, touts the transparency of its system. A Meta spokesperson, responding to concerns about the “Facts for Peace” campaign, emphasized that the ads were “clearly labeled with a ‘paid by’ disclaimer” and publicly archived in the Ads Library, implying that this level of transparency exceeds that of TV or print political ads (business-humanrights.org). Meta also points out that it has an ad standards enforcement team and that ads violating policies (when caught) are taken down and documented. In practice, as we saw, that enforcement can lag or falter, but the company’s message is that shining light on ads is the solution, rather than heavy-handed censorship.

Profit vs. principle: Critics argue that a big reason these platforms struggle to self-regulate is that they make money from every ad impression. When conflict-related content goes viral, ad spending surges. A joint analysis by CalMatters and The Markup found that after war broke out in Gaza on October 7, 2023, Meta saw a major increase in ad revenue related to the conflict (calmatters.org). In October 2023 alone, an estimated $3.1 million was spent on Facebook ads about the Israel–Gaza war, a huge spike compared to previous months (calmatters.org). This includes not just state propaganda but also ads for fundraising, merchandise, and advocacy around the crisis. The point is that war and political violence can become lucrative business for social media companies, something Meta’s own employees have acknowledged internally (Facebook famously admitted that outrage and misinformation drive engagement, which in turn drives ad revenue) (calmatters.org). This profit motive can create a perverse disincentive to crack down on borderline content. While $3 million is a drop in the bucket of Meta’s $100+ billion annual revenue, it’s still revenue. And for Google, a $45 million contract from Israel’s government is significant.
The companies risk accusations of “conflict profiteering” if they appear to take money to propagate one side’s propaganda. This has led to calls for them to refuse or refund ad buys that clearly aim to deceive or stoke conflict, though such moves remain rare.

Calls for stronger policies: Human rights organizations and digital rights groups are urging the platforms to develop special policies for ads in conflict zones or on politically sensitive issues. For instance, SMEX and others have suggested that in the context of an armed conflict, platforms should implement tailored rules to prevent the spread of hate speech and misinformation via ads, given the real-world stakes (smex.org). Concretely, this could mean temporarily banning state-run or state-funded advertising in active conflict situations, or at least subjecting it to human review and fact-checking. It could also mean disallowing ads that target foreign populations with war propaganda, treating it akin to foreign election interference. Another idea floated by experts is a “circuit breaker” approach, sketched below: if a sudden war or crisis erupts, platforms might pause all political and issue-based ads in the affected regions until they can ramp up oversight. (Notably, Meta has recently decided to halt all political, electoral, and social issue ads in the entire EU starting in 2025, in response to new regulations (about.fb.com). This broad-brush ban is a compliance move for Europe, but it shows that turning off political ads is technically feasible when mandated.)
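No platform has published such a mechanism, so the following is purely a hypothetical sketch of what a “circuit breaker” rule could look like in code; the type names, categories, and region codes are all invented for illustration.

```python
from dataclasses import dataclass, field

# Hypothetical "circuit breaker" for political ads in conflict regions.
# All names and rules here are invented; no platform has published such
# a mechanism.

@dataclass
class AdRequest:
    advertiser: str
    category: str            # e.g. "political", "commercial"
    target_regions: set[str]

@dataclass
class CircuitBreaker:
    conflict_regions: set[str] = field(default_factory=set)

    def declare_crisis(self, region: str) -> None:
        """Trip the breaker: pause political ads targeting this region."""
        self.conflict_regions.add(region)

    def allows(self, ad: AdRequest) -> bool:
        """Reject political ads whose targeting overlaps a crisis region;
        everything else falls through to normal review."""
        if ad.category == "political" and ad.target_regions & self.conflict_regions:
            return False  # held for human review instead of auto-serving
        return True

breaker = CircuitBreaker()
breaker.declare_crisis("GZ")  # illustrative region code
print(breaker.allows(AdRequest("gov-agency", "political", {"GZ", "US"})))  # False
print(breaker.allows(AdRequest("retailer", "commercial", {"GZ"})))         # True
```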
Transparency and researcher access: Many observers argue that if companies won’t ban propaganda ads outright, they should at least empower watchdogs to track and expose them. This means improving their ad transparency tools. Meta’s shutdown of CrowdTangle and the limitations of the Ads Library have drawn heavy criticism (smex.org). Experts like Sam Jeffers of Who Targets Me advocate for more robust disclosures, for example revealing the detailed targeting parameters of political ads (so we know whom governments are trying to influence) and keeping archives of removed ads, including why they were removed (business-humanrights.org; smex.org). There are also calls for independent audits of platform algorithms and ad delivery, to see if certain viewpoints are getting amplified unfairly. Ultimately, greater transparency can help civil society and journalists “police” the propaganda if the platforms themselves are slow to do so.

Regulatory pressure and international standards: Governments and international bodies are starting to weigh in. The European Union’s Digital Services Act (DSA) and upcoming regulations on political advertising put legal requirements on big platforms to prevent misuse. For example, the DSA requires rapid removal of illegal content (which could include illegal hate speech or incitement in ads) and imposes hefty fines for non-compliance. The EU is also pushing for strict transparency on political ads and even contemplating a ban on microtargeting for political messages. These rules, though EU-specific, often end up being adopted globally by platforms for simplicity’s sake. Outside of Europe, the lack of regulation has been evident, which is why UN officials like Lazzarini are directly calling for “more regulations for companies… to combat disinformation and hate speech” online (aljazeera.com). We may see moves in the UN or other international forums to establish norms against certain propaganda techniques.
In the long run, if self-regulation fails, tech companies might face binding rules prohibiting them from accepting money for ads that undermine peace and truth: a challenging rule to draft, but an increasingly pertinent conversation.

Conclusion

The era of governments weaponizing Google and Facebook ads is upon us, raising thorny questions about truth, free speech, and corporate responsibility in the digital age. The case of Israel’s advertising blitz in the Gaza war demonstrates how easily paid platforms can be turned into tools of war propaganda, blasting millions with biased or false narratives at the click of a button. It also shows the real harms at stake: humanitarian aid blocked, democratic institutions undermined, and public discourse polluted. And Israel is not alone; from superpowers to militant groups, many actors have tested the boundaries of online ads to sway hearts and minds.

For the platforms, this is a moment of reckoning. Can Google and Meta continue to say “we’re just the middleman” while cashing checks for propaganda campaigns? Critics argue that neutrality is not neutral when it enables deception and violence. Yet deciding where to draw the line is complex. Mistakes or overreach in moderation could themselves be seen as partisan interference. The tightrope between allowing robust political advocacy and preventing harmful propaganda is a difficult one to walk, especially under global scrutiny.

What is clear is that doing nothing is no longer tenable. Sunlight and accountability are the minimum: users deserve to know who is behind the political ads they see, and to have confidence that egregious lies or incitements won’t be promoted by the world’s most powerful information platforms. Moving forward, it will likely require a mix of solutions, improved self-governance by tech firms, independent oversight, and smart regulation, to ensure that the tools of modern advertising are not abused to foment conflict or erode democracy. As the saying goes, “In war, truth is the first casualty.” In our digital world, we must decide how much we are willing to let paid algorithms hasten that casualty, or whether we can find ways to uphold truth even amid the fog of online war.
Sources:

Google involved in $45M deal with Netanyahu’s office to advertise Israeli hasbara — report — TRT World. https://www.trtworld.com/world/article/538017f344e5

Israel Is Buying Google Ads to Discredit the UN’s Top Gaza Aid Agency — WIRED. https://www.wired.com/story/israel-unrwa-usa-hamas-google-search-ads/

UNRWA head accuses Israel of buying Google ads to block donations to agency — Al Jazeera. https://www.aljazeera.com/news/2024/8/31/unrwa-head-accuses-israel-of-buying-google-ads-to-block-donations-to-agency

Mass Political Information on Social Media: Facebook Ads … — Journal of the European Economic Association. https://academic.oup.com/jeea/article/22/4/1678/7607367

Facebook says it sold $100,000 in ads to fake Russian accounts during presidential election — ABC News. https://abcnews.go.com/Politics/facebook-sold-100000-ads-fake-russian-accounts-presidential/story?id=49667831

Inside the Israeli occupation’s propaganda ad factory — SMEX. https://smex.org/inside-the-israeli-occupations-propaganda-ad-factory/

Google ads linking UNRWA with Hamas appear on Australian news websites as part of a global campaign — ABC News. https://www.abc.net.au/news/2024-12-05/unrwa-hamas-google-ads-published-on-australian-news-sites/104685074

How Google Ads Are Weaponized — Mixte Communications. https://gomixte.com/blog/how-google-ads-are-weaponized/

Palestine/Israel: Viral campaign ads attacking pro-Palestine movement point to concerning gaps in Meta rules; incl. co. comment — Business & Human Rights Resource Centre. https://www.business-humanrights.org/en/latest-news/palestineisrael-viral-campaign-ads-attacking-pro-palestine-movementpoints-to-concerning-gaps-in-meta-rules-incl-co-comment/

When Transparency Fails: Meta’s Political Ad Policy During Israel’s War on Gaza — SMEX. https://smex.org/when-transparency-fails-metas-political-ad-policy-during-israels-war-on-gaza/

How Meta brings in millions off political violence — CalMatters. https://calmatters.org/economy/technology/2024/10/how-meta-brings-in-millions-off-political-violence/

Ending Political, Electoral and Social Issue Advertising in the EU in … — Meta Newsroom. https://about.fb.com/news/2025/07/ending-political-electoral-and-social-issue-advertising-in-the-eu/

Israeli Government Advertising Agency [advertiser page] — Google Ads Transparency Center (2025). https://adstransparency.google.com/advertiser/AR00827556497616535553

Google’s $45 million contract with Netanyahu’s office to launder Gaza famine denial through ads — Drop Site News (June 18, 2025). https://www.dropsitenews.com/p/google-youtube-netanyahu-israel-propaganda-gaza-famine

Venice ovation fuels hopes for Gaza girl film to reach global audience — Reuters (September 4, 2025). https://www.reuters.com/business/media-telecom/venice-ovation-fuels-hopes-gaza-girl-film-reach-global-audience-2025-09-04/