The bartender's first hoot is so clean and high-pitched it sounds piped in from the ceiling speakers — a single whooo that slices through the post-punk and clinking glassware. My friend Michael jolts on his barstool, beer sloshing dangerously close to the rim. "Did you hear that owl?" he whispers.

"Not an owl," I say, matter-of-factly, wiping condensation from my glass before it drips onto the bar. The bartender, in his mid-30s with slicked-back hair and an immaculate black apron, lets out another whooo. "It's Tourette's," I add quietly into his ear. He takes a long, slow swig of his hefeweizen, processing. I have a close family friend with a similar tic.

We let our conversation wander — plans for later that summer and the Lakers' offseason moves. Ten minutes in, he caves, as he usually does, checking a buzz from his pocket. He opens Instagram and stops, his confusion unmistakable. "What?" I ask, leaning in as the bartender slides us our check. Filling the screen of his iPhone 16 Pro Max, clad in a scuffed clear case, sits a sponsored post: "Tourette Syndrome Awareness Month. Donate Today."

Michael's voice drops into a register I don't usually hear outside ghost stories. "We literally just talked about Tourette's. How did I get this ad already?" I manage a laugh that's only half genuine. "Your phone isn't listening to you." Even as I say it, I know how razor-thin the reassurance sounds. He signs the receipt, pockets the phone and mutters, "So if my phone isn't listening, then what is it?"

It's a question that has reverberated across countless conversations dating back to the start of the smartphone era two decades ago. Today's phones — from Apple's iPhone lineup to Androids from Samsung, Google, Motorola and others — are far more powerful and fully woven into the fabric of our daily existence, ever on standby to assist in all manner of tasks, but also reaching out to us through a steady stream of prompts and alerts. It would be eerie if it weren't so commonplace.
But underlying the well-appreciated utility, there has always been a gnawing sense of unease. It's not just the phones themselves, but the sweeping online-ness of our lives, from our social media postings to our Amazon purchases, from our Snap Maps to our Google searches and ChatGPT queries. Technology knows us intimately, often too close for comfort. When a phone seems to be listening to us randomly, we're not wrong to feel like a boundary has been crossed. That feeling has created a wariness that just won't go away.

"This conspiracy theory has been going on for literally decades," says Serge Egelman, research director of the Usable Security and Privacy group at the Berkeley-affiliated International Computer Science Institute and co-founder of AppCensus, which audits mobile apps for privacy.

Getty Image/ Zooey Liao/ CNET

Your phone isn't listening, for good reasons

Outside the bar, Beverly Boulevard shimmers under a neon glow in the lingering heat of an early-summer Los Angeles night. I tell my friend there are lots of reasons that ad appeared on his iPhone — but none of them involve a microphone listening. The truth is actually very straightforward. Ordinary even. And that's even more unsettling. "It's far more sinister than a hot mic," says Egelman.

There's no credible evidence that your phone runs a secret, always-on microphone to target ads, and there are clear technical and policy reasons why. Independent researchers have gone looking for covert "listening" and found none, including a definitive 2018 Northeastern University study that has yet to be superseded. What they did catch in a handful of cases were screen recordings or image and video uploads to third parties. Creepy, sure, but not a hot mic.

Laws matter, too. The federal Wiretap Act bans intercepting conversations without consent, and many states (like California) require all parties to consent, stacking civil and even criminal liability on covert, continuous capture.
An "always-listening for ads" feature would constantly record non-consenting bystanders and invite massive legal exposure. I know that's not completely reassuring, but that's why it's implausible in practice.

When I run the bar moment by ad-tech veteran Ari Paparo, he doesn't flinch. Paparo helped build the pipes — he founded the Beeswax DSP (acquired by Comcast's FreeWheel) and led product management at AppNexus/DoubleClick — so he's seen exactly how ad targeting really works. "I'm very confident this is not happening. The phone is not actually listening to you," he says. "I would say that 100% of my colleagues in the advertising world agree with me."

I know that's a tough pill to swallow, but he offers the real and almost boring explanation for why it feels uncanny: People are predictable. "The ads are attempting to guess what you're interested in," he says. "It's all statistics." Simple version, for the record: Ads follow your behavior. No listening required.

Here's why you get ads that feel like they're listening

It feels like your phone is listening because the systems that serve you ads thrive on your patterns — they don't need your whispered secrets. Here's the breakdown on how an eerily suspicious advertisement makes it to your phone.

Think of four players working in sequence: platforms, advertisers, identity providers and data brokers. (There are extra middlemen, including publishers, ad exchanges, verification and measurement providers, doing more behind the scenes.)

One: The platform (Instagram, YouTube, Facebook, TikTok). This is home turf. The platform watches what you do inside the app: what you follow, linger on, save, search and tap on. It also knows basic context about you, like your rough location, device model, language and time of day, and it runs the auction that decides which ad you see. The platform's model predicts what you're likely to do next (scroll, tap, buy, donate, etc.)
and ranks ads by a mix of price, predicted response and ad quality. If it thinks you're very likely to act, a lower-bid ad can beat a higher-bid one.

Two: The advertisers (brand, nonprofit, campaign). They bring a goal (clicks, purchases, donations), a budget and the creative (images, video, text). Many also bring their own customer lists — emails or phone numbers of past buyers or donors — which the platform hashes (turns into one-way fingerprints so it can look for matches without seeing the raw addresses) and tries to match to accounts. While hashing helps with privacy, it isn't the same as anonymity: Matches are still possible if both sides hash the same inputs. From there, the advertiser can ask the platform to find people who behave like those customers (lookalikes). They can also set simple guardrails: cities, ZIP codes, age ranges, schedule windows and "don't show to people who already bought."

Three: Identity providers (the matchmakers). These companies help link records that belong together — your email, your phone number, your connected TV, the laptop on your home Wi-Fi — without directly handing your name around. They keep identity graphs that say "these devices likely belong to the same person or household," which helps advertisers measure whether an ad on one screen led to action on another. Think of them as the glue that makes cross-device campaigns and measurement possible.

Four: Data brokers (the collectors and wholesalers). These firms (LiveRamp, Acxiom, TransUnion) buy, scrape and package information about you, then sell or license it to marketers and advertisers. No one knows exactly how many data broker companies operate in the US, but there are likely thousands (California keeps a public registry). They pull data from apps, websites and store loyalty programs, then ship ready-made audiences ("visited auto lots," "recent home-improvement shoppers") or labels ("new homeowner," "pet owner").
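For the curious, the hashed-list matching described under the advertisers above is simple enough to sketch in a few lines of Python. This is a toy illustration with made-up addresses, not any platform's actual pipeline; real systems agree on shared normalization rules and match at enormous scale:

```python
import hashlib

def hash_email(email: str) -> str:
    """Normalize first, then hash, so both sides produce identical fingerprints."""
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Advertiser's customer list, hashed before upload (raw addresses never leave).
advertiser_list = {hash_email(e) for e in ["donor1@example.com", "Donor2@Example.com "]}

# The platform hashes its own account emails the same way and looks for overlap.
platform_accounts = {hash_email(e) for e in ["donor2@example.com", "stranger@example.com"]}

matches = advertiser_list & platform_accounts
print(len(matches))  # 1: one account matched without either side sharing raw emails
```

Because both sides apply the same normalization and hash, the overlap can be found without either party revealing its raw list. It's also why hashing falls short of true anonymity: anyone who already holds your email can compute the same fingerprint.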
They work mostly out of sight — privacy policies often call the data "de-identified." But once those files are matched to your account, the platform's system decides when to show you an ad. "The unsettling feeling that your device is spying on you is real — but the culprit isn't a secret microphone. It's the data broker industry," Eva Galperin, director of cybersecurity for the Electronic Frontier Foundation, tells me.

Now, stitch the pieces together in real time. You open Instagram. The app asks, "What ad should we show right now?" The platform checks your in-app behavior and context, sees which advertisers are aiming for people like you (including those using matched customer lists or broker-supplied groups) and runs an instant auction to show you an ad. If you and a friend are on the same Wi-Fi or have been on the same household network, both of you may fall into the same target bucket. If you're near a TV where a campaign just ran, that can raise the odds too — co-location and household signals say "these folks influence each other."

Budgets also matter. Money tends to concentrate in hours and places where the model expects better results, so delivery clusters in time. That's why an ad can land the same night you talked about a topic, because the system already had reasons to try you tonight, and you happened to be scrolling when the budget was flowing.

When digital assistants were caught listening

There are valid reasons why so many people believe that their phones listen to them. It goes beyond the phones themselves to the wider array of devices waiting for us to speak to them, like Amazon's Alexa smart speakers.

In 2019, an Apple contractor revealed that they were regularly listening to audio recordings, which sometimes included snippets of ongoing drug deals or people having sex, as part of a "grading process" to improve recognition for the Siri voice assistant. After public backlash, Apple apologized, paused the program and later made it opt-in.
Apple agreed to a 2025 settlement, while denying wrongdoing and claiming Siri audio isn't used for ads. Also in 2019, a Belgian broadcaster revealed contractors could hear snippets of Google Assistant recordings, reporting showed Amazon teams listened to some Alexa recordings and Facebook paid contractors to transcribe snippets of opt-in voice chats.

These incidents mostly involved quality reviews of virtual assistants and, in some cases, non-phone devices — not covert ad targeting by your phone's mic. But they were vivid and mishandled enough to make "always listening" still feel plausible today. "People complain Alexa or Siri don't understand them, yet believe they can perfectly overhear conversations to target ads," says Egelman. "That's cognitive dissonance, not evidence."

Adding fuel to the fire, last year a leaked Cox Media Group pitch deck touted an "Active Listening" ad product that would target ads based on ambient audio. After the coverage, Google dropped CMG from its partner program. CMG later said the product had been discontinued, and it denied using device microphones in that way. Even so, it kept the listening narrative in headlines, despite platforms publicly disavowing it.

How advertising feeds on your data

Again, the reason we see those ads is simple: It's all about data. Data is one of the world's most valuable resources, up there with oil and water. Digital ads are a massive business — marketers spent nearly $1.1 trillion on advertising in 2024, with the biggest share going to digital.

Look at the leaders. Meta booked about $162 billion in ad revenue in 2024 — nearly all of its sales. Amazon made $15.7 billion from advertising in just the second quarter of 2025 (up 22% year over year). Walmart's retail media arm pulled in $4.4 billion in 2024 and is still growing fast. All this money flows because data makes ads predictable: who to reach, when to show up and whether it worked.
The better the data, the better the predictions, the more the platform can charge. That's why the "ad machine" keeps investing in first-party data and AI. And to get that data, no one needs to eavesdrop through a microphone. "In reality, devices are tracking you in other ways," says the EFF's Galperin.

We supply everything the ad machine (platforms, brokers, retailers and so on) needs, often without realizing it. They work together to turn ordinary traces into timing. Your actions become labels; those labels group you with others who have similar profiles: an audience. That group yields a prediction: the ad you see. The app remembers what you do, the advertiser brings a list of people it already knows, a broker matches the dots and the pipes decide — in a blink — whose ad to show you. That's why an ad can arrive with unnerving precision, as if it overheard you. It didn't. It read your week, and it had great timing.

Back to that Tourette's ad at the bar. Here's the boring path that likely put it there. In Instagram's split-second auction, my friend's in-app behavior (previous donations, affinity for mental health) and context (late, in LA, scrolling) met whatever the advertiser brought — probably a matched list of past donors or newsletter signups, plus a lookalike built from them. A data broker may have added fuel: prebuilt cause- or health-interest groups, or extra labels added to those donor lists, matched by hashed (scrambled) emails or phone numbers. If we were on the same Wi-Fi or had been in the same places that week, co-location/household signals could have nudged both of us into range. The model picked the moment.

Meta even surfaces some of this in-app. Tap "Why am I seeing this ad?" and you'll often see a plain-English reason tied to your activity or the advertiser's audience.
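The split-second auction logic can be sketched in miniature. The ads, bids and probabilities below are invented for illustration; real auctions weigh far more signals, but the core idea that a strong predicted response can beat a bigger bid looks roughly like this:

```python
# Hypothetical candidate ads: (name, bid in dollars, predicted probability the
# viewer acts, ad-quality score). All names and numbers are made up.
candidates = [
    ("airline card", 2.50, 0.002, 0.9),
    ("awareness donation", 0.80, 0.015, 1.0),
    ("denim brand", 1.20, 0.004, 0.8),
]

def total_value(ad):
    """One common shape: effective value = bid x predicted response x quality."""
    name, bid, p_action, quality = ad
    return bid * p_action * quality

winner = max(candidates, key=total_value)
print(winner[0])  # the lowest bid wins because its predicted response is highest
```

Here the donation ad bids a third of what the credit card does, yet its much higher predicted response gives it the best expected value, which is why an ad tied to something already on your mind so often wins the slot.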
When asked for comment, Meta pointed CNET to an explainer in its Privacy Center: "We only use your microphone if you've given us permission and are actively using a feature that requires the microphone." The company also noted its page on what data it uses for ads and the ad controls available to users.

Your mind plays tricks on you

But the uncanniness of it all isn't just what's in your phone. It's what's in your head as well. Once a topic is on your mind, you start spotting it everywhere — it's known as the frequency illusion, sometimes called the Baader-Meinhof phenomenon, a term coined by linguist Arnold Zwicky.

In 2022, I purchased a 1986 Mercedes-Benz 560SL. A few weeks later, I started noticing the car on every block — or at least it felt that way. Did the city of Los Angeles buy a fleet of my car overnight? No. What's more likely is that I joined a club and started seeing the members.

After this recognition, confirmation bias takes over. You remember the eerie hit — the ad that lands right after the conversation — but you discard the thousands of misses before that. The story writes itself: We said it and then you saw it. Therefore, the phone must have listened. "What is happening is targeted advertising — and some of it is cognitive biases," says Egelman, the privacy researcher. "Your friend probably doesn't make note of all of the irrelevant ads he sees."

We all miss the things we're not expecting to encounter. Research into inattentional blindness (the "gorilla" strolling through a basketball drill) shows how attention edits reality. Most of the advertisements stream by, but the one aligned with what's top of mind pops out as if it were placed just for you. You scrolled past the advertisements for an irrelevant airline credit card and Japanese selvedge denim, but you noticed the trip to Cancun because, well, you just talked about going to Cancun with your best friend.
Add two mental shortcuts: availability heuristic (vivid examples feel more common) and illusory correlation (when two things happen together, we assume they're linked). Then confirmation bias seals it. Once you name those biases, the spell weakens. And you see how it really works.

I started collecting other coincidences. My friend kept getting fish tank ads after talking to her personal trainer about fish tanks. They have each other's contacts saved in their phones and my friend is already a pet owner, so it tracks. My mom swears she received knee brace ads seconds after talking to her friend about knee pain. She had knee surgery two years ago, and she probably googled something about knee pain recently, which she has no recollection of, obviously. I received an advertisement for very specific baby-blue Rimowa carry-on luggage after talking about it to a friend. I've owned Rimowa luggage before, and not only that, but I'm visiting Germany in the fall, so it makes sense I would receive a travel ad of some sort. No microphones required. Just data.

Hot mics and 'Hey Siri'

One of the clearest reality checks came from a Northeastern University team that tested about 9,100 Android apps back in 2018 and watched what actually left the phone. It found no evidence of apps secretly recording and shipping audio to ad networks. When data did leak, the surprises were different: a handful of screen recordings and image or video uploads to third parties, and voice assistants that sent text transcripts, not raw audio, for processing.

Last month, I caught up with the researcher who led that study. "The thing we didn't expect was screen recording — it was like someone looking over your shoulder, and that data went to a third-party's servers, not the app you were using," says David Choffnes, a professor of computer science at Northeastern and executive director of the university's Cybersecurity and Privacy Institute.
CNET did some informal testing of its own back in 2019. There was no indication that Facebook was listening in on conversations as a trigger for serving ads.

A true hot mic would leave fingerprints. If something were siphoning audio 24/7, you'd notice it in your bill, your battery widget and your status bar long before a conspiracy TikTok video "explained" it. "That can't be happening. … Your phone would be constantly streaming audio," says Egelman. "It would show up on your bill, and your battery would not last very long."

Smart speakers are also to blame for the conspiracy theory. Yes, they are "always listening," but only for a wake word. Until that match fires, nothing is supposed to leave the house. Amazon says Echo devices detect the wake word locally and don't store or send audio to the cloud unless activation occurs. The company also says that voice history can be reviewed or deleted and interactions can inform relevance for ads with Alexa, although you can opt out.

Your phone also keeps a tiny listener running, but it's not what you think. A small, on-device model sits in standby and listens only for the wake words. On the iPhone, Apple's own research describes it as a lightweight recognizer that runs all the time and wakes the full system only when it hears "Hey Siri." Android devices do the same with "Hey Google." Google explains that Assistant waits in standby, processing a few seconds of audio locally to detect the trigger. If no activation is detected, nothing is sent or saved. Only after activation does the device record your request and send it for fulfillment (and by default, those audio recordings aren't even saved to your account). According to Google, its consumer devices don't use ambient sound for ad personalization. Many devices use a low-power digital signal processor for this precisely so it won't drain your battery or beam ambient chatter anywhere.
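That standby behavior can be sketched as a toy loop. Treating each string as one second of audio is a gross simplification of real acoustic models, and the wake words and buffer size here are stand-ins, but it shows the architectural point: audio ages out of a small local buffer and is never transmitted unless a trigger fires.

```python
from collections import deque

WAKE_WORDS = {"hey siri", "hey google"}  # illustrative triggers only
BUFFER_SECONDS = 3  # roughly the short local window the vendors describe

def standby(audio_frames):
    """Sketch of a wake-word loop: frames sit in a small local buffer and are
    discarded unless a trigger is detected; only then is anything 'uploaded'."""
    buffer = deque(maxlen=BUFFER_SECONDS)
    uploads = []
    for frame in audio_frames:  # each frame stands in for one second of audio
        buffer.append(frame)
        if frame.lower() in WAKE_WORDS:
            uploads.append(list(buffer))  # upload begins only after activation
        # otherwise the frame simply ages out of the buffer, untransmitted
    return uploads

sent = standby(["chatter", "more chatter", "hey siri", "chatter"])
print(len(sent))  # 1: only the triggered snippet ever leaves the device
```

Everything before the trigger is overwritten in place, which is the whole design: the microphone is technically on, but nothing persists or travels until the wake word matches.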
Apple says it has never used Siri data to build marketing profiles or make it available for advertising. Google's Assistant page spells out the same architectural boundary: standby doesn't ship your audio; activation is explicit, reviewable and controllable in settings. Wake-word engines exist to launch a helper, not to feed an ad slot.

Yes, false triggers and misfires happen. The system can mishear the wake words and briefly record before you cancel. But that's a quality-of-assistant problem, not an ad-targeting pipeline. You can delete those interactions and even tighten sensitivities.

Choffnes also co-authored a 2020 test that pumped 134 hours of TV audio at Amazon, Google, Apple and Microsoft speakers while watching the light rings and network traffic. They saw no 24/7 recording — just occasional false wakes, usually a few seconds, with a few longer outliers. When a device did wake, it typically sent that short clip to the company's cloud servers for processing. In other words, "sent to the cloud" means the speaker thought it heard the wake word and uploaded a brief recording so the assistant could interpret it. That confirms short clips can exist on vendor servers (and, on some platforms, be reviewed or deleted), but it's not a continuous microphone or an ad-targeting pipeline.

"This was mostly a good-news story: we found no evidence of constant recording — just short, triggered clips when a device thought it heard the wake word," says Choffnes. "Bottom line: For the most part, most consumers shouldn't be concerned about pervasive listening."

The part that sticks is the mood that smart speakers create. Once you live with a device that can wake on a word, every well-timed ad on your phone feels like the same mechanism at work.

From cookies to AI

How did we get here, to this place where advertisements work so incredibly well that we swear it could only happen by our phones secretly listening to us?
Before the models we have now, there were cookies. In the early days of the internet, web pages were like goldfish — no memory between clicks. So cookies showed up as a convenience feature to keep your shopping cart intact, remember your logins and save your language preference. First-party cookies did that housekeeping just fine.

Then came the side doors. Ad networks and analytics firms set third-party cookies and tiny "pixels" on lots of sites, which let them recognize the same browser across the web. That's when the web started to feel like a hall of mirrors. You looked at a toaster once, and the toaster followed you for a week.

Mobile devices then complicated the trick. Apps don't use browser cookies, so platforms leaned on mobile ad IDs, and more importantly, their own logged-in universes. Add the rise of retail media — ads tied to actual receipts — and you get precise targeting without passing identities around. The more the industry traded IDs for inference — scores, cohorts, household context — the more ads arrived with uncanny timing and fewer obvious breadcrumbs. To you, that feels like eavesdropping. To the system, it's just the next step after cookies: less about who you are, more about what the math thinks you'll do next.

Now add the AI piece. Platforms aren't just placing ads, they're making them. Meta's Advantage Plus suite includes generative AI tools for ad creative. Google's Performance Max can generate headlines, descriptions, and image and video variants. Amazon Ads ships image and video generators so a product photo turns into lifestyle scenes in minutes.

Under the hood, measurement keeps drifting from identity to intent. More math runs on-platform or on-device (think Apple's privacy-preserving attribution), so less raw data sloshes around. What you feel is the same: a score appears at decision time — likelihood to donate tonight: 0.62 — and the system spends accordingly.
AI will keep tightening the timing and tailoring the message, so ads will feel even more like they "heard" you. They didn't. They modeled you.

Is this all something to truly be scared of?

I'm not worried about my phone listening to me, but there is something that disturbs me. It's the shadow I leave behind on the internet — a "ghost profile" stitched together from my clicks, searches, locations and card swipes. Data brokers are the ones who make that ghost useful. That's how the shadow becomes actionable — brokers say who, identity tools say which account and the platform says now.

Sensitive location data is the first tripwire. Regulators have warned that brokered phone location trails can reveal visits to reproductive health clinics, houses of worship, shelters and recovery centers. The Federal Trade Commission's lawsuit against data broker Kochava lays that out in plain terms. The agency says selling this data exposes people to stigma, stalking, discrimination and even physical harm.

Then come the workarounds. When agencies can't easily get data with a warrant, some have bought it instead. Records obtained by the ACLU show Department of Homeland Security components (including CBP and ICE) purchasing access to phone location records after the Supreme Court's Carpenter ruling made warrantless cell-site tracking tougher.

Brokers' "anonymized" files can also be used to out or coerce individuals. In 2021, a senior Catholic official resigned after reporters obtained location-based app data — purchased from a broker and linked to a device that frequented gay bars and used Grindr — illustrating how readily "anonymous" trails can be tied back to a person.

And the fallout isn't limited to embarrassment. Profiles that time an ad can also shape prices and eligibility. US prosecutors forced Meta to overhaul its housing-ad delivery system after alleging the algorithmic tools could produce unlawful demographic skews.
Regulators have flagged similar risks in employment and credit contexts. And location brokers have marketed feeds and services to government and military customers, underscoring how easily ad-tech exhaust crosses into surveillance use cases. Reporting has also documented the US military buying app location data from brokers like Babel Street and X-Mode for "counter-terrorism" purposes.

Once the shadow profile exists, it's portable. That's the danger beyond ads.

How to protect yourself

What can we do about all of this? Start by cutting down the data you give off each day. In your web browser, run a tracker blocker like uBlock Origin alongside a behavior-based add-on such as EFF's Privacy Badger. "I recommend using both together. I use them all the time," says Galperin.

On your phone, do a quick permission audit every few months: set Location to "While Using the App" (and Approximate when you can), revoke mic/camera for anything that doesn't need it and delete the apps you never open. iPhone users can skim Settings > Privacy & Security > App Privacy Report to spot noisy apps, and Android's Privacy Dashboard shows timestamped mic/camera/location access. You won't vanish, but you'll close some doors. As Galperin likes to put it, the easiest win is simple: turn off location services unless you truly need them.

Another effective move is subtraction. Fewer apps means fewer places collecting data about you. When something asks for access, give it the minimum it needs. Skip contact uploads and "Find Friends," avoid signing in with the same identity everywhere, and use email aliases for newsletters and loyalty signups so profiles don't fuse by default. "Only install and use apps that you really need," says Northeastern's Choffnes. "When they're asking for permissions to access data or asking you to enter data, try to enter as little as you can get away with."
In some states, you can file data-subject access requests to see what companies hold about you and who they share it with. On Instagram, for example, you can go to your profile > three-dash menu > Accounts Center > Your information and permissions and export all the information Instagram has on you, including content and information you've shared and activity and info Instagram collects.

When I downloaded my data, I was able to see my activity that Meta tracked outside of its apps, including online purchases, information I've submitted to advertisers (including my address), categories associated with my activity (engaged shopper, household income, travel plans) and a list of advertisers using my activity or information to show me ads.

You can also pull some data back. If doing it yourself isn't realistic, data-deletion services (such as Easy Opt Out and Optery) will file broker opt-outs on your behalf. They're imperfect and they're not one-and-done — brokers replenish constantly — which is why Galperin calls it a "constant cat-and-mouse game." Do it anyway, because trimming the broker trail reduces how precisely systems can time you.

Beyond that it becomes a public policy matter. "The idea that this is an individual's responsibility is ridiculous. It's highly technical and adversarial," says Christo Wilson, professor and founding member of the Cybersecurity and Privacy Institute at Northeastern, who also worked on the 2018 study. "I'm an expert and I'm still exposed. No individual can win against an ecosystem built to surveil."

The myth survives because it flatters us

In the past few months, I've had the same exchange with friends, cousins, Uber drivers, a woman at a kid's birthday party, a drunk guy on a bar patio who swore his phone had betrayed him. They lean in and tell me about the ad that materialized right after a conversation, the one so on the nose it felt like a dare. "Your phone isn't listening to you," I say. "It's timing."
I tell them it's data and context and a system that's really good at guessing. "I don't believe that," they say, almost every time. I get it. A secret microphone is a better story than a spreadsheet with great aim. A villain you can point to beats a vast spider web you can't see.

Here's a metaphor to try out: Think of the city as a weather radar. Every tap, swipe and purchase you make is a little blip. Most evaporate. Some cluster. When the storm cells line up, the system flashes an alert. That flash is the ad. It didn't hear thunder — it saw the pattern and predicted it.

On my phone, meanwhile, my feed is a conveyor: little rectangles, arguments and declarations about what I might do if nudged just right. I tap one, save another and linger on a third without meaning to. It's nothing. It's everything. It's a fresh addition to the profile that's already following me around.

I think this myth survives because it flatters us. It says we're interesting enough to spy on, significant enough to bother. But the truth is plainer and, somehow, more intimate: We are legible. Not to a person with headphones in a van, but to a system that grades our likelihoods and gets paid when it's right. If you want a monster, it's not a microphone. It's the quiet arithmetic that reads your week, guesses your mood and launches a message at the exact second it thinks you're most likely to swallow it.

I wish the explanation landed cleaner at the bar, at the party, on the sidewalk. People want to believe in the listening because it puts the cause in the room with them. The ad-tech model is almost ethereal, which makes it hard to place, and seemingly impossible to escape. But we're not haunted. We're forecast. We're predictable.

Visual Designer | Zooey Liao
Art Director | Jeffrey Hazelwood
Creative Director | Viva Tung
Video Host | JD Christison
Video Editor | Jon Gomez
Project Manager | Danielle Ramirez
Editors | Corinne Reichert
Director of Content | Jonathan Skillings