Signal Through the Slop — Week 17, April 2026
- Andrew Riker
- Apr 23

AI search is changing how brands get found, recommended, and trusted. Each week, we pull signal from Reddit, LinkedIn, and industry research to track what's actually moving in the space. Here's what caught our attention this week.
What Reddit is saying this week
The loudest conversation on Reddit right now is a simple, frustrated question: how do you actually know if you're showing up in AI search? Across r/SEO, r/bigseo, and r/marketing, practitioners are running variations of the same thread — asking what tools or methods people use to track brand appearances in ChatGPT, Perplexity, and Google AI Overviews. The answers are scattered. Manual prompt testing is the most common recommendation, but everyone acknowledges it doesn't scale. The tool suggestions that do come up are described as inconsistent — the data doesn't always match what people see when they test manually. There's no consensus method yet, and the frustration is real.
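For teams stuck on manual prompt testing, even a thin harness makes the checks repeatable. A minimal sketch in Python, assuming you supply your own `query_model` callable wired to whichever AI surface you're testing — the function name and plain-text response format here are hypothetical, not any particular vendor's API:

```python
import re

def brand_mentions(response_text: str, brands: list[str]) -> dict[str, bool]:
    """Check which brand names appear (whole-word, case-insensitive) in one AI response."""
    return {
        brand: bool(re.search(rf"\b{re.escape(brand)}\b", response_text, re.IGNORECASE))
        for brand in brands
    }

def run_tracking(prompts: list[str], brands: list[str], query_model) -> dict[str, dict[str, bool]]:
    """Run each prompt through query_model and record brand presence per response.

    query_model is a stand-in: plug in a client for ChatGPT, Perplexity, etc.
    Returns {prompt: {brand: appeared?}} for spreadsheet-style tracking over time.
    """
    return {prompt: brand_mentions(query_model(prompt), brands) for prompt in prompts}
```

This doesn't solve the harder problem practitioners are flagging — responses vary run to run, so a single pass proves little — but running the same prompt list on a schedule at least turns "I think we show up sometimes" into a trend line.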
Practitioners are still digesting the Google Core Update that finished rolling out on April 8, filing into threads trying to figure out whether ranking changes affected AI Overviews pickup, or whether the two are separate signals. The emerging read from the more experienced contributors: they're not separate. Google is increasingly rewarding the same content signals in both traditional rankings and AI Overview eligibility — helpful, specific, structured content is winning both. One comment captured it cleanly: "Google is basically telling us what it wants to surface in both places."
Reddit is also having a meta-moment. Following data showing Perplexity cites Reddit in nearly half of its top source selections, practitioners are debating whether intentional Reddit community presence is a viable AI visibility strategy. The conversation is appropriately cautious — most participants distinguish between earned presence (genuine, consistent participation in relevant communities over time) and gamed presence (low-effort comment-dropping, which gets flagged). Nobody has clean data yet on whether brand participation in subreddits reliably translates to brand citations in AI responses. But the question is being asked seriously, which is itself a signal about where community-based visibility strategy is heading.
One thread worth noting: the ghost citation concept is getting traction. Growth Memo published data showing that 61% of LLM citations are "ghost citations" — a domain gets linked but the brand name never appears in the AI-generated text. Practitioners are starting to question whether citation count is a valid metric at all, if the brand isn't actually being mentioned in the answer. The thread had no clean resolution, but the discomfort with existing measurement approaches is warranted.
What LinkedIn is saying this week
LinkedIn's professional feed is dominated by the same measurement concern as Reddit, but the framing is different. Where Reddit asks "how do I track this," LinkedIn is asking "how do I report this to leadership." The Semrush study analyzing 89,000 LinkedIn URLs cited in AI search landed as the week's most-shared piece of research and gave practitioners something concrete to bring to their content strategy conversations.
The study's headline finding: LinkedIn is the second most-cited domain across ChatGPT, Google AI Mode, and Perplexity — cited in roughly 11% of AI responses on average. On ChatGPT specifically, that number climbs to 14.3%. The implication practitioners are pulling from this: LinkedIn is not just a networking platform or a distribution channel. It's a content corpus that AI systems actively draw from when generating professional and B2B answers. The format breakdown matters: LinkedIn articles account for 50–66% of what gets cited, versus 15–28% for standard feed posts. Educational and advice-driven content makes up 54–64% of AI citations from LinkedIn. The frequency finding is also circulating: 75% of authors whose LinkedIn content gets cited in AI search post five or more times within a four-week window.
The ghost citation conversation is playing out on LinkedIn as well, with more strategic framing. Practitioners are noting that if 61% of citations are brand-invisible — the domain gets credited, the brand name doesn't appear — then the citation counts in most AI visibility dashboards are overstating actual brand exposure. The follow-on conversation is about what metric actually matters: brand mention rate within the response text, not just citation count. That distinction is starting to show up in how practitioners are thinking about reporting.
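The distinction is straightforward to operationalize once you have the response text and the cited domains in hand. A rough sketch, assuming you can map each cited domain to a brand name — the data shapes here are illustrative, not any dashboard's actual schema:

```python
import re

def citation_report(response_text: str, citations: dict[str, str]) -> dict:
    """Split citations into brand-visible vs ghost.

    citations maps cited domain -> brand name. A citation counts as a 'ghost'
    when the domain is linked but the brand name never appears in the text.
    """
    visible, ghost = [], []
    for domain, brand in citations.items():
        if re.search(rf"\b{re.escape(brand)}\b", response_text, re.IGNORECASE):
            visible.append(domain)
        else:
            ghost.append(domain)
    total = len(citations)
    return {
        "citation_count": total,
        "brand_mention_rate": len(visible) / total if total else 0.0,
        "ghost_citations": ghost,
    }
```

Reporting `brand_mention_rate` alongside raw citation count is the shift practitioners are describing: the first number tells leadership how often the brand is actually named in answers, the second only how often its domain was linked.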
The April 8 Core Update fallout is also threading through LinkedIn, particularly among SEO-adjacent audiences who are connecting ranking volatility to AI Overviews eligibility. The emerging take that's gaining traction: the update confirmed that content written for AI Overviews eligibility and content written for traditional ranking authority are converging on the same signals. That's a meaningful framing shift — GEO and SEO are not parallel tracks. They're collapsing into one content strategy decision.
What the research shows this week
The data this week is pulling in a few directions at once, which makes it more interesting than a typical week.
Google AI Overviews coverage keeps growing. Semrush's Q1 2026 data puts AI Overviews at 25.11% of Google searches, up from 13.14% in January 2025 — roughly doubling in 15 months. BrightEdge's industry-level data adds nuance: coverage varies dramatically by vertical, with healthcare at 88% and e-commerce around 13%. That spread means aggregate numbers are nearly useless for planning. A brand needs to know its own category's coverage rate to understand how much of its search real estate is already AI-mediated.
The March 2026 Core Update (completed April 8) was the most significant algorithm event of the year so far, with SERP volatility rated 9.5 out of 10. The practical effect: sites that saw ranking drops are correlating those drops with content that lacks the structural and specificity signals Google's AI systems reward. The update is functioning as a forcing function for content strategy rather than a traditional ranking tweak.
Three specific data points from Growth Memo are worth holding onto. First, pages with headlines that directly answer a user question achieve 41% ChatGPT citation rates, versus 29% for loosely related headlines — specificity at the headline level matters. Second, focused content covering 26–50% of a topic outperforms comprehensive guides covering 100% of a topic. The "write the definitive guide" playbook doesn't transfer to AI search. Third, 61% of LLM citations are ghost citations — the domain is credited but the brand name doesn't appear in the generated text, which means raw citation counts are a misleading proxy for actual brand exposure.
The Semrush 89K LinkedIn URL study is the most substantial platform research of the week, but the citation pattern data across platforms is worth noting separately: ChatGPT cites LinkedIn in 14.3% of responses; Google AI Mode in 13.5%; Perplexity in only 5.3%. Meanwhile, Perplexity cites Reddit in 46.7% of its top citations. These aren't interchangeable platforms — each has distinct source preferences. A GEO strategy that treats ChatGPT, Perplexity, and Google AI Mode as equivalent surfaces is leaving precision on the table.
One counterintuitive finding from BuzzStream: approximately 75% of sites that block OpenAI's crawl bots still appear in AI search citations. Bot-blocking, the lever brands assume gives them control over AI visibility, may not actually work. LLMs appear to be drawing on cached training data or third-party sources regardless of crawl permissions.
That's the signal this week — back next Monday with more.