Case Study

【AI Visibility Case Study】Strong SEO, Invisible to AI? — 39/100

Published on March 24, 2026

Case Overview

This case study examines an office chair manufacturer and seating solutions provider with a well-established product lineup spanning ergonomic chairs, conference seating, and custom bulk configurations. Their customer base ranges from small business office procurement to large enterprise fit-outs. When we conducted an AI visibility health check on the company, we uncovered a striking contradiction: the company's website scored 83 on traditional SEO technical metrics — solid, well-maintained fundamentals — yet its AI search visibility score came in at just 34. Out of 16 AI platform queries, the brand went completely unmentioned in 10 of them.

This "strong SEO, weak GEO" gap is one of the most common patterns we see among manufacturers navigating the AI search era. The overall score for this health check came in at 39/100, placing the company in the "AI Visibility Potential: Untapped" category.

Score Breakdown

Three dimensions were evaluated to build a complete picture of the company's AI visibility standing. Across all three, scores fell below 50 — with website performance emerging as the single biggest drag on the overall result.

| Dimension | Score | Status |
| --- | --- | --- |
| AI Brand Mention Rate | 34 / 100 | ⚠️ Needs Improvement |
| GEO Technical Audit | 40 / 100 | ⚠️ Needs Improvement |
| Website Performance (PageSpeed) | 45 / 100 | ⚠️ Needs Improvement |
| **Overall Score** | **39 / 100** | 🔴 AI Visibility Potential: Untapped |

The most critical bottleneck is the PageSpeed Performance score, which measured at just 7 out of 100 in live testing — far below Google's recommended minimum of 50, and well beneath the threshold most AI crawlers can tolerate. This directly limits how efficiently AI systems can retrieve and index the company's website content, creating an invisible ceiling on its AI visibility reach.

AI Search Visibility Testing: Platform-by-Platform Results

We queried four major AI platforms, simulating the behavior of real procurement decision-makers searching for office chair suppliers, ergonomic seating recommendations, and commercial furniture manufacturers. Each platform received four queries from different angles, for a total of 16 tests. The results revealed significant variation across platforms.

Claude

Claude returned two effective mentions out of four queries — one positive and one ambiguous — with two queries producing no mention at all. The positive mention appeared in a broad industry query along the lines of "recommended office chair manufacturers." The ambiguous mention placed the brand in a supplier list with uncertain language. The two non-mentions occurred when queries focused on specific use cases or procurement specifications. This suggests Claude has a basic awareness of the brand, but hasn't crossed the threshold for consistent, confident recommendations.

ChatGPT

ChatGPT performed slightly better than Claude, with two positive mentions and two non-mentions across four queries. Positive results were concentrated in broad, industry-overview type questions. When queries became more specific — such as "best office chair for prolonged sitting" or "bulk seating for a 10-person SME office" — the brand dropped off the recommendation list entirely. This reflects a content depth problem: strong AI visibility requires layered, scenario-specific content, not just brand awareness at the category level.

Gemini

Gemini's results mirrored ChatGPT's — two positive mentions and two non-mentions. Mentions again clustered around high-level industry queries, while detail-oriented scenarios produced no results. Notably, Gemini tends to favor sources that provide rich structured data. The company's website currently lacks H1 tags and OG Tags, which directly undermines Gemini's ability to interpret and reference the site's content with confidence.

Perplexity

Perplexity produced the most concerning results: zero mentions across all four queries. Because Perplexity relies heavily on real-time web crawling and indexing, a PageSpeed score of 7 almost certainly triggers crawler timeouts or failed page loads — meaning the company's content never makes it into Perplexity's recommendation pool at all. This is the most urgent technical issue to address, and it's the primary reason AI visibility on this platform is effectively zero.

In total, the brand was mentioned in 6 out of 16 queries — an overall mention rate of 37.5%. Critically, those mentions were concentrated almost entirely in broad, high-level queries. The company has virtually no AI visibility coverage in long-tail, scenario-driven search situations where buying decisions are often made.

Competitive Landscape

During AI platform testing, competing brands consistently appeared on recommendation lists where this company did not. International names like Herman Miller, Steelcase, and Humanscale were nearly always present in ergonomic chair queries. At the regional level, a number of competitors with well-developed digital content assets have already established stable footholds in AI-generated recommendations.

What distinguishes the consistently recommended brands? They tend to maintain detailed buying guides, use-case articles, and clearly implemented Schema markup on their websites — precisely the content strategy elements this company currently lacks. According to the health check report, at least 8 competitors have already built reliable AI visibility in recommendation contexts. That gap is widening.
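To make the Schema markup point concrete, the sketch below builds a schema.org Product object and wraps it in a JSON-LD script block ready for a page's `<head>`. The product name, brand, and price are invented placeholders for illustration, not data from this case study.

```python
import json

# Hypothetical product data; field names follow schema.org's Product and Offer types.
product_schema = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "ErgoFlex Task Chair",  # placeholder product name
    "description": "Ergonomic office chair with adjustable lumbar support.",
    "brand": {"@type": "Brand", "name": "Example Seating Co."},
    "offers": {
        "@type": "Offer",
        "priceCurrency": "USD",
        "price": "349.00",
        "availability": "https://schema.org/InStock",
    },
}

# Embed as a JSON-LD script block in the page <head>.
json_ld = (
    '<script type="application/ld+json">'
    + json.dumps(product_schema, indent=2)
    + "</script>"
)
```

Explicit typed markup like this removes the guesswork for AI systems: the brand, product category, and availability are machine-readable rather than inferred from prose.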

GEO Technical Audit

The GEO (Generative Engine Optimization) technical audit covered 9 key indicators. The company's website passed 5 and failed 4, for a pass rate of 55.6% — leaving meaningful room for improvement.

| Technical Item | Status |
| --- | --- |
| Schema JSON-LD Structured Markup | ✓ Pass |
| Sitemap Configuration | ✓ Pass |
| Title Tag Setup | ✓ Pass |
| PageSpeed Performance (audit item level) | ✓ Pass |
| SEO Technical Score | ✓ Pass |
| Meta Description | ✗ Not Configured |
| OG Tags (Social Sharing) | ✗ Not Configured |
| Canonical URL | ✗ Not Configured |
| H1 Tag | ✗ Not Configured |

Among the four failed items, the missing H1 tag has the most direct impact on AI visibility. AI models lean heavily on the H1 as a semantic anchor when interpreting page content; without it, an AI system is essentially reading a document with no title, forced to guess at the page's main topic rather than being clearly directed. Missing OG Tags reduce content shareability on social platforms, which indirectly limits the brand's chances of being cited across the web. The audit also surfaced a bare-domain accessibility issue: only the www version of the site resolves correctly. Without a 301 redirect from the bare domain, crawler indexing risks fragmenting across the two hostnames, creating silent AI visibility losses over time.
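All four missing elements can be detected with a few lines of code. The sketch below, using only Python's standard library, scans a page's HTML for an H1, a meta description, OG tags, and a canonical link. It is a minimal illustration of the kind of check involved, not the audit tooling behind this report.

```python
from html.parser import HTMLParser


class HeadTagAudit(HTMLParser):
    """Check for the four tags the report flags as missing:
    H1, meta description, OG tags, and canonical URL."""

    def __init__(self):
        super().__init__()
        self.found = {
            "h1": False,
            "meta_description": False,
            "og_tags": False,
            "canonical": False,
        }

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "h1":
            self.found["h1"] = True
        elif tag == "meta":
            if attrs.get("name") == "description":
                self.found["meta_description"] = True
            if (attrs.get("property") or "").startswith("og:"):
                self.found["og_tags"] = True
        elif tag == "link" and attrs.get("rel") == "canonical":
            self.found["canonical"] = True


def audit(html: str) -> dict:
    """Return a {tag_name: present?} map for the given HTML document."""
    parser = HeadTagAudit()
    parser.feed(html)
    return parser.found
```

Running a check like this against key pages before and after a fix gives a simple, repeatable way to confirm the foundational tags are actually in place.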

Website Performance

The website performance numbers are the starkest finding in this entire health check. The PageSpeed Performance score came in at just 7 out of 100 in live measurement — far below Google's recommended minimum of 50, and below the tolerance threshold of most AI crawler systems. To put this in context: the same website's SEO technical score is 83. The 76-point gap between these two figures tells a clear story — traditional SEO has been carefully maintained, while modern performance standards have been entirely overlooked.

Low performance scores translate directly into AI crawler timeouts. When a crawler cannot fully load a page before timing out, it cannot read the content — and content that can't be read cannot contribute to AI visibility. This is why Perplexity returned zero mentions, and why mentions on other platforms remain inconsistent. Priority improvements should include image compression, server-side caching, and lazy loading for non-critical resources. These are well-understood technical fixes that can meaningfully expand the company's AI visibility coverage once implemented.
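As one concrete example of these fixes, native lazy loading can often be retrofitted in bulk rather than edited template by template. The sketch below adds `loading="lazy"` to `<img>` tags that lack a `loading` attribute. It is a simplified regex-based illustration that assumes reasonably well-formed HTML; a production pipeline would use a real HTML parser.

```python
import re


def add_lazy_loading(html: str) -> str:
    """Add loading="lazy" to <img> tags that don't already declare a loading attribute."""

    def _fix(match: re.Match) -> str:
        tag = match.group(0)
        if "loading=" in tag:
            return tag  # leave explicit loading hints (e.g. "eager") untouched
        if tag.endswith("/>"):
            return tag[:-2].rstrip() + ' loading="lazy" />'
        return tag[:-1].rstrip() + ' loading="lazy">'

    return re.sub(r"<img\b[^>]*>", _fix, html)
```

Browsers then defer off-screen images automatically, which shrinks initial page weight and helps both human visitors and time-limited crawlers reach the main content sooner.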

Expert Recommendations

Based on the health check data, we identified three priority improvement areas, each representing a concrete, measurable AI visibility opportunity.

Finding 1: Performance Issues Are Closing the Door on Every AI Platform

A PageSpeed score of 7 is the root cause of almost every AI visibility problem identified in this audit. When AI crawlers encounter slow-loading pages, they abandon the crawl — meaning the content never enters the recommendation pipeline. Fixing this isn't about compressing a few images. It requires a systematic diagnosis of where performance is being lost: server response times, render-blocking resources, unoptimized assets. Until this is addressed, every other GEO optimization effort will deliver a fraction of its potential impact.
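A systematic diagnosis starts from a reproducible measurement. The report does not specify its tooling, but Google's public PageSpeed Insights API (v5) is one common way to pull a live Lighthouse performance score; the sketch below queries it and extracts the 0-100 score from the response. The API key requirement and quota handling are omitted for brevity.

```python
import json
import urllib.parse
import urllib.request

PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"


def fetch_psi_report(url: str, strategy: str = "mobile") -> dict:
    """Request a live Lighthouse report for `url` from PageSpeed Insights."""
    query = urllib.parse.urlencode(
        {"url": url, "strategy": strategy, "category": "performance"}
    )
    with urllib.request.urlopen(f"{PSI_ENDPOINT}?{query}", timeout=60) as resp:
        return json.load(resp)


def performance_score(report: dict) -> int:
    """Extract the 0-100 performance score from a Lighthouse result.

    Lighthouse reports the category score as a 0-1 float under
    lighthouseResult.categories.performance.score.
    """
    raw = report["lighthouseResult"]["categories"]["performance"]["score"]
    return round(raw * 100)
```

Tracking this number weekly, on both mobile and desktop strategies, turns "fix performance" from a vague goal into a measurable one.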

Finding 2: Content Coverage Is Too Shallow for Long-Tail AI Queries

The company currently earns mentions in broad, category-level AI queries — but disappears the moment a user asks something more specific, such as "best ergonomic chair for all-day use" or "seating solution for a growing 20-person team." This is where buying decisions actually happen, and where competitors are actively building their AI visibility. A content strategy mapped to the buyer's decision journey — covering use cases, selection guides, and comparison content — would significantly expand the company's footprint in AI recommendations.

Finding 3: Missing Structural Tags Undermine AI Semantic Understanding

Four basic technical elements — H1, Meta Description, OG Tags, and Canonical URL — are all absent simultaneously. For AI models evaluating whether to recommend a brand, content structure clarity is a key signal. Presenting a page without these elements is the equivalent of handing an AI a report with no title, no summary, and no clear authorship. Restoring these foundational tags is a relatively low-effort, high-return action that would immediately improve how AI systems interpret and represent the company's website.

AI Search Trends in the Office Seating Industry

Procurement behavior in the commercial office seating market is undergoing a quiet but significant shift. Historically, office furniture buying was driven by admin managers, facilities teams, and vendor quote comparisons — a process built on showroom visits and word-of-mouth referrals. Today, a growing share of that initial research phase — the "where do I even start looking" stage — is moving from Google to conversational AI tools like ChatGPT, Claude, and Gemini.

Several patterns define how AI search behavior plays out in this industry. First, context-driven queries vastly outnumber brand-driven ones. Buyers typically ask "what seating solutions work for a 50-person open office?" before they ask about specific brands. Content that answers situational questions earns far more AI citations than generic brand pages. Second, ergonomics and health-related search intent is growing rapidly. Hybrid work arrangements have heightened employee and employer awareness of prolonged sitting risks, making queries like "office chair for back pain support" and "posture-friendly seating for remote workers" high-frequency AI search topics. Third, AI is increasingly used to shortlist vendors for large procurement decisions. When a company is budgeting for a full office fit-out, decision-makers are turning to AI to quickly generate a credible supplier shortlist — and that shortlist is almost entirely determined by each brand's AI visibility profile.

For office chair manufacturers and seating solution providers, there is a clear and time-sensitive opportunity. Most players in this space have not yet invested meaningfully in GEO optimization. The brands that move first will build durable AI visibility advantages that become increasingly difficult for latecomers to displace.

Find Out Where Your Brand Stands in AI Search

If you want to understand how your brand actually performs across ChatGPT, Claude, Gemini, and other AI platforms, we're here to help.

For more AI visibility case studies across industries, visit the Joseph Intelligence Case Study Index to explore GEO optimization strategies and real benchmark data.

Disclaimer

This article is based on anonymized health check data. All information that could identify the company has been removed. AI platform responses are probabilistic and may vary between sessions. Technical audit scores and performance metrics reflect a specific point-in-time snapshot.

FAQ

Why does a high SEO score not guarantee strong AI visibility?
SEO scores measure traditional technical factors like metadata, backlinks, and crawlability for search engines like Google. AI visibility depends on additional factors — including page load speed for AI crawlers, content depth across buyer scenarios, structured semantic markup like H1 tags, and the presence of authoritative, citable content. A site can be technically sound for Google while remaining largely invisible to AI recommendation engines like ChatGPT or Perplexity.
How does website performance affect AI search visibility?
AI platforms like Perplexity rely on real-time web crawling to gather information. If a website loads too slowly, AI crawlers will time out before reading the content — effectively making that content invisible to the AI. In this case study, a PageSpeed score of 7/100 was the primary reason the brand received zero mentions on Perplexity and inconsistent mentions on other platforms.
What is GEO optimization and how is it different from SEO?
GEO stands for Generative Engine Optimization — the practice of structuring your website's content and technical setup so that AI models like ChatGPT, Claude, and Gemini are more likely to understand, trust, and recommend your brand. While SEO focuses on ranking in traditional search engine results pages, GEO focuses on appearing in AI-generated answers and recommendations. Both matter, but they require different strategies.
How can an office furniture company improve its AI visibility?
The most impactful steps are: (1) Fix website performance issues so AI crawlers can fully access your content; (2) Implement missing technical elements like H1 tags, Meta Descriptions, OG Tags, and Canonical URLs; (3) Build scenario-specific content that answers the questions buyers actually ask AI — such as seating guides for different office sizes, ergonomic advice articles, and product comparison content; (4) Add structured Schema markup to help AI systems accurately interpret your brand and offerings.
Which AI platforms should B2B brands prioritize for visibility?
For most B2B brands, ChatGPT and Gemini currently represent the highest query volumes and should be the primary focus. Claude is growing steadily in professional and enterprise use cases. Perplexity is particularly important for brands targeting research-heavy buyers, as it surfaces real-time web results — meaning strong website performance is essential to appear there. A comprehensive AI visibility strategy should aim to perform consistently across all four platforms.
