Case Overview
This case study features an office chair manufacturer and seating solutions provider with a product line spanning ergonomic chairs, standard office seating, and corporate procurement packages. During our AI visibility audit, we uncovered a striking contradiction: when AI platforms were asked directly about this brand by name, several returned positive mentions. But when potential buyers searched using industry-level queries — such as "best office chairs for businesses" or "ergonomic chair recommendations" — the company vanished entirely from AI-generated responses.
This is one of the most common AI visibility blind spots we observe among established manufacturers: the brand exists, but it has no industry-level AI presence. The overall score for this audit was 39/100, rated as "AI Visibility Potential: Undeveloped."
Score Breakdown
AI visibility is determined by three core dimensions, and a weakness in any single area will drag down the overall result. In this case, the most significant bottleneck was website performance: a raw PageSpeed performance score of just 7 out of 100, which directly impairs AI crawlers' ability to fully index page content and is the primary factor behind the low overall score. (The 45/100 figure in the table below is the blended dimension score, which also reflects the much stronger SEO sub-score.)
| Dimension | Score | Rating |
|---|---|---|
| AI Brand Mention Rate | 34 / 100 | ⚠️ Needs Improvement |
| GEO Technical Audit | 40 / 100 | ⚠️ Needs Improvement |
| Website Performance (PageSpeed) | 45 / 100 | ❌ Significant Weakness |
| Overall Score | 39 / 100 | ❌ Undeveloped |
One detail worth highlighting: the PageSpeed SEO sub-score reached 83/100, indicating that traditional SEO elements like meta tags and page structure have been thoughtfully configured. However, the enormous gap between a performance score of 7 and an SEO sub-score of 83 tells a clear story — the technical settings are in place, but execution at the infrastructure level is severely lacking.
AI Search Visibility Testing
We submitted queries to four major AI platforms (Claude, ChatGPT, Gemini, and Perplexity) across two scenarios: brand-specific queries and industry-level queries. Sixteen queries were executed in total (six brand queries and ten industry queries), simulating the realistic search behavior of prospective buyers evaluating office seating solutions.
Claude
In brand query scenarios, Claude provided a positive mention on the first attempt, confirming the brand's existence and describing its product direction. A second query returned a vaguer response, suggesting that the AI's grasp of the company's brand information is inconsistent. In industry query scenarios — such as "recommended office chair brands for businesses" — neither query produced a mention, with competing brands filling every recommended slot.
ChatGPT
ChatGPT delivered the most consistent brand recognition of the four platforms. Both direct brand queries returned positive mentions, indicating that OpenAI's training data includes relevant information about the company. However, neither of the two industry queries returned a mention — reinforcing a critical distinction: brand recognition and industry-level AI visibility are two entirely different metrics.
Gemini
Gemini's results closely mirrored ChatGPT's: positive mentions on both brand queries, zero mentions on both industry queries. This consistent pattern strongly suggests that while the company's brand information exists somewhere online, it lacks content assets that are closely associated with industry-level keywords. As a result, when Gemini generates industry recommendations, the company simply isn't a candidate.
Perplexity
Perplexity produced the most concerning results. Across four industry queries, the company was not mentioned once. Since Perplexity is built around real-time web search, its zero mention rate directly reflects how little industry keyword coverage the company's currently indexable web content provides.
Core Finding: Of 16 total queries, 6 returned mentions, a 37.5% mention rate. However, all 6 of those mentions came from direct brand queries; the 10 industry queries produced zero mentions. In practical terms, this means only someone who already knows the brand name can "find" this company through AI. Any buyer actively searching for office chair solutions, in other words a potential new customer, will never encounter it.
Competitive Landscape
During industry query testing, AI platforms consistently recommended a set of competing brands. The names surfacing in these AI-generated recommendation lists included internationally recognized players such as Herman Miller, Steelcase, Okamura, Humanscale, and Ergohuman, alongside selected regional manufacturers.
These competitors share several common characteristics that explain their strong AI visibility: a well-developed content ecosystem (including review articles, buying guides, and comparison content), robust structured data markup, and high citation density from third-party media and review platforms. The company in this case study, by contrast, has virtually no industry-oriented content assets — and that gap is the root cause of the disparity in AI recommendations.
GEO Technical Audit
The GEO (Generative Engine Optimization) technical audit evaluates whether a website provides the foundational conditions for AI crawlers to correctly understand, index, and cite its content. Nine key indicators were assessed, with a pass rate of 3 out of 9 (approximately 33%), yielding a technical audit score of 40/100.
| Check Item | Result | Impact |
|---|---|---|
| Schema JSON-LD | ✓ Pass | Structured data foundation is in place |
| Sitemap | ✓ Pass | Crawlers can discover page structure |
| Title Tag | ✓ Pass | Page titles are configured |
| Meta Description | ✗ Fail | AI summaries cannot extract description text |
| OG Tags | ✗ Fail | No preview data available for AI citations or social sharing |
| Canonical URL | ✗ Fail | Duplicate content issues dilute page authority |
| HTTP/2 | ✗ Fail | Inefficient data transfer slows crawling |
| H1 Tag | ✗ Fail | Page topic signals are unclear to AI crawlers |
| Bare Domain 301 Redirect | ✗ Fail | Non-www domain inaccessible, breaking index integrity |
Among the six failed items, the missing H1 tag and unconfigured Canonical URL pose the greatest threat to AI visibility. Without an H1 tag, AI crawlers cannot identify a page's primary topic. Without canonical tags, duplicate pages compete against each other, fragmenting content authority. The absence of OG tags means the company loses control over how its brand information is represented whenever it's cited or shared by third-party sources.
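The on-page items in the table above can be verified with a short parser sketch. This is an illustrative check built on Python's standard library, not the tool used in the engagement, and the sample HTML is invented to mirror the failing pattern seen here (title and canonical present; H1, meta description, and OG tags missing):

```python
from html.parser import HTMLParser

class HeadAuditParser(HTMLParser):
    """Collects the on-page signals checked in the GEO audit."""
    def __init__(self):
        super().__init__()
        self.found = set()

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self.found.add("title")
        elif tag == "h1":
            self.found.add("h1")
        elif tag == "meta":
            if attrs.get("name") == "description":
                self.found.add("meta_description")
            elif (attrs.get("property") or "").startswith("og:"):
                self.found.add("og_tags")
        elif tag == "link" and attrs.get("rel") == "canonical":
            self.found.add("canonical")

def audit_head(html: str) -> dict:
    """Return pass/fail for each on-page check item."""
    checks = ["title", "meta_description", "og_tags", "canonical", "h1"]
    parser = HeadAuditParser()
    parser.feed(html)
    return {check: check in parser.found for check in checks}

# Invented sample page reproducing the audit's failure pattern.
sample = """
<html><head>
  <title>Ergonomic Office Chairs</title>
  <link rel="canonical" href="https://example.com/chairs">
</head><body><p>No H1 on this page.</p></body></html>
"""
print(audit_head(sample))
```

A real audit would fetch each page and run a check like this across the full sitemap rather than a single document.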
Website Performance
Page load speed is a prerequisite for AI crawlers to fully index a website's content. The company's website recorded a PageSpeed performance score of just 7 out of 100 — a critically low result. At this performance level, AI crawlers are highly likely to time out before completing a full page load, meaning the content may never be included in training data or real-time indexes.
While the SEO sub-score of 83 shows that static configuration elements like meta tags have been addressed, the performance bottleneck effectively cancels out those technical efforts. Common causes of this type of performance failure include uncompressed, high-resolution product images (a frequent issue on e-commerce pages for physical products like office chairs), disabled browser caching, and render-blocking JavaScript resources. The failure to enable HTTP/2 further compounds load delays when multiple resources must be fetched simultaneously.
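The weight problem described above can be made concrete with a simple page-weight budget sketch. All resource names, sizes, and the budget figure are hypothetical, chosen only to illustrate how a few uncompressed product photos and a render-blocking bundle can dominate total transfer size:

```python
# Hypothetical resource manifest for a product page (sizes in KB).
resources = {
    "hero-chair.jpg": 4800,   # uncompressed 4K product photo
    "gallery-1.jpg": 3200,
    "gallery-2.jpg": 2900,
    "app.bundle.js": 1400,    # render-blocking script
    "styles.css": 220,
    "index.html": 85,
}

BUDGET_KB = 1600  # an assumed mobile page-weight target

total = sum(resources.values())
oversized = {name: kb for name, kb in resources.items() if kb > 500}

print(f"total page weight: {total} KB (budget {BUDGET_KB} KB)")
for name, kb in sorted(oversized.items(), key=lambda item: -item[1]):
    print(f"  oversize: {name} ({kb} KB)")
```

In this invented manifest the three images alone account for the vast majority of the page weight, which is the typical shape of the problem on product-photo-heavy e-commerce pages.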
Expert Recommendations
Based on the audit data, we identified three priority areas with the highest potential return for improvement. Each involves layered strategic decisions; the following outlines the core diagnostic directions.
Priority 1: Resolve the Performance and Indexability Crisis
A PageSpeed score of 7 is not just a user experience problem — it directly determines whether AI crawlers can read the page at all. The inaccessible bare domain compounds this by cutting off additional indexing pathways. These are foundation-layer issues. Addressing everything else while these remain unresolved is the equivalent of building on sand. Priority: Critical.
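A bare-domain health check like the one that failed here can be sketched as a small validator. The hostnames below are placeholders; a live check would issue a HEAD request to the bare domain and pass the real status code and Location header into this function:

```python
from typing import Optional

def is_healthy_bare_domain(status: int, location: Optional[str],
                           canonical_host: str = "www.example.com") -> bool:
    """A bare-domain response is healthy if it permanently
    redirects (301 or 308) to the canonical www host."""
    return (status in (301, 308)
            and location is not None
            and canonical_host in location)

# Offline examples with placeholder values:
print(is_healthy_bare_domain(301, "https://www.example.com/"))  # healthy
print(is_healthy_bare_domain(200, None))  # bare domain serving directly
```

The failing case in this audit was worse than either example: the bare domain did not respond at all, so the first step is simply restoring DNS/server coverage for it, then adding the permanent redirect.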
Priority 2: Build Industry Content Assets to Break the "Brand-Yes, Industry-No" Pattern
Ten industry queries, zero results. The core reason is a near-complete absence of content associated with industry-level keywords. AI recommendation logic is straightforward: the brands that answer buyers' questions are the brands that get recommended. Currently, the company's website lacks the depth of procurement-focused content that would qualify it as a relevant answer. Competitors are filling that vacuum entirely.
Priority 3: Deepen Schema Markup Semantics
While Schema JSON-LD is technically present (one of the few passing items in the technical audit), basic markup implementation is not sufficient. Full deployment of Product Schema, Organization Schema, and FAQ Schema would give AI platforms the structured signals they need to accurately understand the company's product categories, service scope, and brand positioning — and to include it as a candidate in industry-level query responses.
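A minimal sketch of what fuller markup might look like, serialized from Python. All names, URLs, and field values are placeholders rather than the audited company's data; each serialized block would be embedded in the page inside a `<script type="application/ld+json">` element:

```python
import json

organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Seating Co.",
    "url": "https://www.example.com",
    "sameAs": ["https://www.linkedin.com/company/example-seating"],
}

product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Ergonomic Task Chair",
    "category": "Ergonomic Office Chairs",
    "brand": {"@type": "Brand", "name": "Example Seating Co."},
    "description": "Mesh-back task chair with adjustable lumbar support.",
}

faq = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
        "@type": "Question",
        "name": "How do I choose an ergonomic chair for a large office?",
        "acceptedAnswer": {
            "@type": "Answer",
            "text": "Prioritize adjustable lumbar support, seat depth, "
                    "and warranty terms when buying at scale.",
        },
    }],
}

for block in (organization, product, faq):
    print(json.dumps(block, indent=2))
```

The point of layering all three types is that each answers a different machine question: Organization establishes who the company is, Product describes what it sells, and FAQPage maps its content onto the buyer questions AI platforms actually field.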
AI Search Trends in the Office Seating Industry
Procurement behavior in the office chair and seating solutions market is undergoing a structural shift — one that moves in lockstep with the rise of AI search.
In traditional purchasing workflows, corporate buyers discovered seating vendors through search engines, trade shows, or sales referrals. In the AI search era, the starting point of the procurement decision chain has quietly moved upstream. HR managers and office administrators now open ChatGPT and ask questions like "what are the best ergonomic chair brands for all-day sitting," "how should I budget for seating for a 100-person office," or "what's the difference between ergonomic chairs and standard office chairs." These conversational queries are intent-rich, decision-oriented, and heavily dependent on the recommendation lists AI provides. If your brand isn't on the list, that potential customer effectively doesn't know you exist.
There's also a particularly high-value AI search scenario specific to this industry: B2B bulk procurement research. When a growing startup needs chairs for 50 workstations, or an established company is replacing aging office furniture, the procurement lead typically turns to AI first for initial market research — using it to shortlist three to five candidate brands before entering the formal quoting process. Only brands that appear in this "AI pre-screening" phase have a chance at the comparison and negotiation stages that follow.
The remote work trend has added another dimension: knowledge-driven individual purchasing decisions. Consumers research office chairs through AI before buying — asking things like "what chair is best for people with back pain" or "best ergonomic chairs under $500." These long-tail queries are numerous and high-converting, because people asking them are typically already in a buying mindset.
For office chair manufacturers, this represents a genuine window of opportunity. Most brands in this space have not yet systematically optimized for GEO, meaning the competitive landscape for AI visibility remains relatively open. The brands that build comprehensive content assets and solid technical foundations now will gain a compounding first-mover advantage as AI search becomes the default starting point for procurement research.
Find Out Where Your Brand Stands in AI Search
If you want to understand how your brand actually performs across AI platforms like Claude, ChatGPT, and Gemini, we're here to help.
- 🔍 Free AI Visibility Self-Assessment Tool: Get a preliminary evaluation report in 5 minutes
- 📋 Book a Free Results Consultation: Receive an in-depth analysis from our team tailored to your industry and competitive landscape
For more AI visibility case studies, visit the Joseph Intelligence Case Study Index.
Disclaimer
This article is based on anonymized audit data from an actual engagement. All information that could identify the specific company has been removed. AI platform responses are non-deterministic — results may vary across different query sessions and time periods. Technical audit results and performance scores represent a point-in-time snapshot.
Frequently Asked Questions
Q: Why does my brand show up when searched by name but not in category or product searches on AI platforms?
A: This is one of the most common AI visibility gaps we identify in audits. Brand-name recognition in AI systems typically comes from general web mentions and training data, while industry-level recommendations require content that directly addresses buyer questions using category-specific keywords. If your website lacks buyer-focused content — buying guides, comparison articles, use-case pages — AI platforms have no basis to include you in those recommendations.
Q: How much does website speed actually affect AI visibility?
A: More than most companies realize. AI crawlers operate under time constraints similar to traditional search crawlers. If a page takes too long to load, the crawler abandons it before fully reading the content. This means even well-written, keyword-rich content may never be indexed if the underlying site performance is poor. In this case, a PageSpeed score of 7/100 represents a near-complete barrier to AI content discovery.
Q: How long does it take to improve AI visibility after making technical and content changes?
A: Performance and technical fixes — such as improving PageSpeed, adding canonical tags, and implementing H1 tags — can show impact within weeks as crawlers re-index the site. Content-driven improvements to industry visibility typically take longer, often two to four months, as AI platforms need time to process, associate, and weight new content. Structured data (Schema markup) can accelerate this process by giving AI systems explicit, machine-readable signals about your content's relevance.
Q: What type of content is most effective for improving industry-level AI visibility in the B2B space?
A: Content that directly answers the questions buyers ask during the research phase tends to perform best. For office seating, this includes procurement guides (e.g., "how to choose ergonomic chairs for a large office"), comparison content (e.g., "ergonomic chairs vs. standard chairs: what's the ROI for employee wellness"), and use-case articles addressing specific buyer segments. FAQ-format content and structured product descriptions with full Schema markup also significantly improve AI citation rates.