Monthly LLM Visibility Benchmark — April 2026
What do 4 AI models cite in French beauty D2C?
94
Top score — Typology (composite)
8
Brands below 50/100 — invisible to AI
−22
Avg pts drop for D2C brands on ingredient-specific queries
Visibility Heatmap
Scores by brand & platform
15 French beauty D2C brands ranked by weighted composite LLM visibility score across 4 AI models (0–100)
LLM Visibility Score — April 2026
ChatGPT
Perplexity
Gemini
Claude
Full Rankings
15 brands ranked — April 2026
Visibility scores reflect how often each brand is mentioned across ChatGPT, Perplexity, Gemini, and Claude in discovery, comparison, and purchase queries.
| # | Brand | ChatGPT | Perplexity | Gemini | Claude | Overall | Trend |
|---|---|---|---|---|---|---|---|
Key Findings
What the data reveals
01
An 11.8× gap separates 1st from last
Typology scores 94/100. Joëlle Ciocco scores 8/100. Same market, same vertical, same LLMs — an 11.8× visibility gap. Typology consistently appears across all 40 prompt categories. It has the broadest indexed content in both French and English — skincare actives, routines, ingredient guides, and editorial coverage — which correlates strongly with top-tier scores across all four models.
02
6 brands score below 35 — despite real community strength
Respire has 500k+ community members but scores 46/100. Seasonly has cult status but 33/100. Merci Handy, known for its vibrant brand, sits at 29/100. The gap isn't product quality — it's indexed content volume. A single tier-1 editorial feature or structured FAQ page can shift a score by 10–15 points.
03
Ingredient-specific queries expose a new gap
When users ask about retinol, vitamin C, hyaluronic acid, or niacinamide, the results diverge sharply. Brands with ingredient education content (Typology, Caudalie, Embryolisse) hold strong. D2C-only brands without ingredient guides drop 18–25 points on these queries vs. their general discovery scores.
Methodology — Read full methodology →
Analysis based on real LLM responses to 40 French consumer-style queries per brand — covering skincare discovery, ingredient-specific queries, purchase intent, brand comparison, and category exploration — across ChatGPT (gpt-4o-mini), Perplexity (sonar), Gemini (2.0 Flash), and Claude (3.5 Haiku). Scores are averaged across 3 independent runs conducted at 24-hour intervals for statistical stability. Visibility scored 0–100 based on mention frequency and position. Composite score weighted: ChatGPT 40%, Perplexity 25%, Gemini 20%, Claude 15%. Data reflects web-indexed content as of April 23, 2026. Rescanned: April 23–25, 2026. Next benchmark: May 2026.
15 brands
4 LLMs
40 prompts / brand
3 runs × 24h intervals
FR prompts · Beauty vertical
New methodology
Scanned April 23, 2026
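The scoring described in the methodology can be sketched in a few lines. This is an illustrative reconstruction, not Benchfolk's actual pipeline: the `run_score` helper and its frequency × position formula are assumptions, as are the sample numbers; only the 40/25/20/15 model weights and the averaging over 3 runs come from the methodology above.

```python
# Hypothetical sketch of the visibility scoring, under stated assumptions.

# Published composite weights (from the methodology).
MODEL_WEIGHTS = {"chatgpt": 0.40, "perplexity": 0.25, "gemini": 0.20, "claude": 0.15}

def run_score(mentions: int, prompts: int, avg_position: float) -> float:
    """Assumed per-run score: mention frequency scaled by mention position.

    `avg_position` is the mean rank of the brand's mention in the response
    (1.0 = always mentioned first). The exact formula is an assumption.
    """
    frequency = mentions / prompts        # share of prompts with a mention
    position_factor = 1.0 / avg_position  # earlier mentions weigh more
    return min(100.0, 100.0 * frequency * position_factor)

def composite(per_model_runs: dict[str, list[float]]) -> float:
    """Average each model's 3 runs, then apply the published weights."""
    return round(sum(
        MODEL_WEIGHTS[model] * sum(runs) / len(runs)
        for model, runs in per_model_runs.items()
    ), 1)

# Illustrative per-model scores for a top brand, 3 runs each (made-up data).
scores = {
    "chatgpt":    [95, 93, 94],
    "perplexity": [96, 94, 95],
    "gemini":     [92, 93, 91],
    "claude":     [94, 92, 93],
}
print(composite(scores))  # prints 93.7
```

Because ChatGPT carries a 40% weight, a brand strong only on Perplexity or Gemini cannot reach the top tier; the weighting choice alone can move a composite by several points.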
Free weekly intelligence
Get your brand's visibility score — every Monday
Stop guessing if AI mentions you. Benchfolk scans 4 LLMs — ChatGPT, Perplexity, Gemini & Claude — and sends you a score, a benchmark, and 3 things to fix. Just a 90-second read.
Get your free Scorecard — in 24h →
One free scan. No credit card. Upgrade to Intel for weekly tracking.