
Documentation Index

Fetch the complete documentation index at: https://docs.shareofmodel.ai/llms.txt

Use this file to discover all available pages before exploring further.

Frequently asked questions about Share Of Model — focused on the Brand Analysis module, metrics and the credit system.

Metrics

The Brand Mention Rate measures how often a brand is mentioned by LLMs in response to a query about brands in a specific category. A high BMR signals strong visibility and market presence; a low BMR suggests the need for better content strategies or SEO improvements.
Share of Voice combines two factors: Brand Mention Rate and Average Position in LLM responses. SOV provides a comprehensive view of brand visibility by considering both frequency and prominence. Higher SOV reflects more frequent and higher-ranking mentions.
The Brand Mention Rate Over Time chart tracks the evolution of your brand’s visibility across LLMs — useful for monitoring trends and evaluating the impact of marketing efforts or external factors.
A dedicated chart compares the top 25 brands in your category, ranked by how frequently they are mentioned in LLM responses. Use it to benchmark your brand’s performance against competitors.
SOV by LLM is computed with a two-step approach:
  1. Identify the top 10 brands overall — the 10 with the highest SOV across all selected LLMs.
  2. Calculate SOV per LLM — compute each top brand’s SOV per LLM.
Why this method? Fixing the top 10 brands based on overall SOV ensures consistent brand representation across the bar charts, making models directly comparable.

What if a brand is in the top 10 overall but not for a specific LLM? It still appears, even if its rank drops on that LLM, to maintain consistency.
Attribute ratings quantify how LLMs perceive specific attributes of your brand and competitors on a -5 to +5 scale. Use them to compare attribute perception across multiple brands (e.g. quality, customer satisfaction).
The Perceived Strengths and Perceived Weaknesses by LLM charts show how each LLM perceives your brand vs. competitors. They highlight which brand aspects are most frequently recognised and where improvements are needed.
The Perceived Strengths view identifies the key advantages of your brand as perceived by LLMs, grouped into clusters. Use it to understand where your brand excels vs. competitors and surface actionable insights for marketing.
Weaknesses are extracted from LLM responses and grouped into clusters, making it easy to spot recurring areas of improvement and benchmark against competitors.
The SOV over time chart tracks how your brand’s visibility changes vs. competitors, combining BMR and Average Position. Use it to read shifts in brand prominence and the impact of campaigns or external events.
Awareness is measured through textual mentions and frequency, the sentiment behind mentions, and association with key features. Metrics such as mention rates, sentiment analysis and SOV provide quantifiable insight into how top-of-mind a brand is in various contexts.
  • Brand awareness — LLMs are asked to list brands within a category, producing Mention Rate and Share of Voice (SOV).
    • Mention Rate — percentage of times a brand appears in LLM responses (e.g. 55/100 = 55%).
    • SOV — combines mention frequency with response position (top positions weighted more heavily).
  • Brand perception — LLMs rate brands on attributes (-5 to +5), and provide perceived strengths and weaknesses grouped into clusters with their share of pros/cons.
Together, these metrics let you assess visibility, prominence and consumer-style sentiment.

Methodology

A three-step process:
  1. Attribute embeddings are generated for each pro and con (representing meaning as numerical vectors).
  2. A clustering algorithm groups similar attributes by embedding similarity.
  3. An LLM names each cluster, summarising the theme.
Cluster size is then computed by summing occurrences of each attribute belonging to the cluster.
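The three steps above can be sketched in miniature. This is an illustrative toy, not the production pipeline: the embeddings are hand-written two-dimensional vectors, and a greedy cosine-similarity threshold stands in for the real embedding model and clustering algorithm.

```python
# Minimal sketch of the clustering flow (all data here is hypothetical).
from math import sqrt

# Step 1: embeddings for each extracted pro/con attribute (toy vectors).
EMBEDDINGS = {
    "fast delivery":  [0.9, 0.1],
    "quick shipping": [0.85, 0.2],
    "low prices":     [0.1, 0.95],
}
OCCURRENCES = {"fast delivery": 12, "quick shipping": 8, "low prices": 5}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

# Step 2: group attributes whose embeddings are similar enough.
def cluster(embeddings, threshold=0.9):
    clusters = []  # list of lists of attribute names
    for name, vec in embeddings.items():
        for group in clusters:
            if cosine(vec, embeddings[group[0]]) >= threshold:
                group.append(name)
                break
        else:
            clusters.append([name])
    return clusters

clusters = cluster(EMBEDDINGS)
# Step 3 would ask an LLM to name each cluster; here we only compute
# cluster size by summing attribute occurrences, as described above.
sizes = [sum(OCCURRENCES[a] for a in group) for group in clusters]
print(clusters, sizes)
```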
Auto attributes are generated by LLMs. We ask them to suggest important themes or concepts to consider when buying or investing in the analysis category — those suggestions are then used to evaluate the brand.
Charts are displayed weekly or monthly. For new analyses, only a few data points may exist, which makes the line appear flat.
For brand perception and awareness, data tends to stabilise within days or weeks. Pros/cons clusters and attributes need more time — usually about a month — for meaningful patterns to emerge.
The analysis currently focuses on popular LLMs (ChatGPT, Gemini, LLaMA, Claude). New models — and upgraded versions — are added based on priority as they become available.
LLM data collection starts only on the day the analysis is launched, unlike media data, which has historical records. Review the SOV chart after about a month, once enough data has accumulated.
For Brand Perception, models are prompted once per week. For Search Visibility, you can choose weekly, monthly or one-shot.
No. Each prompt is handled individually with no back-and-forth, so previous prompts don’t affect current responses. Context windows remain unaffected.
Occurrences of each attribute. Cluster size is the sum of occurrences for attributes belonging to the cluster. Filters can scope counts to all brands (including competitors) or a specific brand.
Prompt-engineering pipelines industrialise prompt generation and versioning, so each analysis runs several prompt variations.
  • Awareness: “What brands come to mind when you think of {category}?”
  • Category Perception: “Which is better, {brand} or {competitor 1}?”
Selecting a country prepends a “pre-prompt”. For example, choosing USA turns the prompt into something like: “I am from the USA. What brands come to mind when we are talking about {category}?”
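A minimal sketch of how such a pre-prompt could be prepended (the function name and template wording are assumptions, approximated from the example above):

```python
# Hypothetical helper: prepend a country pre-prompt to the base
# awareness prompt, as in "I am from the USA. What brands come to
# mind when we are talking about {category}?"
def build_prompt(category, country=None):
    base = f"What brands come to mind when we are talking about {category}?"
    if country:
        return f"I am from the {country}. {base}"
    return base

print(build_prompt("cloud hosting", "USA"))
```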
Weight is distributed equally across all LLMs.
Position weighting is non-linear: 1st place earns 25 points, 2nd place 10 points, 3rd 5 points, 4th 4 points, etc. A brand can have visible SOV without prominent positions.
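The point values above can be sketched as a lookup table. The fallback score for positions beyond 4th is an assumption here, chosen to illustrate how a brand can still accumulate SOV from non-prominent positions:

```python
# Non-linear position weighting as described above:
# 1st = 25 pts, 2nd = 10, 3rd = 5, 4th = 4.
POSITION_POINTS = {1: 25, 2: 10, 3: 5, 4: 4}

def position_points(rank):
    # Assumed fallback for lower ranks: a small constant, so a brand
    # can still earn visible SOV without prominent positions.
    return POSITION_POINTS.get(rank, 1)

# A brand ranked 2nd, 3rd and 5th across three responses:
print(position_points(2) + position_points(3) + position_points(5))  # 16
```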
For sections that don’t require a specific brand (e.g. Brand Awareness), a system groups variations of the same name (e.g. “AWS” and “Amazon Web Services”). For sections that target a specific brand (Brand Perception, Category Perception), this is not an issue.
Share Of Model is not a consumer survey — it’s a tool for measuring LLMs. Prompts are designed to assess how LLMs interpret and respond in a way that aligns with real-world consumer interactions, without replicating an exact customer’s reasoning.
Each country (including Worldwide) corresponds to a specific set of prompts.
  • Worldwide — prompts that explicitly state “I’m a worldwide…”
  • France — prompts like “I’m a consumer from France…”
  • Spain — prompts like “I’m a consumer from Spain…”
These datasets are separate and not automatically combined.
  • Selecting only Worldwide shows only Worldwide responses.
  • Selecting Worldwide + France combines both, but excludes any other country.
  • Selecting all available filters combines all the data.
Worldwide is a separate dataset, not an average or sum.
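The filter semantics above can be sketched as follows. The data structures are hypothetical; the point is that each country (including Worldwide) is its own dataset, and selecting filters simply concatenates the chosen datasets, with no averaging across countries:

```python
# Hypothetical per-country response datasets.
DATASETS = {
    "Worldwide": ["resp_w1", "resp_w2"],
    "France":    ["resp_f1"],
    "Spain":     ["resp_s1", "resp_s2"],
}

def combine(selected):
    # No averaging or merging: just the union of the separately
    # collected response sets for the selected filters.
    return [r for country in selected for r in DATASETS[country]]

print(len(combine(["Worldwide", "France"])))  # 3
```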

Credits

A credit is the unit used to track the consumption of platform resources. The value depends on the module and the complexity of the task. See the full Credit System article.
Credits = regions × personas. This reflects the increased complexity of analysing the brand across multiple markets and audience segments.
  • 2 regions × 3 personas = 6 credits.
  • 1 region × 1 persona = 1 credit.
| Asset type | Credits |
|---|---|
| Text | 1 per asset |
| Image | 1 per asset |
| Video | 3 per asset |
One-to-one — each search query consumes 1 credit. 50 queries = 50 credits. See the Search credit update for full details.
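The credit arithmetic in this section can be summarised in a few lines (the function names are hypothetical; the values come from the text above):

```python
# Credit arithmetic as stated in this section.
ASSET_CREDITS = {"text": 1, "image": 1, "video": 3}

def brand_analysis_credits(regions, personas):
    """Credits = regions x personas."""
    return regions * personas

def asset_credits(assets):
    """Per-asset cost by type (text/image: 1, video: 3)."""
    return sum(ASSET_CREDITS[a] for a in assets)

def search_credits(queries):
    """One-to-one: each search query consumes 1 credit."""
    return queries

print(brand_analysis_credits(2, 3))               # 6
print(asset_credits(["text", "video", "image"]))  # 5
print(search_credits(50))                         # 50
```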

What’s next

Brand Analysis Starter Kit

Run your first analysis.

Credit System

The credit model explained.