How per4mx works

We track how AI models recommend your company. Here is what we measure, why it matters, and how to read your results.

Topics

What buyers search for

A topic is a business need your customers have. Not a keyword - a real question a buyer would ask an AI assistant.

Examples: "CRM data cleansing services Switzerland" or "B2B lead providers Zurich". Each topic can target specific regions and languages. When you set up per4mx, we auto-discover your topics from your website. You can add, remove, or customize them anytime.

Example

If you sell accounting software in Switzerland, your topics might be: "best accounting software for Swiss SMBs", "payroll solutions DACH region", "bookkeeping automation tools".

Prompts

Simulating real buyer conversations

For each topic, we create multiple prompts - different ways a real buyer might ask AI about that need.

Why multiple prompts? Because buyers do not all ask the same question. One might type "best CRM tools Switzerland", another might ask "who can help me clean up our B2B contact database?". A single query only shows one angle. Multiple prompts reveal how consistently AI recommends you across different phrasings, intents, and perspectives. We average the results across all prompts for each topic, giving you a reliable picture - not a lucky (or unlucky) snapshot from one specific question.
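As a rough sketch of that averaging step (the handling of unranked prompts here is an illustrative assumption, not per4mx's exact formula):

```python
def topic_rank(prompt_ranks):
    """Average ranking across a topic's prompts.

    `prompt_ranks` holds one 1-based position per prompt; None means
    the company was not mentioned in that prompt's answer.
    Unranked prompts are skipped here - a simplifying assumption.
    """
    ranked = [r for r in prompt_ranks if r is not None]
    if not ranked:
        return None  # never mentioned for this topic
    return sum(ranked) / len(ranked)

# Five prompts, ranked in four of them:
print(topic_rank([1, 3, None, 2, 2]))  # → 2.0
```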

Example

Topic "B2B lead providers" might have 5 prompts: a direct comparison query, a use-case question, a budget-focused query, a region-specific question, and an industry-specific variation.

Ranking position

Where you appear

When an AI model answers a buyer question, it typically lists several companies. Your ranking position is where you appear in that list.

Position 1-2 means the AI considers you a top recommendation. Position 3-5 means you are visible and mentioned favorably. Position 6-10 means you appear but are not the first choice. Not ranked means the AI did not mention you at all. We show this as bands (#1-2, #3-5, #6-10) rather than exact numbers because AI rankings fluctuate slightly between queries. The band gives you the meaningful signal without false precision.
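The banding described above can be sketched as a simple mapping (illustrative only - how per4mx treats positions beyond 10 is an assumption here):

```python
def ranking_band(position):
    """Map a raw ranking position to the dashboard band.

    `position` is 1-based; None means the AI did not mention you.
    """
    if position is None:
        return "Not ranked"
    if position <= 2:
        return "#1-2"
    if position <= 5:
        return "#3-5"
    if position <= 10:
        return "#6-10"
    # Assumption: beyond position 10 is treated as not meaningfully visible.
    return "Not ranked"

print(ranking_band(1))   # → #1-2
print(ranking_band(7))   # → #6-10
```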

Mention rate

How consistently you appear

Mention rate answers: "Out of all the different ways buyers ask about this topic, how often does AI bring up my company?"

A mention rate of 80% means you appear in 8 out of 10 different prompt variations. This is different from ranking - you could be ranked #1 when mentioned, but only mentioned in 30% of conversations. Or you could be mentioned in 90% of conversations but usually at position #5. Both metrics together tell the full story. High mention rate + high ranking = strong visibility. High mention rate + low ranking = known but not recommended. Low mention rate + high ranking = strong for specific queries, invisible for others.
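In code terms, mention rate is simply the share of prompt variations in which you show up. A minimal sketch (the list-of-booleans input shape is a hypothetical simplification):

```python
def mention_rate(prompt_results):
    """Fraction of prompts whose AI answer mentioned the company.

    `prompt_results` is a list of booleans, one per prompt variation
    (True = mentioned). Hypothetical data shape for illustration.
    """
    if not prompt_results:
        return 0.0
    return sum(prompt_results) / len(prompt_results)

# Mentioned in 8 of 10 prompt variations:
rate = mention_rate([True] * 8 + [False] * 2)
print(f"{rate:.0%}")  # → 80%
```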

Visibility score

The single number

The visibility score (0-100) combines ranking positions across all AI models, weighted by each model's market reach.

ChatGPT has more users than Brave, so a #1 ranking on ChatGPT contributes more to your score than a #1 on Brave. We weight each model by its approximate market share: ChatGPT 30%, Google AI Overview 25%, Google AI 12%, Gemini 10%, Copilot 8%, and so on. The score is comparable across competitors - if your visibility score is 72 and a competitor has 58, you are objectively more visible across AI platforms. We use the same calculation for everyone, so the comparison is fair.
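As an illustration of the weighting step (not per4mx's production code - how a ranking position becomes a 0-100 per-model score is not specified here):

```python
# Approximate market-reach weights from the text; the remaining share
# ("and so on") goes to smaller models not listed in this sketch.
MODEL_WEIGHTS = {
    "ChatGPT": 0.30,
    "Google AI Overview": 0.25,
    "Google AI": 0.12,
    "Gemini": 0.10,
    "Copilot": 0.08,
}

def visibility_score(model_scores):
    """Weighted average of per-model scores (each 0-100).

    `model_scores` maps a model name to its 0-100 score. Weights are
    normalized over the models actually present - an assumption made
    so this sketch stays self-contained.
    """
    total_weight = sum(MODEL_WEIGHTS[m] for m in model_scores)
    weighted = sum(MODEL_WEIGHTS[m] * s for m, s in model_scores.items())
    return weighted / total_weight

# A strong ChatGPT result dominates the blend because of its weight:
score = visibility_score({"ChatGPT": 90, "Google AI Overview": 60})
print(round(score))  # → 76
```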

Smoothing

Why your data looks stable

AI models do not give the exact same answer every time you ask. We smooth your data across the last 3 monitoring runs to filter out noise.

If you ranked #2 two weeks ago, #4 last week, and #2 this week, the raw data would look like a rollercoaster. The smoothed value shows the real trend: you are consistently around #2-3. This prevents false alarms ("we dropped 2 positions!") and false celebrations ("we jumped to #1!") caused by normal AI variability. Real changes still show through - if you genuinely improve over multiple runs, the smoothed score reflects it within 1-2 cycles.
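The smoothing idea can be sketched as a trailing average over the last three runs (per4mx's actual weighting may differ, e.g. by favoring recent runs):

```python
def smoothed(values, window=3):
    """Average of the most recent `window` monitoring runs.

    Simple trailing mean for illustration; the exact smoothing
    per4mx applies is an assumption here.
    """
    recent = values[-window:]
    return sum(recent) / len(recent)

# Raw ranks 2, 4, 2 smooth to roughly 2.7: consistently around #2-3.
print(smoothed([2, 4, 2]))  # ≈ 2.7
```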

Regions and languages

Local relevance

AI answers differ by region and language. A query in German about Swiss providers gives different results than the same question in English.

per4mx lets you monitor specific region + language combinations. Your prompts can target "Switzerland / German", "Germany / German", or "Switzerland / English" independently. This matters because a Swiss company might rank #1 in German queries but be invisible in English ones - or vice versa. Each combination is tracked separately, and you can filter your dashboard to see performance per market.
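Conceptually, each monitored market is an independent (region, language) key. A minimal sketch with hypothetical data (not per4mx's API or real measurements):

```python
# Hypothetical per-market results, keyed by (region, language).
results = {
    ("Switzerland", "German"):  {"band": "#1-2",  "mention_rate": 0.90},
    ("Switzerland", "English"): {"band": None,    "mention_rate": 0.10},
    ("Germany", "German"):      {"band": "#3-5",  "mention_rate": 0.60},
}

def market_view(region, language):
    """Filter to a single market, as the dashboard filter does."""
    return results.get((region, language))

# Ranked #1-2 in German queries, nearly invisible in English ones:
print(market_view("Switzerland", "German"))
print(market_view("Switzerland", "English"))
```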

Variance

Why results differ between runs

You might notice your ranking changes slightly even when you have not changed anything. This is normal and expected.

AI models are probabilistic - they generate slightly different responses each time. They also continuously update their training data and web indexes. A competitor publishing new content, a news article mentioning your industry, or even the time of day can influence results. This is exactly why we use multiple prompts per topic and smooth across runs. Single-point measurements would be misleading. Our multi-prompt averaged approach gives you the stable, reliable signal that reflects your true AI visibility.

AI Portrait

How models perceive you

Beyond rankings, we extract how each AI model characterizes your company across competitive dimensions.

The AI Portrait shows scores on dimensions like "Data Quality", "Market Presence", or "Service Range" - whatever axes the AI models naturally use to differentiate companies in your space. You can compare your portrait against any competitor to see exactly where you lead and where you trail. These axes are locked after the first analysis so your portrait is comparable across runs. This tells you not just where you rank, but why.

Gap analysis

What to improve

For every topic where you are not in the top positions, per4mx identifies exactly what is missing and generates an action plan.

The gap analysis compares what AI models say about top-ranked competitors versus what they say about you. It identifies specific content gaps, missing signals, and areas where competitors are better described. Each action is categorized: things we can generate for you (blog posts, landing pages, FAQs), things you need to do (add testimonials, update pricing, fix your Schema.org markup), and external signals (directory listings, press mentions). You can paste the URL of an existing page and we will optimize it for better AI visibility, or generate new content from scratch.

Frequently Asked Questions

How often should I check my dashboard?
Weekly is ideal. AI visibility changes gradually - daily checks would mostly show noise. After publishing new content or making website changes, check after the next monitoring run to see the impact.

Why does my ranking differ between providers?
Each AI model uses different training data, web indexes, and ranking algorithms. It is completely normal to rank #1 on ChatGPT but #5 on Perplexity. This is why we track all major models and give you a weighted overall score.

Can I influence which prompts are used?
Yes. We auto-generate prompts during setup, but you can edit, add, or remove them anytime. You can also use AI to suggest new prompt variations. More prompts give more reliable data.

What happens if I am not ranked at all?
This means AI models do not mention your company for that topic. The gap analysis will tell you exactly why and what content or signals are missing. This is the starting point for most customers - per4mx helps you get from invisible to recommended.

How long until I see improvements?
Content changes typically take 2-4 weeks to reflect in AI rankings, as models need to recrawl and reindex your site. Structured data changes (Schema.org, FAQs) can show impact faster. per4mx tracks the trend so you can see progress over time.

Ready to understand your AI visibility?

Start with a free check - no credit card required.

Check your visibility