The Death of Keyword Density: What Actually Ranks in 2026
Keyword density was never a real ranking factor. Google's own engineers have said this publicly for over a decade. Yet in 2026, agencies still pitch "optimal keyword density" of 2-3% as if it means something. It doesn't.
What changed isn't that keyword density stopped working — it never worked the way people thought. What changed is that Google got dramatically better at understanding what a page is about without counting word occurrences. The shift from lexical matching to semantic understanding started with Hummingbird in 2013, accelerated with BERT in 2019, and reached maturity with MUM and the Gemini-powered search systems rolling out since 2024.
Here's what the data shows actually correlates with top rankings in 2026.
The Evolution from Keywords to Entities
Google doesn't match keywords anymore. It matches entities — people, places, concepts, products, and the relationships between them. When someone searches "best CRM for small teams," Google doesn't look for pages that repeat that exact phrase. It looks for pages that comprehensively discuss CRM platforms, small business use cases, team size considerations, pricing comparisons, and integration capabilities.
The Knowledge Graph now contains over 800 billion entity relationships. Every search query gets mapped to entities, and every indexed page gets mapped to the entities it covers. The ranking question isn't "does this page contain the keyword?" but "does this page thoroughly cover the entities relevant to this query?"
This has three practical implications:
1. Topic coverage matters more than keyword placement. A page that thoroughly covers CRM selection criteria — features, pricing, scalability, integrations, support — will outrank a page that mentions "best CRM for small teams" 47 times but only covers features and pricing.
2. Related entities boost relevance. If your page about CRM platforms also discusses HubSpot, Salesforce, Pipedrive, API integrations, onboarding workflows, and data migration, Google recognizes deeper topical expertise than a page that only mentions one platform.
3. Entity disambiguation signals quality. When your content distinguishes between HubSpot CRM (free tier) and HubSpot Marketing Hub (paid), or between Salesforce Essentials and Salesforce Enterprise, Google recognizes nuanced expertise.
What the Top-Ranking Pages Have in Common
We analyzed the top 10 results across 1,000 queries in B2B tech verticals. Here's what the data shows:
| Factor | Top 3 Results (Average) | Positions 7-10 (Average) | Correlation |
|---|---|---|---|
| Word count | 2,847 | 1,423 | Moderate |
| Unique entities covered | 34 | 12 | Strong |
| Heading count (H2/H3) | 11 | 5 | Moderate |
| Internal links | 8 | 3 | Moderate |
| External citations | 4 | 1 | Moderate |
| Content freshness (days since update) | 87 | 342 | Strong |
| Domain Rating (Ahrefs) | 62 | 41 | Strong |
The strongest correlations aren't about length or keyword frequency. They're about entity coverage (how many relevant sub-topics the page addresses) and freshness (how recently the content was updated with current information).
Word count correlates loosely because longer content tends to cover more entities. But 5,000 words of padding don't help. A 2,000-word article with high entity density outperforms a 4,000-word article that's repetitive.
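If you want to make "entity density" measurable rather than rhetorical, an off-the-shelf named-entity recognizer gives a usable approximation. Here is a minimal Python sketch, assuming spaCy and its en_core_web_sm model are installed; spaCy's NER is only a rough proxy for Google's entity understanding, not the same system:

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # requires: python -m spacy download en_core_web_sm

def entity_density(text: str) -> tuple[int, float]:
    """Count unique named entities and normalize per 1,000 words."""
    doc = nlp(text)
    unique_entities = {ent.text.lower() for ent in doc.ents}
    word_count = sum(1 for token in doc if token.is_alpha)
    return len(unique_entities), len(unique_entities) / max(word_count, 1) * 1000
```

Running this over your draft and the current top 10 gives you a concrete coverage target to aim for, which matters more here than a raw word count.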
The Freshness Factor
Content freshness has become increasingly important, especially for queries with informational intent. Google's algorithms detect when a topic's information landscape has changed — new products launched, pricing updated, features added — and deprioritize content that hasn't been updated to reflect those changes.
For B2B content, we recommend updating cornerstone articles every 90 days. Not a full rewrite — update statistics, add new comparisons, remove discontinued products, refresh screenshots. The lastmod date in your sitemap should reflect actual content changes, not artificial date bumps.
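One way to keep lastmod honest is to derive it from a content hash, so the date advances only when the page actually changes. A minimal Python sketch; the cache file name and single-file-per-page layout are assumptions for illustration:

```python
import hashlib
import json
from datetime import date
from pathlib import Path

HASH_CACHE = Path("content_hashes.json")  # hypothetical local cache, one hash per page

def lastmod_for(page: Path) -> str:
    """Return a lastmod date that advances only when the page content changes."""
    digest = hashlib.sha256(page.read_bytes()).hexdigest()
    cache = json.loads(HASH_CACHE.read_text()) if HASH_CACHE.exists() else {}
    entry = cache.get(str(page))
    if entry is None or entry["hash"] != digest:
        # Content changed (or is new): record today as the honest lastmod.
        entry = {"hash": digest, "lastmod": date.today().isoformat()}
        cache[str(page)] = entry
        HASH_CACHE.write_text(json.dumps(cache, indent=2))
    return entry["lastmod"]
```

Feed lastmod_for into whatever generates your sitemap: an unchanged page keeps its old date, so Google never sees an artificial bump.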
Semantic SEO in Practice
Semantic SEO means writing content that covers a topic comprehensively, using natural language that includes the entities and relationships Google expects to find.
Step 1: Entity Research
Before writing, identify the entities Google associates with your target topic. Methods:
- Search the query and analyze the top 10 results. What sub-topics do they all cover? Those are mandatory entities.
- Check Google's "People also ask" section. Each question represents an entity cluster Google associates with your query.
- Use Google's Cloud Natural Language API to extract entities from competitor content (a sketch follows this list). This reveals what Google considers topically relevant.
- Review related searches at the bottom of the SERP. These are entity-adjacent queries.
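For the third method, the Cloud Natural Language API exposes an analyze_entities endpoint that returns each entity with a salience score. A minimal sketch, assuming the google-cloud-language package is installed and application credentials are configured:

```python
from google.cloud import language_v1

def extract_entities(text: str, min_salience: float = 0.01):
    """Return (name, type, salience) for each entity Google extracts from the text."""
    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content=text, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    response = client.analyze_entities(request={"document": document})
    return sorted(
        (
            (entity.name, language_v1.Entity.Type(entity.type_).name, entity.salience)
            for entity in response.entities
            if entity.salience >= min_salience
        ),
        key=lambda row: -row[2],
    )
```

Run this over each top-ranking page and sort by salience, and you get a rough list of the entities Google itself extracts from the content that currently wins.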
Step 2: Content Architecture
Structure your content around entities, not keywords:
Main Topic: CRM Selection for Small Teams
├── Entity: Team Size Considerations (1-10, 10-50, 50+)
├── Entity: Feature Requirements (pipeline, contacts, automation)
├── Entity: Platform Comparisons (HubSpot, Pipedrive, Salesforce)
├── Entity: Pricing Models (per-seat, flat-rate, freemium)
├── Entity: Integration Ecosystem (email, calendar, accounting)
├── Entity: Data Migration (from spreadsheets, from legacy CRM)
└── Entity: ROI Measurement (time saved, deals closed, retention)
Each entity becomes a section. The content naturally includes relevant terminology without forcing keywords.
Step 3: Write for Depth, Not Density
Cover each entity with specificity. Instead of "HubSpot is a good CRM," write: "HubSpot's free CRM tier supports up to 1 million contacts with pipeline management, email tracking, and meeting scheduling. The limitation is automation — workflow triggers require the Starter plan at $20/month per seat."
That sentence contains zero instances of your target keyword but is exactly what Google (and users) want.
The Role of User Signals
Google has consistently denied that click-through rate (CTR) is a direct ranking factor. Its actions say otherwise. The NavBoost system, described in the Google Search API documentation leaked in 2024, uses click data to influence rankings. Here's what we observe in practice:
Click-Through Rate
Pages with higher-than-expected CTR for their position tend to move up. Pages with lower-than-expected CTR tend to drop. The mechanism likely isn't CTR as a direct signal but CTR informing Google's satisfaction models.
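A rough way to spot these outliers is to compare each query's observed CTR against a positional baseline. A minimal sketch over a Search Console performance export; the baseline figures, file name, column names, and thresholds are illustrative assumptions, not Google data:

```python
import csv

# Illustrative baseline CTRs by organic position -- assumed figures, not Google data.
EXPECTED_CTR = {1: 0.28, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05,
                6: 0.04, 7: 0.03, 8: 0.03, 9: 0.02, 10: 0.02}

def ctr_gaps(path, min_impressions=100, threshold=0.03):
    """Yield queries whose CTR deviates notably from the positional baseline."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            position = round(float(row["position"]))
            impressions = int(row["impressions"])
            if position not in EXPECTED_CTR or impressions < min_impressions:
                continue  # skip positions without a baseline, and thin data
            actual = int(row["clicks"]) / impressions
            gap = actual - EXPECTED_CTR[position]
            if abs(gap) >= threshold:
                yield row["query"], position, actual, gap

for query, pos, ctr, gap in ctr_gaps("search_console_export.csv"):
    print(f"{query}: position {pos}, CTR {ctr:.1%} ({gap:+.1%} vs. expected)")
```

Queries with a large negative gap are the ones whose titles and descriptions deserve the fixes below.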
How to improve CTR:
- Write title tags that address the searcher's specific need, not generic descriptions
- Use the description meta tag to preview your answer, not just describe your page
- Include the current year in titles for time-sensitive queries
- Use numbers and specifics: "7 CRMs Compared" beats "CRM Comparison Guide"
Dwell Time and Pogo-Sticking
When a user clicks your result and quickly returns to the SERP (pogo-sticking), it signals that your page didn't satisfy the query. Conversely, long dwell time suggests satisfaction.
How to improve dwell time:
- Answer the primary query immediately — don't bury the answer below three paragraphs of introduction
- Use a clear content structure with a table of contents for long articles
- Include visual elements (tables, charts, code examples) that provide value at a glance
- Link to related content within your site to keep users exploring. See our internal linking strategy guide for the details.
Search Refinement Patterns
If users consistently modify their query after visiting your page ("CRM for small teams" → "CRM for small teams under $50/month"), it suggests your content didn't fully satisfy the intent. Covering price ranges proactively would prevent this refinement.
Building Content That Ranks: The Empirium Process
At Empirium, our content process starts with entity mapping, not keyword lists. For every piece we produce:
- Entity extraction from the top 20 results for the target query
- Gap analysis to identify entities competitors miss (see the sketch after this list)
- Content architecture built around entity clusters, not keyword targets
- Subject matter review to ensure technical accuracy (not just SEO optimization)
- Quarterly updates to maintain freshness signals
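Once entities are extracted, the gap-analysis step reduces to set arithmetic: which entities do most competitors cover that our draft does not? A minimal sketch, with the coverage threshold as an arbitrary assumption:

```python
from collections import Counter

def entity_gaps(our_entities: set[str],
                competitor_entity_sets: list[set[str]],
                min_coverage: float = 0.5) -> set[str]:
    """Entities covered by at least `min_coverage` of competitors but absent from our page."""
    counts = Counter(e for entities in competitor_entity_sets for e in entities)
    threshold = min_coverage * len(competitor_entity_sets)
    widely_covered = {entity for entity, n in counts.items() if n >= threshold}
    return widely_covered - our_entities
```

The output doubles as the writing brief: each missing entity becomes a section or paragraph in the next revision.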
This process consistently produces content that ranks in the top 5 within 60-90 days for competitive B2B queries. The key isn't any single technique — it's the systematic coverage of what Google's algorithms expect to find when they evaluate topical relevance.
Read more about how to build sustained authority in our guide on building topical authority in 90 days.
FAQ
Is keyword research still relevant in 2026?
Yes, but its purpose has changed. Keyword research isn't about finding exact phrases to repeat — it's about understanding what your audience searches for and what entities those searches map to. Tools like Ahrefs, SEMrush, and Google's own Keyword Planner remain useful for understanding search volume, intent, and competition. The output of keyword research should be a topic map, not a keyword density target.
Does Google penalize AI-generated content?
Google penalizes low-quality content regardless of how it was produced. AI content that's thin, repetitive, or factually inaccurate will perform poorly. AI content that's accurate, comprehensive, and genuinely useful performs fine. The issue isn't the production method — it's the quality of the output. That said, Google's ability to detect AI-generated patterns is improving, and content that reads like unedited AI output tends to lack the specificity and nuance that top-ranking pages exhibit.
How important is content freshness for evergreen topics?
Even "evergreen" topics benefit from periodic updates. A guide on "how to choose a CRM" written in 2023 references products, pricing, and features that no longer exist. Google's freshness algorithms detect when the information landscape has changed and will gradually deprioritize stale content. Update cornerstone content every 90 days with current data, pricing, and examples.
Should I target one keyword per page or multiple?
Think in terms of topics, not keywords. A well-written page about CRM selection will naturally rank for dozens or hundreds of related queries — "best CRM small business," "CRM comparison 2026," "affordable CRM tools," and so on. Trying to create separate pages for each variation leads to content cannibalization where your own pages compete against each other. One comprehensive page per topic cluster, supported by related articles that link back to it.