SEO trends driven by search engines

Who knows better how to get a site to the top of the search results than the search engines themselves? It is Google and Yandex that set the basic rules of SEO promotion, which means their innovations should always be given attention first and foremost.

Moreover, in recent years both search engines have introduced notable changes specifically in search ranking. Google has two such novelties. First, the company took ranking to a new level by rolling out BERT, its neural-network-based "smart" ranking system; its introduction finally overturned the old SEO rules built purely on keyword counts. Then, on top of that neural foundation, came the recent Page Experience Update (PEU).

Yandex, striving to keep up with its main and only serious competitor, is also switching to artificial intelligence. This means that in the pursuit of traffic you can no longer hide behind key phrases alone, especially high-frequency keywords.

What should a site look like to please the new algorithms of each search engine? In fact, nothing changes fundamentally. As was said about five years ago, what matters is not keyword-stuffed form but content with useful facts and tips. That is what you need to rely on; the technical indicators come next. It only remains to figure out which indicators to check.

Google and its new ranking algorithm based on user factors

The Page Experience Update algorithm, completely new though not innovative in its mission, makes it easier to assess whether a site meets the requirements, if only because it has clearly defined parameters: the Core Web Vitals metrics. These will be used to evaluate the user experience of web resources. This analysis of web pages through the behavior of readers is, in fact, the essence of the concept.

There are only three Core Web Vitals metrics (the "vital signs" of a web page):

  • Largest Contentful Paint, LCP;
  • First Input Delay, FID;
  • Cumulative Layout Shift, CLS.

Loading the main content block

The first metric concerns page loading speed, or more precisely, the rendering of the page's largest object (Largest Contentful Paint). Loading the main content block within 2.5 seconds is considered normal. If the page takes longer, it will be demoted in the SERP. A good reason to optimize the weight of the "heaviest" objects, including all kinds of scripts such as virtual fitting rooms or calculators.

Delay on the first interaction

The First Input Delay parameter measures the time between the user's first interaction with the page (for example, opening a product card) and the site's response to it. It is measured in milliseconds.

The optimal FID value is up to 100 ms. If the site's response to a request or user action takes longer, a technical audit is in order. Exceeding this indicator will most likely not be punished immediately, but it will play a role in the final overall assessment.

Visual stability

Literally translated, Cumulative Layout Shift is a measure of accumulated page changes, that is, of any shifts in the layout while the page is open on the user's screen.

If such shifts (in practice, jumps and content failures) are rare during a session, the resource will most likely be considered attractive to the reader. The optimum is a CLS score of no more than 0.1. Strictly speaking, CLS is a cumulative score of the magnitude of unexpected layout shifts rather than a simple count of them, so the fewer and smaller the shifts, the better (ideally, none at all).
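The three thresholds above can be summarized in a small sketch. The helper below is hypothetical (the function and type names are assumptions, not part of any official Google API); it simply classifies measured values against the "good" limits mentioned in this article: LCP up to 2.5 s, FID up to 100 ms, CLS up to 0.1.

```typescript
// Hypothetical helper: classify a page's Core Web Vitals against the
// "good" thresholds discussed above. Names are illustrative only.

type Rating = "good" | "needs improvement";

interface WebVitals {
  lcpSeconds: number; // Largest Contentful Paint, in seconds
  fidMs: number;      // First Input Delay, in milliseconds
  cls: number;        // Cumulative Layout Shift, unitless score
}

function rateVitals(v: WebVitals): Record<string, Rating> {
  return {
    LCP: v.lcpSeconds <= 2.5 ? "good" : "needs improvement",
    FID: v.fidMs <= 100 ? "good" : "needs improvement",
    CLS: v.cls <= 0.1 ? "good" : "needs improvement",
  };
}

// Example: a page that renders and responds quickly,
// but shifts its layout too much while loading.
const report = rateVitals({ lcpSeconds: 1.8, fidMs: 60, cls: 0.25 });
console.log(report);
```

In real projects the raw values would come from field measurements in the browser (for example, via the PerformanceObserver API or Google's web-vitals library) rather than being typed in by hand; the sketch only shows how the pass/fail boundaries line up.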

YATI from Yandex

Yandex’s "transformers" (the name given to its search algorithms since the Palekh and Korolev updates) are also changing their approach. More precisely, they are steadily moving from the old practice of assessing material "head-on" to the new practice of predicting user actions.

In other words, the company's neural network, called YATI, will, according to its developers, be able to predict which of the sites in the results the reader wants to choose and which text best answers the question asked. Of course, adapting to such intentions of the robot is not easy when all you have at your disposal is a considerable but "non-learning" arsenal of SEO tools.

Although, according to some experts, the secret lies in simplicity. If a page provides useful information, the smart algorithm will choose it. If a resource contains only formally structured texts with keywords inserted any which way, getting into the top 5 will be difficult.

How it was before

If truth is learned by comparison, then comparing the two ranking models, the classic one and the fundamentally new one, should make it clear what to do next with SEO. According to experts who have tracked the milestones in the evolution of Yandex's algorithms, the robot previously simply matched the user's query against the content of a site's page: the more they had in common, the higher the site ranked in the search results.

At the same time, because the crawler, for all its programmatic diligence, still could not understand the content of the materials, "clarifying" assessment tools were used to help it:

  • spam;
  • behavioral factors;
  • sizes of texts;
  • the presence of markup and many other points (more than 200 factors in total; the exact number is unknown).

How it will be now

The YATI algorithm, built on neural network technologies and capable of learning on its own, will be able not merely to analyze but to understand the user's query: paraphrase it, narrow or expand the scope of the information sought, and much more.

Accordingly, it will also be able to understand what is actually shown on the promoted pages, and, over time, to compare how well the answer in the material correlates with the reader's question.

Clearly, the revolution will not happen overnight. But over time YATI may really eliminate the need to use key phrases in texts. True, experts have already concluded that this will affect only some topics; for the rest, everything will work as before (semantics, structure, fast loading, information about the author, etc.).
