The pandemic flared up at the height of the debate about whether classic websites have a future in online sales and online earnings, and it effectively ended that debate. It is de facto very difficult to work remotely and sell over the Internet without a website, at least on the scale any self-respecting business should aim for.
Website development will go on, that much is guaranteed. More than that, growing competition will make working on sites many times harder, and an aspect as important as the selection of semantics in this niche will demand maximum attention.
Meanwhile, experts in content management and developers of software for the same purposes have no plans to stop trying to simplify and speed up one of the most expensive and time-consuming parts of filling a website with content. There are already more than a hundred programs designed to at least partially optimize work with the semantic core, and their number keeps growing.
This puts one question at the top of the agenda: is there any point in using automated methods for collecting the semantic core? To answer it, we need to examine the approach's strengths and weaknesses, which means comparing the two workflows step by step.
Two Methods for Collecting Semantics
So today, a core of key phrases for each new article is formed in one of two ways.
- Manually. This costs more in time and money, requires a team of 2 to 10 people, and takes from two weeks to several months. But in the end, the manager receives a detailed list of planned articles for the site.
- Using one of the proven automation methods. As a rule, this takes no more than a day and costs at least 3-5 times less, but errors and omissions of important phrases are possible along the way.
How is semantics collected by hand?
Here the action plan is as follows: first, collect absolutely every word by which a given product (service, job offer) might be searched for on the Internet; then gradually eliminate the extra and non-working keys (commercial keys on an informational site, for example). In practice, the process looks like this:
- one specialist prepares a diagram of all the subtopic branches within the key topic (it can take the form of a detailed outline, diagram, or algorithm, and is often called a map);
- the same performer (or, better, two assistants) collects from the finished map all the phrases (markers) that may be associated with its subtopics;
- the semantics team checks the collected words and phrases (there can be tens or hundreds of thousands of them) for regional relevance and search frequency (the first stage of screening);
- at the second stage of filtering, everything that looks illogical, is unrelated to the specific business (seller, brand), might be disliked by the search engine's crawler, or duplicates another entry is removed from the list;
- the third stage of filtering coincides with grouping the collected keys into plans for future articles.
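The three screening stages above can be sketched in code. Below is a minimal illustration in Python; the sample phrases, frequency threshold, stop-word markers, and grouping rule are all invented for the example and would differ in a real project.

```python
from collections import defaultdict

# Hypothetical raw output of the collection step: (phrase, monthly frequency).
raw_keys = [
    ("buy winter tires", 5400),
    ("buy winter tires", 5400),          # duplicate
    ("winter tires price", 3200),
    ("how to store winter tires", 880),
    ("free winter tires download", 40),  # junk phrase, also below threshold
    ("summer tires price", 2900),        # unrelated to the winter-tire business
]

MIN_FREQUENCY = 100                      # illustrative stage-1 cut-off
STOP_MARKERS = {"free", "download"}      # illustrative stage-2 junk markers
TOPIC_MARKER = "winter"                  # phrase must relate to the business

def filter_keys(keys):
    """Apply the screening stages: dedupe, frequency, relevance."""
    seen = set()
    result = []
    for phrase, freq in keys:
        if phrase in seen:               # drop exact duplicates
            continue
        seen.add(phrase)
        if freq < MIN_FREQUENCY:         # stage 1: frequency screening
            continue
        words = set(phrase.split())
        if words & STOP_MARKERS:         # stage 2: illogical / junk phrases
            continue
        if TOPIC_MARKER not in words:    # stage 2: unrelated to the business
            continue
        result.append((phrase, freq))
    return result

def group_keys(keys):
    """Stage 3: naive grouping by search intent for future article plans."""
    groups = defaultdict(list)
    for phrase, _freq in keys:
        intent = "commercial" if {"buy", "price"} & set(phrase.split()) else "informational"
        groups[intent].append(phrase)
    return dict(groups)

clean = filter_keys(raw_keys)
print(group_keys(clean))
```

In a real pipeline the frequency and relevance data would come from a keyword statistics service, and grouping would use far richer rules, but the staged structure stays the same.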
This is a very general plan, and additional steps are often wedged into it. One example is an audit of the main competitors, which may reveal that important keys were missed. Filling those gaps means returning to the first step of the plan and repeating all of its points from scratch.
What does an automated process look like?
The automated workflow can look different depending on the software the performers use. The simplest (and therefore most common) scenario is collecting keys from the texts of a competitor's site.
It works very well when phrases are collected to fill a seller's website (writing copy for landing pages). It also sometimes gives good results for blog content, because competitors tend to write materials there on topics that matter to the buyer.
Here is what the collection work looks like:
- the relevant competitor pages are run through a service that can extract the keys and LSI phrases embedded in them;
- the resulting list is loaded into a paid or free processing tool;
- then the results are grouped (again automatically; the user only sets the conditions for dividing the phrases into blocks).
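The first step, pulling candidate phrases out of a competitor page, can be approximated with nothing but the Python standard library. This is a rough sketch, not a real extraction service: it strips HTML tags, skips script and style blocks, and counts the most frequent two-word phrases; the sample page is invented.

```python
from collections import Counter
from html.parser import HTMLParser
import re

class TextExtractor(HTMLParser):
    """Collect visible page text, skipping <script> and <style> content."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = False

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip = True

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip = False

    def handle_data(self, data):
        if not self._skip:
            self.parts.append(data)

def candidate_keys(html, n=2, top=5):
    """Return the `top` most frequent n-word phrases from a page's text."""
    parser = TextExtractor()
    parser.feed(html)
    words = re.findall(r"[a-z']+", " ".join(parser.parts).lower())
    grams = zip(*(words[i:] for i in range(n)))   # sliding n-word windows
    counts = Counter(" ".join(g) for g in grams)
    return counts.most_common(top)

# Invented competitor page for demonstration.
page = """
<html><head><style>p {color: red}</style></head>
<body><h1>Winter tires guide</h1>
<p>Choosing winter tires is easy. Winter tires differ by tread.</p>
</body></html>
"""
print(candidate_keys(page))
```

A production service would add stemming, stop-word removal, and LSI detection on top of this, but simple n-gram counting already surfaces the dominant phrases of a page.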
The method has two areas of increased responsibility for those who control the process. First, the results of the automatic analysis must be checked manually at each stage. Second, it is critical to choose the right subjects for analysis, i.e. the competitors.
The more accurately they are selected (by niche, geolocation, brand positioning, target audience, and so on), the more accurate the semantics will be. In fact, this selection takes most of the time: from 2 to 12 hours. The remaining manipulations (including manual review outside the software) take no more than 2-3 hours.
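To make those selection criteria concrete, here is a toy Python sketch that ranks candidate competitors by how many of the attributes above they share with the target site. The weights, site names, and profiles are entirely hypothetical; a real selection would rely on market research rather than a lookup table.

```python
# Invented weights for the criteria mentioned above; niche similarity
# matters most, target audience least in this toy model.
CRITERIA_WEIGHTS = {
    "niche": 4,
    "geolocation": 3,
    "positioning": 2,
    "audience": 1,
}

def competitor_score(profile, target):
    """Weighted count of criteria on which a candidate matches the target site."""
    return sum(
        weight
        for criterion, weight in CRITERIA_WEIGHTS.items()
        if profile.get(criterion) == target.get(criterion)
    )

target = {"niche": "tires", "geolocation": "Moscow",
          "positioning": "budget", "audience": "retail"}

candidates = {
    "site-a.example": {"niche": "tires", "geolocation": "Moscow",
                       "positioning": "premium", "audience": "retail"},
    "site-b.example": {"niche": "auto parts", "geolocation": "Moscow",
                       "positioning": "budget", "audience": "retail"},
}

ranked = sorted(candidates,
                key=lambda s: competitor_score(candidates[s], target),
                reverse=True)
print(ranked)  # site-a matches on niche, so it outranks site-b
```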
Pros and cons of the automated method
Of course, the specific pros and cons of the automated scenario should be weighed once its method has been chosen. For automation based on competitors' articles, the following advantages stand out in comparison with manual selection:
- one person can do everything, so there is no need to assemble a team of specialists;
- the second plus follows from the first: savings on salaries (the average cost of an hour of a semantics specialist's work, when it is not a regular engagement, is 250-350 rubles);
- the whole range of work can be entrusted to junior employees (a digital agency, for example, does not need to involve a senior specialist for every site);
- the gain in time is the main advantage of automation. For content resources it is of key importance, because trends for some audience groups change literally every week, and after spending a month on semantics you may end up with 50% of article plans that no longer interest anyone.
What are the disadvantages
There are also disadvantages. In the example above (competitor analysis), you have to rely entirely on someone else having worked through the topic well. Anything your colleagues missed, you will also miss, unless you combine the manual and automatic methods.
The second point concerns the niche: automatic key collection modeled on other sites only works when sites with a very similar theme actually exist. Finally, the third significant drawback: the process will not be fully automated in any case, and plenty of routine work will remain.