Introduction
I have been working at a product company in the fields of finance, economics, and business for about five years now. Alongside this subject area, we have a second, equally important focus: SEO optimization 📈. My previous job was at a web studio, where I built numerous small and medium-sized websites, some of which were also tailored for search engine optimization. The difference in the quality of optimization between these two places is significant. At my current job, a tremendous amount of attention is paid to it, and rightfully so: we always have a team of 5-7 SEO specialists, and even the company executives are excellent SEO specialists themselves 👨💼. Over this time, I have heard a lot of smart ideas from them, and some of it has stuck with me.

Theoretical and Technical SEO Optimization
In SEO optimization, I will highlight two directions: the first is the familiar SEO that I will call theoretical or simply regular, and the second is technical.
The first 📊 is what all SEO specialists do: competitor analysis, work with query statistics services like Wordstat, building a semantic core, and the follow-up work before launching a website or a particular section of it.
The second ⚙️ direction I want to highlight is technical SEO. In my opinion, it emerged much later than theoretical SEO: in the early years of the internet, this aspect of promotion was of little concern to anyone, and search engine algorithms paid significantly less attention to it. Over the years at my current company, I have developed a clear list 📋 of what technical SEO optimization includes, and perhaps someday I will write about it as well, for example in posts at work, but it is unlikely to appear here, as I do not publish such complex technical material in my blog.
Wordstat is a service from Yandex that allows you to understand and analyze which queries users of the Yandex search engine most frequently search for.
The semantic core of a website (semantics) is a list of keywords and phrases that bring targeted visitors to the site, used for promoting the site in search engines.
Triggers for Writing an Article
This article will primarily focus on theoretical SEO optimization and its imperfections today. However, since it is closely related to technical SEO, I would also classify some aspects of technical work as useless if we had ideal search engines.
The idea for writing this article came from a certain false opinion that I constantly hear (even at work), that search engines, through neural networks, have now developed to such an extent that they cannot be deceived 🤖 and that they are very smart. Every year, more ranking factors are applied; however, do they really improve the overall picture of SEO optimization? In my opinion, no! Let’s try to figure it out.
Ranking (in search results) is the prioritization of search results for different websites and links. In other words, when ranking, the search engine decides why one website for the same query should be at the top of the first page while another should be somewhere in the middle of the second page.
In any case, I regularly encounter SEO at work, and sometimes I even read something for myself. This year, I read the book 📗 "The Bright Side of Website Promotion" by Ramazan Mindubaev, Dmitry Bulatov, and Albert Safin. I also watched a number of video materials accompanying this book. The authors refer to this as the bright side of website promotion. However, for me, it feels more like the dark side of SEO and a lot of meaningless and foolish work. Of course, it’s not as dark as when people used to deceive search engines by listing all the keywords in a completely hidden block on their website pages. Still, I do not see a bright side in modern SEO; rather, I see the imperfections of search engines. I am in no way criticizing modern SEO specialists for their methods. They are simply playing by the existing rules and utilizing all possible and permissible optimization methods.
So, what's wrong with search engines?
I will try to break down all my grievances into several points and examine each one in detail.
1. Algorithms and Ranking Factors. 🔝
There are many factors and criteria by which a search engine evaluates the "quality" of content, and each engine may have a different number of them. For example, Yandex has over 200 factors. Many of these factors can be grouped by similar criteria: for instance, behavioral factors (how users behave on the page) or domain name factors (how old the domain name is, whether it appears in spam databases, how well it matches the site's subject matter), and so on.
Every year, ranking factors are improved, but this happens so slowly that at this rate of minor improvements we will reach a truly good search engine in about 50 years. Each new ranking algorithm looks more like fine-tuning of existing factors: increase factor A by 1%, decrease factor B by 4%. Whether these adjustments are made by hand or by neural networks, we do not know; what matters is that this is clearly not enough for a conceptually good search.
Among the ranking factors there are many strange ones: time spent on a site does not always indicate the quality of the content; domain age does not always mean an old domain is of higher quality than a recently registered one; and so on.
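To make the "fine-tuning" complaint more tangible, here is a deliberately oversimplified sketch in Python. The factors and weights are invented for illustration; no search engine publishes its actual formula.

```python
# Invented factors and weights; the point is only that nudging weights
# by a few percent is fine-tuning, not a conceptual improvement.
weights = {
    "behavioral": 0.30,      # time on page, bounce rate, ...
    "domain_age": 0.15,
    "text_relevance": 0.40,
    "links": 0.15,
}

def score(page_factors: dict[str, float]) -> float:
    """Weighted sum of normalized factor values in [0, 1]."""
    return sum(weights[name] * page_factors.get(name, 0.0) for name in weights)

# A "new ranking algorithm": factor A up 1%, factor B down 4%.
weights["text_relevance"] *= 1.01
weights["domain_age"] *= 0.96

print(score({"behavioral": 0.8, "domain_age": 0.2, "text_relevance": 0.9, "links": 0.5}))
```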

2. Eternal Experiments. ⚖
We are always conducting experiments and tests: measuring whether a change will let us outpace competitors for certain queries or simply bring more traffic (visitors) to the website. We do not know precisely what will appear in the snippets of search results and rely only on loose recommendations. I have nothing against testing aimed at analyzing human behavior and perception: all people are different, and changing a green button to blue can indeed increase or decrease the number of clicks by a certain percentage. But if people are all different, the search engine (Google, for example) is a single entity. Why does swapping certain blocks, or adding some text to a block, make a page better or worse in the search engine's opinion? We should not have to guess what the search engine considers the best solution. We should know it definitively.
Snippets (in search results) are additional materials from a page displayed alongside the link to the site and the text description of that page. Snippets can include addresses, phone numbers, accordions, mini-tables, and many other types of information.
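For the legitimate, human-facing kind of testing, a simplified sketch might look like the one below. The click rates and visitor counts are made up; the point is only that this guessing game makes sense for a crowd of different people, not for a single deterministic ranking algorithm.

```python
import random

# Made-up A/B test: how real visitors react to a green vs. blue button.
random.seed(42)

def simulate_clicks(visitors: int, click_rate: float) -> int:
    """Count how many of `visitors` clicked, given an assumed click rate."""
    return sum(random.random() < click_rate for _ in range(visitors))

green_clicks = simulate_clicks(10_000, 0.041)  # variant A
blue_clicks = simulate_clicks(10_000, 0.046)   # variant B

print(f"green CTR: {green_clicks / 10_000:.2%}, blue CTR: {blue_clicks / 10_000:.2%}")
```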
3. Generation of Excess Content. 🗃
Humanity today produces 8,500 times more content in a single day than is stored in the Library of Congress; every second, 1,000 times more is published, and daily, 80 million times more internet content is created than the 130 million printed books published throughout human history.
Brett King
One of the main problems with SEO, in my opinion, is the generation of excess content and the way the semantic core is built. First, we analyze search queries. Then we compose (adjust) the page titles and other key phrases according to the intents, in descending order of importance, with the most important ones placed at the beginning.
Intents are the desires and intentions of the user; what they have in mind when entering a search query.
When people say that a search engine is a complex system of factors that neural networks and other learning/self-learning algorithms work with, I always see it differently. In my understanding, a search engine is like a child that reacts to queries. Whoever optimizes better—legally or through deceit (by finding a loophole)—will be prioritized. You never know for sure what this child likes and what it doesn’t, and you are always conducting various A/B tests. It’s like a trusting grandmother who has her own opinion, but it is rarely truly accurate until a person (an assessor) comes along whom the search engine trusts implicitly.
Let’s provide a very real example. We have 15,000 to 20,000 pages on the website that are 95-99% similar in content. Only the titles change, and in some cases, there is indeed a tiny amount of different information.
Such pages could include, for example:
Pensioner loans of 100,000 rubles in Smolensk
Pensioner loans of 100,000 rubles in Omsk
Pensioner loans of 100,000 rubles in Tyumen
There could be thousands of such cities. The required amount can vary, say, from 10,000 to 1 million rubles, and instead of a pensioner the borrower can be anyone: a student, a service member, a housewife, a disabled person, an immigrant, or any other social group. Any noun describing who a person can be will fit the query. And how do we handle such situations? Just think for a moment about what we do to "feed" this information to the search engine. That's right, we generate all possible combinations of options! And all this just to ensure that our titles match the query frequencies as closely as possible. To cover more queries, we create hundreds, if not thousands, of pages with various combinations of these options.
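To give a sense of the scale, here is a minimal sketch of how such title combinations get generated. The word lists are tiny and invented, but the approach is exactly this kind of brute-force enumeration.

```python
from itertools import product

# Illustrative lists only; real projects work with thousands of values.
amounts = ["10,000", "50,000", "100,000", "1,000,000"]
borrowers = ["pensioners", "students", "housewives"]
cities = ["Smolensk", "Omsk", "Tyumen"]

# Every combination becomes a separate, nearly identical page title.
titles = [
    f"Loans of {amount} rubles for {borrower} in {city}"
    for amount, borrower, city in product(amounts, borrowers, cities)
]

print(len(titles))  # 4 * 3 * 3 = 36 pages from tiny lists;
                    # real lists easily yield tens of thousands.
```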
Why can't we just create a single page?
"Pensioner loans of 100,000 rubles in the cities [Smolensk, Omsk, Tyumen]." However, in modern realities, such a query (unless it's from an extremely authoritative site) will not rank highly. Moreover, even this option is not ideal for the search engine of the future. The problem with this query is that it explicitly states the amount of 100,000 rubles and the category of the borrower: a pensioner. This does not mean that in these cities, one cannot get a loan for a different amount or for other social groups. An ideal search engine should understand that there is a single page where one can get information about loans from amount N to amount M in cities (list of cities) for groups (list of social groups).
Loans {10,000, 15,000, 20,000, ..., 1,000,000} {pensioner, student, housewife, ..., disabled} in {Smolensk, Omsk, ..., Tyumen}.
The interaction (enumeration) API for the search engine that website optimizers would provide may, and most likely will, look completely different and be more extensive. But I am confident that both modern SEOs and ordinary administrators of blogs, online stores, and other websites would understand this kind of interaction.
An API is a set of tools and functions in the form of an interface for creating new applications, allowing one program to interact with another.
In the ideal search engine of the future, such a page should not rank any lower for a specific high-frequency query. The search engine should pay attention to the quality of the information, its reliability, and the speed and convenience of its presentation. That's it! No need for 20,000 pages. If a resource owner needed to provide parameters for interaction with the search engine, it would be easy to transmit the available amounts, cities, and social groups. Thousands of websites, primarily online stores, especially small and medium-sized ones competing with larger companies for high-frequency queries, would not have to create thousands of pages. Just one single page from each site in the search engine's databases. Imagine how we could save hundreds or even thousands of hard drives and hundreds of servers if we stopped duplicating information, doing unnecessary work, and focused on quality rather than on template-based approaches tailored to search engines.
High-frequency queries are those that have a high demand frequency on the internet. Getting a website to rank for a high-frequency query significantly impacts its traffic growth and visibility.
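Purely as a thought experiment, such a declaration might look something like the sketch below. No search engine accepts anything like this today; the field names are invented and serve only to show how compact one "single page plus parameters" description could be.

```python
import json

# Hypothetical "single page + parameters" declaration a site owner could
# hand to a search engine instead of ~20,000 near-duplicate pages.
page_feed = {
    "url": "https://example.com/loans",
    "topic": "consumer loans",
    "parameters": {
        "amounts_rub": {"min": 10_000, "max": 1_000_000, "step": 5_000},
        "borrower_groups": ["pensioner", "student", "housewife", "disabled"],
        "cities": ["Smolensk", "Omsk", "Tyumen"],
    },
}

print(json.dumps(page_feed, ensure_ascii=False, indent=2))
```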
4. Ignoring Requirements and Guidelines. 🔗
A search engine is a black box full of magic (whether good or bad is unclear) hidden from the eyes of SEO specialists. No one really knows what the outcome will be after optimizing pages against thousands of competing pages. However, even within this magic there are a few clear rules that search engines let everyone play by. Among them are the page title and description, which are supposed to be displayed in the search results. What a person fills into these dedicated fields should be what appears when the page is shown! Should it work this way? Yes! Does it actually work this way? Not entirely! Even knowing where and how to write the description that the user should see, you will find that the search engine quite often ignores the description provided in the dedicated field and instead takes a completely different one that, in the algorithm's view, fits better.
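For readers outside SEO: the fields in question are the regular <title> and <meta name="description"> tags in the page's <head>. The tiny helper below (the function and texts are made up for illustration) shows what a site declares, and the comment notes what the search engine may do with it anyway.

```python
# <title> and <meta name="description"> are the fields the text refers to;
# this helper just renders them for illustration.
def render_head(title: str, description: str) -> str:
    return (
        f"<title>{title}</title>\n"
        f'<meta name="description" content="{description}">'
    )

print(render_head(
    "Pensioner loans of 100,000 rubles in Smolensk",
    "Compare loan offers for pensioners in Smolensk: rates, terms, requirements.",
))
# Even with these fields filled in, the search engine may still build
# the snippet from completely different text found on the page.
```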
5. The Role of Assessors. 🦸♂️
I might be revealing something new to some, but search engines do not operate completely independently. Assessors play a significant role, and in some cases, even a decisive one. They can lower or raise the ranking of a source in the search engine at their discretion. And while they generally process a very small amount of information, they do exist! We also pay great attention to assessors at work. We place important (in our opinion) information in the most visible spots so that it immediately catches the eye of the assessors who, even if briefly, visit the site. If search engines were perfect and ideal, no outside individuals would be needed.
Assessors are individuals, representatives of search engines, who check the quality and reliability of information.
6. Technical Complexity in Programming. ⛓
This may be the most challenging point to read, filled with many unfamiliar terms, but I couldn't leave it out. As a result of the points mentioned above, and some I have not mentioned, creating and maintaining such a resource becomes complicated. The use of SEO-friendly (human-readable) URLs can still be somewhat justified, as readable links and addresses are always more pleasant to deal with, but the overall "correct" nesting of a website's URLs, the boosting of website subsections, and the use of subdomains purely for SEO purposes are complete utopias that significantly complicate programming. Designing a web application that is well optimized for SEO and fully meets the requests and wishes of SEOs is a very challenging task. And if it becomes necessary to completely or significantly restructure the nesting of pages and sections while the website is live, the complexity can multiply several times over; in such overhauls, temporary or permanent workarounds are sometimes simply unavoidable. When my programmer friends ask why we don't use frontend frameworks in our SEO projects, one of the main reasons I give is that the routing of any frontend framework known today cannot fully meet the demands of SEOs. Often even the routing of backend frameworks (which are designed, among other things, for flexible URL handling) is not enough to satisfy SEO needs, so what can we say about all the Reacts and Angulars? Perhaps we should wrap up this terminology-heavy section and move on.
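As a rough illustration (assuming a Flask-style backend and a made-up URL scheme, neither of which is taken from any real project), here is the kind of "correctly nested" human-readable routing SEOs typically ask for, and why restructuring it later hurts.

```python
from flask import Flask, redirect, url_for

app = Flask(__name__)

# Hypothetical nested, human-readable URL scheme: city / borrower group / amount.
@app.route("/loans/<city>/<group>/<int:amount>/")
def loan_page(city: str, group: str, amount: int):
    return f"Loans of {amount} rubles for {group} in {city}"

# If the nesting is later reshuffled, every old URL needs a permanent
# redirect to its new address, which is where much of the maintenance
# pain described above comes from.
@app.route("/credits/<city>/<group>/<int:amount>/")
def legacy_loan_page(city: str, group: str, amount: int):
    return redirect(url_for("loan_page", city=city, group=group, amount=amount), code=301)

if __name__ == "__main__":
    app.run()
```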
How I see search engines
What I expect from next-generation search engines:
- Absence of Assessors
- More modern and, importantly, advanced ranking factors
- Universality of queries without the need to duplicate and copy countless meaningless pages (this is perhaps the most important factor)
- More humanity, rather than mindless processing and analysis of big data
- A/B testing can influence user behavior on the site (bright buttons with calls to purchase can increase conversion rates by a few percentage points or, conversely, deter users due to being intrusive). However, A/B testing should not be used to track website behavior in search results.
- Transparency (which would eliminate the need for A/B testing)
- Unified rules of the game

Time and again, some websites and optimizers manage to reach the top (high, advantageous positions in search results) through dark optimization methods (this is called dark SEO, which can lead to a ban and, thankfully, is becoming less common each year). However, I want to believe that the term "the dark side of promotion" will stay somewhere back in the late 2000s. Meanwhile, the bright optimization that SEOs call modern methodology is likely to undergo significant changes.
When to expect improvements
When can we expect truly quality search engines? In the next 5-10 years, I don’t think anything will change drastically. There is hope for quantum computers and quantum computing, but they will not comprehensively solve all problems. They may only address some issues, such as the lack of computational power. They can allow for faster information processing. However, this will likely be insufficient to build a search engine that fully satisfies us. Moreover, I don’t see any prerequisites for modifications to existing algorithms and ranking factors to significantly impact quality and elevate search engines to a completely new level in the foreseeable future. Perhaps something conceptually new is needed, and quantum computing and computers could provide that impetus.
SEO by Unified and Clear Rules
What will SEO look like if the rules of the game become truly transparent and uniform for everyone? Suppose we know exactly what the headings should be (or rather, they shouldn't even matter that much) and what data will definitely go into the microdata. When all websites become "equal" in terms of attractiveness, what should search engines focus on? I believe we can still work on and emphasize technical metrics: loading speed 🏃♂️ and page performance 🖥. These metrics are already taken into account now, but their weight does not match reality: many poor-quality websites still make it to the top of search results, while fast and user-friendly sites often end up lower, because theoretical SEO is prioritized.

In next-generation search engines, theoretical optimization should disappear entirely or, at the very least, transform into something else. Since search engines will possess "true" (much higher quality) artificial intelligence 🤖, especially if assessors are excluded, they should independently determine the quality and reliability of the information provided, at a new level. This should become the primary ranking criterion (weighted even higher than technical optimization) and completely replace theoretical SEO. At present, I don't see any prerequisites for search neural networks to analyze information adequately. Whether this is due to the vast amount of information on the internet, which keeps growing exponentially, or to the poor quality of the neural networks themselves is hard to say; I lean toward the latter. But one thing I know for sure: by generating 20,000 identical pages, we are clearly going down the wrong path, only complicating the work of search engines and "polluting" countless servers with unnecessary information. Yet, lacking good alternatives, we still promote sites this way.

In the search engines of the future, we won't pore over intents to craft headings that match queries as closely as possible. We will tell search engines directly who we are and what services we provide, and the search engine should judge how well we do it. The way semantic cores are collected will change. With the emergence of quantum computers with stable qubits, overall computational power will increase, which will allow pages to be indexed more often and more thoroughly. But whether the quality of search engine indexing will improve remains an open question.
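As a side note on the microdata mentioned above: structured data of this kind already exists today in the form of schema.org markup. The sketch below is only illustrative; the exact types and fields a future "transparent" search engine would want are anyone's guess.

```python
import json

# Illustrative only: FinancialProduct, provider, and areaServed are used
# here as plausible schema.org examples, not as a prescribed format.
structured_data = {
    "@context": "https://schema.org",
    "@type": "FinancialProduct",
    "name": "Consumer loans from 10,000 to 1,000,000 rubles",
    "provider": {"@type": "Organization", "name": "Example Bank"},
    "areaServed": ["Smolensk", "Omsk", "Tyumen"],
}

# Such markup is embedded in a page as <script type="application/ld+json">...</script>
print(json.dumps(structured_data, ensure_ascii=False, indent=2))
```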
Conclusions
I will try to summarize the above and briefly list the problems I see with modern search engines.
- 🔝 Imperfect factors and algorithms for ranking and assessing website quality.
- ⚖ Constantly having to conduct A/B tests and various experiments to understand how they will affect search results.
- 🗃 Generating a ton of unnecessary pages and duplicate content, all to satisfy the search engine. This resembles forcing an answer to a problem rather than providing a quality solution.
- 🦸♂️ Human intervention (assessors) in the system doesn’t necessarily mean something bad, but it should be a rare occurrence, not a regular one. Search bots should be able to handle this well on their own.
And a little more about search engines
Stepping slightly away from the main topic (discussions about the quality of theoretical SEO today), let’s talk a bit more about search engines in general 💡.
Few people know it, but besides Yandex and Google there are many other search engines, including quite good ones. While engines like Rambler, Bing, Yahoo, and Mail may not inspire trust for various reasons (some have a small index, some are long past their peak and will never return to it), there are a few search engines that do pique my interest. Take DuckDuckGo 🦆: it's a good search engine with a large database and decent privacy (at least at the time of writing). Besides DuckDuckGo, I have the Brave search engine from the browser of the same name bookmarked, and I plan to try it out soon. There's also another interesting search engine, You, which intrigues me even more than Brave and DuckDuckGo; in particular, for programming queries I find its results even more appealing than those from Yandex or Google.
There are at least two more questions about search engines that cross my mind from time to time. I will try to outline my thoughts below.
- Will Yandex's and Google's dominant shares of the search market change? (Since this post is aimed at Russian-speaking readers, and these are their main search engines, I took them as examples.) If I had to give a brief answer, it would probably be no. While Yandex may gradually lose quality, or at least fall behind Google in features amid the sanctions imposed on the region, little stands in Google's way globally, and it can keep developing. So I believe Google will continue to dominate, at least for the next 20 years. But let's not forget that the IT world has seen plenty of cases where a company's own policies led to a crisis and the loss of clear market dominance: Xerox, Intel, and of course the search engine Yahoo, which dominated its field in the early 2000s. Most likely, Google knows how to learn from others' mistakes and will not let this happen to itself; moreover, for Google the search engine is a genuinely important commercial asset 💵. But let's not speculate about what will happen in 50 years. Perhaps no one will even remember the Google search engine by then, just as people barely remember Yahoo now 😟.
- Can specialized search engines 🔍 emerge that surpass the more universal ones in quality? This question is perhaps even more complex. To some extent, such systems already exist and search within their own niches, but they are unlikely to extend beyond them and compete with more universal engines. On the other hand, if such a search engine truly offered something very progressive in terms of search, I would start using it, whether it covered programming code or building materials; the main thing is that search within that niche would be as natural and human-like as possible. Such a search would imply the absence of the algorithms we are used to and would be built entirely on machine learning and more advanced technologies. Then again, if a search engine emerged that worked exceptionally well in one specific niche, what would prevent it from being applied and trained in other niches? Then we would slide back to a universal search, only this time a more advanced one.