
What are the Search Quality Analyzers?

AnalyzeThis.ru is a project devoted to independent, automated evaluation of search engine quality. The idea was conceived and realized by Ashmanov & Partners. The project is a technological one: search quality is not assessed by hand; instead, we identify search parameters that can be measured automatically. Daily measurements make it possible to closely follow the search engines' highs and lows, their rivalry, and their struggle to improve web search.

Since the project started in 2007, we have identified dozens of such automatically comparable parameters. By the end of 2012 the number of analyzers had reached 37; merely listing them had become cumbersome, so we organized them into ten groups.

So what exactly is an analyzer? Basically, it is a program with a two-fold function: first, it sends the search engines a specially selected set of queries and collects their output; then it checks how well that output meets specified formal criteria (see the "How Do the Analyzers Work?" section for details). The criteria are, of course, identical for every search engine.
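As a rough illustration of this two-step cycle, here is a hypothetical skeleton in Python. It is our own sketch, not the project's actual code: the names `fetch_serp` and `meets_criteria` stand in for the real query and checking machinery.

```python
# A hypothetical skeleton of an analyzer's daily run (illustrative only;
# not AnalyzeThis.ru's actual implementation).

def run_analyzer(engines, queries, fetch_serp, meets_criteria):
    """Send the same query set to every engine and score the output."""
    scores = {}
    for engine in engines:
        passed = 0
        for query in queries:
            serp = fetch_serp(engine, query)   # step 1: collect the output
            if meets_criteria(query, serp):    # step 2: apply the formal criteria
                passed += 1
        # the criteria, and hence the scoring, are identical for every engine
        scores[engine] = 100.0 * passed / len(queries)
    return scores
```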

The results of the analysis are published, so at any moment you can see not only the score a search engine obtained on any given day (here, for example, are the results of the Navigational Search Analyzer for January 1, 2010), but also the queries asked, and even the output a specific search engine returned for a specific query on that day (for example, Yahoo!'s response to "камаз" (Kamaz), with the correct answer highlighted in green). All this guarantees a high degree of objectivity in our evaluation.
To the best of our knowledge, our project is unique in Runet and, possibly, beyond.

How Do the Analyzers Work?

Each analyzer's description can be found on its own page; all the descriptions are also collected here.

Let's take a closer look at how the analyzers function using the example of the very first and probably the simplest of them, the Navigational Search Analyzer. It checks how often a user looking for a specific website actually receives a link to it in the output.

Every day, each search engine receives the same set of 100 queries, selected at random from a larger pool of 600 (for other analyzers the pool may be smaller, but never below 100 queries). Special templates then process the output and extract the SERP (the top ten results). For each result, the program saves the link to the page found (when the explicit link is absent from the search results, it has to be extracted separately), the title, and the snippet (the text fragment representing the page). The links are then compared to the standard link previously assigned to the query (usually there is one standard link, but there can be more). Thus, each of the hundred queries yields data on the number of standard links in the SERP and, moreover, on their positions among each search engine's top ten results. The percentage of queries answered correctly gives the search engine's navigational search quality grade; a sketch of this check appears below.

Other analyzers may use different checks, such as the presence (or absence) of certain words in the title of a site, in a snippet, or on a site's page. But the main algorithm of compiling a set of queries, finding appropriate standard answers, and checking for their presence in the output remains the same for almost all the analyzers.

That does not mean there is little to do beyond the automatic checks. To launch a new analyzer, we have to define its working principle (in particular, the method of checking the results), find appropriate queries, assign appropriate markers (i.e., standard answers to those queries), verify the results, and correct the queries and markers accordingly. Once the analyzer is running, we have to watch out lest our markers go stale, as eventually happens, for example, in the Navigational Search Analyzer mentioned above, whose marker sites tend to become inaccessible, change their addresses, acquire new "mirrors", and so on.
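To make the navigational check described above concrete, here is a minimal sketch under our own assumptions: the query pool, the function names, and the data shapes are invented for illustration, and `fetch_top10` stands in for the real SERP-extraction templates.

```python
import random

# Illustrative query pool: query -> set of standard (reference) links.
# Usually there is one standard link per query, but there can be more.
QUERY_POOL = {
    "kamaz": {"http://www.kamaz.ru/"},
    # ... 600 queries in the real pool
}

def daily_sample(pool, size=100):
    """Pick the day's queries at random; the same sample goes to every engine."""
    return random.sample(sorted(pool), min(size, len(pool)))

def navigational_grade(fetch_top10, queries, pool):
    """fetch_top10(query) -> list of up to 10 result URLs from one engine."""
    correct = 0
    for query in queries:
        serp = fetch_top10(query)                    # top-ten links from the SERP
        if any(url in pool[query] for url in serp):  # standard link present?
            correct += 1
    return 100.0 * correct / len(queries)            # percentage = the grade
```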

Some analyzers demand more manual work. Such are, for example, the Search Spam and Assessors Analyzers, where the never-ceasing inflow of new results makes regular checks by our analyst team indispensable.

But even with such analyzers we try to minimize the amount of manual work and maintain the objectivity of evaluation: those who assess the relevance of results or the amount of spam never know which search engine produced a given result.

Besides the main parameter being assessed, some analyzers measure additional search features that are connected to it in some way but may be of independent interest. Sometimes this involves a stricter or looser count of results.

What Do We Need the Analyzers for?

Thanks to technological progress, people today have instant access to immense amounts of information. This poses a serious challenge for information retrieval, so the existence and quality of search engines become crucial. As for existence, there are plenty of search engines on the Web. But how does one tell which is worth using?

The goal of our project is to answer this question by demonstrating the strong and weak sides of different search engines. Thereby we give any interested user the possibility to choose a search instrument consciously, rather than being driven by habit or prejudice.

Indeed, knowing the strong and weak points of Google, Yandex, Yahoo! and other search engines may be both useful and interesting, not only to end users but also to anyone engaged in the search industry.

Webmasters and SEO specialists can develop a deeper understanding of the search engines' "nature" and closely trace its changes using the Updates Analyzer, already very popular among the SEO crowd, or by watching the sudden leaps that occur from time to time in the output rankings of other analyzers.

As for journalists and bloggers, our analyzers can supply engaging material for weekly columns or analytical reviews. By the way, you might like to have a look at our project blog, where we announce newly introduced features and discuss the search engines' accomplishments and failures.

It is self-evident that objective data on a search engine's performance can be of great use to investors, who can rely on it when assessing the risks of investing in a search business.

But there is one category of users for whom our service is simply indispensable: search engine developers. The analyzers help them make a true-to-life comparison with their competitors, gain a fuller understanding of their own pros and cons and, ultimately, reach a higher level of user-friendliness.

A word to the wise: we strongly advise any search engine developer against using the analyzers' data to directly tune, fix, or correct their search algorithms. In that case the data instantly lose their objectivity, and the first person to suffer is the developer himself: instead of a true picture, he gets only a false and distorted one.

Which Search Engines Do the Analyzers Evaluate?

At the end of 2012, seven Runet search engines are being evaluated on various output parameters: Bing, Google, Mail.ru, Mail.ru β, Rambler, Yahoo! and Yandex (in alphabetical order). But we must bear in mind that not all of them are mutually independent. Thus, since 2011 Rambler has used Yandex's search algorithms, although their output still differs in some respects. It is quite possible that we shall soon exclude Rambler's output from evaluation (as we did with Aport somewhat earlier), all the more so as its search share fell from over 15% in 2007 to about 1% now.

Further, since 2010 Yahoo! has run its search on Bing's algorithms, but here the differences in Russian-language output are far more significant.

Finally, Mail.ru β is Mail.ru's publicly accessible pilot project. It serves for testing various search techniques that may (or may not) later be applied in the main Mail.ru search service. So the results yielded by Mail.ru β differ from those of Mail.ru only slightly.

So much for Runet. But we are also interested in comparing search quality for queries formulated in English. The set of search engines there is quite different, and the number of analyzers is still very small. But we plan to develop this line of work: adding new search engines, preparing new analyzers, and possibly deploying, in other countries, infrastructure similar to what we have installed in 10 Russian cities for our Regional Search Analyzer.

So, Which Search Engine Is the Best?

Giving a definitive answer is pretty tough, and it is not our goal: we simply compare search engines from different points of view and publish the results daily. Yet, since there is no avoiding the choice, we'll venture a cautious answer - or, actually, two answers - to this question.

First, we'd like to refer you to our Integral Search Quality Analyzer, whose task is to sum up the data from all the other analyzers (except the Assessor Analyzer, described below, and two analyzers - Updates and Clicks - that are not directly related to search quality) and compute a search engine's average score. Each analyzer's score is taken with a coefficient between 0 and 1, according to its overall importance and the relative frequency of its queries.
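As a simple illustration of such coefficient-weighted averaging, here is a short sketch. The analyzer names, scores, and weights below are invented for the example; the real set and coefficients are those published on the site.

```python
# One engine's scores from individual analyzers (0..100), invented here.
scores = {
    "navigational": 87.0,
    "spelling": 92.5,
    "freshness": 78.0,
}

# Coefficients in [0, 1], reflecting each analyzer's importance (invented).
weights = {
    "navigational": 1.0,
    "spelling": 0.5,
    "freshness": 0.7,
}

# Weighted average: each score is scaled by its coefficient.
integral = sum(scores[a] * weights[a] for a in scores) / sum(weights.values())
print(f"integral score: {integral:.1f}")
```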

If it seems to you that a different set of coefficients would make more sense, you are by all means invited to set your own here and see how that influences the results.

A definite flaw of the Integral Search Quality Analyzer is that it is rather capricious: for instance, it makes an abrupt leap whenever we include a new analyzer in the calculation.

Second, search quality as a whole can also be assessed directly. The method (assessors' evaluation) is well known and has long been used by search engine developers - but the results, of course, are never published.

In 2012 we developed the first independent Assessor Analyzer, so it is now possible to directly compare the relevance of various search engines' output.

You should not be surprised that the data of these two analyzers - the Integral and the Assessor - do not always coincide, since the Integral Search Quality Analyzer includes parameters that are in no way related to the relevance of the output, e.g. retrieval speed.

Feedback

Web search is constantly changing, and those changes inevitably affect the work of our analyzers. Sometimes we have to add features or even replace algorithms to stay in step with the retrieval procedure. Some queries (first of all, those related to Data Freshness) are subject to regular manual checks; the other queries and markers are checked automatically on a day-to-day basis.

Nonetheless, you may notice mistakes we have overlooked. If that is the case, we would be very grateful if you let us know. We also welcome your comments and ideas about our service at info@analyzethis.ru.