Bahaikipedia:Search engine test

From Bahaikipedia

Search engines allow users to examine web pages on the internet, which in turn allows checking of when and how certain expressions are used. This is helpful in identifying sources, establishing notability, checking facts, and discussing what names to use for different things (including articles).

This page documents how to use search tools to best advantage, and covers useful search tools, examples/tutorial, pitfalls and traps to avoid, and common biases and limitations.

Common search engines include Google (including its newsgroup, news, and book searches), Alexa, and the Wayback Machine.
This page uses the Google search engine for its examples, but similar principles apply to most others.

Search engine tests

Uses of search engine tests

A test using a search engine is intended to help with the following research questions:

  1. Popularity - Identifying how popular (or how little-known) something is (often called the "Google test")
  2. Usage - Identifying how and where a term is commonly being used, and by whom
  3. Genuine or hoax - Identifying if something is genuine or a hoax (or spurious, unencyclopedic)
  4. Notability - Confirming whether something is covered by independent sources or only within its own circles.
  5. Reliable sources - Identifying what sources (and websites) may exist for something
  6. More information - Unearthing notable facts and citations which can be used in articles.
  7. Names and terminology - Identifying the names used for things (including alternative names and terminology)
  8. Copyright status checks - Identifying whether text is a direct (or near) copy of material on some web page, and (sometimes) identifying copyright holders and licensing status.

Depending on the subject matter, and how carefully it is used, a search engine test can be very effective and helpful, or produce misleading or non-useful results. In most cases, a search engine test is a first-pass heuristic or "rule of thumb".

Common search engines

Type: Examples
General search engines: Google, Yahoo, etc.
Website popularity indexes: Alexa, Hitwise
General information:
News and media:
Historical archives of web pages: Google cache (how web pages looked and their contents, at different times or if deleted)

Google Groups (Usenet) and some other sources are date-stamped, and have been archived for over twenty years, making them useful as a historical record.

What a search test can do, and what it can't

A search engine can list pages and text which others have placed on the internet.

Search engines can:

  • Provide information, and leads to pages, that assist with the above goals
  • Confirm "who's reported to have said what" according to sources (useful for neutral citing)
  • Often provide full cited copies of source documents
  • Confirm roughly how popularly referenced an expression is
  • Search more specifically within certain websites, or for combined and alternative phrases (or excluding certain words and phrases that would otherwise confuse the results).

Search engines cannot:

  • Guarantee the results are reliable or "true" (search engines index whatever text people choose to put online, true or false).
  • Confirm why something is mentioned a lot, or rule out that the mentions are due to marketing, reposting as an internet meme, spamming, or self-promotion rather than importance.
  • Guarantee that the results reflect the uses you mean, rather than other uses. (E.g., a search for a specific John Smith may pick up many "John Smiths" who aren't the one meant, many pages containing "John" and "Smith" separately, and also miss useful references indexed under "John M. Smith" or "John Michael Smith".)
  • Guarantee you aren't missing crucial references through your choice of search expression.
  • Establish that little-mentioned or unmentioned items are automatically unimportant.

and search engines often will not:

  • Provide the latest research in depth to the same extent as journals and books, for rapidly developing subjects.
  • Be neutral.

A search engine test cannot help you avoid the work of interpreting your results and deciding what they really show. Appearance in an index alone is not usually proof of anything.
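The name-variant pitfall above (the "John Smith" example) can be partly mitigated by searching several quoted renderings of a name at once. A minimal Python sketch of the idea; the name forms and the OR-combination syntax follow common search-engine conventions, and the example names are invented:

```python
# Build quoted search-phrase variants for a personal name, so a search
# covers "John Smith", "John Michael Smith", "John M. Smith", etc.
def name_variants(first, last, middles=()):
    """Return exact-phrase queries for common renderings of a name."""
    variants = [f'"{first} {last}"']
    for middle in middles:
        variants.append(f'"{first} {middle} {last}"')        # full middle name
        variants.append(f'"{first} {middle[0]}. {last}"')    # middle initial
    return variants

# Combine the variants with OR so one query matches any rendering.
def combined_query(variants):
    return " OR ".join(variants)

queries = name_variants("John", "Smith", middles=["Michael"])
print(combined_query(queries))
# '"John Smith" OR "John Michael Smith" OR "John M. Smith"'
```

This still cannot guarantee completeness, but it reduces the chance of missing references indexed under a fuller form of the name.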

Search engine tests and Wikipedia policies


Search engine tests may return results that are fictitious, biased, hoaxes, or the like. It is important to consider whether the information used derives from reliable sources before depending upon it. Less reliable sources may be unhelpful, or may need their status and basis clarified, so that readers gain a neutral and informed understanding and can judge how much reliance to place upon them.


Google (and other search systems) do not have a neutral point of view; Bahaikipedia does. Google indexes self-created pages and media pages which do not have a neutrality policy. Wikipedia has a neutrality policy that is mandatory and applies to all articles and all article-related editorial activity.

As such, Google is specifically not a source of neutral titles -- only of popular ones. Neutrality is mandatory on Wikipedia (including deciding what things are called) even if not elsewhere, and specifically, neutrality trumps popularity.

(See B:NPOV#Neutrality and Verifiability for information on balancing the policies on verifiability and neutrality, and B:NPOV#Article naming on how articles should be named)


Raw hit count is a very crude measure of importance. Some unimportant subjects have many "hits", some notable ones have few or none, for reasons discussed further down this page.

Hit count numbers alone can only rarely "prove" anything about notability without further discussion of the type of hits, what's been searched for, how it was searched, and what interpretation to give the results. On the other hand, examining the types of hit arising (or their lack) often does provide useful information related to notability.

Using search engines

Search engine expressions (examples and tutorial)

This section covers search expressions for Google web search. Similar approaches will work in many other search engines, and other Google searches, but always read their help pages for further information as search engines' capabilities and operation often differ.

A search engine such as Google has both a basic and an advanced search. The advanced search makes it easier to enter options that may refine your searching. Basic examples and help for using search engines with Wikipedia are given below.
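The common operators (exact phrases in quotes, site: restrictions, and -term exclusions) can also be composed programmatically. A minimal Python sketch using only the standard library; the operator syntax follows Google's documented conventions, while the site and search terms are made-up examples:

```python
from urllib.parse import urlencode

# Compose a web-search query using common operators: an exact phrase,
# a site: restriction, and -term exclusions.
def build_query(phrase=None, site=None, exclude=(), words=()):
    parts = list(words)
    if phrase:
        parts.append(f'"{phrase}"')            # exact-phrase match
    if site:
        parts.append(f"site:{site}")           # restrict to one website
    parts += [f"-{term}" for term in exclude]  # exclude confusing terms
    return " ".join(parts)

q = build_query(phrase="search engine test", site="example.org",
                exclude=["forum"], words=["notability"])
print(q)   # notability "search engine test" site:example.org -forum

# URL-encode the query for use in a search URL.
url = "https://www.google.com/search?" + urlencode({"q": q})
```

Always check a particular engine's help pages: the same operators may behave differently, or not exist at all, on other engines.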

Specialized search engines such as medical paper archives have their own specialized search structure not covered here.

Specific uses of search engines in Wikipedia

  • Google Groups or other date-stamped media, can help establish the timing and context of early references to a word or phrase.
  • Google News can help assess whether something is newsworthy. Google News used to be less susceptible to manipulation by self-promoters, but with the advent of pseudo-news sites designed to collect ad revenues or to promote specific agendas, this test is often no more reliable than the others in areas of popular interest, and indexes many "news" sources that reflect specific points of view. The news archive goes back many years but may not be free beyond a limited period.
  • Google Book Search has a pattern of coverage that is in closer accord with traditional encyclopedia content than the Web, taken as a whole, is; if it has systemic bias, it is a very different systemic bias from Google Web searches. Multiple hits on an exact phrase in Google Book Search provide convincing evidence for the real use of the phrase or concept. Google Book Search can locate print-published testimony to the importance of a person, event, or concept. It can also be used to replace an unsourced "common knowledge" fact with a print-sourced version of the same fact.
  • Topics alleged to be notable by popular reference can have the type of reference, and popularity, checked. An allegedly notable issue that has only a few hundred references on the internet may not be very notable; truly popular internet memes can have millions or even tens of millions of references. [1] However, note that in some areas a notable subject may have very few references; for example, one might only expect a handful of references to some archaeological matter, and some matters will not be reflected online at all.
  • Topics alleged to be genuine can be checked to test if they are referenced by reliable independent sources; a good test for hoaxes and the like.
  • Copyright violations from websites can often be identified (as described above).
  • Alternative spellings and usages can have their relative frequencies checked (e.g., for a debate over which is the more common of two equally neutral and acceptable terms).
  • Google Groups (Usenet newsgroups) is a significantly different sample from websites, and represents, for the most part, conversations conducted in English on various topics. Because the sources are very different, hit numbers are not comparable; however, Groups searches are particularly helpful in identifying matters which might be discussed, or whose presence may have been artificially inflated by promotional techniques: it is suspicious if a phrase gets, say, 100,000 Web hits but only 10 Groups hits.
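The Web-hits versus Groups-hits comparison in the last point can be expressed as a simple ratio check. A Python sketch; the 1000:1 threshold is an illustrative assumption, not a documented rule:

```python
# Heuristic: a very high ratio of Web hits to Groups hits can suggest a
# presence inflated by promotion rather than genuine discussion.
def suspicious_ratio(web_hits, groups_hits, threshold=1000):
    """Return True when the Web/Groups hit ratio exceeds the threshold."""
    if groups_hits == 0:
        # No Groups discussion at all: suspicious only if Web hits are high.
        return web_hits > threshold
    return web_hits / groups_hits > threshold

print(suspicious_ratio(100_000, 10))   # True  (10,000:1 ratio)
print(suspicious_ratio(100_000, 500))  # False (200:1 ratio)
```

Like any raw-count heuristic, this only flags candidates for closer inspection; it proves nothing by itself.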

Specialized search engines

Google Scholar works well for fields that (1) are paper-oriented and (2) have an online presence in all (or nearly all) respected venues. Most papers written by computer scientists will show up, but for less technologically current fields, representation in Google Scholar is less reliable. Even the journal Science only puts articles online back to 1996. Thus, Google Scholar should rarely be used as proof of non-notability.

Medline, now part of Pubmed, is the original broadly based search engine, originating over four decades ago and indexing even earlier papers. Thus, especially in biology and medicine, Pubmed's "associated articles" feature is a Google Scholar proxy for older papers with no online presence. For example, the journal Stroke puts papers online back through the 1970s. For this 1978 paper [2], Google Scholar lists 100 citing articles, while Pubmed lists 89 associated articles.

There are a large number of law libraries online, in many countries, including: Library of Congress, Library of Congress (THOMAS), Indiana Supreme Court, FindLaw (USA); Kent University Law Library and sources (UK).

Interpreting results


A raw hit count should never be relied upon to prove notability without attention paid to the sources, the types of reference, and whether these actually evidence notability or non-notability, case by case. It has always been, and very likely always will remain, an extremely inconsistent tool for measuring notability; it should be considered one part of the evidence, and never definitive or conclusive alone. A manageable sample of the sites found should be opened individually to verify the relevance of the reported pages.

Other useful considerations in interpreting results are:

  • Article scope: If narrow, fewer references are required. Try to categorize the point of view, whether it is NPOV, or other; e.g., notice the difference between Ontology and Ontology (computer science).
  • Article subject: If it's about some historical person, one or two mentions in reliable texts might be enough; if it's some Internet neologism or a pop song, it may be on 700 pages and might still not be considered 'existing' enough to show any notability, for Wikipedia's purposes.

Biases to be aware of

In most cases search results should be reviewed with a careful and aware skepticism before relying upon them. Common biases include:

General biases

General (the internet or people as a whole)
  • Personal bias - The tendency to be slightly more receptive to beliefs that one is familiar with, holds, or that are common in one's daily culture, and to be more doubtful about beliefs and views that contradict one's preferred views.
  • Cultural and computer-usage bias - Results are biased towards information from internet-using developed countries and affluent parts of society (internet access). Countries where computer use is less common will often have lower rates of reference to equally notable material, which may therefore appear (mistakenly) non-notable.
  • Undue weight - Searches may disproportionately represent some matters, especially those related to popular culture (some matters may be given far more space, and others far less, than fairly represents their standing): popularity is not notability.
  • Sources not readily accessible - Some sources are accessible to all, but many are payment only, or not reported online.
General web search engines (Google, Yahoo web search etc)
  • Dark net - Search engines exclude a vast number of pages, and this may include systematic bias so that some matters are excluded disproportionately (for example, because they are commonly visible on sites that do not allow Google indexing, or because the content cannot be indexed for technical reasons, such as Flash or image-based websites).
  • Search engines as promotion tool - A huge industry exists seeking to influence site position, popularity, and ratings in such searches, or to sell advertising space related to searches and search positions. Some subjects, such as pornographic actors, are so dominated by these that searches cannot be reliably used to establish popularity.
  • Review processes vary - Some sites accept any information; others have some form of review or checking system in place.
  • Self-mirroring - Sometimes other sites pick up Wikipedia content, which is then passed around the internet, with more pages built upon it (and often not cited), meaning that much of what a search engine finds is actually just copies of Wikipedia's own previous text, not genuine sources.
  • Popular usage bias - Popular usage and urban legend are often reported over correctness. Examples: (1) a search for the incorrect Charles Windsor gives 10 times more results than the correct Charles Mountbatten-Windsor; (2) a search for the most common spelling of El Niño will often report it spelt "El Nino", without the diacritic; (3) urban legends are often reported widely: for example, hundreds of sites report that the USS Constitution set sail in 1779, although the correct date is 1797.
  • Popular views and perceptions are likely to be more reported. For example, there may be many references to acupuncture, and many confirming that people are often allergic to animal fur, but it may only be with careful research that it is revealed that there are medical peer-reviewed assessments of the former, and that people are usually not allergic to fur but to the sticky skin particles ("dander") within it.
  • Language selection bias - For example, an Arabic speaker searching for information on homosexuality in Arabic will likely find pages which reflect a different bias than an English speaker searching in English on the same subject, since popular and media views and beliefs about homosexuality differ widely between English-speaking countries (USA, UK, Australasia), which tend to include a relatively higher proportion of homosexuality-accepting groups, and Arabic-speaking countries (Middle East), which tend to include a relatively lower proportion.
  • Note that other Google searches, particularly Google Book Search, have a different systemic bias from Google Web searches and give an interesting cross-check and a somewhat independent view.

Alexa ratings

In some cases it is helpful to estimate the relative popularity of a website. Alexa Internet is a tool for this (Hitwise is another). To test Alexa's ranking for a particular web site, visit the Alexa website and enter the URL.

The Alexa measuring system is based on a toolbar that users must choose to install, and which can only be installed on the Internet Explorer browser and Microsoft Windows. Sources of bias include both websites whose users disproportionately do not install such toolbars, or who are less often users of Windows and Internet Explorer, as well as webmasters who install Alexa Toolbar for the sole purpose of enhancing their ratings. Specifically, Alexa rankings are not part of the notability guidelines for web sites for several reasons:

  • Below a certain level, Alexa rankings are essentially meaningless, because of the limited sample size. Alexa itself says ranks worse than 100,000[3] are not reliable, and some critics feel it is worse than that.
  • Alexa rankings vary and include significant systematic bias which means the ratings often do not reflect popularity, but only popularity amongst certain groups of users (See Alexa Internet#Concerns). Broadly, Alexa rates based upon measurements by a user-installed toolbar, but this is a highly variable tool, and there are large parts of the internet user community (especially corporate users, many advanced users, many open-source and non-Windows users) who do not use it and whose internet reference use is therefore ignored.
  • Alexa rankings do not reflect encyclopedic notability or the existence of reliable source material. A highly ranked web site may well have nothing written about it, while a poorly ranked web site may have a lot written about it.
  • A number of unquestionably notable topics have web sites with poor Alexa rankings.

Foreign languages, non-Latin scripts, and old names

Often for items of non-English origin, or in non-Latin scripts, a considerably larger number of hits result from searching in the correct script or for various transcriptions. An Arabic name, for instance, needs to be searched for in the original script, which is easily done with Google (provided one knows what to search for), but problems may arise if - for example - English, French and German webpages transcribe the name using different conventions. Even for English only webpages there may be many variants of the same Arabic or Russian name. Personal names in other languages (Russian, Anglo-Saxon) may have to be searched for both including and excluding the patronymic, and searches for names and other words in strongly inflected languages should take into account that arriving at the total number of hits may require searching for forms with varying case-endings or other grammatical variations not obvious for someone who does not know the language. Names from many cultures are traditionally given together with titles that are considered part of the name, but may also be omitted (as in Gazi Mustafa Kemal Pasha).

Even in Old English, the spelling and rendering of older names may allow dozens of variations for the same person. A simplistic search for one particular variant may underrepresent the web presence by an order of magnitude.
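One mechanical part of covering such variants, the with-and-without-diacritics problem noted elsewhere on this page (e.g. "El Niño" versus "El Nino"), can be automated. A minimal Python sketch using the standard library's Unicode tables; real transliteration variants (different romanization conventions, patronymics, case endings) still require linguistic knowledge:

```python
import unicodedata

# Many names are indexed both with and without diacritics, so searching
# only one form undercounts. Derive the ASCII-folded variant to search
# alongside the original.
def ascii_fold(name):
    """Strip combining marks after canonical decomposition."""
    decomposed = unicodedata.normalize("NFKD", name)
    return "".join(c for c in decomposed if not unicodedata.combining(c))

def search_forms(name):
    """Return the original form plus its folded variant, if different."""
    folded = ascii_fold(name)
    return [name] if folded == name else [name, folded]

print(search_forms("El Niño"))   # ['El Niño', 'El Nino']
```

Summing hits across such forms gives a fairer picture of web presence than any single spelling.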

A search like this requires a certain linguistic competence which not every individual Wikipedian possesses, but the Wikipedia community as a whole includes many bilingual and multilingual people. It is important for nominators and voters on AfD at least to be aware of their own limitations and not make untoward assumptions when language or transcription bias may be a factor.

Google unique page count issues

Note also that the number of hits reported by search engines may be only an estimate. For example, Google may only calculate the actual number of hits when the user navigates through all pages to the last page of the results, since it is only then that Google applies all criteria to a query (such as eliminating duplicates and controlling spam). At times, the hit count can be significantly cut (by a factor of 10 or more) when the list is fully accessed. A site-specific search may help determine if most of the hits are coming from the same web site; a single web site can account for hundreds of thousands of hits.

For search terms that return many results, Google uses a process that eliminates results which are "very similar" to other results listed, both by disregarding pages with substantially similar content and by limiting the number of pages that can be returned from any given domain. For example, a search on "Taco Bell" will give only a couple of pages from the company's own domain, even though many pages in that domain will certainly match. Further, Google's list of unique results is constructed by first selecting the top 1000 results and then eliminating duplicates without replacements. Hence the list of unique results will always contain fewer than 1000 results, regardless of how many webpages actually matched the search terms. For example, from the about 742 million pages related to "Microsoft", Google presently returns 552 "unique" results (as of Jan 9, 2006[4]). Caution must be used in judging the relative importance of websites yielding well over 1000 search results.
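The per-domain limiting described above can be illustrated with a small sketch. This is a loose imitation of the behaviour, not Google's actual algorithm; the cap of 2 and the URLs are made-up examples:

```python
from urllib.parse import urlparse
from collections import Counter

# Keep at most `cap` results from any one domain, roughly mimicking the
# "couple of pages per domain" limiting described in the text.
def cap_per_domain(urls, cap=2):
    seen = Counter()   # results kept so far, per domain
    kept = []
    for url in urls:
        domain = urlparse(url).netloc
        if seen[domain] < cap:
            kept.append(url)
            seen[domain] += 1
    return kept

results = ["http://a.com/1", "http://a.com/2", "http://a.com/3",
           "http://b.com/1"]
print(cap_per_domain(results))
# ['http://a.com/1', 'http://a.com/2', 'http://b.com/1']
```

This is one reason a raw hit count and the number of results you can actually page through differ so sharply.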

Search engine limitations - technical notes

Many, probably most, of the publicly available web pages in existence are not indexed. Each search engine captures a different percentage of the total. Nobody can tell exactly what portion is captured.

The estimated size of the World Wide Web is at least 11.5 billion pages [5], but a much deeper (and larger) Web, called the dark net and estimated at over 3 trillion pages,[citation needed] exists within databases whose contents the search engines do not index. These dynamic web pages are formatted by a Web server when a user requests them, and as such cannot be indexed by conventional search engines. The United States Patent and Trademark Office website is an example: although a search engine can find its main page, one can only search its database of individual patents by entering queries into the site itself.

Google, as all search engines should, follows the robots.txt protocol and can be blocked by sites that do not wish their content to be indexed or cached. Sites that contain large amounts of copyrighted content (image galleries, subscription newspapers, webcomics, movies, video, help desks), usually involving membership, will block Google and other search engines. Other sites may also block Google due to stress or bandwidth concerns on the server hosting the content.

Google has also been the victim of redirection exploits that may return more results for a specific search term than exist actual content pages.

Google and other popular search engines are also a target for search engine "search result enhancement", also known as search engine optimizers, so there may also be many results returned that lead to a page that only serves as an advertisement. Sometimes pages contain hundreds of keywords designed specifically to attract search engine users to that page, but in fact serve an advertisement instead of a page with content related to the keyword.

Search engines also might not be able to read links or metadata that normally require a browser plugin, such as Adobe PDF or Macromedia Flash, or where a website is displayed as part of an image. Search engines also cannot listen to podcasts or other audio streams, or watch video, that mention a search term.

Forums, membership-only and subscription-only sites (since Googlebot does not sign up for site access), and sites that cycle their content are not cached or indexed by any search engine. With more sites moving to AJAX/Web 2.0 designs, this limitation will become more prevalent, as search engines only simulate following the links on a web page. AJAX page setups (like Google Maps) dynamically return data based on real-time manipulation of JavaScript.

Further reading

See also

  • Meta:Mirror filter, a way to filter sites from Google search to remove sites which mirror Wikimedia content
  • {{find}} a template designed to help with Google books, news archive and scholar searches

External links