Tuesday, 15 December 2015

"Do not track" does not mean anonymous browsing

A question that I'm often asked is "do search engines that don't track your search history also anonymize your IP address?" DuckDuckGo is usually the first search tool that springs to mind with respect to "do not track". It does not store your searches, web history or IP address when you use it to search, and it does not pass on your search terms to the sites that you visit. However, the sites that you visit will still be able to see your IP address. See https://duckduckgo.com/privacy for further details.

Ixquick (http://ixquick.com/) and StartPage (http://startpage.com/) are similar but have an additional feature that gives you the option to display a page from the results list via a proxy. Run the search as normal and you'll see the usual set of results. Next to each result you should see a "proxy" link. Click on that and you go through a proxy server, hiding your IP address from the website you are visiting.



Any links that you subsequently click on and which are on the same site also go through the proxy. As soon as you follow a link that takes you off that site, you are warned that you will be "unproxied".



The disadvantages of using the proxy option are that it can be slower, some functions on the page may not work, and I have come across some pages that do not display at all.

Thursday, 29 October 2015

Wayback Machine gets funding to rebuild and add keyword searching

The Wayback Machine (http://www.archive.org/), run by the Internet Archive, is always a popular site on my search workshops. It is a fantastic way of discovering how web pages looked in the past and of tracking down documents that are no longer on the live web.

It isn't 100% guaranteed to have what you are looking for and at present you need the URL of the web site or document in order to use it. People often ask if keyword searching is possible; it isn't at the moment but it will be.

The Internet Archive has received support from the Laura and John Arnold Foundation (LJAF) and will be re-building the Wayback Machine. When it is completed in 2017, the next generation Wayback Machine will have more webpages that are easier to find and will include keyword indexing of homepages.

Further details of the rebuild are on the Internet Archive blog at http://blog.archive.org/2015/10/21/grant-to-develop-the-next-generation-wayback-machine/

Wednesday, 28 October 2015

Google introduces RankBrain

We've known for some time that Google has been buying heavily into artificial intelligence and looking at applying it not only to its robotics and driverless cars projects but also to search. Now it is official: artificial intelligence and machine learning play a major role in processing Google queries and are, Google says, the third most important signal in ranking results. The system has been named RankBrain.

Danny Sullivan covers the story in Search Engine Land and looks at the implications for search. There is a follow-up story by Danny that goes into more detail, FAQ: All About The New Google RankBrain Algorithm, in which he makes a guess at what the number 1 and 2 ranking signals are (Google won't say!).

Both are very interesting articles on how Google is using RankBrain in search, especially the FAQ, which is a "must read" if you want to begin to understand how Google now handles your searches.

Sunday, 26 July 2015

Google rolls out "People also ask"

There have been reports (http://searchengineland.com/google-tests-new-mobile-search-design-people-also-ask-box-219078) for several months that Google has been testing a new query refinement box called "People also ask". It now looks as though it has gone live. The feature suggests queries related to your search after the first few entries in your results list. It doesn't appear for all queries and it is dependent on how you ask the question. My search on 'what are statins' gave me the usual, standard results list. When I searched on 'types of statins' the 'People also ask' box popped up with "How do statins work to lower cholesterol?", "How do statins lower cholesterol?" and "What is a statin drug?".



To see further information you have to click on the downward pointing arrow next to the query, but instead of a list of sites you see just an extract from a page that supposedly answers the question, a bit like the Quick Answers that sometimes appear at the top of your search results. There is, though, an option to run a full search on the query you have chosen. As with the Quick Answers, there are no clues as to how or why Google has selected a particular page to answer the query.

The queries for 'People also ask' are also different from the suggested queries that are listed as you type your question into the standard Google search box.



Those of you who have attended my talks and workshops will no doubt be waiting for me to come up with an example of a Google howler. Here it is: a search on 'tomato blight prevention uk' comes up with "What is potato blight?" (close, and the organism that causes late potato and tomato blight is the same) and "What is an ANEMBRYONIC pregnancy?".



No, I don't know what an ANEMBRYONIC  pregnancy is (why the capital letters?) but it has nothing to do with potato or tomato blight!

At present, this is not a feature that I am finding useful. For me it is a hindrance rather than a help and just clutters up the results page with superficial or irrelevant suggestions. But as my queries tend to be quite complex and often incorporate advanced search commands, which seem to disable it, I don't expect to be seeing much of this feature.

Monday, 11 May 2015

Flickr messes up big time

 My "abstract" cat (or possibly dog),
according to Flickr
A few days ago Flickr revamped its website yet again. Flickr users have become used to changes that offer no improvements in functionality, and it rarely comes as a surprise when some aspects of the service are made worse. The most recent updates did not seem that significant. The layout is different; search is just as bad as ever, with odd and irrelevant results popping up; and you still cannot directly edit a location that Flickr has incorrectly assigned. The last is possible, but it involves a somewhat Heath Robinson approach, more of which in a separate posting.

This time, though, Flickr has made a huge mistake. It has been using image recognition technology for about a year to automatically generate tags for users' photos but, until now, those tags have been hidden from users. They are now visible. The official announcement is on the Help Forum, Updates on tags (http://www.flickr.com/help/forum/en-us/72157652019487118/), followed by many pages of users' comments, mostly negative. Flickr's mistake is not in making the tags visible, or even in doing the tagging at all, but in not giving users the option to opt out or a global tag-deletion tool.

The computer-generated tags have been added retrospectively to everyone's photos, so some of us now face the prospect of checking thousands of images for incorrect or irrelevant tags. My experience so far is that most of them are useless. I honestly cannot see how the tags "indoor" or "outdoor", which seem to be applied to the majority of my photos, are helpful in a search. If the auto-generated tags are already being used in Flickr's search, that would explain why the results are often rubbish.

It is easy to spot the difference between user and Flickr generated tags: the former are in a grey box and the latter in a white or light grey box.


If you want to delete a Flickr generated tag you have to do it tag by tag, photo by photo. Do not go on a tag deletion frenzy just yet, though. There are reports that the deleted tags sometimes reappear.

Oddities that I have spotted so far in my own photostream include a photo of our local polling station auto-tagged with "shop" (http://www.flickr.com/photos/rbainfo/17209179077/), and an image of a building site tagged with "snow" (http://www.flickr.com/photos/rbainfo/17332657995/). I suspect that in the latter case Flickr was confused by the amount of dust and debris surrounding what remains of the buildings.

To see the full horror of what Flickr has done, click on the Camera Roll link on your Photostream page and then Magic View. My cat has been tagged several times as a dog and once as abstract, which I suggest should be replaced by "Zen". And a photo of three hippos in Prague Zoo has acquired the tags animal, ape, elephant, tortoise, baby, child and people (http://www.flickr.com/photos/rbainfo/8712618469/). Note that Magic View uses only Flickr's auto-generated tags; we users are obviously not to be trusted!

I admit that there are a handful of instances where Flickr has reminded me of potentially relevant tags, so I might be tempted by an option whereby Flickr suggests additional tags. But I want to make the final decision as to whether to add them or not. I most certainly do not want Flickr adding, without my permission, thousands of tags to my back catalogue. And by the way, Flickr, whatever happened to my privacy setting of who can "Add notes, tags, and people: Only you", which you have clearly breached?

It is bad enough to have to deal with the rubbish that Google dishes out, but to have to cope with Flickr's lunacy as well is too much. Flickr, you have seriously messed up this time. Many of us do know what we are doing most of the time when we tag our photos. Carry on down this route and you won't just annoy your users but risk losing a substantial number of them, some of whom pay for Pro accounts.

Friday, 8 May 2015

Google dumps Reading Level search filter

It seems that Google has dumped the Reading Level search filter. This was not one that I used regularly, but it was very useful when I wanted more serious, in-depth research or technically oriented articles rather than consumer or retail focused pages. It often featured in the Top Tips suggested by participants of my advanced Google workshops.

It was not easy to find. To use it you had first to run your search and then, from the menu above the results, select 'Search tools', then 'All results', and from the drop-down menu 'Reading level'. Options for switching between basic, intermediate and advanced reading levels then appeared just above the results.


So another tool that helped serious researchers find relevant material bites the dust. I daren't say what I suspect might be next but, if I'm right, its disappearance could make Google unusable for research.

Monday, 2 March 2015

And you thought Google couldn't get any worse

We've all come across examples of how Google can get things wrong: incorrect supermarket opening hours (http://www.rba.co.uk/wordpress/2015/01/02/google-gets-it-wrong-again/), false information and dubious sources used in Quick Answers (http://www.rba.co.uk/wordpress/2014/12/08/the-quality-of-googles-results-is-becoming-more-strained/), authors who die 400 years before they are born (http://googlesystem.blogspot.co.uk/2013/11/google-knowledge-graph-gets-confused.html), a photo of the actress Jane Seymour ending up in a carousel of Henry VIII's wives (http://www.slate.com/blogs/future_tense/2013/09/23/google_henry_viii_wives_jane_seymour_reveals_search_engine_s_blind_spots.html) and many more. What is concerning is that in many cases no source is given. According to Search Engine Land (http://searchengineland.com/google-shows-source-credit-quick-answers-knowledge-graph-203293) Google doesn't provide a source link when the information is basic factual data and can be found in many places. But what if the basic factual data is wrong? It is worrying enough that incorrect or poor quality information is being presented in the Quick Answers at the top of our results and in the Knowledge Graph to the right, but the rot could spread to the main results.

An article in New Scientist (http://www.newscientist.com/article/mg22530102.600-google-wants-to-rank-websites-based-on-facts-not-links.html) suggests that Google may be looking at significantly changing the way in which it ranks websites by counting the number of false facts in a source and ranking by "truthfulness". The article cites a paper by Google employees that has appeared in arXiv (http://arxiv.org/abs/1502.03519), "Knowledge-Based Trust: Estimating the Trustworthiness of Web Sources". It is heavy going, so you may prefer to stick with just the abstract:

"The quality of web sources has been traditionally evaluated using exogenous signals such as the hyperlink structure of the graph. We propose a new approach that relies on endogenous signals, namely, the correctness of factual information provided by the source. A source that has few false facts is considered to be trustworthy. The facts are automatically extracted from each source by information extraction methods commonly used to construct knowledge bases. We propose a way to distinguish errors made in the extraction process from factual errors in the web source per se, by using joint inference in a novel multi-layer probabilistic model. We call the trustworthiness score we computed Knowledge-Based Trust (KBT). On synthetic data, we show that our method can reliably compute the true trustworthiness levels of the sources. We then apply it to a database of 2.8B facts extracted from the web, and thereby estimate the trustworthiness of 119M webpages. Manual evaluation of a subset of the results confirms the effectiveness of the method."

If this is implemented in some way, and based on Google's track record so far, I dread to think how much more time we shall have to spend on assessing each and every source that appears in our results. It implies that if enough people repeat something on the web it will be deemed true and trustworthy, and that pages containing contradictory information may fall down the rankings. The former is of concern because it is so easy to spread and duplicate misinformation throughout the web and social media. The latter is of concern because a good scientific review of a topic will present all points of view and inevitably contain multiple examples of contradictory information. How will Google allow for that?

It will all end in tears - ours, not Google's.

Saturday, 28 February 2015

More UK information vanishes into GOV.UK

Just when you've finally worked out how to search some of the key UK government web resources they disappear into the black hole that is GOV.UK.

The statistics publication hub went over a few weeks ago and the link http://www.statistics.gov.uk/ now redirects to http://www.gov.uk/government/statistics/announcements. Similarly, Companies House is now to be found at http://www.gov.uk/government/organisations/companies-house and the Land Registry is at http://www.gov.uk/government/organisations/land-registry. Most of the essential data, such as company information and ownership of properties, can still be found via GOV.UK, and in fact some remains in databases on the original websites. For example, following the links on GOV.UK for information on a company eventually leads you to the familiar WebCHeck service at http://wck2.companieshouse.gov.uk/. Companies House's useful list of overseas registries, however, seems at first to have totally disappeared but is in fact hidden in a general section covering all government "publications" (http://www.gov.uk/government/publications/overseas-registries#reg).

Documents may no longer be directly accessible from the new departmental home pages so a different approach is needed if you are conducting in-depth research. GOV.UK is fine for finding out how to renew your car tax or book your driving theory test - two of the most popular searches at the moment - but its search engine is woefully inadequate when it comes to locating detailed technical reports or background papers. Using Google's or Bing's site command to search GOV.UK is the only way to track them down quickly, for example biofuels public transport site:www.gov.uk.  Note that you need to include the 'www' in the site command as site:gov.uk would also pick up articles published on local government websites. This assumes, though, that the document you are seeking has been transferred over to GOV.UK.
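The difference in scope between site:www.gov.uk and site:gov.uk can be illustrated with a short sketch. This is purely my own illustration of how the operator behaves, not anything Google provides, and the local-government hostname is just an example:

```python
from urllib.parse import urlparse

def matches_site(url: str, site: str) -> bool:
    """Rough approximation of the site: operator's scope: true if the
    URL's hostname equals the site value or is a subdomain of it."""
    host = urlparse(url).hostname or ""
    return host == site or host.endswith("." + site)

# site:gov.uk also sweeps in local government hosts; site:www.gov.uk does not.
print(matches_site("https://www.gov.uk/government/statistics", "www.gov.uk"))  # True
print(matches_site("https://www.reading.gov.uk/council-tax", "gov.uk"))        # True
print(matches_site("https://www.reading.gov.uk/council-tax", "www.gov.uk"))    # False
```

In other words, the narrower site:www.gov.uk filter keeps the results to the central GOV.UK site itself, which is exactly why the 'www' matters in the query above.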

There have been complaints from researchers, including myself, that an increasing number of valuable documents and research papers have gone AWOL as more departments and agencies are assimilated Borg-like by GOV.UK. Some of the older material has been moved to the UK Government Web Archive at http://www.nationalarchives.gov.uk/webarchive/.


This offers you various options, including an A-Z of topics and departments and a search by keyword, category or website. The latter is slow and clunky, with a tendency to keel over when presented with complex queries. I have spent hours attempting to refine my search and wading through page after page of results only to find that the article I need is not there, nor anywhere else, an experience several of my colleagues have also had. This has led to conspiracy theories suggesting that the move to GOV.UK has provided a golden opportunity to "lose" documents.

I am reminded of a scene from Yes Minister:

James Hacker: [reads memo] This file contains the complete set of papers, except for a number of secret documents, a few others which are part of still active files, some correspondence lost in the floods of 1967...

James Hacker: Was 1967 a particularly bad winter?

Sir Humphrey Appleby: No, a marvellous winter. We lost no end of embarrassing files.

James Hacker: [reads] Some records which went astray in the move to London and others when the War Office was incorporated in the Ministry of Defence, and the normal withdrawal of papers whose publication could give grounds for an action for libel or breach of confidence or cause embarrassment to friendly governments.

James Hacker: That's pretty comprehensive. How many does that normally leave for them to look at?

James Hacker: How many does it actually leave? About a hundred?... Fifty?... Ten?... Five?... Four?... Three?... Two?... One?... *Zero?*

Sir Humphrey Appleby: Yes, Minister.

From "Yes Minister" The Skeleton in the Cupboard (TV Episode 1982) - Quotes - IMDb  http://www.imdb.com/title/tt0751825/quotes 

For "floods of 1967" substitute "transfer of files to GOV.UK".

Friday, 2 January 2015

Google gets it wrong again

Yesterday, on New Year's Day, I came across yet another example of Google getting its Knowledge Graph wrong. I wanted to double check which local shops were open and the first one on the list was Waitrose. I vaguely recalled seeing somewhere that the supermarket would be closed on January 1st but a Google search on waitrose opening hours caversham suggested otherwise. Google told me in its Knowledge Graph to the right of the search results that Waitrose was in fact open.



Knowing that Google often gets things wrong in its Quick Answers and Knowledge Graph I checked the Waitrose website. Sure enough, it said "Thursday 01 Jan: CLOSED".



If you look at the above screenshot of the opening times you will see that there are two tabs: Standard and Seasonal. Google obviously used the Standard tab for its Knowledge Graph.

I was at home working from my laptop, but had I been out and about I would have used my mobile, so I checked what that would have shown me. Taking up nearly all of the screen was a map showing the supermarket's location and the times 8:00 am - 9:00 pm. I had to scroll down to see the link to the Waitrose site, so I might have been tempted to rely on what Google told me on the first screen. But I know better. Never trust Google's Quick Answers or Knowledge Graph.