
A Whirlwind Tour of a Search Engine

Thanks to Krishna Gade and Michael Lopp, last night I had the opportunity to speak at Pinterest’s new DiscoverPinterest tech talk series. I spoke for around 45 minutes, taking the audience on a tour of the world of (mostly) web search — skating across the top of everything from ranking, to infrastructure, to inverted indexing, to query alterations, and more. I had a lot of fun.

I also had the chance to listen to four of Pinterest’s engineering leaders discuss their work in browsing, content-based image retrieval, infrastructure, and graph processing and relevance. They’re up to some interesting work — particularly if you’re interested in the intersection of using images, human-curated data, and browsing.

There’s a video of the presentations coming, and I’ll update this post with a link to that soon. In the meantime, here’s the deck: pinterest search v2.

On a social note, it was great to see several of the folks I worked with at Bing. Krishna took a selfie with Yatharth Saraf and me. Those were truly the days — we were in many ways ahead of our time.

Shameless advertisement: if you’d like me to present on search (or anything else) at your organization, please feel free to ask. I have a 30 minute, 1 hour, and whole day tutorial on search engines. I’m also available for consulting and advising!

See you next time.

Measuring Search Relevance

How do you know when you’ve improved the relevance of a search engine? There are many ways to understand this, such as running A/B tests on your website or doing qualitative studies in a lab environment with a few customers. This blog post focuses on using large numbers of human judges to assess search performance.

Relevance Judgment

The process of asking many judges to assess search performance is known as relevance judgment: collecting human judgments on the relevance of search results. The basic task goes like this: you present a judge with a search engine query and one of its search results, and you ask the judge to assess how relevant the result is to the query on (say) a four-point scale.

Suppose the query you want to assess is ipod nano 16Gb. Imagine that one of the results is a link to Apple’s page that describes the latest Apple iPod nano 16Gb. A judge might decide that this is a “great result” (which might be, say, our top rating on the four-point scale). They’d then click on a radio button to record their vote and move on to the next task. If the result we showed them was a story about a giraffe, the judge might decide this result is “irrelevant” (say the lowest rating on the four-point scale). If it were information about an iPhone, it might be “partially relevant” (say the second-to-lowest), and if it were a review of the latest iPod nano, the judge might say “relevant” (it’s not perfect, but it sure is useful information about an Apple iPod).

The human judgment process itself is subjective, and different people will make different choices. You could argue that a review of the latest iPod nano is a “great result” — maybe you think it’s even better than Apple’s page on the topic. You could also argue that the definitive Apple page isn’t terribly useful in making a buying decision, and you might only rate it as relevant. A judge who knows everything about Apple’s products might make a different decision to someone who’s never owned a digital music player. You get the idea. In practice, judging decisions depend on training, experience, context, knowledge, and quality — it’s an art at best.

There are a few different ways to address subjectivity and get meaningful results. First, you can ask multiple judges to assess the same results to get an average score. Second, you can judge thousands of queries, so that you can compute metrics and be confident statistically that the numbers you see represent true differences in performance between algorithms. Last, you can train your judges carefully, and give them information about what you think relevance means.

Choosing the Task and Running the Judging

You have to decide what queries to judge, how many queries to judge, how many answers to show the judges, and what search engines or algorithms you want to compare. One possible approach to choosing queries is to randomly sample queries from your search engine query logs. You might choose to judge hundreds or thousands of queries. For each query, you might choose to judge the first ten results, and you might choose to compare the first ten results from each of (say) Google, Bing, and DuckDuckGo.

Most search companies do their relevance judgment with crowdsourcing. They post the tasks publicly, and pay independent people to perform the judgments using services such as CrowdFlower. This does create some problems – some people try to game the system by writing software that randomly answers questions, or they answer fast and erroneously. Search companies have to work constantly on detecting problems, and removing both the poor results and the poor judges from the system. To give you a flavor, one thing search folks do is inject questions where they know what the relevance score should be, and then check that the judges answer most of those correctly (this is known as a ringer test). Another thing folks do is look for judges who consistently answer differently from other judges for the same tasks.

Scoring Relevance Judgments

When you’ve got tens of answers for each query, and you’ve completed judging at least a few hundred queries, you’re ready to compute a metric that allows you to compare algorithms.

An industry favorite is NDCG, Normalized Discounted Cumulative Gain. It sounds complicated, but it’s a common-sense measure. Suppose that on our four-point scale, you give a 0 score for an irrelevant result, 1 for partially relevant, 2 for relevant, and 3 for perfect. Suppose also that a query is judged by one of the judges, and the first four results that the search engine returns are assessed as relevant, irrelevant, perfect, and relevant by the judge. The cumulative gain after four results is the sum of the scores for each result: 2 + 0 + 3 + 2 = 7. That’s shown in the table below: result position or rank in the first column, the judge’s score (or gain) in the second column, and a running total or cumulative gain in the third column.

Rank  Judgment (Gain)  Cumulative Gain
1     2                2
2     0                2
3     3                5
4     2                7

Now for the Discounted part in NDCG. Search engine companies know that the first result in the search results is more important than the second, the second more important than the third, and so on. They know this because users click on result one much more than result two, and so on. Moreover, there’s plenty of research that shows users expect search engines to return great results at the top of the page, that they are unlikely to view results low on the page, and that they dislike having to use pagination.

The Discounted part of NDCG adds in a weighting based on position: one simple way to make position one more important than two (and so on) is to sum the score divided by the rank. So, for example, if the third result is “great”, its contribution is 3 / 3 = 1 (since the score for “great” is 3, and the rank of the result is 3). If “great” were the first result, its contribution would be 3 / 1 = 3. In practice, the score is often divided by the log of the rank, which seems to better match the user perception of relevance. Anyway, for our example and to keep it simple, the Discounted Cumulative Gain (DCG) after four results is 2 / 1 + 0 / 2 + 3 / 3 + 2 / 4 = 3.5. You can see this in the table below: the third column has the discounted gain (the gain divided by the rank), and the fourth column keeps the running total (the discounted cumulative gain).

Rank  Judgment (Gain)  Discounted Gain  Discounted Cumulative Gain (DCG)
1     2                2/1              2
2     0                0/2              2
3     3                3/3              3
4     2                2/4              3.5

The Normalized part in NDCG allows us to compare DCG values between different queries. It’s not fair to compare DCG values across queries because some queries are easier than others: for example, maybe it’s easy to get four perfect results for the query ipod nano, and much harder to get four perfect results for 1968 Porsche 912 targa soft window. If the search engine gets a high score for the easy query, and a poor score for the hard query, it doesn’t mean it’s worse at hard queries – it might just mean the queries have different degrees of difficulty.

Normalization works like this: you figure out what the best possible score is given the results you’ve seen so far. In our previous example, the results scored 2, 0, 3, and 2. The best arrangement of these same results would have been: 3, 2, 2, 0, that is, if the “great” result had been ranked first, followed by the two “relevant” ones, and then the “irrelevant”. This best ranking would have a DCG score of 3 / 1 + 2 / 2 + 2 / 3 + 0 / 4 = 4.67. This is known as the “ideal DCG,” or iDCG.  Our NDCG is the score we got (3.50) divided by the ideal DCG (4.67), or 3.50 / 4.67 = 0.75. Now we can compare scores across queries, since we’re comparing percentages of the best possible arrangements and not the raw scores.

The table below builds out the whole story. You’ve seen the first four columns before. The fifth and sixth columns show what would have happened if the search engine had ordered the results in the perfect order. The seventh and final column shows a running total of the fourth column (the DCG) divided by the sixth column (the ideal DCG, or iDCG), and the overall NDCG for our task is shown as 0.75 in the bottom-right corner.

Rank  Judgment (Gain)  Discounted Gain  DCG  Ideal Discounted Gain  Ideal DCG (iDCG)  NDCG
1     2                2/1              2.0  3/1                    3.0               0.67
2     0                0/2              2.0  2/2                    4.0               0.5
3     3                3/3              3.0  2/3                    4.67              0.64
4     2                2/4              3.5  0/4                    4.67              0.75
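
If you’d like to see the calculation in code, here’s a minimal Python sketch (illustrative only, not production code). It uses the simple rank-based discount from the worked example above, with the more common log-based discount available as an option.

```python
import math

def dcg(gains, use_log=False):
    """Discounted cumulative gain: each gain divided by its rank
    (or by log2(rank + 1) when use_log is True, a common variant)."""
    total = 0.0
    for rank, gain in enumerate(gains, start=1):
        discount = math.log2(rank + 1) if use_log else rank
        total += gain / discount
    return total

def ndcg(gains, use_log=False):
    """Normalize DCG by the ideal DCG: the DCG of the same gains
    sorted from best to worst."""
    ideal = dcg(sorted(gains, reverse=True), use_log)
    return dcg(gains, use_log) / ideal if ideal > 0 else 0.0

# The worked example from the tables above: judgments of 2, 0, 3, 2.
print(dcg([2, 0, 3, 2]))   # 3.5
print(ndcg([2, 0, 3, 2]))  # 0.75 (3.5 / 4.67)
```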

Comparing Search Systems

Once you’ve computed NDCG values for each query, you can average them across thousands of queries. You can now compare two algorithms or search engines: you take the mean NDCG value for each system, and check using a statistical test (such as a two-sided t-test) whether one algorithm is better than the other, and with what confidence. You might, for example, be able to say with 90% confidence that Google is better than Bing.
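
As a sketch of that comparison step, here’s roughly what it might look like in Python with SciPy. The per-query NDCG values here are made up for illustration; in practice you’d have thousands of them, one per judged query, for each system.

```python
from statistics import mean
from scipy import stats

# Hypothetical per-query NDCG scores for two systems, judged on the same queries.
ndcg_system_a = [0.75, 0.62, 0.91, 0.40, 0.83]
ndcg_system_b = [0.70, 0.65, 0.88, 0.35, 0.79]

print(mean(ndcg_system_a), mean(ndcg_system_b))

# A paired, two-sided t-test makes sense when both systems are scored on the
# same set of queries; a small p-value suggests a real difference in mean NDCG.
t_stat, p_value = stats.ttest_rel(ndcg_system_a, ndcg_system_b)
print(t_stat, p_value)
```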

As I mentioned at the beginning, this is one important factor you could consider when comparing two algorithms. But there’s more to search engine comparison than comparing NDCG metrics. As I’ve said in previous posts, I’m a huge fan of measuring in many different ways and making decisions with all the data at hand. It takes professional judgment to decide one algorithm is better than another, and that’s part of managing any search engineering effort.

Hope you found this useful, see you next time!

Afterword

I published a first version of this post on eBay’s technical blog in 2010. I owe thanks to Jon Degenhardt for cleaning up a math error, and formatting the tables.

It’s an apostrophe

Even the smartest folks I know can’t get apostrophes right. (Please don’t read all my blog posts and find the mistakes!). Let me see if I can help. 

  • “It’s” is equivalent to “it is”. If you write “it’s” in a sentence, check it makes sense if you replace it with “it is”. If yes, good. If no, you probably meant “its”
  • “Its” is a possessive. “The dog looked at its tail”. As in, the tail attached to the dog was stared at by the aforementioned canine

Get those right, and you’re in the top 98% of apostrophe users.

Don’t write “In the 1980’s, rock music was…”. You mean “In the 1980s, …”. As in, the plural: the ten years that constitute the decade that began in 1980. These are also correct: “He collected LPs” or “She installed LEDs instead of incandescent globes”. You’ll find some people argue about these: for example, some folks write “mind your P’s and Q’s”, and argue it’s correct. I personally think that’s wrong: there are many Ps and Qs, and so it should be “mind your Ps and Qs”.

Watch out for possessives of singular nouns that end in s. “Hugh William’s blogs are annoying” and “Hugh Williams’ blogs are annoying” are both wrong. “Hugh Williams’s blogs are annoying” is right (in more ways than one?).

One trick I use is this: if you say the “s”, add the “s”. Hugh Williams’s blog. Ross’s Dad. The boss’s desk. If you don’t say it, don’t add it. His Achilles’ heel. That genres’ meaning.

Have a fun week!

Fireside chat at the DataEdge Conference

The video of my recent conversation with Michael Chui from McKinsey as part of the UC Berkeley DataEdge conference is now online. Here it is:

The discussion is around 30 minutes. I tell a few stories, and most of them are mostly true. We talk about my career in data, search, changing jobs, inventing infinite scroll, eBay, Microsoft, Pivotal, and more.  Enjoy!

Putting Email on a Diet

A wise friend of mine once said: try something new for 30 days, and then decide if you want to make it permanent.

Here’s my latest experiment: turning off email on my iPhone. Why? I found I was in work meetings, or spending time with the family, and I’d frequently pick up my phone and check my email. The result was I wasn’t participating in what I’d chosen to be part of — I was distracted, disrespectful of the folks I was with, and fostering a culture of rapid-fire responses to what was supposed to be an asynchronous communication medium. So, I turned email off on my iPhone.

What happened? I am enjoying and participating in meetings more. I am paying attention to the people and places I have chosen to be. And I’m not falling behind on email — I do email when I choose to do it, and it’s a more deliberate and effective effort.

Have I strayed? Yes, I have. When I’m truly mobile (traveling and away from my computer), I choose to turn it on and stay on top of my inbox — that’s a time when I want to multitask and make the best use of my time by actually choosing to do email. And then I turn it off again.

My calendar and contacts are still enabled. On the go, I want to know where and when I need to be somewhere, and to be able to consciously check my plans. I also want to be able to contact people with my phone.

Will I stick with it? I think so. Give it a try.

See you next time.

Armchair Guide to Data Compression

Data compression is used to represent information in less space than its original representation. This post explains the basics of lossless compression, and hopefully helps you think about what happens when you next compress data.

Similarly to my post on hash tables, it’s aimed at software folks who’ve forgotten about compression, or folks just interested in how software works; if you’re a compression expert, you’re in the wrong place.

A Simple Compression Example

Suppose you have four colored lights: red, green, blue, and yellow. These lights flash in a repeating pattern: red, red, green, red, red, blue, red, red, yellow, yellow (rrgrrbrryy). You think this is neat, and you want to send a message to a friend and share the light flashing sequence.

You know the binary (base 2) numbering system, and you know that you could represent each of the lights in two digits: 00 (for red, 0 in decimal), 01 (for green, 1 in decimal), 10 (for blue, 2 in decimal), and 11 (for yellow, 3 in decimal). To send the message rrgrrbrryy to your friend, you could therefore send them this message: 00 00 01 00 00 10 00 00 11 11 (of course, you don’t send the spaces, I’ve just included them to make the message easy for you to read).

Data compression. It’s hard to find a decent picture!

You also need to send a key or dictionary, so your friend can decode the message. You only need to send this once, even if the lights flash in a new sequence and you want to share that with your friend. The dictionary is: red green blue yellow. Your friend will be smart enough to know that this implies red is 00, green is 01, and so on.

Sending the message takes 20 bits: two bits per light flash and ten flashes in total (plus the one-off cost of sending the dictionary, which I’ll ignore for now). Let’s call that the original representation.

Alright, let’s do some simple compression. Notice that red flashes six times, yellow twice, and the other colors once each. Sending the red flashes takes twelve bits, the yellow flashes take four, and the other two colors take four between them, for the total of twenty bits. If we’re smart, what we should be doing is sending the code for red in one bit (instead of two) since it’s sent often, and using more than two bits for green and blue since they’re sent once each. If we could send red in one bit, it’d cost us six bits to send the six red flashes, saving a total of six bits over the two-bit version.

How about we try something simple? Let’s represent red as 0. Let’s represent yellow as 10, green as 110, and blue as 1110 (this is a simple unary counting scheme). Why’d I use that scheme? Well, it’s a simple idea: sort the colors by decreasing flash frequency (red, yellow, green, blue), and assign increasingly longer codes to the colors in a very simple way. To decode, you count 1s until you see a 0, and then you have a key you can use to look in the dictionary. When we see just a 0, we can look in the dictionary to find that seeing zero 1s means red. When we see 1110, we can look in the dictionary to find that seeing three 1s means blue.

Here’s what our original twenty-bit sequence would now look like: 0 0 110 0 0 1110 0 0 10 10. That’s a total of 17 bits, a saving of 3 bits — we’ve compressed the data! Of course, we need to send our friend the dictionary too: red yellow green blue.

It turns out we can do better than this using Huffman coding. We could assign 0 to red, 10 to yellow, 110 to blue, and 111 to green. Our message would then be 0 0 111 0 0 110 0 0 10 10. That’s 16 bits, 1 bit better than our simple scheme (and, again, we don’t need to send the spaces). We’d also need to share the dictionary: red <blank> yellow <blank> blue green to show that 0 is red, 1 isn’t anything, yellow is 10, 11 isn’t anything, blue is 110, and green is 111. A slightly more complicated dictionary for better message compression.
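
If you’re curious how those codes get built, here’s a minimal Python sketch of Huffman coding using a heap of partial trees (illustrative only). The exact 0/1 labels it assigns can differ from the codes above, but the code lengths, and the 16-bit total, come out the same.

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code from a sequence of symbols.
    Returns a dict mapping each symbol to its bit string."""
    freq = Counter(symbols)
    # Each heap entry is (frequency, tie-breaker, {symbol: code-so-far}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, codes1 = heapq.heappop(heap)
        f2, _, codes2 = heapq.heappop(heap)
        # Merge the two least-frequent subtrees: prefix one side's codes
        # with 0 and the other side's with 1.
        merged = {s: "0" + c for s, c in codes1.items()}
        merged.update({s: "1" + c for s, c in codes2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

flashes = "rrgrrbrryy"
codes = huffman_code(flashes)
message = "".join(codes[s] for s in flashes)
print(codes)         # code lengths 1, 2, 3, 3 (red shortest, green and blue longest)
print(len(message))  # 16 bits, as in the example above
```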

Semi-Static Compression

Our two examples are semi-static compression schemes. The dictionary is static: it doesn’t change. However, it’s built from a single pass over the data to learn the symbol frequencies — so the dictionary depends on the data. For that reason, I’ll call our two schemes semi-static schemes.

Huffman coding (or minimum-redundancy coding) is the most famous example of a semi-static scheme.

Semi-static schemes have at least three interesting properties:

  1. They require two passes over the data: one to build the dictionary, and another to emit the compressed representation
  2. They’re data-dependent, meaning that the dictionary is built based on the symbols and frequencies in the original data. A dictionary that’s derived from one data set isn’t optimal for a different data set (one with different frequencies) — for example, if you figured out the dictionary for Shakespeare’s works, it isn’t going to be optimal for compressing War and Peace
  3. You need to send the dictionary to the recipient, so the message can be decoded. Lots of folks forget to include this cost when they share the compression ratios or savings they’re seeing; don’t do that

Whatever kind of compression scheme you choose, you need to decide what the symbols are. For example, you could choose letters, words, or even phrases from English text.

Static Compression

Morse code is an example of a (fairly lame) static compression scheme. The dictionary is universally known (and doesn’t need to be communicated), but the dictionary isn’t derived from the input data. This means that the compression ratios you’ll see are at best the same as you’ll see from a semi-static compression scheme, and usually worse.

There aren’t many static compression schemes in widespread use. Two examples are Elias gamma and delta codes, which are used to compress inputs that consist of only integer values.
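
To give a flavor of how simple a static scheme can be, here’s a sketch of an Elias gamma encoder in Python: the code for a positive integer n is floor(log2 n) zero bits followed by n in binary. No dictionary needs to be sent, because the code doesn’t depend on the data.

```python
def elias_gamma(n):
    """Elias gamma code for a positive integer: floor(log2 n) zero bits,
    followed by n written in binary (which always starts with a 1 bit)."""
    if n < 1:
        raise ValueError("Elias gamma only codes positive integers")
    binary = bin(n)[2:]                      # e.g. 9 -> "1001"
    return "0" * (len(binary) - 1) + binary  # e.g. 9 -> "0001001"

for n in (1, 2, 3, 4, 9):
    print(n, elias_gamma(n))  # 1, 010, 011, 00100, 0001001
```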

Adaptive Compression

The drawbacks of semi-static compression schemes are two-fold: you need to process the data twice, and they don’t adapt to local regions in the data where the frequencies of symbols might vary from the overall frequencies. Imagine, for example, that you’re compressing a very large image: you might find a region of blue sky, where there are only a few blue colors. If you built a dictionary for only that blue section, you’d get a different (and better) dictionary than the one you’d get for the whole image.

Here’s the idea behind adaptive compression. Build the dictionary as you go: process the input, and see if you can find it in the (initially empty) dictionary. If you can’t, add the input to the dictionary. Now emit the compressed code, and keep on processing the input. In this way, your dictionary adapts as you go, and you only have to process the data once to create the dictionary and compress the data. The most famous example is LZW compression, which is used in many of the compression tools you’re probably using: gzip, pkzip, GIF images, and more.
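
Here’s a minimal sketch of the compression side of LZW in Python (decompression works similarly, rebuilding the same dictionary as it reads the codes). As most LZW implementations do, it seeds the dictionary with the 256 single-byte characters rather than starting completely empty; it’s illustrative only.

```python
def lzw_compress(text):
    """A minimal LZW compressor: extend the current match while it's in the
    dictionary, and when it isn't, emit a code and learn the new string."""
    dictionary = {chr(i): i for i in range(256)}
    next_code = 256
    current = ""
    output = []
    for ch in text:
        candidate = current + ch
        if candidate in dictionary:
            current = candidate                  # keep extending the match
        else:
            output.append(dictionary[current])   # emit the longest known match
            dictionary[candidate] = next_code    # learn the new string as we go
            next_code += 1
            current = ch
    if current:
        output.append(dictionary[current])
    return output

print(lzw_compress("abababab"))  # [97, 98, 256, 258, 98]: 5 codes for 8 characters
```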

Adaptive schemes have at least three interesting properties:

  1. One pass over the data creates the dictionary and the compressed representation, an advantage over semi-static schemes
  2. Adaptive schemes get better compression the more input they see: since they don’t approximate the global symbol frequencies until lots of input has been processed, they’re usually much less optimal than semi-static schemes for small inputs
  3. You can’t randomly access the compressed data, since the dictionary is derived from processing the data sequentially from the start. This is one good reason why folks use static and semi-static schemes for some applications

What about lossy compression?

The taxonomy I’ve presented is for lossless compression schemes — those that are used to compress and decompress data such that you get an exact copy of the original input. Lossy compression schemes don’t guarantee that: they’re schemes where some of the input is thrown away by approximation, and the decompressed representation isn’t guaranteed to be the same as the input. A great example is JPEG image compression: it’s an effective way to store an image, by throwing away (hopefully) unimportant data and approximating it instead. The MP3 music file format and the MP4 video format (usually) do the same thing.

In general, lossy compression schemes are more compact than lossless schemes. That’s why, for example, GIF image files are often much larger than JPEG image files.

Hope you learnt something useful. Tell me if you did or didn’t. See you next time.

Don’t use a Pie Chart

I don’t like pie charts. Why? It’s almost impossible to compare the relative size of the slices. Worse still, it is actually impossible to compare the slices between two pie charts.

Pie charts in action.

Take the example above. Take a look at Pie Chart A. The blue slice looks the same size as the red or green slice. You might draw the conclusion they’re roughly the same. In fact, take a look at the histogram below — the red is 17 units, blue is 18 units, and green is 20 units. The histogram is informative, useful for comparison, and clear for communication. (There’s still a cardinal sin here though: no labels on the axes; I’ll rant about that some other day.)

Compare pie charts B and C above. It sure looks like there’s the same quantity of blue in B and C, and about the same amount of green. The quantities aren’t the same: the histograms below show that green is 19 and 20 respectively, and blue is 20 and 22.

You might argue that pie charts are useful for comparing relative quantities. I’d argue it’s possible but difficult to interpret. Take a look at the three yellow slices in charts A, B, and C. They look to be similar percentages of the pies, but it’s hard to tell: they are oriented slightly differently, making the comparison unclear. What’s the alternative? Use a histogram with the y-axis representing the percentage.
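
If you want to see the difference for yourself, here’s a small matplotlib sketch that plots the made-up values behind chart A (17, 18, and 20 units) as a bar chart instead, with labeled axes.

```python
import matplotlib.pyplot as plt

# The hypothetical values behind pie chart A: hard to compare as slices,
# easy to compare as bars.
labels = ["red", "blue", "green"]
values = [17, 18, 20]

plt.bar(labels, values, color=["red", "blue", "green"])
plt.xlabel("Category")
plt.ylabel("Units")  # label the axes (the other cardinal sin mentioned above)
plt.title("The same data as a bar chart")
plt.show()
```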

Pie charts get even more ugly when there’s lots of slices, or some of the slices are ridiculously small. If you’re going to use them — I still don’t think it’s a good idea — then use them when there are only a few values and the smallest slices can be labeled. The pie chart below is ridiculous.

A pie chart that’s nearly impossible to read. I’m not sure what conclusions you could draw from this one.

Why do I care? I’m in the business of communicating information, and I spend much of my time reviewing information that people share with me. Clarity is important — decisions are made on data. See you next time.