Author Archives: Hugh E. Williams

About Hugh E. Williams

Search engine guy, Engineer, Executive, Father, and eBay-er.

It’s a Marathon, not a Sprint…

What makes a career successful over the long term? How do you sustain (or even increase) your professional impact, while also deriving meaning and enjoyment from your life? This was the question I set about answering in my talk at StretchCon last week. To see my answers, you can watch the presentation. You can also view the prezi.

The Backstory

I asked eleven colleagues about their success. I chose colleagues who have made it to the C-suite (whether as a CEO, CTO, or another C-level designation) and who appeared to do it with balance between their professional and personal lives. Ten of the eleven responded, and nine of the ten shared thoughts before my deadline. I thank Chris Caren, Adrian Colyer, John Donahoe, Ken Moss, Satya Nadella, Mike Olson, Christopher Payne, Stephanie Tilenius, and Joe Tucci for their help.

I sent each of these colleagues an email that went something like this:

I am speaking about successful careers being about sustained contribution (and not a series of sprints, all-nighters, or unsustainable peaks). Would you be up for giving me a quote I could use and attribute to you? I admire your ability to work hard and smart, while obviously also having a life outside of work.

Their replies were varied, but as you’ll see in the video, there were themes that repeated in their answers. I shared edited quotes in the talk, and promised that I’d share their complete thoughts in my blog. The remainder of this blog is their complete words.

Chris Caren

Chris is the CEO and Chairman of Turnitin. We worked together at Microsoft, and Chris was (and sometimes still is!) my mentor. Here are his thoughts in response to my questions:

My philosophy:  I do my best work when my life is in balance — family, me, and work.  I need a routine of hard work, but no more than 9-10 hours a day, solid exercise daily, low stress (via self-control), 7-8 hours of sleep at a minimum each day, and the time I want with my family and for myself.  When I maintain this balance, I am maximally effective at work — both in terms of quality of thinking and decision making, and maximum output.  More hours worked actually pull down my impact as a CEO.

Adrian Colyer

Adrian was the CTO of SpringSource, the custodians of the Spring Java programming framework. We worked together at Pivotal, where he was the CTO of the Application Fabric team. Recently, Adrian joined Accel Partners as an Executive-in-Residence. Here are Adrian’s thoughts:

A great topic! Maybe the most counter-intuitive lesson I’ve learned over the years is that I can make a much more valuable contribution when I work* less. So work-life balance isn’t really a trade-off as most people normally present it (I have more ‘life’, but sacrifice ‘work’ to get it), it’s actually a way of being better at both life *and* work!

* ‘work’ in the sense that most people would intuitively think of it – frenetic activity.

When I’ve analysed this, I came to realise that when work crowds everything else out I often end up in a very reactive mode. But the biggest and most impactful things you can do – especially as a leader – don’t come about during that constant fire-fighting mode. The vast majority of my important insights and decisions – the things that made the biggest positive impact on the organisations I was working with at the time – have come in the space I’ve made around the busy-ness of the office to actually allow myself the luxury of thinking! Running, cycling, walking and so on have all been very effective for me over the years. But so is just taking some time out in the evening and not necessarily even consciously thinking about work; the brain seems to be very good at background processing! That time has also given space to allow my natural curiosity and love of learning to be indulged. In turn that creates a broader perspective, exposes you to new ideas, and allows you to make connections and insights that you otherwise might not have. All of this feeds back into the work-related decisions and issues you are wrestling with and helps you to make breakthroughs from time to time.

To the extent I’ve been successful over the years, I attribute much of that not to being smarter than the people around me, nor to working ‘harder’, but to creating the space to think.

John Donahoe

John is the CEO of eBay Inc. John was an enthusiastic sponsor of my work while I was there. When I asked John for his thoughts, he sent me a speech he’d recently given to the graduating class at the Stanford Business School. In it, you’ll find John’s thoughts on his professional and personal journey.

Ken Moss

Ken recently became the CTO of Electronic Arts. Prior to that, Ken and I worked together on, off, and on over a period of nine years. Ken was the GM of Microsoft’s MSN Search when I joined Microsoft, and left to found his own company. I managed to help persuade Ken to come to eBay for a few years. Here are Ken’s thoughts:

Always focus on exceeding expectations in the present, while keeping your tank 100% full of gas for the future. There is no quicker way to stall your career than by burning yourself out. I’ve seen many potentially brilliant careers cut short as someone pushed themselves too far past their limits and became bitter under-performers. It’s always in your control.

Satya Nadella

Satya became the CEO of Microsoft at the beginning of 2014. Satya was the VP of the Bing search team at Microsoft for around half the time I was there, and we have stayed in touch since. Here are Satya’s thoughts:

I would say the thing that I am most focused on is to harmonize my work and life vs trying to find the elusive “balance”. Being present in the lives of my family in the moments I am with them is more important than any quantitative view of balance.

Mike Olson

Mike is the Chairman, Chief Strategy Officer, and former CEO of Cloudera. We have interacted during my time at Pivotal, and also during my time at eBay. Mike was kind enough to invite me to give the keynote at Hadoop World in 2011. Here are Mike’s thoughts:

I have always tried to optimize for interesting — working on problems that are important to me, with people who blow my hair back. The combination has kept me challenged and inspired, and has guaranteed real happiness in the job.

By corollary, you have to be willing to walk away from a good paycheck and fat equity if the work or the people are wrong. Money is cheaper than meaning. I’ve done that a few times. There’s some short-term angst, but it’s paid off in the long term.

Christopher Payne

Christopher is the SVP of the North America business at eBay. Christopher and I have worked on, off, and on for nine years. Christopher was the founding VP of the search team at Microsoft. He left to found his own company, his company was bought by eBay, he hired me to eBay to help run engineering, and he then moved over to run the US and Canadian business teams. Here are Christopher’s thoughts:

I believe strongly in the need to maintain my energy level in order to have the most impact in my career. To do this I find I have to make the time to recharge. For me this means taking walks during the work day, taking all of my vacation, and not being on email 24/7. With my energy level high I find I can be significantly more creative and productive over the long term.

Stephanie Tilenius

Stephanie recently founded her own company, Vida. While she’s spent parts of her career at Kleiner-Perkins, Google, and other places, we met at eBay where we spent around six months working together. Here are Stephanie’s thoughts:

… my point of view is that you have to do something you love, that will sustain you. You also have to know what drives you, what gets you out of bed, for me it is having an impact (for others it may be making money or playing a sport, etc.) You will always be willing to give it your all and you are more likely to innovate if you love what you are doing and constantly growing, challenging the status quo (stagnation is really out of the question, humans don’t thrive on it!). I am committed to my work and to constant innovation but also to having a family and I could not be great at either without the other. They are symbiotic in my mind, they both make me happy and a better person. I have learned it is about integration not necessarily perfect balance. If you integrate life and work, you are much more likely to be successful. The other day my son was out of school early and our nanny had an issue so I brought him to work and he did code academy and talked to some of our engineers. He enjoyed himself and was inspired.

Joe Tucci

Joe is the Chairman of EMC, VMware, and Pivotal, and the CEO of EMC. I met Joe in the interview process at Pivotal, and have worked with him through board and other meetings over the past year. Here are Joe’s thoughts:

Being a successful CEO is relatively straightforward… 1st – retain, hire, and develop the best talent, 2nd – get these talented individuals to work together as a team (do not tolerate selfishness), 3rd – get this leadership team to embrace a stretch goal that is bigger than any of them imagine they can attain, and 4th – maniacally focus the leadership team on our customers (always striving to exceed their expectations)

I enjoyed giving the talk at Stretch, and interacting with these colleagues in putting it together. I hope you enjoyed it too. See you next time.

A Whirlwind Tour of a Search Engine

Thanks to Krishna Gade and Michael Lopp, last night I had the opportunity to speak at Pinterest’s new DiscoverPinterest tech talk series. I spoke for around 45 minutes, taking the audience on a tour of the world of (mostly) web search — skating across the top of everything from ranking, to infrastructure, to inverted indexing, to query alterations, and more. I had a lot of fun.

I also had the chance to listen to four of Pinterest’s engineering leaders discuss their work in browsing, content-based image retrieval, infrastructure, and graph processing and relevance. They’re up to some interesting work — particularly if you’re interested in the intersection of using images, human-curated data, and browsing.

There’s a video of the presentations coming, and I’ll update this post with a link to that soon. In the meantime, here’s the deck: pinterest search v2.

On a social note, it was great to see several of the folks I worked with at Bing. Krishna took a selfie with Yatharth Saraf and me. Those were truly the days — we were in many ways ahead of our time.

Shameless advertisement: if you’d like me to present on search (or anything else) at your organization, please feel free to ask. I have a 30 minute, 1 hour, and whole day tutorial on search engines. I’m also available for consulting and advising!

See you next time.

Measuring Search Relevance

How do you know when you’ve improved the relevance of a search engine? There are many ways to understand this, for example running A/B tests on your website or doing qualitative studies in a lab environment with a few customers. This blog post focuses on using large numbers of human judges to assess search performance.

Relevance Judgment

The process of asking many judges to assess search performance is known as relevance judgment: collecting human judgments on the relevance of search results. The basic task goes like this: you present a judge with a search engine query and a search result, and you ask the judge to assess how relevant the result is to the query on (say) a four-point scale.

Suppose the query you want to assess is ipod nano 16Gb. Imagine that one of the results is a link to Apple’s page that describes the latest Apple iPod nano 16Gb. A judge might decide that this is a “great result” (which might be, say, our top rating on the four-point scale). They’d then click on a radio button to record their vote and move on to the next task. If the result we showed them was a story about a giraffe, the judge might decide this result is “irrelevant” (say the lowest rating on the four-point scale). If it were information about an iPhone, it might be “partially relevant” (say the second-to-lowest), and if it were a review of the latest iPod nano, the judge might say “relevant” (it’s not perfect, but it sure is useful information about an Apple iPod).

The human judgment process itself is subjective, and different people will make different choices. You could argue that a review of the latest iPod nano is a “great result” — maybe you think it’s even better than Apple’s page on the topic. You could also argue that the definitive Apple page isn’t terribly useful in making a buying decision, and you might only rate it as relevant. A judge who knows everything about Apple’s products might make a different decision to someone who’s never owned a digital music player. You get the idea. In practice, judging decisions depend on training, experience, context, knowledge, and quality — it’s an art at best.

There are a few different ways to address subjectivity and get meaningful results. First, you can ask multiple judges to assess the same results to get an average score. Second, you can judge thousands of queries, so that you can compute metrics and be confident statistically that the numbers you see represent true differences in performance between algorithms. Last, you can train your judges carefully, and give them information about what you think relevance means.

Choosing the Task and Running the Judging

You have to decide what queries to judge, how many queries to judge, how many answers to show the judges, and what search engines or algorithms you want to compare. One possible approach to choosing queries is to randomly sample queries from your search engine query logs. You might choose to judge hundreds or thousands of queries. For each query, you might choose to judge the first ten results, and you might choose to compare the first ten results from each of (say) Google, Bing, and DuckDuckGo.

Most search companies do their relevance judgment with crowdsourcing. They put tasks in the public domain, and pay independent people to perform the judgments using services such as CrowdFlower. This does create some problems – some people try to game the system by writing software that randomly answers questions, or they answer fast and erroneously. Search companies have to work constantly on detecting problems, and removing both the poor results and judges from the system. To give you a flavor, one thing search folks do is inject questions where they know what the relevance score should be, and then check that the judges answer most of those correctly (this is known as a ringer test). Another thing folks do is look for judges who consistently answer differently from other judges for the same tasks.
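To make the ringer test concrete, here’s a minimal sketch in Python. The question IDs, judge names, answers, and the 70% accuracy threshold are all hypothetical, invented for illustration:

```python
# Gold labels for "ringer" questions whose correct relevance score is known.
gold = {"q1": 3, "q2": 0, "q3": 2}

# Hypothetical answers from two judges to those same ringer questions.
judges = {
    "judge_a": {"q1": 3, "q2": 0, "q3": 2},  # matches the gold labels
    "judge_b": {"q1": 1, "q2": 3, "q3": 0},  # answers look random
}

def ringer_accuracy(answers):
    """Fraction of ringer questions this judge answered correctly."""
    correct = sum(1 for q, label in gold.items() if answers.get(q) == label)
    return correct / len(gold)

# Flag judges who fall below a (hypothetical) 70% accuracy threshold.
flagged = [name for name, answers in judges.items() if ringer_accuracy(answers) < 0.7]
print(flagged)  # ['judge_b']
```

In a real system you’d also compare each judge’s answers against the other judges’ answers on shared tasks, as described above, rather than relying on ringers alone.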

Scoring Relevance Judgments

When you’ve got tens of answers for each query, and you’ve completed judging at least a few hundred queries, you’re ready to compute a metric that allows you to compare algorithms.

An industry favorite is NDCG, Normalized Discounted Cumulative Gain. It sounds complicated, but it’s a common-sense measure. Suppose that on our four-point scale, you give a 0 score for an irrelevant result, 1 for a partially relevant, 2 for relevant, and 3 for perfect. Suppose also that a query is judged by one of the judges, and the first four results that the search engine returns are assessed as relevant, irrelevant, perfect, and relevant by the judge. The cumulative gain after four results is the sum of the scores for each result: 2 + 0 + 3 + 2 = 7. That’s shown in the table below: result position or rank in the first column, the judge’s score or gain in the second column, and a running total or cumulative gain in the third column.

Rank   Judgment (Gain)   Cumulative Gain
1      2                 2
2      0                 2
3      3                 5
4      2                 7

Now for the Discounted part in NDCG. Search engine companies know that the first result in the search results is more important than the second, the second more important than the third, and so on. They know this because users click on result one much more than result two, and so on. Moreover, there’s plenty of research that shows users expect search engines to return great results at the top of the page, that they are unlikely to view results low on the page, and that they dislike having to use pagination.

The Discounted part of NDCG adds in a weighting based on position: one simple way to make position one more important than two (and so on) is to sum the score divided by the rank. So, for example, if the third result is “great”, its contribution is 3 / 3 = 1 (since the score for “great” is 3, and the rank of the result is 3). If “great” were the first result, its contribution would be 3 / 1 = 3. In practice, the score is often divided by the log of the rank, which seems to better match the user perception of relevance. Anyway, for our example and to keep it simple, the Discounted Cumulative Gain (DCG) after four results is 2 / 1 + 0 / 2 + 3 / 3 + 2 / 4 = 3.5. You can see this in the table below: the third column has the discounted gain (the gain divided by the rank), and the fourth column keeps the running total or cumulative gain.

Rank   Judgment (Gain)   Discounted Gain   Discounted Cumulative Gain (DCG)
1      2                 2/1               2
2      0                 0/2               2
3      3                 3/3               3
4      2                 2/4               3.5

The Normalized part in NDCG allows us to compare DCG values between different queries. It’s not fair to compare DCG values across queries because some queries are easier than others: for example, maybe it’s easy to get four perfect results for the query ipod nano, and much harder to get four perfect results for 1968 Porsche 912 targa soft window. If the search engine gets a high score for the easy query, and a poor score for the hard query, it doesn’t mean it’s worse at hard queries – it might just mean the queries have different degrees of difficulty.

Normalization works like this: you figure out what the best possible score is given the results you’ve seen so far. In our previous example, the results scored 2, 0, 3, and 2. The best arrangement of these same results would have been: 3, 2, 2, 0, that is, if the “great” result had been ranked first, followed by the two “relevant” ones, and then the “irrelevant”. This best ranking would have a DCG score of 3 / 1 + 2 / 2 + 2 / 3 + 0 / 4 = 4.67. This is known as the “ideal DCG,” or iDCG.  Our NDCG is the score we got (3.50) divided by the ideal DCG (4.67), or 3.50 / 4.67 = 0.75. Now we can compare scores across queries, since we’re comparing percentages of the best possible arrangements and not the raw scores.

The table below builds out the whole story. You’ve seen the first four columns before. The fifth and sixth columns show what would have happened if the search engine had ordered the results in the perfect order. The seventh and final column shows the fourth column (the DCG) divided by the sixth column (the ideal DCG, or iDCG), and the overall NDCG for our task is shown as 0.75 in the bottom-right corner.

Rank   Judgment (Gain)   Discounted Gain   DCG   Ideal Discounted Gain   Ideal DCG (iDCG)   NDCG
1      2                 2/1               2.0   3/1                     3.0                0.67
2      0                 0/2               2.0   2/2                     4.0                0.50
3      3                 3/3               3.0   2/3                     4.67               0.64
4      2                 2/4               3.5   0/4                     4.67               0.75
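The whole calculation is a few lines of Python. This sketch uses the simple divide-by-rank discount from the worked example (production systems usually divide by the log of the rank, as noted earlier):

```python
def dcg(gains):
    # Discounted cumulative gain, using the simple divide-by-rank discount.
    return sum(gain / rank for rank, gain in enumerate(gains, start=1))

def ndcg(gains):
    # Normalize by the DCG of the best possible ordering of the same judgments.
    ideal = sorted(gains, reverse=True)
    return dcg(gains) / dcg(ideal)

# The judgments from the worked example: relevant, irrelevant, perfect, relevant.
print(dcg([2, 0, 3, 2]))             # 3.5
print(round(ndcg([2, 0, 3, 2]), 2))  # 0.75
```

The numbers match the bottom row of the table: a DCG of 3.5, an iDCG of 4.67, and an NDCG of 0.75.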

Comparing Search Systems

Once you’ve computed NDCG values for each query, you can average them across thousands of queries. You can now compare two algorithms or search engines: you take the mean NDCG value for each system, and check using a statistical test (such as a two-sided t-test) whether one algorithm is better than the other, and with what confidence. You might, for example, be able to say with 90% confidence that Google is better than Bing.
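As a sketch of that comparison step, here’s Welch’s t statistic computed with Python’s standard library (the per-query NDCG values below are invented for illustration; with real data you’d have thousands of queries per system and would compute a p-value, for example with scipy.stats.ttest_ind):

```python
import statistics

def welch_t(a, b):
    """Welch's t statistic for two independent samples of per-query NDCG values."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    return (mean_a - mean_b) / (var_a / len(a) + var_b / len(b)) ** 0.5

# Hypothetical per-query NDCG values for two ranking algorithms.
system_a = [0.70, 0.72, 0.68, 0.71, 0.69, 0.73]
system_b = [0.75, 0.77, 0.74, 0.76, 0.78, 0.74]

t = welch_t(system_a, system_b)
print(t < 0)  # True: a negative t suggests system_b has the higher mean NDCG
```

The sign of t tells you which system scored higher on average; the magnitude, together with the sample sizes, determines how confident you can be that the difference is real.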

As I mentioned at the beginning, this is one important factor you could consider when comparing two algorithms. But there’s more to search engine comparison than comparing NDCG metrics. As I’ve said in previous posts, I’m a huge fan of measuring in many different ways and making decisions with all the data at hand. It takes professional judgment to decide one algorithm is better than another, and that’s part of managing any search engineering effort.

Hope you found this useful, see you next time!

Afterword

I published a first version of this post on eBay’s technical blog in 2010. I owe thanks to Jon Degenhardt for cleaning up a math error, and formatting the tables.

It’s an apostrophe

Even the smartest folks I know can’t get apostrophes right. (Please don’t read all my blog posts and find the mistakes!). Let me see if I can help. 

  • “It’s” is equivalent to “it is”. If you write “it’s” in a sentence, check it makes sense if you replace it with “it is”. If yes, good. If no, you probably meant “its”
  • “Its” is a possessive. “The dog looked at its tail”. As in, the tail attached to the dog was stared at by the aforementioned canine

Get those right, and you’re ahead of 98% of apostrophe users.

Don’t write “In the 1980’s, rock music was…”. You mean “In the 1980s, …”. As in, the plural: the ten years that constitute the decade that began in 1980. These are also correct: “He collected LPs” or “She installed LEDs instead of incandescent globes”. You’ll find some people argue about these: for example, some folks write “mind your P’s and Q’s”, and argue correctness. I personally think it’s wrong, there are many Ps and Qs, and so it should be “mind your Ps and Qs”.

Watch out for possessives of non-plural nouns that end in “s”. “Hugh William’s blogs are annoying” and “Hugh Williams’ blogs are annoying” are both wrong. “Hugh Williams’s blogs are annoying” is right (in more ways than one?).

One trick I use is this: if you say the “s”, add the “s”. Hugh Williams’s blog. Ross’s Dad. The boss’s desk. If you don’t say it, don’t add it. His Achilles’ heel. Those genres’ meanings.

Have a fun week!

Fireside chat at the DataEdge Conference

The video of my recent conversation with Michael Chui from McKinsey as part of the UC Berkeley DataEdge conference is now online. Here it is:

The discussion is around 30 minutes. I tell a few stories, and most of them are mostly true. We talk about my career in data, search, changing jobs, inventing infinite scroll, eBay, Microsoft, Pivotal, and more.  Enjoy!

Putting Email on a Diet

A wise friend of mine once said: try something new for 30 days, and then decide if you want to make it permanent.

Here’s my latest experiment: turning off email on my iPhone. Why? I found I was in work meetings, or spending time with the family, and I’d frequently pick up my phone and check my email. The result was I wasn’t participating in what I’d chosen to be part of — I was distracted, disrespectful of the folks I was with, and fostering a culture of rapid-fire responses to what was supposed to be an asynchronous communication medium. So, I turned email off on my iPhone.

What happened? I am enjoying and participating in meetings more. I am paying attention to the people and places I have chosen to be. And I’m not falling behind on email — I do email when I choose to do it, and it’s a more deliberate and effective effort.

Have I strayed? Yes, I have. When I’m truly mobile (traveling and away from my computer), I choose to turn it on and stay on top of my inbox — that’s a time when I want to multitask and make the best use of my time by actually choosing to do email. And then I turn it off again.

My calendar and contacts are still enabled. On the go, I want to know where and when I need to be somewhere, and to be able to consciously check my plans. I also want to be able to contact people with my phone.

Will I stick with it? I think so. Give it a try.

See you next time.

Armchair Guide to Data Compression

Data compression is used to represent information in less space than its original representation. This post explains the basics of lossless compression, and hopefully helps you think about what happens when you next compress data.

As with my post on hash tables, it’s aimed at software folks who’ve forgotten about compression, or folks just interested in how software works; if you’re a compression expert, you’re in the wrong place.

A Simple Compression Example

Suppose you have four colored lights: red, green, blue, and yellow. These lights flash in a repeating pattern: red, red, green, red, red, blue, red, red, yellow, yellow (rrgrrbrryy). You think this is neat, and you want to send a message to a friend and share the light flashing sequence.

You know the binary (base 2) numbering system, and you know that you could represent each of the lights in two digits: 00 (for red, 0 in decimal), 01 (for green, 1 in decimal), 10 (for blue, 2 in decimal), and 11 (for yellow, 3 in decimal). To send the message rrgrrbrryy to your friend, you could therefore send them this message: 00 00 01 00 00 10 00 00 11 11 (of course, you don’t send the spaces, I’ve just included them to make the message easy for you to read).

Data compression. It’s hard to find a decent picture!

You also need to send a key or dictionary, so your friend can decode the message. You only need to send this once, even if the lights flash in a new sequence and you want to share that with your friend. The dictionary is: red green blue yellow. Your friend will be smart enough to know that this implies red is 00, green is 01, and so on.

Sending the message takes 20 bits: two bits per light flash and ten flashes in total (plus the one-off cost of sending the dictionary, which I’ll ignore for now). Let’s call that the original representation.

Alright, let’s do some simple compression. Notice that red flashes six times, yellow twice, and the other colors once each. Sending those red flashes takes twelve bits, sending the yellow flashes takes four, and the other two colors take a total of four, making up the twenty bits. If we’re smart, what we should be doing is sending the code for red in one bit (instead of two) since it’s sent often, and using more than two bits for green and blue since they’re sent once each. If we could send red in one bit, it’d cost us six bits to send the six red flashes, saving a total of six bits over the two-bit version.

How about we try something simple? Let’s represent red as 0. Let’s represent yellow as 10, green as 110, and blue as 1110 (this is a simple unary counting scheme). Why’d I use that scheme? Well, it’s a simple idea: sort the color flashes by decreasing frequency (red, yellow, green, blue), and assign increasingly longer codes to the colors in a very simple way: you can count 1s until you see a 0, and then you have a key you can use to look in the dictionary. When we see just a 0, we can look in the dictionary to find that seeing zero 1s means red. When we see 1110, we can look in the dictionary to find that seeing three 1s means blue.

Here’s what our original twenty-bit sequence would now look like: 0 0 110 0 0 1110 0 0 10 10. That’s a total of 17 bits, a saving of 3 bits — we’ve compressed the data! Of course, we need to send our friend the dictionary too: red yellow green blue.

It turns out we can do better than this using Huffman coding. We could assign 0 to red, 10 to yellow, 110 to blue, and 111 to green. Our message would then be 0 0 111 0 0 110 0 0 10 10. That’s 16 bits, 1 bit better than our simple scheme (and, again, we don’t need to send the spaces). We’d also need to share the dictionary: red <blank> yellow <blank> blue green to show that 0 is red, 1 isn’t anything, yellow is 10, 11 isn’t anything, blue is 110, and green is 111. A slightly more complicated dictionary for better message compression.
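A small Python sketch makes it easy to check the three message lengths: the fixed two-bit code, the simple unary-style code, and the Huffman code from above:

```python
sequence = "rrgrrbrryy"  # the light-flash pattern

# The three dictionaries described in the text.
fixed   = {"r": "00", "g": "01",  "b": "10",   "y": "11"}
unary   = {"r": "0",  "y": "10",  "g": "110",  "b": "1110"}
huffman = {"r": "0",  "y": "10",  "b": "110",  "g": "111"}

for name, code in [("fixed", fixed), ("unary", unary), ("huffman", huffman)]:
    message = "".join(code[flash] for flash in sequence)
    print(name, len(message))
# fixed 20
# unary 17
# huffman 16
```

Both variable-length codes are prefix-free: no code is a prefix of another, so the receiver can decode the bit stream unambiguously without the spaces.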

Semi-Static Compression

Our two examples are semi-static compression schemes. The dictionary is static: it doesn’t change. However, it’s built from a single pass over the data to learn the symbol frequencies — so the dictionary is dependent on the data. For that reason, I’ll call them semi-static schemes.

Huffman coding (or minimum-redundancy coding) is the most famous example of a semi-static scheme.

Semi-static schemes have at least three interesting properties:

  1. They require two passes over the data: one to build the dictionary, and another to emit the compressed representation
  2. They’re data-dependent, meaning that the dictionary is built based on the symbols and frequencies in the original data. A dictionary that’s derived from one data set isn’t optimal for a different data set (one with different frequencies) — for example, if you figured out the dictionary for Shakespeare’s works, it isn’t going to be optimal for compressing War and Peace
  3. You need to send the dictionary to the recipient, so the message can be decoded; lots of folks forget to include this cost when they share the compression ratios or savings they’re seeing. Don’t do that

Whatever kind of compression scheme you choose, you need to decide what the symbols are. For example, you could choose letters, words, or even phrases from English text.

Static Compression

Morse code is an example of a (fairly lame) static compression scheme. The dictionary is universally known (and doesn’t need to be communicated), but the dictionary isn’t derived from the input data. This means that the compression ratios you’ll see are at best the same as you’ll see from a semi-static compression scheme, and usually worse.

There aren’t many static compression schemes in widespread use. Two examples are Elias gamma and delta codes, which are used to compress inputs that consist of only integer values.
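For a flavor of a static scheme, here’s a sketch of the Elias gamma code for positive integers: write one fewer zeros than the number of binary digits, then the number in binary. No dictionary is ever transmitted, because every integer always encodes the same way:

```python
def elias_gamma(n):
    # Unary length prefix (zeros), then the value in binary.
    assert n >= 1, "Elias gamma encodes positive integers only"
    binary = bin(n)[2:]
    return "0" * (len(binary) - 1) + binary

for n in (1, 2, 4, 9):
    print(n, elias_gamma(n))
# 1 1
# 2 010
# 4 00100
# 9 0001001
```

Small integers get short codes, so gamma codes work well when small values are much more common than large ones, which is exactly the case for the integer sequences stored in inverted indexes.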

Adaptive Compression

The drawbacks of semi-static compression schemes are two-fold: you need to process the data twice, and they don’t adapt to local regions in the data where the frequencies of symbols might vary from the overall frequencies. Imagine, for example, that you’re compressing a very large image: you might find a region of blue sky, where there’s only a few blue colors. If you built a dictionary for only that blue section, you’d get a different (and better) dictionary than the one you’d get for the whole image.

Here’s the idea behind adaptive compression. Build the dictionary as you go: process the input, and see if you can find it in the (initially empty) dictionary. If you can’t, add the input to the dictionary. Now emit the compressed code, and keep on processing the input. In this way, your dictionary adapts as you go, and you only have to process the data once to create the dictionary and compress the data. The most famous example is LZW compression, which is used in many of the compression tools you’re probably using: gzip, pkzip, GIF images, and more.
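Here’s a minimal sketch of the compression half of that idea, in the style of LZW. Real implementations deal with details I’m ignoring here, like code widths and dictionary resets:

```python
def lzw_compress(data):
    # Start with a dictionary of all single characters.
    dictionary = {chr(i): i for i in range(256)}
    next_code = 256
    current = ""
    output = []
    for ch in data:
        candidate = current + ch
        if candidate in dictionary:
            current = candidate  # keep extending the current match
        else:
            output.append(dictionary[current])
            dictionary[candidate] = next_code  # adapt: learn a new entry
            next_code += 1
            current = ch
    if current:
        output.append(dictionary[current])
    return output

# The light-flash sequence from earlier: ten symbols become eight codes.
print(len(lzw_compress("rrgrrbrryy")))  # 8
```

Notice the dictionary grows as the input is processed: the second time “rr” appears, it’s emitted as a single learned code rather than two.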

Adaptive schemes have at least three interesting properties:

  1. One pass over the data creates the dictionary and the compressed representation, an advantage over semi-static schemes
  2. Adaptive schemes get better compression the more input they see: since they don’t approximate the global symbol frequencies until lots of input has been processed, they’re usually much less effective than semi-static schemes on small inputs
  3. You can’t randomly access the compressed data, since the dictionary is derived from processing the data sequentially from the start. This is one good reason why folks use static and semi-static schemes for some applications

What about lossy compression?

The taxonomy I’ve presented is for lossless compression schemes — those that are used to compress and decompress data such that you get an exact copy of the original input. Lossy compression schemes don’t guarantee that: they’re schemes where some of the input is thrown away by approximation, and the decompressed representation isn’t guaranteed to be the same as the input. A great example is JPEG image compression: it’s an effective way to store an image, by throwing away (hopefully) unimportant data and approximating it instead. The MP3 music file format and the MP4 video format (usually) do the same thing.

In general, lossy compression schemes produce more compact output than lossless schemes. That’s why, for example, GIF image files are often much larger than JPEG image files.

Hope you learnt something useful. Tell me if you did or didn’t. See you next time.