
Managing small teams: the tools you should use

Bobby Kostadinov recently asked me on Twitter what tools you should use to manage a small team.

And I was curious enough to go ask my team at eBay, where we use Yammer to have fun conversations like this one.

The net of it is the team prefers simple approaches over complicated ones:

  • Many folks prefer using sticky notes, a whiteboard, and a camera for capturing them
  • A few folks like a spreadsheet, probably one with a few macros that create burn down charts and statistics
  • Using tools such as Jira, Trello, and Github Issues is becoming popular in the team

My personal experience with very small teams has been with sticky notes, whiteboards, and simple spreadsheets. When I began my career in 1990, I used a whiteboard on my first team and a Polaroid camera to capture it. Many years later, I used a simple spreadsheet, and sticky notes late in the project to close down bugs and those final few features. If I had my time again on a small project, I’d probably do exactly the same thing.

The people on my team are wiser than me — they’ve got current experience with small teams — and here’s a compilation of what they said.

The most popular method: a whiteboard, a camera, and sticky (aka Post-it) notes

Ravi started the conversation about a simple approach and says that “for a small team – especially if they are working on a new project – [the] best I have seen so far is a whiteboard with cards or sticky notes. Put tasks as sticky notes or write in a card, move it along the white board from start to finish. Annotate the sticky with blockers, etc. if required”. Ibrahim has worked at two other major Internet companies prior to eBay, and says “Wow. Sticky notes people. It’s the most ‘obvious’ answer to all this. Sticky notes, whiteboard, and a camera (to capture) is all you need”.

Mitch, who’s probably been around as long as me, has the same experience and agrees, except when a team is remote. In that case, it’s time to switch to tools: “hypermail, jira (or similar, integrated system)”. Farah is of a similar mind and agrees “… sticky notes with daily huddles work very well. But being remote I have a strong preference for documenting everything. Taking pictures of the whiteboard after every huddle and putting the images on a wiki might work.”

Use a Tool

Sri’s one of our Vice Presidents, and has a long and distinguished career at eBay from individual contributor all the way up. He has “seen most open source initiatives use Jira. The rich portfolio of plugins make it a great tool for both the teams and for the project owner. Nice thing about Jira is that the companion wiki (confluence, the one we run) can surface it neatly inline as well through simple macros.”

There were many votes for Github Issues, including from Utkarsh who says that you should use it if you “are based off github. Simple and just works.”

Another popular emerging tool is Trello, which quite a few folks recommended. Other tools mentioned were Pivotal Tracker, newcomer Asana, and Redmine.

Use a Spreadsheet

Jon D, who’s a veteran too, says that he’d stick with a spreadsheet. “For such a small team, especially if co-located, simple is good. A spreadsheet (Excel) or MS Project (using only basic features) are sufficient”. He was supported by several folks, who pointed out that there are good spreadsheets you can download and use, and that Google Docs is a nice way to collaborate across the team.

Other ideas

A few other ideas were mentioned, including using email (Google’s Gmail has excellent search capabilities), groups on Yammer, a Wiki, and (urrggghhh!) SharePoint.

Hope this helps you Bobby, and a few others out there! Looking forward to starting a conversation on this one…

The race to build better search: a Reuters article

I spent a couple of hours with Alistair Barr from Reuters discussing search at eBay, and our Project Cassini rewrite of eBay’s search platform. Alistair published the story yesterday, and it’s a good, short read. Thanks, Alistair, for sharing the story with your readers.

Alistair discusses Walmart’s search rewrite (which didn’t take too long by the sounds of it — my recent blog post suggests why), quotes responses from Google’s PR team, and shares insights from my good friend Oren Etzioni who works at both the University of Washington and the rather awesome shopping decision engine, decide.com. He mentions Google’s features that obviously do a little light image processing to match color and shape intents in queries such as “red dress” or “v-neck dress” against the content in the images.

Google does do lots of things well, but they’re often not the first to do them — we built that color search into Bing’s image search in 2008 (try this “red dress” query). On a related note, eBay has a rather cool image search feature, which we really should make more prominent in our search experience (mental note: must work on that). Try this “red dress” query, and you’ll see results that use visual image features to find related items.

I’ll be back with part #3 of my Ranking at eBay series soon.

Ranking at eBay (Part #2)

In part 1 of Ranking at eBay, I explained what makes the eBay search problem different to other online search problems. I also explained why there’s a certain kinship with Twitter, the only other engine that deals with the same kinds of challenges that eBay does. To sum it up, eBay’s search problem is different because our items aren’t around for very long, the information about the items changes very quickly, and we have over 300 million items and the majority are not products like you’d find on major commerce web sites like Walmart or Amazon.

In this post, I explain how we think about using data in the eBay ranking problem. In the next post, I’ll explain how we combine all of that data to compute our Best Match function, and how it’s all coming together in a world where we are rebuilding search at eBay.

Ranking Factors at eBay

Let’s imagine that you and I work together and run the search science team at eBay. Part of our role is to help make sure that the items and products that are returned when a customer runs a query are ordered correctly. Correctly means that the most relevant item to the customer’s information need is in the first position in our search results, the next most relevant is in the second position, and so on.

What does relevant mean? In eBay’s case, you could abstract it to say that the item is great value from a trusted seller, it matches the intent of the query, and it’s something that buyers want to buy. For example, if the customer queries for a polaroid camera, our best result might be a great, used, vintage Polaroid camera in excellent condition. Of course, it’s subjective: you could argue it should be a new-generation Polaroid camera, or make some other plausible argument. In a general sense, relevance is approximated by computing some measure of statistical similarity — obviously, search engines can’t read a user’s mind, so they compute information to score how similar an item is to a query, and add any other information that’s query-independent and can help. (In a future post, I’ll come back and explain how we understand whether we’ve got it right, and work to understand what the underlying intent is behind a query.)

Let’s agree for now that we want to order results from most- to least-relevant to a query, when the user is using our default Best Match sorting feature. So, how do we do that? The key is having information about what we’re ranking: and I’ll argue that the more different information we have, the better job we can do. Let’s start simply: suppose we only have one data source, the title of the item. I’ve shown an item below, and you can see its title at the top, “NICE Older POLAROID 600 Land Camera SUN AUTO FOCUS 660”.

A Polaroid Camera on eBay. Notice the title of the item, "NICE Older POLAROID 600 Land Camera SUN AUTO FOCUS 660"

Let’s think about the factors we can use from the item title to help us order results in a likely relevant way (a toy scoring sketch follows this list):

  • Does the title contain the query words? The rationale for proposing this factor is pretty simple: if the words are in the title, the item is more relevant than an item that doesn’t contain the words.
  • How frequently are the query words repeated in the title? The rationale is: the more the words are repeated, the more likely that item is to be on the topic of the query, and so the more relevant the item.
  • How rare is each query word that matches in the title? The rationale is that words that are rarer across all of the items at eBay are better discriminators between relevant and irrelevant items; in this example, we’d argue that items containing the rarer word polaroid are probably more likely to be relevant than items containing the less rare word camera.
  • How near are the query words to the beginning of the title? The argument is that items with query words near the beginning of the title are likely more relevant than those containing the query words later in the title, with the rationale that the key topic of the item is likely mentioned first or early in the title. Consider two examples to illustrate:  Polaroid land camera 420 1970s issued still in nice shape retro funk, and PX 100 Silver Shade Impossible Project Film for Polaroid SX-70 Camera. (The former example is a camera, the latter example is film for a camera.)
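To make these intuitions concrete, here’s a toy scoring sketch in Python. To be clear, this isn’t eBay’s Best Match: the way the factors are combined is arbitrary, and the document frequencies and weights are invented for illustration.

```python
import math

def score_title(query, title, doc_freq, num_items):
    """Toy relevance score for an item title, combining the four factors
    above. doc_freq maps a word to how many titles contain it; num_items
    is the total item count. Both are invented inputs for this sketch."""
    q_words = query.lower().split()
    t_words = title.lower().split()
    score = 0.0
    for w in q_words:
        tf = t_words.count(w)          # factor 2: repetition in the title
        if tf == 0:                    # factor 1: the word must appear at all
            continue
        # Factor 3: rarer words discriminate better (inverse document frequency).
        idf = math.log(num_items / (1 + doc_freq.get(w, 0)))
        # Factor 4: words near the start of the title count for more.
        position_boost = 1.0 / (1 + t_words.index(w))
        score += tf * idf * (1 + position_boost)
    return score

doc_freq = {"polaroid": 12_000, "camera": 450_000}   # invented counts
titles = [
    "NICE Older POLAROID 600 Land Camera SUN AUTO FOCUS 660",
    "PX 100 Silver Shade Impossible Project Film for Polaroid SX-70 Camera",
]
for t in titles:
    print(round(score_title("polaroid camera", t, doc_freq, 300_000_000), 2))
```

Run it and the camera outscores the film, mostly because its query words sit earlier in the title.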

Before I move on, let me just say that these are example factors. I am not sharing that we do or don’t use these factors in ranking at eBay. What I’m illustrating is that you and I can successfully, rationally think about factors we might try in Best Match that might help separate relevant items from irrelevant items. And, overall, when we combine these factors in some way, we should be able to produce a complete ordering of eBay’s results from most- to least-relevant to the query.

So far, I’ve given you narrow examples about text factors from the title. There are many other text factors we could use: factors from the longer item description, category information, text that’s automatically painted onto the item by our algorithms at listing time, and more. If we worked through these methodically, we could together write down factors that we thought might intuitively help us rank items better. At the end of that process, I’m guessing we’d have written down tens of factors for the text alone we have at eBay.

You can see my argument coming together: if you used just one or two of these factors, you might do a good, basic job of ranking items. But if you use more information, you’ll do better. You’ll be able to more effectively discern differences between items, and you’ll do a better job of ranking the items. Net, the more (new, different, and useful) information you have, the better.

What’s key here is that we need different factors, and we need factors that actually do the right thing. There are some simple ways we can test the intuition about a factor before we use it. For example, we could ask a simple question: do users buy items that have this factor more often than items that don’t? In practice, there are much more sophisticated things we can do to validate a factor before we decide to actually build it into search (and I’ll leave that discussion to another time).
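As a sketch of that simple question, assume we’ve logged impressions and purchases per item, and we have a predicate that says whether an item exhibits the candidate factor; all of these inputs are assumptions for illustration, not anything we run at eBay.

```python
def purchase_rate(items):
    """Purchases per impression over a set of items (0.0 if no impressions)."""
    impressions = sum(i["impressions"] for i in items)
    purchases = sum(i["purchases"] for i in items)
    return purchases / impressions if impressions else 0.0

def compare_factor(items, has_factor):
    """Split items by whether they exhibit the candidate factor, and
    compare purchase rates. has_factor is any item -> bool predicate."""
    rate_with = purchase_rate([i for i in items if has_factor(i)])
    rate_without = purchase_rate([i for i in items if not has_factor(i)])
    return rate_with, rate_without

# Hypothetical factor: does the title contain all of the query's words?
# items = load_impression_log(...)   # placeholder for real logged data
# print(compare_factor(items, lambda i: i["title_contains_query"]))
```

If the “with” rate isn’t meaningfully higher, the intuition behind the factor is probably wrong.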

The Factor Buckets

I believe in a five bucket framework of factors to build our eBay Best Match ranking function:

  1. Text factors (discussed above)
  2. Image factors
  3. Seller factors
  4. Buyer factors
  5. Behavioral factors

Pictures or images are an important part of the items and products at eBay. Images are therefore an interesting possible source of ranking factors. For example, we know that users prefer pictures where the background is a single color, that is, where the object of interest is easily distinguished from the background.
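One crude way you might turn that observation into a factor is to measure how uniform the pixels around the photo’s border are, since a clean single-color background gives a low-variance frame. Here’s a sketch using Pillow and NumPy; it’s not how eBay computes image factors, and the threshold in the comment is invented.

```python
import numpy as np
from PIL import Image

def background_uniformity(path, border=10):
    """Mean per-channel standard deviation of a border-pixel-wide frame
    around the image. Low values suggest the object of interest sits on
    a clean, single-color background."""
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64)
    frame = np.concatenate([
        img[:border].reshape(-1, 3),      # top strip
        img[-border:].reshape(-1, 3),     # bottom strip
        img[:, :border].reshape(-1, 3),   # left strip
        img[:, -border:].reshape(-1, 3),  # right strip
    ])
    return frame.std(axis=0).mean()

# Invented threshold: call anything under 20 a "clean background" photo.
# print(background_uniformity("item_photo.jpg") < 20.0)
```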

The seller is an important part of the buyer’s decision to purchase. You can likely think of many factors that we could include in search: How long have they been selling? How’s their feedback? Do they ship on time? Are they a trusted seller?

Buyer factors are an interesting bucket. If you think about the buyer, there are many potential factors you might want to explore. Do they always buy fixed price items? What are the categories they buy in? What’s the shoe size they keep on asking for in their queries? Do they buy internationally?

Behavioral factors are also an exciting bucket. Here’s a few examples we could work on: does this item get clicks from buyers for this query? What’s the watch count on the item? How many bids does the auction have? How many sales have there been of this fixed price item, given the number of times it’s been shown to users? If you want to dig deeper into this bucket, Mike Mathieson wrote a super blog post on part of our behavioral factor journey.
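Here’s a sketch of the first example, a click-through-rate factor. The catch with raw CTR is that it’s noisy for items with few impressions, so a common trick (and the one sketched here) is to smooth the rate toward a prior; the prior and its weight below are invented numbers, not eBay’s.

```python
def smoothed_ctr(clicks, impressions, prior_ctr=0.05, prior_weight=20):
    """Click-through rate smoothed toward a prior, so an item with one
    lucky click in two impressions doesn't look like a 50% CTR item.
    prior_ctr and prior_weight are invented for this sketch."""
    return (clicks + prior_ctr * prior_weight) / (impressions + prior_weight)

print(smoothed_ctr(1, 2))        # ~0.09, not an implausible 0.50
print(smoothed_ctr(500, 1000))   # ~0.49, plentiful data dominates the prior
```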

Where we are on the factors journey

We formed our search science team in late 2009, when Mike Mathieson joined our team. We’ve built the team from Mike to tens of folks in the past couple of years, and we’re on a journey to make search awesome at eBay. Indeed, if you want to join the team — and you have an awesome engineering or applied science background — you can always reach out to me.

Right now, we use several text factors in Best Match, we have released a few seller factors and behavioral factors, and we have begun working on image and buyer factors. All up, we have tens of factors in our Best Match ranking function. You might ask: all of these factors seem like they’d be useful, so why haven’t you done more? There’s a few good reasons:

  1. Our current search engine doesn’t make it easy to flexibly combine factors in ranking. (That’s one good reason why we’re rewriting search at eBay.)
  2. It takes engineering time to develop a factor, and make it available at query time for the search ranking process. In many cases, factors are extremely complex engineering projects — for example, imagine how hard it is to process images and extract factors when there are 10 million new items per day (and most items have more than 1 image), and you’re working hard to get additions to the index complete within 90 seconds. Or imagine how challenging it is to have real-time behavioral factors available in a multi-thousand computer search grid within a few seconds. (If you’ve read Part #1 of this series, you’ll appreciate just how real-time search is at eBay.)
  3. Experimentation takes time. Intuition is the easy part; building the factor, combining it with other factors, testing the new ranking function with users, and iterating and improving all take time. I’ll talk more about experimentation and testing in my next post.

In the third and final post in this series, I’ll explain more about how we combine factors and give you some insights into where we are on the search journey at eBay. Thanks for reading: please share this post with your friends and colleagues using the buttons below.

Ranking at eBay (Part #1)

Search ranking is the science of ordering search results from most- to least-relevant in response to user queries. In the case of eBay, the dominant user need is to find a great deal on something they want to purchase. And eBay search’s goal is to do a great job of finding relevant results in response to those customer needs.

eBay is amazingly dynamic. Around 10% of the 300+ million items for sale end each day (sell or end unsold), and a new 10% is listed. A large fraction of items have updates: they get bids, prices change, sellers revise descriptions, buyers watch, buyers offer, buyers ask questions, and so on. We process tens of millions of change events on items in a typical day, that is, our search engine receives that many signals that something important has changed about an item that should be used in the search ranking process. And all that is happening while we process around 250 million queries on a typical day.

In this post, I explain what makes eBay’s search ranking problem unique and complex. I’m aiming here to give you a sense of why we’ve built a custom search engine, and the types of technical search ranking challenges we’re dealing with as we rebuild search at eBay. Next week, I’ll continue this post and offer a few insights into how we’re working on the problem.

What’s different about eBay

Here are a few significantly different facets of eBay’s search problem space:

  1. Under typical load, it takes around 90 seconds from an item being listed by an eBay seller to when it can be found using the search engine. The same is true for any change that affects eBay’s search ranking — for example, if the number of sales of a fixed price multi-quantity item changes, it’s about 90 seconds until that count is updated in our index and can be used in search ranking. Even to an insider, that’s pretty impressive: there’s probably no other search engine that handles inserts, updates, and deletes at the scale and speed that eBay does. (I’ll explain real time index update in detail in a future post, but here’s a paper on the topic if you’d like to know more now.)
  2. In web search, there are many stable signals. Most documents persist and they don’t change very much. The link graph between documents on the web is reasonably stable; for example, my home page will always link to my blog, and my blog posts have links embedded in them that persist and lead to places on the web. All of this means that a web search engine can compute information about documents and their relationships, and use that as a strong signal in ranking. The same isn’t true of auction items at eBay (which are live for between 1 and 7 days), and it’s less true of fixed price items (many of which are live for only 30 days) — the link graph isn’t very valuable and static pages aren’t common at eBay
  3. eBay is an ecosystem, and not a search-and-leave search engine. The most important problem that web search engines solve is getting you somewhere else on the web — you run a query, you click on a link and you’re gone. eBay’s different: you run a query, you click on a link, and you’re typically still at eBay and interacting with a product, item, or hub page on eBay. This means that at eBay we know much more than at a web search engine: we know what our users are doing before and after they search, and have a much richer data set to draw from to build search ranking algorithms.
  4. Web search is largely unstructured. It’s mostly about searching blobs of text that form documents, and finding the highest precision matches. eBay certainly has plenty of text in its items and products, but there’s much more structure in the associated information. For example, items are listed in categories, and categories have a hierarchy. We also “paint” information on items as they’re listed in the form of attribute:value pairs; for example, if you list a men’s shirt, we might paint on the item that it is color:green, size:small, and brand:american apparel. We also often know the product that an item is: this is more often the case for listings that are books, DVDs, popular electronics, and motors. Net, eBay search isn’t just about matching text to blobs of text, it’s about matching text or preferences to structured information (there’s a toy sketch of this kind of matching after this list)
  5. Anyone can author a web document, or create a web site. And it’ll happily be crawled by a search engine, perhaps indexed (depends on what they decide to put in their index), and perhaps available to be found. At eBay, sellers create listings (and sometimes products), and everything is always searchable (usually in 90 seconds under typical conditions). And we know much more about our sellers than a web search engine knows about its page authors
  6. We also know a lot about our buyers. A good fraction of the customers that search at eBay are logged in, or have cookies in their browser that identify them. Companies like Google and Microsoft also customize their search for their users when they are logged in (arguably, they do a pretty bad job of it — perhaps a post for another time too). The difference between web search and eBay is that we have information about our buyers’ purchase history, preferred categories, preferred buying formats, preferred sellers, what they’re watching, bidding on, and much more
  7. Almost every item and product has an image, and images play a key role in making purchase decisions (particularly for non-commodity products). We present images in our search results
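To illustrate the structured side of point 4, here’s a toy sketch of matching a query against attribute:value pairs painted on an item. The vocabulary, the parsing, and the item shape are all invented for the example; real query understanding is far more involved.

```python
# Invented mapping from query words to structured constraints.
KNOWN_VALUES = {
    "green": ("color", "green"),
    "small": ("size", "small"),
}

def parse_query(query):
    """Split a query into structured constraints plus leftover free text."""
    constraints, free_text = {}, []
    for token in query.lower().split():
        if token in KNOWN_VALUES:
            attr, value = KNOWN_VALUES[token]
            constraints[attr] = value
        else:
            free_text.append(token)
    return constraints, free_text

def matches(item, query):
    """True if every structured constraint agrees with the attributes
    painted on the item, and the remaining free text appears in the title."""
    constraints, free_text = parse_query(query)
    if any(item["attributes"].get(a) != v for a, v in constraints.items()):
        return False
    title = item["title"].lower()
    return all(word in title for word in free_text)

item = {
    "title": "Men's casual shirt, lightly worn",
    "attributes": {"color": "green", "size": "small", "brand": "american apparel"},
}
print(matches(item, "green small shirt"))  # True: structure plus text agree
```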

There are more differences and challenges than these, but my goal here is to give you a taste, not an exhaustive list.

Who has similar problems?

Twitter is probably the closest analog technically to eBay:

  • They make use of changing signals in their ranking and so have to update their search indexes in near real-time too. But it’s not possible to edit a tweet and they don’t yet use clicks in ranking, so that means there’s probably much less updating going on than at eBay
  • Twitter explains that tweet rates jump from around 2,000 per second to 6,000 or 8,000 per second when there is a major event. eBay tends to have signals that change very quickly for a single item as it gets very close to ending (perhaps that’s similar to retweet characteristics). In both cases, signals about individual items are important in ranking those items, and those signals change quickly (whether they’re tweets or eBay items)
  • Twitter is largely an ecosystem like eBay (though many tweets contain links to external web sites)
  • Twitter makes everything searchable like eBay, though they typically truncate the result list and return only the top matches (with a link to see all matches). eBay shows you all the matches by default (you can argue whether or not we should)
  • Twitter doesn’t really have structured data in the sense that eBay does
  • Twitter isn’t as media rich as eBay
  • Twitter probably knows much less about their users’ buying and selling behaviors

(Thanks to Twitter engineering manager Krishna Gade for the links.)

Large commerce search engines (Amazon, Bestbuy, Walmart, and so on) bear similarity too: they are ecosystems, they have structure, they know about their buyers, they have imagery, and they probably search everything. The significant differences are they mostly sell products, and very few unique items, and they have vastly fewer sellers. They are also typically dominated by multi-quantity items (for example, a thousand copies of a book). The implication is there is likely vastly less data to search, relatively almost no index update issues, relatively much less inventory that ends, relatively much less diversity, and likely far fewer changing signals about the things they sell. That makes the search technical challenge vastly different; on the surface it seems simpler than eBay, though there are likely challenges I don’t fully appreciate.

Next week, I’ll continue this post by explaining how we think about ranking at eBay, and explain the framework we use for innovation in search.

Clicks in search

Have you heard of the Pareto principle? It’s the idea that 80% of sales come from 20% of customers, or that the richest 20% of people control 80% of the world’s wealth.

How about George K. Zipf? The author of “Human Behavior and the Principle of Least Effort” and “The Psycho-Biology of Language” is best known for “Zipf’s Law”, the observation that the frequency of a word is inversely proportional to its frequency rank. Oversimplifying a little, the word “the” is about twice as frequent as the word “of”, and then comes “and”, and so on. The same pattern applies to the populations of cities, corporation sizes, and many other natural phenomena.
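In symbols, Zipf’s Law says the frequency of the word of rank r is roughly C/r for some constant C. Here’s a tiny sketch that checks a text against that shape; the corpus file in the usage comment is a placeholder.

```python
from collections import Counter

def zipf_check(text, top=10):
    """Rank words by frequency and print each observed count next to the
    Zipf prediction C / rank, where C is the top word's count."""
    counts = Counter(text.lower().split()).most_common(top)
    c = counts[0][1]
    for rank, (word, freq) in enumerate(counts, start=1):
        print(f"{rank:2d}  {word:12s} observed={freq:8d}  zipf~{c / rank:10.1f}")

# zipf_check(open("corpus.txt").read())   # placeholder corpus file
```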

I’ve spent time understanding, and publishing work on, how Zipf’s Law applies in search engines. And the punchline in search is that the Pareto principle and Zipf’s Law are hard at work: the first item in a list gets about twice as many clicks as the second, and so on. There are inverse power law distributions everywhere.

The eBay Search Results Click Curve

Here’s the eBay search results click curve, averaged over a very large number of queries. The y-axis is the total number of clicks on each result position, and the x-axis is the result position. For example, you can see that the first result in search (the top result, the first item you see when you run a query) gets about eight times as many clicks on average as the fifteenth result. The x-axis is labelled from 1 to 200, which is typically four pages of eBay results since we show 50 results per page by default.

eBay Click Curve. The y-axis is number of clicks per result, and the x-axis is the result position.

As a search guy, I’m not surprised by this curve (more on that topic later). It’s a typical inverse power law distribution (a “Zipf’s Law” distribution). But there are a couple of interesting quirks.

Take a look at the little bump around result position 50 on the x-axis. Why’s that there? What’s happening is that after scrolling for a while through the results, many users scroll to the very bottom of the page. They then inspect the final few results on the page (results 46 to 50), just above the pagination control. Those final few results therefore get a few more clicks than the ones above that the user skipped. Again, this isn’t a surprise to me — you’ll often see little spikes after user scroll points (in web search, you’ll typically see a spike at result 6 or 7 on a 10-result page).

I’ve blown up the first ten positions a little more so that you can see the inverse power law distribution.

Search click curve for the first 10 results in eBay search.

You can see that result 1 gets about 4 times as many clicks as result 10. You can also see that result 2 gets about 5/9ths as many clicks as result 1. This is pretty typical — it’s what you’d expect to see when search is working properly.

Interestingly, even if you randomize the first few results, you’ll still see a click curve that has an inverse power law distribution. Result 1 will almost always get more clicks than result 2, regardless of whether it’s less relevant.
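If you want to test whether a click curve really has this shape, you can fit clicks ≈ C / position^s with a least-squares line in log-log space. This is a sketch, and the click counts below are invented to look roughly like the curves above.

```python
import math

def fit_power_law(clicks):
    """Least-squares fit of log(clicks) = log(C) - s*log(position), for
    clicks indexed by result position. Returns (C, s); s near 1 is the
    classic Zipf-like click curve."""
    xs = [math.log(pos) for pos in range(1, len(clicks) + 1)]
    ys = [math.log(c) for c in clicks]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return math.exp(mean_y - slope * mean_x), -slope

clicks = [1000, 520, 340, 260, 205, 170, 150, 130, 118, 105]  # invented
c, s = fit_power_law(clicks)
print(f"C ~ {c:.0f}, exponent s ~ {s:.2f}")  # expect s in the vicinity of 1
```

An exponent near zero, or a curve that barely decays, is a sign something’s off (more on that below).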

Click Curves Are Everywhere

Here are some other examples of inverse power law distributions that you’ll typically see in search:

  • The query curve. The most popular query is much more popular than the second most popular query, and so on. The top 20% of queries account for at least 80% of the searches. That’s why caching works in search: most search engines serve more than 70% of their results from a cache (a minimal cache sketch follows this list)
  • The document access curve. Because the query distribution is skewed, and so are the clicks per result position, it’s probably not surprising that a few documents (or items or objects) are accessed much more frequently than others. As a rule of thumb, you’ll typically find that 80% of the document accesses go to 20% of the documents. Pareto at work.
  • Clicks on related searches. Most search engines show related searches, and there’s a click curve on those that’s an inverse power law distribution
  • Clicks on just about any list: left navigation, pagination controls, ads, and any other list will typically have an inverse power law distribution. That’s why there’s often such a huge price differential between what advertisers will pay in search for the top position versus the second position
  • Words in queries, documents, and ads. Just like Zipf illustrated all those years ago, word frequencies follow an inverse power law distribution. Interestingly, and I explain this in this paper, Zipf’s formal distribution doesn’t hold very well on words drawn from web documents (a thing called Heaps’ law does a better job). But the point remains: a few words account for much of the occurrences
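Since the query distribution is so skewed (the first bullet above), even a modest result cache absorbs most of the traffic. Here’s a minimal LRU sketch; the capacity and the search_backend callable are placeholders, not a real system’s API.

```python
from collections import OrderedDict

class QueryResultCache:
    """Minimal LRU cache keyed on a normalized query string."""
    def __init__(self, capacity=10_000):          # arbitrary capacity
        self.capacity = capacity
        self.cache = OrderedDict()

    def get(self, query, search_backend):
        key = " ".join(query.lower().split())     # crude normalization
        if key in self.cache:
            self.cache.move_to_end(key)           # mark as recently used
            return self.cache[key]
        results = search_backend(key)             # placeholder callable
        self.cache[key] = results
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)        # evict least recently used
        return results
```

With a Zipf-skewed query stream, a cache like this can plausibly reach the 70%-plus hit rates mentioned above.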

What does this all mean? To a search guy, it means that when you see a curve that isn’t an inverse power law distribution, you should worry. There’s probably something wrong — an issue with search relevance, a user experience quirk (like the little bump I explained above), or something else. Expect to see curves that decay rapidly, and worry if you don’t.

See you again next Monday for a new post. If you’re enjoying the posts, please share with your friends by clicking on the little buttons below. Thanks!