When a user clicks on a web page, it’s a good sign. They’re engaged with your content, and they’re using your site. Clicking is certainly better than abandonment, where a user doesn’t interact with the page at all.
I recommend that you track percent abandonment of each of your pages: it’s a simple measure of how you’re doing in engaging customers. The lower the abandonment, the better.
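Here’s a minimal sketch of that measure. The event shape below (a list of page views with an `interacted` flag) is an assumption for illustration, not a prescribed format:

```python
def abandonment_rate(views):
    """Percent of page views where the user didn't interact at all.

    views: list of dicts, each with a boolean 'interacted' flag
    (an assumed event format for this sketch).
    """
    if not views:
        return 0.0
    abandoned = sum(1 for v in views if not v["interacted"])
    return 100.0 * abandoned / len(views)

views = [
    {"page": "/home", "interacted": True},
    {"page": "/home", "interacted": False},
    {"page": "/home", "interacted": False},
    {"page": "/home", "interacted": True},
]
print(abandonment_rate(views))  # 50.0 — lower is better
```

In practice you’d compute this per page from your logging pipeline, but the arithmetic is just this simple.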
There are a few exceptions, where abandonment doesn’t necessarily imply a problem. For example, a page that tells the time, or otherwise answers a question, doesn’t require a click. Indeed, a click might be bad news – the answer wasn’t a good one.
The more clicks, the more interesting the link to users?
Not necessarily. Suppose you’ve got a page with many links in many places. All things being equal, expect to see many more clicks on elements that are higher on the page. For elements at the same vertical height, expect to see more clicks on elements that are on the left of the page (the opposite is true for languages that are right-to-left such as Hebrew). All up, if a link is in the top left of the page, it’ll get clicked much more. In general, expect an extreme value distribution of the clicks.
Since users scan left to right, top to bottom, you should organize a page so that the most important things for users are closer to the top left, and the less important things are closer to the bottom right. This is pretty intuitive – you expect a web site to have the most common, useful links in its header, and the less useful more obscure stuff in the footer.
Hmmm. So, which links on my page do users like?
If you want to understand the relative performance of links on a page, you could consider swapping them and comparing the number of clicks. For example, suppose you’ve got a header on a page with three elements “About”, “Jobs”, and “Help”. You could measure the number of clicks on these for a week. You could then swap the “Help” and “About” links, and measure for another week. Does “Help” get more clicks in week two than “About” did in week one? If yes, the second ordering is better; if no, stick with the original ordering.
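One way to make that week-over-week comparison a little more rigorous is a two-proportion z-test on the click-through rates. The counts below are invented for illustration, and the test itself is my addition, not something the comparison strictly requires:

```python
import math

def two_proportion_z(clicks_a, views_a, clicks_b, views_b):
    """z-statistic comparing two click-through rates.

    A positive z means the first rate is higher; |z| > ~1.96
    is the usual threshold for significance at the 5% level.
    """
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    return (p_a - p_b) / se

# Week two: "Help" in the first header slot (540 clicks / 10,000 views).
# Week one: "About" in that slot (480 clicks / 10,000 views).
z = two_proportion_z(540, 10_000, 480, 10_000)
print(round(z, 2))  # 1.93 — weakly suggests the week-two ordering is better
```

Raw click counts alone can mislead if traffic differs between the two weeks, which is one reason to normalize by views as above.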
You need to be careful what you compare. It’s pretty safe to compare “Help” and “About” text in a header between two experiments. But you’ll find that there’s text you can create that will get more clicks, regardless of whether it’s more useful. It’s a well-known industry fact that “top ten” lists on tabloid sites get way more clicks than other stories. Text such as “click here” gets more clicks. Images attract the eye to nearby text, so that it gets more clicks. If you move something that customers are used to, expect them to click it less. And so on.
Experiments at scale
In a large-scale web business (such as eBay), we generally don’t do experiments sequentially in time. Instead, we’ll show the first alternative to some fraction of users, and the second alternative to another fraction of users. We can then compare the two populations over the same time, which both speeds up experimentation cycles, and also reduces any effects of seasonality or other differences between experiments that aren’t carried out at the same time. (There are some issues this creates – a topic for another time.)
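A common way to split users between alternatives is to hash a stable user identifier, so the same user always sees the same variant. This is a sketch of the general technique, not how any particular company implements it; the experiment name and bucket boundaries are assumptions:

```python
import hashlib

def assign_variant(user_id, experiment="header-swap", treatment_pct=50):
    """Deterministically assign a user to variant 'A' or 'B'.

    Hashing (experiment, user_id) keeps assignment stable across
    visits and independent across experiments. treatment_pct is the
    percentage of users who see variant 'B'.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # a number in 0..99
    return "B" if bucket < treatment_pct else "A"

# The same user always lands in the same variant:
print(assign_variant("user-12345") == assign_variant("user-12345"))  # True
```

Because assignment is deterministic, you don’t need to store which variant each user saw – you can recompute it from the ID at analysis time.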
A Click is a Vote
If a user clicks on a link, this tells you something about that link. If it gets more clicks than you’re expecting, it’s usually good news (more on this topic later). If it gets fewer clicks than you’re expecting, it’s typically bad news.
What’s not widely known is that a click on a link tells you something about the links that come before it. Specifically, if a user skips a link on a page and clicks on the next link, this tells you that the former link isn’t relevant to the user. It doesn’t tell you that links below it are irrelevant to the user — users leave the page when they see the first thing that’s relevant. This is a well-known phenomenon in search engines.
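That “skip-above” reading of a click can be sketched as simple vote bookkeeping – a click is a positive vote for the clicked link, a negative vote for each link skipped above it, and no information at all about the links below. The +1/−1/0 scoring here is my illustration, not a standard scheme:

```python
def skip_above_votes(clicked_position, num_links):
    """Interpret a click at clicked_position (0-based) on a page
    with num_links links, using the skip-above heuristic."""
    votes = []
    for pos in range(num_links):
        if pos < clicked_position:
            votes.append(-1)  # seen and skipped: likely not relevant
        elif pos == clicked_position:
            votes.append(+1)  # clicked: likely relevant
        else:
            votes.append(0)   # below the click: no information either way
    return votes

# A click on the third of five links:
print(skip_above_votes(2, 5))  # [-1, -1, 1, 0, 0]
```

Summing these votes over many sessions gives a rough relative relevance signal for each link position.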
Good clicks and bad clicks
A click is a good, basic signal. As I said at the start, it’s generally better to get a click than to not get one.
But there are ways you can make a click an even more reliable signal of user happiness. One simple trick is to factor in how long the user dwelled on whatever it was that was clicked. If they click on a link, press the back button immediately, and return to the original page, it’s actually a sign of unhappiness. They didn’t find what they wanted. If you wanted a rule of thumb, I’d say any click that dwells less than 10 seconds is a bad sign. I’d say any click that dwells more than 30 seconds is a good sign. You could try counting clicks, bad clicks, and good clicks, and drawing your conclusions from there.
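Using the rule-of-thumb thresholds above (under 10 seconds bad, over 30 seconds good), the counting could look like this minimal sketch; treating the in-between range as “neutral” is my assumption:

```python
def classify_click(dwell_seconds):
    """Label a click by dwell time, per the rule of thumb:
    under 10s is a bad sign, over 30s is a good sign."""
    if dwell_seconds < 10:
        return "bad"
    if dwell_seconds > 30:
        return "good"
    return "neutral"  # 10-30s: assumed ambiguous in this sketch

def summarize(dwells):
    """Count good, neutral, and bad clicks in a list of dwell times."""
    counts = {"good": 0, "neutral": 0, "bad": 0}
    for d in dwells:
        counts[classify_click(d)] += 1
    return counts

print(summarize([3, 45, 12, 120, 6]))  # {'good': 2, 'neutral': 1, 'bad': 2}
```

Measuring dwell typically means timing the gap between the click and the user’s return (e.g. via the back button), which your instrumentation has to capture.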
Please share this post using the buttons below (hopefully they’ll pull your clicks!). See you next week!