Nic Newman
The story of how the BBC got it wrong and then put it right.
History: two years ago the BBC Weather site was looking pretty dated, and the team had grand ambitions to update it.
[video]
The product took over a year to get out, and the BBC recorded thousands of complaints. Within 2-3 months it was relaunched, solving most of the problems. The site was retested in beta with the people who had complained the first time around.
For the original version they ran lots of focus groups and used these to decide what to do, e.g. people said they wanted everything on one page, so that's what was built (without checking that it was right). The second mistake was that the beta test was run on weather enthusiasts, who wanted all the detail, whereas the general public wanted simplicity.
For version 2 they went through people's complaints and coded them into a top-ten list of things people didn't like, then went back out to focus groups. They found people who were really angry and worked with them to improve the site. Pressure from above (the Director-General et al.) meant quick wins were important. (A rough sketch of the coding step is below.)
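A minimal sketch of the complaint-coding exercise as described; the file name, column name and example categories are assumptions for illustration, not BBC specifics. The idea is simply: tag each complaint with a category, then tally the categories to get the top-ten list.

    from collections import Counter
    import csv

    def top_ten_complaints(path):
        """Tally coded complaints and return the ten most common categories.

        Assumes a CSV with a 'category' column holding the code assigned to
        each complaint (e.g. 'too cluttered', 'missing detail') - hypothetical.
        """
        with open(path, newline="") as f:
            categories = [row["category"] for row in csv.DictReader(f)]
        return Counter(categories).most_common(10)

    # Example usage:
    # for category, count in top_ten_complaints("coded_complaints.csv"):
    #     print(f"{count:5d}  {category}")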
Beta test 2 was run with 1,300 people, with a survey on their views compared with the old weather site. Having some quantitative data made a big difference when talking to the DG about whether the new site was going to work.
Key learnings:
Q. What kind of quantitative measures?
A. Surveys and click-tracking, e.g. of the 1,300 testers, how many get to the forecast page and how many personalize the homepage.
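A minimal sketch of the kind of click-tracking numbers mentioned, assuming a simple event log of (user, action) pairs; the action names and log shape are made up for illustration, not the BBC's actual tracking schema.

    def beta_funnel(events, total_testers=1300):
        """Given (user_id, action) click events, report what share of testers
        reached the forecast page and personalized the homepage.
        Action names are hypothetical."""
        reached_forecast, personalized = set(), set()
        for user_id, action in events:
            if action == "view_forecast":
                reached_forecast.add(user_id)
            elif action == "personalize_homepage":
                personalized.add(user_id)
        return {
            "reached_forecast_pct": 100 * len(reached_forecast) / total_testers,
            "personalized_pct": 100 * len(personalized) / total_testers,
        }

    # Example usage:
    # events = [("u1", "view_forecast"), ("u2", "personalize_homepage")]
    # print(beta_funnel(events, total_testers=2))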
Q. Is it important to build the tracking into the homepage?
A. Yes, it's important not only in the final site but also in the beta sites etc.
Q. Is there a danger that some of the people who complain the loudest are not representative?
A. Absolutely. The danger is that others in the company look at emails, comments and blogs and assume they're representative. It's good to have data to show that they're not.