Back in the 1990s I used to work for a chain of second-hand music shops, and one of the more time-consuming tasks we had to carry out was price reductions. It worked like this:
Apart from collectors’ items, vinyl records were initially priced based on condition, while CDs, tapes, VHS videos and later DVDs (yes, it was that long ago) were initially priced as new, regardless of what we thought we would be able to sell them for.
Every fortnight, everything in the shop would be marked down by 50p or £1. (Reductions day was once a week and we’d do half the stock one week, half the following week). When items reached the price that someone was willing to pay for them, they’d sell. Items which persistently failed to sell would keep getting reduced until they reached a low of 10p – and if they didn’t sell for 10p, they would be sold as part of a lucky dip bargain bundle.
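For the curious, the whole rule fits in a few lines of code. Here’s a minimal sketch in Python – purely illustrative, since nothing like this ever existed in the shop, and the choice of when to knock off 50p versus £1 is my own guess:

```python
# Illustrative sketch of the fortnightly reductions rule.
# Prices are in pence. The 50p/£1 steps and the 10p floor are from
# the article; the threshold for which step applies is a guess.

LUCKY_DIP = "lucky dip bargain bundle"

def reduce_price(price_pence: int) -> int | str:
    """Apply one fortnightly reduction to an unsold item."""
    if price_pence <= 10:
        # Didn't sell even at 10p: off to the lucky dip.
        return LUCKY_DIP
    step = 100 if price_pence > 500 else 50  # £1 off dearer items, 50p otherwise
    return max(price_pence - step, 10)
```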
There are some obvious disadvantages to this approach to pricing, of course:
- the majority of stock at any one time being seriously overpriced
- customers not understanding why multiple copies of an album were all priced differently
- new releases losing their value faster than the reductions cycle could keep up with, ending up selling for 50p months later when we could have sold them for £5 at the time
The big advantage, and the reason we had that system, was that (again, apart from collectors’ items) it wasn’t possible to underprice anything, so we would never miss out on profit.
There were many management discussions about how reductions could be automated, but it never came to pass. The technology wasn’t up to delivering a solution that would be easy for customers (and staff) to understand, but it wasn’t just that – I came to the conclusion that this was a task that not only couldn’t be automated, but shouldn’t be automated – and here’s why.
Each shop was staffed with a mixture of expert buyers and part-time staff. To be an expert buyer, you needed an encyclopaedic knowledge of music, but that wasn’t enough – you also needed a strong commercial sense of how much the titles that were brought in for sale could be sold on for.
- Pay too much and the profit margin would be cut to ribbons.
- Pay too little and that seller wouldn’t come back – and would tell their friends not to come back either. This was much more important than you might think, as a lot of the incoming stock came from collectors, DJs, music journalists and other people in the business, most of whom knew each other.
So how did the buyers know what things would sell for?
1. Seeing customers buy the item at a certain price
2. Knowing how many copies of that item were out in the display areas unsold, and at what price
3. Taking into account how much similar items (e.g. earlier albums by the same artist) had sold, or failed to sell, for
4. Secret knowledge
1 is easy – just spend time on the till. Not only is it easy, it could be made even easier with a computerised stock system – just look up what the last one sold for.
2 is harder, and the reason why I wanted the buyers to do reductions themselves (and not palm them off on the part-time staff) was so they would understand from experience not just what was selling, but what was failing to sell. Perhaps we could have got around it with some sort of fully computerised stock system, but given the level of technology 30 years ago, and that most stock items didn’t yet have barcodes, it would have cost more to enter items into the database than they were worth.
3 is a bit of a killer because it requires a value judgement. A stock system could tell you what the artist’s previous album typically sold for, but it couldn’t tell you whether the new album is any good. If it’s good, a lot of reviewers keep their copies, so there’s low supply and high demand. If it’s a turkey, the shop gets flooded with copies from reviewers and disappointed customers, and no-one wants them.
4 is the absolute killer though, because knowing what’s happened in the past is not enough. What computerised stock system could tell you to dig out every copy of an album that’s out in the racks and reprice it at maximum price because the bass line’s been sampled and used in a high profile hip hop track? Or (ghoulish I know) to do the same when the artist has died?
3 and 4 require the buyer to put together bits of knowledge from disparate sources and come up with a solution that’s more than the sum of its parts. Even 30 years later, people are still far, far better than computers at extrapolating from incomplete patterns.
People are also far better than computers at spotting stupid wrong answers.
A recent and particularly egregious example of computers doing a job that should be done by people, and getting it horribly wrong, was the A-level grades fiasco in 2020.
Exams had to be cancelled because of the pandemic, so some bright spark came up with an algorithm to prevent grade inflation. “Bright spark”, because it should immediately have been obvious that there would have to be some degree of inflation. Why? Because every year, some students who were predicted high grades mess up on the day, and it’s difficult or impossible to predict who will and who won’t.
To award grades without students actually taking exams, but not give out extra A* grades to allow for students who might have fluffed it on the day but now don’t have the opportunity to do so, you have to introduce some mechanism for allocating “fluffed it on the day” to a number of students equal to the number you’d expect to do just that in an actual exam year. So clearly, obviously, bound to be unfair.
It was even worse than that, though. Apart from in subjects where there were very few entrants, the algorithm awarded a spread of results to each school based on previous years’ results at that school. So if someone in the last few years had been classified U, some hapless 2020 A-level student in that school would be allocated a U grade, regardless of their centre-assessed grade and regardless of what they achieved in their mocks.
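To see why that happens, it helps to sketch the mechanism. This is a deliberate simplification of Ofqual’s actual model, with invented names, but the core move was roughly this: rank the school’s students, then deal grades out so the school’s historical distribution is reproduced – which means any U in the history must land on somebody:

```python
# Simplified sketch of distribution-based grade allocation - NOT
# Ofqual's actual model, just the shape of the idea. Grades are dealt
# out so the school's historical grade distribution is reproduced,
# regardless of any individual student's centre-assessed grade.

def allocate_grades(ranked_students: list[str],
                    historical_shares: dict[str, float]) -> dict[str, str]:
    """ranked_students: best to worst, as ranked by the school.
    historical_shares: fraction of past students per grade, ordered
    best to worst, e.g. {"A*": 0.10, "A": 0.25, ..., "U": 0.05}.
    """
    n = len(ranked_students)
    grades: dict[str, str] = {}
    i = 0
    for grade, share in historical_shares.items():
        count = round(share * n)
        for student in ranked_students[i:i + count]:
            grades[student] = grade
        i += count
    # Rounding leftovers fall through to the worst grade in the table.
    worst = list(historical_shares)[-1]
    for student in ranked_students[i:]:
        grades[student] = worst
    return grades
```

If `historical_shares` contains a non-zero share for U, someone at the bottom of the ranking gets a U – however good their mocks were.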

When a teacher is complaining on Twitter that one of their students has been given an A* in Further Maths (a subject with few entrants) but a C in Maths – well, you can’t get much more of a stupid wrong answer than that!
What can we learn from all this?
Automated decision making will often produce stupid wrong answers because a computer can only work with the data it’s given (the GIGO law – garbage in, garbage out), whereas a human can often spot errors in the data and/or extrapolate from incomplete data, pulling in fresh information from elsewhere to see the big picture that the computer can’t.
Sometimes it’s better to give yourself the experience of entering data manually (like the staff member doing reductions) than to import it automatically. You’ll get a much better feel for what’s going on and be able to pick up any anomalies.
This is why we don’t have an automated reranking system for brand listings on Best New Bingo Sites. We do use a lot of information from click and earnings reports, but it is collated and sanity-checked by a human (me). Past performance is simply not enough of a guide to future success. Even if a brand hasn’t changed its welcome offer, some other brand may have a great new offer, or a new brand may be about to launch. Either of those could cause traffic to a brand to drop off sharply. And then there’s spotting anomalies, like lots of click-outs from a brand review when we rank so highly for the brand name that existing players are using our site as a way to return there. Or lots of click-outs that don’t result in registrations or deposits, indicating that there may be an issue with the landing page or the registration process.
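The flags themselves are simple enough to write down, even though deciding what a flagged brand means still needs a human. Something like the following – with made-up field names and thresholds, since the real checks are done by eye over the reports:

```python
# Illustrative sanity checks over a brand's click report. The field
# names and thresholds are invented for the example; in practice this
# judgement is applied by a human reading the reports.

def flag_anomalies(report: dict) -> list[str]:
    """report is assumed to hold per-brand totals, e.g.
    {"clicks": 1200, "registrations": 15, "deposits": 0,
     "branded_search_share": 0.85}.
    """
    flags = []
    clicks = report["clicks"]
    if clicks and report["registrations"] / clicks < 0.02:
        # Plenty of click-outs but almost no sign-ups: possible broken
        # landing page or registration process.
        flags.append("low click-to-registration rate")
    if report["branded_search_share"] > 0.8:
        # Ranking highly for the brand name: existing players may be
        # using the site as a way back in, inflating the click counts.
        flags.append("clicks likely navigational, not new players")
    if report["registrations"] and not report["deposits"]:
        flags.append("registrations without deposits - check funnel")
    return flags
```

A ranking algorithm fed only the raw click counts would happily promote the second case; a human reading the same report sees straight through it.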
Maybe one day computers will be capable of pattern recognition from incomplete or disparate data. But it’s surely a long way off – and until then, as far as seeing the big picture is concerned, humans really are smarter than computers.