Contributor(s): Professor Martin Anthony

Diagnosing tumours, playing video games, detecting credit card fraud, recognising faces, reading handwriting… they don't seem like similar tasks, but they are all cases where "machine learning" is employed to enable computers to make intelligent decisions. And although the various tasks look very different, the mathematics behind them is remarkably similar, as Professor Martin Anthony explains in this short film.

When computers fail to do something we find easy – reading handwriting, recognising faces – it's tempting to think of them as stupid machines. But it's often the case that tasks we find relatively easy to perform evade explicit codification. How, for example, would you specify rules which correctly identified cats and only cats – including three-legged cats – but excluded dogs?

Employing ideas from probability theory, statistics, linear algebra, geometry and discrete mathematics, machine learning aims to generate systems of instructions – algorithms – that allow computers to perform cognitive-style tasks. In abstract terms, machine learning involves detecting patterns in very large datasets, clustering together similar objects and distinguishing dissimilar ones. This could help with the detection of anomalies (as with the identification of malignant tumours or fraudulent credit card usage), or it could be used to recognise patterns – making sense of handwritten characters, for example.

But despite the extraordinary real-world effects this theoretical work makes possible, Professor Anthony, like many mathematicians, isn't directly concerned with the uses to which his work is eventually put. 'I think of myself as an applicable mathematician,' he says. 'If it wasn't interesting mathematically, I'd probably be doing something else.'
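As an illustrative aside (not drawn from the film itself), the abstract recipe described above – grouping similar objects together and flagging dissimilar ones – can be sketched in a few lines of Python. This toy example treats "similarity" as Euclidean distance from the average of the data; the data points and the threshold factor are invented purely for illustration, not a real anomaly-detection method used in tumour or fraud screening.

```python
# Toy sketch of anomaly detection: flag points that lie far from
# the centre of the cluster formed by the rest of the data.
# Data and threshold are invented for illustration only.
import math

def distance(a, b):
    """Euclidean distance between two points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def centroid(points):
    """Component-wise mean of a set of points."""
    dims = len(points[0])
    return [sum(p[i] for p in points) / len(points) for i in range(dims)]

def flag_anomalies(points, factor=2.0):
    """Return the points whose distance from the centroid exceeds
    `factor` times the average distance (a hypothetical threshold)."""
    c = centroid(points)
    dists = [distance(p, c) for p in points]
    avg = sum(dists) / len(dists)
    return [p for p, d in zip(points, dists) if d > factor * avg]

# Four mutually similar "normal" points, plus one clear outlier.
data = [(1.0, 1.1), (0.9, 1.0), (1.1, 0.9), (1.0, 1.0), (8.0, 8.0)]
print(flag_anomalies(data))  # → [(8.0, 8.0)]
```

Real systems replace this crude distance rule with models learned from large datasets, but the underlying geometric idea – dissimilar objects sit far from the cluster of similar ones – is the same.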