In my third year of mathematics at Monash University I was introduced to the concept of iterative methods. For me, mathematics had always been about getting exact solutions to problems. The field of computational mathematics was foreign, yet intriguing. There were ways to simply, efficiently and accurately solve extremely large problems that exist outside a purely theoretical landscape.
The most interesting problem I’ve been exposed to, and, as it turns out, a relatively simple one, was that of the ‘Netflix prize’, run from 2006 to 2009. This was a publicly available problem, and one that came with a $1,000,000 prize. If you have a Netflix subscription, or have ever used a streaming site, then there’s a good chance you’ve been exposed to the user end of some high-level computational mathematics.
The ‘Netflix prize’ itself was based on the recommendation problem, in which we wish to match movies to specific users based on how those users have rated other movies in the past. This problem is not specific to movies, or even to Netflix itself: any company that makes recommendations based on previous user experience must solve this problem, or at least one like it.
A simple solution to this problem relies on the alternating least squares method, an optimisation algorithm that takes turns minimising over distinct parts of a problem until a solution has been found. The recommendation problem can be recast as a matrix decomposition: we try to find two smaller matrices that multiply together to approximate the matrix we want. In this case, the ‘Netflix prize’ went to the team with the most accurate way of reconstructing the full ratings matrix from its known entries, the data on how users had rated movies.
Effectively, knowing how you’ve rated movies in the past, we can use the alternating least squares method to predict how you would rate movies you haven’t yet seen, giving Netflix the ability to recommend titles of real interest to you.
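To make the idea concrete, here is a minimal NumPy sketch of one way alternating least squares can fill in a ratings matrix. The toy ratings, the number of latent factors `k`, the regularisation weight `lam` and the iteration count are all invented for illustration; this is not the prize-winning method, just the basic alternating scheme it builds on.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ratings matrix: 6 users x 5 movies, with 0 marking an unknown rating.
# (Hypothetical data, chosen only to illustrate the algorithm.)
R = np.array([
    [5, 4, 0, 1, 0],
    [4, 0, 0, 1, 1],
    [1, 1, 0, 5, 4],
    [0, 1, 5, 4, 0],
    [5, 4, 4, 0, 1],
    [1, 0, 1, 4, 5],
], dtype=float)
mask = R > 0  # True where a rating is known

k, lam, n_iters = 2, 0.1, 50       # latent factors, regularisation, sweeps
n_users, n_movies = R.shape
U = rng.standard_normal((n_users, k))    # user factor matrix
V = rng.standard_normal((n_movies, k))   # movie factor matrix

for _ in range(n_iters):
    # Fix V and solve a small regularised least-squares problem per user...
    for i in range(n_users):
        Vi = V[mask[i]]                       # factors of movies user i rated
        A = Vi.T @ Vi + lam * np.eye(k)
        b = Vi.T @ R[i, mask[i]]
        U[i] = np.linalg.solve(A, b)
    # ...then fix U and do the same per movie. Alternating these two
    # steps is what gives the method its name.
    for j in range(n_movies):
        Uj = U[mask[:, j]]                    # factors of users who rated movie j
        A = Uj.T @ Uj + lam * np.eye(k)
        b = Uj.T @ R[mask[:, j], j]
        V[j] = np.linalg.solve(A, b)

pred = U @ V.T  # full predicted ratings matrix, including the former gaps
rmse = np.sqrt(np.mean((pred[mask] - R[mask]) ** 2))
```

Each inner solve is an ordinary least-squares fit, which is why fixing one factor matrix at a time makes the problem tractable: with `V` held fixed, each user's row of `U` has a closed-form solution, and vice versa. The entries of `pred` at the previously unknown positions are the predicted ratings a recommender would rank by.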
These sorts of problems are why I chose to do my AMSI Vacation Research Scholarship in iterative methods. My project was an extension of the alternating least squares method and an attempt to use other methods to accelerate its convergence. Increasing the speed of such algorithms is necessary in an age when data sets are only becoming larger and more difficult to deal with. In the future I hope to continue creating new methods for such problems, to improve our ability to handle data and pull the most information from it. You never know what the next million-dollar question might be!
Drew Mitchell was one of the recipients of a 2017/18 AMSI Vacation Research Scholarship.