
**By Edric Wang, Australian National University (University of Sydney)**

When we learn algebra in school we learn about sets of numbers, such as the integers: 0, 1, -1, 2, -2 and so on. We also learn to apply operations to those numbers, such as addition, and we learn rules for these operations and the consequences of those rules. In abstract algebra, we consider the same ideas in a more abstract setting: instead of studying only numbers, we study how abstract elements interact under operations.

For example, let our elements be the symmetries of a square: that is, all the motions of the square which preserve its shape. Reflection about a given axis and clockwise rotation by 90 degrees are two such symmetries. We can define an operation on these symmetries known as composition: we compose two symmetries by performing one after the other. Now we have a set of elements and an operation on those elements, so we can study the properties of this operation. In this case, these properties tell us about the symmetry of the square.
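
To make composition concrete, here is a small sketch (our own illustration, not part of the original post) that represents each symmetry of the square as a permutation of its four corners:

```python
# Symmetries of the square as permutations of its corners 0, 1, 2, 3.
# The tuple s means "corner i moves to position s[i]".

def compose(f, g):
    """Perform g first, then f: (f after g)(corner) = f[g[corner]]."""
    return tuple(f[g[i]] for i in range(4))

identity = (0, 1, 2, 3)
rot90    = (1, 2, 3, 0)   # rotation by 90 degrees
reflect  = (1, 0, 3, 2)   # a reflection swapping two pairs of corners

# A reflection composed with itself undoes itself:
print(compose(reflect, reflect) == identity)   # True

# Rotating four times by 90 degrees returns the square to its start:
r = identity
for _ in range(4):
    r = compose(rot90, r)
print(r == identity)                           # True
```

Playing with such compositions generates all eight symmetries of the square, known collectively as the dihedral group.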

It turns out that the two systems we have described (the integers with addition, and the symmetries of the square with composition) are both examples of what is known in abstract algebra as a group. This is to say that they share the same underlying structure. Therefore, if we study groups in the abstract, any results we obtain apply to both the integers and the symmetries of the square. So abstract algebra allows us to prove results in sweeping generality instead of proving essentially the same result under different guises. In this way, once we recognise a mathematical object as an instance of a certain algebraic structure, we already know a lot about that object without having done any work.

*Edric Wang was one of the recipients of a 2017/18 AMSI Vacation Research Scholarship.*

**By Michael Fotopoulos, Monash University**

Most mathematicians and mathematics appreciators alike would be familiar with the broad and quintessential field of calculus. Although not properly formalised until the 1800s through the efforts of Cauchy and Weierstrass, calculus has been instrumental in the development of physics, mathematics, engineering, and even biology and chemistry, from the 1600s right through to the modern day.

To briefly refresh the reader: calculus concerns itself with extrema of mathematical objects, be they ordinary functions, surfaces, or, in the case of my research, functionals. Naïvely speaking, a functional is like a function that takes *functions* as its parameters. Generally, locating and classifying extrema involves taking derivatives, but unfortunately this is not so simple for functionals. This precise dilemma led to the development of the *calculus of variations*. Without invoking too much technical jargon, the calculus of variations allows us to devise a rigorous definition of these “functional derivatives” and do mathematics with them.

My research concerned finding extrema of a particular functional (hereafter called an “energy”) that one can attribute to two-dimensional surfaces living in three-dimensional Euclidean space. To understand the energy that interested me more precisely, we must journey to another field of mathematics called *differential geometry*. The element of this field with which I concerned myself was *mean curvature*. You can think of curvature as a measure of how much a surface deviates from a plane: a point with positive mean curvature would bend down all around, like the top of a mountain, while a point with negative mean curvature would bend up to form a bowl or a valley. From this notion of mean curvature, we can define an energy that computes the total mean curvature of a given surface (via an integral over the entire surface). Surfaces that minimise this energy we appropriately call *minimal surfaces*; their mean curvature is 0 everywhere. Examples of these surfaces are the shapes formed by soap films stretched between two solid surfaces. There are many other examples of minimal surfaces whose properties have been well studied in the literature for some time.
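
As a concrete check on these ideas, the mean curvature of a parametrised surface can be computed symbolically from its first and second fundamental forms. The sketch below (our own illustration, assuming the `sympy` library is available) verifies that the catenoid, the soap-film surface formed between two parallel rings, has mean curvature 0 everywhere:

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)

# Catenoid parametrisation: a classical minimal surface.
X = sp.Matrix([sp.cosh(v) * sp.cos(u), sp.cosh(v) * sp.sin(u), v])

Xu, Xv = X.diff(u), X.diff(v)
# First fundamental form coefficients (measuring lengths on the surface)
E, F, G = Xu.dot(Xu), Xu.dot(Xv), Xv.dot(Xv)

# Unit normal to the surface
cr = Xu.cross(Xv)
n = cr / sp.sqrt(cr.dot(cr))

# Second fundamental form coefficients (measuring how the surface bends)
e = X.diff(u, 2).dot(n)
f = X.diff(u).diff(v).dot(n)
g = X.diff(v, 2).dot(n)

# Mean curvature H = (eG - 2fF + gE) / (2(EG - F^2))
H = sp.simplify((e * G - 2 * f * F + g * E) / (2 * (E * G - F**2)))
print(H)  # prints 0: the catenoid is a minimal surface
```

The same computation applied to a sphere or a saddle would return a nonzero mean curvature.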

Specifically, I studied a slightly different energy (whose formal definition is too unwieldy for the scope of this blog) whose minimisers form a larger class of surfaces called “constant mean curvature” surfaces. Of course, this class also contains the minimal surfaces described above, for which the mean curvature is the constant 0. In my report, I detail a method for deriving a system of partial differential equations whose solutions include not only these surfaces but also many more that describe other extrema of the energy in question.

Covering such a broad topic, this research project allowed me to learn and use a number of different fields of mathematics, laying the foundations for more in-depth future work and research in analysis.

*Michael Fotopoulos was one of the recipients of a 2017/18 AMSI Vacation Research Scholarship.*

**Yueyi Sun, The University of Sydney**

Game theory has various applications in economics, and there are many popular concepts in game theory that we may have heard of before, such as the Nash equilibrium.

Today, many of us have tried investing in trading markets, such as the stock, futures and options markets. We may or may not truly understand how the market operates, but it is well known that the market price depends to a great extent on demand and supply. The market is also relatively stable: the price stays in equilibrium unless some big news comes out, after which the price fluctuates and eventually settles at a new balance.

How does the market keep this balance? One of the tools used is the order book, an electronic list of buy and sell orders. Investors send electronic messages to the order book; each message contains a side (“buy” or “sell”), a price and a quantity. A matching engine compares each new order against the book, and if the new order matches an existing one, a transaction takes place. All orders resting in the book are pending, meaning investors can withdraw them at any time before a transaction happens. In finance and economics there are many papers studying the relationship between order book volume and the trend of the market price. I was very excited to get the summer research opportunity to use a game theory approach to study how the order book volume influences each investor’s trading strategy, and in turn the market price.
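
The matching mechanics described above can be sketched in a few lines of code. The class below is a minimal, hypothetical price-priority matching engine for illustration only; real exchanges also enforce time priority and support order cancellation, which are omitted here:

```python
import heapq

class OrderBook:
    """Minimal price-priority limit order book (an illustrative sketch,
    not the mean field game model from the research)."""

    def __init__(self):
        self._bids = []   # max-heap of resting buys, stored as (-price, qty)
        self._asks = []   # min-heap of resting sells, stored as (price, qty)
        self.trades = []  # executed (price, qty) pairs

    def submit(self, side, price, qty):
        """Match an incoming order against the book; rest any remainder."""
        if side == 'buy':
            # Cross against the cheapest resting sells first.
            while qty > 0 and self._asks and self._asks[0][0] <= price:
                ask_price, ask_qty = heapq.heappop(self._asks)
                traded = min(qty, ask_qty)
                self.trades.append((ask_price, traded))
                qty -= traded
                if ask_qty > traded:   # partially filled order rests again
                    heapq.heappush(self._asks, (ask_price, ask_qty - traded))
            if qty > 0:
                heapq.heappush(self._bids, (-price, qty))
        else:
            # Cross against the highest resting buys first.
            while qty > 0 and self._bids and -self._bids[0][0] >= price:
                neg_bid, bid_qty = heapq.heappop(self._bids)
                traded = min(qty, bid_qty)
                self.trades.append((-neg_bid, traded))
                qty -= traded
                if bid_qty > traded:
                    heapq.heappush(self._bids, (neg_bid, bid_qty - traded))
            if qty > 0:
                heapq.heappush(self._asks, (price, qty))

book = OrderBook()
book.submit('buy', 100, 5)   # rests in the book: no sellers yet
book.submit('sell', 99, 3)   # crosses the resting bid
print(book.trades)           # [(100, 3)], with 2 units still bid at 100
```

Until a matching order arrives, every resting order is pending and could, in a fuller model, be withdrawn.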

Most investors share some common trading behaviours. Individual investors tend to trade small quantities at a high frequency, while institutional investors tend to trade large quantities at a relatively low frequency. In some sense, we can assume that investors within the same group are identical. That is why we introduce the mean field game approach to model the order book dynamics: the identical-investors assumption satisfies the assumptions of a mean field game. Mean field games are used when the number of players is so large that it would be difficult to analyse each player’s optimal strategy based on all the other players’ individual strategies. The “mean” is the average strategy: we use the average strategy of the population to determine each player’s optimal strategy.

The first part of my summer research project was to learn some interesting concepts in game theory. This part was both relaxing and exciting: the textbook was understandable and I learned new things every day. The second part focused on using the mean field game approach to model the order book in the market, which was challenging yet interesting as well. What I gained from my summer research is not only new knowledge; more importantly, I now have a much clearer idea of how academic research is undertaken, which has strengthened my resolve to pursue further academic study.

Reference:

Lachapelle, A, Lasry, J-M, Lehalle, C-A, & Lions, P-L 2016, ‘Efficiency of the price formation process in presence of high frequency participants: a mean field game analysis’, *Mathematics and Financial Economics*, vol. 10, pp. 223-262.

*Yueyi Sun was one of the recipients of a 2017/18 AMSI Vacation Research Scholarship.*

**Quinn Patterson, University of Wollongong**

My project has been concerned with the relationship between curvature and holes. The first question we must ask, then, is: what do we mean by curvature? For a surface, we can use a nice picture:

A surface has positive curvature at a point if all the curves passing through that point bend in the same direction. The curvature is negative at that point if there are curves passing through the point that bend in opposite directions, like a saddle. The curvature is zero if there is some curve passing through the point which does not bend at all.

For example, the sphere has positive curvature everywhere. The torus (a doughnut) can be seen to have all three kinds of curvature at different points: on the outside, curves bend in the same direction, whilst inside the hole the torus looks like one big saddle, with curves bending in opposite directions.

Now we wish to relate the curvature of a surface to the existence of holes. But how do we come up with a good notion of a hole? To keep track of holes, we count how many kinds of loops there are that can’t be pulled into a point without being broken or leaving the surface. Imagine you laid down a loop of string on the sphere. If you pulled both ends of the string tighter so that the loop contracted smaller and smaller, it would eventually become a point, and all of this can be done without leaving the sphere or breaking the string.

On the torus, however, there are two different loops that cannot be pulled down to a point and cannot be slid into one another. These are the two different types of black lines in the picture above. If you tied a piece of string through the hole, you could slide it around the tube of the torus, but there would be no way to pull the string into a point, because the tube is in the way! The other kind of loop comes from tying a string along the top of the torus, around the hole; you wouldn’t be able to contract that string to a point without leaving the surface entirely. You also cannot slide the two different types of loops into each other. If your surface had any kind of hole, you would be able to tie a loop through the hole which could not then be slipped off the surface. Thus if we have loops that can’t be contracted, we must have a hole.

The main result of my project has been using this notion of a ‘hole’ to prove the following:

**Theorem:** If a surface has a hole, then it must have negative curvature at some point.

There are quite a few technicalities and caveats. Mathematics allows us to consider surfaces in ‘higher dimensions’, and in higher dimensions there are many more directions in which you can bend, making curvature a lot more complicated. Similarly, loops which can’t be contracted to points behave strangely. However, guided by the intuition from the case of surfaces, in my project we have been able to prove analogous statements for higher-dimensional surfaces too.

*Quinn Patterson was one of the recipients of a 2017/18 AMSI Vacation Research Scholarship.*

**Lachlann O’Donnell, University of Wollongong**

We might all feel like we have some idea of exactly what curvature is, but it is a surprisingly difficult concept to formalise. For instance, you might think of curvature as something which tells us how much an object bends in each direction, but this is not a very rigorous definition: it tells us nothing of what it might mean to bend in a particular direction, or how one part of the surface might differ from another. To illustrate what we might call positive curvature, there is an easy experiment. First find any spherical object you don’t mind drawing on (something like an orange, a tangerine or a balloon) and a marker to draw with.

Mark out any two points on your object of choice and draw a straight line from each of them (one that won’t immediately hit the other point). You may notice that as you extend these lines they start approaching each other until they finally meet. This is quite different from the standard geometry you see in school, where parallel lines should never meet. That is only true on flat surfaces, like the plane or the face of a cylinder. So we say that an object has positive curvature wherever it behaves like the sphere, and is flat wherever it behaves like the plane.

What about negative curvature? We continue our experiment, this time on shapes that look like saddles, and find that as we trace out our straight lines they branch off from each other and get further and further apart. This is different from the flat case, since the distance between the two lines is always increasing, not remaining constant as it would between parallel lines on a plane. An example of a surface with negative curvature everywhere is the pseudosphere.

This experiment is all well and good, but where exactly does curvature appear in higher-level mathematics? Without going too in depth, we shall focus on one application that at first seems to have nothing to do with curvature, yet to which curvature turns out to be fundamental. Say you wish to find and classify every possible surface which has minimal surface area (this has obvious applications in engineering and physics); how might you go about doing so? This problem goes back to the time of Lagrange (~1760), and surprisingly the answer is that the curvature of a surface determines whether it is minimal or not. We call the average of the curvatures in each direction the mean curvature, and a surface is minimal precisely when its mean curvature is zero everywhere. This admits some very strange examples of minimal surfaces, for instance Costa’s surface.

So curvature is a really useful concept in both pure and applied mathematics, as it explicitly tells us something about the geometry of the surface we are considering, which is invaluable in areas like differential geometry and geometric analysis.

*Lachlann O’Donnell was one of the recipients of a 2017/18 AMSI Vacation Research Scholarship.*

**Angus Alexander, University of Wollongong**

The motivation for this project lies in physics. Physics is the study of matter, energy and the interaction between them. To study this interaction, we often want to develop equations of motion and solve them to develop a picture of how our system will behave. A common technique for doing this is to develop what is called an action principle. For more information on these see [1].

One of the earliest action principles, Fermat’s principle (also known as the principle of least time), was developed in the 17th century by the French mathematician Pierre de Fermat. It states that the path light travels between two points is the path which takes the least time. This principle can be used to describe the properties of light in many optical systems, such as reflection from mirrors.

Possibly the most well-known action principle is Hamilton’s principle, also known as the principle of least action, developed in the 19th century by the Irish mathematician William Rowan Hamilton. It states that the actual path travelled by a particle is the one which minimises the action, a specific functional of the kinetic and potential energy. The principle of least action is fundamental in classical mechanics and is equivalent to Newton’s second law of motion. Figure 1 [2] below shows the actual path taken by a particle in comparison to a path of greater action.
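
The principle can even be demonstrated numerically. In the sketch below (our own illustration, assuming `numpy` is available), a ball’s height is pinned to zero at times t = 0 and t = T, a discretised action for motion under gravity is relaxed to its minimum, and the path that emerges is the familiar parabola of a ball thrown straight up:

```python
import numpy as np

# Discretised least action for a ball of unit mass under gravity g:
# S = sum over steps of [ (1/2)*((x[k+1]-x[k])/dt)**2 - g*x[k] ] * dt,
# with the height pinned to 0 at both endpoints.
g, T, n = 9.8, 2.0, 21
dt = T / (n - 1)
t = np.linspace(0, T, n)
x = np.zeros(n)   # initial guess: the ball never leaves the ground

# Relax toward the stationarity condition of the action,
# 2*x[k] = x[k-1] + x[k+1] + g*dt**2 at each interior time step.
for _ in range(5000):
    x[1:-1] = 0.5 * (x[:-2] + x[2:]) + 0.5 * g * dt**2

# The classical trajectory solving x'' = -g with x(0) = x(T) = 0:
exact = 0.5 * g * t * (T - t)
print(np.max(np.abs(x - exact)))  # essentially 0: least action recovers it
```

Minimising the action over all pinned paths reproduces exactly the trajectory Newton’s second law would predict.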

In the 20th century the physicist Richard Feynman applied this technique to quantum mechanics (the study of the behaviour of subatomic particles); however, his approach is difficult to work with and often poorly defined. Feynman’s approach also does not work when particles approach the speed of light, as is the case in relativistic quantum mechanics. The behaviour of particles in this setting is often quite strange and difficult to determine.

Feynman created a pictorial representation of these interactions: Feynman diagrams. Lines represent the trajectories of particles, with squiggly and straight lines depicting different types of particles, moving from left to right. The intersection of lines indicates particle interactions and emissions, the processes we wish to study.

Figure 2 [3] above shows an electron and positron pair annihilating and producing a photon that decays into a quark-antiquark pair, which radiates a gluon.

In the 1990s the mathematician Alain Connes developed an action principle for relativistic quantum mechanics in an idealised situation; however, this principle does not apply to real physical problems. The goal of this project was to investigate how recent results by Bar and Strohmaier [5] could be combined with those of Connes [4] to develop such a principle for more realistic models. The main achievement of the project was analysing such a principle in the simple case of a particle on a cylinder.

References

[1] Wolfgang Yourgrau and Stanley Mandelstam. *Variational Principles in Dynamics and Quantum Theory*, General Publishing Company, 1968.

[2] University of Reading (1996). *Rays and Geometrical Optics. *http://www.met.reading.ac.uk/pplato2/h-flap/phys6_2.html . Retrieved 20/2/18

[3] Wikipedia Foundation, Inc. (2018). *Feynman Diagram*. https://en.wikipedia.org/wiki/Feynman_diagram . Retrieved 20/2/18

[4] Alain Connes. *Gravity coupled with matter and the foundation of noncommutative geometry*. Communications in Mathematical Physics, (1):155-156, 1996.

[5] Christian Bar and Alexander Strohmaier. *An index theorem for Lorentzian manifolds with compact spacelike Cauchy boundary*. American Journal of Mathematics, 2015.

*Angus Alexander was one of the recipients of a 2017/18 AMSI Vacation Research Scholarship.*

**Vishnu Mangalath, The University of Western Australia**

Pick a random point between zero and one. Can you write a program that computes that number to any desired number of decimal places? The answer, surprisingly, will almost always be no. This is because almost every real number is not computable, meaning there is no program that computes the number to a desired precision. This is quite an odd notion, since almost every number you will have seen, rational or irrational, will have been computable. The reason is that every algorithm can be simulated by a model called a *Turing machine*, and there are only countably many Turing machines.

A Turing machine is a very simple model of a computer, developed by the mathematician and computer scientist Alan Turing. It operates on an infinite strip of tape divided into discrete cells. Each cell holds a symbol, such as a zero or a one, which the machine can read and write. This happens one cell at a time, according to a finite set of instructions given by the user. It turns out that despite the simplicity of this model, every algorithm run by a modern computer can be represented by a Turing machine.
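
As an illustration, here is a tiny Turing machine simulator (our own sketch; the state names, the blank symbol `_` and the binary-increment transition table are our choices, not a standard formulation):

```python
def run_tm(tape, transitions, state='start', head=0, halt='halt', max_steps=1000):
    """Simulate a Turing machine. `transitions` maps (state, symbol)
    to (symbol_to_write, head_move, next_state), with head_move in {-1, 0, +1}."""
    cells = dict(enumerate(tape))          # sparse tape; '_' means blank
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = cells.get(head, '_')
        write, move, state = transitions[(state, symbol)]
        cells[head] = write
        head += move
    lo, hi = min(cells), max(cells)
    return ''.join(cells.get(i, '_') for i in range(lo, hi + 1)).strip('_')

# Binary increment: starting at the least-significant (rightmost) bit,
# turn trailing 1s into 0s until a 0 (or a blank) absorbs the carry.
inc = {
    ('start', '1'): ('0', -1, 'start'),
    ('start', '0'): ('1', 0, 'halt'),
    ('start', '_'): ('1', 0, 'halt'),
}

print(run_tm('1011', inc, head=3))  # '1100'
print(run_tm('111', inc, head=2))   # '1000'
```

Despite how little machinery is involved, tables of rules like `inc` are, in principle, all a modern computer ever executes.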

Since the list of instructions must be finite, we can represent every Turing machine as a finite sequence of zeroes and ones. This means we can assign a distinct natural number to every Turing machine, implying that the set of Turing machines is countable. The real numbers, however, are uncountable, so the vast majority of real numbers are not computable. One example of a non-computable number is Chaitin’s number, which represents the probability that a randomly constructed program will halt.

We can then extend the notion of computable numbers to computable analysis, which is concerned with transferring notions from real analysis (the study of functions that map real numbers to real numbers) to the set of computable numbers. Quite a few results from real analysis have similar formulations in computable analysis, such as the Heine-Borel theorem.

*Vishnu Mangalath was one of the recipients of a 2017/18 AMSI Vacation Research Scholarship.*

**Yilun He, The University of Sydney**

My supervisor introduced the idea of false discoveries to me last year. In statistics, almost every practical decision has a chance of being wrong, due to the randomness of the real world. One important job of statisticians is to control this risk at an appropriate level. If a decision method is too conservative, it is useless, because it will never give you a positive statement. If a decision method does not achieve the safety level it claims, it may lead to catastrophic errors.

The idea of false discovery control was originally invented to mitigate the problem that, when people make a large number of statistical decisions at the same time, it is almost impossible to guarantee correctness for all of them. Playing Russian roulette once does not guarantee your death, but if you play it ten times a day you are not likely to survive. Traditionally, in such a scenario, no one could make any useful statement. False discovery control changed this, and statistical decisions can now be made even at that scale.
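
One widely used tool of this kind, though not necessarily one of the methods studied in this project, is the Benjamini-Hochberg step-up procedure, which controls the expected proportion of false discoveries rather than the chance of any single error. A minimal sketch:

```python
def benjamini_hochberg(pvalues, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: return the indices of
    rejected hypotheses, controlling the false discovery rate at alpha."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    k = 0  # largest rank whose p-value clears its threshold rank*alpha/m
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= rank * alpha / m:
            k = rank
    return sorted(order[:k])  # reject everything up to that rank

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06]
print(benjamini_hochberg(pvals, alpha=0.05))  # [0, 1]
```

Note that the third p-value, 0.039, would survive a naive per-test 0.05 cutoff but is not rejected here: that gap between per-test significance and collective control is exactly where the risk discussed below lives.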

But being able to join the gamble is not always a good thing; you have to know the rules of the game. Statisticians provide these tools and advertise their benefits, and sometimes people use them before they understand the risks. The nature of false discovery rate control is potentially dangerous, and many statisticians are aware of this, but practitioners on the application side are often not warned.

I studied, implemented and tested a few methods that strike a better balance between risk and utility. This reminds me of some ancient Chinese wisdom: the idea of Yin and Yang is mainly about balance, and it lies at the core of Chinese philosophy.

The project also inspired much critical thinking about statistics. How do we define an appropriate confidence level? If we make many decisions in our lives, can we afford for a sizeable proportion of them to be totally wrong? This invaluable experience will surely take me further into the realm of mathematics and statistics.

*Yilun He was one of the recipients of a 2017/18 AMSI Vacation Research Scholarship.*

**Syamand Hasam, The University of Sydney**

One of the interesting parts of my research this summer was looking at the structure of how things can be related to one another. In this post I will summarise part of what my research was about while introducing some elementary concepts in mathematics and statistics, finishing with a nice counting puzzle that is tangentially related to some of the ideas I’ll present.

Suppose we analyse a sample of organic material containing proteins, and we want to identify the proteins and the peptides (the building blocks of proteins) inside the sample. From the analysis we obtain what are called spectra, and for each spectrum we try to match a peptide to it. The current methods of doing so are neither exact nor certain: it is only with a certain probability that we correctly match a spectrum (σ) with a peptide (p). At the same time, there may be thousands of such spectra, each with its own probabilistic peptide match.

One interesting question would be to see how these spectra depend on one another. Remember, each spectrum is an analysed form of a real physical peptide, so there could be various reasons why two spectra might be related. Suppose we have a way of determining how likely it is that there is a relationship between two spectra σi and σj. For a full exposition of how I approached this problem, refer to my VRS report, but for our purposes we only need to know that there are many spectra σ1, σ2, …, σn, each with a peptide match p1, p2, …, pn respectively (duplicates being possible). Moreover, each pair of spectra σi, σj is either related or unrelated. In mathematics, what we have constructed is called a graph. A visual representation of a graph is given below.

In this graph, each circle (called a vertex) represents a spectrum σi, and a line between two circles (called an edge) indicates that the spectra those vertices represent are dependent. This particular graph is a famous one called the Petersen graph, named in honour of the Danish mathematician Julius Petersen. I have coloured the circles to represent the spectra being matched to different peptides: the red ones are matched to one peptide and the blue ones to another.

Now, questions about this kind of coloured graph are interesting in how they relate to the research I did. Do we expect that spectra which are dependent are more likely to match to the same peptide? Note that spectra do not need to match to the same peptide in order to be dependent. Just a note: we say that vertices are connected if there is a path of edges between them. Here is a problem that you, the reader, can have a crack at. Take the Petersen graph displayed above: it has 10 vertices and 15 edges, and each vertex is distinct and labelled.

*How many ways are there to colour the vertices of the Petersen graph, 5 red and 5 blue, such that all the red vertices are connected and all the blue are connected?*

I’ll write the answer at the bottom of this post, but the important part is to find why the answer is as it is. The reason why questions like these are important is that we want to be able to know how likely certain peptide matches are in certain dependency structures.

The answer is: 132. Good luck!
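
The answer can also be checked by brute force over all 252 ways of choosing the five red vertices. The sketch below assumes the standard labelling of the Petersen graph (outer 5-cycle on vertices 0-4, inner vertices 5-9):

```python
from itertools import combinations

# Standard Petersen graph: outer 5-cycle, spokes, inner pentagram.
edges = ([(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]   # outer cycle
         + [(i, i + 5) for i in range(5)]           # spokes
         + [(5, 7), (7, 9), (9, 6), (6, 8), (8, 5)])  # inner pentagram
adj = {v: set() for v in range(10)}
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

def connected(vertices):
    """Is the subgraph induced on `vertices` connected? (depth-first search)"""
    vertices = set(vertices)
    seen, stack = set(), [next(iter(vertices))]
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        stack.extend(adj[v] & vertices)
    return seen == vertices

count = sum(1 for red in combinations(range(10), 5)
            if connected(red) and connected(set(range(10)) - set(red)))
print(count)  # should agree with the answer given above
```

A search like this confirms the count but, of course, gives none of the insight that a combinatorial argument would.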

*Syamand Hasam was one of the recipients of a 2017/18 AMSI Vacation Research Scholarship.*

**Ruebena Dawes, The University of Sydney**

I didn’t begin my mathematics degree out of a particularly strong passion for the subject. I had always enjoyed maths, but it was largely a pragmatic and spontaneous choice, driven by a rousing and convincing speech at the open day of my university. Over the last three years, however, my love and awe for the subject has grown dramatically. My favourite part of my degree has been the moments of clarity wrought by stretching my mind to understand new mathematical concepts.

One simple example came at the tail end of a unit on ordinary differential equations. In leading us through the Poincaré-Hopf Index Theorem, my lecturer asked us to consider the Cartesian plane (shown right) and asked why we can’t see the axes intersecting at infinity like we can see them intersecting at 0 (the origin). He then asked us to imagine the plane as a sphere that also intersects at infinity, and said that the reason we can’t see it intersect is that we are viewing it from that point: we are looking at the origin from infinity (shown below). If we change our perspective to the origin (by a change of coordinates), then we will be able to see the plane intersect at infinity.

There was more to the theorem, but I just found this concept so stimulating. Something as plain and simple as the x,y-axes could be thought of in a totally different way. I was so excited that I explained it to my mum. She found it interesting but was mostly puzzled. I remember her saying “I just didn’t know that that kind of thing was maths”.

I think many people experience this same confusion when they hear enthusiasts expound on the beauties of mathematics, and I don’t blame them. When I think back to learning and using the quadratic formula ad nauseam in high school, I don’t feel particularly inspired. Mathematics is an exceedingly useful discipline; learning to apply theorems and crunch the numbers is a great life skill, but it is not all there is to appreciate about maths.

The example I gave above is not meant to serve as a profound example of the beauty of maths; it was just something I recall finding thought-provoking. But I do mean it to serve as an example of one of the foremost things I discovered throughout my degree: learning mathematical concepts that require stretching your imagination and extending your mind is as enriching to the soul as reading poetry or viewing a great piece of art. To me, the joy of mathematics is the joy of intuition and revelation, and I am so grateful that I have been able to study such a unique discipline these past three years. I look forward to spending the rest of my life learning more mathematics and seeking out more of these moments of intuition and revelation.

*Ruebena Dawes was one of the recipients of a 2017/18 AMSI Vacation Research Scholarship.*