**By Sajit Gurubacharya, University of Western Sydney**

Computers understanding our language is one of the many things being automated nowadays. It's funny, because only a few decades ago programmers had to code in assembly languages, essentially talking directly to computers, and now look how the tables have turned!

The big tech companies have been doing it for quite some time now. Siri, Cortana and Alexa are just a few examples. They recognize your speech, convert it into text and then make sense of that text. Similarly, when you're Googling a question, the search engine is able to understand it and give you (more or less) accurate answers directly, or at least guide you in the right direction by understanding the text present in millions of web pages. How do they do it?

My project addressed a similar question, but on a smaller scale: how can we detect racism in Australian social networks? Instead of analysing web pages, I analysed tweets using a machine learning algorithm called Word2vec. As the name suggests, it converts words into vectors of real numbers. But how does one even start putting a value on a word? Using its length, part of speech or frequency? It starts with finding the meaning of the word, not from a dictionary but from the other words that surround it in a sentence. In other ‘words’, the algorithm looks at the context of the word and seeks meaning in it.

Example:

The quick brown fox jumps.

The quick red fox jumps.

Here ‘red’ and ‘brown’ are both surrounded by similar words. This could mean that ‘red’ and ‘brown’ are similar in some way to the algorithm. It can be done in one of two ways. The algorithm can either look at the surrounding words of our target word and guess which word might fit in, or look at the word itself and guess which other words might make sense surrounding it. The former method is known as CBOW and the latter as Skip-gram. Either way, after going through millions of examples (in my case, tweets), my model was able to answer questions such as ‘What’s the closest word to Vegemite?’ (Nutella, with 87% similarity) and ‘Which is the odd one out amongst Sydney, Melbourne, Auckland and Brisbane?’ (Auckland, it says). It was quite fascinating when I first tried these out, because not only did it make sense of words, it gave answers grounded in the Australian context. Later in the project, we looked into visualising and plotting these word vectors on a graph and seeing where the racist words lie. As such, we hope to classify if and where racist sentiment is prevalent in our model, to better detect racism in Australian social networks.
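Word2vec itself learns its vectors by training a small neural network, but the core idea, that words sharing contexts end up with similar vectors, can be sketched with plain co-occurrence counts and cosine similarity. The toy below uses the two example sentences above; it is an illustration of the context idea, not the actual Word2vec algorithm:

```python
from collections import Counter, defaultdict
from math import sqrt

sentences = [
    ["the", "quick", "brown", "fox", "jumps"],
    ["the", "quick", "red", "fox", "jumps"],
]

# For each word, count the words appearing within a +/-2 window of it.
window = 2
vectors = defaultdict(Counter)
for sent in sentences:
    for i, word in enumerate(sent):
        for j in range(max(0, i - window), min(len(sent), i + window + 1)):
            if j != i:
                vectors[word][sent[j]] += 1

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in set(u) | set(v))
    norm = lambda w: sqrt(sum(c * c for c in w.values()))
    return dot / (norm(u) * norm(v))

# 'red' and 'brown' have identical contexts here, so similarity is 1.0.
print(cosine(vectors["red"], vectors["brown"]))
print(cosine(vectors["red"], vectors["jumps"]))
```

With millions of tweets instead of two sentences, and a learned low-dimensional embedding instead of raw counts, the same similarity query is what answers "What's the closest word to Vegemite?".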

*Sajit Gurubacharya was a recipient of a 2018/19 AMSI Vacation Research Scholarship.*

**By James Evans, University of Western Australia**

A generalised n-gon is a finite configuration of points and lines obeying the following conditions:

- for all k<n there are no ordinary k-gons in the geometry
- there are lots of ordinary n-gons in the geometry
- the system is ‘non-degenerate’

Collectively the generalised n-gons are called generalised polygons.

*An example of a finite generalised 6-gon/hexagon*

They may seem basic, but a lot emerges from these simple axioms. For example, each polygon has a pair of numbers (s, t) such that every line contains s+1 points and every point touches t+1 lines. There are severe restrictions on the values of s and t. An important result is the Feit-Higman theorem: generalised n-gons only exist for n = 2, 3, 4, 6 and 8. These sorts of results show that the polygons have a highly regular but also tightly constrained structure, suggesting that they are rare and exotic objects.

The most compelling and surprising aspect of the polygons is their symmetry. The primary examples, known as the classical polygons, are absurdly symmetric. Their symmetry groups are huge compared to their sizes and obey very strong conditions. But most curiously, their symmetry groups are among the infamous finite simple groups.

There are many mysteries surrounding the polygons and their symmetries. My research focused on this one: have we found all generalised polygons whose symmetry groups are simple groups?

The main method of progress is the following. Start with all groups obeying some condition. Then using the powers of group theory, number theory and more, try to determine which ones are the symmetries of some polygon (without knowing the polygons they are the symmetries of!). In many cases it has been shown that if a polygon obeys certain (even quite weak) conditions, then its symmetry group must be (almost) simple. This is a baffling and exceptional result: why would even weak conditions force the symmetry group to be simple? It is not at all clear what about the axioms, which seemingly have nothing to do with symmetry, could force this. Answering that question is the eventual end goal of this whole effort.

Once the restrictions on the symmetry groups have been found, the next step is to use these to learn about the polygons themselves: to find examples, to derive general facts about their structure, etc. This placement made a small contribution here. We created a program which takes in any finite group and produces all 4-gons on which it ‘acts point-primitively’ (whatever that means). This was used to rule out a troublesome case which had avoided theoretical treatment.

Despite all of the progress that has been made so far, the generalised polygons remain remarkable and puzzling objects. Indeed, it seems that most of what we have learned boils down to ‘the polygons are more remarkable and puzzling than we thought’. But this is probably for the best: if mathematicians were able to completely understand the polygons, then they would no longer have such an interesting and difficult problem to investigate.

*James Evans was a recipient of a 2018/19 AMSI Vacation Research Scholarship.*

**By Thomas Goodwin, University of Technology Sydney**

The brain is made of billions of neurons. Neurons transfer tiny electrical signals to one another, which describes how the brain is activated when doing different tasks. One approximation puts the number of neurons in the brain at 85 billion [1]! A recent open question sitting between neuroscience and behavioural therapy is to explain the mechanism behind these neural structures: how does the brain's structure change and adapt as we learn something new?

One important insight from the late 20th century was neuroplasticity. This describes how the brain changes over time, even in adulthood. As we learn something new, neuron connections can strengthen or weaken over time, and new neural circuits can be formed with the repetition of a movement or behaviour.

By modelling the brain as a directed graph, i.e. a graph with vertices and edges connecting those vertices, with an arrow on each edge giving the direction of information flow, we begin to get an idea of the structure and flow of signals in the brain. A Bratteli diagram is a directed graph with a set of vertices at each level *n* and edges connecting vertices at consecutive levels. We say that each vertex represents a neuron and the edges between vertices are the synaptic connections between neurons.

Using measure theory, a mathematical way of measuring the size of sets (the distance or volume of strange sets), we can assign probabilities to each edge and see how different paths down the Bratteli diagram are analogous to neural circuits being activated in the brain.

By changing these probabilities over time, we can see how this represents neuroplasticity in the brain. By analysing how this measure on neural paths changes over time, we discuss how future research can relate this to modern techniques such as fMRI (imaging brain activity) and how, with some further work, we could simulate brain activity by simulating random walks down Bratteli diagrams.
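That last idea can be sketched in a few lines. The diagram below is a tiny, made-up Bratteli diagram with two vertices per level; the edge probabilities are chosen purely for illustration, not taken from the project:

```python
import random

# A toy Bratteli diagram: a set of vertices at each level, and weighted
# edges from level n to level n+1. Vertices, levels and probabilities
# here are all invented for illustration.
edges = {
    # (level, vertex) -> list of (next_vertex, probability) pairs
    (0, "a"): [("a", 0.7), ("b", 0.3)],
    (1, "a"): [("a", 0.5), ("b", 0.5)],
    (1, "b"): [("a", 0.2), ("b", 0.8)],
    (2, "a"): [("a", 1.0)],
    (2, "b"): [("a", 0.4), ("b", 0.6)],
}

def walk(start="a", levels=3, rng=random):
    """One random path down the diagram: the analogue of a single
    neural circuit being activated."""
    path = [start]
    vertex = start
    for level in range(levels):
        choices, probs = zip(*edges[(level, vertex)])
        vertex = rng.choices(choices, weights=probs)[0]
        path.append(vertex)
    return path

random.seed(0)
print(walk())
```

Changing the probabilities in `edges` between runs is the toy analogue of neuroplasticity: the same diagram, but with strengthened or weakened connections.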

**References**

[1] Robert W Williams and Karl Herrup. The control of neuron number. *Annual review of neuro-science*, 11(1):423–453, 1988.

*Thomas Goodwin was a recipient of a 2018/19 AMSI Vacation Research Scholarship.*

**By Daniel Condon, University of Technology Sydney**

When we first hear terms like “Artificial Intelligence” (AI) and “Deep Learning,” we tend to judge them with a set of preconceived ideas generated by a sensationalist media and a plethora of Hollywood films. It is easy to assume that anything produced by Artificial Intelligence is extremely complicated, and kind of scary – whether that be chess playing computers, driverless cars, or humanoid robots like Sophia, the first robot to receive citizenship of any country. However, I hope to convince you that (at least for now) Artificial Intelligence is surprisingly simple and that there is really nothing that scary going on under the hood in most instances of AI.

By now we are quite used to computers performing basic mathematical operations like multiplying or dividing much faster than us—this doesn’t scare us. We understand that the structure of computers allows them to make calculations like these extremely quickly and that our pocket calculators certainly don’t need to be conscious to find the square root of 158. I would like to argue that we should feel the same way about most of the things AI is doing today.

Let’s take the case of an Artificial Neural Network learning how to identify the emotion shown on a human’s face. Without any specialist knowledge, this seems like a daunting task, and the thought of the computer learning about emotions might give us reason to consider it conscious – however this is simply not the case.

An Artificial Neural Network (ANN), despite originally being inspired by the brain, actually bears almost no resemblance to the brain whatsoever. At heart, it is more like a glorified calculator. We can think of an ANN as a function which converts an input to an output. In this case, it would be converting an image into a label. The way it connects the image to the label is simply to multiply the values of each pixel by some parameters, which we can think of as tuneable dials. (There is also some non-linearity in there, which again sounds complicated but is often as simple as converting any negative values to 0.)

When an ANN first begins its task of predicting an emotion from a picture of a face, it tunes these dials randomly, and so the first guess probably isn’t going to be very good. In order to make a better guess it “learns” by looking at a few thousand pictures of faces. The actual “learning” is done by checking how far off its prediction was from the true label, and tuning the dials so that this distance is minimized. This process is called optimization and is usually taught to students during high school.
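The whole "dials and distance" picture fits in a few lines of code. The sketch below is a single-layer network with the negative-values-to-zero non-linearity, trained by gradient descent on one made-up example; the input, label and learning rate are all invented for illustration, not taken from any real emotion-recognition system:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "face" of 4 pixels and a one-hot label over 2 emotions.
x = rng.random(4)            # input pixels
y = np.array([1.0, 0.0])     # true label

W = rng.random((2, 4))       # the "tuneable dials" (weights)

def predict(W, x):
    # Multiply pixels by the dials, then the simple non-linearity:
    # any negative values become 0.
    return np.maximum(W @ x, 0.0)

# "Learning": repeatedly nudge the dials to shrink the squared distance
# between the prediction and the true label (gradient descent).
lr = 0.1
for _ in range(200):
    p = predict(W, x)
    grad = 2 * np.outer((p - y) * (p > 0), x)  # gradient of the loss w.r.t. W
    W -= lr * grad

print(np.round(predict(W, x), 3))  # the prediction has moved close to [1, 0]
```

Real systems use many layers, many dials and many examples, but the loop above is the same "check the distance, turn the dials" process.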

That’s all there is to it folks. Almost all AI in the world today can be explained in terms like these, and so we shouldn’t think of Jesse the Driverless Car, Deep Blue the Chess Playing Computer or Sophia the Hong Kongese Robot as anything more than glorified calculators with arms and wheels.

*Daniel Condon was a recipient of a 2018/19 AMSI Vacation Research Scholarship.*

**By Rohin Berichon, University of Queensland**

When you talk to people about studying geometry, you often find them recalling memories of the Pythagorean theorem, or cyclic geometry from their high school maths classes. Of course, while this is a form of geometry, it’s far from the modern geometry that is so pervasive in the modern sciences.

Modern geometry is far from the ages of drawing lines on a page with a straight edge and compass. Even so, we still care about the “nice” properties that make that page so useful to the practical sciences. Notions of area, volume, parallelism, and length are all key properties that modern geometry studies. Whilst the paper is now a manifold, and the rulers are complicated functions defined on local neighbourhoods of these structures, the key concepts are still alive.

A key part of my project over the summer was to investigate what we mean when we say a line is “straight”. As simple as this question sounds, there are numerous ways to interpret the word “straight”. This may sound overly semantic, but it’s useful to pin down exactly what we mean when we define something in a new light.

As we’ve come to learn from drawing lines on paper, a straight line is just the line that minimises the distance between two points. Maybe this is the right definition of a straight line. Unfortunately, when we look at straight lines in more interesting spaces, for example the sphere, two points can be connected in one of two ways along a great circle. Either we can take the shorter path around the equator, or we can move in the opposite direction and eventually reach the same place. The latter of these lines is certainly not the shortest path, but it is as straight as the first. This second path, however, is always the shortest path between “close” points on the sphere. It turns out that the notion of local distance minimisation is a good definition of a straight line, and it reflects a lot of the physical laws we expect to be able to derive from our flat paper geometry from before.

We may ask ourselves if there are other intuitive ways to define what we mean by straight. Using the Earth as an example, we already know what a straight line is. That is, if we walk without turning where we are facing, then we are moving in a straight line. This generalises back to the flat sheet of paper very nicely, since if we start walking one way, and don’t turn at all, we will draw out a straight line.

It turns out that these two definitions of straight lines are equivalent to each other. In fact, any reasonable definition of a straight line should always be equivalent to these definitions. Throughout the summer I studied a number of these definitions in trying to learn more about the nature of geometry. In doing so, I learned that seemingly obvious things can be extremely difficult to pin down in a good way.

*Rohin Berichon was a recipient of a 2018/19 AMSI Vacation Research Scholarship.*

**By Jacquie Omnet, University of Queensland**

My project dealt with finding solutions to a particular geometric partial differential equation (PDE) with given Dirichlet boundary values. The geometric nature of this problem meant that in some instances, solutions could be realised as constant mean curvature surfaces with boundary given by the Dirichlet data. To understand what this means, we first need a crash course on surfaces and curvature:

Surfaces are 2-dimensional objects in 3-dimensional space which locally resemble a distorted plane. We restrict our attention to *bounded surfaces*; surfaces which can be contained within a sufficiently large ball. A *closed surface* is a surface which completely separates two regions of space. For example, a sphere is a closed surface; there is no way to pass from the interior to the exterior without intersecting the surface itself. If we cut away the bottom half of the sphere, it no longer has this property. Moreover, the resulting surface now has a boundary curve where the two halves used to meet. We call this type of surface a *surface with boundary*.

We work with a surface mathematically by working with its *parametrisation*. This is a map from a region in the plane onto the surface. A parametrisation is *conformal* if it locally preserves angles.

Seldom do we talk about surfaces without also talking about some notion of curvature. The problem in my project was concerned with the *mean curvature* of a surface. Without getting too technical, we can intuitively think of the mean curvature as a pointwise measure of how much the surface differs from a plane. We say a surface has constant mean curvature (CMC) if the mean curvature takes the same value at every point on the surface, and say it is minimal if this value is zero.

Surfaces can be used to model natural phenomena. For example, if we dip a wire frame into some soap solution, the soap will form a minimal surface with the wire as its boundary. Soap experiments are what inspired the classic *Plateau problem*: for a given boundary curve, does there exist a zero mean curvature surface with that boundary? This is closely related to the problem in my project.

The physical motivation for my project is the following. Given a boundary curve and a value for the CMC, does there exist a surface with these parameters? Formulating this problem mathematically amounts to solving a PDE. My report looks to answer the questions of existence, uniqueness and regularity of solutions to this PDE. Due to the nature of the formulation, solutions don’t necessarily have the physical realisation that motivated the PDE. If, however, a solution is conformal, it can be interpreted as a parametrisation for a surface satisfying the motivating conditions.

*Jacquie Omnet was a recipient of a 2018/19 AMSI Vacation Research Scholarship.*

**By Gavrilo Sipka, University of Queensland**

It’s quite interesting to think about the journey life can take us on. When I entered high school I was a youth filled with hopes and dreams. Looking back, with the simple knowledge I had then, I idealised mathematics and physics, hoping one day to study one of them at university. But as with all first acts in a story, there would turn out to be some challenges on the way to that dream. In my final two years of high school, my grades began to fall drastically. In hindsight I take full responsibility for what happened: I became disinterested, distracted and quite lazy over those two years, leading me to believe that I was no longer capable of even studying at university. So what was a youth to do?

After graduating and seeing the sub-par results I obtained compared to the standards I had once set for myself, I decided to take a gap year and recuperate my energy. I was fortunate enough to just scrape into an arts degree at my university. When I finally started, as with many people, I had this daunting feeling that I wasn’t going to be able to “make it”. But in the year I spent away from study I had rediscovered the importance of self-agency and self-responsibility. The goal was to put my head to the books and transfer into a physics degree. Well, I ended up doing the former, but the latter never came to fruition. In that first year, although in hindsight the material was quite basic, I was able to experience proper mathematics for the first time in my life. Things had never been as clear as they were at that point; I realised that the only thing my heart wanted from there on was to study pure mathematics.

It’s quite fascinating to see how such basic abstract concepts and notions, developed long ago through the might of human ingenuity, were able to evolve into such complex fields and topics. Through my studies I have been able to learn about abstract algebra, representation theory, analytic number theory, analysis and many more topics. So in hindsight, what was I worried about to begin with?

*Gavrilo Sipka was a recipient of a 2018/19 AMSI Vacation Research Scholarship.*

**By Vivien Yeung, University of Wollongong**

The phrase ‘having your cake and eating it’ is quite literal these days, as the cake and pastries industry is worth $2 bn in Australia alone.

For thousands of years, bakers around the world have slaved over how to create the perfect, fluffy, moist cake. But baking cake requires lots of energy in a complex process involving simultaneous heat and mass transfer. To minimise energy consumption when baking in industry, predictive mathematical models are desired.

The science of baking is an area of increasing interest in food engineering. Engineers have conducted experiments on baking different types of cake and have modeled the heat and mass transfer phenomena inside them. One such experiment was conducted by Sakin et al. (2007), in which a very thin sample of white cake was baked (3 mm thick and 220 mm in diameter). It’s more of a pancake, really!

A relatively simple model for the baking of a *thin* sample of cake is the lumped reaction engineering approach (LREA), a drying model developed by chemical engineers (Putranto et al., 2011; Chen & Putranto, 2013). We applied it to the experimental data obtained by Sakin et al. (2007).

What does drying have to do with baking? Well, when heat is applied to the cake, water migrates from the wet core of the cake batter up to the surface, where it then evaporates. We can think of this process as ‘drying’. So although the LREA has predominantly been used to model the *drying* of thin materials (such as drying milk droplets and thin fruit slices (Chen & Putranto, 2013)), the LREA is still appropriate to model baking.

The LREA is a system of *differential equations* that can be solved to obtain moisture content and temperature profiles. Temperature profiles implicitly measure energy requirements for baking because heat is thermal energy being transferred due to differences in temperature.
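To give a flavour of what such a system looks like, here is a schematic pair of coupled equations in the spirit of the LREA: moisture falls through an evaporation term, and temperature relaxes toward the oven temperature while evaporation carries heat away. The coefficients and the simple Euler time-stepping are invented for illustration; the real LREA uses activation energies and fitted parameters, not these numbers:

```python
# Schematic "cake drying" system, NOT the actual LREA equations.
def simulate(X=1.0, T=25.0, oven=180.0, dt=1.0, steps=3600):
    """Step a toy moisture/temperature system forward in time.
    X is moisture content, T is cake temperature in degrees C."""
    k_dry, k_heat, latent = 0.001, 0.002, 10.0   # illustrative coefficients
    history = []
    for _ in range(steps):
        dX = -k_dry * X * max(T - 25.0, 0.0)      # evaporation dries the cake
        dT = k_heat * (oven - T) + latent * dX    # heating minus evaporative cooling
        X += dt * dX
        T += dt * dT
        history.append((X, T))
    return history

final_X, final_T = simulate()[-1]
print(round(final_X, 3), round(final_T, 1))  # cake ends up dry, near oven temperature
```

The moisture and temperature histories traced out by this loop are the toy analogues of the moisture content and temperature profiles the LREA produces.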

But to solve this system in the first place, we need to estimate the *parameters* of the model. It’s a recipe for disaster that involves lots of chemical engineering knowledge, but I promise that it’s not a dry subject!

**References**

[1] Chen, X.D., & Putranto, A. (2013). *Modeling drying processes: A reaction engineering approach*. Cambridge University Press.

[2] Putranto, A., Chen, X.D., & Zhou, W. (2011). Modeling of baking of thin layer of cake using the lumped reaction engineering approach (LREA). *Journal of Food Engineering*, 105(2), 306–311.

[3] Sakin, M., Kaymak-Ertekin, F., & Ilicali, C. (2007). Modeling the moisture transfer during baking of white cake. *Journal of Food Engineering*, 80, 822–831.

*Vivien Yeung was a recipient of a 2018/19 AMSI Vacation Research Scholarship.*

**By James Lawless, University of Wollongong**

The aim of this project was to learn what the area theorem for a black hole is and why it is an open problem.

A black hole is a region in 3D, like a ball, that once you enter you cannot leave; the event horizon is the boundary of this region, the surface of the ball. Figure 1 gives an analogy for the black hole and event horizon in 2D.

*Figure 1. Rock climbing analogy for a black hole*

The area theorem for a black hole states that “the surface area of the event horizon will only increase as time passes by.” The project looked at a proof of the area theorem that makes an extra assumption about the event horizon: that it is smooth. The assumption of a smooth surface rules out many types of black holes: merging black holes, hungry black holes (eating stars!) and rotating black holes.

In Figure 2 we see what smoothness means. At each point on the event horizon (the edge of the hole in the picture) there is a unique way to pick an outward-pointing perpendicular. For rotating black holes there are “cusps” at the poles, like on an apple, and that prevents us from choosing a perpendicular in a continuous way.

*Figure 2. Types of black holes [1]*

The open question is to prove the area theorem for a more realistic black hole that has a surface that is not completely smooth.

In order to come to an understanding of the area theorem over the course of the project, time was spent learning differential geometry. Differential geometry is an area of mathematics that breaks down larger geometrical objects (Earth/sphere) into very small sections (flat ground where you are standing). Lots of new notation caused a headache in the early stages. As one of my lecturers noted, “Differential geometry is that part of mathematics invariant under change of notation.”

The project applied differential geometry to black holes (very interesting! though I’m not sure how useful to the everyday person on Earth). Whether or not black holes are useful, learning mathematics in the context of black holes is fun.

The applications of differential geometry definitely extend beyond black holes. Differential geometry is applied to the study of fluid flow and forces. Most interestingly, forces (gravitational and electromagnetic) can be related to more general notions of curvature. In general relativity curvature gives the shape of Einstein’s space-time, and this curvature is the gravitational force which determines the motion of particles and light.

It turns out that the event horizon of a black hole evolves in time by following the same paths that light does. The hope for future projects is that we can prove the area theorem just by studying the behaviour of light rays near the black hole without requiring any smoothness. This would allow us to understand the evolution of complicated scenarios involving black holes, such as multiple black holes, black hole mergers and perhaps even the process of collapse.

**References**

[1] Aeaechan #1 Royalty Free Photos, Pictures, Images And Stock Photography. (n.d.). Retrieved from https://www.123rf.com/profile_aeaechan

*James Lawless was a recipient of a 2018/19 AMSI Vacation Research Scholarship.*

**By Rumi Salazar, University of New South Wales**

There is an intimate connection between an ancient problem posed by the Greeks and a field of mathematics called algebraic number theory. A particular problem that the ancient Greeks were interested in was: What regular polygons could be constructed using only a compass and straightedge (that is, a ruler with no markings)? They could construct triangles, squares, pentagons, and knew at least in principle how to construct a regular polygon with double the number of sides of any given regular polygon. However, they did not know whether it was possible in general to construct any regular polygon.

This problem from antiquity was left unsolved until a (now famous) young mathematician named Carl Friedrich Gauss came along and proved that the 17-gon is constructible, and provided a partial solution to the general problem in the early 1800s. The full solution was completed by another mathematician named Pierre Wantzel in 1837.

We will come back to their solution shortly, as we must first introduce the notion of a Fermat prime! In the 1600s, a mathematician and lawyer named Pierre de Fermat studied numbers of the form 2^n+1 (where n=2^k), which are now called Fermat numbers. He noticed that the first five of these numbers, that is, 3, 5, 17, 257, and 65537, are prime numbers. So, he conjectured that all Fermat numbers are prime. Funnily enough, it was shown by Leonhard Euler (another famous mathematician) in 1732 that the very next Fermat number is not prime. It is still not known whether or not there are any other Fermat primes (Fermat numbers that are prime).

Having introduced the notion of a Fermat prime, we return to Gauss and Wantzel’s solution to the problem, which is known as the Gauss-Wantzel Theorem. It states: A regular polygon with n sides is constructible if and only if n is a power of 2 multiplied by any number of distinct Fermat primes.
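The theorem translates directly into a short computation. The sketch below checks constructibility by stripping out the power of 2 and then dividing by each known Fermat prime at most once:

```python
# The five known Fermat primes.
FERMAT_PRIMES = [3, 5, 17, 257, 65537]

def is_constructible(n):
    """True iff a regular n-gon is compass-and-straightedge constructible,
    per the Gauss-Wantzel theorem (for n >= 3)."""
    if n < 3:
        return False
    while n % 2 == 0:        # strip off the power of 2
        n //= 2
    for p in FERMAT_PRIMES:
        if n % p == 0:
            n //= p
            if n % p == 0:   # a repeated Fermat prime is not allowed
                return False
    return n == 1

print([k for k in range(3, 21) if is_constructible(k)])
# → [3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20]
```

Note which numbers are missing: 7, 11, 13 and 19 are primes that are not Fermat primes, while 9 = 3 × 3 and 18 = 2 × 3 × 3 repeat a Fermat prime.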

So, despite there only being five known Fermat primes, they interestingly show up in the above theorem! The proof of this theorem involves the theory of field extensions in algebraic number theory. In particular, it turns out that the question of constructible polygons posed by the ancient Greeks boils down to whether or not the solutions of certain types of polynomial equations can be expressed using only the operations of addition, subtraction, multiplication, division, and square roots.

Using the exact same theory of field extensions, one can also show that it is impossible to double the cube, trisect the angle, and square the circle (construct a square with the same area as a circle) using only compass and straightedge constructions!

This problem captures, at least in part, the beauty of mathematics. The notion of mathematical beauty deserves many books in its own right, but specifically what I mean is the astounding fact that some easily understood problems in one area of mathematics can often turn out to have very deep solutions in a seemingly disparate area of mathematics. It is the connections between different fields of mathematics that constitute mathematical beauty.

For those interested in further reading, I have provided the following links:

- https://www.cut-the-knot.org/impossible/sq-circle.shtml
- https://www.math3ma.com/blog/what-is-galois-theory-anyway
- http://mathgardenblog.blogspot.com/2013/10/basic-compass-straightedge-construction.html
- https://mathsbyagirl.wordpress.com/2016/03/18/fermat-primes/
- https://terrytao.wordpress.com/2011/08/10/a-geometric-proof-of-the-impossibility-of-angle-trisection-by-straightedge-and-compass/
- https://terrytao.wordpress.com/2014/11/28/245a-supplement-1-a-little-bit-of-algebraic-number-theory-optional/

*Rumi Salazar was a recipient of a 2018/19 AMSI Vacation Research Scholarship.*