**By Kshitija Vaidya, Monash University**

The Hadamard conjecture is one of those problems that are straightforward to state but tremendously difficult to solve; it is among the longest-standing open problems in mathematics. To understand what it says, we first look at Hadamard matrices.

A matrix is an array of numbers. For our purposes, we need only worry about square matrices composed of ±1s. We can think of these as n by n grids made from black and white boxes, where the black boxes represent 1s and the white boxes -1s. The size of the square grid, n, is called the order of the matrix. An n by n grid of black and white boxes is Hadamard if, upon comparing any two distinct rows in the grid, we find that the number of positions in which the colouring differs is equal to the number of positions in which it is the same. Hadamard matrices have attracted interest since the nineteenth century, when they were first studied by James Sylvester and later by Jacques Hadamard. The Hadamard conjecture states that a Hadamard grid exists for every order 4n.
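If you'd like to try this definition out, here is a quick Python sketch of the row-comparison test (my own illustration, not part of the original project):

```python
# A grid is Hadamard when any two distinct rows agree in exactly
# as many positions as they disagree.
def is_hadamard(rows):
    n = len(rows)
    for i in range(n):
        for j in range(i + 1, n):
            agreements = sum(a == b for a, b in zip(rows[i], rows[j]))
            if agreements != n - agreements:  # must agree in exactly half the places
                return False
    return True

# The order-2 Hadamard grid: black boxes = 1, white boxes = -1.
H2 = [[1, 1],
      [1, -1]]
```

Here `H2` is the 2 by 2 grid that starts the Sylvester construction: its two rows agree in one position and differ in one.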

Mathematicians have constructed various infinite families of Hadamard matrices. We briefly explore the Sylvester construction [3, p.11]. This comprises the following sequence of Hadamard matrices whose orders cover all powers of 2:

At each step, the new grid is obtained by pasting three versions of the previous grid together with an ‘inverted’ version of the previous grid (where ‘inverting’ refers to interchanging the black and white boxes). Since we started with a Hadamard grid, the grids generated via this repeated pasting process are always Hadamard. You can check this yourself for the first few steps!
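The pasting process itself fits in a few lines of Python (a sketch of my own, doubling the order three times starting from the trivial 1 by 1 grid):

```python
def invert(grid):
    """Interchange the black (1) and white (-1) boxes."""
    return [[-x for x in row] for row in grid]

def sylvester_step(H):
    """Paste three copies of H together with one inverted copy."""
    top = [row + row for row in H]                           # [ H | H ]
    bottom = [row + inv for row, inv in zip(H, invert(H))]   # [ H | inverted H ]
    return top + bottom

H = [[1]]             # the trivial 1x1 Hadamard grid
for _ in range(3):    # build orders 2, 4 and 8
    H = sylvester_step(H)
```

Each pass stacks two copies of the grid above a copy and an inverted copy, exactly as described, and the result stays Hadamard at every step.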

The Sylvester construction is one of many families of Hadamard matrices. Our AMSI project focused on a particularly significant family, called cocyclic Hadamard matrices [2]. Most classical constructions of Hadamard matrices (including Sylvester’s) are known to be cocyclic [3]. This strongly suggests that cocyclic Hadamard matrices may provide a uniform approach to the Hadamard conjecture. Certain algebraic objects called cocycles are at the core of cocyclic Hadamard matrices; 2-cocycles are naturally represented as matrices. This provides a link between Hadamard matrices and cocycles. In our project, we studied Dane Flannery’s work on computing cocycles and their associated matrices [1].

Throughout our project, we were able to appreciate how rich the mathematics relating to Hadamard matrices really is. Their real-world applications are equally rich. Hadamard matrices are used extensively in coding theory and cryptography [3]. The Mariner and Voyager probes, which were launched into space to study the planets of our solar system, use codes based on a Sylvester Hadamard matrix to transmit images to Earth [4, p. 432]. Indeed, while the Hadamard conjecture is central to the pure mathematician’s interest in research about Hadamard matrices, the wide-ranging applications of this research render its scope truly astronomical.

[1] Flannery, D. L. (1996). Calculation of cocyclic matrices. *Journal of Pure and Applied Algebra*, *112*(2), 181-190.

[2] Horadam, K. J., & de Launey, W. (1993). Cocyclic development of designs. *Journal of Algebraic Combinatorics*, *2*(3), 267-290.

[3] Horadam, K. J. (2012). *Hadamard matrices and their applications*. Princeton University Press.

[4] Seberry, J., & Yamada, M. (1992). Hadamard matrices, sequences, and block designs. *Contemporary design theory: a collection of surveys*, 431-560.

Image of Jupiter: https://voyager.jpl.nasa.gov/galleries/images-voyager-took/jupiter/#gallery-6 (Courtesy NASA/JPL-Caltech)

*Kshitija Vaidya was a recipient of a 2018/19 AMSI Vacation Research Scholarship.*

**By Mitchell Harris, University of New England**

My journey through mathematics was a convoluted one, and I thought I would use this space to say a little about it.

In 2011, I was told by some of my high school mathematics teachers that I should study general mathematics instead of advanced, if any at all. And they may even have had a point, based on my performance and motivation at school over the preceding four years. Nonetheless, with the encouragement of one particularly supportive teacher, Dr. Fowler, I sat 2-unit Mathematics in Years 11 and 12… and it was a slog! Indeed, it turned out to be my lowest mark, well below English and Ancient History, in which my marks were mediocre anyway.

In 2014 I was accepted into the Bachelor of Science / Bachelor of Arts double degree, with majors in Biology and Philosophy respectively. This involved taking the elementary first-year mathematics unit common to every science major. I was surprised to find this more interesting than biology, and so I made the rash decision to change into the mathematics major. I took both of the advanced first-year mathematics units concurrently in the second semester, and found it totally overwhelming.

Then I made the decision to *drop out* of mathematics, and enrol in a psychology degree. I still can’t say for sure whether this was because I found psychology interesting, or mathematics terrifying. However, for the next year I spent more time working on mathematics for my own sake than on my actual coursework. The curiosity that attracted me into mathematics was still there, and so I changed into the mathematics major for a second time.

From here, things went better. I began to do well in my mathematics courses. And before long, I found myself tutoring the 2-unit mathematics courses that I once struggled with. I even had one high-school student who was doubting themselves say something along the lines of “You’re just good at this and I’m not”. I was able to truthfully say to them that they were doing better than I did when I was at their stage! I’ve since tutored for first-year university subjects as well. Now, in 2019, I will be commencing my Honours in Mathematics, and have a love for the subject that you couldn’t have explained to me when I first started.

I guess the point that I’m making is something like this. What is important is not that you are naturally talented at mathematics, but that you are interested in it, that you are willing to work hard at it and, eventually, that you come to love it.

My sincere thanks to my supervisor, Dr. Thomas Kalinowski, for his help and guidance throughout this project.

*Mitchell Harris was a recipient of a 2018/19 AMSI Vacation Research Scholarship.*

**By Sajit Gurubacharya, University of Western Sydney**

Like so many things that are being automated nowadays, understanding our language is now a job for computers. It’s funny, because only a few decades ago programmers had to code in assembly languages, essentially talking directly to computers, and now look how the tables have turned!

The big tech companies have been doing this for quite some time now. Siri, Cortana and Alexa are just a few examples. They recognise your speech, convert it into text and then make sense of that text. Similarly, when you’re Googling a question, it’s able to understand the question and give you (more or less) accurate answers directly, or at least guide you in the right direction by understanding the text present in the millions of web pages. How do they do it?

My project addressed a similar question, but on a smaller scale: how can we detect racism in Australian social networks? Instead of analysing web pages, I analysed tweets using a machine learning algorithm called Word2vec. As the name suggests, it converts words into vectors of real numbers. But how does one even start putting a value on a word? Using its length, part of speech or frequency? It starts with finding the meaning of the word, not from a dictionary but rather from the other words that surround it in a sentence. In other ‘words’, the algorithm looks at the context of the word and seeks meaning in it.

Example:

The quick brown fox jumps.

The quick red fox jumps.

Here ‘red’ and ‘brown’ are both surrounded by similar words. This could mean that ‘red’ and ‘brown’ are similar in some way to the algorithm. This can be done in one of two ways. The algorithm can either look at the surrounding words of our target word and guess which word might fit in, or just look at the word itself and guess which other words might make sense surrounding it. The former method is known as CBOW and the latter as Skip-gram. Either way, after going through millions of cases (in my case, tweets), my model was able to answer questions such as ‘What’s the closest word to Vegemite?’—Nutella, with 87% similarity. Or ‘Which is the odd one out amongst Sydney, Melbourne, Auckland and Brisbane?’—Auckland, it says. It was quite fascinating when I first tried these out, because not only did it make sense of words, it gave answers to questions based in the Australian context.

Later on in the project, we looked into visualising and plotting these word vectors on a graph and seeing where the racist words lie. In this way, we hope to classify if and where racist sentiment is prevalent in our model, to better detect racism in Australian social networks.
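Word2vec learns its vectors with a neural network, but the underlying intuition, that words sharing contexts end up with similar vectors, can be sketched with plain co-occurrence counts (a toy illustration of mine, not the actual algorithm):

```python
from collections import Counter
import math

sentences = [
    "the quick brown fox jumps".split(),
    "the quick red fox jumps".split(),
]

# Build a context-count vector for each word: how often every other
# word appears within a window of 2 positions around it.
WINDOW = 2
vectors = {}
for sent in sentences:
    for i, word in enumerate(sent):
        ctx = vectors.setdefault(word, Counter())
        for j in range(max(0, i - WINDOW), min(len(sent), i + WINDOW + 1)):
            if j != i:
                ctx[sent[j]] += 1

def cosine(u, v):
    """Similarity of two count vectors: 1.0 means identical direction."""
    dot = sum(u[w] * v[w] for w in u)
    return dot / (math.sqrt(sum(x * x for x in u.values())) *
                  math.sqrt(sum(x * x for x in v.values())))
```

In these two sentences ‘red’ and ‘brown’ have identical contexts, so their vectors come out more similar than, say, those of ‘red’ and ‘fox’.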

*Sajit Gurubacharya was a recipient of a 2018/19 AMSI Vacation Research Scholarship.*

**By James Evans, University of Western Australia**

A generalised n-gon is a finite configuration of points and lines obeying the following conditions:

- for all k<n there are no ordinary k-gons in the geometry
- there are lots of ordinary n-gons in the geometry
- the system is ‘non-degenerate’

Collectively the generalised n-gons are called generalised polygons.

*An example of a finite generalised 6-gon/hexagon*

They may seem basic, but a lot emerges from these simple axioms. For example, each polygon has a pair of numbers, (s,t) such that every line contains s+1 points and every point touches t+1 lines. There are severe restrictions on the values of s and t. An important result is the Feit-Higman theorem: generalised n-gons only exist for n = 2, 3, 4, 6 and 8. These sorts of results show that the polygons have a highly regular but also tightly constrained structure, suggesting that they are rare and exotic objects.
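To make the (s,t) regularity concrete: the generalised 3-gons are exactly the projective planes, and the smallest of these, the Fano plane, can be checked in a few lines of Python (my own example, not from the project):

```python
# The Fano plane: the smallest projective plane, a generalised
# 3-gon with (s, t) = (2, 2).
points = range(1, 8)
lines = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6},
         {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]

# Every line contains s + 1 = 3 points...
line_sizes = {len(l) for l in lines}
# ...and every point touches t + 1 = 3 lines.
lines_per_point = {p: sum(p in l for l in lines) for p in points}
```

On top of that, any two of its lines meet in exactly one point, a first taste of the "highly regular but tightly constrained" structure described above.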

The most compelling and surprising aspect of the polygons is their symmetry. The primary examples, known as the classical polygons, are absurdly symmetric. Their symmetry groups are huge compared to their sizes and obey very strong conditions. But most curiously, their symmetry groups are among the infamous finite simple groups.

There are many mysteries surrounding the polygons and their symmetries. My research focused on this one: have we found all generalised polygons whose symmetry groups are simple groups?

The main method of progress is the following. Start with all groups obeying some condition. Then using the powers of group theory, number theory and more, try to determine which ones are the symmetries of some polygon (without knowing the polygons they are the symmetries of!). In many cases it has been shown that if a polygon obeys certain (even quite weak) conditions, then its symmetry group must be (almost) simple. This is a baffling and exceptional result: why would even weak conditions force the symmetry group to be simple? It is not at all clear what about the axioms, which seemingly have nothing to do with symmetry, could force this. Answering that question is the eventual end goal of this whole effort.

Once the restrictions on the symmetry groups have been found, the next step is to use these to learn about the polygons themselves: to find examples, to derive general facts about their structure, etc. This placement made a small contribution here. We created a program which takes in any finite group and produces all 4-gons which it ‘acts on point-primitively’ (whatever that means). This was used to rule out a troublesome case which had avoided theoretical treatment.

Despite all of the progress that has been made so far, the generalised polygons remain remarkable and puzzling objects. Indeed, it seems that most of what we have learned boils down to ‘the polygons are more remarkable and puzzling than we thought’. But this is probably for the best: if mathematicians were able to completely understand the polygons, then they would no longer have such an interesting and difficult problem to investigate.

*James Evans was a recipient of a 2018/19 AMSI Vacation Research Scholarship.*

**By Thomas Goodwin, University of Technology Sydney**

The brain is made of billions of neurons. Neurons transfer tiny electrical signals to one another, and these signals describe how the brain is activated when we perform different tasks. One estimate puts the number of neurons in the brain at 85 billion [1]! A recent open question spanning neuroscience and behavioural therapy is to explain the mechanism by which neural structures change. How does the brain’s structure change and adapt as we learn something new?

One important insight gained in the late 20th century was neuroplasticity. This describes how the brain changes over time, even in adulthood. As we learn something new, connections between neurons can strengthen or weaken, and new neural circuits can be formed with the repetition of a movement or behaviour.

Using a directed graph, i.e. a graph with vertices and edges connecting these vertices, with arrows on each edge giving the direction of information flow, we begin to get an idea of the structure and flow of signals in the brain. A Bratteli diagram is a directed graph with a set of vertices at each level *n* and edges connecting vertices at consecutive levels. We say that each vertex represents a neuron and that the edges between vertices are the synaptic connections between neurons.

Using measure theory, a mathematical way of measuring the size of sets (the distance or volume of strange sets), we can assign probabilities to each edge and see how different paths down the Bratteli diagram are analogous to neural circuits being activated in the brain.

By changing these probabilities over time, we can see how this represents neuroplasticity in the brain. By analysing how this measure on neural paths changes over time, we discuss how future research can relate it to modern fMRI techniques (imaging brain activity) and how, with some further work, we could simulate brain activity by simulating random walks down Bratteli diagrams.
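As a hypothetical sketch (the diagram and its probabilities here are invented for illustration), simulating such a random walk down a Bratteli diagram takes only a few lines of Python:

```python
import random

# A toy Bratteli diagram: at each level the walker sits at a vertex,
# and edge probabilities say which vertex at the next level it moves to.
transitions = [
    {0: {0: 0.5, 1: 0.5}},                        # level 0 -> level 1
    {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}},   # level 1 -> level 2
]

def random_walk(transitions, rng):
    """Follow one path down the diagram, starting from vertex 0."""
    path = [0]
    for level in transitions:
        probs = level[path[-1]]
        r, cum = rng.random(), 0.0
        for vertex, p in probs.items():
            cum += p
            if r <= cum:
                path.append(vertex)
                break
    return path

rng = random.Random(0)
paths = [random_walk(transitions, rng) for _ in range(1000)]
```

Strengthening a synaptic connection corresponds to raising the probability on its edge, so re-running the walk with updated probabilities mimics neuroplasticity.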

**References**

[1] Robert W. Williams and Karl Herrup. The control of neuron number. *Annual Review of Neuroscience*, 11(1):423–453, 1988.

*Thomas Goodwin was a recipient of a 2018/19 AMSI Vacation Research Scholarship.*

**By Daniel Condon, University of Technology Sydney**

When we first hear terms like “Artificial Intelligence” (AI) and “Deep Learning,” we tend to judge them with a set of preconceived ideas generated by a sensationalist media and a plethora of Hollywood films. It is easy to assume that anything produced by Artificial Intelligence is extremely complicated, and kind of scary – whether that be chess-playing computers, driverless cars, or humanoid robots like Sophia, the first robot to receive citizenship of any country. However, I hope to convince you that (at least for now) Artificial Intelligence is surprisingly simple and that there is really nothing that scary going on under the hood in most instances of AI.

By now we are quite used to computers performing basic mathematical operations like multiplying or dividing much faster than us—this doesn’t scare us. We understand that the structure of computers allows them to make calculations like these extremely quickly and that our pocket calculators certainly don’t need to be conscious to find the square root of 158. I would like to argue that we should feel the same way about most of the things AI is doing today.

Let’s take the case of an Artificial Neural Network learning how to identify the emotion shown on a human’s face. Without any specialist knowledge, this seems like a daunting task, and the thought of the computer learning about emotions might give us reason to consider it conscious – however this is simply not the case.

An Artificial Neural Network (ANN), despite originally being inspired by the brain, actually bears almost no resemblance to the brain whatsoever. At heart, it is more like a glorified calculator. We can think of an ANN as a function which converts an input to an output. In this case, it would be converting an image into a label. The way it connects the image to the label is to simply multiply the values of each pixel by some parameters, which we can think of as tuneable dials. (There is also some non-linearity in there, which again sounds complicated but is often as simple as converting any negative values to 0.)

When an ANN first begins its task of predicting an emotion from a picture of a face, these dials are tuned randomly, and so the first guess probably isn’t going to be very good. In order to make a better guess it “learns” by looking at a few thousand pictures of faces. The actual “learning” is done by checking how far off its prediction was from the true label, and tuning the dials so that this distance is minimised. This process is called optimisation and is usually taught to students during high school.
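A hypothetical one-dial example of mine makes the tuning concrete (real networks just have millions of dials):

```python
# A one-dial "network": predict y = dial * x, and learn the dial by
# nudging it downhill on the squared distance between prediction and truth.
data = [(1.0, 3.0), (2.0, 6.0), (4.0, 12.0)]  # the true dial setting is 3

dial = 0.0   # start from a (bad) arbitrary guess
rate = 0.01  # how hard to turn the dial at each step
for _ in range(2000):
    for x, y in data:
        error = dial * x - y          # how far off this prediction is
        dial -= rate * 2 * error * x  # gradient of the squared error
```

After a couple of thousand nudges the dial settles near 3, the value that makes every prediction match its label. Nothing conscious is happening; it is arithmetic repeated very quickly.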

That’s all there is to it, folks. Almost all AI in the world today can be explained in terms like these, and so we shouldn’t think of Jesse the Driverless Car, Deep Blue the Chess-Playing Computer or Sophia the Hong Kongese Robot as anything more than glorified calculators with arms and wheels.

*Daniel Condon was a recipient of a 2018/19 AMSI Vacation Research Scholarship.*

**By Rohin Berichon, University of Queensland**

When you talk to people about studying geometry, you often find them recalling memories of the Pythagorean theorem, or circle geometry from their high school maths classes. Of course, while this is a form of geometry, it’s far from the modern geometry that is so pervasive in the sciences today.

Modern geometry is far from the ages of drawing lines on a page with a straight edge and compass. Even so, we still care about the “nice” properties that make that page so useful to the practical sciences. Notions of area, volume, parallelism, and length are all key properties that modern geometry studies. Whilst the paper is now a manifold, and the rulers are complicated functions defined on local neighbourhoods of these structures, the key concepts are still alive.

A key part of my project over the summer was to investigate what we mean when we say a line is “straight”. As simple as this question sounds, there are numerous ways to interpret the word “straight”. This may sound overly semantic, but it’s useful to pin down exactly what we mean when we define something in a new light.

As we’ve come to learn from drawing lines on paper, a straight line is just the line that minimises the distance between two points. Maybe this is the right definition of straight lines. Unfortunately, when we want to look at straight lines in more interesting spaces, for example the sphere, two points can be joined by a ‘straight line’ in one of two ways. Either we can take the shortest path around the equator, or we can move in the opposite direction and eventually reach the same place. The latter of these lines is certainly not the shortest path, but it is as straight as the first. This second path, however, is always the shortest path between sufficiently ‘close’ points along it. It turns out that the notion of local distance minimisation is a good definition of a straight line, and it reflects a lot of the physical laws we expect to be able to derive from our flat paper geometry from before.
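To put numbers on this (assuming a perfectly spherical Earth, with only rough city coordinates), here are the two ‘straight’ great-circle paths between Sydney and Perth in Python:

```python
import math

R = 6371.0  # Earth's radius in km (an approximation)

def central_angle(lat1, lon1, lat2, lon2):
    """Angle between two points on the sphere, via the spherical law of cosines."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    return math.acos(math.sin(phi1) * math.sin(phi2) +
                     math.cos(phi1) * math.cos(phi2) * math.cos(dlon))

# Sydney to Perth, roughly.
theta = central_angle(-33.87, 151.21, -31.95, 115.86)
short_way = R * theta                  # the geodesic a plane would fly
long_way = R * (2 * math.pi - theta)   # still 'straight', but not shortest
```

The short way comes out around 3,300 km; the long way round is the rest of the same great circle, locally straight at every point even though it is far from the shortest route.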

We may ask ourselves if there are other intuitive ways to define what we mean by straight. Using the Earth as an example, we already know what a straight line is. That is, if we walk without turning where we are facing, then we are moving in a straight line. This generalises back to the flat sheet of paper very nicely, since if we start walking one way, and don’t turn at all, we will draw out a straight line.

It turns out that these two definitions of straight lines are equivalent to each other. In fact, any reasonable definition of a straight line should always be equivalent to these definitions. Throughout the summer I studied a number of these definitions in trying to learn more about the nature of geometry. In doing so, I learned that seemingly obvious things can be extremely difficult to pin down in a good way.

*Rohin Berichon was a recipient of a 2018/19 AMSI Vacation Research Scholarship.*

**By Jacquie Omnet, University of Queensland**

My project dealt with finding solutions to a particular geometric partial differential equation (PDE) with given Dirichlet boundary values. The geometric nature of this problem meant that in some instances, solutions could be realised as constant mean curvature surfaces with boundary given by the Dirichlet data. To understand what this means, we first need a crash course on surfaces and curvature:

Surfaces are 2-dimensional objects in 3-dimensional space which locally resemble a distorted plane. We restrict our attention to *bounded surfaces*: surfaces which can be contained within a sufficiently large ball. A *closed surface* is a surface which completely separates two regions of space. For example, a sphere is a closed surface; there is no way to pass from the interior to the exterior without intersecting the surface itself. If we cut away the bottom half of the sphere, it no longer has this property. Moreover, the resulting surface now has a boundary curve where the two halves used to meet. We call this type of surface a *surface with boundary*.

We work with a surface mathematically by working with its *parametrisation*. This is a map from a region in the plane onto the surface. A parametrisation is *conformal* if it locally preserves angles.

Seldom do we talk about surfaces without also talking about some notion of curvature. The problem in my project was concerned with the *mean curvature* of a surface. Without getting too technical, we can intuitively think of the mean curvature as a pointwise measure of how much the surface differs from a plane. We say a surface has constant mean curvature (CMC) if the mean curvature takes the same value at every point on the surface, and say it is minimal if this value is zero.

Surfaces can be used to model natural phenomena. For example, if we dip a wire frame into some soap solution, the soap will form a minimal surface with the wire as its boundary. Soap experiments are what inspired the classic *Plateau problem*: for a given boundary curve, does there exist a zero mean curvature surface with that boundary? This is closely related to the problem in my project.

The physical motivation for my project is the following. Given a boundary curve and a value for the CMC, does there exist a surface with these parameters? Formulating this problem mathematically amounts to solving a PDE. My report looks to answer the questions of existence, uniqueness and regularity of solutions to this PDE. Due to the nature of the formulation, solutions don’t necessarily have the physical realisation that motivated the PDE. If, however, a solution is conformal, it can be interpreted as a parametrisation for a surface satisfying the motivating conditions.
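For the curious, one standard way this PDE is written in the literature (the details may differ from those in my report) is the following: for a map $X \colon \Omega \to \mathbb{R}^3$ with prescribed mean curvature $H$ and Dirichlet data $g$,

```latex
\Delta X = 2H \, X_u \times X_v \quad \text{in } \Omega,
\qquad
X = g \quad \text{on } \partial\Omega .
```

When a solution $X$ is also conformal, so that $|X_u| = |X_v|$ and $X_u \cdot X_v = 0$, it parametrises a surface of constant mean curvature $H$ spanning the boundary curve.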

*Jacquie Omnet was a recipient of a 2018/19 AMSI Vacation Research Scholarship.*

**By Gavrilo Sipka, University of Queensland**

It’s quite interesting to think about the journey life can take us on. When I entered high school I was a young youth filled with hopes and dreams. Looking back, even with the simple knowledge of them I had, I idealised mathematics and physics, hoping one day to study one of them at university. But as with all first acts in a story, there turned out to be some challenges on the way to that dream. My final two years of high school saw my grades begin to fall drastically. In hindsight I take full responsibility for the events that occurred. I became disinterested, distracted and quite lazy over those two years, leading me to believe that I was no longer capable of studying at university at all. So what was a youth to do?

After graduating and seeing the sub-par results I obtained compared to the standards I had once set for myself, I decided to take a gap year and recuperate my energy. I was fortunate enough to just scrape into an arts degree at my university. When I finally started, as with many people, I had this daunting feeling that I wasn’t going to be able to “make it”. But in the year I had spent away from study, I had found a renewed appreciation of the importance of self-agency and self-responsibility. The goal was to put my head to the books and transfer into a physics degree. Well, I ended up doing the former, but the latter never came to fruition. In that first year, although in hindsight it was quite basic, I was able to experience proper mathematics for the first time in my life. Things had never been as clear as they were at that point, and I realised that the only thing my heart wanted from there on was to study pure mathematics.

It’s quite fascinating to see how such basic abstract concepts and notions, developed long ago through the might of human ingenuity, were able to evolve into such complex fields and topics. Through my studies I have been able to learn about things such as abstract algebra, representation theory, analytic number theory, analysis and many more topics. So, in hindsight: what was I worried about to begin with?

*Gavrilo Sipka was a recipient of a 2018/19 AMSI Vacation Research Scholarship.*

**By Vivien Yeung, University of Wollongong**

The phrase ‘having your cake and eating it’ is quite literal these days, as the cake and pastries industry is worth $2 billion in Australia alone.

For thousands of years, bakers around the world have slaved over how to create the perfect, fluffy, moist cake. But baking cake requires lots of energy in a complex process involving simultaneous heat and mass transfer. To minimise energy consumption when baking in industry, predictive mathematical models are desired.

The science of baking is an area of increasing interest in food engineering. Engineers have conducted experiments on baking different types of cake and have modelled the heat and mass transfer phenomena inside them. One such experiment was conducted by Sakin et al. (2007), in which a very thin sample of white cake was baked (3 mm thick and 220 mm in diameter). It’s more of a pancake, really!

A relatively simple model for the baking of a *thin* sample of cake is the lumped reaction engineering approach (LREA), a drying model developed by chemical engineers (Putranto et al., 2011; Chen & Putranto, 2013). We applied it to the experimental data obtained by Sakin et al. (2007).

What does drying have to do with baking? Well, when heat is applied to the cake, water migrates from the wet core of the cake batter up to the surface, where it then evaporates. We can think of this process as ‘drying’. So although the LREA has predominantly been used to model the *drying* of thin materials (such as milk droplets and thin fruit slices (Chen & Putranto, 2013)), it is still appropriate for modelling baking.

The LREA is a system of *differential equations* that can be solved to obtain moisture content and temperature profiles. Temperature profiles implicitly measure the energy requirements of baking, because heat is thermal energy transferred due to differences in temperature.

But to solve this system in the first place, we need to estimate the *parameters* of the model. It’s a recipe for disaster that involves lots of chemical engineering knowledge, but I promise that it’s not a dry subject!
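The published LREA involves relationships I won’t reproduce here, but a hypothetical toy version, first-order drying coupled to Newtonian heating with evaporative cooling, shows the general shape of such a lumped model (every parameter value below is invented for illustration):

```python
# Illustrative lumped baking model (NOT the published LREA):
# first-order moisture loss coupled to a heat balance, solved by Euler steps.
T_oven = 175.0  # oven temperature, deg C (assumed)
k = 0.002       # drying rate constant, 1/s (assumed)
h = 0.005       # lumped heat-transfer coefficient, 1/s (assumed)
L = 40.0        # evaporative cooling strength, deg C per unit moisture (assumed)

X, T = 1.2, 25.0          # initial moisture (kg water / kg solids) and temperature
dt = 1.0                  # time step, s
history = []
for step in range(1800):  # 30 minutes of baking
    dXdt = -k * X                        # moisture evaporates
    dTdt = h * (T_oven - T) + L * dXdt   # heating minus evaporative cooling
    X += dt * dXdt
    T += dt * dTdt
    history.append((X, T))
```

Stepping this forward gives a falling moisture curve and a rising temperature profile, the two outputs a real lumped drying model is solved for; the parameter estimation mentioned above is what pins the rate constants to experimental data.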

**References**

[1] Chen, X. D., & Putranto, A. (2013). *Modeling drying processes: A reaction engineering approach*. Cambridge University Press.

[2] Putranto, A., Chen, X. D., & Zhou, W. (2011). Modeling of baking of thin layer of cake using the lumped reaction engineering approach (LREA). *Journal of Food Engineering*, 105(2), 306-311.

[3] Sakin, M., Kaymak-Ertekin, F., & Ilicali, C. (2007). Modeling the moisture transfer during baking of white cake. *Journal of Food Engineering*, 80, 822–831.

*Vivien Yeung was a recipient of a 2018/19 AMSI Vacation Research Scholarship.*