Arthur and the eclipse

by @ulaulaman about #ArthurEddington #AlbertEinstein #GeneralRelativity
On 17 November 1922, Albert Einstein, accompanied by his wife, arrived in Kobe (see the report of the visit published in the AAPPS Bulletin - pdf). Here he was surrounded by journalists and fans: while the former asked him questions, the latter were on the hunt for an autograph from one of the most famous physicists and scientists of the time. Einstein, as written by Naoki Urasawa in the opening pages of Billy Bat #9, replied to a specific question about why he won the Nobel Prize for the photoelectric effect and not for the theory of special and general relativity:
Because, that can't be verified.
But the mangaka committed a chronological mistake, probably caused by Urasawa's need to focus on the innovation represented by Einstein's theories: the point, in fact, is that just three years earlier, on 6 November 1919, during a meeting of the Royal Society and the Royal Astronomical Society, Arthur Eddington had presented the results of the celestial observations made in mid-spring of that year. The interest and the importance of the discovery were such that the next day the Times headlined:
Revolution in Science: New Theory of the Universe: Newton's Ideas Overthrown, by Joseph John Thomson:
Our conceptions about the structure of the universe must be changed in a fundamental way
So, when Einstein went to Japan, the evidence for the correctness of his theory had already been around for three years.

The hobbit, the dragon, and the green knight

by @ulaulaman about #TheHobbit #JRRTolkien #Smaug #mathematics

Gandalf and Bilbo by David Wenzel
Peter Jackson's Hobbit movie trilogy has come to its conclusion, so this could be a good moment to write a little, funny, curious post about science and Tolkien's novel. We start with a paper published last year(1) (2013) in which the researchers find the cause of the triumph of good over evil:
Bilbo Baggins, a hobbit, lives in a hole in the ground but with windows, and when he is first encountered he is smoking his pipe in the sun overlooking his garden (it is worth noting [parenthetically] that smoking is itself associated with skeletal muscle dysfunction(6)). Dwarves and wizards smoke too, and the production of smoke rings is unfortunately glamourised. The hobbit diet is clearly varied as he is able to offer cake, tea, seed cake, ale, porter, red wine, raspberry jam, mince pies, cheese, pork pie, salad, cold chicken, pickles and apple tart to the dwarves who visit to engage him in the business of burglary. The dwarves also show evidence of a mixed diet and, importantly, although they “like the dark, the dark for dark business”, they do spend much time above ground and have plenty of sun exposure during the initial pony ride in June that begins their trip to the Lonely Mountain.
(...)
Gollum, himself "as dark as darkness" lives in the dark, deep in the Misty Mountains. He does, however, eat fish, although the text describes these only as "blind" and it is not clear whether they are of an oily kind and thus a potential source of vitamin D. He sometimes eats goblins, but they rarely come down to his lake, suggesting that fish play little part in the goblin diet. Interestingly, these occasional trips to catch fish are undertaken at the behest of the Great Goblin, leading one to speculate that his enhanced diet may have helped him to achieve his pre-eminent position within goblin society. In due course, the Great Goblin is replaced by the Son of the Great Goblin. While simple nepotism is a likely explanation, we are unable to exclude an epigenetic process whereby the son’s fitness to rule has been influenced by parental vitamin D exposure.(1)
So the secret is in the diet!
Another great character from The Hobbit is Smaug, the dragon. Its physiology is really peculiar (read also Disco Blog):

When and why the coffee spills

http://t.co/fBhTs48KG4 the #physics of spilling and walking with #coffee
How do we spill coffee?
(a) Either by accelerating too much for a given coffee level (fluid statics)
(b) Or, through more complicated dynamical phenomena:
  • Initial acceleration sets an initial sloshing amplitude, which is analogous to the main antisymmetric mode of sloshing.
  • This initial perturbation is amplified by the back-and-forth and pitching excitations, since their frequency is close to the natural one for normal mug dimensions (see the sketch after this list).
  • Vertical motion also does not lead to resonance as it is a subharmonic excitation (Faraday phenomena).
  • Noise has higher frequency, which makes the antisymmetric mode unstable thus generating a swirl.
  • Time to spill depends on "focused"/"unfocused" regime and increases with decreasing maximum acceleration (walking speed).
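As a back-of-the-envelope check, consider the textbook ideal-fluid estimate for the natural frequency of the lowest antisymmetric sloshing mode in a cylindrical container; this is my own sketch with made-up mug dimensions, not a computation from the paper:

```python
import math

def sloshing_frequency(radius, depth, g=9.81):
    """Natural frequency (Hz) of the lowest antisymmetric sloshing
    mode in a cylindrical container (ideal-fluid estimate): the
    wavenumber is k = e11 / R, with e11 ~ 1.8412 the first zero of
    the derivative of the Bessel function J1, and the dispersion
    relation is w^2 = g k tanh(k h)."""
    e11 = 1.8412
    k = e11 / radius
    omega = math.sqrt(g * k * math.tanh(k * depth))
    return omega / (2 * math.pi)

# Hypothetical mug: radius 3.5 cm, coffee depth 10 cm
print(round(sloshing_frequency(0.035, 0.10), 2), "Hz")  # ~3.6 Hz
```

A few hertz is indeed comparable to the oscillation frequencies involved in normal walking, which is why the back-and-forth excitation can get amplified.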
How can we prevent spilling?
Lessons learned from sloshing dynamics may suggest strategies to control spilling, e.g. by using
  • a flexible container to act as a sloshing absorber in suppressing liquid oscillations.
  • a series of concentric rings (baffles) arranged around the inner wall of a container.

Text via Walking with coffee: when and why coffee spills (pdf)
More information on physics buzz blog
Paper: Mayer H.C. & Krechetnikov R. (2012). Walking with coffee: Why does it spill?, Physical Review E, 85 (4) DOI: http://dx.doi.org/10.1103/physreve.85.046117 (pdf)

The globe of Galileo

video by @ulaulaman #levitation


It's just a little Earth that turns and levitates above its base, reminding us of those who contributed to giving it its rightful place in space. The globe can light up using the switch on the base; it runs on mains power.

Fabiola Gianotti, Director General at CERN

http://t.co/rYzcXWlvR0 about #FabiolaGianotti #CERN #ATLAS
Fabiola Gianotti is an Italian particle physicist, a former spokesperson of the ATLAS experiment at the Large Hadron Collider (LHC) at CERN in Switzerland, considered one of the world's biggest scientific experiments. She has been selected as the next Director-General of CERN, starting on 1 January 2016.
She is the fourth Italian particle physicist to become Director-General at CERN, after Amaldi (1952-1954), Rubbia (1989-1993) and Maiani (1999-2003).
A small concession to SEO!

Planck results, ATLAS and the dark matter

http://t.co/jJxD8rhCr6 by @ulaulaman about #Planck, #ATLAS, #DarkMatter at #LHC
The latest issue of Astronomy & Astrophysics (which is freely accessible) is devoted to the Planck 2013 results:
This collection of 31 articles presents the initial scientific results extracted from this first Planck dataset, which measures the cosmic microwave background (CMB) with the highest accuracy to date. It provides major new advances in different domains of cosmology and astrophysics.
In the first paper there is an overview of 2013 science results, and we can read:
The Universe observed by Planck is well-fit by a six-parameter, vacuum-dominated, cold dark matter ($\Lambda$CDM) model, and we provide strong constraints on deviations from this model.
But, in the meantime, ATLAS published a preprint about the search for dark matter at the LHC:
The data are found to be consistent with the Standard Model expectations and limits are set on the mass scale of effective field theories that describe scalar and tensor interactions between dark matter and Standard Model particles. Limits on the dark matter-nucleon cross-section for spin-independent and spin-dependent interactions are also provided. These limits are particularly strong for low-mass dark matter. Using a simplified model, constraints are set on the mass of dark matter and of a coloured mediator suitable to explain a possible signal of annihilating dark matter.
Tommaso Dorigo, examining ATLAS' results, concludes:
the ATLAS search increases significantly the sensitivity with respect to past searches, but no signal is found. As attractive as DM existence is as an economical explanation of a wealth of cosmological observations, the nature of dark matter continues to remain unknown.

via phys.org

Regge theory

http://t.co/alaasqcHwl @ulaulaman says #goodbye to #TullioRegge
In quantum physics, Regge theory is the study of the analytic properties of scattering as a function of angular momentum, where the angular momentum is not restricted to be an integer but is allowed to take any complex value. The nonrelativistic theory was developed by Tullio Regge in 1957.
Following Chew and Frautschi (pdf), the key papers by Tullio Regge are:
Regge T. (1959). Introduction to complex orbital momenta, Il Nuovo Cimento, 14 (5) 951-976. DOI: http://dx.doi.org/10.1007/bf02728177 (pdf)
In this paper the orbital momentum $j$, until now considered as an integer discrete parameter in the radial Schrödinger wave equations, is allowed to take complex values. The purpose of such an enlargement is not purely academic but opens new possibilities in discussing the connection between potentials and scattering amplitudes. In particular it is shown that under reasonable assumptions, fulfilled by most field theoretical potentials, the scattering amplitude at some fixed energy determines the potential uniquely, when it exists. Moreover for special classes of potentials $V(x)$, which are analytically continuable into a function $V(z)$, $z=x+iy$, regular and suitably bounded in $x > 0$, the scattering amplitude has the remarkable property of being continuable for arbitrary negative and large cosine of the scattering angle and therefore for arbitrary large real and positive transmitted momentum. The range of validity of the dispersion relations is therefore much enlarged.
Regge T. (1960). Bound states, shadow states and mandelstam representation, Il Nuovo Cimento, 18 (5) 947-956. DOI: http://dx.doi.org/10.1007/bf02733035
In a previous paper a technique involving complex angular momenta was used in order to prove the Mandelstam representation for potential scattering. One of the results was that the number of subtractions in the transmitted momentum depends critically on the location of the poles (shadow states) of the scattering matrix as a function of the complex orbital momentum. In this paper the study of the position of the shadow states is carried out in much greater detail. We give also related inequalities concerning bound states and resonances. The physical interpretation of the shadow states is then discussed.
The importance of the model is summarized by the following:
As a fundamental theory of strong interactions at high energies, Regge theory enjoyed a period of interest in the 1960s, but it was largely succeeded by quantum chromodynamics. As a phenomenological theory, it is still an indispensable tool for understanding near-beam line scattering and scattering at very large energies. Modern research focuses both on the connection to perturbation theory and to string theory.
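As a minimal illustration of the formalism (standard Regge phenomenology from textbooks, not a statement taken from the papers above): the exchange of a Regge trajectory $\alpha(t)$ gives, at large $s$ and fixed momentum transfer $t$, \[A(s,t) \sim \beta(t)\, s^{\alpha(t)}, \qquad \alpha(t) \simeq \alpha(0) + \alpha' t,\] so that total cross sections behave as $\sigma_{tot} \sim s^{\alpha(0)-1}$: the intercept $\alpha(0)$ of the leading trajectory directly controls the high-energy behavior of the scattering.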
During the 1980s, Regge also became interested in mathematical art, using Anschauliche Geometrie by David Hilbert and Stefan Cohn-Vossen as inspiration for a lot of mathematical objects.
Goodbye, Mr. Regge, and thanks for all...

Alan Guth, eternal inflation and the multiverse

http://t.co/CnvvOY0mAI about #AlanGuth #multiverse #CosmicInflation #icep2014
At the beginning of October, Alan Guth was at the workshop Fine-Tuning, Anthropics and the String Landscape in Madrid, and he concluded his talk with the following slide:
The complete talk, without question time, follows:

Just a bit of blue

http://t.co/hgbABOxUlm by @ulaulaman about #nobelprize2014 on #physics #led #light #semiconductors

Created with SketchBookX
One of the first classifications that you learn when you start to study the behavior of matter interacting with electricity is the one between conductors and insulators: a conductor is a material that easily allows the passage of electric charges, while an insulator prevents it (or makes it difficult). It is possible to characterize these two kinds of materials through the physical properties of the atoms that compose them. Indeed, we know that an atom is characterized by a positive nucleus with electron clouds rotating around it: what characterizes a material is precisely the behavior of the outermost electrons, those of the external band. The energy bands of every atom have, in turn, specific properties: there are the valence bands, whose electrons are used in chemical bonds, and the conduction bands, whose electrons are free to move, the "mavericks" of the atom, used for ionic bonds. At this point I hope it is simple to characterize a conductive material as one whose atoms have electrons both in the valence band and in the conduction band, while an insulating material is characterized by having only the valence band full.
Now, in band theory, the probability that an electron occupies a given band is calculated using the Fermi-Dirac distribution: this means that there is a non-zero probability that an insulator's electron in the valence band gets promoted to the conduction band, but it is extremely low because of the large energy difference between the two levels. Moreover, there is an energy level called the Fermi level that in conductors is located within the conduction band, while in insulators it lies between the two bands, the valence and the conduction band; the smaller the gap around the Fermi level, the more easily a valence electron can jump into the conduction band.
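To put numbers on that "extremely low" probability, here is a minimal sketch; the 5 eV and 1 eV gaps are my own illustrative choices for a generic insulator and semiconductor, with the Fermi level assumed at mid-gap:

```python
import math

def fermi_dirac(E, E_fermi, T):
    """Fermi-Dirac occupation probability of a state of energy E (eV),
    for Fermi level E_fermi (eV) and temperature T (K)."""
    kB = 8.617e-5  # Boltzmann constant in eV/K
    return 1.0 / (math.exp((E - E_fermi) / (kB * T)) + 1.0)

# Occupation at the bottom of the conduction band, Fermi level at
# mid-gap, room temperature: wide-gap insulator vs semiconductor.
for gap in (5.0, 1.0):
    p = fermi_dirac(gap / 2, 0.0, 300)  # energies measured from E_F
    print(f"gap {gap} eV -> occupation ~ {p:.1e}")
```

The occupation jumps by more than thirty orders of magnitude between the two gaps, which is the whole difference between an insulator and a semiconductor at room temperature.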

Teachers for peace

http://t.co/W1K0rh9An6 #nobelprize2014 #peace #children #education #teaching
The Nobel Prize for Peace 2014 has been awarded to Kailash Satyarthi and Malala Yousafzai, teachers and activists for children's rights,
for their struggle against the suppression of children and young people and for the right of all children to education

Carlo Rubbia and the discoveries of the weak bosons

http://t.co/KGVNarwZMG by @ulaulaman about #CarloRubbia #NobelPrize #physics #particlephysics
On that day 30 years ago, I was almost certainly at school. Physics was not yet my passion. Of course I started very well: when the teacher asked what space is, I immediately thought of the universe, but the question was not about that "space" but about another one, the geometric one. It is not on those memories, however, that I want to dwell, but on a particular photo, in which Carlo Rubbia and Simon van der Meer, two goblets presumably of wine in hand, celebrate the announcement of the Nobel Prize in Physics
for their decisive contributions to the large project, which led to the discovery of the field particles W and Z, communicators of weak interaction
The story of this Nobel, however, began eight years earlier, in 1976. In that year the SPS, the Super Proton Synchrotron, began to operate at CERN; it was originally designed to accelerate particles up to an energy of 300 GeV.
The same year David Cline, Carlo Rubbia and Peter McIntyre proposed transforming the SPS into a proton-antiproton collider, with proton and antiproton beams counter-rotating in the same beam pipe to collide head-on. This would yield centre-of-mass energies in the 500-700 GeV range(1).
Antiprotons, on the other hand, had somehow to be collected. The corresponding beam was then
(...) stochastically cooled in the antiproton accumulator at 3.5 GeV, and this is where the expertise of Simon Van der Meer and coworkers played a decisive role(1).

Sudoku clues

http://t.co/zk3P3rPjFZ #sudoku #mathematics #arXiv #abstract
The arXiv paper was published two years ago, but I think that any time is a good time to play sudoku!
The sudoku minimum number of clues problem is the following question: what is the smallest number of clues that a sudoku puzzle can have? For several years it had been conjectured that the answer is 17. We have performed an exhaustive computer search for 16-clue sudoku puzzles, and did not find any, thus proving that the answer is indeed 17. In this article we describe our method and the actual search. As a part of this project we developed a novel way for enumerating hitting sets. The hitting set problem is computationally hard; it is one of Karp's 21 classic NP-complete problems. A standard backtracking algorithm for finding hitting sets would not be fast enough to search for a 16-clue sudoku puzzle exhaustively, even at today's supercomputer speeds. To make an exhaustive search possible, we designed an algorithm that allowed us to efficiently enumerate hitting sets of a suitable size.
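To get a feel for why an exhaustive search is hard, here is a toy backtracking solver, my own sketch of the kind of brute-force approach that, as the abstract notes, would never be fast enough for the full 16-clue search:

```python
def solve(grid):
    """Plain backtracking sudoku solver. grid is a list of 81 ints,
    0 marking an empty cell; the grid is filled in place."""
    try:
        i = grid.index(0)          # first empty cell
    except ValueError:
        return True                # no empty cell left: solved
    r, c = divmod(i, 9)
    for d in range(1, 10):
        # d must not already appear in row r, column c or the 3x3 box
        if all(grid[9*r + k] != d and grid[9*k + c] != d for k in range(9)) \
           and all(grid[9*(3*(r//3) + a) + 3*(c//3) + b] != d
                   for a in range(3) for b in range(3)):
            grid[i] = d
            if solve(grid):
                return True
            grid[i] = 0            # dead end: undo and try the next digit
    return False
```

On a single well-posed puzzle this terminates quickly, but enumerating all essentially different grids and clue subsets this way is hopeless, hence the hitting-set enumeration developed in the paper.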
In the following video by Numberphile, James Grime discusses the paper results:

From Nash equilibria to collective behavior

https://twitter.com/ulaulaman/status/517303481565458432 by @ulaulaman about #Nash equilibria and their role in collective behavior
The Nash equilibrium is an important tool in game theory:
[It] is a solution concept of a non-cooperative game involving two or more players, in which each player is assumed to know the equilibrium strategies of the other players, and no player has anything to gain by changing only their own strategy. If each player has chosen a strategy and no player can benefit by changing strategies while the other players keep theirs unchanged, then the current set of strategy choices and the corresponding payoffs constitute a Nash equilibrium.
Stated simply, Amy and Will are in Nash equilibrium if Amy is making the best decision she can, taking into account Will's decision, and Will is making the best decision he can, taking into account Amy's decision. Likewise, a group of players are in Nash equilibrium if each one is making the best decision that he or she can, taking into account the decisions of the others in the game.
Nash equilibria may, for example, be found in coordination games, in the prisoner's dilemma, in Braess's paradox(6), or more generally in any strategy game. In particular, given a game, we can ask whether it has a Nash equilibrium: apparently, deciding the existence of Nash equilibria is an intractable problem if there is no restriction on the relationships among players. Moreover, for a strong Nash equilibrium the problem is on the second level of the polynomial hierarchy, a scale for classifying problems based on the complexity of their resolution(1).
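The definition above translates almost literally into code. A minimal sketch (mine), which finds the pure-strategy equilibria of a two-player game and checks it on the prisoner's dilemma:

```python
def pure_nash_equilibria(payoff_row, payoff_col):
    """All pure-strategy Nash equilibria of a two-player game.
    payoff_row[i][j] and payoff_col[i][j] are the players' payoffs
    when the row player picks i and the column player picks j."""
    rows, cols = len(payoff_row), len(payoff_row[0])
    equilibria = []
    for i in range(rows):
        for j in range(cols):
            # neither player can gain by deviating unilaterally
            row_best = all(payoff_row[i][j] >= payoff_row[k][j] for k in range(rows))
            col_best = all(payoff_col[i][j] >= payoff_col[i][k] for k in range(cols))
            if row_best and col_best:
                equilibria.append((i, j))
    return equilibria

# Prisoner's dilemma: strategy 0 = cooperate, 1 = defect;
# payoffs are years in prison, negated (higher is better).
A = [[-1, -3], [0, -2]]
B = [[-1, 0], [-3, -2]]
print(pure_nash_equilibria(A, B))  # [(1, 1)]: mutual defection
```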
In addition to this study about Nash equilibria, Gianluigi Greco (one of my high school classmates), together with Francesco Scarcello, also studied Nash equilibria (in this case forced equilibria) in graphical games, where a graphical game is a game represented in a graphical manner, through a graph(2).

CERN's 60th Birthday

http://t.co/zU9b7V4idL by @ulaulaman about #CERN60
The day to celebrate CERN's birthday has arrived:
The convention establishing CERN was ratified on 29 September 1954 by 12 countries in Western Europe. The acronym CERN originally stood in French for Conseil Européen pour la Recherche Nucléaire (European Council for Nuclear Research), which was a provisional council for setting up the laboratory, established by 12 European governments in 1952. The acronym was retained for the new laboratory after the provisional council was dissolved, even though the name changed to the current Organisation Européenne pour la Recherche Nucléaire (European Organization for Nuclear Research) in 1954.
The most recent discovery at the laboratory is the Higgs boson (or a particle that looks very much like it), but there are some other successes in CERN's history:

1973: The discovery of neutral currents in the Gargamelle bubble chamber;
1983: The discovery of W and Z bosons in the UA1 and UA2 experiments;
1989: The determination of the number of light neutrino families at the Large Electron–Positron Collider (LEP) operating on the Z boson peak;
1995: The first creation of antihydrogen atoms in the PS210 experiment;
1999: The discovery of direct CP violation in the NA48 experiment;
2010: The isolation of 38 atoms of antihydrogen;
2011: Maintaining antihydrogen for over 15 minutes;

There are two Nobel Prizes directly connected to CERN:

1984: to Carlo Rubbia and Simon Van der Meer for
their decisive contributions to the large project which led to the discovery of the field particles W and Z, communicators of the weak interaction
1992: to Georges Charpak for
his invention and development of particle detectors, in particular the multiwire proportional chamber, a breakthrough in the technique for exploring the innermost parts of matter
On CERN's webcast you can see the official ceremony

Foucault and the pendulum

http://t.co/AphFwEZfQ2 #foucaultpendulum #physics #earthrotation
The first public exhibition of a Foucault pendulum took place in February 1851 in the Meridian of the Paris Observatory. A few weeks later Foucault made his most famous pendulum when he suspended a 28 kg brass-coated lead bob with a 67 meter long wire from the dome of the Panthéon, Paris. The plane of the pendulum's swing rotated clockwise 11° per hour, making a full circle in 32.7 hours. The original bob used in 1851 at the Panthéon was moved in 1855 to the Conservatoire des Arts et Métiers in Paris. A second temporary installation was made for the 50th anniversary in 1902.
During museum reconstruction in the 1990s, the original pendulum was temporarily displayed at the Panthéon (1995), but was later returned to the Musée des Arts et Métiers before it reopened in 2000. On April 6, 2010, the cable suspending the bob in the Musée des Arts et Métiers snapped, causing irreparable damage to the pendulum and to the marble flooring of the museum. An exact copy of the original pendulum had been swinging permanently since 1995 under the dome of the Panthéon, Paris until 2014, when it was taken down during repair work to the building. Current monument staff estimate the pendulum will be re-installed in 2017.
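The quoted figures are easy to check: the swing plane rotates at the Earth's rotation rate times the sine of the latitude. A quick sketch (mine, using the standard formula):

```python
import math

def foucault_rotation(latitude_deg):
    """Angular speed (deg/hour) of a Foucault pendulum's swing plane
    and the time for a full turn, at the given latitude."""
    sidereal_day = 23.934  # hours
    rate = (360.0 / sidereal_day) * math.sin(math.radians(latitude_deg))
    return rate, 360.0 / rate

rate, period = foucault_rotation(48.85)  # latitude of the Panthéon, Paris
print(f"{rate:.1f} deg/hour, full circle in {period:.1f} hours")
# ~11.3 deg/hour: close to the 11 deg/hour and ~32.7 hours quoted above
```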

Idiosyncratic Thinking: a computer heuristics lecture

http://t.co/7JB3CPaQt9 #Feynman
Richard Feynman, winner of the 1965 Nobel Prize in Physics, gives us an insightful lecture about computer heuristics: how computers work, how they file information, how they handle data, how they use their information in allocated processing in a finite amount of time to solve problems and how they actually compute values of interest to human beings. These topics are essential in the study of what processes reduce the amount of work done in solving a particular problem in computers, giving them speeds of solving problems that can outmatch humans in certain fields but which have not yet reached the complexity of human driven intelligence. The question of whether human thought is a series of fixed processes that could be, in principle, imitated by a computer is a major theme of this lecture and, in Feynman's trademark style of teaching, gives us clear and yet very powerful answers for this field which has gone on to consume so much of our lives today.

Witches Kitchen 1971

http://t.co/sHn7nJ4uFj a #funny image about #mathematics by Alexander Grothendieck
Riemann-Roch Theorem: The final cry: The diagram is commutative! To give an approximate sense to the statement about f: X → Y, I had to abuse the listeners' patience for almost two hours. Black on white (in Springer lecture notes) it probably takes about 400, 500 pages. A gripping example of how our thirst for knowledge and discovery indulges itself more and more in a logical delirium far removed from life, while life itself is going to Hell in a thousand ways and is under the threat of final extermination. High time to change our course!
Alexander Grothendieck about the Grothendieck–Riemann–Roch theorem via Math 245
Read also: how does one understand GRR?

Aidan Dwyer and a new photovoltaic design

Aidan Dwyer was one of twelve students to receive the 2011 Young Naturalist Award from the American Museum of Natural History in New York for creating an innovative approach to collecting sunlight in photovoltaic arrays. Dwyer’s investigation into the mathematical relationship of the arrangement of branches and leaves in deciduous trees led to his discovery that these species utilized the Fibonacci Sequence in their branch and leaf design. Dwyer transformed this organic concept into a photovoltaic array based upon the Fibonacci pattern of an oak tree and conducted experiments comparing his design to conventional solar panel arrays. After computer analysis, Dwyer discovered that his Fibonacci tree design surpassed the performance of conventional methods in sunlight collection and utilized the greatest quantity of PV panels within the least amount of physical space, making it a versatile and aesthetically pleasing solution for confined and obstructed urban areas.

The discovery of Morniel Mathaway

http://youtu.be/OoYkZyZ6XSU a radio drama by William Tenn
Following Deutsch and Lockwood(1), there are two types of time paradoxes: the inconsistency paradox and the knowledge paradox.
An example of the first type is the grandfather paradox, introduced by the French writer René Barjavel in Le voyageur imprudent (1943 - Future Times Three).
An example of the second type is The Discovery of Morniel Mathaway, a radio science fiction drama by William Tenn. It was originally transmitted by the show X Minus One by NBC:
A professor of art history from the future travels by time machine some centuries into the past in search of an artist whose works are celebrated in the professor's time. On meeting the artist in the flesh, the professor is surprised to find the artist's current paintings talentlessly amateurish. The professor happens to have brought with him from the future a catalogue containing reproductions of the paintings later attributed to the artist, which the professor has come to see are far too accomplished to be the artist's work. When he shows this to the artist, the latter quickly grasps the situation and, by means of a ruse, succeeds in using the time machine to travel into the future (taking the catalogue with him), where he realizes he will be welcomed as a celebrity, so stranding the professor in the "present". To avoid entanglements with authority the professor assumes the artist's identity and later achieves fame for producing what he believes are just copies of the paintings he recalls from the catalogue. This means that he, and not the artist, created the paintings in the catalogue. But he could not have done so without having seen the catalogue in the first place, and so we are faced with a causal loop.

The solar efficiency of Superman

by @ulaulaman http://t.co/WGbVdfv0nk about #Superman #physics and #solar #energy
In the last saga of the JLA by Grant Morrison, World War III, Superman, leaping against the bomb inside Mageddon, says:
The way in which Superman gets his powers, or the way in which they are explained, however, has changed over time. Following Action Comics #1, the debut of the character, Jerry Siegel, combining genetics and evolution, says that on his planet of origin
the physical structure of the inhabitants had advanced millions of years compared to ours. Reaching maturity, people of that race earned a titanic force!
In Superman #1, however, Siegel focuses attention on the different gravity between Earth and Krypton, the latter having a greater radius than our planet and therefore a greater gravity. Such a claim is also in Ports of Call by Jack Vance. In order to verify it, we must start from the definition of density: \[\rho = \frac{M}{V}\] where $M$ is the mass and $V$ the volume of the object, or, in our case, of the planet.
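The excerpt stops at the density, but the natural next step in this kind of estimate (my sketch, not Siegel's text) is the surface gravity of a planet of uniform density: \[g = \frac{GM}{R^2} = \frac{4}{3} \pi G \rho R\] so, at equal density, the surface gravity grows linearly with the radius: a Krypton larger than Earth is indeed a Krypton with stronger gravity, as Siegel claimed.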

Mathematics is a unique aspect of human thought

http://t.co/h9CCSAaER0 #IsaacAsimov about #mathematics
Mathematics is a unique aspect of human thought, and its history differs in essence from all other histories.
As time goes on, nearly every field of human endeavor is marked by changes which can be considered as correction and/or extension. Thus, the changes in the evolving history of political and military events are always chaotic; there is no way to predict the rise of a Genghis Khan, for example, or the consequences of the short-lived Mongol Empire. Other changes are a matter of fashion and subjective opinion. The cave-paintings of 25,000 years ago are generally considered great art, and while art has continuously, even chaotically, changed in the subsequent millennia, there are elements of greatness in all the fashions. Similarly, each society considers its own ways natural and rational, and finds the ways of other societies to be odd, laughable, or repulsive.
But only among the sciences is there true progress; only there is the record one of continuous advance toward ever greater heights.
And yet, among most branches of science, the process of progress is one of both correction and extension. Aristotle, one of the greatest minds ever to contemplate physical laws, was quite wrong in his views on falling bodies and had to be corrected by Galileo in the 1590s. Galen, the greatest of ancient physicians, was not allowed to study human cadavers and was quite wrong in his anatomical and physiological conclusions. He had to be corrected by Vesalius in 1543 and Harvey in 1628. Even Newton, the greatest of all scientists, was wrong in his view of the nature of light, of the achromaticity of lenses, and missed the existence of spectral lines. His masterpiece, the laws of motion and the theory of universal gravitation, had to be modified by Einstein in 1916.
Now we can see what makes mathematics unique. Only in mathematics is there no significant correction, only extension. Once the Greeks had developed the deductive method, they were correct in what they did, correct for all time. Euclid was incomplete and his work has been extended enormously, but it has not had to be corrected. His theorems are, every one of them, valid to this day.
Ptolemy may have developed an erroneous picture of the planetary system, but the system of trigonometry he worked out to help him with his calculations remains correct forever.
Each great mathematician adds to what came previously, but nothing needs to be uprooted. Consequently, when we read a book like A History Of Mathematics, we get the picture of a mounting structure, ever taller and broader and more beautiful and magnificent and with a foundation, moreover, that is as untainted and as functional now as it was when Thales worked out the first geometrical theorems nearly 26 centuries ago.
Nothing pertaining to humanity becomes us so well as mathematics. There, and only there, do we touch the human mind at its peak.

Isaac Asimov from the foreword to the second edition of A History of Mathematics by Carl C. Boyer and Uta C. Merzbach

Maryam Mirzakhani and Riemann surfaces

http://t.co/ZAdRPeiy8b Maryam Mirzakhani wins #FieldsMedal with Riemann surfaces
Maryam Mirzakhani has made several contributions to the theory of moduli spaces of Riemann surfaces. In her early work, Maryam Mirzakhani discovered a formula expressing the volume of a moduli space with a given genus as a polynomial in the number of boundary components. This led her to obtain a new proof for the conjecture of Edward Witten on the intersection numbers of tautology classes on moduli space as well as an asymptotic formula for the length of simple closed geodesics on a compact hyperbolic surface. Her subsequent work has focused on Teichmüller dynamics of moduli space. In particular, she was able to prove the long-standing conjecture that William Thurston's earthquake flow on Teichmüller space is ergodic.
Mirzakhani was awarded the Fields Medal in 2014 for "her outstanding contributions to the dynamics and geometry of Riemann surfaces and their moduli spaces".
Riemann surfaces are one-dimensional complex manifolds introduced by Riemann: in some sense, his approach is a cut-and-paste procedure.
He imagined taking as many copies of the open set as there are branches of the function and joining them together along the branch cuts. To understand how this works, imagine cutting out sheets along the branch curves and stacking them on top of the complex plane. On each sheet, we define one branch of the function. We glue the different sheets to each other in such a way that the branch of the function on one sheet joins continuously at the seam with the branch defined on the other sheet. For instance, in the case of the square root, we join each end of the sheet corresponding to the positive branch with the opposite end of the sheet corresponding to the negative branch. In the case of the logarithm, we join one end of the sheet corresponding to the $2 \pi n$ branch with an end of the $2 \pi (n+1)$ sheet to obtain a spiral structure which looks like a parking garage.
A more formal approach to the construction of Riemann surfaces was performed by Hermann Weyl, and the work of Maryam Mirzakhani falls within this line of research.
Some papers:
Mirzakhani M. (2007). Weil-Petersson volumes and intersection theory on the moduli space of curves, Journal of the American Mathematical Society, 20 (01) 1-24. DOI: http://dx.doi.org/10.1090/s0894-0347-06-00526-1
Mirzakhani M. (2006). Simple geodesics and Weil-Petersson volumes of moduli spaces of bordered Riemann surfaces, Inventiones mathematicae, 167 (1) 179-222. DOI: http://dx.doi.org/10.1007/s00222-006-0013-2 (pdf)
Mirzakhani M. (2008). Growth of the number of simple closed geodesics on hyperbolic surfaces, Annals of Mathematics, 168 (1) 97-125. DOI: http://dx.doi.org/10.4007/annals.2008.168.97 (pdf)
Read also:
Press release by Stanford
The Fields Medal news on Nature
The official press release in pdf
plus math magazine

The equation of happiness

by @ulaulaman http://t.co/crZXpaphqA #mathematics #happiness #smile
\[H(t) = w_0 + w_1 \sum_{j=1}^t \gamma^{t-j} CR_j + w_2 \sum_{j=1}^t \gamma^{t-j} EV_j + w_3 \sum_{j=1}^t \gamma^{t-j} RPE_j\] I don't know if my intuition is correct, but the equation from Rutledge et al. reminds me of a neural network, or more correctly of a sum of three different neural networks. In any case, this could become an important step towards a mathematical description of our brain.
A common question in the social science of well-being asks, "How happy do you feel on a scale of 0 to 10?" Responses are often related to life circumstances, including wealth. By asking people about their feelings as they go about their lives, ongoing happiness and life events have been linked, but the neural mechanisms underlying this relationship are unknown. To investigate it, we presented subjects with a decision-making task involving monetary gains and losses and repeatedly asked them to report their momentary happiness. We built a computational model in which happiness reports were construed as an emotional reactivity to recent rewards and expectations. Using functional MRI, we demonstrated that neural signals during task events account for changes in happiness.
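As a toy illustration of the equation above, here is a direct transcription in code; the weights, the forgetting factor $\gamma$ and the trial histories are entirely made up, just to show the structure of the three decaying sums:

```python
def happiness(t, w, gamma, CR, EV, RPE):
    """Momentary happiness at trial t following the model of
    Rutledge et al.: a baseline w0 plus exponentially decaying sums
    over past certain rewards (CR), expected values (EV) and reward
    prediction errors (RPE). Sequences are indexed from trial 1."""
    w0, w1, w2, w3 = w
    decay = lambda xs: sum(gamma**(t - j) * xs[j - 1] for j in range(1, t + 1))
    return w0 + w1 * decay(CR) + w2 * decay(EV) + w3 * decay(RPE)

# Made-up weights and a three-trial history, just to run the model:
print(happiness(3, w=(50.0, 2.0, 1.0, 3.0), gamma=0.5,
                CR=[1.0, 0.0, 2.0], EV=[0.5, 1.0, 0.5], RPE=[0.2, -0.5, 1.0]))
```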

Rutledge R.B., Skandali N., Dayan P. & Dolan R.J. (2014). A computational and neural model of momentary subjective well-being. Proceedings of the National Academy of Sciences of the United States of America. PMID: http://www.ncbi.nlm.nih.gov/pubmed/25092308
via design & trends

Generalized Venn diagram for genetics

by @ulaulaman http://t.co/MkGI7L546N #VennDay #VennDiagram #genetics
A generalized Venn diagram with three sets $A$, $B$ and $C$ and their intersections. From this representation, the different set sizes are easily observed. Furthermore, if individual elements (genes) are contained in more than one set (functional category), the intersection sizes give a direct view on how many genes are involved in possibly related functions. During optimization, the localization of the circles is altered to satisfy the possibly contradictory constraints of circle size and intersection size.
For the purpose of the paper, the researchers used polygons instead of circles. In order to compute a polygon's area, they used the simple formula: \[A = \sum_{k=1}^L x_k (y_{k+1} - y_k)\] where $L$ is the number of edges of the polygon, and $y_{L+1} := y_1$.
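That formula transcribes directly into code; a minimal sketch (mine, not the paper's optimization machinery), checked on a unit square:

```python
def polygon_area(vertices):
    """Area of a simple polygon from the formula above:
    A = sum_k x_k (y_{k+1} - y_k), with vertex L+1 wrapping around
    to vertex 1; abs() removes the dependence on orientation."""
    L = len(vertices)
    return abs(sum(x * (vertices[(k + 1) % L][1] - y)
                   for k, (x, y) in enumerate(vertices)))

print(polygon_area([(0, 0), (1, 0), (1, 1), (0, 1)]))  # 1
```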
Kestler, H., Muller, A., Gress, T., & Buchholz, M. (2004). Generalized Venn diagrams: a new method of visualizing complex genetic set relations Bioinformatics, 21 (8), 1592-1595 DOI: 10.1093/bioinformatics/bti169

Turing's morphogenesis and the fingers' formation

by @ulaulaman http://t.co/9Q5rVkVzEc about #Turing #morphogenesis
In today's issue of Science a paper is published about the application of Turing's morphogenesis to the formation of fingers. In this period I'm not able to download the papers, so I simply publish the editors' summaries. First of all I present you the incipit of the perspective article by Aimée Zuniga and Rolf Zeller(2):
Alan Turing is best known as the father of theoretical computer sciences and for his role in cracking the Enigma encryption codes during World War II. He was also interested in mathematical biology and published a theoretical rationale for the self-regulation and patterning of tissues in embryos. The so-called reaction-diffusion model allows mathematical simulation of diverse types of embryonic patterns with astonishing accuracy. During the past two decades, the existence of Turing-type mechanisms has been experimentally explored and is now well established in developmental systems such as skin pigmentation patterning in fishes, and hair and feather follicle patterning in mouse and chicken embryos. However, the extent to which Turing-type mechanisms control patterning of vertebrate organs is less clear. Often, the relevant signaling interactions are not fully understood and/or Turing-like features have not been thoroughly verified by experimentation and/or genetic analysis. Raspopovic et al.(1) now make a good case for Turing-like features in the periodic pattern of digits by identifying the molecular architecture of what appears to be a Turing network functioning in positioning the digit primordia within mouse limb buds.
And now the summary of the results:
Most researchers today believe that each finger forms because of its unique position within the early limb bud. However, 30 years ago, developmental biologists proposed that the arrangement of fingers followed the Turing pattern, a self-organizing process during early embryo development. Raspopovic et al.(1) provide evidence to support a Turing mechanism (see the Perspective by Zuniga and Zeller). They reveal that Bmp and Wnt signaling pathways, together with the gene Sox9, form a Turing network. The authors used this network to generate a computer model capable of accurately reproducing the patterns that cells follow as the embryo grows fingers.
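To see a Turing mechanism in action, here is a minimal sketch of the classic Gray-Scott reaction-diffusion system; this is a toy model with standard demo parameters, not the Bmp-Sox9-Wnt network of Raspopovic et al., but it shows how local reactions plus unequal diffusion rates self-organize into spots and stripes:

```python
import numpy as np

# Gray-Scott model: two chemicals U and V with diffusion constants
# Du > Dv, feed rate F and kill rate k (standard demo values).
n, Du, Dv, F, k = 128, 0.16, 0.08, 0.035, 0.060
U = np.ones((n, n))
V = np.zeros((n, n))
U[n//2-5:n//2+5, n//2-5:n//2+5] = 0.50   # small central perturbation
V[n//2-5:n//2+5, n//2-5:n//2+5] = 0.25

def laplacian(Z):
    """Five-point Laplacian with periodic boundary conditions."""
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

for _ in range(10000):                    # explicit Euler steps, dt = 1
    uvv = U * V * V
    U += Du * laplacian(U) - uvv + F * (1 - U)
    V += Dv * laplacian(V) + uvv - (F + k) * V

# V now holds a spotted/striped Turing pattern; plot it to see it.
```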

(1) Raspopovic, J., Marcon, L., Russo, L., & Sharpe, J. (2014). Digit patterning is controlled by a Bmp-Sox9-Wnt Turing network modulated by morphogen gradients Science, 345 (6196), 566-570 DOI: 10.1126/science.1252960
(2) Zuniga, A., & Zeller, R. (2014). In Turing's hands--the making of digits Science, 345 (6196), 516-517 DOI: 10.1126/science.1257501


Read also on Doc Madhattan:
Doc Madhattan: Matching pennies in Turing's birthday
Turing patterns in coats and sounds
Genetics, evolution and Turing's patterns
Calculating machines
Turing, Fibonacci and the sunflowers
Turing and the ecological basis of morphogenesis
via phys.org

A trigonometric proof of the Pythagorean theorem

by @ulaulaman via @MathUpdate http://t.co/LJX8gSX7xf
\[\alpha + \beta = \frac{\pi}{2}\] \[\sin (\alpha + \beta) = \sin \frac{\pi}{2}\] \[\sin \alpha \cdot \cos \beta + \sin \beta \cdot \cos \alpha = 1\] \[\frac{a}{c} \cdot \frac{a}{c} + \frac{b}{c} \cdot \frac{b}{c} = 1\] \[\frac{a^2}{c^2} + \frac{b^2}{c^2} = 1\]
\[a^2 + b^2 = c^2\]


Fifty years of CP violation

via @CERN http://t.co/9Rac42mBVh #CPviolation #CPsymmetry #matter #antimatter
CP violation is a violation of CP-symmetry, the combination of the charge conjugation symmetry (C) and the parity symmetry (P).
CP-symmetry states that the laws of physics should be the same if a particle is interchanged with its antiparticle, and then its spatial coordinates are inverted.
CP violation was discovered in 1964 by Christenson, Cronin, Fitch, and Turlay (Cronin and Fitch were awarded the Nobel Prize in 1980) studying kaon decays, and it could play a key role in the matter-antimatter imbalance.
Now the CERN Courier has dedicated a special issue to the fifty years since the discovery (download here).
Christenson, J., Cronin, J., Fitch, V., & Turlay, R. (1964). Evidence for the $2\pi$ Decay of the $K_2^0$ Meson. Physical Review Letters, 13 (4), 138-140 DOI: 10.1103/PhysRevLett.13.138

Gods, philosophy and computers

by @ulaulaman http://t.co/Q3AODpvKAs #Godel #ontologicalproof #god #computer
The ontological argument for the existence of God was introduced for the first time by St. Anselm in 1078:
God, by definition, is that for which no greater can be conceived. God exists in the understanding. If God exists in the understanding, we could imagine Him to be greater by existing in reality. Therefore, God must exist.
Many philosophers, mathematicians and logicians proposed their own ontological argument, for example Descartes, Leibniz, Frege, and also Kurt Gödel, who proposed the most formal ontological proof:
The proof was published in 1987 (Gödel died in 1978), and a lot of logicians have discussed it. One of the last papers published about the argument is an arXiv preprint that prompted Anna Limind to write European Mathematicians ‘Prove’ the Existence of God, but the aim of the paper is to check the consistency of the proof, not the reality of the theorem (I think that the theorem is, simply, undecidable), and also to start a new discipline: computer-philosophy.
Indeed Benzmüller and Paleo developed an algorithm in order to use a computer to check the ontological proof. So the work:
(...) opens new perspectives for a computer-assisted theoretical philosophy. The critical discussion of the underlying concepts, definitions and axioms remains a human responsibility, but the computer can assist in building and checking rigorously correct logical arguments. In case of logico-philosophical disputes, the computer can check the disputing arguments and partially fulfill Leibniz' dictum: Calculemus

Read also: Spiegel Online International
Christoph Benzmüller & Bruno Woltzenlogel Paleo (2013). Formalization, Mechanization and Automation of Gödel's Proof of God's Existence, arXiv:

How quantum mechanics explains global warming

posted by @ulaulaman about #globalwarming http://t.co/VDlaEt2s5m
The physician Mark Schleupner, a Roanoke native, writes about global warming:
So, according to NASA scientists, if all the ice in 14 million sq km Antarctica melts, sea levels will rise more than 200 feet. Greenland alone has another huge chunk of the Earth’s water tied up in ice; some scientists say that its ice sheet has passed a tipping point and will be gone in the next centuries, raising ocean levels by 24 feet. These are scary amounts of sea level rise that put huge areas of population centers (New York, Boston, Miami, San Francisco, etc.) under water.
In the end, one can deny climate change (although I’d not recommend it), but one cannot deny math.
Well, regarding climate change, it's really interesting to watch the following TED-Ed lesson:
You've probably heard that carbon dioxide is warming the Earth. But how exactly is it doing it? Lieven Scheire uses a rainbow, a light bulb and a bit of quantum physics to describe the science behind global warming.

Mathematicians discuss the Snowden revelations

Lately I haven't been able to read the Notices of the AMS, so I missed the most recent discussion in this journal about the revelations made by Edward Snowden about the NSA. Thanks to the n-Category Café I recovered the letters about this topic:
In the first part of 2013, Edward Snowden, a former contractor for the National Security Agency (NSA), handed over to journalists a trove of secret NSA documents. First described in the media in June 2013, these documents revealed extensive spying programs of the NSA and other governmental organizations, such as the United Kingdom's GCHQ (Government Communications Headquarters). The disclosures reverberated around the world, influencing the bottom lines of big businesses, the upper echelons of international relations, and the everyday activities of ordinary people whose lives are increasingly mirrored in the Internet and on cell phone networks.
The revelations also hit home in the mathematical sciences community. The NSA is often said to be the world's largest employer of mathematicians; it's where many academic mathematicians in the US see their students get jobs. The same is true for GCHQ in the UK. Many academic mathematicians in the US and the UK have done work for these organizations, sometimes during summers or sabbaticals. Some US mathematicians decided to take on NSA work after the 9/11 attacks as a contribution to national defense.
The discussion on Notices: part 1, part 2

Beach sand for long cycle life batteries

#sand #battery #chemistry #energy
This is the holy grail – a low cost, non-toxic, environmentally friendly way to produce high performance lithium ion battery anodes
Zachary Favors

Schematic of the heat scavenger-assisted Mg reduction process.
Herein, porous nano-silicon has been synthesized via a highly scalable heat scavenger-assisted magnesiothermic reduction of beach sand. This environmentally benign, highly abundant, and low cost SiO2 source allows for production of nano-silicon at the industry level with excellent electrochemical performance as an anode material for Li-ion batteries. The addition of NaCl, as an effective heat scavenger for the highly exothermic magnesium reduction process, promotes the formation of an interconnected 3D network of nano-silicon with a thickness of 8-10 nm. Carbon coated nano-silicon electrodes achieve remarkable electrochemical performance with a capacity of 1024 mAh g$^{-1}$ at 2 A g$^{-1}$ after 1000 cycles.

Favors, Z., Wang, W., Bay, H., Mutlu, Z., Ahmed, K., Liu, C., Ozkan, M., & Ozkan, C. (2014). Scalable Synthesis of Nano-Silicon from Beach Sand for Long Cycle Life Li-ion Batteries Scientific Reports, 4 DOI: 10.1038/srep05623
(via Popular Science)

Mesons produced in a bubble chamber

by @ulaulaman about #mesons #bubblechamber #CERN #particles #physics
A bubble chamber is a vessel filled with a liquid (typically hydrogen) whose molecules are ionized by the passage of a charged particle, thus producing bubbles. In this way the trajectories of the particles become visible and it is possible to study the various decays(2).
The bubble chamber was invented by Donald Glaser(1) in 1952, and he won the Nobel Prize in 1960.
(1) Glaser, D. (1952). Some Effects of Ionizing Radiation on the Formation of Bubbles in Liquids Physical Review, 87 (4), 665-665 DOI: 10.1103/PhysRev.87.665
(2) Image from the italian version of Weisskopf, V. (1968). The Three Spectroscopies Scientific American, 218 (5), 15-29 DOI: 10.1038/scientificamerican0568-15

Brazuca, a Pogorelov's ball

posted by @ulaulaman about #Brazuca #geometry #WorldCup2014 #Brazil2014
Brazuca is the ball of the World Cup 2014. The particular pattern of its surface is a consequence of Pogorelov's theorem on convex polyhedra:
A domain is convex if the segment joining any two of its points is completely contained within the domain.
Now consider two convex domains in the plane whose boundaries are the same length.(1)
Now we can create a solid using the two previous domains: we must simply connect every point of one boundary with a point of the other boundary, obtaining a convex polyhedron, as shown by Pogorelov in the 1970s.
The object you have built consists of two developable surfaces glued together on edge.
Instead of using two domains, you can, for example, start from six convex domains as the "square faces" of a cube. On the edges of each of these areas, you choose four points, as the vertices of the "square". We assume that the four "corners" that you have chosen are like the vertices, that is to say that the domains have angles in these points.(1)

The damages of the heavy metal

by @ulaulaman via @verascienza about #heavymetal #chemistry #health
A heavy metal is any metal or metalloid of environmental concern. The term originated with reference to the harmful effects of cadmium, mercury and lead, all of which are denser than iron. It has since been applied to any other similarly toxic metal, or metalloid such as arsenic, regardless of density. Commonly encountered heavy metals are chromium, cobalt, nickel, copper, zinc, arsenic, selenium, silver, cadmium, antimony, mercury, thallium and lead.
Heavy metals have a lot of detrimental effects on our body:
Aluminum - Damage to the central nervous system, dementia, memory loss
Antimony - Damage to heart, diarrhea, vomiting, stomach ulcer
Arsenic - Lymphatic cancer, liver cancer, skin cancer
Barium - Increased blood pressure, paralysis
Bismuth - Dermatitis, stomatitis, colitis, diarrhea
Cadmium - Diarrhea, stomach pains, vomiting, bone fractures, damage to the immune system, psychological disorders
Chromium - Damage to the kidneys and liver, respiratory problems, lung cancer, death
Copper - Irritation of the nose, mouth and eyes, liver cirrhosis, brain and kidney damage
Gallium - Irritation of the throat, difficulty breathing, pain in the chest
Hafnium - Irritation of eyes, skin and mucous membranes
Indium - Damage to the heart, kidneys and liver
Iridium - Irritation of the eyes and digestive tract
Lanthanum - Lung cancer, liver damage
Lead - Found in fruits, vegetables, meats, cereals, wine and cigarettes. Causes brain damage, dysfunctions at birth, kidney damage, learning disabilities, destruction of the nervous system
Manganese - Blood clotting, glucose intolerance, disorders of the skeleton
Mercury - Destruction of the nervous system, brain damage, DNA damage
Nickel - Pulmonary embolism, breathing difficulties, asthma and chronic bronchitis, allergic skin reaction
Palladium - Very toxic and carcinogenic, irritant
Platinum - Alterations of DNA, cancer, and damage to intestine and kidney
Rhodium - Stains the skin, potentially toxic and carcinogenic
Ruthenium - Very toxic and carcinogenic, damage to the bones
Scandium - Pulmonary embolism, threatens the liver when accumulated in the body
Silver - Used as a coloring agent E174, headache, breathing difficulties, skin allergies, with extreme concentration it causes coma and death
Strontium - Lung cancer, in children difficulty of bone development
Tantalum - Irritation to the eyes and to the skin, upper respiratory tract lesion
Thallium - Used as a rat poison; causes stomach and nervous system damage, coma and death; those who survive are left with nerve damage and paralysis
Tin - Irritation of the eyes and skin, headaches, stomach aches, difficulty to urinate
Tungsten - Damage to the mucous membranes and membranes, eye irritation
Vanadium - Heart and cardiovascular disorders, inflammation of the stomach and intestine
Yttrium - Very toxic, lung cancer, pulmonary embolism, liver damage
via verascienza

The Championships' Final

by @ulaulaman via @Airi_Talk about #WorldCup2014 #Brazil2014 predictions: #ESP-#GER
Jürgen Gerhards, Michael Mutz and Gert Wagner developed an economic model in order to predict the results of football teams in international cups. The researchers evaluated the market value of every player and assigned every team an economic value: in this way they predicted the winners in 2006 (World Cup: Italy), 2008 (Euro Cup: Spain), 2010 (WC: Spain) and 2012 (EC: Spain). Starting from the previous results, the three researchers produced the bracket of the matches from the round of 16:
The predicted final is between Spain and Germany, with Spain favorite, but there is a little hope for the other teams, first of all Brazil: in 2012 the model didn't predict the other team in the final game of the Euro Cup, Italy, which according to the model should not even have reached the semi-finals:
Marketization and globalization have changed professional soccer and the composition of soccer teams fundamentally. Against the background of these shifting conditions this paper investigates the extent to which the success of soccer teams in their national leagues is determined by (a) the monetary value of the team expressed in its market value, (b) inequality within the team, (c) the cultural diversity of the team, and (d) the degree of turnover among team members. The empirical analyses refer to the soccer season 2012/13 and include the twelve most important European soccer leagues. The findings demonstrate that success in a national soccer championship is highly predictable; nearly all of our hypotheses are confirmed. The market value of the team is, in today's world, by far the most important single predictor of athletic success in professional soccer.
Jürgen Gerhards, Michael Mutz, Gert Wagner (2014). Die Berechnung des Siegers: Marktwert, Ungleichheit, Diversität und Routine als Einflussfaktoren auf die Leistung professioneller Fußballteams. Zeitschrift für Soziologie, Jg. 43, Heft 3, 231-250
via University of Berlin, galileonet.it, airicerca.org

Soccer balls

#abstract about #soccer #WorldCup
Soccer balls are typically constructed from 32 pentagonal and hexagonal panels. Recently, however, newer balls named Cafusa, Teamgeist 2, and Jabulani were respectively produced from 32, 14, and 8 panels with shapes and designs dramatically different from those of conventional balls. The newest type of ball, named Brazuca, was produced from six panels and will be used in the 2014 FIFA World Cup in Brazil. There have, however, been few studies on the aerodynamic properties of balls constructed from different numbers and shapes of panels. Hence, we used wind tunnel tests and a kick-robot to examine the relationship between the panel shape and orientation of modern soccer balls and their aerodynamic and flight characteristics. We observed a correlation between the wind tunnel test results and the actual ball trajectories, and also clarified how the panel characteristics affected the flight of the ball, which enabled prediction of the trajectory.
Hong S. & Asai T. (2014). Effect of panel shape of soccer ball on its flight characteristics. Scientific Reports, 4. DOI:

Portrait of an atom

by @ulaulaman about #hydrogen #atom #orbitals #Bohr #Rutherford #quantum_mechanics
The study of the structure of the atom is a long story, and it begins with Democritus or, from the point of view of modern science, with John Dalton in 1808: indeed he tried to fix in scientific terms the ideas of the Greek philosopher and naturalist.
Dalton's theory was based on five fundamental points:
  • matter is made of tiny building blocks called atoms, which are indivisible and indestructible;
  • atoms of the same element are all equal to each other;
  • the atoms of different elements combine with each other (via chemical reactions) in ratios of whole, generally small, numbers, thus giving rise to compounds;
  • atoms can be neither created nor destroyed;
  • atoms of an element can not be converted into atoms of other elements.
As you can see, some of these ideas are correct and others incorrect. Let us, however, jump a century ahead, to 1902 and Mr. Thomson, the first to propose an atomic model: he assumed that the atom was made up as a sort of cake, a positively charged sphere in which the electrons, with a negative charge distribution such as to make the object as a whole neutral, were scattered like raisins.
A few years later, however, in 1911, Rutherford devised and conducted an important experiment(1) in which he sent a beam of alpha particles against gold nuclei. The observed cross section, i.e. the effective surface off which the particles scatter, was too large to be compatible with Thomson's hypothesis, but was compatible with Rutherford's, namely that the atom is made up of a positive nucleus and a number of electrons revolving around the nucleus at a large distance (compared to nuclear scales, of course).
However, this is not the last step: in 1913 Niels Bohr refined Rutherford's model(2). Having accepted the planetary structure of the atom, Bohr suggested that the electrons, in their rotational motion, could not occupy orbits at their leisure, but must place themselves at very specific distances from the nucleus: this is the dawn of quantum mechanics, which would further refine the atomic model thanks to the famous Schrödinger equation.
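In numbers, Bohr's quantization gives the familiar ladder of energies and radii; a textbook sketch (mine, not from the papers cited below):

```python
def bohr_level(n):
    """Energy (eV) and orbit radius (nm) of level n in Bohr's model
    of the hydrogen atom: E_n = E_1 / n^2 and r_n = a_0 n^2."""
    E_1 = -13.606     # ground-state energy, eV
    a_0 = 0.0529177   # Bohr radius, nm
    return E_1 / n**2, a_0 * n**2

for n in (1, 2, 3):
    E, r = bohr_level(n)
    print(f"n={n}: E = {E:7.3f} eV, r = {r:.3f} nm")
```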
The atom, now, was understood as a positive nucleus made of protons and neutrons, with a little cloud of electrons moving around it, not on an orbit but in a sort of spherical cap. And these caps at different energies were recently directly observed by Aneta Stodolna's team(3, 4):
(...) an experimental method was proposed about thirty years ago, when it was suggested that experiments ought to be performed projecting low-energy photoelectrons resulting from the ionization of hydrogen atoms onto a position-sensitive two-dimensional detector placed perpendicularly to the static electric field, thereby allowing the experimental measurement of interference patterns directly reflecting the nodal structure of the quasibound atomic wave function.(3)

via phys.org, io9
(1) Rutherford E. (1911). The scattering of α and β particles by matter and the structure of the atom, Philosophical Magazine Series 6, 21 (125) 669-688. DOI:
(2) Bohr N. (1913). On the constitution of atoms and molecules, Philosophical Magazine Series 6, 26 (151) 1-25. DOI:
(3) Stodolna, A., Rouzée, A., Lépine, F., Cohen, S., Robicheaux, F., Gijsbertsen, A., Jungmann, J., Bordas, C., & Vrakking, M. (2013). Hydrogen Atoms under Magnification: Direct Observation of the Nodal Structure of Stark States Physical Review Letters, 110 (21) DOI: 10.1103/PhysRevLett.110.213001
(4) Smeenk, C. (2013). A New Look at the Hydrogen Wave Function Physics, 6 (58) DOI: 10.1103/Physics.6.58

15 Sorting Algorithms in 6 Minutes

Visualization and "audibilization" of 15 Sorting Algorithms in 6 Minutes.
Sorts random shuffles of integers, with both speed and the number of items adapted to each algorithm's complexity. The algorithms are: selection sort, insertion sort, quick sort, merge sort, heap sort, radix sort (LSD), radix sort (MSD), std::sort (intro sort), std::stable_sort (adaptive merge sort), shell sort, bubble sort, cocktail shaker sort, gnome sort, bitonic sort and bogo sort (30 seconds of it).
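As a taste of one of the entries in that list, here is a minimal cocktail shaker sort, the bidirectional variant of bubble sort (a sketch of the standard algorithm, unrelated to the video's actual implementation):

```python
def cocktail_shaker_sort(a):
    """Sweep left-to-right bubbling the largest element to the top,
    then right-to-left bubbling the smallest to the bottom, shrinking
    the unsorted window from both ends."""
    lo, hi = 0, len(a) - 1
    while lo < hi:
        for i in range(lo, hi):         # forward pass
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
        hi -= 1
        for i in range(hi, lo, -1):     # backward pass
            if a[i - 1] > a[i]:
                a[i - 1], a[i] = a[i], a[i - 1]
        lo += 1
    return a

print(cocktail_shaker_sort([5, 1, 4, 2, 8, 0, 2]))  # [0, 1, 2, 2, 4, 5, 8]
```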
More information at the "Sound of Sorting"

Mathematics of soccer: Shot angles

Consider a situation in which a soccer player runs straight, with the ball, towards the bottom line of the field. Intuitively, it is clear that there is an optimal point maximizing the shot angle, providing the best place to kick in order to improve the chances of scoring a goal. If the player reaches the bottom line, the angle is zero and his chances are just horrible; if the player kicks from far away, the angle is also too small!
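To make the intuition quantitative (my sketch of the standard derivation): assume the player runs along a line perpendicular to the goal line, with the two goalposts at lateral distances $a$ and $b$ (with $b > a$) from that line. At distance $d$ from the goal line the shot angle is \[\theta(d) = \arctan \frac{b}{d} - \arctan \frac{a}{d}\] and setting the derivative to zero gives \[\frac{a}{d^2 + a^2} = \frac{b}{d^2 + b^2} \quad \Rightarrow \quad d = \sqrt{ab},\] the geometric mean of the two distances. Varying the running line traces out the locus of optimal points shown below.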

Locus of the optimal points
Two different types of kicks: Diego Armando Maradona in Napoli-Cesena 2-0, Serie A 1987/88, an amazing example of the "Maradona feeling" about the optimal place to kick
and the "impossible" goal by Marco Van Basten during the final of Euro '88

from "Mathematics of Soccer" by Alda Carvalho, Carlos Pereira dos Santos, Jorge Nuno Silva. "Recreational Mathematics Magazine" (2014)