europhysicsnews
THE MAGAZINE OF THE EUROPEAN PHYSICAL SOCIETY
The long-sought-after Higgs boson
Carbon Neutral Aviation
Routes to antispacetime
Transforming physics pedagogy in the UK
The Exoplanet revolution
2019 • Volume 50 • number 5&6
European Union countries price: 107€ per year (VAT not included)
CONTENTS
Cover picture: Three years after its final landing in Abu Dhabi (UAE), Solar Impulse,
the solar-powered airplane in which Bertrand Piccard and André Borschberg flew
around the world without a drop of fuel, may once again reach the skies.
©Solar Impulse - Jean Revillard - Rezo.ch
EPS EDITORIAL
03 Physics and Innovation – P. Rudolf
NEWS
04 EPS Historic Sites: the former Department of Theoretical Physics of the Goethe University
06 Sarah Köster awarded the EPS Emmy Noether Distinction
09 GRAND: a Giant Radio Array for Neutrino Detection
11 Improving the gender balance at a Dutch university
12 The eyes of the Baikal GVD neutrino telescope
13 A personal report: setting up a junior research group in Hamburg
14 Nobel prize for Jim Peebles for his theoretical discoveries in physical cosmology
15 Nobel prize for Mayor and Queloz as fathers of the field of extrasolar planets
16 OSA Frontiers in Optics 2019
HIGHLIGHTS
17 Machine Learning for Characterization and Control of Non-equilibrium Plasmas
Delocalization of edge in topological phases
Partial synchronization as a model for unihemispheric sleep
18 Fractal agglomerates fragment into dissimilar fragments
Chemotherapy drugs react differently to radiation while in water
19 Science puts historical claims to the test
Improving heat recycling with the thermodiffusion effect
Optimising structures within complex arrangements of bubbles
20 New insights into the early stages of creep deformation
Fragmenting ions and radiation sensitizers
Improving the signal-to-noise ratio in quantum chromodynamics simulations
21 Conductivity at the edges of graphene bilayers
WONDER-2018: a workshop on nuclear data
FEATURES
22 Letter: Some comments on the historical paper by H. Schmidt-Böcking – J. Vigué
24 How we found the long-sought-after Higgs boson – C. Mariotti
29 Carbon Neutral Aviation – R. Terwel, J. Kerkhoven and F. W. Saris
34 Safe and dangerous routes to antispacetime – V.B. Eltsov, J. Nissinen and G.E. Volovik
38 Transforming physics pedagogy in the UK – D. Sands
41 The Exoplanet revolution – Y. Miguel
44 Advanced instructions for imparting knowledge – M. Plüss
ANNUAL INDEX
47 Annual index volume 50 - 2019
EPS EDITORIAL
Physics and Innovation
In October, the EPS presented to the press "The Importance
of Physics to the Economies of Europe", an analysis of the
contribution of our scientific discipline to the economies of
Europe over the six-year period 2011-2016.
This study, prepared by the Centre for
Economics and Business Research
(Cebr) using statistics available in
the public domain through Eurostat, shows
that the physics-based sector accounts for
16% of the total turnover of the EU28 business
economy, which is more than the gross
turnover contribution of the entire retail
sector. Physics-based industries within Europe
employed 17.8 million people in 2016,
more than a million more than in 2011.
The production of physics-based goods and services also has a significant knock-on ‘downstream’ effect throughout the supply chain. This multiplier effect means that for every €1 of physics-based output, a total of €2.49 of output is generated throughout the EU28 economy as a whole. The employment multiplier is even higher: every job in physics-based industries supports a total of 3.34 jobs in the economy as a whole.
To keep its competitiveness, the European
physics-based sector is highly R&D
intensive and its expenditure on R&D within
the EU28 exceeded €22 billion in every
year of the period 2011-2016. However, what seems difficult for policy makers, and for the general public that elects them, to comprehend is that keeping the physics-based sector of the economy strong while addressing global societal challenges is a very long-term process. Solutions to
these challenges will require investments in
innovation and development but also in all
stages of research, from basic (frontier and
open-ended) to applied. Indeed, it will not
suffice to develop technologies on the basis
of the current knowledge: new paths and
new knowledge will be needed, which can
only be generated by open-ended research.
In this context it is worrying that in the
new European Commission the areas of
education and research are not explicitly
represented anymore but hidden under the
"Innovation and Youth" title. Accentuating
economic exploitability (i.e. "innovation")
at the expense of research and education,
and reducing “education” to “youth” while
it is essential to all ages, neglects that research
and education are the foundation
of the wealth and comfort we enjoy in Europe.
Many of you, together with members
of other disciplines forming the scientific
community of Europe, have signed the
open letter demanding that the EU commission
revise the title for commissioner
Gabriel to “Education, Research, Innovation
and Youth” reflecting Europe's dedication
to all of these crucial areas. At the time of writing this editorial, the letter had been signed by more than 10,000 scientists and institutions, among them 19 Nobel Prize winners, but its effect is not yet known. Even though I have had very positive feedback from some Members of the European Parliament, who have emphasized that they support this request and will fight for more EU funding for research and education, the walk is still very much uphill.
This is why I would like to invite each and every one of you to reach out to the general public, as well as to relevant politicians at the local, regional, national and European Union level: it is important that we show them our work and explain that, while industrial R&D is important for product improvement and for creating the next generation of goods and services, basic and foundational research often leads to discoveries and applications with long-term economic impact. As Serge Haroche said at the Festakt for the 50 years of the EPS: Einstein certainly did not think of improving eye operations with lasers when working on stimulated emission.
We as a community are very much aware
of the link between research and innovation
but people outside science very often
are not.
Another issue that needs to be clarified for the general public and policy makers is the importance of funding academic research in order to guarantee the quality of education required to generate a sufficient number of highly skilled graduates for the physics-based industry. In many European countries investments in research have been stagnating, and the urgently needed growing numbers of graduate and post-graduate students are difficult to sustain at the present investment level. More and more often I also hear of talented young researchers who decide to leave university because they are frustrated by the low success rates of their grant proposals. When student numbers increase, not only does more staff have to be hired, but research budgets also need to increase if we want our universities to retain their academic standards.
If you are willing to advocate but feel
unsure how to approach this, your Member
Society may help: EPS will support this by
offering a “train the trainers” workshop in
2020, where a delegate of each member society
will be trained in how to organize (and
give) workshops for teaching physicists how
to talk to policy makers.
According to the Lisbon Strategy of 2000, European governments and firms should together spend 3% of GDP annually on R&D, but in many countries this objective has not yet been reached. Let’s all engage in convincing our policy makers that more dedicated funding for education and research is urgently needed to maintain a sound basis for innovation in Europe and to guarantee a high standard of living for European citizens in the future, as well as to warrant a high research content in the physics educational programmes of our students, who will be the ones to produce the science for this.
Petra Rudolf, EPS President
europhysicsnews
2019 • Volume 50 • number 5&6
Europhysics news is the magazine of the European
physics community. It is owned by the European
Physical Society and produced in cooperation with EDP
Sciences. The staff of EDP Sciences are involved in the
production of the magazine and are not responsible for
editorial content. Most contributors to Europhysics news
are volunteers and their work is greatly appreciated
by the Editor and the Editorial Advisory Board.
Europhysics news is also available online at:
www.europhysicsnews.org
General instructions to authors can be found at:
www.eps.org/?page=publications
Editor: Els de Wolf (NL)
Twitter @elsdewolf1
Science Editor: Ferenc Igloi (HU)
Email: igloi.ferenc@wigner.mta.hu
Executive Editor: David Lee
Email: david.lee@eps.org
Graphic designer: Xavier de Araujo
Email: xavier.dearaujo@eps.org
Director of Publication: Jean-Marc Quilbé
Editorial Advisory Board:
Tracey Clarke (UK), Gonçalo Figueira (PT),
Zsolt Fülöp (HU), Manuel Güdel (A), Tero Heikkilä (FI),
Agnès Henri (FR), Jo Hermans (NL), Christoph Keller (NL),
Robert Klanner (DE), Antigone Marino (IT),
Laurence Ramos (FR), Chris Rossel (CH),
Victor R. Velasco (SP), Sylvain Viroulet (FR)
© European Physical Society and EDP Sciences
EPS Secretariat
Address: EPS • 6 rue des Frères Lumière
68200 Mulhouse • France
Tel: +33 389 32 94 40 • fax: +33 389 32 94 49
www.eps.org
Secretariat is open 09.00–12.00 / 13.30–17.30 CET
except weekends and French public holidays.
EDP Sciences
Chief Executive Officer: Jean-Marc Quilbé
Publishing Director: Agnès Henri
Email: agnes.henri@edpsciences.org
Production: Sophie Hosotte
Advertising: Camila Lobos
Email: camila.lobos@edpsciences.org
Address: EDP Sciences
17 avenue du Hoggar • BP 112 • PA de Courtabœuf
F-91944 Les Ulis Cedex A • France
Tel: +33 169 18 75 75 • fax: +33 169 28 84 91
www.edpsciences.org
Subscriptions
Individual Members of the European Physical Society receive Europhysics news free of charge. Members of EPS National Member Societies receive Europhysics news through their society, except members of the Institute of Physics in the United Kingdom and the German Physical Society, who have access to an e-version at www.europhysicsnews.org. The following are 2019 print version subscription prices available through EDP Sciences (prices include postal delivery cost).
Institutions - European Union countries: 107 € (VAT not included, 20%). Rest of the world: 128 €
Student - European Union countries: 50.83 € (VAT not included, 20%). Rest of the world: 61 €
Contact: Europhysics News, EDP Sciences
17 avenue du Hoggar - Parc d'activités de Courtaboeuf
BP 112 - F-91944 Les Ulis CEDEX A, France
subscribers@edpsciences.org or visit www.edpsciences.org
ISSN 0531-7479 • ISSN 1432-1092 (electronic edition)
Printer: Fabrègue • Saint-Yrieix-la-Perche, France
Legal deposit: November 2019
EPS HISTORIC SITES
The former Department of Theoretical Physics of the Goethe University
Frankfurt, Germany
An EPS Historic Site was inaugurated at the University of Frankfurt in recognition of key contributions to quantum mechanics made by its Physics Department in 1919-1922.
Picture: Past Physics Institute, built in 1906.
The former Department of Theoretical Physics of the Goethe University of Frankfurt has recently been recognised by the European Physical Society (EPS) as an “EPS Historic Site,” the fifth in Germany. On 3 September 2019, a plaque marking the site was unveiled by the Presidents of the European and German Physical Societies, the Goethe University and the Physics Association of Frankfurt, as well as by the speakers and audience of the international symposium “Otto Stern's Molecular Beam Research and its Impact on Science.” The Historic Site honors the work of Max Born, Otto Stern, Walther Gerlach, Elisabeth Bormann and Alfred Landé done at Frankfurt during 1919-1922.
At the Institute of Theoretical Physics, then headed by Born, key discoveries were made that contributed decisively to the development of quantum mechanics. In 1919, Otto Stern launched there the revolutionary molecular beam technique that made it possible to send atoms and molecules with well-defined momentum through vacuum
and to measure with high accuracy
the deflections they underwent when
acted upon by transversal forces.
Thereby, heretofore unforeseen quantum
properties of nuclei, atoms, and
molecules could be revealed that
became the basis for our current understanding
of quantum matter. Stern
would receive the 1943 Nobel Prize
in Physics for developing the molecular
beam technique and for his
later measurements of the magnetic
moment of the proton. Experiments
done in 1920 by Born and Bormann
sent a beam of silver atoms through
molecular gases and measured their
mean free path in order to estimate
sizes of molecules. An iconic experiment
in 1922 by Stern and Gerlach
demonstrated space quantization
of atomic magnetic moments and
thereby also, for the first time, the quantization of atomic angular momenta.
In 1921, Landé postulated the
coupling of angular momenta as the
basis of the electron dynamics within
atoms. The Historic Site includes the
seat of the Physical Society of Frankfurt,
the oldest in Germany, founded
in 1824.
Picture: Frankfurt-Bockenheim with the University around 1920 (1: Physics building, 2: Senckenberg Museum, 3: Main University building with Auditorium).
A host of prominent descendants
of the Stern-Gerlach experiment make
use of the key concept of space quantization
of angular momentum. Foremost
are nuclear magnetic resonance,
optical pumping, the laser, and the atomic clock, as well as incisive discoveries such as the Lamb shift and the anomalous magnetic moment of the electron, which launched
quantum electrodynamics. In the
1960s, the molecular beam technique
made inroads into chemistry as well,
by enabling the study of binary collisions
of chemically well-defined reagents.
In the 1990s, the diffraction of atoms,
pioneered by Stern’s group, became
one of the leading methods for the
non-destructive investigation of surface
structures as well as surface phonons
and adsorbate vibrations. At about
the same time, a renaissance had begun
in atomic physics, nurtured by the
development of techniques to cool and
trap atoms. Based on a combination of
molecular beams with laser cooling,
these techniques enabled the realization
of quantum degeneracy in atomic
gases, launched condensed-matter
physics with tunable interactions, and
transformed metrology.
Bretislav Friedrich, Fritz-Haber-Institut der Max-Planck-Gesellschaft, Berlin
Horst Schmidt-Böcking, Institut für Kernphysik, Goethe Universität Frankfurt
Picture left: Historic laboratory where the Stern-Gerlach experiment was performed (left: Otfried Madelung, son of Erwin Madelung, director of the theoretical physics institute in 1922; right: Alan Templeton, grandnephew of Otto Stern). Picture right: Past Physics building; the red dot marks the room of the Stern-Gerlach experiment (picture taken in 2011).
Prizes
Sarah Köster awarded the
EPS Emmy Noether Distinction
Sarah Köster, a biophysics professor at the University of Göttingen,
Germany [1], has been awarded the prestigious Emmy Noether
Distinction of the European Physical Society for her seminal
contributions to the physics of biological cells and biopolymers,
in particular for the understanding of intermediate filaments, and
her impressive ability in teaching and recruiting women scientists
in her field of research.
Photo: Markus Osterhoff
Born in the south of Germany,
she studied physics at
the University of Ulm and
performed her PhD work at the
University of Ulm, Boston University
and the Max Planck Institute for
Dynamics and Self-Organization,
Göttingen. She received her PhD
from the University of Göttingen
in 2006. Her thesis was awarded
the Berliner-Ungewitter-Award of
the Göttingen physics faculty as
well as the Otto-Hahn-Medal of the
Max-Planck-Society. In 2008, after
two years of postdoctoral work
at Harvard University with David
Weitz, she returned to Göttingen as
an assistant professor. In 2010 she was
awarded the Helene-Lange-Award of
the EWE-Foundation. In 2011 she
was promoted to tenured associate
professor and in 2017 to full professor
in the faculty of physics of the University
of Göttingen, where she leads
the research group Cellular Biophysics
at the Institute of X-Ray Physics.
In 2016 she received an ERC Consolidator
grant. Meanwhile she leads
a fairly large research group in Göttingen
with a surprisingly high proportion
of young women scientists
(14 out of 17 students and postdocs
are female!). Several of these excellent
students and postdocs have been
awarded prizes and fellowships themselves.
The group works on disentangling
how molecular interactions in
protein filaments define their unique
mechanical properties and thereby
the physical properties of cells.
Sarah decided early on to focus her research on topics that were a bit off the beaten track. This is how she became one of the few physicists working on so-called intermediate filaments, a type of biopolymer found inside biological cells. These protein filaments had been extensively studied by biochemists, biologists and physicians for their importance in health and disease. Their physical properties, which are equally important for the cell, have only recently been studied in detail by a few groups, among them Sarah’s.
By combining carefully controlled experiments, measuring piconewton forces on nanometre length scales, with analytical and numerical modelling, they discovered very interesting material properties: a specific molecular mechanism allows the filaments to dissipate a lot of energy, possibly a protection mechanism of the cell against strong impact, like an airbag for the cell. Furthermore, the filaments are extensible when pulled slowly, but stiff when pulled fast, which could determine the cell’s flexibility when squeezing through narrow constrictions while protecting it against unwanted damage, similar to the seat belt in a car.
On the importance of the EPS Emmy Noether Distinction as an instrument to highlight the work of female physicists: “It's a great honour, which I appreciate a lot! The distinction indeed highlights our work and hopefully motivates (female and male) physics students to pursue a career in academia. I would like to stress that research is always a team effort and I have the pleasure to work with a wonderful team.“
Sarah and her group also engaged
in developing novel methods for imaging
cells using x-rays, which are
now established as a complementary
alternative to fluorescence microscopy
and electron microscopy.
Through these projects, Sarah is very
active in the photon science community
as the experiments are typically
performed at synchrotron sources.
Besides her exceptional contributions
to science, Sarah has always
placed emphasis on teaching.
She enjoys working with young undergraduate students and teaching the freshman lecture in experimental
physics, as well as educating more
advanced Master’s and PhD students
in biophysics. Her group has always
attracted many students on all levels
and has considerably grown over the
years. At the University of Göttingen,
her students and postdocs are
known for their highly cooperative
and helpful attitude with each other
and especially towards new members.
The proportion of women
among the group members is remarkable
as is the high level of expertise
and strong passion for science of
the young scientists. Clearly, this is a
consequence of Sarah’s own enthusiasm
for her research.
Beyond building an excellent research group of her own, Sarah’s visibility in the biological physics
community has grown during the
past decade since she started her
independent research group. In
her department at the University
of Göttingen she has always been
fully involved in committee work
as well as numerous collaborative
research efforts such as Collaborative
Research Centres (CRC) and
Excellence Clusters of the DFG. As
spokesperson/vice-spokesperson
of two of these CRCs she was also
involved in shaping the scientific directions
of the centres. In 2015 the
Biological Physics division of the
German Physical Society (DPG)
elected her vice-spokesperson and in
2017 spokesperson. Thus, Sarah has very successfully co-organized the DPG Spring Meeting, which is the largest physics meeting in Europe, four times by now.
On how she built a research group with a large fraction of women researchers: “I actually did not take any special measures. I got applications for PhD and postdoc positions from very talented women and was lucky and could convince them to work with me. Of course I am equally happy about applications from young male scientists. In the end, my research group does a great job in recruiting new members. They are always very welcoming and helpful with new people and manage to create an excellent work atmosphere in our lab.“
With her contribution to leading research
in science, her enthusiasm and
passion for biological physics and her
position as an important role model
for young researchers, Sarah builds
a bridge to the next generation of researchers.
Her career is a great example of a successful woman’s career in physics, as highlighted by the 2019 Emmy Noether Distinction.
Josef A. Käs, Soft Matter Physics Division, Leipzig University
Reference
[1] Prof. Dr. Sarah Köster, Institute for X-Ray Physics, Georg-August-University Göttingen. Website: www.uni-goettingen.de/en/91107.html
Experiment
GRAND:
a Giant Radio Array for Neutrino Detection
GRAND [1] is a planned observatory for the detection of ultra-high-energy particles raining down on Earth from the cosmos. GRAND will be a true multi-messenger observatory, detecting and identifying neutral particles such as photons and neutrinos, as well as charged nuclei, at energies exceeding 10¹⁷ eV.
The scale of the project, 200,000 km², makes GRAND the most sensitive probe for discovering neutrinos and photons at these energies, thereby unambiguously identifying the sources of ultra-high-energy cosmic rays (UHECR).
The origin of ultra-high-energy neutrinos
When cosmic particles with energies in excess of 10²⁰ eV were first detected [2], the quest for their origin started. Most recently, the Pierre Auger Collaboration has analyzed the arrival directions of the incoming UHECR and concluded that their origin is likely outside our own Milky Way [3]. Due to interactions with the cosmic microwave background, the origin of these particles is likely within 100 million light years (to set the scale: the typical distance between galaxies is a few million light years), or relatively close. In their interactions
with the cosmic microwave background
ultra-high-energy photons
and neutrinos are created that, unlike
the charged cosmic rays, travel
in straight lines through space before
arriving at Earth (see image). Thus, the
measurement of these particles will be
a better probe of the origin of UHECR
for more distant sources. In addition,
their flux will be a good measure for
the nature of the UHECR and thereby
it will shed light on how nature is capable of accelerating particles to energies 10 million times higher than any accelerator on Earth can deliver.
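For scale, that factor can be recovered with round numbers (a back-of-the-envelope comparison; the ~10¹³ eV benchmark for the largest terrestrial accelerator is my own round figure, not taken from the article):

$$
\frac{E_{\text{UHECR}}}{E_{\text{accelerator}}} \sim \frac{10^{20}\ \text{eV}}{10^{13}\ \text{eV}} = 10^{7},
$$

i.e. ten million, consistent with the figure quoted above.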
Although GRAND will be sensitive
to all particles, this article focuses on its measurement of tau neutrinos specifically. The IceCube collaboration has recently measured the first cosmic tau neutrinos [4], but its detector is too small to perform neutrino astronomy at energy scales beyond 10¹⁸ eV. GRAND is
the only planned observatory with the
aperture and expected reconstruction
accuracy to open this new window on
the universe.
Picture: The different types of cosmic particles and waves.
Neutrinos are often called ghost particles, able to fly through the Earth and be detected on the other side. This is no longer true at the high-energy frontier that we are interested in, because the cross section for neutrino interactions with matter rises with energy. At ultra-high energies there is a high probability that a neutrino passing through a mountain or skimming the Earth’s crust interacts with the rock and thereby, in the case of a tau neutrino, produces a tau lepton. This tau lepton sometimes escapes the mountain and enters the atmosphere. As the tau lepton has a lifetime of about 3×10⁻¹³ s, it decays with an energy-dependent average decay length of the order of 50 km. The decay products of the tau create a cascade of interactions in the atmosphere, producing billions of secondary particles in what is called an air shower. These secondary particles gyrate in the magnetic field of the Earth due to the Lorentz force, which causes radiation in the MHz regime. This radiation will be measured by cheap dipole antennas, from which the energy and direction of the tau lepton that created the air shower can be inferred.
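The quoted decay length can be checked with a quick estimate (order of magnitude only; the 10¹⁸ eV tau energy is an assumed benchmark for the GRAND regime, while the tau rest energy and proper decay length are standard values):

$$
L \approx \gamma\, c\tau_\tau = \frac{E_\tau}{m_\tau c^2}\, c\tau_\tau \approx \frac{10^{18}\ \text{eV}}{1.78\times 10^{9}\ \text{eV}} \times 87\ \mu\text{m} \approx 50\ \text{km}.
$$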
This detection technique has been established in the recent past by radio arrays measuring the properties of cosmic-ray-induced air showers [5], and the GRAND array can thus be used to measure all the different types of ultra-high-energy particles that create air showers (see GRAND detection strategy).
The GRAND setup
GRAND will be a distributed observatory with 10-20 different sites spread over the world. Each site will cover an area of about 10,000-20,000 km², instrumented with one measurement antenna per km². The total project will consist of 200,000 antennas on a total area of 200,000 km². In order to realize such an ambitious project, we adopt a staged approach. The stage we are currently designing is GRANDProto300, a detector of 300 antennas covering an area of about 200 km². This stage of the detector will be deployed in the Gobi desert at an altitude of 2800 m, near the town of LengHu in the QingHai province in China. On this site we recently deployed a first series of four prototypes in order to monitor the local conditions in preparation for GRANDProto300 (see picture). The main goal of GRANDProto300 is to test and optimize the detector design, trigger algorithms, and the reconstruction and analysis tools for GRAND. However, even this (modest-size) detector has its own physics goals. It will be used to measure the composition of charged cosmic rays in the energy region 10¹⁷-10¹⁹ eV. At the low end of this energy region the particles are expected to be accelerated predominantly in our own galaxy, whereas at the high end their origin is mostly outside our Milky Way. Fully understanding this transition region is very important for the study of the sources of cosmic rays as well as of the shielding effect of the galactic magnetic fields.
Picture: GRAND detection strategy.
Picture: First prototype setup at the LengHu site, used to monitor background conditions in preparation for GRANDProto300.
After successfully operating and optimizing GRANDProto300 for about five years, we expect to be ready to set up the first GRAND hotspot, GRAND10K, at the same location as GRANDProto300. This 10,000 km² array will at that time be the largest instrumented area for the detection of ultra-high-energy particles from space and may already detect the first ultra-high-energy neutrinos.
The GRAND collaboration
The GRAND proto-collaboration is
welcoming new partners. It is continuously
growing, and consists at this
moment of about 60 collaborators
from 10 countries. A Memorandum
of Understanding that will formally
establish the GRAND collaboration
is being drafted and the first partners
are ready to sign the MoU.
Charles Timmermans, Nikhef / Radboud University Nijmegen
References
[1] GRAND Collaboration, Sci. China Phys. Mech. Astron. 63:2, 19501 (2020)
[2] J. Linsley, Phys. Rev. Lett. 10, 146 (1963)
[3] Pierre Auger Collaboration, Astrophys. J. 868-1, 4 (2018)
[4] IceCube Collaboration, Phys. Rev. D 99-3, 032007 (2019)
[5] F. G. Schröder, EPJ Web Conf. 208, 15001 (2019)
Report
Improving the gender balance
at a Dutch university
Shortly before the 2019 summer recess, my employer – the Eindhoven
University of Technology (TU/e) in the Netherlands – announced a
radical plan. From July 1st onward, it would consider only female applicants for any new faculty position posted [1]. The announcement quickly went viral on various social media outlets, and for a day or two the TU/e board was treated to equal measures of praise, ridicule and rage: a highly polarised reception revealing that the exclusionary hiring measure had somehow struck a deeper nerve. In a personal capacity, I’d like to take this opportunity to sketch how this extreme
move came to be, what we expect from it and what’s next.
Photo: Rob Stork
The immediate cause is quite clear:
The TU/e has occupied the bottom
slot in the national gender rankings
for some time now; it has the lowest
percentage of female full professors of any
university in the Netherlands. Between
2014 and 2017, the percentage of female
full professors has risen from 8.6% to 12.6%
(LNVH Monitor 2018, Dutch only, https://
monitor.lnvh.nl). The university had set itself
the target of reaching 20% in 2020; at
the current rate that target will be missed.
These figures appear even bleaker in a European context: with an overall average of 18.7% female staff in the highest academic ranks, the Netherlands is well below the EU average of 23.7% (She Figures 2018 of the EC). For more than ten years, the TU/e
has worked to improve its percentages, but
to little avail as evidenced by the slow rise.
Challenged by an increasing call from
inside the university, and perhaps also
somewhat embarrassed by the poor showing,
the TU/e leadership decided to change
its approach. An organisation that seeks to
improve its gender balance (without laying
off its male employees) needs to do
one thing: hire more women. With typical
Dutch directness, it was decided to do exactly
that.
In some ways, this amounts to treating
the symptoms rather than the cause. To be
sure, the issues surrounding gender balance
in academia are complex and impossible to
capture in a single number or percentage.
The way I see it, there are really three distinct
problems which, ideally, must all be
structurally addressed, at the same time.
• The first concerns recruitment and selection, and happens at the front door. The problem is simply that more men than women end up getting hired.
• The second problem is the glass ceiling. Once hired and ‘inside the system’, the progression through the academic ranks of female staff is delayed or impeded compared to the career paths of male colleagues (She Figures 2018 of the EC).
• Thirdly, there is the leaky pipeline: on average, female academics are more likely than their male colleagues to abandon their careers before they reach the highest ranks (Why women leave academia, and why universities should be worried, Curt Rice in The Guardian of 24 May 2012).
These three problems are well documented,
and they are interrelated. Together,
they have led to the skewed balance we
face today. The causes for each, likewise,
are largely known. The issues at the front
door are often attributed to implicit biases in
hiring and selection; subtle, unintentional
or fully subconscious mechanisms by which
we project stereotypes and attitudes on
those we are tasked with evaluating. These
biases [2,3] color our assessment, often to
the disadvantage of those unlike ourselves.
As a result, committees tend to select ‘more
of the same’, which will obviously slow
down the establishment of balance where
balance is lacking. Another important aspect
of the front door issues in STEM fields
is simply that fewer women apply for faculty
positions to begin with. As a result, even a
fair and unbiased process would still result
in gender imbalance. This ‘supply side’ issue
is sometimes offered as an argument
for the inevitability of gender imbalance
in STEM, but I strongly feel we should not
accept this as a fait accompli. Across Europe,
early stage scientists in far more favourable
male/female proportions are preparing for
their careers in science, technology and innovation.
Apparently, the women among
them are more hesitant to pursue the next
step than their male colleagues. Given the
other two problems – which of course are
well-known to applicants – I completely
understand their hesitation. I too would
think twice about launching myself into
an environment that will hinder my career
progression. I too would think twice about a
job where so many colleagues have so much
trouble protecting the balance between
work and personal life. I too would think
twice about volunteering to become the first
female staff member in a research unit or
an entire department, knowing full well this
would entail endless quota-filler service and
committee assignments. For those same
reasons, I am not surprised that the career
pipeline is leaky. There is no doubt that the
average STEM career in the Netherlands is
a profoundly different personal experience
– and a considerably less enjoyable one – for
women than it is for men.
The implicit biases that skew hiring and
selection and the glass ceiling, likewise,
share a common cause: The illusion that our
current processes let us competently assess
the quality and merit of candidates, both in
hiring and promotion decisions. Among
the most frequently raised objections to the
TU/e measures was some form of an appeal
to a purely meritocratic ideal of academic
organization: Obviously, its proposers
would offer, any hiring procedure should
only select the best qualified individual
from among the candidates, regardless of
anything else. The myth of this omniscient
ability to single out the perfect candidate
every time should be considered soundly
busted by now [4,5], and we are left with the
realisation that our processes are flawed and
inaccurate. Despite its very best intentions, the academy is not a perfect meritocracy.
For decades, appointment and promotion
committees have based their decision
making on noisy, biased measurements
and the results are the staff compositions
we see today.
For these, and many other reasons, it would be short-sighted and unwise to address only the problem at the front door by simply hiring only women. In Eindhoven, we
are well aware that to ensure that our new
colleagues remain within our university
we must revisit how we support people and
careers. We explore new ways of assessing
and rewarding accomplishment (in line
with for instance The San Francisco Declaration
on Research Assessment, https://
sfdora.org). We are setting up mentorship
and coaching programmes for new staff
and are addressing the mounting workloads
that increasingly weigh down academic
staff in the Netherlands [6]. Start-up
funds were previously rare, but are now
standard practice in Eindhoven and guarantee
our new colleagues a fair shot, an
equal playing field and (temporary) independence
from the highly competitive and
low success-rate external funding calls as
they build their labs and their identity as
independent PIs.
The understandable but unfortunately
one-sided media focus on the exclusion
of male candidates at TU/e stirred powerful
emotions but in the end detracts from
the actual job at hand. The radical hiring
measure is a short sprint, aimed to get our
university to a viable point of departure for
its next phase. I have little doubt that we
will get to that 20% in 2020 target, but I am
much more excited to be a part of the larger
experiment, in which we transform our university into a considerate and supportive employer that provides a career space where all of its employees may flourish.
NNV DIVERSITY PRIZE 2018 AWARD GOES TO GRONINGEN
The first edition of the NNV Diversity Prize was won by the Faculty of Science and Engineering of the University of Groningen (RUG). The Netherlands Physical Society (NNV) has created the biannual prize for the physics institution that is most successful in putting an open diversity policy into practice. The prize is a tribute and an inspiring example for other institutes and/or departments.
http://www.epsnews.eu/2019/02/nnv-diversity-prize-2018-awarded-to-groningen
Cornelis Storm, Professor of Theoretical Biophysics, Eindhoven University of Technology
References
The article is an adapted version of an article
published in Dutch by the author in the magazine
of the Netherlands Physical Society (NTvN-85/10).
[1] More precisely, the measure dictates that for
every new staff vacancy, and for the first half
year that it is open, only female candidates
may be considered. In case no suitable female
candidate is found in those first six months,
the position is opened to all applicants. Target
female influx figures between now and the
end of 2020 are 35% at the full and associate
professor level, and 50% at the assistant professor
level. If these targets are realized, the
TU/e will make its 20% by 2020 target.
[2] C.A. Moss-Racusin, J.F. Dovidio, V.L. Brescoll, M.J.
Graham and J. Handelsman, PNAS 109 (41),
16474 (2012)
[3] C. Wennerås and A. Wold., Nature 387, 341
(1997)
[4] E.J. Castilla and S. Benard, Administrative
Science Quarterly 55 (4), 543 (2010)
[5] E. Reuben, P. Sapienza, and L. Zingales. PNAS
111 (12), 4403 (2014)
[6] TU/e to tackle work pressure with a dedicated approach, TU/e Cursor (2018) https://www.cursor.tue.nl/en/news/2018/februari/week-3/tue-to-tackle-work-pressure-with-a-dedicated-approach
[7] For an excellent entry into the literature: Where’s the evidence? A little science about bias and gender equality, Curt Rice (2017) https://curt-rice.com/2017/09/23/wheres-the-evidence-a-little-science-about-bias-and-gender-equality/
THE EYES OF THE BAIKAL GVD NEUTRINO TELESCOPE
The eyes of neutrino telescopes are the photomultipliers embedded in pressure-resistant glass spheres. Thousands of these are necessary to instrument the large volumes of deep ice, deep sea and – in the case of the Baikal GVD telescope – deep lake water. That requires smoothly running production facilities, such as the one pictured at the Joint Institute for Nuclear Research (JINR) in Dubna, Russia. Once deployed at the bottom of Lake Baikal in Siberia, they will register the Cherenkov light of charged particles produced in collisions of cosmic neutrinos with the matter in the vicinity of the telescope. An old detection technique, but still convincingly efficient.
Society
© Philip Bartz for the Volkswagen Foundation
A personal report:
setting up a junior research group
in Hamburg
I am a postdoctoral researcher in theoretical physics at the Center for Free Electron Laser
Science at DESY in Hamburg, Germany. Starting from January 2020, I will lead a junior
research group at the Universität Hamburg funded by the Freigeist Fellowship Program of
the Volkswagen Foundation.
The Volkswagen Foundation is an
independent foundation that promotes
science and technology in
research and teaching. It shares the name
of the car manufacturer for historical reasons
and is not affiliated with it. I learned about the Volkswagen Foundation when I attended a conference in Hanover in 2014 organized by the foundation. I looked into
its initiatives and found out about the
Freigeist Fellowship Program that funds
research projects at the boundaries between
established fields of research. Later,
when I was shaping the idea of my research
proposal for a junior research group, I
realized that it matches the objectives of
the program.
My research project "Seeing excitons
in motion" aims to employ advances of
the attosecond science for light-energy-conversion
applications. This project
is inspired by recent developments in the
field of attosecond science. Nowadays,
it is possible to produce light pulses so short that they can capture processes with sub-femtosecond temporal resolution (shorter than a millionth of a billionth of a second, 10⁻¹⁵ s), which is the time scale relevant for electronic motion. There are a
number of problems in physics, chemistry
and biology, where the ability to resolve
electronic motion would provide many
new fascinating insights. The problem of
revealing exciton formation and dynamics
in the field of photovoltaics is one of the
exciting examples.
Excitons are quasiparticles that describe
a special state of electrons in semiconductors.
They play the key role in the photovoltaic effect used by solar cells, and the ability to see how excitons move will enable the observation and control of the mechanisms that govern the efficiency of solar cells. The
goal of this project is to invent methods to
follow excitonic motion by means of ultrashort
light pulses with sub-femtosecond
temporal and atomic spatial resolution. The
interaction of light pulses and moving electrons
at sub-femtosecond time scales gives
rise to quantum effects that do not appear
at longer times. A rigorous quantum-mechanical
analysis will be employed to describe
how photovoltaic materials in the
regime of exciton dynamics interact with
ultrashort light pulses. This description will
provide tools to extract the finest details of
exciton dynamics, a very complex type of
electron dynamics, from signals obtained
by means of ultrashort light pulses. Based
on this analysis, the project aims to propose
and design novel, cutting-edge experiments
that will open up new perspectives for renewable
energy research.
My research project is funded for six
years. The funding includes my own position
and four PhD-student positions in
total. Each PhD position is for three years,
which means two PhD positions on average
at a time. The Volkswagen Foundation
also approved funds for me to stay twice, for two-month periods, at the PULSE Institute at the Stanford Linear Accelerator
Center in the USA. I plan to collaborate
with scientists at the PULSE Institute
on the development of concepts to follow
electronic motion by means of ultrafast
x-ray scattering at Linac Coherent Light
Source at Stanford. I greatly appreciate this opportunity, since visits of two months' duration make it possible for me to gain additional international experience while being the mother of a small child.
I was delighted to see that my request for GPU servers was approved. I greatly appreciate that the Volkswagen Foundation understood the need of a theoretical group for high-performance computing equipment to be able to perform advanced
calculations. It will enable the group to
perform realistic simulations of exciton
dynamics in photovoltaic materials, which
are highly computationally demanding.
I have chosen the Universität Hamburg
(UHH) as a host institution for my project.
The UHH is one of eleven German universities that have been awarded the status of a University of Excellence. The UHH has
also recently won the Excellence Initiative
by the German Federal and State governments
to establish the cluster of excellence
"CUI: Advanced imaging of matter" (AIM).
The cluster of excellence AIM combines scientific
teams developing technologies to explore
the structure, dynamics and control of
matter at the atomic scale. The goals of my
project are in line with the mission of the
AIM. I have already several collaborators
within the cluster and there are a number of
further opportunities to collaborate.
Hamburg offers an excellent scientific environment. The city hosts several light source facilities that are unique in the world. There are also several research
institutions in Hamburg that are partners
of the UHH within the AIM: the
Deutsches Elektronen-Synchrotron DESY,
Max-Planck-Institute for the Structure
and Dynamics of Matter and the European
XFEL. I have chosen the UHH as host institution from among these research institutions in order to work in a classical academic environment, which prepares me to become a full professor.
Daria Gorelova, DESY Photon Science division
Prizes
Nobel prize for Jim Peebles
for his theoretical discoveries in physical cosmology
On October 8, 2019, the Nobel
Committee awarded the Nobel
Prize in Physics to Phillip James
Edwin Peebles, known in the scientific
world as Jim Peebles or PJEP, for his theoretical
discoveries in physical cosmology
that contributed to our understanding of
the evolution of the universe and Earth’s
place in the cosmos.
In the second half of the twentieth
century, cosmology - the science about
the Universe, its structure and evolution -
ceased to be the subject of philosophical or
theological speculation, or mathematical
models with an obscure reference to reality,
and became a precise natural science
based on observations, computer simulations
and solid theoretical considerations.
This change was possible thanks to the close
cooperation of observers and theoreticians,
among them Jim Peebles.
Jim Peebles was born in 1935 in Winnipeg, Canada. He began his studies there, but in 1958 moved to Princeton in the USA, where he obtained a doctorate in physics in 1962. He has remained associated with Princeton University to this day.
Peebles could have been a Nobel Prize winner
long ago. In the mid-1960s, together with Robert Dicke and David Wilkinson, he contributed to building an antenna to detect
microwave background radiation, the
oldest light in the universe, the remnant of
the Big Bang. By chance, it was recorded
slightly earlier by Robert Wilson and Arno
Penzias, but the theoretical explanation and
cosmological connotation of the discovery
came from the Princeton group. Penzias
and Wilson were awarded the Nobel Prize
for Physics in 1978 for their discovery. The
microwave background radiation provides
a "picture" of the Universe in the state it
was about 300,000 years after the Big Bang,
when there were no stars or galaxies - "there
was nothing." However, it contains traces of
unevenness in the distribution of matter,
which later formed the entire large-scale
structure of the Cosmos with stars and
galaxies. For years, Peebles dealt with the problem of linking what is seen in this picture with what is now seen in the distribution of matter in space: how, from an almost smooth distribution of matter, compact objects like galaxies emerged after billions of years, separated from each other by almost empty space, and how this relates to the "oldest light in the universe".
The discovery of this "oldest light"
marked a turning point in the development
of modern cosmology. Suddenly it
became a precise observational science and
pure speculation and mathematical models
began to be confronted with results of observations.
Measurements of the microwave
background radiation allowed Peebles to
accurately trace the process of primordial
synthesis of light chemical elements. In
the beginning, after the Big Bang, "there
was nothing", not even chemical elements.
Everything emerged gradually. Today we
know where all the elements of the periodic table come from. Almost all of them
were formed in crucibles, which are the interiors
of hot stars, or in processes related
to the evolution and end of life of stars. But
the lightest, isotopes of hydrogen, helium
and lithium, were created during the first
three minutes of the Universe's existence,
after the Big Bang. The discovery of the
"oldest light" allowed Peebles to trace the
process of formation of isotopes of these
elements. He published the results in 1966.
After working on background radiation and
the formation of elements, at the age of
31 he became a classic.
Peebles contributed to other important
discoveries. In the '70s, together with another
prominent Princeton astrophysicist,
Jeremiah P. Ostriker, he noticed that thin
discs of spiral galaxies are unstable. For
their stabilisation it is necessary for such
galaxies to be surrounded by a vast, massive
halo of dark, non-luminous matter. This
was the first work proving that the universe
is filled with a large amount of dark matter.
Peebles and Ostriker also showed how nascent
galaxies create unexpected structures
that look like a bar in their midst.
For many years, Peebles' close associate
was a Polish cosmologist from the Copernicus
Astronomical Center in Warsaw,
Roman Juszkiewicz. They were friends.
Together they wrote a couple of scientific
papers on "the oldest light" and on the formation
of galaxies. Jim Peebles is the author
of several classic monographs in the field
of cosmology. As a researcher of the largescale
structure of the Universe, he once said:
"Give me a thousand redshifts (a magnitude
that tells you how far away a galaxy is), and
I'll tell you what the structure of the Universe
is". Today we know millions of redshifts
and still we do not know exactly this
structure. Asked if he maintained his view
(that a thousand redshifts would be enough),
he answered "no". How so? "The universe
is changing and people are changing, their
views are changing and so is mine.”
Stanisław Bajtlik, Copernicus Astronomical Center, Polish Academy of Sciences, Warsaw
Picture: Left to right: J. Peebles, M. Mayor, D. Queloz. © Nobel Media 2019. Illustration: Niklas Elmehed
Prizes
Nobel prize for Mayor and Queloz
as fathers of the field of extrasolar planets
Michel Mayor and Didier Queloz
together received half of the Nobel
Prize in Physics 2019 for arguably
the most important discovery in astronomy in
the past 100 years. Although it does not involve
new or fundamental physics, it signaled the
birth of a new research area – that of extrasolar
planets. First met by much skepticism, their
discovery, now almost 25 years ago, started a
true revolution culminating in the thousands
of planets we know now. The initial excitement
is continuing today, with some planets being
surprisingly Earth-like, possibly habitable, or
even inhabited. The search for signs of extraterrestrial
biological activity will start in earnest
in the next decade – a direct legacy of the work
by Mayor and Queloz.
The presence of planets orbiting stars other than the Sun had been the subject of speculation for hundreds of years, but with stars so distant and their orbiting planets so faint, it had not been possible to prove their existence. A number of false claims over the previous decades had made astronomers wary and cautious, and it is unlikely
that the discovery of two planetary-mass
objects orbiting an exotic neutron star by Alexander
Wolszczan and Dale Frail in 1992, with
hindsight a mere oddity, had changed this.
The detection method used by Mayor and
Queloz, the radial velocity technique, measures
the regular change in Doppler shift of a stellar
spectrum caused by the gravitational pull of a
companion object. The method was well proven
and had been used to measure orbits and
masses of binary stars since the early days of
spectroscopy. However, Jupiter is about a thousand times less massive than a Sun-like star, meaning that the Doppler shift induced by such a planet would only change by a bit over twenty meters per second during its decade-long orbit – too difficult to measure, it was thought. This did not deter them from starting to monitor a sample of 140 stars just to see what was out there.
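That “bit over twenty meters per second” follows from a simple momentum-balance estimate (round numbers of my own, not from Mayor and Queloz's analysis): star and planet orbit their common centre of mass, so the stellar reflex speed is

$$
v_\star \approx \frac{M_p}{M_\star}\, v_p \approx \frac{1}{1000} \times 13\ \text{km s}^{-1} \approx 13\ \text{m s}^{-1},
$$

using Jupiter's orbital speed of about 13 km/s. Over one full orbit the measured radial velocity then swings from roughly +13 to −13 m/s, a total change of a bit over twenty meters per second.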
Using their newly built and innovative instrument,
ELODIE, on the 1.93m telescope at
the Observatoire de Haute Provence in France,
they began to observe targets in April 1994,
and already in the autumn of that same year
they noticed the velocity variations of 51 Pegasi.
Since they would imply a planet of half a Jupiter
mass in a 4.2-day orbit, well within the orbit of
Mercury, they were at first unsure about this
result. Instead of publishing their discovery,
and risking the fate of some illustrious
predecessors, they waited until the next
observing season to monitor the star again, but
continuously for eight nights. The outcome is
now Nobel-prize history.
The initial reception of their announcement
was bimodal, ranging from hysterical excitement
that a centuries-long quest had finally
delivered, to deep skepticism. Of course, the
Doppler method only measures the radial
component of the reflex motion of the star, so
for a face-on orbit the companion could be a
brown dwarf or a small star. And even if it was
of planetary mass, would this really be a planet
formed like those in our own solar system, or a
completely different beast? Also, how sure could
the team be that this was not some previously
undiscovered stellar pulsation: maybe only the
surface of the star is moving back and forth, not
the star as a whole. The latter suggestion was
the most serious one, with the eminent stellar
astronomer David Gray publishing a paper in Nature in early 1997, claiming that 51 Pegasi showed line-shape variations pointing to stellar
oscillations, not a planet. However, already
before his article appeared in print, its findings
were rebutted with better data: no pulsations.
Within a year, six other gas-giant planets
were found, many of them by the American
rivals Geoff Marcy and Paul Butler and collaborators,
who had been sitting on their data
(and also quickly confirmed the existence of 51
Pegasi b). This provided a powerful statistical
argument against face-on orbits. Also, theoretical
endeavors showed that planets could migrate
inwards through early interactions with
the circumstellar disk. In the following years,
some planets were shown to exhibit eccentric
orbits, making stellar oscillations as the origin
of their Doppler variations highly unlikely.
The first multiple planet systems were found.
This all culminated in the discovery by David Charbonneau and his team (which also included Mayor) of the first transiting planet in 2000, showing a dark object crossing a stellar disk.
This removed the last piece of doubt about
the correctness of the planetary hypothesis,
even from the most skeptical and conservative
astronomers.
Ever since their Nobel-prize discovery,
Mayor and Queloz have been at the forefront of
the exoplanet field. With their team at Geneva
Observatory, they developed the most stable
spectrographs to date, such as HARPS and ESPRESSO at the European Southern Observatory
in Chile, capable of detecting Earth-mass
planets. They also had their hand in many of
the transit discoveries, such as those with the
CoRoT and Kepler space missions. The future
is bright. Since the early years of this millennium, it has also been possible to probe the atmospheres of planets, either through transit spectroscopy
or direct imaging. This has resulted in increasingly
detailed climatological information, such
as planet temperatures, molecular abundances,
clouds, global circulations and planet spin.
While this is currently restricted to mostly
gas-giant planets, this will soon change with
the launch of the James Webb Space Telescope,
and the construction of the European Extremely
Large Telescope in Chile. Characterization
of increasingly more Earth-like planets will be
within reach, such as Proxima b, a temperate
Earth-mass planet around our nearest neighbor,
and the seven Earth-size planets of the
TRAPPIST-1 system. Do they have atmospheres?
What are their main constituents?
Are they water-rich? Or even, do they show
molecular oxygen – a possible sign of biological
activity? Thank you, Michel and Didier,
for starting this thrilling journey. There will
be many more exciting discoveries to come.
Ignas Snellen, Leiden Observatory, Leiden University, The Netherlands
Sources and further reading:
[1] M. Mayor & D. Queloz, A Jupiter-mass companion to a solar-type star, Nature 378, 355 (1995)
[2] R. Jayawardhana, Strange New Worlds: the search for alien planets and life beyond our solar system, Princeton University Press
Events
OSA Frontiers in Optics 2019
A Focus on Quantum Brings to Light Innovations
in Research and Applications
The international ‘OSA Frontiers in Optics and Laser Science APS/DLS (FiO + LS)’
conference is a joint meeting of the Optical Society (OSA) and the Division of Laser Science
(DLS) of the American Physical Society.
The conference unites the OSA and APS communities for five days with a variety of high-quality speakers and special event sessions. The accompanying Science + Industry Showcase features leading optics companies, technology products and programs, making the conference a place where scientific ideas meet industrial interests.
In particular, this year’s conference
showed how a focus on quantum is
bringing to light innovations in both
research and applications. The first
plenary presentation of FiO + LS, titled
‘Generating High-Intensity, Ultrashort
Optical Pulses’ was delivered
by 2013 OSA President and 2018 Nobel
Laureate in Physics Donna Strickland,
who discussed the foundational
research that led to her award-winning
development of chirped pulse
amplification (CPA) and the myriad
innovations it has spawned. The
power of CPA comes from its ability
to create incredibly powerful yet fleetingly
brief bursts of laser light, which
led to numerous scientific advances
and to such innovations as advanced
laser eye surgery and precision cutting
applications for industry. Ronald Hanson, scientific director at QuTech, Delft University of Technology, The Netherlands,
provided a glimpse of future information
sharing in the second plenary
talk titled ‘The Dawn of a Quantum
Internet’. Hanson described how future
quantum networks will harness
the power of entanglement, potentially
revolutionising the way data
is stored, processed and transmitted
across global networks.
Technical sessions at this year’s
conference centred around four
thematic areas: Autonomous Systems,
Nanophotonics and Plasmonics,
Virtual Reality and Augmented
Vision and Quantum Technologies
– the latter the first-ever cross-cutting
theme at a FiO + LS conference.
These themes provided opportunities
for focused exploration into the compelling
and promising technologies of
today and tomorrow. In keeping with
the conference objective to couple
science and applications, two theme
programs were supplemented by industry
sessions on market trends and opportunities for key technologies.
m Ronald Hanson, plenary speaker at FiO 2019
b Ursula Gibson, president of The Optical Society
. Young scientists at FiO 2019
Each theme included an all-invited
program of panel discussions and was
anchored by a 45-minute talk offered
by a visionary speaker from industry
and research. This year’s line-up included:
Jeremy J. Baumberg (University
of Cambridge, U.K.), Steven Cundiff
(University of Michigan, U.S.A.), Bernard
Kress (Microsoft Corp., U.S.A.),
John Martinis (Google and University
of California, Santa Barbara, U.S.A.),
Toshiki Tajima (University of California,
Irvine, U.S.A.), Mohan M. Trivedi
(University of California, San Diego,
U.S.A.) and Jelena Vuckovic (Stanford
University, U.S.A.).
With about 1300 participants, FiO + LS 2019 was a great success. The
message was clear that, as an industry,
optics and photonics is on the forefront
of development, both in research
and in translating discoveries into innovation
engines. For those interested,
video recordings of the plenary talks
are available at the conference’s website
at frontiersinoptics.org. n
l l(based on material provided
by the OSA media relations)
from european journals
HIGHLIGHTS
Highlights from European journals
PLASMA PHYSICS
Machine Learning for
Characterization and Control
of Non-equilibrium Plasmas
Recent breakthroughs in machine learning and artificial intelligence
have created cross-disciplinary research opportunities in
the field of non-equilibrium plasma (NEP) treatment of complex
surfaces in applications such as plasma medicine, plasma catalysis,
and materials processing. Machine learning can potentially
transform modeling and simulation, diagnostics, and control of
NEP. Machine learning can aid in the development of predictive
models for plasma-surface interactions and plasma induced
surface responses from experiments, especially when there is a
lack of comprehensive theoretical models for the fundamental
plasma-surface interaction mechanisms. Machine learning also
holds promise for extracting the latent and often multivariate
information of on-line plasma diagnostics. This can facilitate real-time
inference of physical and chemical properties of NEP as
well as complex surfaces interacting with NEP. Learning-based approaches to feedback control are another promising research area for NEP applications, especially when the plasma interacts with complex surfaces with time-varying and uncertain characteristics that in turn lead to unpredictable plasma behaviour and surface responses. Learning-based process control and artificial intelligence are expected to become indispensable for reliable, flexible, and effective NEP treatment of complex surfaces in the future. n
llA. Mesbah and D. B. Graves,
'Machine learning for modeling, diagnostics and control
of non-equilibrium plasmas', J. Phys. D: Appl. Phys. 52,
30LT02 (2019)
MATERIAL SCIENCE
Delocalization of edge
in topological phases
Topological properties are a hot topic currently. If the bulk of
a system is topologically non-trivial (Chern number C ≠ 0), the
bulk-boundary correspondence predicts in-gap states in finite
samples. These states close the energy gap between bands of
different topology so that it can change at boundaries. Conventionally,
the in-gap states are localized at these boundaries so
that they are edge states. We show, however, that this localization
only occurs for positive indirect gap. Generically, without
indirect gap the in-gap states become extended by mixing with
bulk states despite C ≠ 0. This is illustrated for two fundamental
lattice models (Haldane and checkerboard model) by adding
terms to the Hamiltonians proportional to the identity in momentum
space. Thus, the dispersions change while the topology
remains unchanged. These terms can close the indirect gap
and lead to delocalization of edge states in finite geometries.
Thus, discrete topological invariants may exist without localized
edge modes. This underlines the vital significance of indirect
gaps for the existence of topological edge states and puts the bulk-boundary correspondence into perspective. n
m Dispersions in a topological system with positive indirect gap and edge states (left); negative indirect gap and no edge states (right).
llM. Malki and G. S. Uhrig,
'Delocalization of edge states in topological phases',
EPL 127, 27001 (2019)
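The mechanism can be stated in one line (our notation, for orientation only): adding to the Bloch Hamiltonian a term proportional to the identity shifts both bands equally,
\[
H'(\mathbf{k}) = H(\mathbf{k}) + f(\mathbf{k})\,\mathbb{1}
\quad\Rightarrow\quad
E'_\pm(\mathbf{k}) = E_\pm(\mathbf{k}) + f(\mathbf{k}),
\]
so the eigenstates, the Berry curvature and hence the Chern number C are untouched, while the indirect gap \(\Delta_{\mathrm{ind}} = \min_{\mathbf{k}} E'_+(\mathbf{k}) - \max_{\mathbf{k}} E'_-(\mathbf{k})\) can be driven negative even though the direct gap stays open at every \(\mathbf{k}\).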
MEDICAL PHYSICS
Partial synchronization as a
model for unihemispheric sleep
Human brains exhibit a slight structural asymmetry of their two hemispheres (see Figure). We have investigated the dynamical asymmetry arising from this natural structural difference in healthy human subjects, using a minimal model which elucidates the modalities of unihemispheric sleep in the human brain, where one hemisphere sleeps while the other remains awake. In fact, this state is common among migratory birds and aquatic mammals.
By choosing appropriate coupling parameters in a network
of FitzHugh-Nagumo oscillators with empirical structural connectivity,
we have observed that our brain model exhibits
spontaneous symmetry breaking and bistability, where each
hemisphere may engage in either of two dynamical states, characterized by a relatively high or low degree of synchronization.
However, a high degree of synchronization in one of the
hemispheres always coincides with a low degree of synchronization
in the other. This dynamical asymmetry can be even
enhanced by tuning the inter-hemispheric coupling strength.
These results are in accordance with the assumption that unihemispheric
sleep requires a certain degree of inter-hemispheric
separation.
The structural asymmetry in the brain allows for partial synchronization
dynamics, which may be used to model unihemispheric
sleep or explain the mechanism of the first-night effect
in human sleep. n
m Brain connectivity.
llL. Ramlow, J. Sawicki, A. Zakharova, J. Hlinka,
J. Ch. Claussen and E. Schöll,
'Partial synchronization in empirical brain networks as a
model for unihemispheric sleep', EPL 126, 50007 (2019)
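For readers who want to experiment, here is a minimal Python sketch in the spirit of this work, but not the authors' model: the empirical brain connectivity is replaced by two random blocks ("hemispheres") with weaker inter-hemispheric links, coupling acts only on the activator variable, and all parameter values are illustrative.

```python
# Two-community network of FitzHugh-Nagumo oscillators: a toy analogue
# of unihemispheric sleep. All parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 50                                  # oscillators per hemisphere
eps, a = 0.05, 0.5                      # FHN time-scale separation and excitability
s_intra, s_inter = 0.7, 0.1             # coupling strengths (assumed)

A = np.zeros((2 * n, 2 * n))
A[:n, :n] = s_intra * (rng.random((n, n)) < 0.8)   # dense within hemisphere 1
A[n:, n:] = s_intra * (rng.random((n, n)) < 0.8)   # dense within hemisphere 2
A[:n, n:] = s_inter * (rng.random((n, n)) < 0.1)   # sparse between hemispheres
A[n:, :n] = s_inter * (rng.random((n, n)) < 0.1)
np.fill_diagonal(A, 0.0)
row_sum = A.sum(axis=1)

u = rng.uniform(-2, 2, 2 * n)           # activator variables
v = rng.uniform(-2, 2, 2 * n)           # inhibitor variables
dt = 0.002
for _ in range(100_000):                # explicit Euler integration
    coupling = (A @ u - row_sum * u) / n        # diffusive coupling, normalised
    du = (u - u**3 / 3 - v + coupling) / eps
    dv = u + a
    u, v = u + dt * du, v + dt * dv

def order_parameter(u_h, v_h):
    """Kuramoto order parameter from the geometric phase in the (u, v) plane."""
    theta = np.arctan2(v_h, u_h)
    return abs(np.mean(np.exp(1j * theta)))

print("hemisphere 1:", order_parameter(u[:n], v[:n]))
print("hemisphere 2:", order_parameter(u[n:], v[n:]))
# Depending on initial conditions, the two hemispheres can settle into
# states of unequal synchrony, the minimal analogue of unihemispheric sleep.
```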
MATERIAL SCIENCE
Fractal agglomerates fragment into dissimilar fragments
Fragmentation occurs everywhere in nature: polymers degrade, soot particles break up, cells divide, volcanic ash fragments, droplets break up in turbulent flow, lung fluid fragments to generate droplets. Nevertheless, little is known about the distribution of fragment sizes when a fractal agglomerate breaks up. The fragment-size distribution upon random bond removal in a linear chain composed of identical units is uniform. How does the distribution change as the morphology of the fragmenting structure changes?
More generally, fragmentation kernels, which depend on the size distribution and the fragmentation rate, have been extensively used in population balance equations. Usually, their analytical form is dictated by homogeneity requirements (as suggested by coagulation kernels) or by physical arguments. In this work, the morphology-dependent fragment-size distribution is determined from numerical simulations of fragmenting in silico fractal-like agglomerates. The overarching idea is to map the agglomerate onto a graph via the adjacency matrix, the matrix that specifies the monomer-monomer bonds. Fragmentation occurs via random bond removal. The simulations showed that the distribution is U-shaped, i.e. fragmentation into dissimilar fragments, and is accurately reproduced by a symmetric beta distribution. n
m A fragmentation event and the distribution of fragment sizes upon random bond removal in a Diffusion Limited Cluster Aggregation (DLCA) cluster, EPL 127, 46002 (2019)
llY. Drossinos, A. D. Melas, M. Kostoglou and L. Isella,
'Morphology-dependent random binary fragmentation of in silico fractal-like agglomerates', EPL 127, 46002 (2019)
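The idea is easy to play with. Below is a minimal Python sketch (ours, not the authors' code): a random tree built with networkx stands in for a fractal-like agglomerate, a path graph gives the linear-chain reference, and all parameters are illustrative.

```python
# Random binary fragmentation on a bonded structure, via the graph picture:
# the adjacency structure encodes monomer-monomer bonds, and fragmentation
# is the removal of one randomly chosen bond.
import random
import networkx as nx

def fragment_sizes(graph):
    """Remove one randomly chosen bond; return the sizes of the fragments."""
    g = graph.copy()
    g.remove_edge(*random.choice(list(g.edges)))
    return [len(c) for c in nx.connected_components(g)]

N, trials = 100, 10_000
chain = nx.path_graph(N)                # linear chain: uniform fragment sizes
tree = nx.barabasi_albert_graph(N, 1)   # branched tree: crude agglomerate stand-in

chain_hist = [s for _ in range(trials) for s in fragment_sizes(chain)]
tree_hist = [s for _ in range(trials) for s in fragment_sizes(tree)]
# A histogram of tree_hist (normalised to s/N) shows the U shape: most bond
# removals split off a small branch plus its large complement, i.e.
# fragmentation into dissimilar pieces.
```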
APPLIED PHYSICS
Chemotherapy drugs
react differently to
radiation while in water
A new study looked at the way certain molecules found in chemotherapy drugs react to radiation while in water, which is closer to conditions in the body, in contrast to previous research that studied them in the gas phase
m Chemotherapy medication reacts to radiation. Image by Michal Jarmoluk from Pixabay.
Cancer treatment often involves a combination of chemotherapy
and radiotherapy. Chemotherapy uses medication to stop
cancer cells reproducing, but the medication affects the entire
body. Radiotherapy uses radiation to kill the cancer cells, and
it is targeted to the tumour site. In a recent study, published
in the journal EPJ D, the authors studied selected molecules of
relevance in this context. They wanted to see how these molecules
were individually affected by radiation similar to that
used in radiotherapy. n
llS. E. Huber and A. Mauracher,
'Electron impact ionisation cross sections of fluoro-substituted
nucleosides', Eur. Phys. J. D 73, 137 (2019)
HISTORY
Science puts
historical claims to the test
The latest analytical techniques available to scientists
can confirm the validity of historical sources in some
cases, and suggest a need for reconsideration in others
m Science provides valuable dating tools for artefacts
As any historian will tell you, we can rarely take the claims made by our ancestors at face value. The authenticity of many of the artefacts which shape our understanding of the past has been hotly debated for centuries, with
little consensus amongst researchers. Now, many of these
disputes are being resolved through scientific research,
including two studies recently published in EPJ Plus. The
first of these, led by Diego Armando Badillo-Sanchez at the
University of Évora in Portugal, analysed an artefact named
‘Francisco Pizarro’s Banner of Arms’ – believed to have been
carried by the Spanish conquistador during his conquest
of the Inca Empire in the 16th century. The second team, headed by Armida Sodo at Roma Tre University in Italy, investigated a colour print of Charlemagne – the medieval ruler who united much of Western Europe – assumed to be from the 16th century. n
llD. A. Badillo-Sanchez, C. B. Dias, A. Manhita
and N. Schiavon,
'The National Museum of Colombia’s “Francisco Pizarro’s
Banner of Arms”: a multianalytical approach to help uncovering
its history', Eur. Phys. J. Plus 134, 224 (2019)
llA. Sodo, L. Ruggiero, S. Ridolfi, E. Savage,
L. Valbonetti, and M.A. Ricci,
'Dating of a unique six-colour relief print by historical
and archaeometric methods', Eur. Phys. J. Plus 134,
276 (2019)
MATERIAL SCIENCE
Improving heat recycling with
the thermodiffusion effect
Numerical simulations of the thermodiffusion effect within
falling film absorbers reveal that thin films composed of
liquid mixtures with negative thermodiffusion coefficients
enhance the efficiency of heat recycling
m Recycling heat using falling films.
Absorption heat transformers can effectively reuse the waste heat generated in various industries. In these devices, specialised liquids form thin films as they flow downward due to gravity. These liquid films can absorb vapour, and the heat is then extracted by a coolant so that it can be used in future processes.
So far, however, there has been little research into how the performance of these films is influenced by the thermodiffusion effect – a behaviour seen in mixtures, where the different components respond differently to the same temperature gradient. In a recently published study, researchers from the Fluid Mechanics group at Mondragon University and Tecnalia pooled their expertise in transport phenomena and absorption technology. Together, they explored for the first time the influence of thermodiffusion on the absorption, temperature and concentration profiles of falling films. n
llP. Fernandez de Arroiabe, A. Martinez-Urrutia,
X. Peña, M. Martinez-Agirre, M. M. Bou-Ali,
'On the thermodiffusion effect in vertical plate heat exchangers',
Eur. Phys. J. E 42, 85 (2019)
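For orientation, the standard phenomenological description of the effect (textbook form, not taken from the paper) writes the mass flux of one component of a binary mixture with concentration c as
\[
\mathbf{J} = -\rho D \,\nabla c \;-\; \rho\, c(1-c)\, D_T \,\nabla T,
\qquad S_T = \frac{D_T}{D},
\]
so the sign of the Soret (thermodiffusion) coefficient \(S_T\) determines whether that component migrates towards the hot or the cold side of the falling film.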
SOFT MATTER
Optimising structures within
complex arrangements of bubbles
Computer simulations reveal the secret to stronger,
cheaper structures shaped like bubbly foams
While structures which emulate foam-like arrangements of bubbles
are lightweight and cheap to build, they are also remarkably
stable. The bubbles which cover the iconic Beijing Aquatics Centre,
for example, each have the same volume, but are arranged in
a way which minimises the total area of the structure – optimising
the building’s construction. The mathematics underlying this
behaviour is now well understood, but if the areas of the bubbles
are not equal, the situation becomes more complicated. Ultimately,
this makes it harder to make general statements about
how the total surface area or, in 2D, edge length, or ‘perimeter’,
can be minimised to optimise structural stability. In new research
published recently, the authors explore how different numbers of
2D bubbles of two different areas can be arranged within circular
discs, in ways which minimise their perimeters. n
m Optimising an arrangement of five bubbles.
llF. Headley and S. Cox,
'Least-perimeter partition of the disc into N bubbles of
two different areas', Eur. Phys. J. E 42, 92 (2019)
MATERIAL SCIENCE
New insights into the early
stages of creep deformation
Computer simulations show that the evolution of
material structures during creep deformation can modify
material properties.
The properties of many materials can change permanently
when they are pushed beyond their limits. When a given material
is subjected to a force, or ‘load’, which is stronger than a
certain limit, it can become so deformed that it won’t return
to its original shape, even after the load is removed. However,
heavy loads aren’t strictly necessary to deform materials irreversibly;
this can also occur if they are subjected to lighter loads
over long periods of time, allowing a slow process called ‘creep’
to take place. Physicists have understood for some time that
this behaviour involves sequences of small, sudden deformations,
but until now, they have lacked a full understanding of how creep deformation affects material properties over time. In new research published recently, the authors analysed the characteristic ways in which material structures evolve during the early stages of creep deformation. n
. Varying strain patterns during creep deformation.
llD. Fernandez Castellanos and M. Zaiser,
'Statistical dynamics of early creep stages in disordered materials', Eur. Phys. J. B 92, 139 (2019)
APPLIED PHYSICS
Fragmenting ions and radiation sensitizers
A new study using mass spectrometry is helping piece together what happens when DNA that has been sensitized by the oncology drug 5-fluorouracil is subjected to the ionising radiation used in radiotherapy.
m Mass spectrum of 5-fluorouracil showing ions produced by impact with high-energy electrons.
The anti-cancer drug 5-fluorouracil (5FU) acts as a radiosensitizer:
it is rapidly taken up into the DNA of cancer cells, making
the cells more sensitive to radiotherapy. However, little is known
about the precise mechanism through which radiation damages
cells. A team of scientists have now used mass spectrometry
to shed some light on this process; their work was recently
published in EPJ D. A full understanding of this process could
ultimately lead to new ways of protecting normal tissues from
the radiation damage caused by essential cancer treatments. n
llP.J.M. van der Burgt, M.A. Brown, J. Bockova,
A. Rebelo, M. Ryszka, J-C. Poully and S. Eden,
'Fragmentation processes of ionized 5-fluorouracil in the
gas phase and within clusters', Eur. Phys. J. D 73, 184 (2019)
PARTICLE PHYSICS
Improving the signal-to-noise
ratio in quantum
chromodynamics simulations
A new Monte Carlo based simulation method enables more
precise simulation for ensembles of elementary particles
m The fermions (the class of particle that this technique can be used to model) include the particles that make up 'ordinary' matter (protons, neutrons and electrons). © Wikimedia Commons.
Over the last few decades, the exponential increase in computer
power and accompanying increase in the quality of algorithms
has enabled theoretical and particle physicists to perform more
complex and precise simulations of fundamental particles and
their interactions. If you increase the number of lattice points in
a simulation, it becomes harder to tell the difference between
the observed result of the simulation and the surrounding
noise. A new study, recently published in EPJ Plus, describes a technique for simulating particle ensembles that are 'large' (at least by the standards of particle physics). This improves the signal-to-noise ratio and thus the precision of the simulation; crucially, it can also be used to model ensembles of baryons: a category of subatomic particles that includes the protons and neutrons that make up atomic nuclei. n
llM. Cè,
'Locality and multi-level sampling with fermions', Eur. Phys.
J. Plus 134, 299 (2019)
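The problem the technique attacks is quantified by a classic argument due to Parisi and Lepage (standard background, not taken from the paper itself): for a baryon two-point function estimated from N samples, the signal-to-noise ratio decays exponentially with the source-sink separation t,
\[
\frac{\text{signal}}{\text{noise}} \;\sim\; \sqrt{N}\; e^{-\left(M_N - \frac{3}{2} m_\pi\right) t},
\]
so keeping a fixed precision at larger t is exponentially expensive with standard sampling; multi-level sampling exploits the locality of the theory to update sub-domains of the lattice independently and tame this growth.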
MATERIAL SCIENCE
Conductivity at the edges
of graphene bilayers
The conductivity of dual layers of graphene greatly
depends on the states of carbon atoms at their edges; a
property which could have important implications for information transmission on quantum scales.
. Intriguing properties arise in graphene bilayers.
Made from 2D sheets of carbon atoms arranged in honeycomb lattices, graphene displays a wide array of properties regarding the conduction of heat and electricity. When two layers of graphene are stacked on top of each other to form a ‘bilayer’, these properties can become even more interesting. At the edges of these bilayers, for example, atoms can sometimes exist in an exotic state of matter referred to as the ‘quantum spin Hall’ (QSH) state, depending on the nature of the interaction between their spins and their motions, referred to as their ‘spin-orbit coupling’ (SOC). While the QSH state is allowed for ‘intrinsic’ SOC, it is destroyed by ‘Rashba’ SOC. In an article recently published, the authors showed that these two types of SOC are responsible for variations in the ways in which graphene bilayers conduct electricity. n
llP. Sinha and S. Basu,
'Study of edge states and conductivity in spin-orbit coupled bilayer graphene', Eur. Phys. J. B 92, 207 (2019)
NUCLEAR PHYSICS
WONDER-2018: a workshop on nuclear data
m By combining experimental data (left, example of experimental setup) and
theoretical calculations (middle, example of theoretical calculations), it becomes
possible to perform an evaluation of nuclear data (right, example of evaluated
cross-sections). Those evaluated nuclear data are collected in a regularly updated
international library such as JEFF (Joint Evaluated Fission and Fusion).
To describe the path of neutrons in the material but also the
chain reactions that take place in a reactor and the changes in
the composition of matter due to nuclear reactions, neutronics
uses computer codes. Over the last two decades, these codes have reached such a level of performance that the main source of uncertainty in neutronic calculations today comes from nuclear data. In this context, the 5th edition of the International Workshop On Nuclear Data Evaluation for Reactor Applications (WONDER-2018), organized by the French Alternative Energies and Atomic Energy Commission (CEA) in collaboration with the NEA (Nuclear Energy Agency of the OECD), was held in Aix-en-Provence, France, in October 2018. The main objective
was to identify future trends in the measurement, modeling
and evaluation of nuclear data needed for current reactors and
innovative reactor concepts. Proceedings were published in
EPJ Web-of-Conferences:
(https://epjwoc.epj.org/articles/epjconf/abs/2019/16/contents/
contents.html). n
llWONDER-2018 – 5th International Workshop
On Nuclear Data Evaluation for Reactor applications
EPJ Web of Conferences 211 (2019)
[Letter to the Editors]
by Jacques Vigué
Laboratoire Collisions, Agrégats, Réactivité UMR 5589, CNRS - Université de Toulouse, UPS
DOI: https://doi.org/10.1051/epn/2019501
Some comments on the historical paper by
H. Schmidt-Böcking “The Stern-Gerlach experiment
re-examined by an experimenter” (EPN 50/3 pp. 15-19)
This paper is very interesting, with a large amount of poorly known or fully unknown details about this experiment. However, I would like to correct two imprecisions and add a small piece of complementary information:
Bottom of page 15: in the sentence “Using the Molecular Beam Method (MBM) that he invented in 1919…”, H. Schmidt-Böcking attributes to Stern the invention of the Molecular Beam Method. However, it is well known that the first molecular beam was built by L. Dunoyer [1] in 1911, who used it in the following years to observe the resonance fluorescence of the sodium atom.
Middle of the second column of page 16: “As was typical
of all experiments he [=Stern] performed, he carefully
calculated the required conditions (the beam collimation
parameters, the strength of the magnetic field, etc.) in order
to be able to resolve, from the deflected beam, the tiny
transverse momentum transfer due to the existence of an
internal atomic magnetic moment (fig.1)”. This sentence
which refers to Stern's 1921 paper [2] is somewhat misleading. The magnetic field gradient considered by Stern was 10⁴ Gauss per centimeter (10² T/m in SI units) and the calculated beam deflection was 1/100 mm, far too small to be detected. This was due to the largely underestimated value of the magnetic field gradient, and the experiment was a success because the actual gradient was at least 10 times larger. One may wonder why Stern made such a large underestimation of the magnetic field gradient...
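The order of magnitude is easy to check (the magnet length and oven temperature below are illustrative values, not taken from the original papers). An atom of mass m and speed v crossing a gradient \(\partial B/\partial z\) over a length L is deflected by
\[
\delta z = \frac{\mu_B\,(\partial B/\partial z)\, L^2}{2\, m v^2}
\;\approx\;
\frac{(9.3\times10^{-24}\,\mathrm{J/T})(10^2\,\mathrm{T/m})(3.5\times10^{-2}\,\mathrm{m})^2}{2\times 7\times10^{-20}\,\mathrm{J}}
\;\approx\; 10^{-2}\,\mathrm{mm},
\]
taking \(m v^2 \approx 4 k_B T\) for silver atoms effusing from an oven at roughly 1300 K. This reproduces Stern's 1/100 mm, while a tenfold larger gradient brings the deflection to the ~0.1 mm actually observed.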
The complementary information is a note in Stern's 1921 paper which states (text taken from the English translation): “Mr. W. Gerlach and I have been occupied for some time with the realization of this experiment. The reason for the present publication is the forthcoming paper by Messrs. Kallmann and Reiche concerning the deflection of electrical dipolar molecules in an inhomogeneous electric field. As I understand from the proofs, which were most kindly sent to me, our considerations are mutually complementary....”. This note proves that the competition was strong and that, at the same time, authors kindly communicated the proofs of their papers to potential competitors! n
References
[1] L. Dunoyer, Comptes Rendus Acad. Sc. 152, 592 (1911).
[2] O. Stern, Z. Phys. 7, 249 (1921) which is available in an English
translation in O. Stern, Z. Phys. D 10, 114 (1988)
[Letter to the Editors]
ANSWER by Horst Schmidt-Böcking
First I am happy and thankful that there are readers who
read papers carefully and are willing to come in contact
with the author.
To the history of the so-called molecular beam method MBM:
I presented in my paper the view of an experimental atomic physicist who has been performing, with his group, high-resolution momentum imaging of atomic particles and fragments for several decades. Thus I see an important difference between the early atomic beam experiments performed by Dunoyer and those performed 8 years later by Stern. Stern used the atomic beam of Dunoyer but ennobled it into a precision method for measuring tiny momentum transfers. This is for me really the beginning of the so-called MBM.
But Dunoyer's work was clearly important for Stern, and he mentioned it in his first publication on the Maxwell-Boltzmann velocity distribution measurement (O. Stern, Eine direkte Messung der thermischen Molekulargeschwindigkeit, Z. Physik 2, 49 (1920)). We will publish in October, in the German “Journal für Physik”, an article which begins with the work of Dunoyer and then describes the basic new features of Otto Stern's MBM.
Stern was indeed in close contact with people in the Kaiser Wilhelm Institut für Physikalische Chemie in Berlin (today the Fritz Haber Institut) and knew what Reiche and Kallmann were planning to do (electric dipole moment). Therefore Stern published very early his idea of how to perform a similar experiment, deflecting atoms by the interaction of the inner atomic magnetic moment with an inhomogeneous magnetic field (the idea to use inhomogeneous fields was Madelung's, see the interview of Stern in 1961 by Jost). In Stern's first publication (Otto Stern, Ein Weg zur experimentellen Prüfung der Richtungsquantelung im Magnetfeld, Z. Physik 7, 249 (1921)) his calculation was based on a magnetic field gradient of about 10,000 Gauss/cm, thus the predicted deflection was only 0.01 mm; but later, in the real experiment of Feb. 7th to 8th, 1922, Gerlach reached gradients of about 200,000 Gauss/cm, which yielded deflections of 0.1 mm or even a little more (see Walther Gerlach und Otto Stern, Über die Richtungsquantelung im Magnetfeld, Ann. Physik 74, 673 (1924) and Walther Gerlach, Über die Richtungsquantelung im Magnetfeld II, Annalen der Phys. 76, 163 (1925)). n
FEATURES
THE STORY OF A DISCOVERY:
HOW WE FOUND THE
LONG-SOUGHT-AFTER
HIGGS BOSON
llChiara Mariotti – INFN Torino, Italy – DOI: https://doi.org/10.1051/epn/2019502
The Higgs boson was discovered in 2012 by the ATLAS and CMS experiments at the LHC, 50 years after its prediction. The scientific and human adventure of this discovery is summarized in this article, going back to the search at LEP and to the foundation of the LHC Higgs Cross Section working group.
The standard model time-line
In 1954 Chen Ning Yang and Robert Mills extended the idea of a local gauge symmetry to the case of non-abelian gauge groups. The theory required the gauge vector bosons to be strictly massless.
In 1961 Sheldon Glashow suggested that electromagnetic and weak interactions might be described by a non-abelian gauge theory, following the work of Yang and Mills, but a severe difficulty was yet to be solved: weak interactions are very short-ranged, and the corresponding intermediate vector bosons must be extremely heavy (roughly, 100 times the proton mass).
In 1964 Robert Brout, François Englert, Peter Higgs, Gerald Guralnik, Carl Richard Hagen, and Tom Kibble realized that the non-abelian gauge symmetry can be formulated in such a way that some of the gauge vector bosons acquire a non-zero mass, without spoiling the gauge symmetry. The so-called Higgs mechanism produces three massive and one massless gauge boson, plus a physical spin-0 boson (the Higgs particle).
•• In 1967-1968 Steven Weinberg and Abdus Salam, combining the theories of Yang and Mills, the Higgs mechanism and the Glashow hypothesis, developed the theory of the standard model (SM) of electroweak interactions.
•• In 1970 Gerardus 't Hooft and Martinus Veltman demonstrated that this new formulation of non-abelian gauge theories is perturbatively renormalizable.
•• In 1973 Gargamelle, a bubble chamber experiment at CERN, discovered the weak neutral current, using neutrino and anti-neutrino beams.
•• In 1983 the UA1 and UA2 experiments at CERN discovered the W and the Z bosons.
•• From 1989 to 2000 the LEP experiments at CERN marked the triumph of the standard model, by measuring its parameters with very high precision.
•• In 1995 at Tevatron the CDF and D0 experiments discovered the top quark, completing a sequence started in 1974 with the discovery of the c-quark, followed by the discovery of the b-quark in 1977.
•• Only in 2012, at the large hadron collider (LHC) at CERN, did the ATLAS and CMS experiments discover the last missing piece of the standard model: the Higgs boson.
The LEP result
LEP (Large Electron Positron collider) was located in a
27 km long circular tunnel, at about 100 meters below
ground, which was the largest European civil-engineering
work of that time. The first interactions were delivered in August 1989 at 91 GeV. It was an amazing night: from the moment the two beams crossed in the detectors, we could observe on the display the Z decaying into two leptons or into two jets, Z exchange being the (almost!) only process involved at that center-of-mass energy. The four
LEP experiments (ALEPH, DELPHI, L3, OPAL) took
data for 12 years. When LEP stopped in 2000, it had
reached the maximum center-of-mass energy of 209 GeV.
It is hard to remember how little we knew about
electroweak and QCD physics before LEP started! LEP
allowed a gigantic step in our knowledge.
LEP experiments could measure SM quantities with very high precision. To give an idea of the level of accuracy, at LEP we could measure changes in the accelerator circumference of a fraction of a mm! While the standard model was tested at better than per-mill accuracy, what was not found at LEP was very important as well, allowing many models to be excluded.
LEP was the ideal place to measure quantities with high precision: the simplicity of the electron-positron initial state is transmitted to the final state. In the case of the search for the Higgs boson, the dominant production mechanism is the “Higgs-strahlung” process, e⁺e⁻ → Z(*) → ZH, allowing the production of a Higgs boson of mass m_H < E_cm − m_Z. The final state was expected to have two b-quark jets from the Higgs boson decay, and two fermions from the Z decay. The result at the end of the data taking was negative: LEP set a limit at 95%
CL of m_H > 114.4 GeV, with an expected limit of 115.3 GeV [1]. The observed limit was lower than expected because there were four events with a non-negligible probability of being Higgs boson candidates. Indeed, immediately after the end of the run, before all the final alignments and calibrations of the detectors, there were even more significant candidates, giving a probability of only 0.00065 that the fluctuation was compatible with the background-only hypothesis. The initial excitement faded after the reprocessing of the data with final calibrations, when the compatibility with the background-only hypothesis became 0.0024, i.e. a less than 2 sigma effect.
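The numbers fit together simply: at the maximal LEP energy the Higgs-strahlung kinematics allowed
\[
m_H \;\lesssim\; E_{cm} - m_Z \;=\; 209\ \mathrm{GeV} - 91.2\ \mathrm{GeV} \;\approx\; 118\ \mathrm{GeV},
\]
so the 114.4 GeV limit sat just below the edge of what the machine could reach, which is why a handful of candidate events near the kinematic boundary caused such excitement.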
The LEP accelerator ended operation in December 2000 and the dismantling of the experiments and the accelerator started immediately.
In 1995 Siemens had offered to produce 32 Superconducting Radio Frequency cavities for 32 MCHF, in addition to the ones already ordered, before dismantling the production line. With those cavities LEP could have reached E_cm = 220 GeV, and thus m_H < E_cm − m_Z ~ 129 (± Γ_Z) GeV. The CERN management, under pressure to get the LHC approved, decided not to increase the energy to more than 200 GeV and to leave the future to LHC. In reality LEP went up to 209 GeV, because the RF cavities worked very well, and thanks to a very ingenious trick: “increasing” the radius of the collider by moving the orbit of the electrons and positrons to the outermost possible side of the beam pipe.
b Artistic view of the CMS detector, from “Subatomic desire” by Silvia Fabiani (http://www.atomesdansants.com)
Towards LHC
In December 2000 LEP stopped and was dismantled. LHC started the engineering work. The ATLAS detector was built in situ in the experimental cavern; CMS was built on the surface and lowered to the underground cavern between November 2006 and August 2008. The early commissioning was performed using cosmic rays. In September 2008 proton beams were circulated for the first time but, due to an accident, the first interactions came only in November 2009, after a year of tremendous repair work.
In the years 2010-2012 LHC delivered p-p collisions at energies of 7 and 8 TeV.
c FIG. 1: (a) The SM Higgs production cross sections at √s = 8 TeV as a function of the Higgs-boson mass [2]. (b) The SM Higgs branching ratios as a function of the Higgs-boson mass [2].
The LHC Higgs Cross Section
working group
The same day that LHC delivered the first proton-proton collisions to the experiments, a small group of physicists met in Torino to give birth to the group called “LHC Higgs Cross Section working group” (LHCHXSWG). Giampiero Passarino had first had the idea of the group in 2008, emphasizing the urgency, since a discovery could come sooner than expected.
In August 2009 we met at CERN with two physicists
from ATLAS, and in Torino in November, the group
was formed, and the program was discussed. In January
2010 the experiments formally recognized and endorsed
the activities of the group. The first Yellow Report (a
highly reputed CERN Report series traditionally published
with a distinctive Yellow cover) was published in
February 2011 [2]. This meant that since day zero the
Higgs analyses from ATLAS and CMS have been using
the LHCHXSWG prescriptions and results.
It was not all roses and flowers, of course! The first hard moment was between Christmas 2010 and New Year's Eve 2011, when we had to submit the first Yellow Report. Some of us will hardly forget that winter break! Another hard moment came at the end of 2014, which we can summarize with this sentence: “when the battle is won, the generals arrive”. Thus some drastic changes in the management were made.
We have published four Yellow Reports, which have been extensively used and cited. The group grew from ~50 authors to more than 350. The two plots of Figure 1 were the most used ones in all the Higgs boson analyses: the cross sections for Higgs boson production at LHC and the branching ratios of its decays.
The first years of LHC data taking
The first years of data taking were a continuous series of emotions; adrenaline was flowing without interruption. We (CMS) had a first spectacular four-muon event in September 2010, with a four-muon invariant mass m(4l) = 201.7 GeV, after having collected 35 pb⁻¹ of luminosity¹ at 7 TeV.
¹ The luminosity is a performance parameter of the accelerator that allows one to estimate how many events will be produced for a process with a given cross section. Integrated luminosity is measured in units of inverse cross section.
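The footnote translates into a one-line estimate: the expected number of events for a process of cross section σ is \(N = \sigma \int L\,dt\). With illustrative numbers, a Higgs production cross section of ~20 pb at 8 TeV and the ~20 fb⁻¹ eventually collected give
\[
N \;\approx\; 20\ \mathrm{pb} \times 20\ \mathrm{fb}^{-1} \;=\; 2\times10^{4}\ \mathrm{fb} \times 20\ \mathrm{fb}^{-1} \;=\; 4\times10^{5}
\]
Higgs bosons produced per experiment, of which only a small fraction survive in the clean decay channels.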
For the EPS conference in Grenoble in July 2011, we went into panic mode because of two intriguing four-lepton events at m(4l) ~ 145 GeV. We had to scrutinise the data and the Monte Carlo generators, going into the details of the diagrams, interferences, parameter settings etc. The collaboration put us under a tight review, but we were finally approved and (surprise!) at EPS-2011 ATLAS presented a similar intriguing event at the same mass.
On the 13th of December 2011, ATLAS and CMS presented preliminary analyses of the Higgs boson search with 4.5 fb⁻¹ of luminosity at 7 TeV. Both experiments presented an exclusion at 95% CL for Higgs masses larger than ~130 GeV and lighter than ~115 GeV. An excess of about 3 sigma was observed by both collaborations between ~115 and ~130 GeV.
In parallel, precision electroweak measurements from
LEP and Tevatron suggested that the Higgs should be
light, i.e. less than 130 GeV.
Both experiments decided not to look at the data in 2012 until the statistics estimated to be needed to discover a possible Higgs boson of mass around 115-130 GeV had been collected.
June and July 2012
And then came the 14th of June 2012, in the CMS experiment. The analysis is ready, the background is under control, the statistics are enough to observe a Higgs if its mass is 115-130 GeV. It is 19:00 in a room of Building 40 at CERN, with a few people connected by video conference. We run the analysis program on the data for the first time that year, and we project the result on the screen. And there it is: a beautiful peak!
We were seeing a new particle! Maybe the Higgs boson, so much sought after. From that moment, it was a crescendo of emotions! I remember not sleeping for four days in a row because of too much adrenaline.
The next day, in the afternoon of the 15th of June, the whole collaboration came together. Around the world, hundreds of physicists working in the CMS experiment connected via videoconference to a room at CERN, where the different groups showed the results. Two channels, H → ZZ → 4 leptons and H → 2 photons, showed a very nice peak. The other, less sensitive, channels gave consistent results. Since the results were only to be released on the 4th of July, we had to maintain maximum confidentiality until then. The most difficult challenge was not smiling when going around CERN!
On the 3rd of July people started to camp outside the CERN Auditorium to be able to enter first thing the following morning. On the 4th of July, the Auditorium at CERN was packed; all the former directors of CERN, and Englert, Higgs, Hagen, and Guralnik were present. The first to speak was the spokesperson of the CMS experiment. The emotion was so great that we could not breathe, and when he unveiled the transparency that showed the coveted 5 sigma, the applause was long and loud. Then it was ATLAS's turn, showing a 5 sigma peak as well, at the same mass value. Figure 2 shows the plots of two channels from the experiments [3,4].
At the end of 2012, after having collected almost 20 fb⁻¹ at 8 TeV, each channel had a peak that could claim a discovery by itself. With the four-lepton final state we could measure the spin and parity to be consistent at 95% CL with those of the SM Higgs boson. The beautiful peak that we saw for the first time on the 14th of June was indeed the Higgs boson.
As Paul Dirac said: “The beauty of an equation is more
important than its correctness, in the sense that if an equation
is beautiful, sooner or later it will be demonstrated to
be correct”.
b FIG. 2: (a) Distribution of the four-lepton invariant mass for the ZZ → 4l analysis in CMS. The points represent the data, the filled histograms represent the background, and the open histogram shows the signal expectation for a Higgs boson of mass m_H = 125 GeV, added to the background expectation. The inset shows the m_4l distribution after selection of events with KD > 0.5 (i.e. with a high probability of coming from signal, given their kinematics), as described in reference [4]. (b) The ATLAS distributions of the invariant mass of diphoton candidates after all selections for the combined 7 TeV and 8 TeV data sample [3]. The inclusive sample is shown in (a) and a weighted version of the same sample in (c). The result of a fit to the data of the sum of a signal component fixed to m_H = 126.5 GeV and a background component described by a fourth-order Bernstein polynomial is superimposed. The residuals of the data and weighted data with respect to the respective fitted background component are displayed in (b) and (d).
In 2013 Prof. Englert and Prof. Higgs were awarded the Nobel Prize in physics “for the theoretical discovery of a mechanism that contributes to our understanding of the origin of mass of subatomic particles, and which recently was confirmed through the discovery of the predicted fundamental particle, by the ATLAS and CMS experiments at CERN's Large Hadron Collider”.
Another break-through:
the Higgs boson width
In 2013, following the work of Kauer and Passarino and of Caola and Melnikov, we constrained the Higgs width, Γ_H, to a few tens of MeV using off-shell Higgs boson production [5]. This was an improvement of a factor of ~200 with respect to the on-shell measurement, which is dominated by the experimental resolution and limited to a few GeV.
It was a hard effort from the theoretical and experimental points of view, since we had to change all our analysis methods and Monte Carlo predictions. Thanks to the many theoreticians involved in the LHCHXSWG we managed to present the first result at Moriond 2014.
The latest results, with the statistics from Run 1 and half of Run 2, constrain the Higgs width to Γ_H = 3.2 +2.8/−2.2 MeV.
The Higgs boson mass from the combined ATLAS and CMS Run 1 data is m_H = 125.09 ± 0.24 GeV, with the statistical uncertainty of 0.21 GeV dominating over the systematic one of 0.11 GeV.
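As a consistency check, the quoted total uncertainty is just the quadrature sum of the two components:
\[
\sqrt{(0.21\ \mathrm{GeV})^2 + (0.11\ \mathrm{GeV})^2} \;\approx\; 0.24\ \mathrm{GeV}.
\]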
With the data collected in Run 1 and Run 2 many
measurements have been possible: the Higgs boson
coupling to the electroweak bosons and to all elementary
fermions, differential cross sections and fiducial
cross sections.
Extrapolating to the end of LHC, i.e. with 3 ab⁻¹ of luminosity per experiment, most of the couplings will be known with 2-4% precision, dominated by theory uncertainties.
Changing paradigm
As early as July 4th, it was clear to us physicists that our
world would not be the same anymore. We now know
that the Higgs boson exists and is not just an elegant and
fascinating theory.
Before the discovery in 2012 the hypothesis was the SM and the unknown was the mass of the Higgs boson. Therefore, bounds on m_H were derived through a comparison with high precision data (LEP, Tevatron, LHC…).
After the discovery at LHC, given that the SM is fully specified, the possible unknowns are explored through deviations from the SM. Thus, schemes to define SM deviations are necessary. Two different approaches have been proposed, the κ-framework and the SMEFT (SM effective field theory) (see Yellow Reports [6,7] of the LHCHXSWG). The κ-framework is a procedure used at leading order (LO). It is valid only if κ ~ 1, where κ is a multiplicative factor applied to the SM cross section. Thus, it can only give indications of whether there are deviations from the SM. The SMEFT is a methodology to study possible new physics effects from massive particles that are not directly detectable. The Lagrangian of the SM is extended by introducing additional dimension-6 (or -8) operators. The underlying assumption of an effective quantum field theory is that the scale of new physics Λ is large compared to the experimentally-accessible energies.
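In compact form (the standard expressions, quoted here for orientation; see the Yellow Reports [6,7]): in the κ-framework a measured rate in production channel i and decay channel f scales with respect to the SM as
\[
\sigma_i \cdot \mathrm{BR}^f \;=\; \sigma_i^{\rm SM}\, \mathrm{BR}^f_{\rm SM}\; \frac{\kappa_i^2\, \kappa_f^2}{\kappa_H^2},
\]
while the SMEFT extends the Lagrangian with higher-dimensional operators suppressed by the new-physics scale Λ,
\[
\mathcal{L} \;=\; \mathcal{L}_{\rm SM} \;+\; \sum_i \frac{c_i}{\Lambda^2}\, \mathcal{O}_i^{(6)} \;+\; \dots
\]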
Moreover, we must continue to search for new particles
to understand the "nature" of the newly discovered
Higgs boson and try to understand if the SM is a
fundamental theory. From the discovery of the neutral
weak current in 1973, up to the Higgs discovery in 2012,
no flaw has been detected in the SM. Is the SM just an
effective field theory?
There are many open questions and many unsatisfactory
explanations, and we must increase the precision
of the measurements and of the predictions, especially
wherever there is a little disagreement with data.
Shall we instead propose something new, completely
different?
Towards a new world
We have built huge and sophisticated accelerators and detectors, “the cathedrals of science”, to find an elementary particle that explains why elementary particles have mass.
The discovery of the Higgs boson is invaluable. It is a
great success of a community of thousands of physicists:
it is the result of a large group, where each of us has given
their personal contribution. n
About the Author
Chiara Mariotti is an elementary particle physicist at INFN-Torino. Her research activity has focussed on data analysis in the experiments at the LEP and LHC colliders. She was one of the main actors in the Higgs boson discovery. She was awarded the Emmy Noether Distinction for women in physics in 2018.
References:
A complete list of theoretical references can be found inside the
following papers:
[1] LEP collaborations, Phys. Lett. B 565, 61 (2003)
[2] LHCHXSWG, CERN-2011-002
[3] ATLAS Collaboration, Phys. Lett. B 716, 1 (2012)
[4] CMS Collaboration, Phys. Lett. B 716, 30 (2012)
[5] CMS Collaboration, Phys. Lett. B 736, 64 (2014)
[6] LHCHXSWG, CERN-2013-004
[7] LHCHXSWG, CERN-2017-002
FEATURES
CARBON NEUTRAL AVIATION
llRob Terwel¹, John Kerkhoven¹ and Frans W. Saris² – DOI: https://doi.org/10.1051/epn/2019503
¹Kalavasta b.v. – ²Foundation Sanegeest
The aviation industry can become carbon neutral by recycling CO₂ into synthetic kerosene using renewable energy. As an example we take CO₂ emissions from Tata Steel in the Netherlands as well as Schiphol Airport's kerosene consumption.
Modern airplanes emit CO₂ as they burn kerosene in their jet engines. Flying is therefore one of the human activities that contribute to global warming. Direct emissions from aviation are more than 2% of worldwide emissions and are set to grow by approximately 2.5-3.5% per year (including efficiency gains) for the next 30 years in Europe (1). Compared to 2017, kerosene usage would easily double towards 2050 and could even rise to up to 5 times current levels if efficiency targets are not realised. In 2009 the International Air Transport Association (IATA) set a goal to halve CO₂ emissions by 2050 with respect to 2005 (2). Without additional measures, projections are a factor of 6-10 worse than what IATA aims for. The International Civil Aviation Organization (ICAO) launched the CORSIA program, which sets out to offset any emissions above 2020 levels through emission trading (3). Participation becomes mandatory for all committed States in 2027. Given that as a society we do not seem to want to give up flying, we need to find a solution that meets – and ultimately goes beyond – IATA's long-term target and emission offsetting.
Solutions from other sectors
do not work for aviation
While in other sectors technologies are available that would allow them to become carbon neutral, this is not likely to be the case in the aviation sector soon. For instance, one can switch from a car with an internal combustion engine running on petrol to a battery electric vehicle “fuelled” by electricity from a wind or solar source. As a result, the activity of driving (excluding the manufacturing of the car) does not produce greenhouse gases. There is no such electric technology for large airplanes yet, nor is it likely to arrive in the coming decades. Current electric planes can carry up to 10 passengers for
up to 1 hour of flight time; in 2035, electric planes are
expected to be able to transport 50-100 passengers for
1,000 km - smaller in both capacity and distance than
planes on kerosene.
m © Roya Hamburger
c FIG. 1: Carbon and hydrogen cycle in synthetic kerosene production and utilisation, with the ambient air as a CO₂ source.
The aviation industry is trying to develop, produce and buy bio-kerosene, which is kerosene produced from plant-based sources. Bio-kerosene made from biomass grown in the Netherlands typically requires over 1,000 times more fresh water and arable land than the alternative we suggest here. To put this into perspective, supplying Schiphol Airport in 2017 with bio-kerosene would require an amount of farmland 0.5-1.8 times the size of all Dutch farmland (depending on the crop, see 6). So far aviation has also used second-generation biomass such as used cooking oil, which does not compete with food production but cannot scale up as much as the alternatives. Investigation is ongoing into third-generation biofuels (from algae), but this is currently not yet a viable option from a commercial or environmental perspective. In short, it seems that bio-kerosene cannot be the ultimate solution.
Hence, we need to look for another type of solution,
maybe even unique to this sector. This ‘synthetic’ solution
would be to replace the carbon atoms present in kerosene
with reusable or renewable carbon atoms from a non-plant-based source.
There are various routes
to make synthetic kerosene.
Kerosene is a mixture of hydrocarbons, compounds consisting of many carbon (C) and hydrogen (H) atoms. To produce this energy-dense aviation fuel, we need a source of carbon atoms and a source of hydrogen atoms. It is possible to obtain carbon atoms by capturing carbon dioxide (CO₂) from the air via a process called direct air capture (DAC). Hydrogen atoms can be obtained by splitting water (H₂O).
The technology to synthesise carbon and hydrogen atoms into kerosene exists (4,5). And so an opportunity opens up to capture carbon and use the hydrogen atoms in water to make kerosene, using solar or wind electricity as the energy source. The carbon and hydrogen go back into the atmosphere when the fuel is burned: when jet engines burn kerosene, carbon dioxide (CO₂) and water (H₂O) are released. This would be a carbon neutral and circular economy solution (see figure 1).
An intermediate step, which would roughly halve emissions, would be to take the carbon atoms initially from a concentrated source of CO₂ emissions like an industrial plant and reuse the carbon. Then one would still use fossil carbon at the industrial plant from which CO₂ is captured, and emit ‘new’ CO₂ into the atmosphere when kerosene made from these fossil carbon atoms is burned. But because this fossil carbon is used twice, once in the plant and once in flight, one avoids the use of fossil kerosene and the emissions from its production, and therefore total emissions drop by about 50%.
We summarise the main principles of how one makes synthetic kerosene in figure 2. Starting with carbon dioxide captured from the ambient air or an industrial plant, water and renewable electricity, a possible route would be to split the carbon dioxide into carbon monoxide + oxygen and the water into hydrogen + oxygen. There are various ways to do this. One way is to use an electrolyser running on renewable electricity. Once one has carbon monoxide and hydrogen one has syngas, and one can use exactly the same process that Shell uses in its Pearl Plant in Qatar to make synthetic kerosene. The kerosene (mass) yield for this route is 61% (after recycling of light gases; the other product being diesel). The associated energy efficiency (excluding electricity generation) is 39% if diesel is considered a ‘loss’ and 63% if it is considered a product.
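Schematically, this route chains three well-known reactions (stoichiometry only; catalysts, operating conditions and side products omitted):
\[
\mathrm{CO_2} \rightarrow \mathrm{CO} + \tfrac{1}{2}\mathrm{O_2},
\qquad
\mathrm{H_2O} \rightarrow \mathrm{H_2} + \tfrac{1}{2}\mathrm{O_2},
\]
\[
n\,\mathrm{CO} + (2n+1)\,\mathrm{H_2} \rightarrow \mathrm{C}_n\mathrm{H}_{2n+2} + n\,\mathrm{H_2O}
\quad (\text{Fischer-Tropsch, with } n \approx 9\text{-}16 \text{ for the kerosene fraction}).
\]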
There are other routes as well, see box. The key idea
is that it is possible to convert carbon dioxide, water
and renewable electricity into synthetic kerosene. In the
technical report (available on the web, 6) we explain the
individual process steps that jointly form the renewable
synthetic kerosene production chain. For each step we
describe which organisations are active in this area, how
the (chemical) process works, what the costs are now
and what we may expect costs to be around 2030 (7).
Lastly, we describe how we use this information in the
business case model (6 and 7) we have developed to flexibly
calculate the costs of most major synthetic kerosene
production methods. This model as well as the technical
report are freely available online.
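To give a flavour of what such a model does, here is a deliberately stripped-down sketch in Python (our own toy structure and numbers, not the published Kalavasta model): it prices a litre of synthetic kerosene from the electricity price, an assumed chain efficiency and a flat capital charge, and compares it with fossil kerosene derived from the crude oil price.

```python
# Toy cost-parity estimate for synthetic vs fossil kerosene.
# All numbers are illustrative assumptions, not values taken from
# the published business case model.

KWH_PER_LITRE = 9.6        # energy content of kerosene, ~34.4 MJ per litre
LITRES_PER_BARREL = 159.0

def synthetic_eur_per_litre(elec_eur_per_kwh,
                            chain_efficiency=0.39,        # power-to-kerosene
                            capex_opex_eur_per_litre=0.25):
    """Electricity bill per litre of fuel plus a flat capital/operating charge."""
    electricity = elec_eur_per_kwh * KWH_PER_LITRE / chain_efficiency
    return electricity + capex_opex_eur_per_litre

def fossil_eur_per_litre(oil_usd_per_barrel, usd_per_eur=1.1,
                         refining_eur_per_litre=0.08,
                         co2_tax_eur_per_litre=0.0):
    """Crude cost per litre plus refining margin and an optional CO2 tax."""
    crude = oil_usd_per_barrel / usd_per_eur / LITRES_PER_BARREL
    return crude + refining_eur_per_litre + co2_tax_eur_per_litre

# Cheap renewable power vs expensive oil: the two costs land in the same
# range, which is the qualitative point made in the text.
print(f"synthetic: {synthetic_eur_per_litre(0.020):.2f} EUR/l")  # 2 ct/kWh
print(f"fossil:    {fossil_eur_per_litre(120):.2f} EUR/l")       # 120 $/bbl
```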
Carbon Neutral Kerosene
in the Netherlands
To get an idea of how much kerosene can be produced
from the emissions of an industrial plant, we consider the
following scenario. We take Tata Steel in the Netherlands as an example, as well as Schiphol Airport's kerosene consumption in 2016. Tata Steel emits enough carbon atoms
to fuel approximately 50% of the airplanes that fuelled
at Schiphol Airport in 2016. Alternatively, if we capture
the carbon atoms directly from the ambient air, there is
no carbon limit and we can fuel any airplane we want
at Schiphol airport. We would however also need more
DAC units, water and renewable electricity – which may
have a large land and/or water footprint.
Carbon Neutral Aviation
FEATURES
To produce a large volume of hydrogen one also
needs a large quantity of demineralised water. With the
IJ-harbour nearby as well as the North Sea, water is not
a limiting factor. One would need about 1% of the water
demand of all Dutch households.
There are plans to build large offshore wind farms in the
North Sea near the coast of IJmuiden, close to Tata Steel’s
production plant. Wind farms make electricity in variable
quantities depending on how hard the wind blows. TenneT foresees that transporting that electricity (when wind farm supply is high and Dutch electricity demand is low) to the rest of Europe will be quite expensive (9). Hence converting this peak supply near IJmuiden into hydrogen could be an alternative, and possibly cheaper, option to extending the high-voltage electricity grid.
The offshore wind electricity produced near IJmuiden
around 2030 would be enough to make synthetic kerosene
for approximately 1/3 of the airplanes that fuelled
at Schiphol airport in 2016. Of course, this renewable
electricity is not only there for the production of synthetic
kerosene, and society’s electricity demand may surge if
various sectors electrify simultaneously. However, renewable
electricity is not only produced in windfarms near
IJmuiden, but also in various other places on the North
Sea as well as in onshore wind farms and solar panels.
Currently, the infrastructure to transport kerosene
(pipeline to Schiphol Airport) and storage terminals are
already in place in the Port of Amsterdam. Hence, if we
produce synthetic kerosene in the Port of Amsterdam/
IJmuiden, we would not need significant additional infrastructure
to secure supplies to Schiphol Airport. In
fact, it would be an opportunity for regional embedding
and integration of a novel cluster with a very wide reach.
Pricing Carbon Neutral Kerosene
Although we now know that we can produce semi (up to just over 50%, from waste gases of an industrial plant) or fully (up to 100%, for direct air capture) carbon neutral kerosene, we do not know if we can also afford it. There
are many uncertainties when exploring a pathway towards
2030. The main uncertainties turn out to be the
costs of crude oil (main determinant of fossil kerosene
costs) and of renewable electricity (main determinant of
synthetic kerosene costs).
Although the price of solar and off-shore wind electricity
has dropped considerably in the last few years, we
do not know for sure how much further this cost reduction
will go. Also we do not know what market prices
will be if all sectors in society electrify simultaneously.
The crude oil price is very volatile as well – it has been
between 40 and 140 dollar per barrel in recent years.
A high price of fossil oil (for example 150 dollar per barrel, excluding taxes) makes fossil kerosene just as expensive as carbon neutral kerosene in our base scenario. Similarly, a low price of renewable electricity (1.7 eurocents per kWh, excluding taxes) also puts carbon neutral kerosene on par with fossil kerosene.
Of course, a combination of a slightly higher cost of oil
and a slightly lower cost of electricity also creates cost
parity. This could happen, for example, with oil at 120 dollar
per barrel and electricity at 3 eurocents per kWh (7).
We see that, for example, the following set of assumptions delivers carbon neutral kerosene at the same costs as fossil kerosene in 2030:
•• An oil price of 98 dollar per barrel (today's oil price is 80 dollar per barrel)
•• A fossil CO₂ tax of 20 euro per tonne (today the CO₂ ETS price is 26 euro per tonne)
•• An electricity price of 2.9 eurocents per kWh (currently the average price of electricity is 4 eurocents per kWh, with solar and wind electricity sometimes pushing it towards 1 or 2 eurocents per kWh)
•• Oxygen, a by-product, is sold at production price
CURRENT INTERNATIONAL DEVELOPMENTS
Recent developments have shown that large-scale synthetic
kerosene production is perhaps even closer and cheaper than
we thought.
Opus 12, in partnership with SoCalGas, showed that its technology can convert CO₂ in a gas mixture (biogas) to methane in a single electrochemical step. It thus does not require a clean CO₂ feed and can also produce other molecules (10). Climeworks, developer of DAC technology, built a new and more efficient DAC plant with an integrated electrolysis and methanation unit, producing carbon neutral methane (11). The Opus 12 and Climeworks processes could, however, be integrated, producing methane in a single step from water, renewable electricity and CO₂ from the air.
Sunfire in Germany developed a high temperature co-electrolysis
system, which produces syngas (CO and H2) directly from CO 2 ,
water and renewable electricity (12). This implies significant
savings in investment and operating costs for synthetic kerosene
production compared to a system with individual CO and
H2 production units. Carbon Engineering, developer of DAC
technology, published an article in a peer-reviewed journal
demonstrating that DAC is feasible at costs below $100/t CO 2
(13). This provides a stronger basis to the claim that significant
cost reductions in DAC technology can be achieved.
Lanzatech, which had already successfully produced synthetic
jet fuel via ethanol from waste gases, saw the ASTM certify its
use for commercial flight (14), supplied sustainable jet fuel to the
world’s first ‘steel gases’ fuelled flight with Virgin Atlantic (15) and
is on its way to develop a first commercial facility with a large
consortium (16).
These developments combined paint a picture and envision
production pathways that are closer, more cost-effective
and more efficient than the ones we studied together in our
research. It should therefore be exciting to see the first synthetic
fuel production plants emerge. Lanzatech, Nordic Blue Crude
and Carbon Engineering are amongst the first to announce the
construction of synthetic fuel (pilot) production plants, while
there are other consortia in the Netherlands, around the Port of
Amsterdam and Rotterdam Airport, investigating this opportunity
and probably others of whose existence we are not aware yet.
FIG. 2: Production process.
Note that these costs take the Netherlands as the reference point. Notably, renewable electricity can be produced at lower costs in various other parts of the world. In fact, as the electricity price is the main determinant of synthetic kerosene costs, sourcing it for 2 eurocents/kWh (8) rather than for 4 eurocents/kWh could make synthetic kerosene cheaper than the fossil kerosene reference. However, transport costs to the Netherlands would then also have to be included, and the cost of capital may be higher (9).
Currently fuel makes up an estimated 15-40% of a flight ticket's price (depending on various factors, including airline, flight distance and airports). In our base scenario for 2030, carbon neutral kerosene would add 20-50% to the price of the flight ticket compared to a 'fossil kerosene' ticket. However, if cost parity is reached, tickets would cost exactly the same, irrespective of fuel choice.
Conclusions
Our study shows that CO₂ recycling for synthetic kerosene production for carbon neutral aviation could become a reality around 2030. It is very likely that it can be done, and we can afford it if we assume modest changes to current prices. Furthermore, this seems the best option available for aviation to meet the goals of the Paris agreement by 2050.
CO₂ recycling for carbon neutral kerosene will not stand on its own, however; it will be able to play a significant role in the wider energy system, not only by providing renewable fuels for aviation, but also through its ability to balance large variations in the supply and demand of electricity.
About the Authors
Frans W. Saris (franswsaris@gmail.com) has been a physicist since 1964, first as a (silicon) researcher (at Amolf, AECL, Cornell, IBM, UNSW, ECN), later also as a teacher (at Utrecht, Cornell, Leiden), manager (at Amolf, ECN, Leiden, STW, Sanegeest) and science writer (NRC, Volkskrant, De Gids, TW, dNBg). www.franswsaris.nl - www.sanegeest.nl
Rob Terwel is a partner at Kalavasta. He holds an MSc in Complex Systems (KCL) and a BSc and BA in Liberal Arts and Sciences (UCU). He co-founded Kalavasta, Climate Neutral Strategies in 2017 and is responsible for research and modelling.
John Kerkhoven is a partner at Kalavasta. He holds a PhD in Marketing, Operations Research and Computer Science (WUR). He worked in the chemical industry for 10 years, was a partner at the strategy firm Arthur D. Little Inc. and has since 2002 been a serial entrepreneur focusing on the energy transition through companies like Quintel Strategy Consulting (now part of A.T. Kearney), Quintel Intelligence (www.energytransitionmodel.com) and most recently Kalavasta, Climate Neutral Strategies.
References
[1] https://www.iata.org/publications/store/Pages/
20-year-passenger-forecast.aspx
[2] https://www.icao.int/Meetings/aviationdataseminar/
Documents/ICAO-Long-Term-Traffic-Forecasts-July-2016.pdf
[3] https://www.icao.int/environmental-protection/CORSIA/
Pages/CORSIA-FAQs.aspx
[4] D. Marxer et al., Energy Fuels 29, 3241 (2015)
[5] Adelbert Goede and Richard van der Sande, Europhysics News
47/3, 22 (2016)
[6] http://www.kalavasta.com/pages/projects/aviation.html and
references in there
[7] https://pro.energytransitionmodel.com/
[8] Feasible according to a forthcoming Kalavasta report
[9] J. Ondraczek et al., Renewable Energy 75, 888 (2014)
[10] https://www.sempra.com/newsroom/press-releases/
socalgas-and-opus-12-successfully-demonstratetechnology-simplifies
[11] http://www.climeworks.com/climeworks-launches-dac-3-
plant-in-italy/
[12] https://www.sunfire.de/en/company/news/detail/
breakthrough-for-power-to-x-sunfire-puts-first-coelectrolysis-into-operation-and-starts-scaling
[13] https://carbonengineering.com/climate-change-breakthrough/
[14] http://www.lanzatech.com/jet-fuel-derived-ethanol-noweligible-commercial-flights/
[15] http://www.lanzatech.com/virgin-atlantic-lanzatechcelebrate-revolutionary-sustainable-fuel-project-takes-flight/
[16] http://www.lanzatech.com/lanzatech-virgin-atlantic-secureuk-government-grant-develop-world-first-waste-carbon-jetfuel-project-uk/
COMPANY DIRECTORY
Highlight your expertise. Get your company listed in the europhysicsnews company directory.
For further information please contact camila.lobos@edpsciences.org
GOODFELLOW
www.goodfellow.com
Goodfellow supplies small quantities of
metals, alloys, ceramics and polymers for
research, development and prototyping
applications. Our Web Catalogue lists a
comprehensive range of materials in
many forms including rods, wires, tubes
and foils. There is no minimum order
quantity and items are in stock ready for
immediate worldwide shipment with no
extra shipping charge. Custom-made
items are available to special order.
LASER QUANTUM
www.laserquantum.com
Laser Quantum, world-class manufacturer
of ultrafast and continuous
wave products, provides customised
solutions to meet the needs of our customers
whilst supplying cutting-edge
technology with industry-leading lifetimes
to further research. To learn more,
please visit www.laserquantum.com or
contact us for a free demonstration and
quotation: +44 (0) 161 975 5300.
LEYBOLD
www.leybold.com
Leybold offers a broad range of
advanced vacuum solutions for use in
manufacturing and analytical processes,
as well as for research purposes. The core
capabilities center on the development
of application- and customer-specific
systems for creating vacuums and
extracting process gases.
MB SCIENTIFIC AB
www.mbscientific.se
MB Scientific AB is a Swedish company which develops and produces state-of-the-art instruments for photoelectron spectroscopy experiments. Our photoelectron energy analyser MBS A-1 gives you the opportunity to do world-leading research, together with the MBS VUV photon sources, MBS L-1 and T-1, which produce the brightest and narrowest lines available for this type of experiment.
MCPHERSON
www.mcphersoninc.com
McPherson designs and manufactures
scanning monochromators, flat-field
imaging spectrographs, and vacuum
monochromators and measurement
systems for reflectance, transmittance,
and absorbance testing. Its spectrometers
and systems are built for soft x-ray,
vacuum-ultraviolet, and UV/Vis and
Infrared wavelengths. Applications range
from lasers and lithography, solar, and
energy to analytical life science and more.
OPTIGRATE
www.optigrate.com
OptiGrate Corp is a pioneer and world
leader in commercial volume Bragg
gratings (VBGs) and VBG-based ultranarrow
band optical filters. BragGrate™ Raman Filters from OptiGrate are unmatched in the industry for narrow linewidth, optical density, and optical transmission. BragGrate™ notch filters enable measurements of ultra-low wavenumber Raman bands in the THz frequency range down to 4 cm⁻¹.
PFEIFFER VACUUM
www.pfeiffer-vacuum.com/en/
Pfeiffer Vacuum stands for innovative and
custom vacuum solutions worldwide,
technological perfection, competent
advice and reliable service. With the
invention of the turbopump, the company
paved the way for further development
within the vacuum industry. Pfeiffer
Vacuum offers a complete product
portfolio: backing pumps, leak detectors,
measurement and analysis devices,
components as well as vacuum chambers
and systems.
TREK
www.trekinc.com
TREK, an Advanced Energy Company,
designs and manufactures products
for demanding applications in research
& industry. Trek’s high-voltage
amplifiers utilize proprietary circuitry
to provide a closed-loop amplifier
system with exceptional DC stability
and wideband performance for driving
capacitive loads. Trek’s non-contacting
electrostatic voltmeters circumvent
the charge transfer issues of traditional
contacting technology.
ZURICH INSTRUMENTS
www.zhinst.com
Zurich Instruments is a technology
leader developing and selling advanced
test & measurement instruments for
dynamic signal analysis. These devices
are used in many fields of application by
high-technology research laboratories
and industrial development sites. Zurich
Instruments' vision is to revolutionize
instrumentation in the high-frequency
and ultra-high-frequency
range by incorporating
the latest analog and digital
technology into powerful
measurement systems.
The EPS is not responsible for the content of this section.
FEATURES
LESSONS FROM TOPOLOGICAL SUPERFLUIDS:
SAFE AND DANGEROUS ROUTES
TO ANTISPACETIME
V.B. Eltsov¹, J. Nissinen¹ and G.E. Volovik¹,² – DOI: https://doi.org/10.1051/epn/2019504
¹ Low Temperature Laboratory, Aalto University, P.O. Box 15100, FI-00076 Aalto, Finland
² Landau Institute for Theoretical Physics, acad. Semyonov av., 1a, 142432, Chernogolovka, Russia
All realistic second-order phase transitions proceed at a finite rate and are therefore non-adiabatic. In symmetry-breaking phase transitions the non-adiabatic processes lead, as predicted by Kibble and Zurek [1, 2], to the formation of topological defects (the so-called Kibble-Zurek mechanism). The exact nature of the resulting defects depends on the detailed symmetry-breaking pattern.
For example, our Universe – the largest condensed matter system known to us – has undergone several symmetry-breaking phase transitions after the Big Bang. As a consequence, a variety of topological defects might have formed during the early evolution of the Universe. Depending on the Grand Unified Theory model, a number of different cosmic topological defects have been predicted to exist. Among them are point defects, such as the 't Hooft-Polyakov magnetic monopole [3, 4]; linear defects, known as cosmic strings [1]; surface defects, or cosmic domain walls; and continuous topological and nontopological objects (skyrmions and Q-balls).
The model predictions can be tested in particle accelerators (now probing energy densities corresponding to times >10⁻¹² s after the Big Bang) and in cosmological observations (which have not identified such defects to date). The same physics, however, can be probed in symmetry-breaking transitions in condensed matter systems – in fermionic superfluid ³He to an astonishing degree of similarity.
The physics of Kibble-Zurek formation of cosmic string defects during a second-order phase transition was tested in superfluid ³He in a rotating cryostat (Fig. 1). With neutron irradiation, local mini Big Bangs – hot spots with temperatures above the superfluid transition – were created [5]. Cooling back into the superfluid state produced topological defects – quantized vortices – in agreement with detailed theoretical predictions.
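The defect density predicted by this mechanism follows from a simple freeze-out argument, which a few lines of Python can make concrete. The scaling $\hat{\xi} = \xi_0 (\tau_Q/\tau_0)^{\nu/(1+\nu z)}$ used below is the standard Kibble-Zurek result; the microscopic parameters are order-of-magnitude guesses of ours, not the fitted values of the experiment [5]:

```python
# Kibble-Zurek estimate of the spacing between defects after a quench:
# the correlation length frozen in at the freeze-out time is
#   xi_hat = xi0 * (tau_Q / tau0) ** (nu / (1 + nu * z)),
# and roughly one vortex line is left per frozen domain.
# The microscopic parameters are illustrative order-of-magnitude guesses.

xi0 = 50e-9       # coherence length, tens of nanometres
tau0 = 1e-9       # microscopic relaxation time, seconds
nu, z = 0.5, 2.0  # mean-field critical exponents

def frozen_length(tau_quench):
    """Correlation length frozen in at the Kibble-Zurek time."""
    return xi0 * (tau_quench / tau0) ** (nu / (1 + nu * z))

for tau_q in (1e-6, 1e-3, 1.0):  # faster to slower quenches, in seconds
    xi_hat = frozen_length(tau_q)
    print(f"tau_Q = {tau_q:.0e} s -> xi_hat = {xi_hat:.1e} m, "
          f"vortex density ~ {1 / xi_hat**2:.1e} per m^2")
```

The slower the quench, the larger the frozen domains and the fewer the vortices, which is the trend the rotating-cryostat experiments test.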
Superfluid ³He as an analog of the fermionic quantum vacuum
This is but one of the many connections of the superfluid phases of liquid ³He with particle physics and general relativity [6]. All superfluid phases of ³He are examples of topological superfluids with emergent relativistic excitations and topological defects.
1) In the A-phase of superfluid ³He (³He-A) the chiral mirror symmetry is spontaneously broken, as in the vacuum of the Standard Model, where the behaviour of left-handed and right-handed elementary fermions (e.g. quarks and leptons) is essentially different. The fermionic excitations of ³He-A (called quasiparticles and -holes) are very similar to elementary particles (and antiparticles) in the early Universe, where quarks and leptons were still massless. Quasiparticles have the relativistic energy spectrum $E^2 = g^{ik} p_i p_k$, where the anisotropy tensor $g^{ik}$ plays the role of the metric tensor in general relativity. These quasiparticles have a quantum "spin" parallel (Fig. 2a) or antiparallel to their direction of momentum, giving the particles a handedness. The chiral right-handed particles "spin" anticlockwise, whereas the left-handed ones spin clockwise (and vice versa for the chiral antiparticles).
They move in the synthetic gravitational and electromagnetic fields created by the deformations of the superfluid. Their motion is governed by an equation of the form of the relativistic Weyl equation – the linear Dirac equation applied to chiral particles. In other words, the Weyl fermions of ³He-A experience the same quantum effects as the elementary fermions of our Universe. Strikingly, they can be created and annihilated from the quantum vacuum not only as particle-antiparticle pairs (quasiparticles and -holes), but also one by one out of vacuum fluctuations, if axial "electric" and "magnetic" fields are applied. This quantum effect of creation of chiral particles from vacuum fluctuations is known in particle physics as the chiral or axial anomaly. In ³He-A the chiral anomaly has been demonstrated in experiments with skyrmions [7].
The reason for such a close and robust connection between ³He-A and the quantum vacuum of the Standard Model is topological. The quantum vacuum of the early Universe and ³He-A belong to the same universality class of topological materials. This is the class of fermionic vacua with Weyl points – topologically protected points in momentum space where the energy of a (quasi)particle goes to zero. The topology of this point in momentum space is similar to the topology of
a magnetic monopole in gauge theory, see Fig. 2(a). Other condensed matter representatives of this class are solid-state topological materials – Weyl semimetals and Weyl superconductors.

FIG. 1: Rotating cryostat, in which topological phases of superfluid ³He are studied at ultralow temperatures reaching 140 μK. Rotation allows one to create and stabilize different types of topological objects, such as Alice strings, Witten superconducting strings, Kibble-Lazarides-Shafi walls bounded by strings, solitons, skyrmions, etc., and to perform "cosmological" experiments, such as simulation of the Kibble-Zurek mechanism of defect formation in the early Universe, and of the transition from Minkowski to Euclidean spacetime.

FIG. 2: Topological materials and fermionic quantum vacua as topologically stable configurations in momentum space. (a) The Weyl point in ³He-A as a monopole in p-space. The hedgehog of spins in momentum space is responsible for the topological stability of the elementary particles of the Standard Model and of the Weyl quasiparticles in ³He-A. (b) The polar phase of ³He has a Dirac nodal line in the quasiparticle spectrum – the p-space counterpart of a cosmic string in real space. (c) Skyrmion configurations in p-space describe topological insulators and fully gapped topological superfluids, such as ³He-B.

2) The polar phase has been realized in ³He immersed in a nanostructured material called nafen, see e.g. Ref. [8]. Nafen is composed of nearly parallel solid strands, which are about 9 nm in diameter and 30-50 nm apart. The nafen volume is thus mostly empty, and in the experiments this empty space is filled by liquid ³He. Quasiparticles in the polar phase are also gapless, but their energy is zero along a line in momentum space – the so-called Dirac nodal line in Fig. 2(b). These quasiparticles are similar to massless, quasi-two-dimensional Dirac particles, with one important reservation: the synthetic metric $g^{ik}$ becomes degenerate, i.e. the "speed of light" vanishes in the direction along the nodal line.

3) In the B phase (³He-B) the spectrum of quasiparticles is similar to the spectrum of Dirac particles in the present epoch of the Universe, where all elementary particles have become massive. The fully gapped B phase has the topology of the so-called DIII superconducting class, which is similar to the topology of skyrmions in real space, Fig. 2(c). This topological configuration protects massless Majorana fermions living on the surface of the superfluid and in vortex cores.

FIG. 3: Half-quantum vortex in the chiral superfluid ³He-A and in the polar phase. The spin part of the order parameter in both phases has the form $\Psi = \hat{d}\,e^{i\Phi}$. The phase Φ of this vector order parameter changes by π around the half-quantum vortex. The change of the phase of the order parameter is compensated by the change of the direction of the spin vector d̂ and does not produce a physical jump in the order parameter Ψ: the order parameter remains continuous around the vortex. In ³He-A, the spin-orbit interaction forces the change of the d̂-vector to be concentrated within a topological soliton; across the soliton, d̂ continuously reverses its direction. Due to the energy of the soliton attached to the vortex, the vortex is not favorable energetically. The half-quantum vortex was first observed in the polar phase of ³He confined in a nanostructured material called nafen [8]. In the polar phase the spin-orbit interaction is more favorable: in zero magnetic field, or in a field along the nafen strands, the solitons are absent, and the lattice of half-quantum vortices becomes the lowest-energy state in a rotating vessel.

FIG. 4: Half-quantum vortex as the Alice string. Spacetime continuously transforms to its mirror image after circling an Alice string, and thus matter continuously transforms to antimatter. Two penguins, Alice and Bob, start to move in opposite directions around the string. When they meet each other again, they may annihilate. In condensed matter the vector d̂, which is the axis of quantization for spin, changes sign around the half-quantum vortex, i.e. spin transforms to "antispin" – the analog of antimatter.
Half-quantum vortex as an Alice string
One of the exotic topological objects living in ³He is the half-quantum vortex (HQV) – a vortex carrying a fraction of the quantum of circulation, see Fig. 3. It is the analog of the so-called Alice string in cosmology, around which an encircling particle continuously transforms to an antiparticle. In other words, spacetime is continuously transformed to its mirror image – the antispacetime, see Fig. 4. In cosmology, an antispacetime Universe was recently suggested as a continuation of our Universe across the Big Bang singularity [9]. That is the rather more dangerous route to a mirror Universe; going around the Alice string can still be safe for Alice, as long as she avoids a close encounter with Bob.
HQVs were originally predicted to exist in the chiral superfluid ³He-A [10]. However, before being observed in ³He-A, HQVs were first observed in another topological phase of ³He – the polar phase [11]. The reason is that in ³He-A the spin-orbit interaction chooses a preferred orientation for the vector d̂ describing the spin degrees of freedom of the order parameter. This leads to the formation of a soliton interpolating between two degenerate vacua, Fig. 3. The energy of the soliton prevents the nucleation of HQVs (Alice strings) in ³He-A. In contrast, in the polar phase the spin-orbit interaction can be controlled so as not to prohibit the formation of HQVs.
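The single-valuedness that makes the half-quantum vortex possible (see the caption of Fig. 3) can be verified with a short numerical check. The planar parametrisation of d̂ below is our own simplification for the purpose of the check:

```python
import numpy as np

# Check that Psi = d_hat * exp(i * Phi) is single-valued around a
# half-quantum vortex: Phi winds by pi while d_hat flips sign.

theta = np.linspace(0.0, 2 * np.pi, 9)                # angle around vortex
phi = theta / 2.0                                     # total phase winding: pi
d_hat = np.stack([np.cos(theta / 2), np.sin(theta / 2)], axis=1)  # d -> -d

psi = d_hat * np.exp(1j * phi)[:, None]               # vector order parameter

print(psi[0].real)                   # [1, 0] at theta = 0
print(psi[-1].real)                  # [1, 0] again at theta = 2*pi
print(np.allclose(psi[0], psi[-1]))  # True: no jump in Psi around the loop
```

The π jump in the phase and the sign flip of d̂ cancel exactly, so the order parameter closes smoothly on itself.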
The surprise was that HQVs formed in the polar phase, by rotation of the superfluid or by the Kibble-Zurek mechanism, survive the transition to ³He-A. The reason is that the defects are pinned by the nafen strands. They remain pinned after the transition to ³He-A, in spite of the formation of the energetically costly solitons and the resulting effective attractive tension between vortices.
Two routes to mirror Universe
Even more surprisingly, further experiments demonstrated that the HQVs survive even the phase transition to ³He-B, where such defects cannot exist as individual entities. It was found that the HQV becomes part of a composite defect: it is the boundary of a domain wall, Fig. 5. As distinct from the continuous topological soliton in Fig. 3, the wall bounded by the HQV is singular: it is composed of yet another, unstable superfluid phase with a degenerate
metric, and has a higher energy. But still the wall tension
is not sufficiently strong to unpin the vortices.
In cosmology, walls bounded by cosmic strings were suggested by Kibble, Lazarides and Shafi (KLS) [12]. The KLS walls appear after two successive cosmological phase transitions. Below the first transition the topologically stable defects – cosmic strings – are formed. Below the second transition these defects lose their topological stability and become boundaries of domain walls. As we explained, the same mechanism of successive phase transitions works in superfluid ³He. The composite HQV defect – the Kibble-Lazarides-Shafi wall bounded by HQVs (Alice strings) – demonstrates the two ways to enter the mirror world, Fig. 6 [13]. The safe (continuous) route is around the half-quantum vortex; the dangerous route is across the cosmic singularity of the Kibble-Lazarides-Shafi wall. In our case the dangerous route is similar to the route of our Universe from spacetime to antispacetime in Ref. [9].
FIG. 5: Further experiments demonstrated that the half-quantum vortex survives even the phase transition to ³He-B, where such a vortex cannot exist as an independent topological object. The previously unobservable jump in the d̂ field becomes physical – the domain wall between different superfluid vacua. This wall is singular, as distinct from the continuous topological soliton. So, in ³He-B the half-quantum vortex becomes part of a composite defect – the domain wall terminated by the string. This is the analog of the Kibble-Lazarides-Shafi (KLS) cosmic wall bounded by a cosmic string [12].
Acknowledgements
This work has been supported by the European Research
Council (ERC) under the European Union’s Horizon
2020 research and innovation programme (Grant Agreement
No. 694248).
About the Authors
Grigory Volovik (middle) currently shares a position at the Landau Institute (Chernogolovka, Russia) with a position at Aalto University (Otaniemi, Finland). His areas of interest are topological defects; topological matter; and the connections between condensed matter physics, particle physics and cosmology. In 2003-2006 he was chairman of the European Science Foundation Programme "Cosmology in the Laboratory". He was awarded the Landau Prize (1992), the Simon Prize (2004) and the Lars Onsager Prize (2014), and is a member of the German National Academy Leopoldina.
Vladimir Eltsov (left) received his doctoral degree from the Kapitza Institute in Moscow, Russia, in 1997 and then moved to the Helsinki University of Technology to work on superfluid ³He. Since 2010 he has led the experimental research on rotating ³He at Aalto University.
Jaakko Nissinen (right) is a postdoctoral researcher at Aalto University. His research interests include topological phases of matter, condensed matter analogs of relativistic quantum field theory, and quantum Hall physics. He obtained his Ph.D. at the University of Oslo, Norway.
References
[1] T.W.B. Kibble, J. Phys. A9, 1387 (1976).
[2] W. H. Zurek, Nature 317, 505 (1985).
[3] A.M. Polyakov, JETP Lett. 20, 194 (1974).
[4] G. ’t Hooft, Nucl. Phys. B 79, 276 (1974).
[5] V.M.H. Ruutu, V.B. Eltsov, A.J. Gill, T.W.B. Kibble, M. Krusius,
Yu.G. Makhlin, B. Placais, G.E. Volovik, Wen Xu, Nature 382,
334 (1996).
[6] G.E. Volovik, The Universe in a Helium Droplet, Clarendon Press,
Oxford (2003).
[7] T.D.C. Bevan, A.J. Manninen, J.B. Cook, J.R. Hook, H.E. Hall,
T. Vachaspati and G.E. Volovik, Nature 386, 689 (1997).
[8] S. Autti, V.V. Dmitriev, J.T. Mäkinen, A.A. Soldatov,
G.E. Volovik, A.N. Yudin, V.V. Zavjalov, and V.B. Eltsov,
Phys. Rev. Lett. 117, 255301 (2016).
[9] L. Boyle, K. Finn and N. Turok, Phys. Rev. Lett. 121, 251301
(2018).
[10] G.E. Volovik, V.P. Mineev, JETP Lett. 24, 561 (1976).
[11] J.T. Mäkinen, V.V. Dmitriev, J. Nissinen, J. Rysti,
G.E. Volovik, A.N. Yudin, K. Zhang, V.B. Eltsov, Nat. Comm. 10,
237 (2019).
[12] T.W.B. Kibble, G. Lazarides and Q. Shafi, Phys. Rev. D 26,
435 (1982).
[13] G.E. Volovik, JETP Lett. 109, 499 (2019).
FIG. 6: Two roads to antispacetime: the safe route around the Alice string (along the contour C₁) or the dangerous route along C₂ across the Kibble-Lazarides-Shafi wall with degenerate metric. This dangerous route through the Alice looking glass is similar to the route of our Universe from spacetime to antispacetime via the Big Bang in Ref. [9].
FEATURES
KNOWLEDGE AND SKILLS CHANGES TO ACCREDITATION HERALD PEDAGOGICAL TRANSFORMATION IN THE UK
David Sands – University of Hull – DOI: https://doi.org/10.1051/epn/2019505
The Institute of Physics in London is changing the way it accredits degrees, which could
have far-reaching consequences for the way physics is taught and assessed. Degree
accreditation serves two purposes. First, it is the mechanism by which the Institute fulfils
its commitment under its Royal Charter to uphold standards in physics education, and
secondly, it provides a crucial step toward professional recognition for graduates.
Anyone wanting to be recognised as a Chartered
Physicist has not only to be able to
provide evidence of suitable professional
experience, but also to show knowledge and
skills appropriate to Masters level. By accrediting degree programmes, the Institute makes it much easier for graduates to demonstrate that the requisite educational level has been reached.
Those familiar with the UK system will be aware that
we have two types of undergraduate degrees: the normal
3-year Bachelors and the 4-year integrated Masters.
The Institute accredits both types of degree. Graduates
from an accredited Bachelors degree partially meet the
educational requirements for professional recognition,
but graduates from an integrated Masters degree meet
them in full and only have to show appropriate professional
experience.
The Institute has previously approached accreditation
by essentially defining what a physics degree should look
like. There is a prescribed core of topics together with
some attempt to define the minimum level of complexity.
Referred to as the IOP Core of Physics, this content is largely
delivered in the first two years. In addition to the Core,
a Bachelors programme must contain at least 60 credits
(CATS) of honours-level physics content. For those not
familiar with the British system, honours level corresponds
to the final level of a Bachelors programme. There are also
suggested minimum amounts of laboratory work for experimental
physics programmes, as well as a range of skills
that graduates should be developing and requirements on
a minimum amount of mathematics content.
What often seems like a good idea in principle can
throw up difficulties in practice and there are three main
drawbacks to the current approach of requiring a fairly
substantial prescribed core content. First, physics degrees
across the UK look remarkably similar. There are
variations, of course, but as the first two years of any
Bachelors degree are generally taken up with teaching the
IOP Core the opportunities to be distinctive are limited
to what is offered in the final year. Secondly, rather than
being seen as the essential physics that every graduate
should know, the Core of Physics has come to be seen
as a requirement to be fulfilled and very often depth is
sacrificed for breadth with some material being covered
in only one or two lectures. Thirdly, the kinds of physics
degrees that can meet the accreditation requirements
are quite limited.
This last is a central consideration in the accreditation
review. The Core contains a number of what seem at first
sight to be important concepts but in fact are very specific
examples of the application of more basic ideas. In degree
programmes such as biological physics or environmental
physics, which are concerned with the application
of physics to specific areas, it might not be relevant for
students to learn in depth about laser cavities, semiconductor
band structure, or the role of phonons in the heat
capacity of a solid. That does not mean that they do not
possess a sound knowledge of physics, think like physicists
or acquire the same kind of physics-related skills
as graduates from more conventional physics degrees. If
these kinds of degrees deliver these outcomes, graduates
should be eligible for the same kind of accelerated professional
recognition.
The answer to these difficulties is to shift the focus
away from the degree to the graduate. The existing accreditation scheme sets out in detail what a physics degree should look like, or at least those essential elements common to physics degrees across the UK, but we want instead to identify the kind of attributes a graduate should possess.
It will be up to departments to decide on the details of the
degree programme that will develop those attributes and
this will allow for much greater flexibility, distinctiveness
and inclusivity.
Our starting point for graduate attributes is the QAA benchmark statements for physics and astronomy (https://www.qaa.ac.uk/docs/qaa/subject-benchmark-statements/sbs-physics-astronomy-and-astrophysics-17.pdf). At the end of the last century the QAA, or Quality Assurance Agency, started to lay down a set of discipline-specific
statements against which the outcomes of degrees should
be judged. These statements were constructed by the
members of the respective academic communities, for
example, lawyers, historians, mathematicians, etc., and
therefore reflected the thinking of that community. Regardless
of the university or the discipline, it is now very
unlikely that any new degree will not base its outcomes
on the benchmark statements. These statements have
thus become the de facto standards for the outcomes of
degree programmes in the UK.
The last revision of the benchmark statements for physics and astronomy occurred in 2017. I was part of the review group. We came to the conclusion early on
that little change was needed to the content, but it could
be re-ordered and re-organized to present a much more
coherent and usable account of standards. Thus, the 2017
document looks very different from its 2008 predecessor,
but the changes are largely cosmetic. The long list of
outcomes for both Bachelors and Masters degrees was
reorganised into threshold and typical outcomes for both
types of degree. The typical outcomes for a BSc build on
the threshold and the threshold for integrated Masters
degrees build on the typical for a Bachelors. The only
change of any substance we made was to emphasise the
role of computation in modern physics, which was understated
in the 2008 document. Computational physics
has emerged as a third way of doing physics alongside
the two traditional branches of experimental and theoretical
physics. In particular, computation is perhaps the
only possible way to understand emergent behaviour in
large systems subject to simple rules and we wanted the
benchmark statement to reflect this.
Although we weren’t thinking about accreditation at
the time, re-organising the benchmark statements into
threshold and typical levels brought the two processes
into very close alignment. Accreditation is very much
concerned with thresholds: every graduate from a degree
programme must meet the minimum educational
standard for that programme to be accredited. Therefore,
we adopted and adapted the outcomes in the benchmark
statement for our own purposes.
The principal adaptation has been to phrase the outcomes
in a way that is clear and measurable. There are eight
threshold standards for a Bachelor’s degree and most can
be carried across into accreditation without much adjustment
or elaboration, but some are not so straightforward.
For example, the first standard says that a student will have demonstrated an ability to "comprehend basic physical laws and principles", which raises two questions: what do we mean by comprehend, and which basic laws?
Understanding is one of those things that we recognise
when we see it but is very hard to pin down in a simple
definition, yet if a department is to collect evidence that
students are developing an understanding and present
that in support of an accreditation application, this is precisely
what must be done. Physics education research has
shown repeatedly over the years that being able to state
a law is not equivalent to understanding it. Students can
often state Newton’s three laws of motion, for example,
but when asked to apply them in a relatively simple problem
requiring qualitative reasoning, they often come up
with the wrong answer. However, skill in mathematics
is also not sufficient. Graduates who demonstrate mathematical
facility do not always display a good grasp of
the fundamentals.
If you were to have an intelligent conversation with someone about physics, you would expect the conversation to involve qualitative descriptions as well as quantitative and mathematical arguments. The latter might involve, for example, limiting cases, or simply interpreting trends implied by mathematical representations of the physics.
You would also expect ideas to be represented in whatever
way is appropriate to the discussion, including using
diagrams and graphs. The ability to reason qualitatively
and to translate between different representations lies at
the heart of understanding and in the absence of a precise
definition these will serve as good indicators. Therefore,
by including understanding of basic laws and principles
among the accreditation criteria we will effectively require
departments to find ways to allow students to develop
these abilities as well as assess them and to use both
as evidence in their accreditation submission.
The question of what to include in the content is more
difficult to resolve. If you were to ask your colleagues
which ideas in physics a graduate must know in order to
be considered a physicist, you would probably receive a
variety of different answers depending on the particular
specialism of the person you are asking. By way of example,
a colleague of mine suggested not so long ago that the p-n junction is so central to modern technology that it ought to feature in the IOP Core. It doesn't and never has, but one can easily appreciate the argument for including it. It functions as a rectifier and a voltage-dependent capacitor, but it also forms the basis of laser diodes, LEDs, solar cells, transistors and photodiodes. However, these are applications of semiconductor band theory, which in turn arises from the application of the Pauli exclusion
principle to interacting systems. If we were to pare
it down to essentials, we would find that semiconductor
physics is an application of quantum mechanics to interacting,
multi-atom and multi-electron systems with
some simplifying assumptions from classical mechanics
(in the form of transport theory) and electrostatics
superimposed. It might be desirable to teach these ideas
via semiconductor physics, but it is not essential. What
is essential, however, is that students have the skills for
self-learning, so that someone who has learnt these ideas
in other contexts can, if necessary, transfer them to semiconductors.
The same applies to many of the ideas that
would be suggested for inclusion in the core.
We have taken a quite radical approach to the IOP Core
of Physics. Our intention at the outset was to make it less
prescriptive and restrictive, so we have based it on the areas
identified as fundamental in the benchmark statement:
electromagnetism, quantum and classical mechanics, statistical
physics and thermodynamics, wave phenomena
and the properties of matter. Two topics currently in the
Core, condensed matter and optics, no longer feature in
the requirements for accreditation. We reasoned that a
graduate with a sound understanding of these five basic
topics should be able to pick up optics or solid state physics
through self-directed learning if the need arises. Most
departments are likely to continue teaching these topics
but removing them from the core gives more freedom to
consider whether this content is necessary for all degrees,
for example mathematical physics. If taught, they would
then serve as vehicles to illustrate the application of some
or all of these five fundamental topics.
The main outcome of the review of accreditation procedures
is that the IOP Core of Physics is no longer a list
of topics that needs to be taught in the first two years.
Rather, it constitutes a set of themes that will run through
the entire degree and in a well-designed programme
students will develop their knowledge, understanding
and competence in applying that knowledge throughout
the duration of the degree. This allows for a much
more holistic approach to the design of the curriculum
and has the additional benefit that departments can be
much more distinctive in what they offer. Considerable
support for these ideas has already been expressed but
a formal consultation is required before the scheme is
implemented. Once that happens, either towards the end
of this year or early next, it will take time for departments
to adjust to the new scheme, but it holds out the prospect
of systematically embedding good teaching practice into
the national structure of physics degrees in the UK.
About the Author
David Sands is chair of the Physics
Education Division of the EPS. He has
worked extensively with the Institute of
Physics in London on accreditation, first
as an assessor, then as chair of the Degree
Accreditation Committee and lastly as
chair of the Accreditation Review Group. He also represents
the UK on Commission 14 (education) of IUPAP.
FEATURES
THE EXOPLANET REVOLUTION
Yamila Miguel – Leiden Observatory, Leiden, The Netherlands – DOI: https://doi.org/10.1051/epn/2019506
Hot Jupiters, super-Earths, lava-worlds and the search for life beyond our solar system:
the exoplanet revolution started almost 30 years ago and is now taking off.
Are there other planets like the Earth out there? This is probably one of humanity's oldest questions. For centuries, and until the 1990s, we only knew of the existence of 8 planets. But today we live in a privileged time: for the first time in history we know that there are other planets orbiting distant stars.
The first planet orbiting a star similar to the Sun was discovered in 1995 – only 24 years ago – and it started a revolution in astronomy. Today astronomers have discovered an astonishing 4000 exoplanets, and counting. Every new discovery reveals an amazing diversity that impacts our perception and understanding of our own solar system.
How to find exoplanets?
Finding exoplanets is an extremely difficult task. These planets shine mostly by reflecting the light of their stars in their atmospheres, and their light is incredibly weak compared to that of their host stars. For this reason, observing exoplanets directly is extremely difficult, and astronomers had to develop indirect techniques that infer the presence of the planet.

Artist's impression of COROT-7b. © ESO/L. Calçada

FIG. 1: Schematic view of the transit (top panel) and radial velocities (bottom panel) methods.
Two of the most successful techniques to discover exoplanets are the "transits" and "radial velocities" techniques.
In the first, astronomers observe the dimming of the stellar light when the planet passes in front of the star (figure 1, top panel). Current instrumentation allows astronomers to measure changes of less than 1% in the stellar light. Because the portion of stellar light that is blocked is proportional to the area of the planet's disc, this technique allows astronomers to determine the planetary radius.
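To make the numbers concrete, here is a minimal sketch (the radii are standard solar-system values; the helper function is ours, not from the article):

```python
# Transit depth: the fraction of starlight blocked is the ratio of the
# projected disc areas, (R_planet / R_star)**2.

R_SUN, R_JUPITER, R_EARTH = 6.957e8, 7.149e7, 6.371e6   # radii in metres

def transit_depth(r_planet, r_star=R_SUN):
    return (r_planet / r_star) ** 2

print(f"Jupiter in front of the Sun: {transit_depth(R_JUPITER):.2%}")  # ~1%
print(f"Earth in front of the Sun:   {transit_depth(R_EARTH):.4%}")    # ~0.008%
```

A Jupiter-sized planet dims a Sun-like star by about 1%, within reach of the instrumentation quoted above, whereas an Earth-sized planet dims it by only about 0.008%, which is why Earth analogues are so much harder to find.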
In the radial velocities technique, astronomers measure the movement of the star – the Doppler shift of the stellar light – caused by the presence of a planet orbiting around it (figure 1, bottom panel). With present techniques, astronomers can measure stellar movements of less than 10 cm/s (as a reference, we usually walk at 1 m/s) [1]. The effect on the star is larger if the planet is massive and located close to the star, and smaller for small planets. Therefore, the radial velocities technique gives us an estimate of the mass of the planet. Other methods include direct imaging (looking at the light from the planet directly) and micro-lensing (observing gravitational lensing due to a planet).
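The stellar reflex velocities involved can be estimated from the standard radial-velocity semi-amplitude formula; the sketch below, with constants and function names of our own choosing, reproduces the orders of magnitude quoted above:

```python
import math

G = 6.674e-11      # gravitational constant, SI units
M_SUN = 1.989e30   # kg
YEAR = 3.156e7     # s

def rv_semi_amplitude(m_planet, period, m_star=M_SUN, sin_i=1.0, ecc=0.0):
    """Reflex velocity K = (2*pi*G/P)**(1/3) * m_p*sin(i)
    / ((m_star + m_p)**(2/3) * sqrt(1 - e**2))."""
    return ((2 * math.pi * G / period) ** (1 / 3) * m_planet * sin_i
            / ((m_star + m_planet) ** (2 / 3) * math.sqrt(1 - ecc ** 2)))

print(f"Jupiter analogue: {rv_semi_amplitude(1.898e27, 11.86 * YEAR):.1f} m/s")
print(f"Earth analogue:   {rv_semi_amplitude(5.972e24, YEAR) * 100:.0f} cm/s")
```

A Jupiter analogue moves the Sun at about 12 m/s, while an Earth analogue produces only ~9 cm/s, which is why the ~10 cm/s precision mentioned above [1] marks the current frontier.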
Each of these methods gives valuable information for understanding the variety of planets out there. In particular, the combination of both techniques allows astronomers to calculate a planet's density, which is important for assessing planetary compositions and diversity.
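As a worked example of that combination (a sketch of ours, using textbook values):

```python
import math

def bulk_density(mass_kg, radius_m):
    """Mean density from a radial-velocity mass and a transit radius."""
    return mass_kg / (4 / 3 * math.pi * radius_m ** 3)

print(f"Earth:   {bulk_density(5.972e24, 6.371e6):.0f} kg/m^3")  # ~5500, rocky
print(f"Jupiter: {bulk_density(1.898e27, 6.991e7):.0f} kg/m^3")  # ~1300, gaseous
```

A density near 5500 kg/m³ points to a rocky world, while one near 1300 kg/m³ points to a gas giant; intermediate values are where the interpretation becomes interesting.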
The Exoplanet Zoo
The population of exoplanets shows a huge diversity (figure 2). Due to observational biases, most of the planets detected so far orbit very close to their stars – completing an orbit in as little as a few hours – and for this reason many of them are tidally locked, always showing the same face towards the star, similar to the Moon-Earth system. This affects the circulation of their atmospheres and in some cases creates huge temperature differences between the dayside and the night side, which can be as extreme as ~600 K.
The search for exoplanets has revealed other surprises;
here we describe some of the strange and unexpected
worlds found.
Hot Jupiters and far-away giant planets.
Astronomers are finding giant planets – like Jupiter and Saturn – but located very close to their stars, much closer than Mercury is to the Sun. One of the most famous examples of a "hot Jupiter" is 51 Pegasi b, the first exoplanet detected around a Sun-like star [2]. It orbits its star at a distance of 0.05 times the distance between the Earth and the Sun (the astronomical unit), which implies that a "year" on that planet lasts 4.2 days.
Astronomers are also finding giant planets very far
away from their stars, at approximately twice the distance
between Neptune and the Sun, and even further out.
Both hot Jupiters and far-away giants shook the foundations of planet formation theories, pushing the boundaries and showing that extreme scenarios are possible for the formation and evolution of planets. Today we know that planets are not necessarily located in the place where they were born: they "migrated" due to interaction with the protoplanetary disk during their formation and evolution [3]. In addition, some of them might have been scattered away due to dynamical interaction with the star and other bodies of the planetary system.
Mini-gas planets and super-Earths.
There are planets that don't have enough mass to be a giant planet, but are more massive than small rocky planets like our own. These planets have masses of up to approximately 10 times that of the Earth and for this reason are usually called "mini-gas planets" – those with a substantial atmosphere made of hydrogen and helium – or "super-Earths" – the ones that are smaller and have a much thinner, potentially secondary atmosphere. Since there is no parallel to these planets in our solar system, astronomers don't know what to expect for their interiors, atmospheres or formation history. In addition, analysis of the exoplanet population shows that most of the
exoplanets discovered so far belong to this category [4], and a lot of effort is going into trying to understand their nature and why there are no such planets in our solar system.
Hot rocky exoplanets or "lava worlds".
These are an intriguing class of planets that are rocky – like the Earth – but located extremely close to their stars, and that might have a magma ocean running over their surfaces. This is caused by the high dayside temperatures (of approximately 2000 K or more) produced by stellar irradiation. On these planets, the magma ocean vaporises and forms an outgassed atmosphere mainly made of vaporised rock [5], which escapes the planet in what looks like a cometary tail.
Many of these planets were discovered using powerful ground-based observatories, or space missions like Corot (ESA), Kepler (NASA) or the TESS mission (NASA), currently in space. Other space missions, such as the upcoming Cheops (ESA, to be launched this year) and JWST (NASA, to be launched in 2021), will help to improve our knowledge of these worlds.
The search for life in the Universe and future prospects
The ultimate goal of the exploration and search for exoplanets is to know whether there are other solar systems and planets like our own. We still don't know how life originated on Earth, but we do know that water is essential for life on our planet. The fact that our planet has oceans of liquid water is due to a combination of different factors, among them the mass of our planet, the pressure of our atmosphere and the temperature of the planet, which is a consequence of the gases present in our atmosphere and the irradiation received from the Sun. With this idea in mind, astronomers developed the concept of the "habitable zone" [6]. This is the region where a planet with an atmosphere like the Earth's should orbit its host star in order to maintain liquid water on its surface. Since different stars emit different amounts of energy, each star has its habitable zone located at a different distance. Keep in mind that if we find an exoplanet with a mass and radius similar to the Earth's, located in the habitable zone, this does not necessarily mean that the planet hosts life; it only says whether the planet has any possibility of being habitable – of having liquid water on its surface – at all. Since different rocky planets might develop different atmospheres, this is just a useful concept to guide our searches, but nothing more. Our current technology allows us to measure only the masses and radii of Earth-like exoplanets (and to detect some chemical species in the atmospheres of bigger, giant exoplanets [7]); therefore it is not yet possible to detect Earth twins and uniquely identify life-forms on exoplanets.
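The scaling of the habitable zone with stellar luminosity can be sketched in a few lines; the flux bounds used here are approximate values in the spirit of the classical habitable-zone models [6], not a definitive calculation:

```python
import math

# Habitable-zone edges scale with stellar luminosity as d = sqrt(L / S_eff),
# where S_eff is the stellar flux at the edge in units of Earth's insolation.
# The S_eff bounds below are illustrative approximations.

S_INNER, S_OUTER = 1.1, 0.53

def habitable_zone_au(luminosity_lsun):
    """Inner and outer habitable-zone distances in astronomical units."""
    return (math.sqrt(luminosity_lsun / S_INNER),
            math.sqrt(luminosity_lsun / S_OUTER))

print(habitable_zone_au(1.0))     # Sun: roughly (0.95, 1.37) AU
print(habitable_zone_au(0.001))   # faint red dwarf: about (0.03, 0.04) AU
```

This is why the habitable zone of a faint red dwarf sits far closer to the star than the Earth does to the Sun, where tidal locking and stellar activity complicate the picture.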
Nevertheless, the future does not look dark, and astronomers are working towards the next generation of instruments that will make this possible [8]. An example is the Plato mission (ESA), a space telescope (to be launched in 2026) whose goal is to find and characterise planets like the Earth. Examples from the ground include the next generation of extremely large telescopes (the Extremely Large Telescope, the Giant Magellan Telescope and the Thirty Meter Telescope), which will have mirrors of around 30 metres and are currently under construction. These telescopes and the future space missions will lead astronomers to the next step towards finding habitable worlds, where sci-fi meets reality.
About the Author
Yamila Miguel is an assistant professor at Leiden Observatory. She completed her PhD in Argentina at La Plata University, did a first postdoc at the Max Planck Institute for Astronomy (Germany) and was later a CNES postdoctoral fellow at the Observatoire de la Côte d'Azur (France).
References
[1] J. I. González Hernández, F. Pepe, P. Molaro and N. C. Santos,
Handbook of Exoplanets, 157 (2018)
[2] M. Mayor and D. Queloz, Nature 378, 355 (1995)
[3] D.N.C. Lin and J. Papaloizou, in D.C. Black & M.S. Matthews,
eds., Protostars and Planets II Univ. of Arizona Press, Tucson,
pp. 981-1072 (1985)
[4] A. W. Howard, Science 340, 572 (2013)
[5] Y. Miguel, L. Kaltenegger, B. Fegley et al. , ApJL 742, L19 (2011)
[6] J. F. Kasting, D. P. Whitmire and R. T. Reynolds, Icarus 101,
108 (1993)
[7] D. Charbonneau, T. M. Brown, R. W. Noyes et al. , ApJ 568,
377 (2002)
[8] I. A. G. Snellen, R. J. de Kok, R. le Poole et al. , ApJ 764, 182 (2013)
FIG. 2: Exoplanets discovered by August 2019 (data from exoplanets.org). The planets in our solar system are indicated with filled circles for comparison. The main populations of exoplanets are indicated with circles: hot Jupiters (red), mini-gas giants or super-Earths (purple) and hot rocky exoplanets (green).
FEATURES
ADVANCED INSTRUCTIONS
FOR IMPARTING KNOWLEDGE:
GETTING SCIENTISTS HEARD
AMIDST THE NOISE OF FAKE NEWS
A compacted re-print of an article by science writer Matthias Plüss
Swiss Science Magazine 'Horizons' of 05/06/2018 – DOI: https://doi.org/10.1051/epn/2019507
It’s getting more and more difficult for experts to get their arguments across to a broad
public. We investigate why, and offer six suggestions for improving things.
An expert under siege by the media: German Ramirez, a specialist in tropical illnesses, reports on the successful treatment of a Spanish nurse who was infected with Ebola in 2014. © Image: Denis Doyle/Getty Images
Tom Nichols is a professor of national security
affairs at the U.S. Naval War College in
Newport. He says he’s long had to get used to
the fact that most people don’t like professors.
People devoid of any specialist knowledge are becoming
convinced that they know better than the experts.
“I don’t have a problem with people being sceptical –
that’s actually a good thing”, says Nichols. “What’s bad is
that people have lost all respect. We’re now being challenged
aggressively". With "The Death of Expertise" [1], Tom Nichols has written the ideal book for our times.
Everyone’s an expert
The crisis in expertise doesn’t just affect science. Doctors
tell of patients who don’t ask for advice, but simply
demand treatments they’ve found on Google. Architects
and craftsmen tell of clients who want to dictate to them
how to do their work. And teachers often have to cope
with parents who aren’t prepared to accept that the answers
their child gave in a test are actually wrong.
The reasons for this phenomenon are as diverse as the
problem is widespread. Nichols writes of a negligence
that comes with prosperity: “Our highly technological
world functions so smoothly that people mistakenly start
to believe that everything is really simple. You click on a
button, and your e-mail flies halfway round the world.
No one thinks about all the experts that make it possible,
from the engineers to the software designers and the
diplomats”. Another reason is the trend towards treating
students like customers today, asking after their wellbeing
instead of challenging them. This can lead to an excess
of self-confidence that is coupled with less knowledge.
Two other reasons for this crisis of expertise are to
be found within our science system itself. On the one
hand, we are experiencing the revenge of postmodern
relativism. Starting with Nietzsche – who claimed that
there were no facts, just interpretations – left-wing theoreticians
in particular have fundamentally questioned
whether there is any such thing as objective truth. The
philosopher Michael Hampe from ETH Zurich claims
that this is why it’s difficult to counter the arguments
of those who wish to relegate the notion of man-made
climate change to a mere thought-construct.
On the other hand, experts have constantly been venturing
beyond their own field of competence. “Scientists
can state the degree of probability that measles will break
out in a kindergarten if 20 percent of the children aren’t
vaccinated”, says Dietram Scheufele, professor of life
sciences communication at the University of Wisconsin
(USA). “But it’s not their job to decide whether or not to
introduce compulsory vaccinations”. This is a political
issue that can only be answered through the political
process, he insists. Scientists should certainly offer their
opinion, but they shouldn’t claim to be acting with any
authority (as they often do), as “this can only mean they
lose their credibility”. They should accept the fact that
moral and religious notions play a role, says Scheufele.
“Friedrich Dürrenmatt put it really well: ‘if something
affects everyone, then everyone has to solve it’”.
The internet
None of these undesirable trends would have culminated
in our current crisis without one decisive factor: the Internet.
Today, sound knowledge and informed opinions
exist on an equal footing with conspiracy theories and
mere gossip. What’s even worse: fake news often spreads
quicker and further than facts.
Social media are intensifying these negative developments.
“We’re all friends on Facebook”, says Tom Nichols.
“That has led to the absurd notion that we all know as
much as everyone else and everyone’s opinion is equally
valid”. What’s more, social media serve to promote an
effect that psychologists call ‘confirmation bias’. In reality,
people rarely form their opinions based on facts. Instead,
opinions tend to be dominant, and people then seek the
facts that confirm them. The Internet makes it far easier
to find such ‘facts’ – supported by algorithms that supply
us with hits that affirm what we want to believe. “This
is the paradox of our new world of information: it has
never been so easy to get all the information you want”,
says Dietram Scheufele. “But it has also never been so
easy to dodge all the information that you don’t want”.
The current crisis in expertise isn’t actually a crisis of
science. In Europe, scientists are still regarded as trustworthy,
and in the USA, 90 percent of the population
have a positive opinion of science. The problem is that
people seek out the science that happens to suit them.
And this might just turn out to be the study about alleged
links between vaccinations and autism that has long been
revealed to be fake.
It turns out that people don't become less ideologically minded as their knowledge increases. Instead, they become even more beholden to those ideologies.
This has been shown to be the case with man-made climate
change. In the USA, among Democrats, the more
people know about it, the more they believe it; but the
opposite is the case among Republicans. The American
psychologist Ashley Landrum recently reported on a fascinating
experiment she conducted. She had test subjects
read an article about the dangers of the Zika virus.
But she had two different versions of the article: the one
linked Zika to climate change, the other to migration.
Republican readers showed concern when they’d read the
migration article, whereas Zika in the context of climate
change left them cold. The exact opposite was the case
among Democrats.
In Europe, the situation does not yet seem to be as dramatic
or polarised as in the USA. But a glance at the wider
political situation doesn’t exactly foster optimism. To give
just one of many examples, we need only consider the
triumph of the Movimento Cinque Stelle in Italy, which
insistently supports vaccine sceptics. The concurrent
crisis in journalism doesn’t make things any easier. “It’s
already five minutes to midnight”, says Stephan Russ-Mohl,
professor of journalism and media management at the
Faculty of Communication Sciences, University of Lugano.
“In the fight against fake news, we’ve got our backs against
the wall. Despite all the initiatives that already exist, we
can barely reach people”.
Cleverer communication
Science urgently needs something along the lines of a
communication strategy. It’s not enough to be right in
principle. Gleb Tsipursky, a science historian at Ohio State
University with an interest in psychology, recommends
first exploring the emotional state of your conversation
partner. Why are they so angry? What’s troubling them?
In a second step, you have to demonstrate sympathy for
their problems. Only once you’ve done this groundwork
should you present your arguments – and, wherever possible,
in a way that doesn’t come across as diametrically opposed
to the fundamental convictions of your opposite number.
By following these steps, Tsipursky claims to have changed
the minds of several science sceptics.
SIX COMMUNICATION TIPS FOR SCIENTISTS
• Don’t moralise.
• First show empathy, then bring out the facts, but in moderation.
• Consider your audience, and choose your examples accordingly.
• State clearly where your expertise ends and your own opinions begin.
• When launching a new topic, consider early on how to frame it.
• For universities and funding organisations: offer courses in communication and create incentives for scientists to engage in PR themselves.
Dietram Scheufele is of a similar opinion: “When I
mention climate change to a Republican, the shutters
come down straightaway. It’s pointless for me to say anything
more”. So if you want to campaign for renewable
energies, it’s better to appeal to cross-group values. “It’s
best if you emphasise energy independence and global
competitiveness. These are things that are important to
all Americans. For example, Arnold Schwarzenegger does
that very well”.
Katharine Hayhoe is someone Scheufele thinks could
be a role model. She’s a climate scientist, the director of
the Climate Science Center of Texas Tech University, and
also an evangelical Christian. It’s an unusual combination,
but one that proves highly effective. Her religion gives her
credibility in conservative circles, and her insistence that
we must preserve Creation has enabled her to convince
many a sceptic that climate change is real – including
her husband, a pastor. The business magazine Fortune
currently ranks Hayhoe at number 15 in its list of the
‘World’s greatest leaders’.
Getting to grips with people’s values could also work
in other fields. A study at Emory University in Atlanta,
for example, has shown that vaccine-sceptical parents
don’t generally respond to appeals to ‘fairness’. You’d have
a better chance of success by pointing out that vaccines
strengthen the body’s natural resistance and give
you greater control over your health.
Another approach from communication psychology
is so-called framing: the art of giving a topic a specific
slant by carefully choosing your vocabulary, and thereby
shifting the audience’s emotions in the direction you want.
It is therefore important for scientists to consider
early on how they talk about their topics.
For example, there is a new book entitled ‘A Crack in
Creation’ about the genetic engineering method CRISPR,
written by Jennifer Doudna, a researcher from Berkeley.
“It sounds good”, says Scheufele. “But it will tread on the
toes of the almost 60 percent of Americans for whom
religion is very important”. The title makes it sound as if
this new technology contradicts the values of a large
portion of the population. “If such a notion gets established,
it’s almost impossible to rectify”.
Together for ‘truth’
Science is in the process of losing the race – even before
it’s really noticed that the race is on. “First, the scientists
have to realise that they actually have a problem”, says
Gleb Tsipursky. “Then they have to stop seeing themselves
as lone fighters and start to club together”.
Tsipursky has set up a movement called ‘Pro Truth
Pledge’. It gets experts – along with journalists and interested
laypersons – to commit publicly to spreading only
information that’s been verified; they also have to correct
their own mistakes and the mistakes of others, and must
always differentiate between facts and opinions. When
it’s suggested that science sceptics are hardly going to be
won over by such a project, Tsipursky answers that there
are still enough people situated between the two opposing
poles who might respond to such an approach. Hardened
sceptics are probably hopeless cases anyway.
Stephan Russ-Mohl has had a similar idea. He
is proposing an ‘alliance for enlightenment’. “Scientists
and journalists should come together in an alliance to
counter the flood of disinformation and fake news”. This
would help journalists to get new, reliable stories, while
researchers for their part would have a basis from which
to communicate more with the broader public. However,
at present there is nothing to suggest that such an alliance
could actually come about.
But what if scientists communicated directly with the
public on a more regular basis – such as via social media,
blogs or newspaper articles? “That would be desirable,
but there are no incentives for it”, says Russ-Mohl. Scientists
have enough on their plate, he says, because they
need to publish in specialist journals to maintain their
standing. “For as long as PR isn’t explicitly rewarded by
the organisations that fund science research, it’s unlikely
anything’s going to change”. Furthermore, many scientists
have settled quite comfortably into a life that disregards
the public altogether.
People like Nichols are lone warriors today. There
are signs of a possible coordinated campaign, such as
the March for Science [2], which brought several
hundred thousand people onto the streets in 2017. On
the other hand, there is hardly any sign that the barrage
of fake news and the vilification of experts are going to
ease up.
Nichols is less optimistic. If you ask him for a general
assessment of how things stand, his answers can be pretty
frightening. It is tragic to think, he says, that this rampant
narcissism might only disappear if it culminates in a
catastrophe such as a war or economic collapse, because
in crisis situations people suddenly feel a need for real
expertise again. “In the emergency room”, says Nichols,
“you don’t see many people arguing with the doctor”.
References
[1] Tom Nichols, “The Death of Expertise: The campaign
against established knowledge and why it matters”,
Oxford University Press (2017), ISBN: 9780190469412
[2] March for Science, https://marchforscience.org
ANNUAL INDEX VOLUME 50 - 2019
AUTHOR INDEX
A
Alimonti G. Climate Change, a point of agreement! • 50/3 • p.30
B
Bajtlik S. Nobel prize for Jim Peebles for his theoretical discoveries in physical cosmology • 50/5&6 • p.14
Beck C. EPS Statistical and Nonlinear Physics Prize 2019 • 50/4 • p.07
Bouchaud J.-P. Econophysics: still fringe after 30 years? • 50/1 • p.24
Boudewijn T. see Oosterkamp T.H.
Bourguignon J.-P. The Pursuit of Knowledge as European Endeavour • 50/1 • p.06
C
Cerullo G. Ultrafast lasers: from femtoseconds to attoseconds • 50/2 • p.11
D
Damjanovic S. South East European International Institute for Sustainable Technologies • 50/4 • p.31
Durante M. The Biophysics Collaboration for research at FAIR and other new accelerator facilities • 50/4 • p.27
E
Eltsov V.B. Safe and dangerous routes to antispacetime • 50/5&6 • p.34
F
Faísca P.F.N. Interview with Mike Kosterlitz • 50/3 • p.12
Friedrich B. The former Department of Theoretical Physics of the Goethe University • 50/5&6 • p.04
G
Gorelova D. A personal report: setting up a junior research group in Hamburg • 50/5&6 • p.13
Goudriaan J. Too much fear for radioactive contamination of seawater • 50/1 • p.19
H
Heikkilä T.T. Moiré with flat bands is different • 50/3 • p.24
Holmes C. A Nobel cause: public engagement and outreach • 50/2 • p.19
Hyart T. see Heikkilä T.T.
J
John P. see Holmes C.
K
Käs J.A. Sarah Köster awarded the EPS Emmy Noether Distinction • 50/5&6 • p.06
Kerkhoven J. see Terwel R.
Klanner R. Silicon Photo-Multipliers • 50/4 • p.17
Klein T. Crossing borders: Zeno's Paradox • 50/3 • p.27
Kubbinga H. A Tribute to Lise Meitner (1878-1968) • 50/4 • p.22
L
Lee D. EPS Council 2019 • 50/3 • p.04
Lukishova S. Letter: a lesson from the history of scientific discovery of measuring the pressure of light • 50/4 • p.15
M
Marino A. Physics, lasers and the Nobel Prize • 50/2 • p.26
Mariotti C. How we found the long-sought-after Higgs boson • 50/5&6 • p.24
Miguel Y. The Exoplanet revolution • 50/5&6 • p.41
N
Nissinen J. see Eltsov V.B.
Nisoli M. see Cerullo G.
O
Oosterkamp T.H. Skating on slippery ice • 50/1 • p.28
P
Patera V. see Durante M.
Pesce G. Manipulating matter with light • 50/2 • p.15
Plüss M. Advanced instructions for imparting knowledge • 50/5&6 • p.44
Prezado Y. see Durante M.
R
Rogan E.A. By combatting bias, we can achieve parity for women in science awards • 50/2 • p.32
Rossi G. Research Infrastructures as a key optimizer of European research • 50/1 • p.14
Rudolf P. Let’s better support our postdoctoral researchers! • 50/3 • p.03 | Physics and Innovation • 50/5&6 • p.03
Rusciano G. see Pesce G.
S
Sands D. Transforming physics pedagogy in the UK • 50/5&6 • p.38
Saris F.W. see Terwel R.
Sasso A. see Pesce G.
Saunders F. Building ITER – more than just a modern-day Cathedral? • 50/4 • p.03
Schmidt-Böcking H. The Stern-Gerlach experiment re-examined by an experimenter • 50/3 • p.15 | see Friedrich B.
Sirois Y. Emmy Noether Prize • 50/2 • p.04
Snellen I. Nobel prize for Mayor and Queloz as fathers of the field of extrasolar planets • 50/5&6 • p.15
Sólyom J. EPS Historic Sites: in Budapest to Honour the Eötvös Experiment • 50/1 • p.04
Spiro M. IUPAP: towards its centenary and towards an IYBSD in 2022/2023 • 50/4 • p.04
Stirling C. see Holmes C.
Storm C. Improving the gender balance at a Dutch university • 50/5&6 • p.11
T
Tanaka K. see Zamfir V. • 50/5&6 • p.09
Terwel R. Carbon Neutral Aviation • 50/5&6 • p.29
Timmermans C. GRAND: a Giant Radio Array for Neutrino Detection • 50/5&6 • p.09
Travasso R.D.M. see Faísca P.F.N.
U
Ur C. see Zamfir V.
V
van Leeuwen J.M.J. see Oosterkamp T.H.
van Tiggelen B. EPL in an eventful environment • 50/3 • p.20
Vigué J. Some comments on the historical paper by H. Schmidt-Böcking • 50/5&6 • p.22
Volovik G.E. see Eltsov V.B.
von Weizsäcker E.U. Science and long-term thinking • 50/2 • p.29
Voss R. S-Class publishing • 50/1 • p.03 | Crackdown on academic freedom • 50/2 • p.03
Z
Zamfir V. Extreme Light Infrastructure Nuclear Physics (ELI-NP) • 50/2 • p.23
MATTER INDEX
Annual index
Volume 50 - 2019 • 50/5&6 • p.47
Directory
August 2019 • 50/3 • p.31
Crossing borders
Zeno's Paradox - Some thoughts Klein T.
Editorials
S-Class publishing Voss R.
Crackdown on academic freedom Voss R.
Let’s better support our postdoctoral researchers! Rudolf P.
Building ITER – more than just a modern-day Cathedral? Saunders F.
Physics and Innovation Rudolf P.
Event
IUPAP: towards its centenary and towards an IYBSD in 2022/2023 Spiro M.
OSA Frontiers in Optics 2019 • 50/5&6 • p.16
Experiment
GRAND: a Giant Radio Array for Neutrino Detection Timmermans C.
The eyes of the Baikal GVD neutrino telescope • 50/5&6 • p.12
Historic sites
In Budapest to Honour the Eötvös Experiment Sólyom J.
The former Department of Theoretical Physics of the Goethe University Friedrich B. and Schmidt-Böcking H.
Highlights
50/1 • 09 summaries • p. 10-13
50/2 • 10 summaries • p. 06-10
50/3 • 13 summaries • p. 06-10
50/4 • 10 summaries • p. 11-14
50/5&6 • 13 summaries • p. 17-21
Features
A Nobel cause: public engagement and outreach Holmes C., John P. and Stirling C.
A Tribute to Lise Meitner (1878-1968) Kubbinga H.
Advanced instructions for imparting knowledge Plüss M.
Carbon Neutral Aviation Terwel R., Kerkhoven J. and Saris F.W.
EPL in an eventful environment van Tiggelen B.
How we found the long-sought-after Higgs boson Mariotti C.
Manipulating matter with light Pesce G., Rusciano G. and Sasso A.
Moiré with flat bands is different Heikkilä T.T. and Hyart T.
Physics, lasers and the Nobel Prize Marino A.
Research Infrastructures as a key optimizer of European research Rossi G.
Safe and dangerous routes to antispacetime Eltsov V.B., Nissinen J. and Volovik G.E.
Silicon Photo-Multipliers Klanner R.
South East European International Institute for Sustainable Technologies Damjanovic S.
The Biophysics Collaboration for research at FAIR and other new accelerator facilities Durante M., Prezado Y. and Patera V.
The Exoplanet revolution Miguel Y.
The Stern-Gerlach experiment re-examined by an experimenter Schmidt-Böcking H.
Transforming physics pedagogy in the UK Sands D.
Ultrafast lasers: from femtoseconds to attoseconds Cerullo G. and Nisoli M.
Inside EPS
EPS Council 2019 Lee D.
The Pursuit of Knowledge as European Endeavour Bourguignon J.-P.
Interview
Mike Kosterlitz Faísca P.F.N. and Travasso R.D.M.
Letter
A lesson from the history of scientific discovery of measuring the pressure of light Lukishova S., Masalov A. and Zadkov V.
Some comments on the historical paper by H. Schmidt-Böcking Vigué J.
Opinion
By combatting bias, we can achieve parity for women in science awards Rogan E.A.
Climate Change, a point of agreement! Alimonti G.
Physics and Society
A personal report: setting up a junior research group in Hamburg Gorelova D.
Towards an update of the European Particle Physics Strategy • 50/4 • p.10
Upgrade of CMS is in full swing • 50/4 • p.06
Prizes - Awards - Medals
Emmy Noether Prize Sirois Y.
EPS Statistical and Nonlinear Physics Prize 2019 Beck C.
Nobel prize for Jim Peebles for his theoretical discoveries in physical cosmology Bajtlik S.
Nobel prize for Mayor and Queloz as fathers of the field of extrasolar planets Snellen I.
Sarah Köster awarded the EPS Emmy Noether Distinction Käs J.A.
Report
Improving the gender balance at a Dutch university Storm C.
THANK YOU VICTOR!
Victor R. Velasco served as the dedicated Editor of the EPN magazine from 2014. EPS and the editors of EPN are grateful to Victor for more
than five years of hard work and dedication. He has passed the baton to Els de Wolf, an experimental particle physicist, retired from
the University of Amsterdam and the Nikhef Institute in Amsterdam.