January, no talk
Tuesday 19 February 2019, Ian Williams, Professor of Applied
Environmental Science, Soton Uni: Air quality monitoring
My PhD was in air pollution with the Dept of Transport's Road Research
Lab, now a combination of DEFRA and DECC. Air pollution was my first
scientific subject, although I was initially a chemist; I've been working in
air pollution for about 30 years. Air pollution has its peaks and troughs: it
can be hugely popular and then dip down to nothing. In the late 1980s there
was a big fuss about fitting catalytic converters to cars and removing lead
from petrol, so at the time there was a huge furore about air pollution.
I'd been motivated by the air pollution disasters of the 70s and 80s:
Seveso, Chernobyl and others. In the 1970s it was acid rain, in the
80s lead in petrol, in the 90s it was taking sulphur out of coal
and replacing coal-fired power stations with gas; then should we or should we
not have diesel, and the govt decided yes. I was a ghost author on the
first air quality review paper in 1993 on particulate matter.
I wrote a chapter on diesel particles, although I was not named.
I said they were dangerous, do not go with them. These days we're more
motivated by the sustainable development goals, which replaced the millennium
development goals (roughly 2000 to 2015) that the world signed up to.
They include clean water and sanitation, and peace, justice and strong
institutions, but quite a lot of them connect in some way to air
pollution, e.g. climate action, good health and well-being,
sustainable cities and communities. These targets are in place to 2030.
An air pollutant is defined as something that can potentially,
not definitely, cause harm or damage to the well-being of humans, animals or
ecosystems; that bit is not widely known.
The well-being part can involve malodour: a bad smell from a sewage works,
anaerobic digester, pig farm etc. counts as air pollution.
The 2 things that cause most problems in understanding air pollution
are concentrations and emissions.
Emission is the amount of a gas emitted directly from the source,
so it can be something out of an exhaust or a chimney stack.
It's measured in metric tonnes per year. For a big power station
it will be many cubic metres per second, so a lot. That is what
is thrown into our environment, but that does not affect us directly.
Typically what affects us is the concentration of a pollutant in the
air. I will focus on concentrations.
Sources of air pollution can be natural or anthropogenic.
Our main pollutants are gases and particulate matter, but we
forget noise, which is an air pollutant, and we forget heat and light
pollution, especially in cities, damaging to nocturnal animals.
Then radioactivity. As for sources: in port cities, for example, shipping
(and more than 50% of the world population lives in port
cities), also industry and power stations.
The natural sources are about the same amount as the anthropogenic.
Wildfires have been in the news a lot recently: USA, Oz and Greece.
Lightning is an amazing source. A bolt of lightning is a massive
stream of electricity passing through the air, carrying loads of
energy. Most of our lower atmosphere is dinitrogen gas and dioxygen
gas, 1% argon and less than 0.1% of anything else. The reason we have so
much di-N around is that it's a very stable gas. But lightning
can, despite the strong triple bond, break that bond
into very reactive nitrogen radicals. So reactive that they will form
something else instantaneously, often forming nitrogen oxides.
So lightning is a big source of such oxides and also ozone.
In the 1980s one morning I awoke to find my car covered in sand:
a sandstorm from the Sahara. Dust storms in Africa and Texas
etc. Then we forget residential pollution; there's an amazing
amount of pollutants in houses: mould, bacteria, pet dander,
cooking odours. In fact there is more air pollution indoors than
outdoors. The biggest source of air pollution is likely to be
burnt toast, barbecues and cooking with gas. We forget we
spend a massive amount of time indoors, typically only 7% of time outdoors.
Even in the middle of the day you tend to be in an
office or a school, not outdoors.
Anthropogenic sources are things like aircraft emissions,
traffic and power plants. Soton has an airport, port operations,
ferries and liners, and a lot of pleasure craft in the Solent.
Soton is unusual in that 21% of it is green space. That means all the
traffic is forced onto a small number of roads, with a big impact
on AQ. Industry in the way of a refinery, incinerator etc.
Air pollution disasters propelled me in this direction: Bhopal in
1984, with methyl isocyanate released, killed thousands of people;
Seveso in 1976 in Italy; Chernobyl in 1986. There is always an AP disaster
somewhere. It's only recently that parts of Wales have been
declared free of radioactivity from Chernobyl. Radioactive snow fell
in Lapland that year and herds of reindeer were put down as a result.
The biggest AP incident in the UK since WW2 was Buncefield in
Hertfordshire. That AP incident was visible from space; the
other one was the first Iraq war, when the departing Iraqi army
set fire to the oil wells. The great smog of 1952 was the first time in
history we had actual measurements of AP concentrations,
because London then had the most advanced AQ measurement network in
the world, though it measured just smoke and SO2.
A mg per m^3, a thousand times what we have in our
atmosphere these days. As the pollution concentrations went up
from 5 to 10 Dec 1952, so did the number of deaths.
Once the concentrations of AP dropped, with wind blowing them away,
the death incidence eventually dropped back to normal levels, but people had
been seriously damaged and took a long time to die. This started
alarm bells ringing in the UK about AP.
Once pollutants are emitted to the atmosphere, they are transported by wind
and by heat. In photochemical smog, chemical reactions take place: very
complicated chemistry to model. We tend to do basic modelling,
tending to assume the APs are not reacting in the atmosphere
between being emitted and being deposited. The more complex models will
attempt to do this, requiring a lot of computer power.
Then there's the influence of meteorology, affecting the plumes from chimneys
etc: a cone, a fan, lofting, fumigation(?) when it comes back to the land.
You need to take atmospheric conditions into account when modelling
AP, especially when looking at its influence on human health.
You can probably drink a few litres of polluted water
for a while and not die. In some areas, like parts of Africa, half the people
drink polluted water and don't die, but they're not very healthy.
You can go a few days without drinking water, but if I ask you to hold
your breath, it's about 90 seconds. In my youth I could probably
run a mile in 5 minutes, i.e. run at about 5 m/s; if you have a wind speed
of 10 m/s, you cannot outrun the pollution.
Air pollution can be worse than water pollution in terms of
chronic effects. It can have all sorts of long-lasting effects: lung damage,
foetal death, child death, respiratory illnesses, cardiovascular disease,
skin problems. A Ukrainian president a few years ago went from being a
big strapping man, in 6 months, to a terrible condition with a scarred face:
he was poisoned, with something not dissimilar to the recent
Salisbury incident. That sort of thing can happen to people
routinely exposed to elevated concentrations of air pollution.
The Royal College of Physicians produced a fantastic report in 2016
called Every Breath We Take. An important mover behind that was Prof
Stephen Holgate. It showed the effects of air pollution from
womb to tomb. You can download it. Looking at the effect of AP
on us in the UK now, the Committee on the Effects of AP said
particulate pollution is causing 29,000 excess deaths per year:
deaths that would not have happened if the particulate matter was not
present. An average loss of life of 6 months for every person.
That is just for fine particulates, not accounting for all the other
pollutants acting synergistically.
Q: How do you define fine?
Particulate matter is defined by what is called aerodynamic diameter.
Particles are all sorts of sizes and shapes; for thought purposes, imagine a
particle as a sphere. At about 10 microns (roughly a sixth of a human hair's
width), your body's defence mechanisms can cope: beards are a great trap
for such matter, nose hair works well too, and what gets past goes to
the back of your mouth, which has loads of mucus that traps it.
When it gets to the upper parts of your lungs, there are cilia,
beating, that clear the particles out. But go down to PM2.5, only 7.5
microns smaller, and mostly they will get through those defences. Get down to
PM1 and almost all of them will get through. Flour for baking is very fine
and exposure to it is hazardous. A lot of people don't realise
such things are dangerous, because of the emphasis on
cigarette smoke, but occupational particles like welder's smoke can
be very fine, easily entering your body. In Soton the estimate is
about 1800 deaths per year, 110 of which have been allocated
to PM. That is probably too low a figure, more likely double that.
In terms of concentrations in the atmosphere, the main pollutants:
SO2 used to be ?, because it came from coal and there used to be a lot
in diesel, but regulations removed it. The main source now is the bunker fuel
of shipping, but bunker fuel is used mainly out at sea; coming into
port, ships switch to another, much more refined fuel.
In Soton there's a big problem with nitrogen oxides, more on that later.
Metals, organic substances.
When we took lead out of petrol (it was there to stop the
pinking noise in car engines), it was replaced with
something based on benzene, and that is possibly carcinogenic.
As a chemist I used to wash my hands with benzene, to remove organic
substances from my hands.
CO2 is not on the list because it's part of the natural cycle of
photosynthesis, so it's not a regulated pollutant.
In the UK and a lot of other countries, we measure and forecast AQ.
So, on to measuring and forecasting. This info is freely
available on the internet: you can see the output of the AP monitoring
stations in Soton, for the main gases, instantaneously. This has been in
place since the early 1990s and it's very trustworthy. Not all websites
can be relied on, China for instance.
We collect data that is impartial, reliable and authoritative.
If the govt is being held accountable for your air quality,
how can they be the ones trusted to do the monitoring? The poacher and
gamekeeper situation. So you create an arm's-length company,
employing a consultant to do it for you. This is why the govt is not
using the public sector to do the monitoring. They use a few companies
that run the monitoring network, take the data, verify it and
check it. That data helps to check the environment and whether
we're exceeding legal AP guidance or not. There are kerb-side, road-side,
urban centre, background, industrial, suburban, rural and remote sites.
They're all there to make sure we meet our AQ legislation.
Rural sites are included because there are crops there, and if there is a
high level of ozone then crop damage results.
There are automatic networks: a shoebox-sized bit of kit costing a few
thousand pounds each takes a reading every second and averages out to the
hour, and that is the web data you get. When it's instant it's unverified;
there is a lag while it's verified and then reported in annual reports.
Again these are freely available on the web.
There are non-automatic networks, measuring less frequently, or used
where the tech is not there for instantaneous measuring. So you take an
air sample, do something to it in a lab and later report the data.
It may be done daily, weekly or monthly, using filter papers and such.
Q: Why would you use non-automatic?
Because for some pollutants there just is no real-time monitoring tech.
Some volatile organics are in the air at such low concentrations
that it's impossible to detect them instantly. So it's a matter of trapping
them over time, to bulk them up, and then taking them to a lab. Then the
mass of pollutant divided by the volume of air pumped through gives the
concentration over maybe 1 week or even a year.
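As a rough sketch of that arithmetic (my illustration, not the talk's actual method; the sampler flow rate and trapped mass below are hypothetical numbers):

```python
# Non-automatic sampling: a pump draws air through a filter or sorbent tube
# for a long period; a lab then measures the trapped pollutant mass.

def concentration_ug_per_m3(trapped_mass_ug: float,
                            flow_l_per_min: float,
                            sampling_minutes: float) -> float:
    """Concentration = trapped pollutant mass / volume of air pumped through."""
    volume_m3 = flow_l_per_min * sampling_minutes / 1000.0  # 1 m^3 = 1000 L
    return trapped_mass_ug / volume_m3

# Hypothetical example: 2 L/min for one week traps 403.2 ug of a pollutant.
minutes_per_week = 7 * 24 * 60  # 10080 minutes of pumping
print(concentration_ug_per_m3(403.2, 2, minutes_per_week))  # -> 20.0 ug/m^3
```

The same division works whether the averaging period is a day, a week or a year; only the pumped volume changes.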
The EA manages it on behalf of DEFRA, subcontracted out,
monitoring all sorts of pollutants:
polycyclic aromatic hydrocarbons, toxic micro-organic pollutants,
carbon, a range of heavy metals, particulate matter, stratospheric
ozone and UV light, a whole range of acid gases, aerosols, NO, ammonia,
mercury, tons of it. All available via the web, a good use of tax money.
An auto monitor typically has a stainless steel pipe, stainless because it's
unreactive: you don't want the gas coming in to stick to the inside of the
pipe. A pump draws the air in, conducting it to the analyser in a building,
for a gas. PM needs to be trapped on a filter and then gravimetric
analysis done, which is time consuming. In the early 1990s the
TEOM, tapered element oscillating microbalance, was developed. It uses
clever physics to measure PM instantaneously. We are not
interested in all PM, only the stuff that will damage us, so
the fine PM. Imagine an oil tanker turning around,
versus a jet-ski. A tanker would have to start turning perhaps a mile away
to tap the air as it passes the turning point; there is so much momentum it
requires a massive force to change direction. But a jet-ski
can turn on its axis, and air associated with that I could easily suck into
a small pipe. So there's a dynamic difference between small and big PM.
You make the big particles go over a downward jet: a big one has too
much momentum to go into the sample pipe, but a small particle,
like the jet-ski, will enter the pipe easily. Easily done, but clever sorting.
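The tanker/jet-ski analogy can be put in rough numbers (my illustration, not from the talk). In the Stokes regime, a particle's "stopping distance" (how far it coasts when the airflow turns) scales with diameter squared, which is why a size-selective inlet can separate coarse from fine particles:

```python
# Stokes-regime stopping distance: S = tau * v, where the relaxation time is
# tau = rho_p * d^2 / (18 * mu). Slip correction is neglected for simplicity.

def stopping_distance_m(diameter_m: float, speed_m_s: float,
                        density_kg_m3: float = 1000.0,   # assumed unit-density particle
                        mu: float = 1.8e-5) -> float:     # air viscosity, Pa.s
    tau = density_kg_m3 * diameter_m**2 / (18 * mu)  # relaxation time, seconds
    return tau * speed_m_s

for d_um in (10, 2.5, 1):
    s = stopping_distance_m(d_um * 1e-6, 1.0)  # particle moving at 1 m/s
    print(f"{d_um:4} um particle coasts ~{s * 1e3:.4f} mm")
```

A 10 micron particle coasts a hundred times further than a 1 micron one, so it overshoots the bend into the sampling pipe while the fine particle follows the flow.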
The gaseous techniques have been around since the 1970s, and reliable PM
with the TEOM since about 1992. These techniques cannot be invented by just
anyone; they have to be verified by a trusted verification body. In the
UK we have the NPL and the National Chemical Lab. There is an EU standards
programme and all the UK measurements adhere to it.
After Brexit we may stay with that, or default to the USA ones, which are a
wider set. All those standards are described on the web. There is a
certification system. I couldn't invent an analyser, place it in Soton
and take readings, as I'm not validated by anyone and I don't
have certification to do it.
So in a monitor for CO, the air comes into a cell which has infrared
light bounced around it, with a display and export to the internet.
For Soton monitoring we tend to get the allied weather data from the
airport and port met stations.
There are personal monitors that will clip onto your belt;
I currently have a student doing monitoring around a school using them.
All the time we have to bear in mind meteorology, and that's where the
UK Met Office comes in. You can pay the Met Office for data or you can
go to the Wunderground site. You get the local temperature, UV,
pollen and also pollution alerts with it. There is pollen of some type or
other around in the UK for 11 months of the year.
Emissions to air are calculated separately, again verified
by third parties. Often the data is shown as a time-series plot.
So datasets like greenhouse gases by source (cars/industry/power station),
by end-user category, by large point source, by deposition, by fuel
consumption etc. This data goes back a long time as well:
we can go back to about 1970 in the UK with a lot of confidence.
If I had 1970 on this graph the emissions of NO2 would be off the
scale, so I made it from 1990, which is the baseline we tend to use.
The emissions have gone down so much. The UK govt has done a
really good job at reducing emissions, which is why emissions can be a red
herring: what matters to us is concentrations, what we individually are
exposed to. The industrial sources have come down to very little;
the transport sources have stayed high, and they are focussed in cities.
Hence a NO2 problem in our cities.
Sometimes you can't monitor AP. Say someone comes up with a
plan to build a next-generation power station in St Denys.
He builds it using me as the designer, I mess it up, and you end
up with a power station with loads of terrible emissions. Once you've
built it, it's built: you either stop using it, wasting a load of
money, or it operates. We cannot allow you to build something
that will cause terrible pollution. We can't let him build it, run it
and see what happens. It has to be modelled first, to see if it might
cause problem pollution.
AQ in Soton. To manage AQ in our towns and cities the govt has
declared Air Quality Management Areas via the Environment Act 1995, which
sets targets for AQ management that should not be exceeded.
If you are exceeding targets routinely you are declared an AQMA, and
the local authority then has the responsibility to do something about it.
Soton has 10 AQMAs, all to do with NO2. We had 6 in 2005
and now it's 10. The current city plan has 47 AQ action plans,
each one a waste of time: they're all too small to have an
impact. We need a step-change in technology or we all need to
walk and cycle more. Simple as that, and provable.
The worst areas are near the airport, The Avenue, Hill Lane, University Rd,
Burgess Rd, the Port, and Millbrook out to the M27, the lorry route.
Q: I studied this for a day last year and Millbrook has the highest AP
in all Soton?
That is mainly lorries using Dock Gate 20.
In Totton there's a railway barrier close to ASDA; when the barriers come
down, all the cars sit there with chugging engines.
I had students study this, and one decided to make a sign requesting
drivers to turn off their engines while waiting. That solved the AP
problem, simply that. A modern car just shuts off and restarts.
A lot of this is just common sense. We don't really know what
contribution the port makes to emissions. The next assignment
for my students is to work out 1) how much the port's emissions impact the
city and 2) how much the school run affects the city.
I have a theory the school run is the major contributor.
The traffic in Soton is very predictable: the western approaches down
Shirley High Rd, down the Avenue, Bevois Valley, Northam and Itchen bridges,
and it repeats the other way afterwards. It's half-term at the moment and AP
is less, as there's less traffic on the roads.
I've had students plotting this for years: every Monday the same
rush-hour peak, every afternoon a broader/longer one. Nothing happens
Sunday morning, but it peaks in the evening and afternoons from sports.
So students plot the AP concentrations when the schools are on
and in school holidays, and there's a massive difference. They did
this up to 2013, not just Soton but Bournemouth, Brighton,
Bristol, Canterbury, Plymouth, Reading and others, and it's the same trend
everywhere. That tells us something about what we should and should
not be doing.
You can also plot hospital admissions, or look at the impact of
wind speed and direction. You can see the impact of pollutants off the port
and Fawley blowing across the city towards the airport.
Q: What about airport emissions?
The airport will be an increasing source of pollution, but not mainly
from the aeroplanes: mainly the traffic to and from the airport.
Soton Airport has gone up from just under a million
passengers to 2 million, and will be 3 million soon, all
in 1 decade. You can forget the rail journey component;
the journeys by road are the significant factor there.
The impact on Wide Lane from the airport is very significant.
The number of flights is not as important as the road
vehicle movements associated with it, passengers and workers.
Clean Air Southampton
Liz Baton has done a great job of raising awareness of
poor Soton AQ. Even the Daily Echo has got into it with its
Breathe Easy campaign. At the uni my main area of research these days
is waste management: I do a lot on re-use, the circular economy
and recycling. I also have a carbon-footprinting team. In 2010
a colleague, Simon Kemp, discovered problems with the methodologies
around C-footprinting. The city council helped fund a PhD
student and, directly from that, we published some papers which ended
up influencing the new British Standard BSI PAS 2070, which is for
C-footprinting cities. We C-footprinted our university, published it
and got an award for it last year. For many years I was the only person
teaching AP at Soton uni, and probably one of only 6 in the whole
country, because it was not popular. I have a book out on AP.
We are doing more on AP at the uni these days.
Q: For Soton, where does most of the AP come from, cruise ships
or the traffic?
At the moment we don't know. I cannot even speculate, because we
don't have any evidence concerning cruise ships or much on the port in
general. There is data on the traffic, and I would bet that most AP is
NO2 from cars. I'm trying to model the AP of a ship, but that
is very complicated and expensive. In the next month I'm
working with the Finnish Meteorological Institute on a project
relating to this. A Russian student with us has some data on this.
Q: Cruise ships don't use shore power; they run their diesel
generators continuously while in port.
That's called cold ironing. They have 2 types of engine: a locomotion engine
for moving the ship and an auxiliary engine. They turn off the main
engine while in port, using the cleaner-fuel auxiliary engine then.
Very few ports in the world run cold ironing, where you throw
a power line over the ship's side; it's mainly military ports that have
it, especially in the USA. I've a student looking at this at the moment,
a PhD student part-sponsored by the company Ramboll(?), and I had a
US MSc student who had worked with the US Navy as a submariner,
and he helped me get some data from the USA; I'm hoping to
publish some of this soon.
But basically there is no way of doing this easily. For an idea of scale:
if in Soton we provided the infrastructure to supply electricity to the
ships, everything else in the city would have to be switched off.
It would require some sort of control technology. With the number of ships
and planes, most emissions occur over sea rather than on land.
All the international data on ship emissions only counts vessels over 100
gross tons; now think of all the marinas. A rough rule of thumb is they are
underestimating by 50% by not counting the smaller vessels.
They don't include inland waterways either. We tried
estimating the emissions from inland waterways but there is no data at all,
no log of the vessels. We found the best source was the canal-boat
equivalent of train spotters, because they have loads of data: they've
identified all the small canal boats, including engine size and their
activity.
Q: The basic function of a chimney is to send pollution as far as possible
into the distance. If that is so, then while not as high as the Fawley Power
Station chimney, the funnels of megaships/cruise ships in the docks are very
high. Where does that pollution start falling out?
Mostly over the sea. For a ship's funnel, the exhaust comes out hot and
rapidly cools when the ship is moving. It's the scale of the
shipping industry that is of concern. When you build a ship it has to
be operational 25 or 30 years, so there are a lot of older ships
around. Even if we started putting new tech onto older ships
now, there will still be some around in 2050.
Think of the power stations of the 1970s/80s: they fitted flue-gas
desulphurisation on the tops and removed SO2, which
worked to some extent. They are thinking of doing the same thing on
ships. On modern ships it's difficult to retrofit. We're looking into
emitting into water instead of the air. It might neutralise
the acidity, but we'd have to work out what the long-term effect of that
would be. Battersea power station, when operating in London, pumped
a lot of its gases into the Thames, which is naturally alkaline, so some
cancelling. But as it was hot there was thermal pollution, and at that time
the Thames was dead anyway, as there was no O2 in it then. The Thames has
since been revitalised, so now there are even seahorses in it, and in 2011
it won a Best River in the World title. It took 50 years to do that.
So these are long-term projects: gather data, then proceed.
Q: Do you use satellites for data?
Some do stream relevant data, and with some clever computing with
GIS systems and remote sensing you can estimate emissions, but
it requires supercomputers. We only get a small
amount of time on the uni ones, and AP stuff is probably lower priority
than the other users there. As computing power improves,
and with better satellites and sensors, then definitely.
Q: Can fine particles as such, in themselves, be toxic? You mentioned
flour, which is not toxic as such.
It coats the inside of your lungs and irritates the lining; it's just the
size of the particles, a mechanical or physical effect. Any irritation
in the lungs, like irritation in the nose, generates snot,
so you drown from the inside out. In the 1970s fish were killed in
Scandinavian lakes as a result of SO2 pollution. The SO2
was deposited on the lakes, acidifying the water; it got to pH 3.5,
and at that point the underlying sediments released Al into the
water, as Al3+, which irritates the gills of fish, which snot up,
and they died of asphyxiation. They did not die as a result of
the acid rain but of this secondary process. That's what flour and other
fine particles can do.
Q: Will 5G and 6G be an air pollutant?
I'm not up on the physics to answer that. I'm old enough to
remember when microwaves came in, in the early 1980s, and the
concerns then. Similar with the emergence of mobile phones
20 years ago: people said we'd bake our heads, and that's not happened.
The people at the uni's Zepler Institute tell me it's not a concern.
Q: Do you have a sample of air that is the standard, what you are
trying to achieve?
Think of pre-industrial air.
Q: Which is impossible?
No it's not. We can go to permafrost areas and to ice that has
never melted. You know the rate that ice forms year on year, so,
drilling down, you can take a sample of ice, go to the year which
would be around 1750 and look at the gas molecules trapped in the ice.
That would be about 280 ppm CO2; at the moment we're over 400.
Our biggest problem from a global perspective is CO2.
Then we have to think locally, and PM: small increases in concentrated
areas give adverse health effects. I'm from Wales and old
enough to remember coal; we had a coal-fired Rayburn cooker,
no gas or electric cooker, when I was young.
In the winter there was smoke everywhere. There was much more
pollution, but the particles were bigger, so they didn't get past our
body's defences. They were bigger than 10 microns. The fine PM
from diesel, cigarette smoke etc. will get past.
Q: Is methane from melting permafrost and agriculture significant compared
to CO2?
Very significant. The way we look at climate change is to use CO2
as the benchmark molecule: 1 molecule of CO2 does 1 unit of damage.
Every molecule of methane does 25 times as much damage.
We grossly underestimated that for a long time. The IPCC's
first report, about 15 years ago, regarded methane as 8 times as damaging.
Each report since has upped the methane damage factor. We
have methane from animals, landfill sites and other sources. There's a bunch
of other gases called the Kyoto basket, now 7, used to be 6:
the gases that cause dangerous climate change. Methane and CO2
are 2 of the least damaging per molecule. There is one, possibly HF3,
which is 3,700 times more damaging.
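The benchmarking described above can be sketched as a CO2-equivalence calculation (my illustration; the factor of 25 for methane matches the talk, while the other factors and the example inventory are assumptions, not authoritative values):

```python
# 100-year global warming potentials relative to CO2 = 1. CH4 = 25 as in the
# talk; the N2O and SF6 values are illustrative, not a definitive table.
GWP_100 = {"CO2": 1, "CH4": 25, "N2O": 298, "SF6": 22800}

def co2_equivalent(emissions_tonnes: dict) -> float:
    """Total warming effect of a gas inventory, in tonnes of CO2-equivalent."""
    return sum(GWP_100[gas] * tonnes for gas, tonnes in emissions_tonnes.items())

# Hypothetical inventory: 1000 t of CO2 plus 40 t of CH4 from a landfill site.
print(co2_equivalent({"CO2": 1000, "CH4": 40}))  # -> 2000
```

So 40 tonnes of methane doubles the warming effect of this hypothetical inventory, which is why the methane factor matters so much to the totals.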
Yes, volcanoes can generate a lot of methane. Combine that with the concern
over fracking, as methane or something similar comes out with that
operation. If you're forcing water into the earth to release gas, you're
unlikely to capture it all; some will vent to the atmosphere. So some of the
concern about fracking is not just local earthquakes but also
the uncontrolled release of hydrocarbons to the air.
Q: Is noise pollution being monitored in the same way as AP?
No, noise is really treated as an occupational issue. Take the Tube in
London: at a heavy metal concert, close to the
speakers, you would be exposed to about 120 dB, and that is a typical
value in many of the London Underground passages. In some of the
deeper tunnels and some of the older trains, for perhaps 20
minutes at a time, you are being exposed to 110 or 120 dB.
Noise has a big impact on mood as well as on hearing.
Q: Bats produce those sorts of levels, but at too high a frequency for us
to hear?
I have a colleague, Patrick Osborne(?), who thinks the best way to measure
AP is to measure noise. Make the assumption that most AP comes from
cars, and cars are noisy; even cars without an engine create noise from the
tyres. Electric vehicles will not be a panacea, because they are 20% heavier
and you get an increase in non-exhaust particulate matter
from brake wear, tyre wear and road wear. The potholes will
get worse as well. In theory you could use noise measurement as a
surrogate for certain types of pollutants. You could model that
and also use satellite sensor data there.
Q: Going back to the school-run business: there was virtually
no school run in the 70s, but pollution levels were higher?
Then we had a lot of coal use around, and the amount of sulphur in
our petrol and our diesel was much higher. Now all fuels are
heavily refined to remove most of the S. There are now
catalytic converters on cars, part of control technology
since the late 1980s.
Q: And how far do you have to drive before the catalyst is hot enough?
About 3 miles, but most journeys are less than that: all
those journeys to the nearest off-licence or for a loaf of bread and back.
It has to get to at least 500 to 600 deg C. It will vary,
summer or winter.
Q: That is what I smell near here, at the major road junction:
the volatile whatsits from the exhausts before the catalysts have heated up?
Although I never smell kerosene from the planes to/from the airport, even
though I'm just off the glide/flight path.
Petrol is more volatile than diesel, and both are more volatile than
kerosene. Ticking-over diesel engines are worse polluters.
Q: I retrofitted a mechanical heat-recovery system to my house.
It puts air into each room, a complete change of air once an hour.
It has what they call pollen filters, but I don't know
what size it traps. Around each ventilator there is dark
powder, and if you wipe it off with a tissue it's very fine. Have I
made my house worse by pulling this air through from outside?
Pollen is quite large in diameter. You've probably not made the situation
worse. The filters gradually get blocked by the smaller particles
building up: less particle throughput, but more power
required to suck the air through.
Q: It takes about 3 months to build up this ring of particles.
Those are big particles?
The filter gradually, partially clogs, becoming more effective
but using more power for the same air throughput.
Q: Is there much problem inside buildings with the breakdown of
plastics producing formaldehyde gas, or is it mainly
fungal spores and household skin and clothing dust, that sort of stuff?
HVAC systems have massively improved recently. I researched
formaldehyde in 1996. It's not so much a problem in the UK,
more in S America, e.g. Brazil, as their cars are powered by ethanol,
and higher temperatures will generate aldehydes. You do
get aldehydes and ketones from glues here.
Q: Not the likes of nylon carpets breaking down?
I don't know anyone who has done any measurements on that.
Q: I'd heard of cavity wall insulation breaking down to
form formaldehyde?
Mainly from urea-formaldehyde (UF) glues.
Q: Done anything on the new university buses, which deteriorate
after a year?
I've done some on the old ones. The NO2 concentrations in the
bus were higher than the concentrations outside the bus.
Tuesday 19 March 2019
Katla Satyanarayana, Soton Uni: From Pigeon Post to Hologram Phone
Calls: The Future of Wireless Technology.
Wireless communication has evolved from pigeon-post to paging, voice
calls, text messages, video calls – and now Internet everywhere. It
has become the ubiquitous means of socializing, doing business and of
entertainment. There are around 5 billion mobile phones in use through
which we transmit around 60 terabytes of data every month. And yet,
this is just the beginning – the future is even more exciting as we
are moving from the internet-of-things to holographic video calls,
which can conjure up the image of a person right in the room when we
talk to them. However, one of the key issues of this technology is
whether we have the capacity to accommodate all these users at a high
quality-of-service. An obvious solution to circumvent this problem is
to increase the bandwidth used. But we only have a limited bandwidth.
In this talk, I shall shed light on how to address this problem.
I'm working on the next generation of mobile phones.
A hologram image, where I can participate in a conference in
Hyderabad and simultaneously give a talk here in
Soton: this is the future of next-generation mobile phones.
We have come from pigeons being used as message carriers; these days
a day doesn't pass without the mobile phone.
A skeleton block diagram shows what is going on in a MP.
If you open one up you will see each of these blocks.
Your voice is the message source, the input, then encoder, then modulator.
In the pigeon post, the pigeon is the carrier: the message is enclosed in an
envelope, attached to a pigeon, and it takes it to a friend or family.
Your voice message is modulated onto a radio signal and out
on an antenna. It is transmitted over a channel, which could be air,
copper or fibre, the medium for transmission.
If by air it's wireless comms; via water it's acoustic underwater comms;
by copper it's a landline; by fibre it's optical comms.
The medium determines the name.
The transmitted message will be received by another antenna
of the second party. There it's decoded and the message appears
in the earpiece of that MP.
Early comms were smoke signals, where smoke carried the
message, to gather at a spot or whatever. Then pigeons, then
telegraphic comms. Then landlines, and then first-generation MPs.
Then text transfer on 2G, 3G and 4G MPs.
1G stands for first generation; currently we have 4G, and next year 5G.
1G was an analogue signal, no coding involved, just modulated onto a carrier
and transmitted via antenna. Only voice was transmitted, analogue like the
way we speak.
The message is modulated onto the carrier, like a pigeon carrying its
envelope. If the message is carried in the amplitude of the carrier wave,
this is amplitude modulation (AM): the frequency stays the same and the
amplitude varies with the signal. It is susceptible to blockages,
including by the human body.
There is also FM, frequency modulation, where the amplitude stays the
same and the frequency changes with the signal.
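The two analogue schemes can be sketched numerically. This is a minimal illustration of my own (not code from the talk), with an arbitrary example message and modulation constants:

```python
import math

def am_sample(t, fc, msg):
    # Amplitude modulation: the carrier's amplitude follows the message,
    # while its frequency fc stays fixed
    return (1.0 + msg(t)) * math.cos(2 * math.pi * fc * t)

def fm_sample(t, fc, msg, k=5.0, dt=1e-4):
    # Frequency modulation: the instantaneous frequency follows the message;
    # phase is the integral of frequency, approximated numerically here
    integral = sum(msg(i * dt) for i in range(int(t / dt))) * dt
    return math.cos(2 * math.pi * (fc * t + k * integral))

# example message: a 5 Hz tone riding on a 100 Hz carrier
tone = lambda t: 0.5 * math.cos(2 * math.pi * 5 * t)
am_wave = [am_sample(n / 1000, 100.0, tone) for n in range(1000)]
fm_wave = [fm_sample(n / 1000, 100.0, tone) for n in range(1000)]
```

In the AM list the envelope of the peaks traces the message; in the FM list the zero-crossings bunch and spread as the message rises and falls.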
2G, 3G and 4G are digital: the signal is digitised, or sampled. Instead
of the complete signal, just samples of the message are taken, a discrete
set of values rather than continuous. These are represented as a sequence
of bits, 0 or 1. You can use the spectrum more efficiently, it's more
robust against blockages, and the quality of service is high. The secret
is in two blocks, absent in 1G but present in all later generations: the
source encoder and channel encoder encode your information so that, when
decoded at the other end, it can still be recovered through a blockage.
The data rate is higher in 2G than 1G, and
3G higher than 2G. The change is not so much the available
bandwidth but TDM, time-division multiplexing, or FDM, frequency-division
multiplexing. Say I want to talk to 2 groups: I can't do it simultaneously,
but I can apportion time slots to each group. I can talk or I
can hear, not both at the same time. With FDM there is a range
of spectrum, allotting frequencies to each group. With either allocation
I'm losing the chance to transfer data; this was 2G.
For 3G there are codes: I send a unique code to group 1 and another
non-overlapping code to group 2.
This is spread spectrum, as invented by the Hollywood actress
Hedy Lamarr in the 1940s. There are 2 flavours: direct-sequence
spread spectrum and frequency-hopping code-division multiplexing.
There is no overlap and I can use the time allocations
to 1 and 2 simultaneously. Hence higher data rates.
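The non-overlapping-codes idea can be sketched with toy Walsh codes. This is my own minimal illustration of direct-sequence spreading, not anything presented in the talk:

```python
# Walsh codes of length 4 (rows of a Hadamard matrix): mutually orthogonal
CODE1 = [1, 1, 1, 1]
CODE2 = [1, -1, 1, -1]

def spread(bits, code):
    # Each data bit (+1/-1) is replaced by the whole chip sequence
    return [b * c for b in bits for c in code]

def despread(signal, code):
    # Correlate the combined channel signal with one user's code;
    # the other user's contribution cancels because the codes are orthogonal
    n = len(code)
    out = []
    for i in range(0, len(signal), n):
        corr = sum(signal[i + j] * code[j] for j in range(n))
        out.append(1 if corr > 0 else -1)
    return out

# Both users transmit at the same time on the same frequency
tx = [a + b for a, b in zip(spread([1, -1], CODE1), spread([-1, -1], CODE2))]
```

Despreading `tx` with `CODE1` recovers user 1's bits and with `CODE2` user 2's, even though the transmissions were superposed: that is why the time allocations can be used simultaneously.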
In 4G there is orthogonal frequency-division multiplexing (OFDM). Divide
the entire bandwidth into chunks of frequencies, called sub-carriers,
arranged orthogonally. There are also other techniques that make 4G
faster than the other Gs.
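The sense in which sub-carriers are "orthogonal" can be checked numerically. A small sketch of my own (64 samples per symbol is an arbitrary choice):

```python
import math

N = 64  # samples per OFDM symbol

def subcarrier(k):
    # k-th complex sub-carrier, sampled over one symbol period
    return [complex(math.cos(2 * math.pi * k * n / N),
                    math.sin(2 * math.pi * k * n / N)) for n in range(N)]

def correlate(a, b):
    # Inner product over the symbol period: non-zero only when a and b are
    # the same sub-carrier, so the receiver can separate them cleanly
    return sum(x * y.conjugate() for x, y in zip(a, b))
```

`correlate(subcarrier(3), subcarrier(5))` comes out as (numerically) zero, while a sub-carrier correlated with itself gives the full symbol energy, which is what lets the chunks of frequency sit tightly packed without interfering.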
We are now moving to 5G. An exponential growth in data use is expected
between 2017 and 2023, requiring 5G. We are trying to enhance MP bandwidth.
Heavy data transfer like films presently requires buffering, and at
congested places like Waterloo Station sometimes you can't even get a dial tone.
The demand is too high. We also want low latency, to connect
machines together and to have an internet of things.
Say a medical device monitoring heart rate, pressure, sugar levels etc.
can report via your MP to a GP, without any intervention by
yourself. Every device connected to every other device by the IoT.
This requires more bandwidth than is available to 4G. This is what we aim
for with 5G, to address some of these issues. Remotely operated
surgery is a future scenario: surgeon on the ground but patient
in a surgery in space somewhere, a robot doing the surgery.
For this you need ultra-reliable and low-latency comms, with
someone's life at stake. Not just in space, but surgery in
remote areas of the Earth where there is no doctor.
With 5G you could watch sports from the player's point of
view. You could stream high-definition video to a high-speed
train, currently only possible with buffering.
Consider a scenario of a disaster with an air ambulance helicopter
dispatched to the scene: energy consumption is of
critical importance. Another app would be remotely operated
agricultural drones disposing of pests or weeds; again
energy consumption is of utmost importance. Have we opened Pandora's
box, full of problems, or is it exciting news? The potential problem
with wanting enhanced mobile broadband is: is it actually possible?
Imagine 2 taps filling a bucket that has a hole in the bottom.
You can only access the water via that hole. If you add 2 more taps it will
just overflow; you cannot access the extra water, it's spilled over the side.
So increase the size of the hole, but how much further can you increase it?
The hole in wireless comms is called bandwidth (BW), the water is the
information, the taps are MPs, laptops, cookers, washing machines, etc.
Important comms, like medical, should not get lost in the
traffic; they are more important than pics of cats, say. There is only
limited BW available. BW is basically the set of frequencies usable.
But we don't have unlimited frequencies. There are 5 magic
bullets around this.
Massive MIMO, millimetre waves, beamforming, full duplex and small cells.
MIMO is multiple-input/multiple-output. A MP usually has 2 antennas,
a laptop maybe 4, so multiple both input and output. Massive
is just scaling that up to more antennas, perhaps several hundred.
mm wavelengths are my terrain, my PhD area, including
beamforming. So I use a massive number of antennas at mm
wavelength, to enable beamforming. Small cells refers to the range
the signals can cover: a base station communicates with your
MP contained within its cell reach or area. It's better to have
small cells than large. With small cells you can have multiple
base stations, continuously talking to a number of base stations.
So reduce cell size from currently about 1 km across to 500 or 250 m,
and the service quality is enhanced, because I'm nearer to dedicating
each base station to 1 user. The individual base stations then
talk to each other. Full duplex is talking to someone and
receiving from someone; only politicians can do both at the
same time. So with MPs using 2 frequencies f1 and f2, I use
f1 for transmitting and f2 for receiving. They are orthogonal,
so they don't interfere with each other: I can transmit without
causing interference to the received signal, and I can listen
and decode without any error in the message. Full duplex is
transmitting and receiving on the same frequency at the same time.
I only need f1 or f2, not both. So I'm using the available
frequencies more efficiently, nearly doubling the spectrum,
doubling the data rate.
mm waves: 4G MPs work at about 2.4 GHz. Wavelength is the speed
of light divided by f. Typically the spacing of elements in an
antenna is half the wavelength; 2.4 GHz means that spacing is of the
order of 100 mm. mm waves start from about 30 GHz and above,
where the wavelength is of the order of mm. The antenna size reduces
and you have more BW. The BW below 4G frequencies is about 3 GHz minus
300 MHz, about 2.7 GHz. Go up to 300 GHz and there is 270 GHz, 100 times the BW.
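The arithmetic behind those numbers is straightforward. A sketch, using the half-wavelength spacing rule of thumb mentioned above:

```python
C = 3e8  # speed of light in m/s

def wavelength_m(freq_hz):
    # wavelength = speed of light / frequency
    return C / freq_hz

def element_spacing_mm(freq_hz):
    # antenna elements are typically spaced half a wavelength apart
    return wavelength_m(freq_hz) / 2 * 1000

# 2.4 GHz (4G-era): wavelength 125 mm, element spacing ~62 mm
# 30 GHz (mm-wave): wavelength 10 mm, element spacing 5 mm
```

The factor-of-ten-plus drop in element spacing between 2.4 GHz and 30 GHz is exactly why a few hundred antennas can suddenly fit in a handset.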
Let's say each frequency step is 1 GHz: then I have 270 frequencies
available, whereas before only 2 or 3. So I can transmit loads of data
on these. The spacing between the antennas is so small that I can now
accommodate a few hundred antennas in a MP or laptop. More BW, but more
susceptible to blockages and attenuation, because of the likes of trees
and rain. With low levels of received signal there is less success at
decoding the signal, not enough energy. If I lower my voice, you can't hear me,
the energy being received by you is less. I raise my voice level so
you can hear me. But if there is a wall and you were on the other
side, you would not hear me. Windows, doors etc. cause attenuation, and
some surfaces reflect. A signal fades through a series of trees,
getting progressively weaker. 5G is more susceptible to blockages
compared to 4G frequencies. There are even user blockages, if I hold
a phone close to me and am between handset and base station:
for mm waves a human in the way can block, reflect and
refract those sorts of signals. I could raise the power level,
but we have limited power, and increase it too much and phones
become open microwave ovens and would cause cancer.
Increase microwave energy and it will ionise the cells in your
body, which is carcinogenic. The Telecoms union (ITU) has put a limit
on the amount of power we can use.
A way to increase power without using any extra source of power
is to do directional transmission. In 4G there is omnidirectional
transmission: the signal energy is all-pervading,
spreading in all directions. With a suitable monitor of electromagnetic
radiation you can see my signal everywhere in range.
So why waste energy on the surrounding area? Just transmit
directly between A and B. I would need to steer the transmit
signal into the desired direction. This is called beam-forming.
There are Yagi antennas, as you see on rooftops for TV;
they are directional. Each element is an antenna; multiple such
strips steer the beam in that direction. Twist the Yagi around
and you may not receive a signal. Parabolic antennas are another
directional kind. For 5G we will have a few hundred antennas closely
spaced, with spacing of the order of mm. Beam-forming is only possible
like this, with a large number of antennas. For a very narrow beam you
need loads of antennas, giving a laser-like signal. These are arranged
in a plane, in a 2D fashion, and have control over elevation
and azimuth, i.e. 3D control. 4 beams coming from the same array
can be directed to 4 different users in different directions, but all
on the same frequency and at the same time. There is no interference
between the beams as they are so highly directional. It's not duplex,
it's only transmitting. There is a bit of cross-over leakage, but
multiple users can use the same channel, the same frequency at the
same time. An example: 8 antennas with phase-shifting, meaning the
resultant beam is steerable. The more phase-shifters you have, the
better the resolution, the better the pencil-beam.
There is nothing to see in the way of steering; it is done
electronically by the phase-shifters.
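The steering-by-phase-shift idea can be sketched for a uniform linear array. This is the standard array-factor calculation, my own illustration rather than code from the talk; the 8 elements, 30-degree steer and half-wavelength spacing are example values:

```python
import cmath, math

def array_factor(theta_deg, n_elems=8, steer_deg=30.0, spacing_wl=0.5):
    # Sum the phase-shifted contributions of a uniform linear array;
    # each element gets a progressive phase shift chosen so the
    # contributions add in phase at the steered angle
    kd = 2 * math.pi * spacing_wl          # phase per element per sin(theta)
    theta = math.radians(theta_deg)
    steer = math.radians(steer_deg)
    total = sum(cmath.exp(1j * n * kd * (math.sin(theta) - math.sin(steer)))
                for n in range(n_elems))
    return abs(total) / n_elems            # normalised gain: 1.0 on the beam
```

Evaluating `array_factor` across angles gives the familiar beam pattern: gain 1.0 at the steered angle, falling away sharply elsewhere, and the main lobe narrows as `n_elems` grows, which is the "more phase-shifters, better pencil-beam" point above.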
There can be more than one array of such antennas, for diversity. As I
speak, my voice goes in all directions: some has a direct line, some is
reflected, and if the direct path is blocked because someone moves in
between, you can still hear me because my voice comes by less direct
routes. There is a diversity of signal paths: if you receive more than
one indirect signal path you can better understand the signal I'm
transmitting. With a total blockage, you still have access via another
path. Having multiple transmission routes improves the fidelity of the
signal. Say there are 3 separate routes to 1 user: that conveys more
signal power, and it's more robust. Like fake news going around: with 4
people conveying the story, if 3 say the same and one says something
else, you could make a judgement based on the numbers. Massive
MIMO equals diversity.
Q: Where are we at with this? I've heard of deployments in Korea.
Is it work in progress, established tech, or
an embellishment on early tech that is being deployed?
The ITU has collected the companies involved, like Samsung,
together and is scheduling roll-out. By the end of 2020
there should be commercial deployment. There have been
preliminary trials by Qualcomm in the US and Samsung in Korea,
and some others. No one is sure that 5G works.
mm waves are a big technical difficulty to handle. What we hear of as 5G
is the massive MIMO and the beam-forming but not mm-wave, so 4.5G
at the moment. 4G does not do everything 4G was asked to do;
whether 5G will do what 5G is supposed to do remains to be seen
while the hype is still around.
Q: 5G handsets will have those multiple beam-forming
arrays built in. I'm assuming everyone's MP will have built in,
updated regularly, a database of where all the base stations are,
the x,y,z coordinates, so when you make a call, the initial hook-up
will be beam-formed in the direction of the nearest expected base station?
Yes, instant access happens with 4G. For beam-forming to happen it
needs to know the angles of departure and arrival. If that is line-of-sight
then fine. But let's say there are multiple blockages, no direct line of
sight: initial contact then would be omni-directional; it learns the
intervening environment, then it will do beam-forming.
Q: Is the term phased-array the same as beam-forming?
In the early days of satellite TV there were things the public called
squarials; was that this beam-forming structure inside?
Yes. The beam-forming ability has been around for ages; the orthogonal
multiplexing knowledge was around in the late 1940s or so,
but the technology only emerged in the mid 1970s when the electronic
chips became available to do it.
Q: What do you reckon the future will be like?
I hope to make a holographic phone call sometime soon.
Q: I assume there is an infrastructure change required to go to 5G.
At what level is that required: satellites, the transmitters, or can you
use much of what is already there?
For mm-waves a completely different structure is required, but the rest
of the infrastructure should be much the same as current deployment.
Q: Can you see 5G being all-pervasive or just augmenting 4G, having
5G where it warrants it?
Yes, where 5G is not available somewhere then fall back to 4G,
like at the moment, where 4G is not available, it goes back to 3G.
Q: Would just a sheet of paper block 300 GHz?
Depends on the type of paper. Metal reflects the signal; buildings
block the signal.
Q: What about fog, would that block?
Yes. At 60 GHz there is a total absorption band (due to oxygen),
so 60 GHz cannot be used for telecoms. You can't use all
available frequencies in the mm-wave region.
60 GHz is only possible for indoor wifi.
Tuesday 16 April 2019, Adam Barton of Solent Uni. Title: The Future
of Video Games. Video games are now a significant influence in today's
global economy. This talk speculates about the future of the games
industry: the challenges it faces, the impact of new technologies and
what it means for those who wish to develop interactive media.
The state of the industry: the rise of mobile, and spiralling costs,
which are insurmountable for the future of video games (vg).
The rise of the robots will hopefully solve some of the problems; there
is a massive skill shortage; and I'll make some predictions at the end.
I wrote this presentation about a year ago and some of those predictions
are not so wild.
For the future, a tip someone gave me: look at what's not going to
change. If things are going to be consistent, that helps you
form what the future might be. Some things we are happy to say:
performance will improve, though Moore's law is beginning to taper out
now. Automation will increase. Leisure time and free space available are
less certain, but we're pretty sure prices will rise. Leisure time will
increase, and we are probably facing rising unemployment, long-term if
not permanent. Advertisers will advertise and want to advertise.
Nicholas Negroponte wrote a book, Being Digital, in 1995, where he made
predictions about the economic digital future. He recently gave a talk
where he went through his book, exploring what he got wrong and right.
Précising him: there will be a growing number of consumers of the
digital domain and a contracting number of creators.
If you're a creator you're going to be financially secure; you will be
able to monetise your IP, and these people will be supporting you with
cash for what you do. 20 years on from 1995 and it still holds true.
There are cultural shifts we have to be aware of: the stigma of gaming.
There seems to be a social construct where people don't like the term
gamers, or don't like people who play games; ideas of what a gamer and a
non-gamer is. There are now people who just watch people who play games.
Then, how does marketing interfere with this? Data shows 50% of the
population are gaming, a staggering amount. The console market is a
big market, mobile gaming 1.1 billion, and PC gaming just shy of 1 billion.
It makes a lot of money and is prevalent within media society.
But if you survey the average man in the street, they don't
expound on it. Politically we are on the back foot, with government not
recognising any of the significant contributions that gaming makes, and
it gives the industry a hard time about tax breaks. There is a rise of
non-gamers and a problem with identifying them: a big proportion out
there are playing games regularly, but if you ask them, they say they
are not a gamer. You may catch someone playing solitaire on an iPad, or
similar whimsy games on a PC, who will firmly say they are no gamer.
So there is an ageing group who are gamers, and younger people coming
through with a perception of not being gamers. It's not like there is a
categorisation of newspaper buyers or TV watchers, so gaming is not
necessarily the same construct. Also market size and gender split: the
perception is that it's a very male-oriented thing; it is roughly 50/50,
but it depends how you split the market, whether console, PC
or mobile. We need to identify which grouping. A weird sector,
even to me, is people who watch people play games. On YouTube they
will sit and watch others, at which point my mind is blown.
They will even watch people playing a vg they themselves have.
I can stretch to understanding a try-before-you-buy
concept. Something weird and psychological is going on.
I don't watch football, but a lot of people do. I understand that,
because they might not be able to play football. But there is indication
that it doesn't matter that you cannot play; you just like to watch
football. The gaming viewer market is huge but poorly understood.
We don't know why it's popular, and it's rising in popularity.
Then marketing. Sony and No Man's Sky: they saw a product built by a
small team in Guildford and they wanted to be first to market it. They
put a lot of money into marketing, getting ahead of the thing; they
dragged the development team along and ended up selling a game that was
not finished. This debacle made a big impact on gaming and its
marketing: a huge backlash, massive complaints, law suits.
Similar with Fallout 76, and EA are going through something similar with
a game called Anthem, where the hype is not supported by the development.
Gaming is worth 130 billion globally; the markets are steadily
growing and expanding. Asia and the South Pacific areas are still a big
chunk of that market. Looking by platform, it's roughly split console
and PC, as everyone thinks that is the big market, but it's only
half the market; the other half is in mobile.
The prediction for the future is that the real market is in mobile.
It's set to grow for the next 10 to 15 years. Look at the trends:
it's mobile that will be gobbling up the money and making the money.
Everything else is growing a bit but fairly static in percentage terms.
Then the predicted death of the console, which is a bit of a stunner.
Sony is announcing the PlayStation 5, but there is a distinct downward
trend. The Switch has kicked it up a bit, but is the Switch a console or
a mobile? The rise of the PS4 does not compensate for the market
loss as a whole. So maybe by 2030, no more consoles.
The UK used to be one of the top vg-producing countries; in 2004/5
we had 3 out of 10 of the top developers in the world.
Now we are not in the top 10, probably in the top 30.
We've been overtaken by Japan and the USA. Canada introduced
tax breaks and Ubisoft moved to Canada, where all the money went.
We are making a healthy contribution to UK GDP.
We employ in the UK 20,000 people, split 50/50 between
retail and developers. That contribution to GDP is about 1.5 billion;
that's about 75,000 per individual contribution to GDP, quite
staggering. If we had more people, then a bigger contribution
to GDP. The problem is we don't have enough people.
The mobile trend is to increase. By usage, making phone calls is not
even in this list; checking the weather is a common usage of them;
playing games is in the middle of uses. We know people are very attached
to their mobile data, because we track all their data.
Mobile is set to expand; international markets are set to expand.
The Middle East is brimming, ready to be exploited by the vg industry.
As it stands we don't know what games to make for them; our games
don't appeal to them, a cultural difference.
The PC and console market is saturated: people have gone from
14-year-olds with a computer to 50-year-olds with computers,
and there is no more room for expansion.
The markets are unsustainable because of development costs.
In 1985, 30,000 GBP would get a vg on the shelves. Then there is a
logarithmic graph, a broadening of the costs. Unless you are a
billion-dollar company, how do you make a vg? Another thing: it is
difficult to see management of this in the future. If these companies
continue, either they go bust or people cannot afford to make a vg. The
industry is not sure of the solution to this. Red Dead Redemption 2 is
rumoured to have cost over 0.5 billion to make, with 7 years of
development. It made back that money in about the first week. But you
need substantial capital to get in on that track to begin with. It never
used to cost that much to make a vg: they used to be produced in about
4 to 5 man-weeks, and now it is man-years.
So they need to sell more consoles, but why buy a PS4 or 5 if it's only
as good as a PS3 or even a PS2? Some people argue the earlier ones were
more playable. But they make money by selling new consoles.
You need to outshine your previous products with something better,
more glamorous or exciting. A lot of this comes down to
graphics: when you look at a vg advert, it's how pretty it looks.
There is little commentary on the playability; even at the
bottom of the screen it says "This is not a playable section of the
game", it's just a cut-scene. The marketeers know that is what sells the
game. The killer is that manufacturing costs are tiny but development
costs are high. To make your first beta version is many thousands of
man-hours, but once you've made it and want to make a copy,
the costs are minuscule. Here lies the problem, in comparison to, say,
car-making: the engineering in a car makes it expensive to make another
car. When manufacturing costs are so cheap, marketing
takes over, often outstripping development costs, which is the root problem.
So we must get cheaper. A Marxist theory of economics: to get cheaper,
get rid of the humans, because they cost the money. The quality
must not go down, or the consoles would not sell. You need more
effective staff, better tools, automation, creativity, and you need some
machine learning in there. So what do the robots have in
store for us? There is an interesting symbiotic relationship between the
vg industry and machine learning. In the 1990s there was not enough
computing power to do decent machine learning. At the same time GPUs,
Graphics Processing Units, took off: parallel processors dedicated to
doing clever graphics in a PC or console. AI scientists discovered that
these GPUs were good for doing AI and machine learning, because they are
a parallel processing system. It comes down to Single Instruction
Multiple Data (SIMD): one instruction applied across a whole range of
different data, and GPUs are good at this. Graphics cards got cheaper
because so many were made; once cheap, scientists could play around with
these GPUs. Now there are GPUs dedicated to just doing computing.
So we have AI working. The other big problem
facing the games industry is that actually no one knows how to build a
vg. I've been in this industry for 30 years; I know what many outside
the industry don't know: we don't know how to build games.
It is shocking how inefficient we are, how many games get canned
mid-development, how we make a long series of bad choices
and the only way of fixing it is to throw it away and start again.
There are some processes that don't work: get your ducks
in a row, but it does not work for vg development.
We use Agile, which is good for software development;
they build things that have a certain flexibility built into them.
But talk to the big names in the industry and it's a big failure,
though they don't want it known publicly, or their
shareholders may run away when they realise we don't know what we're doing.
We had a success in the rise of rapid prototyping: get a team
to make some stuff, then iteratively design and develop it.
Famously Valve are very keen on it, but they have not made a
game for at least 8 years, so what is going on there?
We've seen a return of immersive play, mainly Indie developers
going "I'm just going to make a game", and if I like it, I'll
put it out there and we'll see what happens. But if you're trying to
do a billion-dollar game, that's very risky.
We've seen the integration of version control and asset
tracking, again wholly ineffective systems from a broad perspective.
The biggest one is asset tracking, where you make a vg,
then take the assets, chuck them away and make the next one.
This is insane; it's not done in other industries, but it seems to be
prevalent in the vg design industry. We need people to come in and sort
it out, otherwise we will not be able to continue to make bigger and
bigger games. Consider engineering infrastructure where the building
costs 1 billion: you know people know what they're doing on that project.
With things like Red Dead Redemption, the vast majority of the
team just hope it all comes together towards the end.
If games weren't so cost-effective to manufacture, we'd all
be broke by now.
The Art Pipeline. Everything in a vg that you see is traditionally made
by an artist. Every chair, every table, every wall or character:
somebody, or a team, built that. It is very labour intensive.
Historically we had first, second, third, fourth generation.
Build models; textures and artwork that go on models;
shaders for how things look, whether something looks like a piece
of metal or a piece of plastic. Then rigging, which is putting a
mechanism inside a character so it can animate. Then animating:
getting your character to do motions frame by frame. Traditionally this
was all human, all processes done by humans.
We are at a midway point. We can scan models, using photogrammetry,
and by another process using a laser scanner. We can use photos for
texturing. We have smart materials and smart systems
of visual programming languages which make the shading easier.
We are still hand-rigging, but Mixamo and a couple of other
companies are slowly solving this. Currently there is a load of
motion capture, mo-cap. See any of the "making of" films for games:
people in black catsuits with white bobbles, in a big room
with loads of cameras. It captures their movement and organises
it into a form of data that a computer can use.
We don't know what goes between scanning and synthesis, but we
hope for some systems that can automate this. We have something
called Substance, from Allegorithmic, for simulating real physical
surfaces and generating textures and shading from them.
We've no idea what can happen about rigging, and then with animation
we have simulation systems, where you don't need to mo-cap
any more: you can set a program running and it will
work out what that character should be doing.
The end prediction is no more humans: we will be synthesising our
models, texturing using procedural texture generators,
synthesising our shaders, and machine-learning algorithms,
via classic teacher/training systems, will learn to rig.
Some examples of some clever stuff. They work out a patch grid:
take a video, let the computer look at the video, and the
computer has computed where the parts of the body are and can
then work out the animation. It can then return that as functional
data. So someone can capture me just walking around a room,
no special rig or camera or other facilities, and 20 minutes later
they have a 3D animation in a computer.
The same has been repeated with crowds.
There is work using wifi signals, seeing how they are blocked as a
person walks around an unseen room. From that they can compute where a
person is and where their limbs are, for multiple people, and all
through a wall: just mind-blowing. We're safe in here, as there is
no wifi. You can see military applications for this, as long as a
hotspot is near them.
So we need people or actors to go and do your stuff. One example, for a
dog: synthesised quadruped motion, no real dog involved. They captured
loads of imagery of actual dogs and fed it into a computer learning
system. So they now have a virtual dog that wanders around and behaves
according to algorithms. That data can then be spat out, ready
for a games engineer to use.
So then you can get rid of every person in the pipeline,
all replaced with a computer.
Photogrammetry is another toolset: take loads of photos of
static things; there is free software to do this at home.
Take loads of pics of your teddy bear, feed them into a computer,
and you get a 3D model, a data-cloud, which tells you where all
the points are and what colours they are. The result can
look like the actual scene, but it has never existed. The downside
is they can be very expensive models, having 2 or 3 million
points in them, to make them look good. Then there are algorithms to
take the entire scene and recompute and optimise it, to be in a game.
The Vanishing of Ethan Carter used just photogrammetry:
no artists used, just photos of everything, and it looks
amazing. The technology is there.
The other big problem is that coders are expensive,
time-consuming and not 100% accurate: they put bugs in their
code. If we could teach computers to code, we could solve a lot
of problems; we could get rid of the human factor and speed things up
dramatically. The problems are that computers can't handle paradoxes;
they can't handle the halting problem, Alan Turing's proof
that a computer cannot in general detect whether a program halts.
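Turing's argument is often shown as a short self-referential sketch. The `halts` oracle below is hypothetical; the whole point of the diagonal argument is that no such function can exist:

```python
def halts(prog, arg):
    # Hypothetical oracle: would return True iff prog(arg) eventually halts.
    # Turing's diagonal argument shows no such function can exist.
    raise NotImplementedError("no general halting decider exists")

def paradox(prog):
    # If halts() existed, this program would contradict its own answer:
    if halts(prog, prog):
        while True:      # oracle says we halt -> loop forever
            pass
    return "halted"      # oracle says we loop -> halt immediately
```

Feeding `paradox` to itself forces `halts` to be wrong either way, which is why a computer cannot, in general, decide whether arbitrary code terminates.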
There is the P vs NP problem, polynomial time versus non-polynomial
time: can you solve every problem? Then descriptions of intent,
basically what a coder does: you say to the coder what you
want and he translates that into code; it's your intent and they
write code for it. The current theory is that our programming
languages are wrong. It's not that these things are impossible,
although there are mathematical arguments that the halting problem is
impossible. A conjecture has it that it's only impossible in the current
paradigm of the way we write languages these days, and that we could
build a different computer language that would circumvent
such problems. An interesting branch of coding maths: is it possible
to write a language that can just pass the halting problem?
If we could do that, then computers could write code. Currently they
can help: there are IDEs, Integrated Development Environments.
An IDE can look at the first few letters you type and make a
suggestion: "you are looking for a variable/function, I'll tell you what
it is so you don't have to spell it". Code prediction is useful;
high-level languages are useful, allowing you to make generalised
statements which the computer will then turn into exactly what needs to
be done. Then there are compilers that optimise code, apparently
dominated by Microsoft currently. Writing a good compiler solves many
problems of writing code, because if you have a good compiler, the
language and everything else is just another layer of icing on the cake.
We have a massive skills shortage. The problem with automation is that
you can eliminate repetitive tasks, low-skilled and narrow-skilled jobs,
and get rid of labour-intensive work. We saw this with the industrial
revolution, automating many jobs. The problem we have, as you take away
low-skill jobs, is that the entry level has fewer and fewer openings for
graduates, and a rising expectation of the skill level someone needs
just to get into the industry. That is a barrier to entry, so the
industry can't grow. Developing and maintaining
these automated systems is in itself a highly skilled job. If you
don't have people who know the basics, you don't have anywhere
to go. But we are actively getting rid of these jobs. Traditionally in
the art pipeline, one of the jobs popular with graduates
was modding: taking someone else's model and building a simpler
version of it, reducing some of the complexity.
Also retopologising, which is similar: you take a very high-polygon
model and draw a simpler model from it. But those jobs have
gone, partly through automation, partly through off-shoring to Asia
where that work is cheaper.
Some wild predictions about the future.
Machine learning: there is a great story about a boy who found his
father's ghost in a vintage video game. The games industry is really keen
to sample your online play and simulate it. Multiplayer games need
an audience, but they also need participants, to create the bots.
Fill the server with bots, machines emulating humans; then,
providing no one can tell the difference, these bots will be much more
attractive. They don't mind losing. That means lots more people
get playing your game, because they've just kicked everyone's arse.
In reality they will be playing against a bunch of bots that don't
care. A lot of work has been done on sampling the way you play a game,
synthesising it and then replaying it.
Subscription streamed play, no console required: we had
Stadia announced at GDC earlier this year. Google is into this
now; no console, they just send you everything down your
broadband. Mobiles will win. Not my prediction, I don't
like mobiles. However, they are prevalent, with a capacity to do
more in a simple, compact form. The only thing holding them back at the
moment is the batteries. I'll say that VR will go mainstream.
There have been at least 3 attempts at VR in the games industry.
But the pick-up of VR elsewhere has been substantial. It's in architectural
simulation, in training motor mechanics and doctors, in the
military. It's big and it's growing.
Then, computers building games: we know enough about the method
of building games, procedurally generated environments.
Q: Have they already tried?
They have tried, but the results were awful. There have been a few games where the
entrant was a computer, and it looks like a Wolfenstein knock-off.
But the building blocks are there. Computers will learn to play games,
but they will be too good, which is kind of ironic. The trouble with
generating interesting opponents for humans is that we actually have to make
them dumb. They have access to all the data; they know who
you are, know what you're doing, where you're looking. They are the
ultimate ghost in the machine. They know just when you are changing your
clip, and they pop up and shoot you. There is no argument about them
being hard to beat; they just don't play like humans do.
We have things like heat maps: we watch where people play games and
generate heat maps of where people hang out. On top of this we can
build decision loops. Let people play games, and if there is a hotspot
where people always die, the AI knows to avoid that area.
So a layer on top, a decision map. If there is another place where, if you
stand there, you always get kills, then that goes in another decision map.
You can build up very good systems based on just these two maps.
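The two-map idea described above can be sketched in a few lines; the logged death positions, grid cells and hotspot threshold here are all invented for illustration:

```python
from collections import Counter

# Hypothetical death positions logged from many play sessions,
# already snapped to integer grid cells (all values invented).
death_log = [(3, 4), (3, 4), (3, 5), (7, 1), (3, 4), (3, 5)]

# Heat map: how often players die in each cell.
heat = Counter(death_log)

# Decision map: cells the AI should route around, using an assumed
# threshold of 2+ deaths to call a cell a hotspot.
avoid = {cell for cell, deaths in heat.items() if deaths >= 2}

print(sorted(avoid))  # [(3, 4), (3, 5)]
```

The same pattern inverted (cells with many recorded kills) would give the second map, a set of positions worth holding.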
Then neural networks and deep learning can learn to play
effectively. The problem then is controlling their development,
because they reach a point where the player flips from beating them
reasonably easily to not winning at all.
My current research is navigation in non-orthogonal space,
and I'm looking for mathematicians to help me out. It's a big
area of problems. We understand how humans navigate
through space, and we have a good understanding of how effective grids are
in video games, from A* algorithms, but the problem with grids is that
they are too regular and they don't join together easily.
So I research how to go off the grid, so to speak.
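As a concrete reference point for the grid navigation mentioned above, A* on a 4-connected grid can be sketched as follows; the map and movement costs are invented for illustration:

```python
import heapq

def astar(grid, start, goal):
    """A* pathfinding on a 4-connected grid; grid[y][x] == 1 is a wall."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan distance
    frontier = [(h(start), 0, start, [start])]  # (priority, cost, pos, path)
    seen = set()
    while frontier:
        _, cost, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        if pos in seen:
            continue
        seen.add(pos)
        x, y = pos
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and not grid[ny][nx]:
                heapq.heappush(frontier,
                               (cost + 1 + h((nx, ny)), cost + 1, (nx, ny),
                                path + [(nx, ny)]))
    return None  # no route exists

# A tiny invented map: a wall across the middle row with a gap on the right.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (0, 2)))  # detours via x == 2
```

The regularity the speaker complains about is visible here: every step costs the same and every cell has the same four neighbours, which is exactly what breaks down in non-orthogonal space.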
Q: With consoles, surely mechanically they are better than
mobiles or PCs; they're made for gaming. No latency
between pushing a button and the code operating,
or an absolute minimum compared to scanning across the screen
of a mobile, say?
Yes, I totally agree.
Q: But counter-intuitively things seem to be progressing the wrong way?
There are people who will only look at websites on a mobile; I
will only use a PC. But mobiles are taking over,
because people have them on all the time.
There is no accounting for people. Culture and society don't
necessarily make the best decisions.
All the effort goes into hi-def graphics and it's all lost on tiny screens.
We're in the wrong age demographic to understand it.
The big money goes where the mobile is. There will still be a hard
core of gamers who want a massive screen in their living room.
We know we need a new controller/console. You can span a lot
of keys on a keyboard quickly, but keyboards aren't ideal.
There is a divide in the market between a plastic thing with a
dozen buttons at most and a plastic thing with 120 buttons.
We need tactile VR, and then we can have as many buttons as we like.
There are hotkeys, but there is a lot to be done in the human/computer
interface. We can make fairly solid predictions about what the PS5
will have, and yes, we'll still have a controller.
Q: A slide you had, based on an old book, where the number of consumers
grows and the number of producers shrinks. If you look at the
music industry, that's not happened. The number of people generating
music has grown exponentially; the tools to do it are there, easier, and
in everyone's pocket now. But it's harder to make any money, perhaps where
that graph still applies. The number able to make a living shrinks?
There has to be a threshold of competence. It's like YouTube, with a massive number
making videos. You can consider them creators, but do they have financial
stability? Maybe that was missing from the graph.
For vg, some are getting very expensive but some are getting very cheap.
Some will hit on a success and get catapulted. The best predictions
for the big games are that they will double their money, whereas for
indie games some take in hundreds of times more revenue than their
expenditure. It's a market of competition. Notch started Minecraft in a
bedroom; he didn't really have an audience, he just liked making it.
One of the most successful products, in terms of return on investment, you
could ever think of.
Q: Do you happen to know what makes a game addictive?
Lots of possibilities of different gameplay. Herzberg
had a "hygiene" theory of what makes people happy at work,
things like agency and control. There's no amount of pay that makes
people happy. Give people agency, control and challenge, and that makes them
happy. So in vg, changing the player to have agency and control
gets into feedback. We know that if you put those things
into games, it's fairly sure players will be hooked. That cycle of play
and reward; you get into a spiral.
Q: So it could be induced by the marketing department?
Allied to that, why is so much spent on marketing when there is so
much "social media" around to magnify the word of mouth?
Some of that marketing budget is social media.
Q: Again comparing to music, in the original vinyl days: someone gets an
album, someone else hears it, having dismissed that genre before,
and he goes out and buys it. These days, is it because their friends take it up,
or is it direct-marketed to them? What makes them take up a game?
It's direct marketing; we have the data. Not via friends.
The problem with vg is that if something is not great, you don't get
word of mouth. This is why direct marketing can compensate for
shoddy games. In 2016 I was surprised to see signs for Doom and
Red Dead Redemption. Who needs to be told that Red Dead Redemption 2 is
coming out? Every fan out there knows it's coming. Why
do they need to market it? All very strange, spending as much
on the marketing as on the game. That's insane if it doesn't
work. I'm convinced that advertising doesn't work.
They don't have to reach the fandom, just the people on the
fence. Plant the idea in their head through repeated mentions;
then when they see it come up on their media feed, perhaps via an
influencer, they'll go "I'd better check it out, see what it's like".
I've talked to people in marketing firms and they
don't know what works that well. We know what doesn't work.
It's not possible to do a blind test. So run the test: don't
advertise Red Dead Redemption 3 and see if we still sell lots of copies.
There is no way of filtering that to see if it works. You spend 500 million
on making a game and then another 500 million on marketing.
Q: I'm intrigued by the concept of watching others. I like
watching snooker, but try to play yourself and it's hopeless;
you can't make a decent break. You need someone who does it well.
I'm not familiar with Minecraft: is it popular because it's so difficult
to play that it's more enjoyable to watch others who can play it well?
No, it's easy to play. It can be fun to watch others play it badly.
Q: So is it like the TV series Gogglebox, getting people to watch other people
watching TV?
Yes, and because of advancements in video editing, there is comedy that
comes out of the editing.
Q: For a particular streamer of video gaming, his performance is much more
entertaining, because he does ??.
There are so many reasons: just having 5 minutes to watch something rather
than play the full game, or picking up tips.
I scoffed when I first saw others watching video gamers, but then I
scoffed when I saw the first version of Minecraft, thinking who would
buy that. I don't think we know enough about human nature to
understand why people watch others playing video games.
Q: I've seen YouTube commentary about others on YouTube, so how far off
is YouTube commentary on a player of a vg? Dr. Seuss and the bee-watchers.
Is it because we have too much free time? Instead of mining coal
we watch dancing pixels.
Q: Gaming is of use in medicine and education. I wondered, as I'm
a psychiatrist, how things are moving in that direction, the
cross-over. A colleague is using it for phobia control,
developing VR; it works quite well, as you're living the
experience without any danger attached?
There's some great work in mixed-reality VR, where
we see the real world and the VR at the same time.
There's a great BMW one, for training mechanics on their vehicles
with VR/AR specs. You can see the whole car and it will light
up the next nut to undo, saying which spanner and what torque setting.
That makes experts out of novices, a great way of expanding
knowledge transfer and technical abilities. You could send a VR/AR
kit out to a very remote place and guide a doctor
through a very delicate appendectomy. We as humans
have the ability to do delicate things, but don't necessarily
have the understanding to do them.
A couple of my students have worked in an area of
de-sensitising: curiously, a bunch of people with a phobia
about using telephones. A researcher with me has been
doing work on autism, again with AR/VR. I think the
sky's the limit for VR because you can do so many things, but it is limited
at the moment by the lack of tactile feedback. I think we've sorted out
latency and the issues over resolution. Motion blur is another
interesting sticking point; crack that and it will expand to a whole
new group of people, because a lot of people are immediately
sick due to it.
Q: By tactile feedback you mean goggles plus some sort of haptics?
You could feel the pressure. Microsoft have been working on
haptics, hand-tracking systems and vibration systems.
Q: I can't imagine being able to grab something in VR.
At the moment I'm happy that the controller vibrates, telling me
I've got hold of something; that will do.
Q: With VR can you grab something and feel how heavy it is?
No, everything feels like polystyrene in VR, all weird. Your brain
sees a bowling ball but your arm feels nothing.
You'd probably need a whole body suit, to resist your whole
arm movement. You can get used to many things; I can walk through walls in VR.
I said recently to my daughter, "remember you can teleport",
a strange thing to find yourself saying.
Q: In the early days of the BBC Micro, kids learnt to
program by liking to program games and then
moved on. When you're looking for students for your courses,
what sorts of qualities or skills or minds do you look for?
They don't have to be mathematical, but perhaps be logical?
Numeracy and literacy.
Q: Is that all? Numeracy is just arithmetic.
I teach art predominantly. The fundamentals are that I
can go a long way with someone who understands the maths
of art. If you have a good appreciation of art but not good
maths, then it's a real struggle. Because you are behind what we call
"The Glass": you're separated from your artwork by a sheet of glass,
and the only way you can communicate with that art is if you understand how
a computer works, understand the maths behind making that picture appear.
If you don't understand that maths, unfortunately the industry
does not have a place for you.
Q: Would you need an A-level in maths, or to be able to program, before joining a course?
Yes, ideally. We used to have an interview process, a thorough way
of filtering students. Nowadays there is a complete political/economic
spectrum on uni-level education.
Q: Take someone like me doing physics, who did not realise how much
heavy maths there was. For someone on a gaming course,
can they end up dropping out because they were not expecting the maths?
As an educator, I'm disappointed in the undervaluing of numeracy in the UK.
There is even a large body of people who take pride in not
getting on with maths. Computing is basically glorified
mathematics. A lot of the future we can see is software
programming and engineering. If you can't do maths, you'll struggle.
We don't teach the interesting parts of maths: the exploration,
the excitement, patterns, the discovery. We teach rote learning:
do 100 subtractions, now additions, now multiply. But a
computer does that. The first thing I showed my own kids
when they started at school was how to use a spreadsheet.
Boomph, there's all your homework done, in one mouse click.
Then you can do the interesting stuff: look for patterns.
Maths is looking for patterns, finding them.
Q: On computers doing coding, it reminded me a bit of Scratch,
scaled up to 3D, or small kids manipulating physical blocks
to program robots etc.?
Scratch is great, and MIT have another version called App Inventor?,
a more grown-up version. There is something called Gadoe?, not as efficient.
The concept that computers can't program is, fundamentally, because
we've written bad programming languages.
By bad, I mean our logical constructs lend themselves to the problems
of paradoxes and the halting problem. We need to get through to
writing structured commands that cannot have a paradox within
them. We're entering philosophy here. Innately we're not
good at it, because our language is so messy, so full of
abstraction and ambiguity.
Q: If we end up with a paradox, tough luck; it won't do us
any harm, let's move on to the next thing. It
doesn't make us fall to the floor?
Occasionally we as humans do crash. There are problems with the
human computer. But if we want AI working
safely, and safety is the key aspect, then we need to solve some
of these problems. You cannot afford to have a computing
device having a problem with a paradox.
Say I make a really good stamp-collecting programme, and the computer
decides the best way to do that is to kill all humans
and turn them into stamps.
Q: Program in moral thinking?
Then you get HAL in the film 2001, who would not detonate itself.
It all lies at the human end, not specifying clearly enough what they
want.
Q: I'm a programmer for a living. The biggest problem
is teasing out of my clients exactly what they want. Most of the time
they don't know, until they see it?
They have a picture in their head, but it's not solid. Then there is the
trolley problem. You have a large wagon going along a rail track.
There is a split: on one track there are 5 people, and on the other track,
one person. The trolley is going to hit the fivesome; do
you flick a lever, saving 5 but killing 1? It emerges in the world
of computers driving cars. The creator of Vsauce?, Michael Stevens?,
conducted an experiment recreating this live. Consider the psychological
effect on the people who flicked the switch. The people who
didn't flick the switch didn't seem to care: those people died,
but I don't take responsibility. Two people in that experiment
who flicked the switch were traumatised. This is the complexity
of moral decision making. Imagine a computer being upset because
it had run over a dog. How do you build empathy into the machine?
Q: With morality: if someone does something bad to you, is your
response to do something bad to them, to teach them a lesson,
or do you give them so many chances? Someone called
Axelrod got a number of programs to compete to see which would
give the best response, and it turned out to be tit-for-tat.
An early example of trying to get computers to handle moral solutions?
I believe the winning strategy was: I'll do something nice,
and if you do something nice back, I'll reciprocate.
Do something bad and I'll reciprocate doing bad. That program
won out over all the other very complicated ones.
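Axelrod's tournament result can be illustrated with a minimal iterated prisoner's dilemma; the payoff values and round count here are the standard textbook ones, used purely for illustration:

```python
def play(strat_a, strat_b, rounds=10):
    """Iterated prisoner's dilemma: 'C' cooperate, 'D' defect."""
    payoff = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
              ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)  # each sees opponent's history
        pa, pb = payoff[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

tit_for_tat = lambda opp: 'C' if not opp else opp[-1]  # nice first, then copy their last move
always_defect = lambda opp: 'D'

print(play(tit_for_tat, tit_for_tat))    # (30, 30): mutual cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): punished after round one
```

Tit-for-tat loses the individual match against a pure defector but, across a whole tournament of strategies, its willingness to cooperate racks up far more points overall, which is the result the speaker describes.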
Q: So logically we stop having guns in vg?
We'd love to get rid of guns in vg, but can't find a solution.
Guns in vg are a kind of shorthand for the simplest
interaction. It would be nice to talk to a computer,
have sentences and have a conversation, but we can't do that.
So apparently we use guns. It's an interesting deconstruction:
the easiest way of resolving any conflict is to overpower
your opponent. The same as if an alien came into this room; we
can't have a conversation.
Q: There was, ages ago, a faux psychiatrist program called ELIZA.
"I'm feeling really depressed today."
"Oh, you're feeling really depressed."
Just reflection. But there are now sophisticated programs where
you would not know you were not talking to a trained therapist.
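The reflection trick ELIZA used can be sketched in a few lines; the pronoun table and phrasing here are a toy version for illustration, not the original script:

```python
import re

# Toy pronoun swaps; the real ELIZA script was far larger.
SWAPS = {"i": "you", "am": "are", "my": "your", "me": "you"}

def reflect(statement):
    """Echo a statement back as a question, ELIZA-style."""
    words = [SWAPS.get(w, w) for w in re.findall(r"[a-z']+", statement.lower())]
    return "Why do you say " + " ".join(words) + "?"

print(reflect("I am feeling really depressed today"))
# Why do you say you are feeling really depressed today?
```

The striking thing is how little machinery is needed to produce something that feels conversational, which is exactly why ELIZA surprised people in the 1960s.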
(From the psychiatrist) It is a developing area, developing apps
and programs like that, as there are not enough psychiatrists
or social workers in the UK.
If it's helping the patient, then it's useful. Some people react to
computer CBT programs more positively than to a therapist.
Q: There are lots of facets of the games industry and its technology
which are used outside the industry and influence other
industries. We've talked a lot here about non-vg uses, but still
relevant, with moral questions as well. Does this feed back into the vg industry?
Yes. One of the good things about the vg industry is that it makes a lot
of money, which, ironically, funds other areas of
research. The big thing is GPUs leading to machine learning,
which is a huge boon generally, compared to if everyone had said "I'm
not buying a graphics card". It's pleasant to know there are
positive aspects to vg. One other use of vg is to
support anaesthesia: if you can distract someone by playing
Mario, you can do more to them before they register pain.
Q: On computers playing games and internal AI: you didn't talk about
external AI, Go and chess games etc. What are your thoughts?
Once you start on that stuff, you can spend a lot of
time on it. There is almost a branch point between logic AI,
which is how to develop winning strategies, and human-perception
AI; I'm more interested in the human-perception
stuff. Basically AI cheats, because it knows everything, and the
only thing it doesn't know, in say chess, is your next move.
We've tried brute force in the past and it's pretty much incomputable.
Deep learning has had to come up with better algorithms.
For me, the computer's cheat is just looking through
all the data it possibly can: it has a good chance of knowing your next move
because it has so much access to data, rather than looking at the
screen and making predictions based on fair information.
If you flipped this round, playing a game against a computer
that let you dump out all its memory and scan through it, so you
knew what it was thinking, it would be much easier for you.
Perhaps hundreds of people combing through the computer's state, so someone
knows what it will do next.
Q: That's why we use computers, because they're better at that than we are?
Go was interesting: like chess, a game of limited inputs but so many
outputs. The Dota 2? example was mind-blowing because it's
almost infinitely complex. The computer started doing moves that countered
the human's moves too quickly for them to respond,
and strategies a human just would not think of.
It's almost verging on creativity, but not, as it's only analysing?
For me, chess showed great scope. How many infinities are there? Chess
is difficult, but Go is phenomenally difficult, something no one thought
computers could be doing even just 8 years ago.
Q: That Go program self-learnt, compared to Deep Blue, which was trained
by humans. Now it's gone on to play Dota?; now it's limitless what it is
capable of doing. It did things that world experts in Go thought were
not good moves, but 12 moves down the line they were good moves.
One of the classic signs of intelligence is questions. I try to teach
my students that the more fundamental the question, the harder it is
to answer. Most of tonight's questions have been high-level. But students often
find they get down to a really fundamental level and think it is
a stupid question. It can be profound, almost to the level of
what it is "to be".
Q: I assume it's still the same as in my college days: even if the
lecturer asks if there are any questions, no one puts their hand up,
all too scared to be shown up amongst their peer group?
Yes. If you get to the end of your lecture, ask if there are any questions
and there are none, then you've failed.
Q: Can you pick on someone and ask them to explain something?
You can, but they might not come to your classes any more.
I have a giant pink inflatable beach ball and pass it around the room.
There is a bunch of rules, but basically you get to ask questions.
Even if they know the answer, it still encourages them to ask
questions, because they can then ask questions of other people.
It breaks the tension: I can ask a question now.
Q: Silly question: what's Minecraft?
Look on YouTube and you'll find people playing it.
It's like Lego. There is an educational version of it used in schools.
I could give an hour's lecture on what Minecraft is.
A simple explanation: break a tree, use the tree to make a thing to
break a rock, use the rock to make a thing to break an ore,
use the ore to make another thing to break more ores.
It's like 2001: A Space Odyssey; you start as a primitive creature with
very primitive tools and hopefully you will be a wizard one day.
Tuesday 21 May 2019 Peatland Ecosystems: Proxies for environmental change and current management. Anna Leveridge, Jordan Paker, George O'Ferrall of Soton Uni
Peat bogs are very wet and generally found in colder places. Not a lot
appears to be going on in terms of biodiversity, but later on we'll
find they are important.
Peat is the layer of sediment that builds up when organic matter
decays. It can be thick and so useful for reconstructing
past environments. There are 2 main types of peat bog: blanket
bogs, which look pretty homogeneous when viewed from
afar; they hug the landscape and look like grassy hills.
Then raised bogs, which occur more in isolation, accumulations
of peat rising above the water table, so they are fed only by
precipitation rather than by precipitation and ground water.
Another term is the acrotelm, the active peat layer at the
surface, where things get deposited and the plant matter
decays. The sediment corer is a metal cylinder thrust into the
ground which can pull out a really long cylindrical
core of the sediment.
The distribution of peatlands worldwide: mainly found in Asia
and N America. Globally 4.2 million sq km is peatland,
which is about 3% of all land area. Europe has quite a lot, especially
Scandinavia. There is a significant difference between the peatlands
that build in the north of Europe and the south, because of patterns
of rainfall and temperature: they prefer colder temperatures and more
precipitation to build up. In the UK, blanket bog forms the largest
semi-natural habitat. It is important for biodiversity and conservation,
because peatlands are good at regulating carbon. They are ecosystems with
high acidity and should be waterlogged if in good condition.
They are anoxic, which means there is no oxygen at depth.
This explains why peat accumulates rather than decomposes,
unlike other systems such as forests: without oxygen, decomposer
organisms cannot respire, so the plant matter is not broken down.
It is an important habitat for lots of wildlife, species adapted to acidic
conditions, and so for endemic plants, i.e. plants only found in that kind
of ecosystem. A key species is Sphagnum moss, which produces
phenolic compounds that generate the acidity, the main driver of the
conditions there. Peatlands are conserved under UK and EU law:
the Habitats and Wildlife Directives of EU law, transferred to
UK law on Brexit. Specialist species live here because of the
niche conditions; not many species could cope with the low
pH and constant waterlogging. So greenshank, dunlin, red-throated
diver and woodlark, and a similar variety of invertebrates.
As these peatlands change, potentially due to anthropogenic
forcing like extracting peat for turf fuel, the ecosystem
changes, sometimes becoming more like heathland, which changes
the variety of species supported.
Then there are the benefits that humans can derive from nature, of 4 main types.
Cultural services, which can't easily be defined in monetary terms,
like taking a walk in nature and the consequent well-being,
and the aesthetic value, wild and natural. Provisioning
services, which can more easily have a price tag attached:
e.g. fuel in 17th-century Ireland, animal bedding, and soil
conditioning for horticultural purposes. Regulating
services: the benefits to water quality alone are worth 15 billion pounds
in the UK. This is the way the environment reacts and responds to the
wider environment, such as sequestering carbon and mitigating
floods, as peatland is essentially a sponge, protecting
The historical archive that the sediment core can produce:
using a combination of plant remains and pollen, it's possible
to reconstruct pictures of past landscapes and past climates,
in the case of UK peatland as far back as 10,000 years.
So why bother reconstructing past environments? If you can reconstruct the
past vegetation of a site, then you can reconstruct its climate
and how vegetation changes happened in response to climate
change. This is of relevance now, because we can look back into the
sediment history, see how biodiversity responded to past climate change,
and so anticipate responses to potential climate change in the future.
Marine sediment cores and ice cores are used as well, and cover
much longer timescales, but on the local scale peat has lots of benefits,
being more accessible and economic: no boat required and no need to go
to Antarctica. There is a large range of proxy measures, and high temporal
resolution, so you can study many changes over a relatively short time
frame: maybe a sample every 20 years, rather than an ice core's more like every 100,000
years. The material is autochthonous, developed at the site, not transferred
from elsewhere or redeposited. What can a tube of mud tell us? They contain pollen,
micro-fossils and tephra. A proxy is something you can measure, and from that
measurement tell what something else is doing.
You can find traces of elements like lead and titanium within cores,
and these can show human influences carried in the air. Traces of lead
can show the timing of road building and the shift to unleaded petrol.
Titanium is a measure of erosion: you can see when changes of land use
occurred just from measuring titanium.
Pollen under a microscope has different characteristics
for each species, though grains can be difficult to tell apart, as they
are not always nice and flat; they can be folded or torn. It is possible to
create pollen diagrams, where the bottom scale is the percentage of each pollen
species in the core: birch, pine, alder, beech, lime. Each species likes
specific conditions. Godwin saw this in 1940, and that in some areas these
patterns of change are similar, so he created a pollen-zoning system. It can
be used to compare different areas, as carbon dating is expensive. Macro-fossils
are basically small pieces of plant matter, like seeds, fruits and stems.
These enhance the pollen record, as pollen only shows species that
produce pollen, and they give good taxonomic precision, compared to
pollen, which is generally limited to the genus level: you could
say it's pine, but not Scots pine or another pine.
Different cores from different sites will have different depths.
Say in the New Forest, with more organic deposits available, peat
may get deeper faster than areas of ??, which may be shallower.
So how can we take cores from different regions around the world
and look at them in the same context? For that we use tephrochronology.
Tephra is volcanic ejecta, any ejecta: it could be large boulders,
volcanic bombs, lapilli, smaller rocks and fragments. But here it is
ash, tiny shards of minerals and rock. Such ash gets pushed up
into the atmosphere, sometimes the stratosphere in large events,
and is then deposited around the world. This occurs on a rapid
timescale in geological terms: a tephra deposit settles in
about a year, so one tephra deposition event in one
location will be effectively the same time as in any other location.
A thick layer indicates a local eruption. A very thin
layer of ash is often hard to identify, appearing as scattered shards
in otherwise organic sediment. Each volcano has its own individual
fingerprint, a spectrum of minerals. Find one mineral makeup in a
tephra layer at one location, and the same or very similar makeup at another
location, and they probably came from the same eruption.
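Matching a tephra fingerprint between two cores amounts to comparing shard compositions; a toy sketch, with invented oxide percentages and an invented tolerance:

```python
# Hypothetical major-element compositions (weight %) of glass shards
# found in layers of two different cores; all values invented.
core_a_layer = {"SiO2": 72.1, "Al2O3": 13.5, "FeO": 3.1}
core_b_layer = {"SiO2": 72.3, "Al2O3": 13.4, "FeO": 3.0}

def similar(a, b, tol=0.5):
    """Crude fingerprint match: every oxide within an assumed tolerance."""
    return all(abs(a[k] - b[k]) <= tol for k in a)

print(similar(core_a_layer, core_b_layer))  # True: likely the same eruption
```

Real correlations use statistical distance measures over many more elements, but the principle is the one described above: near-identical chemistry implies a shared source eruption.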
So you can draw a line between them, although they appear at
different depths in the cores. Take the different depths and you can
create an age-versus-depth model per location. We can date the tephra
layers by the likes of argon-argon dating, but usually using mineral dating.
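An age-versus-depth model of the kind just described can be sketched as simple linear interpolation between dated tephra layers; the depths and ages below are invented for illustration:

```python
# Dated tephra layers in one core: (depth in cm, age in years BP).
# All values invented for illustration.
tie_points = [(50, 1100), (120, 4200), (200, 8600)]

def age_at(depth):
    """Linearly interpolate an age between the nearest dated layers."""
    for (d0, a0), (d1, a1) in zip(tie_points, tie_points[1:]):
        if d0 <= depth <= d1:
            return a0 + (a1 - a0) * (depth - d0) / (d1 - d0)
    raise ValueError("depth outside the dated range")

print(age_at(85))  # 2650.0, halfway between the first two tie points
```

Published age-depth models use more sophisticated (often Bayesian) curve fitting, but the idea is the same: dated tephra layers pin the curve, and everything between them is interpolated.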
Tephra dating is the main technique for synchronising cores
from different locations, its not always plain sailing.
Tephra deposition is effectively synchronous in a geological
timing sense. Near volcanoes you get deeper layers of tephra, but farther
away you may get a thin layer or no layer at all. In extreme cases there
are cores at the same site which don't have tephra layers in them,
but move 10m away and you get a different layer profile, with fewer
shards. So we have to take multiple cores from
the same sites and average out what we see. You have to check that
cores that show no layers, from the same site, represent the
same time series. Also, animal activity and plant growth within the
deposition area can disturb the often fragile, thin tephra layers,
especially in the northern hemisphere, where there is less volcanic
activity. Faults can occur where material
slides down, so a layer can appear younger and older simultaneously.
There is also vertical mixing from tephra being deposited on top of living
plants, producing a more diffuse tephra layer. You want to identify the
paleosurface, the surface where you can say: at this time, which we
have dated this to, this was the surface, as far as internal composition
goes, the plant matter and the geological matter at that layer.
You can't say this time period looked exactly like this,
just get a general idea. With tephrochronology we can date volcanic
eruptions very specifically to each volcano. We also get a picture
of what disasters were like. In the last 10,000 years there have been a
lot of eruptions, and disasters can define eras in geology; extinction
events define new eras of life on Earth. We come up with a very
real picture, which we can compare to today, to understand how
disasters happen. A core with thick, deep tephra layers in a short
space of time means many eruptions in one area, which could be hostile to life.
In recent history it's possible to track very recent eruptions.
A study looking at tephra layers in Ireland could track eruptions of a
single volcano in Iceland over 1,000 years. For recent Icelandic
eruptions we'll be able to see those layers of ash in only 100 or 200 years.
So future geologists and geographers will be able to re-establish
our current environments.
It's possible to use the same approach to date eruptions on other planets,
not using peat cores of course; coring on Mars would show ash layers.
An important effect of peatland is carbon sequestration: it is possible
to use natural systems to take our C overload back out of the atmosphere.
Peatlands are possibly the largest C store on the land surface of Earth.
Northern hemisphere peat alone holds 25% of the world's soil carbon.
These peatlands can exchange 20 to 30 grams of C per square metre per year.
England alone has about 14,185 sq km of peatland, so many tonnes of
C are sequestered in a year. A large amount of C is stored in these
peatlands, globally estimated at between 400 and 600 gigatonnes.
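The England figures quoted above can be turned into a rough annual tonnage with simple unit conversion (a back-of-envelope sketch using only the numbers from the talk, not a measured result):

```python
# Rough annual C exchange for England's peatland, from the quoted figures:
# ~14,185 sq km of peatland, exchanging 20-30 g of C per m2 per year.

AREA_KM2 = 14_185
M2_PER_KM2 = 1_000_000   # square metres per square kilometre
G_PER_TONNE = 1_000_000  # grams per tonne

area_m2 = AREA_KM2 * M2_PER_KM2
for rate_g in (20, 30):  # g C per m2 per year
    tonnes_per_year = area_m2 * rate_g / G_PER_TONNE
    print(f"{rate_g} g/m2/yr -> {tonnes_per_year:,.0f} tonnes C per year")
# -> roughly 284,000 to 426,000 tonnes of C per year
```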
Compare that with estimates of what we have left in our C budget,
about 375 gigatonnes. So they contain as much C as we have left to put
into the atmosphere before we cause irreversible climate change;
around 1,000 gigatonnes have been released since the start of the
industrial revolution. There is difficulty in measuring this
potential, due to permafrost. It occurs primarily in the northern
hemisphere, as most land mass is in the north. Permafrost is a layer of
frozen ground that pretty much never melts from season to season.
Methane is trapped in the frost as bubbles, and in the micro-organisms
in the frozen soil; when unfrozen, these will decay and release
methane and other C compounds back into the atmosphere, and similarly
the bubbles. [Video of igniting and exploding methane previously
trapped in permafrost.] We're already seeing permafrost melting
rapidly in most of the world. And this is just the surface; far more is
buried deep under the surface, trapped there while frozen. There is twice as
much C buried in permafrost as there is in the entire
atmosphere right now. If we melt all the permafrost and put all that
C back into the atmosphere, there would be a serious negative impact on
our climate: a positive feedback loop, where melting and C release
increase heat in the atmosphere, causing more permafrost to melt.
We're not so sure how much C will be released when it melts.
It's so recent a phenomenon that we've not got good measures of it yet.
Organisms decay at different rates; some may not decay, and some
might re-activate and start sequestering C again.
Wet peat can sequester more C than it emits,
so it could end up overall more sequestering than releasing.
The worst-case scenario is the Clathrate Gun, or Clathrate Cannon:
a hypothesis that methane from permafrost will be released
and, once released, can never be sequestered back, because the
offsets from restored peatlands will not be enough to resequester it.
So we may already have set something off; we won't know
until it's too late. There may be clathrate methane released from clathrate
mounds at the bottom of the ocean, under the continental
shelves, and that is something we cannot deal with. But as regards
peatlands, we should now be ensuring we can sequester as much C as possible;
we can help there, mitigating against further peatland permafrost melting.
Peat is a good source of fuel; it's C rich. People living in upland
regions, harsh environments, historically extracted it for fuel.
Then come the unintended consequences that so often come along with
environmental issues. To extract peat you need to drain it completely,
and the C stock changes. Change one thing and there will be a series of
knock-on effects. Lowering the water table damages the sphagnum moss:
it cannot grow, there is no peat accumulation, and there is total loss
of the peatland habitat and of its species. Instead of individual
crofters, the issue now is commercial extraction, less frequent but with
a larger effect, sometimes for fuel but more often for horticulture.
So drains are started on the sides, containing a raised hummock of
peat. Eventually the sphagnum stops growing as the water table is lowered,
and C starts to be released. Forestry can take over peatlands, as the
land is very flat and easy to work on. Trees are very good at taking up
water, lowering the water table until peat growth stops totally.
Many thought planting trees in such places was very "green", but it
ended up losing the unique habitat of peatland.
For agriculture it is again an ideal landscape, for the likes of rice
growing, which needs flat, waterlogged land. The worst comes from
livestock farming: the animals graze, and will easily over-graze and
trample the key sphagnum layer on the top. The natural slight undulations
flatten out and lose structure, the water table lowers, trees and
shrubs are encouraged to grow, and again the peatland is lost.
Grouse shooting has had large cultural and economic effects;
a large part of the upland economy comes from grouse shooting.
To raise grouse requires a very specific management regime, involving
drainage, fires, and sometimes killing off species that prey on the grouse.
Again you tend to lose peatland and get a heather-dominated landscape.
Wildfires: benefits and problems.
Fire is part of the natural ecosystem, and causes a successional
change in vegetation. This can be a benefit, but it also encourages the
growth of small shrubs and heather that are much more flammable than
sphagnum; try to set fire to a wet sponge. There are threats
particularly in Malaysia, Indonesia and Peru, where peatlands are often
near large urban areas, so when they burn the air pollution is very
harmful to humans. Nearer home, Saddleworth Moor near Manchester burnt
last year. This highlighted the deposition of metals in heathland:
when it burns, the metals and toxins are released into the air, linked
to respiratory issues for the young and the old particularly.
Wildfires are predicted to increase under climate change; we will
see more Saddleworth Moor events. As for the effects on peatlands from
climate change: increase the temperature of the atmosphere and the risk
of wildfires and peatland loss increases, therefore more C enters the
atmosphere and there is more warming, positive feedback again.
Peatbogs are more vulnerable to drying than to warming.
However, with climate change, precipitation in the UK is likely
to increase in most areas, so in the UK that would be beneficial,
but in other areas like Indonesia it is likely to be detrimental.
Most upcoming effects of climate change will be negative.
One threat is invasive species. If the temperature does change, it allows
the climatic range of species to encroach. Sphagnum moss is what we want
to retain, but it is easily out-competed by other species, which
reproduce quickly, spread quickly and are more likely to use the new
resources that may come with climate change, while peatland vegetation
will struggle to adapt. Things like rhododendron will quickly invade.
Humans laying down pylon lines across heathlands create a good nursery
for invasive species; they grow to a point where they can then spread
out. People inadvertently carrying spores and seeds on themselves,
and even on domestic pets, is an issue.
A lot has been done, and a lot of research, because peatlands are so
important but so vulnerable. They are so sensitive to change; once
humans start meddling, there are unintended consequences.
With moss layer transfer, you take some acrotelm and sphagnum
and move it onto areas to start the peat production again. Things like
peat mosaics will start it off, if you can convince a farmer
to create such stepping stones for species to move across.
Long strips of linear habitat within agriculture also
improve connectance and reduce isolation effects.
The counter, though, is that if you improve the connectance you
increase the chance of wildfire and invasive species spreading.
So management plans can have negative consequences.
Removing trees where they start to invade, and improving the
water table, are important and will help restore dried-out peatland.
Fires do break a dominance that can grow, and give other species
a chance, though we don't know what level of fire is natural
in origin. A simple remedy is just blocking drains.
Walking by humans, and vehicle traffic, can be managed.
Legislation varies between countries, but protection of heathlands
is not the number 1 priority. In Ireland there's no specific law to
protect peatland. In Indonesia in 2017, protection legislation was shot
down by the supreme court because of its effects on local livelihoods.
As so often, there are 2 sides to these stories.
Some of the things you can find in peat bogs: Tollund Man, found in
1950, who lived in 4th-century BC Denmark. His preservation is superb,
due to phenolic compounds. Bog butter: people in Ireland used to bury
their butter in peatland to preserve it; it is sometimes found and dug
out, and still tastes like butter. Plastic is preserved in peatland,
including microplastic, because it's ever present now. Mammoths:
the best chance of cloning a live mammal from the extinct past, as
a full genome is so well preserved in the body.
On clathrate analysis: with the melting of permafrost, methane would
be released. Some tens of millions of years ago, the Earth was a lot
warmer than today. You did mention it was irreversible, but with the Earth
hotter at one point in time, if there is methane in permafrost now,
doesn't that mean it is reversible? And if so, how severe would it be
if it does continue to melt?
It's reversible on a geological timescale, but climate change only really
started in the 20th century. If permafrost melts now, we more than
double the amount of C released into the atmosphere.
As for whether it is reversible, there is an interesting argument about
whether we can stop it from happening at all. The world was very
different 10 million years ago; it was very different 3 million years
ago. The current ppm of CO2 has just reached the level it was at
3 million years ago, but the Earth was very different then.
Continue with C release and Earth will become something we will not
recognise. If permafrost melts, it will go beyond the current situation.
The IPCC RCP scenarios make no allowance for future permafrost melt and
the extra C release; they only consider human-released C. So the
worst-case scenario would be the entire equatorial belt of countries
being uninhabitable, and mass extinctions. By the time we could possibly
reverse it all, it would be too late.
We must keep an eye on it, but the fingerprint of it will only turn up
after it has started. Scientists go out there and test the bubbles for
methane. Peatland fires in Russia released such methane, and the fires
burnt for many years, because there was so much gas beneath them.
We should be looking at peatlands for indicators, if we don't change
our ways. There was a period of 5 to 7 degrees C of warming during the
mid-Pleistocene, about 800,000 years ago. That has been hypothesised
to have been caused by this clathrate release. So we've seen it before,
but it doesn't really matter whether it's reversible, because in that
earlier period something like 70% of terrestrial life was wiped out
due to that temperature increase. So we want to prevent it rather than
reverse it in the first place.
For the marine deep deposits of clathrates, are they normally kept there
by both temperature and pressure?
It's the calcification depth. The threat would be sea levels decreasing,
which will not be happening.
Are methane hydrates a different process?
Methane hydrates are more like trapped gas, but clathrates are solid;
they then dissolve and release gas. In the mid-Pleistocene it was
changes in sea level that caused the release: sediments on the
continental shelves were disturbed and, as they fell, they shifted
the deep ocean.
With the tephra layers, do they spread far and evenly enough for
you to sequence them like dendrochronology?
That's what we use them for, though some eruptions don't go far
enough. You have to account for the fact that the eruptions that
do turn up tend to be the very large ones. Sometimes the layers are so
thin that they are missed, in cores where you should be getting tephra;
that particular area could have been protected by a wind shadow.
For pollen analysis, how do you tell the difference between, say, one
pine tree 10m away depositing pollen and 10 acres of pine trees 10 miles
away? And can't pollen generally travel hundreds of miles?
Some species have a longer dispersal distance than others. By using
pollen whose origin they know, they work out what percentage of each
pollen type can be found in a core. Some pollen, from grasses and
graminoids, can travel very far. They know that if there is more than
5% of that pollen, it indicates long-distance dispersal rather
than immediate, local dispersal. They can apply arbitrary thresholds
for whether something has been dispersed far or is just local, as a
reference.
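A minimal sketch of that 5% rule, with made-up taxa and counts (which taxa count as long-distance dispersers here is an assumption for illustration, not the speaker's list):

```python
# Flag pollen taxa known to disperse over long distances; if they exceed
# ~5% of the pollen sum, read the sample as reflecting long-distance
# input rather than purely local vegetation. Illustrative values only.

LONG_DISTANCE_TAXA = {"Poaceae", "Pinus"}  # assumed far-travelling taxa

def long_distance_fraction(counts):
    """Fraction of total grains from long-distance-dispersing taxa."""
    total = sum(counts.values())
    far = sum(n for taxon, n in counts.items() if taxon in LONG_DISTANCE_TAXA)
    return far / total

# Hypothetical counts from one core sample (200 grains in total):
sample = {"Poaceae": 12, "Quercus": 80, "Corylus": 60, "Betula": 48}
frac = long_distance_fraction(sample)
print(f"{frac:.1%}")  # 12/200 = 6.0% -> above the ~5% threshold
```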
What is generally the state of peatlands in the New Forest, with all
the cattle, horses and people trampling around?
Historically I think the biggest effect would have been the rearing of
agricultural livestock. Nowadays they are stringent about what
is allowed on the land.
About 5 or 6 years ago the commission had a change of policy.
They started actively blocking some of the drains, certainly in the
south of the forest, so the ground could be rewetted, not for any
agricultural reason: it has a greater conservation value and
significance as bog or semi-natural bog than it does as very
poor-quality grazing land.
Would you know what is happening in the very north, in Caithness?
After the war there was a programme to introduce fast-growing
conifers, big plantations of Sitka spruce, and lots of drainage went on.
Not specifically. I can't believe it would be good.
Basically juniper tundra, and when the wind got up, they just all fell over?
Unintended consequence. If people want timber, they'll plant.
You're not aware of any programmes to re-purpose it?
There are always vested interests involved.
Are you aware of our local mystery of the bog bodies found at Fawley?
When Fawley power station was built in the 50s/60s, men in white coats
took away the bodies. The official record has it that they were bits of
old wood, not bog bodies. The experienced groundwork crew, fully
familiar with bits of old wood, recognised the leathery appearance.
Someone contacted the "Ministry", and instead of the men in white coats
dumping the so-called wood on the site bonfire, they took whatever it
was away, and nothing more was heard.
(Ironically) In my experience you typically wear white coats to remove
bits of wood. It may have been considered a bio-hazard.
It would be possible that some experts would come along and take
away just pieces of wood for some further analysis, dendrochronology or
the like?
Yes, entirely possible.
There's quite an extensive layer of peat under a lot of the
Southampton area, deposited in Mesolithic sort of times, at the Bouldnor
Cliff sort of ground level of human occupancy, now well under water in
the Solent?
Please make emails plain text only , no more than 5KByte or 500 words.
Anyone sending larger texts or attachments such as digital signatures, pictures etc will have
them automatically deleted on the server. I will be totally unaware of this, all your email will be deleted - sorry, again
blame the spammers. If you suspect problems emailing me then please try using
keyword for searchengines , scicafshadow, scicafsoton, Southampton Science Café, Café Scientifique, scicaf, scicaf1, scicaf2
, free talks, open talks, free lectures, open lectures ,