Gastrophysics is changing the way we understand food

A staple ingredient of many science-fiction movies is the ‘food pill’: a small tablet containing all of humanity’s daily nutritional needs. Whilst not light years from reality, this glimpse of the future fails to acknowledge the important social benefits humans derive from food and communal dining. But food alone is thought to be only a small, if central, part of what makes up a fabulous meal. Chefs and scientists (not a mutually exclusive bunch) are increasingly blending science, technology and gastronomy to stimulate all the senses in an effort to produce the greatest dining experience humanity has ever known.

The Provençal Rosé paradox, according to Charles Spence of Oxford University, describes the unwelcome magic trick whereby that delightful bottle of wine sampled on holiday has seemingly turned to vinegar when opened at home. The wine may be the same, but the relaxed mind and sparkling company may not have survived the transit. The difference lies in how the brain absorbs and interprets information from all five senses. Multi-sensory perception, as it is known, is becoming better understood and exploited, particularly the relationship between taste and smell. These two senses, compared with the others, are filtered to a lesser degree on the way to the limbic system, the part of the brain that processes memory and emotion. Foodies are excited, none more so than Andoni Aduriz, a chef who holds two Michelin stars at Mugaritz restaurant in San Sebastián, Spain: “in every corner of the world food is becoming a priority for research, innovation and creativity”, he says.

Diners’ emotions are being manipulated in artful ways. At The Fat Duck, Heston Blumenthal’s three-Michelin-starred restaurant in Britain, reminder cards delivered a month prior to a reservation are scented with the same oil contained in the wooden door frame through which a diner passes on the big day. Likewise, bags of sweets to take home repeat flavours experienced at the table for weeks after (or days, depending on the diner’s sweet tooth). Both are subtle ways of elongating and elevating the meal in the diner’s memory. Heavy cutlery is also perceived to herald more sophisticated food. (Concorde eschewed the fuel-saving properties of light cutlery for this reason.) And when food was laid out like Wassily Kandinsky’s Painting No. 201 in a recent study, diners preferred the meal (and were prepared to pay more for it) to a plate of identical but ordinarily presented ingredients. In its hunger for information, the brain can be seduced by the senses to alter flavours, experiences and memories. As Professor Barry Smith, director of the Institute of Philosophy and the Centre for the Study of the Senses at the University of London, advises: “if you don’t like the wine, change the music”.

The implications go wider than the dinner table. Ultimately our five senses are received in the brain as electrical signals, so Professor Adrian Cheok of City University in London has been producing his own. By delivering an electrical current of between 50 Hz and 1 kHz to a sceptical volunteer’s tongue, Mr Cheok can electrically produce the taste sensation of lemon. The possibility exists, then, that in the future humans may transmit taste over the internet. Or, through, say, implanted devices, electrically alter or invent flavours. Children could be encouraged to eat unpleasant-tasting but healthy foods. Diabetics could enjoy sweet foods without a trace of sugar. Or dementia sufferers could be repeatedly drawn back to the present through a memory of their favourite flavour. Food for thought.
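
For the technically curious, here is a minimal sketch, in Python, of what driving such a stimulus might look like in software. Only the 50 Hz–1 kHz band comes from the research described above; the square waveform, the amplitude and the 500 Hz ‘lemon’ setting are illustrative assumptions, not parameters of Mr Cheok’s device.

```python
import numpy as np

def stimulation_waveform(freq_hz, duration_s=1.0, amplitude_ua=40.0,
                         sample_rate_hz=44_100):
    """Synthesise a square-wave stimulation signal.

    Only the 50 Hz-1 kHz frequency band comes from the article; the
    40 uA amplitude and the square shape are illustrative assumptions.
    """
    if not 50 <= freq_hz <= 1_000:
        raise ValueError("frequency outside the reported 50 Hz-1 kHz band")
    t = np.arange(0.0, duration_s, 1.0 / sample_rate_hz)
    return amplitude_ua * np.sign(np.sin(2 * np.pi * freq_hz * t))

# 500 Hz is a placeholder: the frequency-to-taste mapping that evokes
# 'lemon' is not public.
signal = stimulation_waveform(500)
```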

The photo accompanying this post of sun-ripened berry fruits, drops of extra-virgin olive oil, lime and cold beetroot bubbles is courtesy of Mugaritz restaurant and was taken by Jose Luis Lopez de Zubiria.

Gridiron versus gridlock: aspirant American cities

“If you want to go south,” quips an old Miami gag, “drive north”. This somewhat snooty put-down describes northern Florida as a cultural backwater, the usual butt of such jokes being America’s southern states, and is meant to highlight Miami’s status as a beacon of worldly sophistication and dynamism. But with national politics divided more than ever before, many big American cities are as gridlocked as government. Meanwhile, some medium-sized cities are employing a more bipartisan approach and quietly getting on with business: an increasingly attractive prospect for global investors.

Take Jacksonville. Located in north Florida, it is one of the targets of the derisive comments from its larger cousin to the south. With Orlando, home to Disney World, also located in the Sunshine State, it is tough to compete with “Miami and the mouse”, laments Mayor Alvin Brown. But the city has global aspirations, which is one reason why the Jacksonville Jaguars, an American football team, played a league game against the Dallas Cowboys in London on November 9th. The Texan giants were always the smart-money favourites, but the plucky upstarts threatened briefly. Ultimately, though, Dallas prevailed and two Jacksonville players were treated for possible concussion. The city hopes to avoid similar headaches as it competes with the big boys. But even with a billion-dollar-generating port providing 65,000 jobs, inward investment from France, Brazil and elsewhere, and more corporations headquartered in the city than anywhere else in Florida, the difficulty is marketing and educating, according to Mike Breen, director of Jacksonville’s business chamber.

Mr Brown, a Democrat, says the secret is understanding that local benefits flow from cooperative, broad-based and business-friendly politics; what the Brookings Institution, a think tank, calls “networks of metropolitan leaders”. He praises Republican Governor Rick Scott as a “valuable player” and is proud to count Mario Rubio, older brother of Republican Senator Marco, on his team. He believes it partly explains why Jacksonville’s unemployment rate is bang on the national average of 5.8% (Miami’s rate is currently 6.1%).

It’s a similar picture in Denver and Austin, two more global aspirants. Like Jacksonville, these cities hope one day to be the “beachheads” (or gateways, depending on the geography) to the consumer markets of the US interior. Investor confidence and financing flexibility are making second-tier markets in stable economies more attractive, says David Hutchings of Cushman & Wakefield, a property firm. Austin, in particular, is emerging as a technology hub. With a good cultural mix and quality of life, the city “ticks all the boxes” and appeals to fast-growing companies, he says. Michael Hancock, Mayor of Denver, agrees he cannot afford to get bogged down in party politics. He points to Denver’s $4.7 billion transit system as an economic driver that would have come off the rails but for a consensual political approach. International investors, including Japan’s Terumo Corporation, the world’s largest blood-transfusion company, are taking note. Mr Hancock used to perform as Huddles, the mascot of the Denver Broncos, the local American football team. For medium-sized aspirant US cities, perhaps gridiron is the route to the world after all?

Obituary for the Devil

“God is an absentee landlord,” shouted the devil, “I’m here on the ground with my nose in it since the whole thing began!” He always sought to tempt from a position of mischief. But even speaking through Al Pacino man still rejected him. The Gospel according to John called him ‘a murderer from the beginning.’ So when the General Synod of the Church of England replaced references to him with the anodyne ‘evil’ on July 13th 2014, the game was up.

He had been expected to go out every day and deliver the pain for God. As the CEO of hell, Rowan Atkinson had him organising those newly arrived in purgatory into murderers, looters, lawyers and the French (and with no access to toilets: damnation without relief). Always he was looked to as the administrator of the worst of humanity and was hated for it. Why? It’s just the job he got stuck with, Kent Anderson observed.

He led the evil empire long before Ronald Reagan had heard the term. And he became more than simply Old Nick. He became an idea. Invoked to express human opposition and to characterise human enemies, Elaine Pagels saw him as the interpretation of human conflict and a standing puzzle in the history of religion. But was this too much responsibility, even for a fallen angel? Is not the inconvenient truth that man needed the devil, to explain the worst in himself and eschew responsibility?

God wrote to man: “You invented terms like ‘just wars’ and ‘friendly fire.’ And it was you that didn’t know when to stop digging deeper and when to stop building higher.” But man had stopped listening. He preferred to blame the father of lies, Dante’s ill worm that pierces the world’s core. It hadn’t always been this way. For much of his early career the devil was the servant of God. He was necessary. He tested the fortitude of Job. The Hebrew Bible and the only story in the New Testament in which the devil got a look-in – the Temptation of Christ – contained the sense of a dilemma: if God is omnipotent, where does evil come from? Neither Christianity nor Judaism has ever supplied an adequate answer.

The second and third centuries AD were the heyday for apocryphal stories about the devil. The Manichean heresy, built on pagan and Christian gnostic beliefs, argued for the existence of good and evil forces. The two are in permanent conflict, with the human world of flesh and sexuality entirely governed by the force for evil. But the heresy held that trapped within man is a spiritual element, meaning he belongs also to another world, of truth. The purpose of human existence, then, was to escape from the dark to the light. Augustine began as a Manichean but later rejected the doctrine, limiting the role of the devil and claiming that evil is simply the absence of good.

The devil fought for his place in the human soul. The Black Death led to an existential crisis for Christianity. If priest and pauper alike were dying, what could be said for the omnipotence of the church? Perhaps there was another force, as powerful as, or more powerful than, God. In Robert Frost’s opinion, we dance around in a ring and suppose, but the secret sits in the middle and knows. Was the secret God or another? But it was the Reformation that saw the devil really up his game. The idea took root that the devil was out to destroy God. This was a radical departure from the somewhat benign temptations he had placed in front of Christ: appealing for a demonstration of supernatural powers was an attempt to draw Christ away from being who he was; violence and torture had not entered the script. But it suited the late-medieval church to promote the devil, and his alliance with humans through the cult of witchcraft, as a way of explaining its own internal crises. The church split. The opposite was demonised. Man separated from God.

Milton’s Paradise Lost, published in 1667, was another boost. Matthew 7 had set the scene. By describing evil as coming out of a person, rather than going in, it suggested that the evil the Lord’s Prayer protected man from was that which he was capable of, not that which might assail him from outside. The description of man as an individual, separate from God and responsible for his own actions, built on this doubt. Such a psychological journey into the soul of modern Europe appealed to the Enlightenment, and just as Eve had been tempted by the devil in the garden of Eden when she was on her own, separated from Adam, so the devil saw his chance again. The whispers were listened to: the church can no longer protect you; you are on your own.

It helped that intellectuals were slow to deny the existence of devils and witches. Intricately linked to other spiritual beings such as angels and, indeed, God, to deny the existence of one was to deny the existence of the others. Better, then, not to deny. D.H. Lawrence opined that devils belong to man; he must accept them and be at peace with them. But if, as Philip Almond says, the devil objectifies the often incomprehensible evil that lies within us and around us, are not God and the devil mutually dependent? If the church now teaches that the devil was only symbolic, cannot the same be said for God? The devil would love that idea. It may be a Pyrrhic victory, but by denying the existence of the devil, perhaps man has also killed God. The greatest trick the devil ever pulled was convincing the world he didn’t exist. And like that…he was gone.

The picture above is a detail from a 16th-century painting by Jacob de Backer in the National Museum in Warsaw.

Crime prediction and detection: exploiting mobile phones and Twitter

How far should one’s front door be from the road to minimise the risk of burglary? Aviva, an insurance company, says this “Goldilocks distance” (not too near, not too far) is six metres. Any less may invite opportunistic vandalism; any more and determined ne’er-do-wells could get up to mischief unseen. It is one example of how big data are increasingly being used to predict events in the real world, but there are others. Three recent developments show that targeted analysis of data, and developments in mobile communications and social media, offer crime-busting opportunities too.

In a paper released at the International Conference on Multimodal Interaction (ICMI) in Istanbul on November 15th, a team from the University of Trento in Italy and Telefonica Digital, a Spanish telecommunications firm, used mobile-phone data to predict future crime locations. The research originated from the Datathon for Social Good, a public competition kicked off in 2013 by Telefonica Digital, the Open Data Institute and MIT. With London as the test-case city, three sets of data were made available to participants, each collected over three separate weeks between December 2013 and January 2014.

The first data set consisted of anonymised and aggregated mobile phone data from O2 users, broken down by gender, age groups (from ‘up to 20’ to ‘over 60’) and whether the person was a resident, worker or visitor to that cell tower location. But this information was not used to track individual phones. Instead it highlighted the “social churn” of population movement over time, says Alex Pentland of MIT, who sponsored the work. The second batch of data contained 68 different metrics about the population, such as statistics on demographics, business survival, jobs density and teenage pregnancies. Finally, all reported crimes by location were recorded, categorised as one of 11 possible types (anti-social behaviour, burglary, violent crime etc.).

The three sets of data were grouped geographically and mapped to mobile phone tower coverage. The high population density in central London meant these areas were as small as 200 square metres in some places. In total there were nearly 125,000 such areas, full of juicy data for a clever algorithm to munch through.
The machine learning algorithm in question was let loose on a training data set consisting of 80% of the total information available. It used various combinations of the mobile phone and population data to best account for the reported crimes. These combinations (numbering many thousands) were reduced to the few dozen reckoned to be the most important for accurately predicting future crime locations. When a second test was conducted against the remaining data, the programme achieved an impressive 70% accuracy rate.
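
The article does not name the algorithm or the winning feature combinations, so the Python sketch below is only a stand-in that mirrors the procedure described: per-area features, an 80/20 train-test split and an accuracy score on the held-out portion. The random forest and the synthetic data are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in data: one row per cell-tower area, with invented feature
# columns in the spirit of the mobile-phone and census metrics above,
# and a binary label marking whether the area saw high reported crime.
n_areas = 10_000
X = rng.random((n_areas, 12))   # e.g. footfall churn, jobs density...
y = (X[:, 0] + X[:, 3] + rng.normal(0, 0.3, n_areas) > 1.0).astype(int)

# 80% of areas for training, the remainder held out for testing,
# mirroring the split described in the article.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.8, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print(f"held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```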

But is such a system practical? Expecting mobile phone service providers to allow law enforcement agencies unfettered access to their data is unlikely to get a good hearing in the court of public opinion. And anyway, mapping the movement of people through mobile phones is unnecessary, according to Commander Simon Letchford, the head of the predictive policing system for the real-world London. Crowd-rich environments, attractive to those with criminal intent, are generally easy to spot, he says.
London’s system, on trial until December 2014, is based around three pieces of information: the time, location and type of crime. These are enriched with other metrics where available, such as the type of property burgled or whether transport links bisect housing estates (possibly confining criminal activity to one side or the other). This makes it possible for the machine learning algorithm working through the data to identify those 250 square metre boxes around London of most interest to the bobbies on the streets.
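
As a rough illustration of the geographic bucketing involved, the sketch below assigns incident coordinates to grid boxes. It assumes projected coordinates in metres (as in the British National Grid) and reads the “250 square metre boxes” as boxes 250 metres on a side; both readings are assumptions made for illustration.

```python
from collections import Counter

CELL_M = 250  # box edge length in metres (an assumed reading)

def box_id(easting_m: float, northing_m: float) -> tuple[int, int]:
    """Map a projected coordinate in metres to the grid box containing it."""
    return (int(easting_m // CELL_M), int(northing_m // CELL_M))

# Hypothetical reported incidents: (easting, northing, crime type).
incidents = [
    (530_120.0, 180_440.5, "burglary"),
    (530_180.3, 180_410.9, "anti-social behaviour"),
    (531_900.2, 182_005.7, "violent crime"),
]

counts = Counter(box_id(e, n) for e, n, _ in incidents)
print(counts.most_common(5))   # boxes of most interest to patrols
```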

Commander Letchford believes such methods will nab the “optimal foragers” among the criminal fraternity: those who habitually operate in the same area or commit many crimes over a short period. He is also able to put into practice the theory of the Koper Curve, devised by Christopher Koper of the Centre for Evidence-Based Crime Policy at George Mason University in America. The theory suggests police officers should spend no more than 15 minutes every couple of hours in an area to be most effective. Any longer and the shock factor on local scallywags diminishes rapidly. But clever tech can’t feel collars. Identifying boxes likely to be at risk of high crime is only part of effective policing, explains Commander Letchford: “you still need to be a good cop in that box”.
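
The Koper Curve’s rule of thumb lends itself to a simple scheduling sketch. The one below rotates 15-minute visits through a set of hotspot boxes with a two-hour cooldown before any box is revisited. The box names, the eight-hour shift and the neglect of travel time are all assumptions; none of this reflects the Met’s actual rota.

```python
from datetime import datetime, timedelta

DWELL = timedelta(minutes=15)   # Koper-curve dwell time
COOLDOWN = timedelta(hours=2)   # minimum gap before revisiting a box

def schedule(boxes, shift_start, shift_hours=8):
    """Rotate 15-minute visits through hotspot boxes, never returning
    to a box within the two-hour cooldown. Travel time is ignored."""
    visits, last_seen = [], {}
    t, end = shift_start, shift_start + timedelta(hours=shift_hours)
    while t < end:
        for box in boxes:
            if t - last_seen.get(box, datetime.min) >= COOLDOWN:
                visits.append((t, box))
                last_seen[box] = t
                t += DWELL
                break
        else:
            t += timedelta(minutes=5)   # nothing eligible; wait briefly
    return visits

for when, box in schedule(["box_17", "box_42", "box_63"],
                          datetime(2014, 11, 20, 8))[:6]:
    print(when.strftime("%H:%M"), box)
```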

London’s police are not unique in using big data this way; many American forces are fans too. They agree that criminals, once successful, often literally return to the scene of a crime. Home owners do not usually upgrade security systems in the wake of a burglary, and similarly constructed buildings can be annoyingly easy to break into once you know how. As a spokesperson for PredPol, a California-based company responsible for supplying many of the American systems, warns: “in crime, lightning does strike twice”.

As good as data-crunching super sleuths can be, they are always one step behind reality. But in a rapidly-developing and chaotic situation such as the recent shootings in Ottawa, it is far better to work in the here and now. Even allowing for the fact that eyewitness accounts often contain inaccuracies, social media has shown it can respond faster than official security channels. Which is where the Real-time Detection, Tracking, Monitoring and Interpretation of Events in Social Media (clunkily reduced to ReDites) comes in. A collaboration in Britain between the Engineering and Physical Sciences Research Council (EPSRC), the Defence Science and Technology Laboratory and academia, ReDites analyses 1% of the daily Twitter feed (about four million tweets a day) for pre-selected keywords from specified locations.

Tackling the sheer volume of tweets was worth the effort to tap into a resource tuned to report events immediately. The system is built to “pluck out the nuggets and ignore the false positives”, according to Miles Osborne of the University of Edinburgh. It does so by favouring the repetition of related words in separate tweets over precise information contained in fewer postings. ReDites automatically weeds out the chaff using a purpose-built Violence Detection Model. Words such as ‘gun’, ‘police’ or ‘explosion’ flag up tweets of interest, which Alex Hulkes of the EPSRC calls “adding the clever to the quick”. The system then gives a gist of the breaking news, grouped by location, to the user.
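
The Violence Detection Model itself is not described in any more detail, but the core idea, trusting trigger words only when they recur across several separate tweets from one location, can be sketched briefly. The keyword list and the three-tweet threshold below are assumptions for illustration, not the model’s real vocabulary or tuning.

```python
from collections import defaultdict

# Trigger words of the kind the article mentions; the real model's
# vocabulary and weighting are unknown.
VIOLENCE_TERMS = {"gun", "police", "explosion", "shooting", "attack"}
MIN_DISTINCT_TWEETS = 3   # illustrative threshold

def flag_events(tweets):
    """Group trigger-word tweets by location and flag a location only
    when related words recur across several separate tweets, favouring
    repetition over any single detailed posting."""
    by_location = defaultdict(list)
    for location, text in tweets:
        if set(text.lower().split()) & VIOLENCE_TERMS:
            by_location[location].append(text)
    return {loc: msgs for loc, msgs in by_location.items()
            if len(msgs) >= MIN_DISTINCT_TWEETS}

stream = [
    ("ottawa", "police cars everywhere downtown"),
    ("ottawa", "heard something like an explosion near the hill"),
    ("ottawa", "reports of a gun at the war memorial"),
    ("london", "great gig tonight"),
]
print(flag_events(stream))   # only Ottawa crosses the threshold
```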

Unlike predictive policing systems, ReDites is a concept demonstrator and has yet to be used operationally by a police force. But big data are already leaving clues as to what the future of law enforcement may look like.

Augmented reality: In the eye of the beholder

AS READERS of The Economist will no doubt recall, Tupac Shakur, a rapper, was murdered in 1996. It therefore came as a surprise, to say the least, when he appeared on stage in 2012. Clever visual manipulation it certainly was, but claims that Mr Shakur’s performance was a hologram were overblown. Holography maintains its sense of magic, but significant consumer applications have remained doggedly distant.

The growing interest in what is known as augmented reality (AR) may finally change that. AR takes the world that consumers see and overlays information or interfaces onto it. Just how that will be done is a subject of fervent speculation and sporadic, staggering investment. In October Magic Leap, a company which deals in augmented-reality technology (though no one seems to have figured out just what kind), raised a total of $542m from investors including Google.

One of the problems, though, is that where information is to be added to the field of vision, it invariably interferes. Current AR devices like Google Glass incorporate a small screen in the furniture of the wearer’s glasses. Far from augmenting reality, such devices actually deny wearers some of it. But it may be worse than that; a paper published this week in the Journal of the American Medical Association suggests that Google Glass users actually have worse peripheral vision than those who wear glasses of a similar size. The technology creates blind spots where there were none.

Holography may provide a way out of this conundrum. Holograms are created by sending two laser beams onto a material that can capture an image, such as photographic film. One beam travels directly; the other is reflected off an object to be imaged. The criss-crossing of the two beams creates a pattern that, when itself lit up with ordinary light, projects a faithful 3D image of the object. What makes the idea suitable for AR systems is the fact that the film can be transparent.
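
In textbook terms (the article itself gives no maths), the film records the intensity of the overlapping reference beam E_r and object beam E_o:

```latex
I = |E_r + E_o|^2 = |E_r|^2 + |E_o|^2 + 2\,\mathrm{Re}\!\left(E_r E_o^{*}\right)
```

The cross term is what stores the object beam’s phase; illuminating the recorded pattern later reconstructs the original wavefront, and with it the 3D image.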

TruLife Optics, a London-based firm, in collaboration with Britain’s National Physical Laboratory (NPL), has created a holographic element the size of a postage stamp. They start with a thin, transparent layer of a mix of chemicals familiar from traditional photographic film, including gelatin and silver halides. They create the holographic pattern by firing a laser through it onto a mirror. The incoming and reflected beams interfere in the film and leave a pattern that acts, itself, like a mirror. The firm’s idea is to use two of these transparent holograms, placed at either end of a “waveguide” that looks a bit like a microscope slide. An image projected onto one of them is turned and funnelled along the waveguide. When it hits the second hologram, it is turned again, parallel to the incoming image but shifted along a bit.

Because the whole assembly is transparent, an image projected from, say, the region of the ear in a forward direction could therefore be projected into the eye without interfering at all with vision. But the work needs more focus than that. Existing AR technologies rely on a fixed point of focus, which can lead to fatigue. Simon Hall, of the NPL, has been working with the TruLife team on a way around that. Mr Hall reckons the solution is to use the same kit to bounce harmless infrared light off the eye and collect what comes back. That gives clues as to where the eye is looking and, crucially, at what distance it is focused. The incoming image can then be re-focused to match where, and at what depth, the wearer is looking. Or it could be used to determine that the wearer is looking away, so the image can be turned off altogether. Because, sometimes, you just want to watch Mr Shakur perform.
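
No implementation details are given, so the Python sketch below is purely hypothetical: it loops over a simulated gaze-and-depth reading, refocuses the image to match, and blanks the display when the wearer looks away. Every name in it is invented, and the “sensor” is a stand-in for the infrared measurement described above.

```python
import math

def simulated_gaze(t: float):
    """Stand-in for the infrared measurement: returns whether the wearer
    is looking at the display and the depth (metres) the eye is focused
    at. A real system would infer both from light reflected off the eye."""
    looking = (int(t) % 10) < 8          # looks away 20% of the time
    depth_m = 1.5 + math.sin(t)          # focus wanders around 0.5-2.5 m
    return looking, depth_m

def refocus(depth_m: float) -> str:
    # A real renderer would shift the projected image's focal plane;
    # this placeholder just reports the decision.
    return f"render image focused at {depth_m:.2f} m"

for tick in range(12):
    looking, depth = simulated_gaze(float(tick))
    print(refocus(depth) if looking else "wearer looked away: display off")
```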

This article was published in The Economist on 5th November 2014.