Gene genie

In my opinion, the English language is currently missing two words. First, the fear that, having checked into your hotel room, you will pull back the shower curtain in the bathroom – extended for hygiene purposes – and discover a dead body in the bath, courtesy of some horrific Coen Brothers-like murder.

The second missing word describes that mixture of terror and absolute certainty that when you reach into the banana box in Sainsbury’s, you will disturb the massive tarantula that has hitched a lift from Honduras.

I’m not alone in these thoughts, or at least the last one. Sort of. The government announced today that, remarkably, the UK’s National Bee Unit (no, me neither) has for some time been expecting the arrival of Vespa velutina, the Asian hornet, via pot plants, cut flowers or fruit (I knew the stuff wasn’t good for you). Well, the scourge of the bumblebee (pictured) has finally arrived and has been spotted for the first time in the UK, in Tetbury, Gloucestershire.

Nicola Spence, Defra’s Deputy Director for Plant and Bee Health, said: “We have been anticipating the arrival of the Asian hornet for some years and have a well-established protocol in place to eradicate them and control any potential spread”. The government swooped into action like a plague of clichés, established a three-mile surveillance zone around Tetbury and deployed bee inspectors armed with infrared cameras. Nest disposal experts have been put on immediate notice to move.

The plan to destroy any nests and “snuff out the invasion” echoes another debate raging in biodiversity circles at the moment. The Bill and Melinda Gates Foundation has just invested $75 million in a scheme to exterminate mosquitoes. It raises interesting ethical questions about man’s right to eliminate other species, no matter how much harm they may cause.

Malaria kills 400,000 people, mostly children, every year and is only one of many nasty diseases carried by mosquitoes. So far, the most invasive technique genetic engineers have come up with is to try to render the males of one species of mosquito sterile. But this has only been tested in Brazil, a limited area in global terms.

The new technique is called a ‘gene drive’. Most genes have a 50% chance of being passed on to offspring. A gene drive (or gene bomb, as it is sometimes called) will aggressively target the DNA of the host to increase those odds. Stop the females of the species from reproducing and eventually the entire population will die.
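The arithmetic behind a gene drive can be sketched in a few lines. The toy model below (illustrative numbers only, not any lab’s actual figures) tracks the frequency of a driven allele under random mating: ordinary genes pass to offspring half the time, while a drive copies itself so that carriers transmit it at a much higher rate.

```python
def next_freq(p, d=0.95):
    """Allele frequency after one generation of random mating.

    Homozygous carriers (p**2 of the population) always pass the
    allele on; heterozygotes (2*p*(1-p)) pass it with probability d.
    With d = 0.5 this reduces to ordinary Mendelian inheritance and
    the frequency p stays constant; with d > 0.5 the drive spreads.
    """
    return p**2 + 2 * p * (1 - p) * d

# Release carriers into 5% of the population and watch the drive
# sweep towards fixation over 25 generations.
p = 0.05
for generation in range(25):
    p = next_freq(p)
```

With the (invented) 95% transmission rate, the allele goes from 5% of the population to essentially all of it within a couple of dozen generations, which is why a drive carrying a sterility trait can collapse a population so quickly.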

It is a controversial proposal and even Bill Gates admits there’s no regulatory framework to oversee the method. Aside from the ethics of humanity’s right to kill off another species, many worry that the impact on a fragile and complex ecological system is too unpredictable. Mosquitoes are nasty, but they are also food for other animals.

And what about the consideration that the mosquito is only the route through which any disease is passed, rather than the direct cause of any illness? The unintended consequences of wiping out an entire species could outweigh the possible benefit of eradicating disease, especially if such an outcome was not certain.

Any decision to take such drastic action will require concerted international engagement and agreement; tricky at the best of times. And it is only a short biological hop from a gene drive to eradicate disease, to a biological weapon. So don’t expect any bio-tech programmes to roll out across international borders any time soon.

But that won’t stop you wanting to wipe out the lot of them the next time you hear the ‘ziiiiiiiii’ and feel the bite. Just be thankful it wasn’t an Asian Hornet.

What are you looking at?


Computers may rival human visual analytical ability for the first time

Fans of the Where’s Wally picture books (known as Where’s Waldo in the United States and Canada) have for years searched for their hero with nothing but a keen eye and a herculean dose of patience. Readers are challenged to locate their man along with his distinctive bobble hat, striped shirt, cane and glasses, all while being distracted by other similar objects. It is a headache-inducing and frustrating way to spend an afternoon.

Help may be on the way. The results of the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), released on December 10th, show how machines may finally be better at image classification than humans.

The annual competition, championed by scientists from Stanford, Michigan and the University of North Carolina at Chapel Hill, has grown steadily since it was launched with six teams in 2010 (when Princeton and Columbia were participants). It attracts global interest and has become the benchmark for object detection and classification.

This year there were 70 teams, from Microsoft, Google, research laboratories, student groups and other companies and academic institutions. They were provided with a publicly available image dataset, which allowed them to develop categorical object-recognition algorithms. The fully trained and beefed-up algorithms were then let loose in November on the two elements of the competition itself: detection and localisation.

To score a point for detection, the teams had to accurately label, within a bounding box, objects within 51,294 images (each containing multiple objects), grouped in 200 categories. They were then allowed five guesses at the localisation and classification of objects from 150,000 images across 1,000 categories. The classification of the images in the first test needed only to be generic: fish, car, airplane and so on. In the second test the classification was much more stringent: there were 189 breeds of dog to choose from to earn a point, for example.
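The “five guesses” rule is what is usually called a top-5 error metric: an image counts as correct if the true label appears anywhere in the algorithm’s five ranked guesses. A minimal sketch of such scoring (the image names and labels here are invented for illustration):

```python
def top5_error(predictions, truths):
    """Fraction of images where none of the five guesses matched.

    `predictions` maps each image to its ranked guessed labels;
    `truths` maps each image to its ground-truth label. A correct
    label anywhere in the top five counts as a hit.
    """
    misses = sum(1 for img, truth in truths.items()
                 if truth not in predictions[img][:5])
    return misses / len(truths)

preds = {"img1": ["terrier", "spaniel", "beagle", "collie", "pug"],
         "img2": ["car", "truck", "bus", "van", "tram"]}
labels = {"img1": "beagle", "img2": "bicycle"}

rate = top5_error(preds, labels)  # one hit, one miss -> 0.5
```

Scoring this way, rather than demanding the single best guess be right, forgives genuinely ambiguous images (a husky that really does look like a wolf) while still punishing outright misses.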

Every team used some variant of a deep neural network. These information-processing models, based on the principles of biological nervous systems, aim to derive or predict meaning from incomplete data (such as images, as in this case). Each network comprises layers of highly interconnected processing elements. In previous iterations of the competition teams had never used more than 20 hidden layers in their algorithms. But this year, the winning team, Microsoft Research Asia (MSRA), used 152 layers; each one slightly transforming the representation of the layer before.

Generally these networks are arranged in layers of artificial neurons or nodes. Adding more layers to a network increases its ability to handle higher-order problems. For instance, a small number of layers may be able to recognise spheres, later layers may then be able to ascertain that these are green or orange spheres, and further layers may decide that these are in fact apples and oranges. Then perhaps more layers could be added to work out that we were looking at a fruit bowl. As such, there is a huge advantage in having more layers when complex tasks need to be performed. The trouble is that these ‘deeper’ networks become rapidly more difficult to train as the available permutations become so vast. A point is reached where the system’s accuracy degrades when additional layers are added.

What MSRA seems to have identified is that some parts of the image recognition task inherently require a different number of layers than others. If the network has successfully learnt a feature then adding more layers thereafter just dilutes the answer and gets in the way.

To get round this problem, MSRA provided short-cuts: connections that can skip across layers that may be redundant for the particular image being analysed. This has allowed them to build a network whose depth is effectively changed dynamically. A side effect of this seems to be that they can greatly increase the number of layers before hitting the limit of the network’s ability to learn, which is when everything goes a bit bonkers. That’s how they managed to scale up to 152 layers.

So, the trick seems to be not just increasing the number of layers, but also controlling the resultant computing power by using short cuts. As Assistant Professor Alex Berg of UNC Chapel Hill says: “MSRA had to develop new techniques for managing the complexity of optimising so many layers”.

The results were unequivocal. In the detection test, MSRA won 194 of the 200 categories, with a mean average precision (AP) of 62%. This was a whopping 40% increase on the mean AP achieved by the winner in 2014.

To err is human

In the second test MSRA achieved a classification error rate of 3.5%. That is significant because after the 2014 competition, the human error rate, tested against 1,500 images, was estimated to be 5.1%. (At the time the best computer algorithm only managed 6.8% against the same test set of images.)

But computers are not unquestionably better than humanity at image recognition, at least for now. “It is hard to compare human accuracy,” explained Mr Berg, “as computers are not distracted by other things.” And while they may be better at differentiating between hoary and whistling marmots, they cannot, yet, understand context. For instance, a human would recognise a few barely visible feathers near a hand as very likely belonging to a mostly occluded quill; computers would probably miss such nuance.

The long-term goal of this research is to have computers understand the visual context of the world as humans do. ILSVRC is a step towards that future and more will be learned on December 17th when the winning teams reveal their full methodologies at a workshop in Chile. Whether the test set for next year’s competition will contain red and white bobble hats is not yet known.

MH370: Searching for answers

The search for missing Malaysian airliner MH370 began afresh in a new area last year. I wrote about it for The Economist at the time (see here). It has been reported today that debris that may have come from the plane, consisting of a wing part and, possibly, a suitcase, has washed up on Reunion island, thousands of miles away in the Indian Ocean near Madagascar. Ocean current data suggest it is possible the items could have travelled from the area currently being searched since March 8th 2014, when the plane disappeared.

But as I researched the story last year I developed a nagging doubt about the new area to be searched, for two reasons. First, because it was based largely on satellite data, it rested on many assumptions, extrapolations and margins of error. The technology seemed to point to a very small part of a very large ocean and I wondered if it really could be that accurate. As the latest Operational Search Update from the Australian Joint Agency Coordination Centre makes clear, my concerns may not have been unfounded. The search area announced last year with much publicity was doubled in size to 120,000 square kilometres in April.

That is not to suggest the debris that has come ashore in Reunion is not from MH370, just that the plane may have come down in an area that has yet to be searched, and possibly not in the Southern Ocean at all. Which leads to the second cause of my apprehension.

As a document released last June by the Australian Transport Safety Bureau shows (on page 40), two low-frequency hydro-acoustic signals recorded at the likely time of MH370’s demise raise an intriguing question. The two signals came from underwater hydrophones used by a monitoring station listening out for violations of the United Nations Nuclear Weapons Test Ban and by a Marine Observation System. An underwater event occurred that the document states “could be associated with the impact of the aircraft on the water or with the implosion of wreckage as the aircraft sank”. (It also acknowledges that a “small earth tremor” could be responsible.) But because satellite data were believed to offer a more accurate answer, the signals from the two stations – which suggested a crash site in the middle of the Indian Ocean due south of Sri Lanka – were discounted. Dr Alec Duncan, the head of the team investigating the underwater acoustic event, said “the crash of a large aircraft in the ocean would be a high energy event and [would be] expected to generate intense underwater sounds” (images from the team showing the acoustic results and likely point of origin can be seen here).

So, the debris may show that MH370 impacted with the water, which will scotch some of the more outlandish conspiracy theories but be of little consolation to the grieving families. Dr Duncan says that he has many other underwater recorders that may have picked up similar signals to the two already examined. They have yet to be recovered. In the absence of any other evidence as to the location of the missing plane, these additional hydrophone data will be of great interest.

Hitting home

AFTER falling in love with European coffeehouses on his wanderjahr in Germany in 1971, Brian Olson started importing espresso machines to America. Today his three Café Intermezzo outlets in Atlanta employ 150 people and take in $7m a year. But in 2013, after fake credit cards were used in one of his restaurants, his card payment processor started withholding 20% of his revenue in escrow, “holding my business hostage”, he laments. He is now an enthusiastic advocate of payment card security, hoping others avoid the “capricious and arbitrary” treatment he experienced. Changes to America’s payment card environment are imminent, but industry experts warn that without a holistic approach to data security fraudsters will continue to cause misery.

2014 was the worst year on record for data breaches, and payment card security in America lags behind much of the rest of the world. A major improvement will occur later this year with the introduction of EMV, the technology introduced by Europay, MasterCard and Visa to the European payment card ecosystem 20 years ago and known outside America as chip and PIN (personal identification number). To authenticate legitimate card users and protect sensitive card details, EMV uses an embedded smart chip instead of the magnetic strip on the back of the cards, which can easily be cloned.

On October 1st the major credit card brands (MasterCard, Visa, American Express and Discover) are shifting the liability for fraud or data breach to the least secure part of the sales infrastructure. The move, designed to encourage take-up of safer point-of-sale apparatus that can handle EMV payments, is welcome news. But there are 12 million merchants in America, cautions Jeremy King, International Director of the Payment Card Industry Security Standards Council (PCISSC), a cross-industry standards-setting organisation. Not all will be ready by October.

A second issue is that EMV allows for different Cardholder Verification Methods (CVM), some of which are stronger than others. PINs checked online with banks (using the debit rails) are the most robust. But one CVM accepts just a signature, a method open to abuse. (Purchases can also be made with no verification at all, for low-value transactions.) What is more, a transaction using a signature CVM uses the credit rails controlled by the major banks, with an attendant higher interchange fee. Mallory Duncan, Senior Vice President at the National Retail Federation (NRF), says that the interchange fee bonus and a reluctance to spend money investing in chip and PIN infrastructure mean it is in banks’ interests to promote chip and signature as a CVM. This is despite wide acceptance across the industry that chip and PIN is more secure and retailers’ dismay that they are expected to invest in chip and PIN-reading equipment that may never be used. “Fraud flows to the weakest point,” Mr Duncan warns, “and that’s the signature.”

But unlike Europe 20 years ago, America today is an online marketplace. Measures to protect card details transmitted over the internet (such as when an online CVM is used), or held within a retailer’s network (helpful in facilitating future sales and providing other services), are not included in the liability shift. Mr King warns that EMV is not a silver bullet and highlights the need for multi-channel protections.

Despite high-profile attacks like those on Target and Home Depot, around 90% of data breaches hit SMEs, says Charles Hoff, former General Counsel of the Georgia Restaurant Association and now CEO of PCI University, an online platform providing data security education. Credit card companies expect banks and card processors to comply with data security standards set out by the PCISSC. They, in turn, expect merchants to do likewise, bearing the risk of failure to do so. The rules, expressed in the Standard Merchant Contract, state that following a data breach, merchants may have to employ forensic auditors to examine their network and pay fines, chargebacks and card re-issuance penalties in the event of non-compliance. This “cash-flow crunch” can be terminal for SMEs and “within six months of a breach,” says Mr Hoff, “around 60% go out of business.”

Part of the problem is that most small business owners and merchants have the misconception that they are too small to be the target of hackers, says Mr Hoff. As a result, they often feel that they can take their chances and not worry about initiating proper security measures. But after Café Intermezzo was attacked it cost $45,000 per restaurant to beef up security, plus a yearly fee of around $30,000 for bank insurance, consultants’ services and technology maintenance to indemnify against future violations. Many small merchants ask if the exposure is worth the investment. And it is worse for big retailers. Target is spending $100m to change equipment and issue its own chip-based cards following the breach of 40m accounts in 2013.

For a robust and multi-layered approach to payment card security, five things are required. First, chip and PIN should be accepted as the industry standard (with banks absorbing the costs of their infrastructure investment, rather than passing these on to consumers and merchants). Second, as the Primary Account Number (PAN) on a smart card is still transmitted in the clear, it is vulnerable. Point-to-Point Encryption (P2PE) along the transmission routes of a transaction should be implemented so that any data intercepted within the merchant’s point-of-sale apparatus or on the way to the card processor is better protected.

The third must-do is to remove the PAN from any online transaction as soon as possible. The card payment processor should swap the PAN for a token, so as to lessen the threat from any subsequent breach of the encrypted transmission routes or retailers’ systems. Neither this process (known as tokenisation) nor P2PE will impact normal business processes and both should be a basic part of standard card processing packages offered by merchants’ banks and card processors.
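The logic of tokenisation can be sketched in a toy example. This is not any processor’s real scheme (production systems run inside hardened, audited infrastructure), but it shows the key property: the token is random, not derived from the card number, so a breach of the merchant’s systems yields nothing.

```python
import secrets

class TokenVault:
    """Toy token vault: swaps a card PAN for a random token.

    Only the processor holds the vault mapping; the merchant keeps
    and transmits the token alone. The names here are invented for
    illustration.
    """
    def __init__(self):
        self._vault = {}  # token -> PAN, processor-side only

    def tokenise(self, pan):
        token = secrets.token_hex(8)  # random; carries no PAN data
        self._vault[token] = pan
        return token

    def detokenise(self, token):
        # Called by the processor to settle the transaction.
        return self._vault[token]

vault = TokenVault()
token = vault.tokenise("4111111111111111")  # a standard test PAN
# The merchant stores only `token`; without access to the vault,
# a thief who steals it learns nothing about the card number.
```

Because the mapping lives solely with the processor, tokenisation shifts the valuable secret out of the millions of merchant systems and into a handful of defensible ones.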

The fourth element is to educate merchants and consumers and increase awareness of the threat across the whole industry (the NRF is calling for a Data Breach Notification law). Mr Hoff accepts that there is no easy or inexpensive solution and that every participant in the payment card security environment could make a case for someone else paying for P2PE and tokenisation. “It is tricky, but it’s a cost everybody needs to bear.”

Finally, greater industry self-regulation is required to provide a more responsive counter to innovative security threats and avoid the need for legislation. (Congress is always “a dollar too short and a day too late” says Tom Litchford, Vice President of Retail Technology at the NRF.) Initiatives such as Visa’s Technology Innovation and Secure Acceptance Incentive programmes encourage small merchants to employ the most secure and PCI-validated systems, says Ruston Miles, Chief Innovation Officer of Bluefin Payment Systems, the first company to offer PCI-validated P2PE to merchants in America. He hopes MasterCard will follow suit to produce an industry-wide “safe-harbour programme”.

In Café Intermezzo, Mr Olson says PCI compliance is a crucial issue, but he knows many restaurateurs who are unaware of the threat, the potential costs to their businesses or the EMV shift. “It takes a major negative experience to motivate us to do what we should have been doing in the first place,” he says.

Gastrophysics is changing the way we understand food

A staple ingredient of many science fiction movies is the ‘food pill’, a small tablet containing all of humanity’s daily nutritional needs. Whilst not light years from reality, this glimpse of the future fails to acknowledge the important social benefits humans derive from food and communal dining. But food alone is thought to be only a small, if central, part of what makes up a fabulous meal. Chefs and scientists (not a mutually exclusive bunch) are increasingly blending science, technology and gastronomy to stimulate all the senses in an effort to produce the greatest dining experience humanity has ever known.

The Provençal Rosé paradox, according to Charles Spence of Oxford University, describes the unwelcome magic trick whereby that delightful bottle of wine sampled on holiday has seemingly turned to vinegar when opened at home. The wine may be the same, but the relaxed mind and sparkling company may not have survived the transit. It depends on how the brain absorbs and interprets information from all five senses. Multi-sensory perception, as it is known, is becoming better understood and exploited, particularly the relationship between taste and smell. These two senses, compared to the others, are filtered to a lesser degree on the way to the limbic system, the part of the brain processing memory and emotion. Foodies are excited. None more so than Chef Andoni Aduriz, holder of two Michelin stars at Mugaritz restaurant in San Sebastian, Spain: “in every corner of the world food is becoming a priority for research, innovation and creativity”, he says.

Diners’ emotions are being manipulated in artful ways. At The Fat Duck, Heston Blumenthal’s three-Michelin-starred restaurant in Britain, reminder cards delivered a month prior to a reservation are scented with the same oil contained in the wooden door frame through which a diner passes on the big day. Likewise, bags of sweets to take home repeat flavours experienced at the table for weeks after (or days, depending on the diner’s sweet tooth). They are both subtle ways of elongating and elevating the meal in the diner’s memory. Heavy cutlery is also perceived to herald more sophisticated food. (Concorde eschewed the fuel-saving properties of light cutlery for this reason.) And when food was laid out like Wassily Kandinsky’s Painting No. 201 in a recent study, diners preferred the meal (and were prepared to pay more for it) to a plate of identical but ordinarily-presented ingredients. In its hunger for information, the brain can be seduced by the senses to alter flavours, experiences and memories. As Professor Barry Smith, Director of the Institute of Philosophy and Centre for the Study of the Senses at London University, advises: “if you don’t like the wine, change the music”.

The implications go wider than the dinner table. Ultimately our five senses are received in the brain as electrical signals. So Professor Adrian Cheok of City University in London has been producing his own. By delivering an alternating current at frequencies between 50 Hz and 1 kHz to a sceptical volunteer’s tongue, Mr Cheok can electrically produce a taste sensation of lemon. The possibility exists, then, that in the future humans may transmit taste over the internet. Or, through, say, implanted devices, electrically alter or invent flavours. Children could be encouraged to eat unpleasant-tasting but healthy foods. Diabetics could enjoy sweet foods without a trace of sugar. Or dementia sufferers could be repeatedly drawn back to the present through a memory of their favourite flavour. Food for thought.

The photo accompanying this post of sun-ripened berry fruits, drops of extra virgin olive oil, lime and cold beetroot bubbles is courtesy of Mugaritz restaurant and taken by Jose Luis Lopez de Zubiria.