What are you looking at?

Computers may rival human visual analytical ability for the first time

Fans of the Where’s Wally picture books (known as Where’s Waldo in the United States and Canada) have for years searched for their hero with nothing but a keen eye and a Herculean dose of patience. Readers are challenged to locate their man, with his distinctive bobble hat, striped shirt, cane and glasses, while being distracted by other, similar-looking objects. It is a headache-inducing and frustrating way to spend an afternoon.

Help may be on the way. The results of the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), released on December 10th, show how machines may finally be better at image classification than humans.

The annual competition, championed by scientists from Stanford, Michigan and the University of North Carolina at Chapel Hill, has grown steadily since it was launched with six teams in 2010 (when Princeton and Columbia were participants). It attracts global interest and has become the benchmark for object detection and classification.

This year there were 70 teams, from Microsoft, Google, research laboratories, student groups and other companies and academic institutions. They were provided with a publicly available image dataset on which to develop categorical object-recognition algorithms. The fully trained and beefed-up algorithms were then let loose in November on the two elements of the competition itself: detection, and classification with localisation.

To score a point in the detection task, teams had to label objects accurately, each within a bounding box, across 51,294 images (each containing multiple objects) grouped into 200 categories. They were then allowed five guesses at the classification and localisation of objects in 150,000 images across 1,000 categories. The classification in the first test needed only to be generic: fish, car, airplane and so on. In the second test the classification was much more stringent: there were, for example, 189 breeds of dog to choose from to earn a point.
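
For the technically minded, here is a minimal sketch in Python of how the five-guesses rule translates into a top-5 error rate. The class names and predictions are invented for illustration; the real evaluation runs over the full test set.

```python
# Minimal sketch of the "five guesses" rule as a top-5 error rate. The
# class names and predictions below are invented for illustration.

def top5_error(predictions, ground_truth):
    """predictions: five ranked guesses per image; ground_truth: true labels."""
    misses = sum(1 for guesses, truth in zip(predictions, ground_truth)
                 if truth not in guesses[:5])
    return misses / len(ground_truth)

preds = [
    ["border collie", "kelpie", "collie", "shetland sheepdog", "groenendael"],
    ["sports car", "convertible", "racer", "cab", "beach wagon"],
]
truth = ["collie", "minivan"]

print(top5_error(preds, truth))  # 0.5: the second image's label is in none of the five guesses
```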

Every team used some variant of a deep neural network. These information-processing models, based on the principles of biological nervous systems, aim to derive or predict meaning from incomplete data (in this case, images). Each network comprises layers of highly interconnected processing elements. In previous iterations of the competition, teams had rarely used more than about 20 hidden layers in their algorithms. But this year the winning team, Microsoft Research Asia (MSRA), used 152 layers, each one slightly transforming the representation produced by the layer before.

Generally these networks are arranged in layers of artificial neurons, or nodes. Adding more layers to a network increases its ability to handle higher-order problems. For instance, a small number of layers may be able to recognise spheres; later layers may then be able to ascertain that these are green or orange spheres; and further layers may decide that these are in fact apples and oranges. Perhaps still more layers could work out that we are looking at a fruit bowl. There is therefore a huge advantage in having more layers when complex tasks need to be performed. The trouble is that these ‘deeper’ networks become rapidly more difficult to train, as the space of possible configurations becomes so vast. A point is reached where accuracy degrades as additional layers are added.
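
To make the idea of stacked layers concrete, here is a toy sketch in Python using numpy. The sizes and random weights are arbitrary stand-ins, nothing like the convolutional networks the teams actually trained.

```python
import numpy as np

# Toy stack of layers: each one slightly transforms the representation
# produced by the layer before it. Sizes and weights here are arbitrary
# stand-ins; real competition entries are convolutional networks trained
# on millions of labelled images.

rng = np.random.default_rng(0)
width, depth = 16, 8

def layer(x, w, b):
    return np.maximum(0.0, w @ x + b)  # linear map followed by a ReLU non-linearity

x = rng.standard_normal(width)  # stand-in for features extracted from an image
params = [(0.1 * rng.standard_normal((width, width)), np.zeros(width))
          for _ in range(depth)]

for w, b in params:
    x = layer(x, w, b)  # each pass builds on the previous layer's output

print(x.shape)  # (16,): the representation after eight successive layers
```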

What MSRA seems to have identified is that some parts of the image recognition task inherently require a different number of layers than others. If the network has successfully learnt a feature then adding more layers thereafter just dilutes the answer and gets in the way.

To get round this problem MSRA provided shortcuts: connections that can skip across layers that may be redundant for the particular image being analysed. This allows the network’s depth to be changed, in effect, dynamically. A side effect seems to be that the number of layers can be greatly increased before hitting the limit of the network’s ability to learn, which is when everything goes a bit bonkers. That is how they managed to scale up to 152 layers.
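
Here is a minimal sketch of the shortcut idea, again with arbitrary random weights rather than anything resembling MSRA’s trained model: each block’s output is its input plus a learned correction, so a redundant block can simply wave its input through.

```python
import numpy as np

# Sketch of a shortcut ("residual") connection: a block's output is its
# input plus a learned correction, y = x + F(x). If a block has nothing
# useful to add, F(x) can shrink towards zero and the input passes through
# almost untouched. Weights here are random placeholders, not MSRA's
# trained parameters.

rng = np.random.default_rng(1)
width = 16

def residual_block(x, w1, b1, w2, b2):
    f = np.maximum(0.0, w1 @ x + b1)  # first transformation plus ReLU
    f = w2 @ f + b2                   # second transformation
    return np.maximum(0.0, x + f)     # the shortcut: add the input back in

def random_params():
    return (0.05 * rng.standard_normal((width, width)), np.zeros(width),
            0.05 * rng.standard_normal((width, width)), np.zeros(width))

x = rng.standard_normal(width)
for _ in range(76):  # 76 two-layer blocks stacked: roughly a 152-layer network
    x = residual_block(x, *random_params())

print(x.shape)  # (16,): the signal survives a very deep stack
```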

So the trick seems to be not just increasing the number of layers, but also controlling the resultant complexity by using shortcuts. As Assistant Professor Alex Berg of UNC Chapel Hill says: “MSRA had to develop new techniques for managing the complexity of optimising so many layers”.

The results were unequivocal. In the detection test, MSRA won 194 of the 200 categories, with a mean average precision (mAP) of 62%. This was a whopping 40% increase on the mAP achieved by the winner in 2014.
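
For readers wondering what mean average precision actually measures, the rough sketch below computes an average precision score per category from a ranked list of detections and then averages across categories. The detections are made up, and the bounding-box overlap test used in the real evaluation is skipped.

```python
# Rough sketch of the detection metric: an average precision (AP) score is
# computed per category from a ranked list of detections, then averaged
# across categories to give mAP. The detections below are invented.

def average_precision(ranked_hits, num_positives):
    """ranked_hits: detections sorted by confidence; True means a correct hit."""
    true_positives, precisions = 0, []
    for rank, hit in enumerate(ranked_hits, start=1):
        if hit:
            true_positives += 1
            precisions.append(true_positives / rank)  # precision at each correct hit
    return sum(precisions) / num_positives if num_positives else 0.0

per_category = {
    "fish":     average_precision([True, True, False, True], num_positives=4),
    "airplane": average_precision([True, False, False], num_positives=2),
}
mean_ap = sum(per_category.values()) / len(per_category)
print(round(mean_ap, 3))  # 0.594 for these made-up detections
```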

To err is human

In the second test MSRA achieved a classification error rate of 3.5%. That is significant because after the 2014 competition the human error rate, tested against 1,500 images, was estimated to be 5.1%. (At the time the best computer algorithm managed only 6.8% against the same test set of images.)

But computers are not unquestionably better than humanity at image recognition, at least for now. “It is hard to compare human accuracy,” explained Mr Berg, “as computers are not distracted by other things.” And while they may be better at differentiating between hoary and whistling marmots, they cannot yet understand context. A human would recognise a few barely visible feathers near a hand as very likely belonging to a mostly occluded quill; computers would probably miss such nuance.

The long-term goal of this research is to have computers understand the visual context of the world as humans do. ILSVRC is a step towards that future and more will be learned on December 17th when the winning teams reveal their full methodologies at a workshop in Chile. Whether the test set for next year’s competition will contain red and white bobble hats is not yet known.


Capping the Cards

This post was published on Huffington Post.

Does Canada hold the key to regulating the global payments industry?

The subject of payment regulation, pitting retailers and restaurants against the credit card companies and banks, is a hardy perennial for discussion in legislative bodies the world over. The British government, for example, is currently grappling with how to supervise compliance with the European Union’s Interchange Fee Regulation.

Dry as it may seem, the government’s decisions on this subject could have far-reaching consequences for the people of the UK: when similar provisions were enacted in the United States, the law of unintended consequences meant that, by one calculation, between $1 billion and $3 billion was transferred annually from households to big retailers and their shareholders. In Britain, and in other jurisdictions across the globe, the question is: could the same happen here?

A quick revision. When a purchase is made by debit or credit card there is a risk that the buyer does not have sufficient funds to balance his account, or that the card is stolen and the transaction is simply fraudulent. So that the merchant is not left out of pocket, the card-issuing bank guarantees payment. But in return for carrying the risk of a dodgy sale, the merchant’s bank pays the card-issuing bank a small fee, known as the interchange fee or, popularly if misleadingly, the ‘swipe’ fee. It is usually recouped from the merchant through business banking charges and passed on, in turn, to the consumer through higher prices. So, on the face of it, any lowering of the interchange fee would be passed through to consumers. That may be the theory, but it did not happen in the United States, where the ‘savings’ were soaked up on the way through the system to the price tag.

Britain’s Chancellor of the Exchequer, George Osborne, said he expects businesses to pass on any savings to consumers in the form of lower prices. There were almost 10.7 billion credit and debit card transactions in Britain in 2013, and the British Retail Consortium estimates the caps could save British businesses up to £480m a year. The government proposes implementing a 0.30% cap on domestic credit card fees and an average 0.20% cap on domestic debit card transactions from December 9th this year.
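
As a back-of-envelope illustration of what the caps mean for a single transaction, consider the sketch below. The pre-cap rate is an assumed figure, chosen purely for illustration; actual UK rates varied by card type and network.

```python
# Back-of-envelope illustration of the proposed caps. The 0.8% pre-cap
# rate is a hypothetical figure chosen purely for illustration; actual UK
# rates varied by card type and network.

CREDIT_CAP = 0.0030       # proposed 0.30% cap on domestic credit card fees
DEBIT_CAP = 0.0020        # proposed average 0.20% cap on debit transactions
ASSUMED_PRE_CAP = 0.0080  # hypothetical pre-cap credit interchange rate

purchase = 100.00  # a £100 credit card purchase
fee_before = purchase * ASSUMED_PRE_CAP
fee_after = purchase * CREDIT_CAP
print(f"fee before: £{fee_before:.2f}, after: £{fee_after:.2f}")
# fee before: £0.80, after: £0.30; a 50p saving per £100 that may, or
# may not, reach the consumer as a lower price.
```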

As merchants cannot charge different prices for cash, credit or debit payments, and obviously price in the interchange fee, consumers who pay in cash (for example those on fixed incomes, such as the retired) are in effect paying a hidden fee. So reducing interchange fees as far as possible makes sense. Or does it?

Regulation can be a blunt tool, and banks do not like being bashed; losses are normally recouped elsewhere. For example, in 2009 banks provided 76% of America’s current accounts free of charge. After interchange fees were capped, that figure had halved by 2013. A lower interchange fee could likewise see banks recovering costs elsewhere, such as through higher annual card fees or fewer benefits.

So perhaps an interchange fee at a slightly higher rate could encourage the banks to play fair, whilst not squeezing consumers too much. The Payment Systems Board of the Reserve Bank of Australia is considering a reduction in the current weighted average of 0.5% to either a hard cap, or a lower weighted average, for implementation in 2016.

A rather elegant solution has recently been adopted in Canada. MasterCard and Visa have enacted a voluntary deal to cut the average interchange fee to 1.5% (an effective drop of 10%). This was enough to encourage the card networks to step up, but not so deep a cut that the banks felt the need to claw back revenue from other areas, such as by ending free banking or increasing overdraft fees.

The Canadian government was not keen to enforce regulation because a “gutting of interchange would lead to a gutting of the rewards programmes, and no government wants to hear ‘hey kids, the government has cancelled our trip to Paris,’” says Dan Kelly, President and CEO of the Canadian Federation of Independent Business.

Joe Oliver, Canada’s former minister of finance, said that the industry-led solution balances merchants’ need for rate reductions and rate predictability with the sector’s ability to continue providing the rewards and benefits associated with credit cards that consumers have come to enjoy. (Not every Canadian is happy, though. The New Democratic Party says the fees are “excessively high and anti-competitive” and had planned to regulate them had it won power in the general election on October 19th this year.)

The European Commission estimates that interchange fees amount to £1 billion per annum in the UK; the Chancellor should tinker with such a revenue stream guardedly. He says he will set up a Payment Systems Regulator to supervise the interchange fee regulation. Whether this body will have sufficient visibility and power to ensure that consumers enjoy the rewards of a lower fee, without paying for it through other means, is not yet clear. The contrasting examples of America and Canada offer valuable lessons.