My latest post is now live on Harvard Business Review, discussing why Google, Apple, and others such as Intel and IBM are spending hundreds of millions of dollars on A.I. research and development and patent applications to help us manage our most precious resource, time, through a personal interactive cognitive (robotic) assistant. Read more at HBR.org…
There is also a good article at Inc Magazine (The future of your productivity is in Artificial Intelligence).
Several debates have arisen around robotics. Of course some of this is science fiction, but we are increasingly seeing science fiction become science fact. There is the debate around ‘strong artificial intelligence’ and robots replacing human beings, a notion redefined by Moravec, who predicts that machines will ‘attain human levels of intelligence by the year 2040, and that by 2050, they will surpass us.’
There is the debate around Ray Kurzweil, who forecasts the union of human and machine, in which the knowledge and skills embedded in our brains will be combined with the vastly greater capacity, speed, and knowledge-sharing ability of machines. These two writers and scientists are considered by many to promulgate the highly intelligent anthropomorphic robots of popular culture.
Then there is the debate on the impact of robotics on unemployment in the industrial and service sectors, especially the work of Frey and Osborne (2013; for an excellent write-up see Andrew Flowers’ article), who illustrate the potential impact of automation on almost 50% of current jobs. There is the debate on the complex relationship between technology and employment, such as that of Castells, who charts the social and economic dynamics of the information age and its impact on society as a whole.
There is also the change in human-machine interactions as a result of advanced robotics, as illustrated by Levy (Love and Sex with Robots), and last but by no means least, the debate around ethics and the implications of robotics for the law, so well documented by Ryan Calo (2014).
One thing is clear: robots create a significant divide, whoever you talk to. And although the dates may be slightly off, there is general consensus that the twenty-first century will be the century of robots.
The Economist magazine has a special report on Robotics this week where they state:
“It is quite easy to imagine a future in which “robots” remain an esoteric subject of public fascination even as more and more services are automated with techniques developed in robotics laboratories.”
Whatever we think about robots and their associated technologies, they will increasingly become part and parcel of our everyday life. The discussions around ethics and laws are advancing, and yet the debate rages on around employment and how to ‘fix it.’ This is not new: in his essay Economic Possibilities for our Grandchildren, John Maynard Keynes (1930) predicted what he called “technological unemployment”:
“We are being afflicted with a new disease of which some readers may not yet have heard the name, but of which they will hear a great deal in the years to come – namely, technological unemployment. This means unemployment due to our discovery of means of economizing the use of labor outrunning the pace at which we can find new uses for labor. “
Automation is not a new phenomenon! Society has come through at least five economic revolutions in the last 200 years (Kondratiev wave).
Yes, more and more services will be automated, and yes, there will be job displacement as a result. But shouldn’t the bigger debate be around creating more inclusive societies in which automated and robotic systems contribute to improving people’s lives?
Rumblings have started. Jesse Myerson caused quite a stir with the article “Five Economic Reforms Millennials Should Be Fighting For,” as did Dylan Matthews with his article “Five conservative reforms millennials should be fighting for.” Both argue for better welfare and a basic income guarantee.
However, in their paper Technology, Unemployment & Policy Options: Navigating the Transition to a Better World, the authors write that a basic income guarantee would have a “corrosive effect on the social fabric, would not address the need for people to have a meaningful purpose to their lives, and would likely be politically infeasible in this era of government cut-backs and retrenchment.”
Among the arguments put forward by modern economists such as Tyler Cowen, and mentioned by Dylan Matthews in the piece referenced above, is this one: “to make the government a large institutional investor — basically, to create a government hedge fund, or “sovereign wealth fund.” A number of other countries (Norway and Singapore come to mind) have such funds, as do Texas (which uses oil money to fund public higher education) and Alaska (which dispenses the returns on its fund as a dividend to residents).” Whilst this may work effectively in countries with smaller populations, I am not convinced it will be effective for more populous ones.
Economists also argue that the stability of the labor share of income is a key foundation of macroeconomic models; until this is addressed we will remain in a depressed economy regardless of advances in technology.
How we achieve an inclusive society and get people back to work is probably best summed up by Schumpeter and creative destruction. For Schumpeter, technological opportunities ‘are always present, abundantly accumulated by all sorts of people.’ With the rapid progress in Internet technologies, knowledge resources available through Google and others, online courses in computer sciences and machine learning, rapid prototyping through 3D technologies and a myriad of other support systems, opportunities for entrepreneurs to profitably tap into the pool of usable science and technology are abundant… it is up to each and every one of us to grab those opportunities the automation age presents us with.
Stimulating the brain to learn faster, better. (Kurzweil A.I.)
Artificial Intelligence is the next big tech trend. Here’s why… (Washington Post)
Unemployment in the age of robots. (Scott Adams)
Working with robots, the next business standard, (Wired)
Robots, drones and the uncertain future of work. (Government Technology)
In 1920 the Czech author and playwright Karel Capek introduced the word “robot” in his play R.U.R. (Rossum’s Universal Robots). Robot in Czech means “forced labor” or “drudgery.” (A “robotnik” is a peasant or serf.)
The play opened in Prague in January 1921. The Robots are mass-produced at the island factory of Rossum’s Universal Robots. According to the play, ‘Robots remember everything, and think of nothing new.’ Domin (the factory director) says: ‘They’d make fine university professors.’
Every now and again a Robot will throw down his (they were male in the play) work and start gnashing his teeth. The human managers treat such an event as evidence of a product defect, but Helena [who wants to liberate the Robots] prefers to interpret it as “a sign of the emerging (robotic) soul.”
Capek wrote in a newspaper article in 1935 that he “refuses any responsibility for the thought that machines could take the place of people, or that anything like life, love, or rebellion could ever awaken in their cogwheels.”
James Albus, a leading researcher in robotics, created an economic concept known as People’s Capitalism, in which he imagined: “a world without poverty, a world of prosperity, a world of opportunity, a world without pollution, a world without war, and includes a detailed plan for achievement of these goals.” Albus believed that robots would help companies grow and employ more people. In 1983, for example, he stated:
“There is no historical evidence that rapid productivity growth leads to loss of jobs. In fact quite the contrary; in general, industries that use the most efficient production techniques grow and prosper, and hire more workers. Markets for their products expand and they diversify into new product lines.”
Could this be the case with Amazon? Despite its $775 million cash acquisition of Kiva Systems, a maker of warehouse automation robots, Amazon added 20,000 jobs to its fulfillment centers last year and continues to add more in 2014.
On the other hand, those who argue that the more advanced forms of automation (such as robotics and AI) will cause increasing unemployment have several reasonable arguments on their side. In 1983, the same year Albus was optimistic that automation would not kill off millions of jobs but in fact create many more, Nobel Prize-winning economist Wassily Leontief said:
“We are beginning a gradual process whereby over the next 30-40 years many people will be displaced, creating massive problems of unemployment and dislocation. In the last century, there was an analogous problem with horses. They became unnecessary with the advent of tractors, automobiles, and trucks… So what happened to horses will happen to people.”
We are now at the point where Leontief’s prediction could be said to be coming true. As more and more jobs are performed by inexpensive hardware and software combinations, people who used to be paid for those jobs will have to find new jobs, retrain, or risk being jobless.
In his book Computer Power and Human Reason: From Judgment to Calculation, Joseph Weizenbaum, pioneer of the ELIZA psychotherapist machine, argued that “there is a difference between man and machine, and there are certain tasks which computers ought not be made to do, independent of whether computers can be made to do them.” James Albus believed in this human-machine symbiosis, and that full employment is possible for both humans and robots:
“The problem is not in finding plenty of work for both humans and robots. The problem is in finding mechanisms by which the wealth created by robot technology can be distributed as income to the people who need it. If this were done, markets would explode, demand would increase, and there would be plenty of work for all able-bodied humans, plus as many robots as we could build.”
I personally do not believe we are likely to see “full employment,” certainly not within the next generation. But I do believe that, as individuals, we should take note of what is taking place in the robotics and automation domains. Many jobs WILL be displaced by robotics, and we are each responsible for ensuring our own well-being by a) saving and investing prudently ‘just in case’ and/or b) retraining to stay abreast of the new skills required in this next phase of commercial evolution.
Picture credit: R.U.R. by Karel Capek
For the longest time, people thought that humans could not run a mile in less than four minutes. Then, in 1954, Sir Roger Bannister beat that perception, and shortly thereafter, once he showed it was possible, many other runners were able to achieve this also.
Not long after Sir Roger’s historic achievement, in June 1956 at Dartmouth College in Hanover, New Hampshire, four young scholars (John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon) jointly initiated and organized the Dartmouth Symposium, which lasted for two months. The goal of the Symposium was to simulate human intelligence using a machine.
Four events are frequently cited as outcomes of the Dartmouth Symposium: the neural network simulator demonstrated by Marvin Minsky, the search method proposed by John McCarthy, the “Logic Theorist” presented by Herbert Simon and Allen Newell, and the coining of the term “Artificial Intelligence.”
The work of Simon and Newell towards AI was especially highly regarded, and was considered a major breakthrough in the computer simulation of human intelligence. Just like Bannister’s record-breaking performance that no one thought was possible, Simon and Newell presented a program able ‘to mimic the problem solving skills of a human being.’
The Dartmouth Symposium is often considered the first significant event in AI, and its participants received much recognition: John McCarthy, Allen Newell, and Herbert Simon are all recipients of Turing Awards. Simon was also awarded a Nobel Prize in Economics “for his pioneering research into the decision-making process within economic organizations.”
Herbert Simon, Professor of Computer Science and Psychology at Carnegie Mellon and researcher at the RAND Corporation, wrote a pair of articles that could be considered the seminal articles for the founding paradigm of both Artificial Intelligence (AI) research and Behavioral Economics. The two articles are:
A Behavioral Model of Rational Choice
Rational Choice and the Structure of the Environment (PDF)
These two papers are the foundation of the work that led to Herb Simon’s Nobel Prize in Economics: ‘that omniscient Economic Man, the decision maker, with his immense (assumed) information processing power and prowess, was an implausible fiction.’
Simon proposes a model of the decision maker characterized by limited information processing and information gathering capabilities, ‘who therefore must be satisfied with decisions less than optimal; who uses strategies and tactics of thought (what we now term heuristics) to achieve behaviors that are “good enough.”’ This led to bounded rationality, which, Simon maintains, deals with the limits of “information processing capacities,” an insight he applied both to human intelligence and to his own and others’ work on AI.
Simon’s research on “human problem solving” became the core of a wide-ranging theoretical project in which AI, economics, and cognitive psychology were closely intertwined and led to his discovery that: “Economics is one of the sciences of the artificial.” (Simon, 1976, p. 441)
Simon and Heuristics
Daniel Kahneman, often thought of as the founder of Behavioral Economics and, like Simon, a recipient of the Nobel Prize in Economics, credits Simon’s work on bounded rationality and heuristics (rules of thumb and shortcuts in thinking) as hugely influential on his work with Amos Tversky.
In fact, so dominant was Simon’s concept of the heuristic in AI that in Computer Science and Operations Research, AI was sometimes called “heuristic programming.” See for example this paper by Minsky (Some Methods of Artificial Intelligence and Heuristic Programming) and this article on heuristics in computer science.
The word “heuristic” is derived from the Greek verb heuriskein, meaning “to find” or “to discover.” Archimedes is said to have run naked down the street shouting “Heureka” (I have found it) after discovering the principle of flotation in his bath; later authors rendered this as “Eureka.”
The Logic Theorist program developed by Simon and Newell was “capable of discovering proofs for theorems in elementary symbolic logic, using heuristic techniques similar to those used by humans.” (Newell, Shaw, Simon, 1962, p. 146)
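Heuristic programming, in the early-AI sense, can be sketched with a greedy best-first search: instead of exhaustively enumerating every path, the program always expands the state a heuristic judges closest to the goal. The tiny graph and heuristic values below are invented for illustration; this is a generic sketch of the technique, not a reconstruction of the Logic Theorist itself.

```python
# Greedy best-first search: expand the node the heuristic ranks most promising.
# Graph and heuristic values are invented for this example.
import heapq

graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": ["G"],
    "E": ["G"],
    "G": [],
}
# Heuristic: estimated distance from each node to the goal G
h = {"A": 3, "B": 2, "C": 2, "D": 1, "E": 1, "G": 0}

def best_first_search(start, goal):
    """Return a path from start to goal, guided purely by the heuristic h."""
    frontier = [(h[start], [start])]  # priority queue ordered by heuristic value
    visited = set()
    while frontier:
        _, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt in graph[node]:
            heapq.heappush(frontier, (h[nxt], path + [nxt]))
    return None  # no path exists

print(best_first_search("A", "G"))
```

The heuristic usually gets the search where it needs to go quickly, but, exactly as Kahneman and Tversky observed of human heuristics, it can occasionally send the search off course when its estimates are misleading.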
AI improving irrational thinking and behavior
A core theme of Behavioral Economics is that we act irrationally or make sub-optimal decisions. In Maps of bounded rationality: psychology for behavioral economics, Kahneman points out there is a conflict between the two systems we use for thinking. System 1 (perception and intuition) and System 2 (reasoning) can engender inconsistent preferences: “we cannot take it for granted that preferences that are controlled by emotion of the moment will be internally coherent, or even reasonable by the cooler criteria of reflective reasoning. In other words, the preferences of System 1 are not necessarily consistent with the preferences of System 2.”
James G. March, a long-term collaborator of Herb Simon’s on Organization Theory and Bounded Rationality, writes: “Human beings have unstable, inconsistent, incompletely evoked, and imprecise goals.” (March, 1987, p. 598)
Through AI, machines are gaining in logic and ‘rational’ intelligence and there is no reason to believe that they cannot become smarter than humans. As we use these machines, or Cognitive Assistants, they will nudge us to make better decisions in personal finance, health and generally provide solutions to improve our circumstances.
Bounded Rationality, AI and our modern economy
Herbert Simon stated the principle of bounded rationality thus: “the capacity of the human mind for formulating and solving complex problems is very small compared with the size of the problems whose solution is required for objectively rational behavior in the real world.”
Simon’s work has had a significant impact on the economy and AI is becoming more and more available throughout our world to solve real problems.
Google’s search uses AI and bounded rationality; as Peter Norvig, Director of Research at Google, has written, Simon’s work on AI and bounded rationality “led to the establishment of search algorithms as perhaps the primary tools in the armory of early AI researchers, and the establishment of problem solving as the canonical AI task.”
AI is already improving how we communicate, analyze data, make financial decisions and trades. It is being put to work in hospitals to improve health diagnosis and soon we will be wearing AI programmed smart watches to monitor our wellbeing.
In the last interview he gave before he passed away Herb Simon reflected on how computers will continue to shape our world and can improve our rationality.
AI technologies will soon be pervasive in solutions that could in fact be the answer to help us overcome irrational behavior and make optimal economic decisions. The more we understand the depth of Herbert Simon’s work the more we will be prepared to take advantage of the great opportunities AI offers us.
Picture credit: Creative Commons Wikipedia
I’m on the road today but was excited to discover that the ACM (Association for Computing Machinery) has named Leslie Lamport, a Principal Researcher at Microsoft Research, the recipient of the 2013 ACM Alan M. Turing Award for imposing clear, well-defined coherence on the seemingly chaotic behavior of distributed computing systems, in which several autonomous computers communicate with each other by passing messages. He devised important algorithms and developed formal modeling and verification protocols that improve the quality of real distributed systems. These contributions have resulted in improved correctness, performance, and reliability of computer systems.
The ACM Turing Award, widely considered the “Nobel Prize in Computing,” carries a $250,000 prize, with financial support provided by Intel Corporation and Google Inc.
Here is a video interview with Leslie:
Congratulations Leslie – more here…
Five robotic, artificial intelligence or drone related reads for Monday 17th March:
- Victor, an emotional Scrabble playing robot who is very insecure. The Wall Street Journal.
- How the science of robotics is being used for religious purpose in Iran. The Independent.
- US lags as commercial drones take off around globe. Associated Press.
- The brief rise and long fall of Russia’s Military Robots… could be resurrected. Popular Science.
- We have entered ‘the post normal world.’ Pew Report.
What are you reading?
Dr Carl Frey and Dr Michael Osborne recently made headlines around the world with their Oxford Martin School study, ‘The Future of Employment: How susceptible are jobs to computerization?’, which showed that nearly half of US jobs could be at risk of being replaced through automation.
Much is written about how robotics and automation is displacing jobs in the manufacturing industry. Indeed the advanced manufacturing facilities of today and tomorrow are clean and awash with robots, computers, lasers, and other ultramodern machine technologies. The most common tool a production worker carries at the newest auto plants is not a wrench or screwdriver. It’s an iPad.
However, little is discussed about the impact of robotics and automation on the financial sector.
The finance sector is now producing record annual profits despite significant staff reductions since 2008.
AIG, the insurance company, reduced its staff from 116,000 at the end of 2007 to 63,000 by the end of 2012, a reduction of 53,000 people, yet profits recovered during the period of restructuring, nudging up from $6.2 billion at the end of 2007 to $6.6 billion at the end of 2012 ($9 billion at the end of 2013). Two other major insurance companies, AXA and Allianz, have seen their headcounts fall by a combined 98,566 over the last few years, despite rising profits. This may not be much of a surprise, as insurance is increasingly sold online. Consider that in the UK alone some 70% of car insurance is purchased over the Internet, a massive transition in just a few years, which has in part contributed to the loss of so many insurance sales and agent jobs.
Insurance appears to be one sector that is using automated technology to improve productivity, reduce headcount and increase profits.
In the banking sector, Citibank, which had 357,000 employees before the global crisis of 2008, reduced its overall headcount by 98,000 to 259,000 by the end of 2012. There is a similar story at Bank of America (Merrill Lynch), with some 50,000 layoffs, whilst almost 25,000 people lost their jobs with the collapse of Lehman Brothers.
A couple of weeks ago J.P. Morgan announced a further round of 12,000 – 15,000 job cuts: “the bank is looking to find new savings, partly because of technology that allows greater automation of clerical functions in branches.” Whilst looking online for future growth: “the bank is now looking at revamping its existing branch network with smaller buildings that make better use of new technology and require fewer staff.”
It’s fair to say that perhaps millions of jobs have been lost globally in the financial sector as automation drives efficiencies and both companies and their customers choose the improved services that online technologies offer to transact financial business.
So it would seem that companies are becoming leaner, doing more with fewer people, whilst maintaining, and indeed increasing, profitability.
The following video, streamed live on March 13th, is a very interesting overview by Drs Frey and Osborne of the challenges and opportunities of the automation age.
PS. I’m not a fan of the “robots will take ALL our jobs” meme, although Bill Gates did say this week that within 20 years, a lot of jobs will go away, replaced by software automation (“bots” in tech slang, though Gates used the term “software substitution”).
The ‘system’ behind the Google robotic cars that have driven themselves hundreds of thousands of miles on the streets of several US states, without being involved in an accident or violating any traffic law, is built upon the 18th-century math theorem known as Bayes’ rule. The cars analyze enormous quantities of data fed to a central onboard computer from radar sensors, cameras, and laser range-finders, and use it to take the most optimal, efficient, and cost-effective route.
In 1996, Microsoft’s Bill Gates described the company’s competitive advantage as its ‘expertise in Bayesian networks,’ and Microsoft patented a spam filter in 1998 that relied on Bayes Theorem. Other tech companies quickly followed suit, adapting their systems and programming to include Bayes Theorem.
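The intuition behind a Bayesian spam filter can be sketched in a few lines of Python. This is a toy illustration of the general technique, not Microsoft’s patented filter; the word probabilities below are invented for the example (in practice they would be learned from a corpus of labeled mail).

```python
# Toy Bayesian spam filter: combine, via Bayes' rule, the probability of
# seeing each word in spam vs. legitimate ("ham") mail.
# All numbers below are invented for illustration.

P_SPAM = 0.4  # prior: fraction of all mail that is spam
P_HAM = 1 - P_SPAM

# P(word | spam) and P(word | ham), learned from a training corpus in practice
p_word_given_spam = {"winner": 0.30, "viagra": 0.20, "meeting": 0.01}
p_word_given_ham = {"winner": 0.02, "viagra": 0.001, "meeting": 0.15}

def spam_probability(words):
    """Return P(spam | words), assuming word occurrences are independent
    (the 'naive' assumption of naive Bayes)."""
    score_spam, score_ham = P_SPAM, P_HAM
    for w in words:
        if w in p_word_given_spam:
            score_spam *= p_word_given_spam[w]
            score_ham *= p_word_given_ham[w]
    # Bayes' rule: normalize so the two posteriors sum to 1
    return score_spam / (score_spam + score_ham)

print(spam_probability(["winner", "viagra"]))  # close to 1: looks like spam
print(spam_probability(["meeting"]))           # close to 0: looks legitimate
```

Each word nudges the prior belief up or down; the filter never needs a hard rule, only probabilities updated in light of the evidence.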
During World War II, Alan Turing used Bayes Theorem to help crack the Enigma code, potentially saving millions of lives, and he is credited with helping the Allied forces to victory.
Artificial Intelligence was given a new lease of life when, in the early 1980s, Professor Judea Pearl of UCLA’s Computer Science Department and Cognitive System Lab introduced Bayesian networks as a representational device. Pearl’s work showed that Bayesian networks constitute one of the most influential advances in Artificial Intelligence, with applications in a wide range of domains.
Bayes Theorem is based on the work of Thomas Bayes as a solution to a problem of inverse probability. It was presented in “An Essay towards solving a Problem in the Doctrine of Chances,” read to the Royal Society in 1763 after Bayes’ death (he died in 1761). Put simply, Bayes’ rule is a mathematical relationship between probabilities that allows them to be updated in light of new information.
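In symbols, Bayes’ rule says P(H|E) = P(E|H) × P(H) / P(E): a prior belief in a hypothesis H is updated by evidence E into a posterior belief. A minimal worked sketch, with made-up numbers:

```python
# Bayes' rule: update the probability of a hypothesis H in light of evidence E.
#   P(H | E) = P(E | H) * P(H) / P(E)
# The numbers below are invented for illustration.

def bayes_update(prior, likelihood, evidence_prob):
    """Posterior P(H|E) given prior P(H), likelihood P(E|H), and P(E)."""
    return likelihood * prior / evidence_prob

# Example: H = "it rained overnight", E = "the grass is wet".
prior = 0.2              # P(rain): it rains on 20% of nights
p_wet_given_rain = 0.9   # P(wet | rain)
p_wet_given_dry = 0.1    # P(wet | no rain), e.g. sprinklers
# Law of total probability gives P(wet)
p_wet = p_wet_given_rain * prior + p_wet_given_dry * (1 - prior)

posterior = bayes_update(prior, p_wet_given_rain, p_wet)
print(round(posterior, 3))  # 0.692: wet grass raises belief in rain from 0.2 to ~0.69
```

The same arithmetic, repeated continuously over sensor data, is what lets a self-driving car revise its estimate of where it is each time a new radar or camera reading arrives.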
Before the advent of increased computer power, Bayes Theorem was overlooked by most statisticians and scientists, and in most industries. Today, thanks in part to Professor Pearl, Bayes Theorem is used in robotics, artificial intelligence, machine learning, reinforcement learning, and big data mining. IBM’s Watson, perhaps the most well-known AI system, in all its intricacies ultimately relies on the deceptively simple concept of Bayes’ rule in negotiating the semantic complexities of natural language.
Bayes Theorem is frequently behind the technology development of many of the multi-billion dollar acquisitions we read about, and it is certainly a core piece of technology behind the billions in profits at leading tech companies, from Google’s search to LinkedIn, Netflix’s and Amazon’s recommendation engines. It will play an even more important role in future developments within automation, robotics, and big data.
Professor Pearl, through his work in the Cognitive System Lab, recognized the problems of human psychology in software development and representation. In 1984 he published a book simply called Heuristics (Intelligent Search Strategies for Computer Problem Solving).
Pearl’s book drew on research by the founders of Behavioral Economics, Daniel Kahneman and Amos Tversky, and particularly their work with Paul Slovic: Judgment under Uncertainty: Heuristics and Biases (Cambridge University Press, 1982), in which they confirmed their own reliance on Bayes Theorem:
Ch. 25, Conservatism in human information processing: “Probabilities quantify uncertainty. A probability, according to Bayesians like ourselves, is simply a number between zero and one that represents the extent to which a somewhat idealized person believes a statement to be true…. Since such probabilities describe the person who holds the opinion more than the event the opinion is about, they are called personal probabilities.” (Page 359)
Kahneman (Nobel Prize in Economics) and Tversky showed Bayesian methods more closely reflect how humans perceive their environment, respond to new information, and make decisions. The theorem is a landmark of logical reasoning and the first serious triumph of statistical inference; Bayesian methods interpret probability as the degree of plausibility of a statement.
Kahneman and Tversky especially highlighted the heuristics and biases where Bayes’ rule can overcome our irrational decision-making, which is why so many tech companies are seeking to train their engineers and programming staff in behavioral economics. We use the availability heuristic to assess probabilities rather than Bayesian equations. We all know that this gives way to all sorts of judgmental errors: a belief in the law of small numbers and a tendency towards hindsight bias. We know that we anchor on irrelevant information and that we take too much comfort in ever-more information that seems to confirm our beliefs.
The representativeness heuristic
Heuristics are described as “judgmental shortcuts that generally get us where we need to go – and quickly – but at the cost of occasionally sending us off course.”
When people rely on representativeness to make judgments, they are likely to judge wrongly because the fact that something is more representative does not make it more likely. This heuristic is used because it is an easy computation (Think Zipf’s law and human behavior – the principle of least effort). The problem is that people overestimate their ability to accurately predict the likelihood of an event. Thus it can result in neglect of relevant base rates (base rate fallacy) and other cognitive biases, especially confirmation bias.
The base rate fallacy describes how people fail to take the base rate of an event into account when solving probability problems, and it is a frequent error in thinking.
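The classic illustration of the base rate fallacy is medical screening: even an accurate test for a rare condition produces mostly false positives, because the low base rate dominates. A short Bayes-rule sketch, with illustrative numbers:

```python
# Base rate fallacy: people judge from the test's accuracy and ignore the
# base rate, vastly overestimating P(disease | positive test).
# All numbers are illustrative.

base_rate = 0.001      # P(disease): 1 person in 1,000 has the condition
sensitivity = 0.99     # P(positive | disease)
false_positive = 0.05  # P(positive | no disease)

# Total probability of testing positive
p_positive = sensitivity * base_rate + false_positive * (1 - base_rate)

# Bayes' rule: probability of actually having the disease given a positive test
p_disease_given_positive = sensitivity * base_rate / p_positive

print(round(p_disease_given_positive, 3))  # ~0.019: under 2%, not the 99% intuition suggests
```

Intuition latches onto the 99% accuracy figure; Bayes’ rule shows that with a 1-in-1,000 base rate, a positive result still means the disease is unlikely.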
Confirmation bias is the tendency of people to favor information that confirms their beliefs or hypotheses. Essentially people are prone to misperceive new incoming information as supporting their current beliefs.
It has been found that experts reassess data selectively over time, depending on their prior hypotheses. Bayesian statisticians argue that Bayes’ theorem is a formally optimal rule for revising opinions in the light of evidence. Nevertheless, Bayesian techniques are, so far, rarely utilized by management researchers or business practitioners in the wider business world.
Eliezer Yudkowsky of the Machine Intelligence Research Institute has written a detailed introduction to Bayes Theorem using behavioral economics examples and machine learning, which I highly recommend.
Time to think Bayesian and Behavioral Economics
As the major tech companies are showing, Bayesian and Behavioral Economics methods are well suited to address the increasingly complex phenomena and problems faced by 21st-century researchers and organizations, where very complex data abound and the validity of knowledge and methods are often seen as contextually driven and constructed.
Bayesian methods that treat probability as a measure of uncertainty may be a more natural approach to some high-impact management decisions, such as strategy formation, portfolio management, and decisions whether or not to enter risky markets.
If you are not thinking like a Bayesian, perhaps you should be.