
Our Response to the UK Government request for written evidence on A.I.

This is an abridged version of the final response we submitted to the UK Government request for evidence on Artificial Intelligence. (The numbering is based on the questions we decided we could answer best).

 

1. a) What is the current state of artificial intelligence? There are currently no ‘true’ Artificial Intelligence (A.I.) systems. There are ad hoc ‘learning’ systems, which we will call narrow A.I. systems.

Defining A.I. The literature abounds with definitions of A.I. and human intelligence, although very little consensus has been reached to date. Our comprehensive survey of A.I. practitioners worldwide, Research Survey: Defining (machine) Intelligence (Lewis & Monett, 2017), which has collected over 400 responses, has identified considerable interest in a well-defined definition and goal of A.I. We hope that the results of our survey help to overcome a fundamental flaw: “That artificial intelligence lacks a stable, consensus definition or instantiation complicates efforts to develop an appropriate policy infrastructure” (Calo, 2017).

The goal of A.I., closely linked to its definition and highlighted in our survey, should articulate the ‘why’ of Artificial Intelligence; however, very few research papers state a robust goal with society in the loop. We agree with Hutter (2005): “The goal of A.I. systems should be to be useful to humans.” Or, as Norbert Wiener wrote in 1960, “We had better be quite sure that the purpose put into the machine is the purpose which we really desire” (Wiener, 1960).

Whilst there have been breakthroughs in narrow A.I. systems that can ‘simulate’ and surpass certain ‘individual’ aspects of human intelligence (for example, specific elements of pattern recognition, faster search, calculation, data analysis, and other cognitive attributes), A.I. development is currently some way off from achieving the goal of fully replicating human intelligence. However, the narrow A.I. methods, which are more precisely fields of A.I. research, are making considerable progress as stand-alone techniques, namely Machine Learning (ML) and classes of ML algorithms such as Deep Learning (DL), Reinforcement Learning (RL), and Deep Reinforcement Learning (DRL).

Researchers acknowledge that the methodology applied in narrow A.I. systems can be unstable (Mnih et al., 2015). Nevertheless, these A.I. sub-domains are already starting to have considerable economic and social effect, as we outline below, and this impact will accelerate in the near future. Briefly:

  • Machine Learning: Whereas the vast majority of computer programs are hand-coded by humans, Machine Learning algorithms are capable of ‘self-learning’: they improve their performance on a specific task against key metrics and enhance their output through experience.
  • Deep Learning: The key aspect of deep learning is that its features are not designed by human engineers. Instead, “they are learned from data using a general-purpose learning procedure” (LeCun, Bengio & Hinton, 2015). Deep Learning is defined by the same authors as “computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer.”
  • Reinforcement Learning: Algorithms that learn to control and predict data. The algorithms are reward- and goal-oriented: “Reinforcement learning is learning what to do –how to map situations to actions– so as to maximize a numerical reward signal. The learner is not told which actions to take, as in most forms of machine learning, but instead must discover which actions yield the most reward by trying them” (Sutton & Barto, 2012). See also Deep Reinforcement Learning below.

Machine Learning: The most prevalent of these narrow A.I. sub-domains, in an operational context, is Machine Learning. ML algorithms can be supervised, unsupervised, or semi-supervised. The majority of current ML implementations use supervised learning, in which we (humans) teach the computer how to do something; in unsupervised learning, the machine learns by itself (Samuel, 1959).
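
To make the distinction concrete, here is a minimal sketch (our own, for illustration only) using the open-source scikit-learn library: a classifier is first taught from human-provided labels, then a clustering algorithm is given the same data with no labels at all.

```python
# Illustrative sketch only: supervised vs. unsupervised learning with scikit-learn.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)  # features and human-provided labels

# Supervised: we "teach" the model using the labels y.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("supervised accuracy:", clf.score(X_test, y_test))

# Unsupervised: the same data, but no labels; the machine groups it by itself.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("cluster assignments:", km.labels_[:10])
```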

ML systems are being used to help make decisions both large and small in almost all aspects of our lives, whether they involve simple tasks like dispensing money from ATMs, recommending books or movies, filtering email spam, and arranging travel and insurance purchases, or higher-stakes matters like credit scoring in loan approval decisions, and even life-altering decisions such as health diagnoses and court sentencing guidelines after a criminal conviction.

Systems utilizing ML information processing techniques are used for profiling individuals by law enforcement agencies, in military drones, and in other semi-autonomous surveillance applications. They capture information on our daily activities through our smartphones, from exercise and GPS data that track our location in real time to email, social media interests, and telephone calls. They are increasingly used in our cars and our homes. They are used to manage nuclear reactors, to manage demand across electricity grids, to improve energy efficiency, and generally to boost productivity in the business environment.

Deep Learning: Deep Learning is emerging as a primary machine learning approach for important, challenging problems such as image classification and speech recognition. Deep Learning methods have dramatically improved machine capabilities in speech recognition, and they approach human-level performance on some object recognition benchmarks (He et al., 2016) and object detection tasks (Ba, Mnih, & Kavukcuoglu, 2015). These capabilities are also very useful for self-driving cars and in many other domains where big data is available, such as drug discovery and genomics (Nguyen et al., 2016).
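
As an illustration of the layered, backpropagation-driven learning that LeCun, Bengio and Hinton describe, the following toy sketch (our own, in plain NumPy, not taken from any cited paper) trains a two-layer network on the XOR problem; the backward pass indicates how each layer’s parameters should change, based on the error signal propagated down from the layer above.

```python
import numpy as np

# Minimal two-layer network trained by backpropagation (illustration only).
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # first processing layer
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # second processing layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: each layer computes a representation from the previous one.
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: backpropagation tells each layer how to change its parameters.
    d_out = (out - y) * out * (1 - out)      # gradient at the output layer
    d_h = (d_out @ W2.T) * (1 - h ** 2)      # gradient pushed back to the first layer
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(3))  # approaches [0, 1, 1, 0]
```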

Advances in Deep Learning will have broad implications for consumer and business products that can be significantly augmented by speech recognition. “Deep learning is becoming a mainstream technology for speech recognition at industrial scale” (Deng et al., 2013). This is particularly prevalent in telemarketing, tech support help desks (Vinyals & Le, 2015), and mobile personal assistants such as Apple’s Siri, Microsoft’s Cortana, Google Now, and Amazon Echo. Deep Learning is also being used for negotiations with other chatbots or people (Lewis et al., 2017).

Reinforcement Learning: Reinforcement Learning has gradually become one of the most active research areas in Machine Learning, Artificial Intelligence, and neural network research (Sutton & Barto, 2012). An RL agent interacts with its environment and, upon observing the consequences of its actions, can learn to alter its own behaviour in response to rewards received (Arulkumaran et al., 2017).
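
A deliberately toy sketch of this reward-driven loop is given below: tabular Q-learning, the classic algorithm described by Sutton and Barto (2012), applied to a five-cell corridor we invented for the example. The agent is never told that ‘move right’ is correct; it discovers this purely from the reward signal.

```python
import random

# Toy tabular Q-learning (cf. Sutton & Barto, 2012) on an invented 5-cell
# corridor: the agent starts in cell 0 and is rewarded only on reaching cell 4.
N_STATES, ACTIONS = 5, (-1, +1)          # actions: move left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1    # learning rate, discount, exploration

def greedy(s):
    """Best known action in state s, breaking ties at random."""
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

random.seed(0)
for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit what is known, occasionally explore.
        a = random.choice(ACTIONS) if random.random() < EPSILON else greedy(s)
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        # Core update: move Q(s, a) toward reward plus discounted future value.
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s_next, b)] for b in ACTIONS) - Q[(s, a)])
        s = s_next

print({s: greedy(s) for s in range(N_STATES - 1)})  # learned: move right (+1) everywhere
```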

Within health, RL is being used for classifying gene-expression patterns from leukaemia patients into subtypes by clinical outcome (Ghahramani, 2015). These models have also contributed to massive savings at multiple Google data centers, helping to produce a 40% reduction in energy used for cooling and a 15% reduction in overall energy overhead (Evans & Gao, 2016). Other typical uses include detecting pedestrians in images taken from an autonomous vehicle. As shown in Shalev-Shwartz, Shammah, and Shashua (2016), RL is proving especially effective in the development of self-driving cars, which requires many capabilities such as sensing, vision, mapping, and knowledge of driving policies and regulations.

In robotics, RL is making progress in other seemingly simple tasks such as screwing a cap onto a bottle (Levine et al., 2016) or opening a door (Chebotar et al., 2017).

A well-known successful example of RL comes from the Google-owned company DeepMind, specifically its AlphaGo, which defeated the human world champion in the game of Go. AlphaGo comprised neural networks that were trained using supervised and reinforcement learning in combination with a traditional heuristic search algorithm (Silver et al., 2016).

Deep Reinforcement Learning: One of the driving forces behind Deep Reinforcement Learning is the vision of creating systems capable of learning how to adapt in the real world. Further, researchers consider that “DRL will be an important component in constructing general AI systems” (Arulkumaran et al., 2017), as was demonstrated by a single DRL architecture succeeding “in a range of different environments with only very minimal prior knowledge” (Mnih et al., 2015).

To date, DRL has been most prevalent in games (Mnih et al., 2013); however, recent developments suggest that DRL algorithms have produced by far “the most complex behaviors yet learned” in a machine algorithm (Christiano et al., 2017).
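
The sketch below illustrates the core recipe popularized by Mnih et al. (2015): learning a parametric Q-function from replayed experience instead of a lookup table. To stay self-contained, a linear approximator stands in for the deep network, and the environment is the same invented toy corridor as in the RL sketch above; a genuine DRL system would substitute a deep network and a far richer environment.

```python
import numpy as np

# Sketch of the deep RL idea (cf. Mnih et al., 2015): a learned Q-function
# plus an experience replay buffer, with a linear model standing in for the
# deep network so the example stays tiny. Toy corridor as before.
rng = np.random.default_rng(0)
N_STATES, N_ACTIONS = 5, 2                  # actions: 0 = left, 1 = right
PHI = np.eye(N_STATES)                      # one-hot state features
W = np.zeros((N_ACTIONS, N_STATES))         # Q(s, a) = W[a] @ PHI[s]
GAMMA, ALPHA = 0.9, 0.1

# 1) Collect experience with an exploratory (here: uniformly random) policy.
replay = []
for _ in range(200):
    s = 0
    while s != N_STATES - 1:
        a = int(rng.integers(N_ACTIONS))
        s_next = min(max(s + (1 if a == 1 else -1), 0), N_STATES - 1)
        r, done = (1.0, True) if s_next == N_STATES - 1 else (0.0, False)
        replay.append((s, a, r, s_next, done))
        s = s_next

# 2) Train the Q-function on random minibatches drawn from the replay buffer.
for _ in range(2000):
    for s, a, r, s_next, done in (replay[i] for i in rng.integers(len(replay), size=32)):
        target = r if done else r + GAMMA * (W @ PHI[s_next]).max()
        W[a] += ALPHA * (target - W[a] @ PHI[s]) * PHI[s]  # TD-error gradient step

print(W.argmax(axis=0)[:-1])  # greedy action per non-terminal cell: all 1s (right)
```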

  1. b) What factors have contributed to this? Historically, developments in A.I. were driven by government investment in research and development within academia and other research institutes. Whilst governments around the world still make large investments in A.I. research, recent major advances have largely been driven by significant investments from leading technology companies, building on techniques that were previously developed through government and other institutional investment.

Furthermore, computing power has increased dramatically. Meanwhile, the growth of the Internet and social media in the last 10 years has provided opportunities to collect, store, and share large amounts of data. Many leading technology companies are amassing huge amounts of ‘Big Data,’ supported in part by cloud computing resources. These companies have invested heavily in A.I. technologies and further seek to develop A.I. techniques to ensure a competitive advantage.

Another major factor is open access to scientific inventions and research in general: sites such as arXiv provide immediate online publication of research papers, conference proceedings, etc. Additionally, open-source frameworks and libraries for the development of ML algorithms have put opportunities for development into the hands of millions, who thereby profit from the advantages of cloud computing and parallel processing on GPUs. Examples include TensorFlow, Theano, CNTK, MXNet, and Keras. They implement model architectures and algorithms, especially for deep learning, that can be run by calling functions without the need to implement them from scratch or locally.
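
As an example of how little code these frameworks require, the sketch below (assuming TensorFlow’s bundled Keras API) specifies and trains a small multi-layer network; the backpropagation, optimization, and hardware acceleration are all supplied by framework calls rather than implemented by hand.

```python
# Sketch: a deep network specified and trained entirely through framework calls
# (assumes TensorFlow's bundled Keras API; nothing is implemented from scratch).
import tensorflow as tf

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, batch_size=64)  # backprop done for us
```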

c) How is it likely to develop over the next 5, 10 and 20 years? There are several recent surveys of expert opinion on when A.I. will be available and on its impact on the workplace. Many uncertainties surround future developments of machine intelligence; one should therefore not consider the ‘expert view’ to be predictive of likely ten- and twenty-year scenarios.

d) What factors, technical or societal, will accelerate or hinder this development? There are some obvious factors, such as a slow-down in investment, which would impact research and development and education, creating another ‘A.I. winter’ and a skills gap. Other factors, such as global instability and government policy, may also hinder the development of A.I.

Although the particular narrow A.I. models we outlined above already demonstrate aspects of intelligent abilities in narrow and limited domains, at this point they do not represent a unified model of intelligence and there is much work to be done before true A.I. is ‘amongst us.’

Further, there are still many technical factors that make narrow A.I. unstable. Additionally, there are technological challenges to overcome, such as the curse of dimensionality: Richard Bellman (1957) asserted that the high dimensionality of data is a fundamental hurdle in many science and engineering applications, and he coined this phenomenon ‘the curse of dimensionality.’ Recent developments in representation learning and DRL have made some progress in addressing it (Bengio, Courville, & Vincent, 2013; Kulkarni et al., 2016). There are also many safety challenges to overcome, such as security and data privacy (see, for example, DeepMind (2017)), and other technological problems still requiring breakthroughs.
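
Bellman’s observation can be demonstrated in a few lines (our own toy illustration, not from the cited papers): as dimensionality grows, uniformly sampled points become nearly equidistant, so the notion of a ‘nearest’ neighbour, on which many learning methods rely, loses its meaning.

```python
import numpy as np

# Toy demonstration of the curse of dimensionality: in high dimensions,
# random points are nearly all the same distance apart, so "nearest"
# neighbours stop being meaningfully near.
rng = np.random.default_rng(0)
for d in (2, 10, 100, 1000):
    pts = rng.random((1000, d))                      # 1000 points in the unit cube
    dists = np.linalg.norm(pts[0] - pts[1:], axis=1)
    spread = (dists.max() - dists.min()) / dists.min()
    print(f"d={d:5d}  relative spread of distances: {spread:.2f}")
# The relative spread shrinks toward zero as d grows.
```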

Other advances will accelerate A.I., such as Facebook’s CommAI (Baroni et al., 2017) and their A.I. roadmap (Mikolov, Joulin, & Baroni, 2015), together with closer cooperation between neuroscience and A.I. developers (Hassabis et al., 2017). We also believe the following papers will contribute to the acceleration of narrow A.I. solutions for mainstream uses beyond games and social media analytics: (Kalchbrenner, Danihelka, & Graves, 2015; Lake et al., 2016; Mnih et al., 2015).

2. We recommend the committee consider the findings in the paper by leading A.I. researchers Ethan Fast and Eric Horvitz, Long-Term Trends in the Public Perception of Artificial Intelligence (Fast & Horvitz, 2017).

3. It is our belief that the goal of A.I. must be to support humanity. At present it is difficult to predict the short-term extent to which A.I. will impact social and economic institutions, but in the long term it could have major negative consequences whose social and economic effects could be severe for millions of people. In that case, according to a report to the President of the United States (Furman et al., 2016), “Aggressive policy action will be needed to help (those) who are disadvantaged by these changes and to ensure that the enormous benefits of AI and automation are developed by and available to all.”

Other commentators, such as Andrew Haldane (2015), Chief Economist at the Bank of England, believe that the introduction of A.I. machines and more advanced robotics could bring technological change, and thus social and economic change, far larger than at any time in human history, with mass unemployment on an unprecedented scale.

Conversely, machines have been substituting for human labor for centuries; yet, historically, technological change has been associated with productivity growth, with expanding rather than contracting total employment, and with rising earnings. Research has shown that factories that implemented industrial robots also added over 1.25 million new jobs from 2009 to 2015 (Lewis, 2015).

The challenge for policymakers will be to update, strengthen, and adapt policies to respond to the social and economic effects of A.I. We have created an agenda with key research goals to ensure that the development and outcomes of A.I. and Artificial General Intelligence (AGI) are aligned with the social and economic advancement of all humanity, and to determine how best to close social and economic gaps through beneficial A.I. and AGI development.

4. Overall, we believe that whilst some large corporations and their shareholders will benefit from the gains of A.I., the potential for artificial intelligence to enhance people’s quality of life in areas including education, transportation, and healthcare is vast. We offer our expertise to the committee so that government, policy makers, and researchers can collaborate to develop and champion a methodology “for wealth creation in which everyone should be entitled to a portion of the world’s A.I. produced treasures” (Stone et al., 2016).

5. Our research shows that theories of intelligence and the goal of A.I. have been the source of much confusion both within the field and among the general public. To help rectify this we are conducting a research survey: Defining (machine) Intelligence (Lewis & Monett, 2017).

The research survey on definitions of machine and human intelligence is still accepting responses and has an ongoing invitation procedure. We have been surprised by the volume of responses, together with the high quality of the comments, opinions, and recommendations concerning the definitions of machine and human intelligence that experts around the world have shared. As of September 6, 2017, we had collected more than 400 responses.

A.I. has a perception problem in the mainstream media, even though many researchers maintain that supporting humanity must be the goal of A.I. Clarifying the known definitions of intelligence and the research goals of machine intelligence should help us and other A.I. practitioners deliver a stronger, more coherent message to the mainstream media, policymakers, and the general public, and help dispel myths about A.I.

6. We recommend the committee consider the findings projected through to 2030 in the report, The One Hundred Year Study on Artificial Intelligence (Stone et al., 2016), especially the sections on transportation, healthcare, education, low-resource communities, and public safety and security.

8. Human intellect is the source of many of its own problems. Errors in thinking and biases, which have grown powerful over time, are also showing up in the intelligent machines we program and may become even more prevalent in machines programmed with Artificial Intelligence.

Machines can no more do ethics than they can have psychological breakdowns. They can help to change circumstances, but they cannot reflect on their value or morality. It is the human element and bias that must be considered above all else.

9. For an ‘unbiased’ view, see the paper by Adrian Weller (2017), which presents “a brief survey, suggesting challenges and related concerns. We highlight and review settings where transparency may cause harm, discussing connections across privacy, multi-agent game theory, economics, fairness and trust.”

The role of the Government

  1. Key questions and tasks which governments and policy makers should be addressing are:
  • How do we mitigate the uncertainty and likelihood of massive unemployment?
  • What impact have A.I. systems and robots had in industrial factories? Have companies that employed robots increased or decreased human employment?
  • What new skills have been required as robots enter the workplace?
  • Which new laws or modifications to laws will need to be implemented to mitigate risk and monitoring of A.I. and A.G.I.?
  • Monitor and provide reporting on emerging technology policy, with a focus on artificial intelligence and automation.
  • Provide research input into FLI’s Asilomar long-term issues (Asilomar AI Principles, 2017) with particular focus on: “23) Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.”

From: AGISI.org

Dr. Colin W. P. Lewis, A.I. Research Scientist

Prof. Dr. Dagmar Monett, A.I. Research Scientist (AGISI & Berlin School of Economics and Law)

References

Arulkumaran, K. et al. (2017). A Brief Survey of Deep Reinforcement Learning. CoRR, abs/1708.05866, https://arxiv.org/abs/1708.05866.

Asilomar AI Principles (2017). Future of Life Institute, https://futureoflife.org/ai-principles/.

Ba, J. L., Mnih, V., and Kavukcuoglu, K. (2015). Multiple Object Recognition with Visual Attention. CoRR, abs/1412.7755, https://arxiv.org/abs/1412.7755.

Baroni, M. et al. (2017). CommAI: Evaluating the first steps towards a useful general AI. CoRR, abs/1701.08954, https://arxiv.org/abs/1701.08954.

Bellman, R. (1957). Dynamic Programming. Princeton, NJ: Princeton Univ. Press.

Bengio, Y., Courville, A., and Vincent, P. (2013). Representation Learning: A Review and New Perspectives. IEEE Trans. on Pattern Analysis and Machine Intelligence, 35(8):1798–1828.

Calo, R. (2017). Artificial Intelligence Policy: A Roadmap, https://ssrn.com/abstract=3015350.

Chebotar, Y. et al. (2017). Path integral guided policy search. CoRR, abs/1610.00529, https://arxiv.org/abs/1610.00529.

Christiano, P. F. et al. (2017). Deep Reinforcement Learning from Human Preferences. CoRR, abs/1706.03741, https://arxiv.org/abs/1706.03741.

DeepMind (July 2017). What we’ve learned so far, https://deepmind.com/applied/deepmind-health/transparency-independent-reviewers/what-weve-learned-so-far/.

Deng, L. et al. (2013). Recent advances in deep learning for speech research at Microsoft. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP, pp. 8604–8608, IEEE.

Evans, R. and Gao, J. (2016). DeepMind AI Reduces Google Data Centre Cooling Bill by 40%. DeepMind, https://deepmind.com/blog/deepmind-ai-reduces-google-data-centre-cooling-bill-40.

Fast, E. and Horvitz, E. (2017). Long-Term Trends in the Public Perception of Artificial Intelligence. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, AAAI-17, San Francisco, CA, USA, February 4-9, 2017. AAAI Press, pp. 963–969.

Furman, J. et al. (2016). Artificial Intelligence, Automation, and the Economy. Executive Office of the President, Washington, D.C. 20502, https://obamawhitehouse.archives.gov/sites/whitehouse.gov/files/documents/Artificial-Intelligence-Automation-Economy.PDF.

Ghahramani, Z. (May 2015). Probabilistic machine learning and artificial intelligence. Nature, 521:452–459. DOI: 10.1038/nature14541.

Haldane, A. (2015). Labour’s Share – speech given at the Trades Union Congress, London. Bank of England, http://www.bankofengland.co.uk/publications/Pages/speeches/2015/864.aspx.

Hassabis, D. et al. (July 2017). Neuroscience-Inspired Artificial Intelligence. Neuron, 95(2):245–258.

He, K. et al. (2016). Deep Residual Learning for Image Recognition. In Proceedings of the 29th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016. Las Vegas, NV, USA, pp. 770–778, IEEE.

Hutter, M. (2005). Universal Artificial Intelligence: Sequential Decisions based on Algorithmic Probability. Berlin: Springer.

Kalchbrenner, N., Danihelka, I., and Graves, A. (2015). Grid Long Short-Term Memory. CoRR, abs/1507.01526, https://arxiv.org/abs/1507.01526.

Kulkarni, T. D. et al. (2016). Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation. CoRR, abs/1604.06057, https://arxiv.org/abs/1604.06057.

Lake, B. M. et al. (2016). Building Machines That Learn and Think Like People. Behav Brain Sci., 4:1–101.

LeCun, Y., Bengio, Y., and Hinton, G. (2015). Deep Learning. Nature, 521:436–444.

Levine, S. et al. (January 2016). End-to-end training of deep visuomotor policies. Journal of Machine Learning Research, 17(1):1334–1373.

Lewis, C. W. P. (2015). Study – Robots are not taking jobs. Robotenomics, https://robotenomics.com/2015/09/16/study-robots-are-not-taking-jobs.

Lewis, C. W. P. and Monett, D. (2017). Research Survey: Defining (machine) Intelligence. Ongoing survey, https://goo.gl/hMjaE1.

Lewis, M. et al. (2017). Deal or No Deal? End-to-End Learning for Negotiation Dialogues. CoRR, abs/1706.05125, https://arxiv.org/abs/1706.05125.

Mikolov, T., Joulin, A., and Baroni, M. (2015). A Roadmap towards Machine Intelligence. CoRR, abs/1511.08130, https://arxiv.org/abs/1511.08130.

Mnih, V. et al. (2013). Playing Atari with Deep Reinforcement Learning. CoRR, abs/1312.5602, https://arxiv.org/abs/1312.5602.

Mnih, V. et al. (2015). Human-level control through deep reinforcement learning. Nature, 518:529–533.

Nguyen, D.-T. et al. (2016). Pharos: Collating protein information to shed light on the druggable genome. Nucleic Acids Research, 45(D1):D995–D1002.

Samuel, A. L. (1959). Some Studies in Machine Learning Using the Game of Checkers. IBM Journal of Research and Development, 3(3):210–229.

Shalev-Shwartz, S., Shammah, S., and Shashua, A. (2016). Safe, Multi-Agent, Reinforcement Learning for Autonomous Driving. CoRR, abs/1610.03295, https://arxiv.org/abs/1610.03295.

Silver, D. et al. (January 2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489.

Stone, P. et al. (September 2016). Artificial Intelligence and Life in 2030. One Hundred Year Study on Artificial Intelligence: Report of the 2015-2016 Study Panel, Stanford University, Stanford, CA, http://ai100.stanford.edu/2016-report.

Sutton, R. S. and Barto, A. G. (2012). Reinforcement Learning: An Introduction. Second edition. London, UK: The MIT Press.

Vinyals, O. and Le, Q. V. (2015). A Neural Conversational Model. CoRR, abs/1506.05869, https://arxiv.org/abs/1506.05869.

Weller, A. (2017). Challenges for Transparency. CoRR, abs/1708.01870, https://arxiv.org/abs/1708.01870.

Wiener, N. (1960). Some Moral and Technical Consequences of Automation. Science, 131(3410):1355–1358.

Artificial Intelligence and National Security


Rapid developments in Artificial Intelligence (AI), especially the sub-domains of Reinforcement Learning and Machine Learning, are high on the agendas of government policy makers in many countries. Last year the US Government* issued comprehensive reports on AI and its possible benefits and impact on society; likewise, the European Union and other agencies are active in reviewing policies on AI, robotics, and associated technology. As recently as one week ago, the UK government initiated a new request for comments to its AI subcommittee: What are the implications of Artificial Intelligence?

On the back of this high level of interest from governments and policy makers around the world, a new study, Artificial Intelligence and National Security, by researchers at the Harvard Kennedy School’s Belfer Center on behalf of the U.S. Intelligence Advanced Research Projects Activity (IARPA), recommends three goals for developing future policy on AI and national security:

  • Preserving U.S. technological leadership,
  • Supporting peaceful and commercial use, and
  • Mitigating catastrophic risk

The authors say their goals for developing policy draw on lessons learned in nuclear, aerospace, cyber, and biotech, and that advances in AI will affect national security by driving change in three areas: military superiority, information superiority, and economic superiority.

Setting out their position, the authors make the case that existing AI developments “have significant potential for national security.”

Existing machine learning technology could enable high degrees of automation in labor-intensive activities such as satellite imagery analysis and cyber defense.

They further emphasize that future progress in AI has the potential to be a transformative national security technology, on a par with nuclear weapons, aircraft, computers, and biotech.

The changes they see in military superiority, information superiority, and economic superiority are outlined below:

For military superiority, they write that progress in AI will both enable new capabilities and make existing capabilities affordable to a broader range of actors.

For example, commercially available, AI-enabled technology (such as long-range drone package delivery) may give weak states and non-state actors access to a type of long-range precision strike capability.

In the cyber domain, activities that currently require lots of high-skill labor, such as Advanced Persistent Threat operations, may in the future be largely automated and easily available on the black market.

For information superiority, they say AI will dramatically enhance capabilities for the collection and analysis of data, and also the creation of data.

In intelligence operations, this will mean that there are more sources than ever from which to discern the truth. However, it will also be much easier to lie persuasively.

AI-enhanced forgery of audio and video media is rapidly improving in quality and decreasing in cost. In the future, AI-generated forgeries will challenge the basis of trust across many institutions.

For economic superiority, they find that advances in AI could result in a new industrial revolution.

Former U.S. Treasury Secretary Larry Summers has predicted that advances in AI and related technologies will lead to a dramatic decline in demand for labor such that the United States “may have a third of men between the ages of 25 and 54 not working by the end of this half century.”

Like the first industrial revolution, this will reshape the relationship between capital and labor in economies around the world. Growing levels of labor automation might lead developed countries to experience a scenario similar to the “resource curse.”

Also like the first industrial revolution, population size will become less important for national power. Small countries that develop a significant edge in AI technology will punch far above their weight.

Given the significant impacts they foresee from AI, they say that government must formalize goals for technology safety and provide adequate resources, that government should both support and restrain commercial activity in AI, and that governments should provide more investment in, and oversight of, long-term-focused strategic analyses of AI technology and its implications.

Noting that we are at an inflection point in Artificial Intelligence and autonomy, the researchers outline multiple areas in which they believe AI-driven technologies will disrupt military capabilities, capabilities which, they say, will have far-reaching consequences in warfare.

Policy makers around the world would do well to consider carefully the scenarios outlined in the study to ensure that AI technologies are adequately governed to provide assurances to citizens and ultimately to ensure that AI technologies benefit humanity.

 

*US Government and Agencies recent papers

June 2016—Defense Science Board: “Summer Study on Autonomy”

July 2016—Department of Defense Office of Net Assessment: “Summer Study: (Artificial) Intelligence: What questions should DoD be asking”

October 2016—National Science and Technology Council: “The National Artificial Intelligence Research and Development Strategic Plan”

October 2016—National Science and Technology Council: “Preparing for the Future of Artificial Intelligence”

December 2016—Executive Office of the President: “Artificial Intelligence, Automation, and the Economy”

January 2017—JASON: “Perspectives on Research in Artificial Intelligence and Artificial General Intelligence Relevant to DoD”

Creating Shareholder Value with AI?

In a wonderfully titled report, Creating Shareholder Value with AI? Not so Elementary, My Dear Watson, the equity research company Jefferies LLC takes a hard look at IBM’s bet on cognitive computing, or Artificial Intelligence (AI). The 53-page report is well worth reading to understand why the research analysts consider that IBM, despite significant investment in its cognitive computing platform Watson, is losing the opportunity in AI, and hence why the authors rate IBM stock as an underperformer.

On a positive note for AI researchers, they do acknowledge that there is serious business and economic interest in AI, citing Andrew Ng’s Stanford talk on AI as the new electricity:

AI is the New Electricity….Our checks confirm that a wide range of organizations are exploring incorporating AI in their business, mostly using Machine and Deep Learning for speech and image recognition applications.

And that IBM has an advantage in terms of technology:

IBM’s Watson platform remains one of the most complete cognitive platforms available in the marketplace today.

But IBM falls flat due to hefty service charges and an inability to attract AI talent:

The hefty services component of many AI deployments will be a hindrance to adoption. We also believe IBM appears outgunned in the war for AI talent and will likely see increasing competition.

I’m never a fan of market-share forecasts; forecasts in robotics have shown how far off the mark the industrial robotics landscape is from where it was predicted to be. Nevertheless, the Jefferies numbers are worth looking at, even if much of AI will be in-house at organisations such as Google, Facebook, and Amazon. Jefferies seem to think the value of the market, shown in the chart below, is underestimated: “we think these forecasts are unlikely to fully capture the value created by internal use of AI applications such as machine learning. For example, Facebook and Amazon are aggressively using machine learning to improve their offerings, make operations more efficient, and create new embedded services.”

[Chart: Jefferies research, Exhibit 8]

The analysts do note that the singularity is not near and provide an interesting chart depicting the areas in which they see growth. Interestingly, they see a large percentage of growth in algorithmic trading strategies, equivalent to 17% of the market, yet strangely indicate that healthcare spend will be slightly less, and driverless AI even less, despite this being where much of AI is heading today.

Many AI Apps Will Take Time to Emerge; The Singularity Is Not Near: While we are big believers in the long term potential of AI and see rapid adoption of machine learning in the near term, our checks convince us that many AI methods and applications will take time to be adopted.

[Chart: Jefferies, Exhibit 9]

The analysts emphasise how IBM is losing the talent war and also has less access to the rich data of Google, Apple, Facebook and Amazon. Talent will be a major game changer in AI.

The report also does a good job of showing the current flow of investment by major corporations, in terms of acquisitions as well as investment into AI start-ups. Overall, the analysis, the forecasts excepted, gives a fair overview of the AI market, but it omits the major dollars flowing into academic research and the costs of employing and training AI researchers, which are likely already in the early billions. I do, however, agree that IBM’s Watson risks not capturing the market share its technology richly deserves; maybe IBM will end up capitalising on its patents as it so often has.

Take a look at the report and judge for yourself (PDF).

Robots employment-augmenting rather than employment-reducing

Robots are everywhere in the media again. In February 2017 The New York Times Magazine published an article titled “Learning to Love Our Robot Co-workers” (Tingley 2017). An article in The Washington Post in March 2017 warned, “We’re So Unprepared for the Robot Apocalypse” (Guo 2017). And in The Atlantic, Derek Thompson (2015, 2016) paved the way in the summer of 2015 with “A World without Work,” followed in October 2016 with an article asking, “When Will Robots Take All the Jobs?”

The automation narrative told by these articles and other coverage is a story in which the inevitable march of technology is destroying jobs and suppressing wages and essentially making large swaths of workers obsolete.

What is remarkable about the automation narrative is that any research on robots or technology feeds the fear, even when the bottom-line findings of the research do not validate any part of it.

There are some good new research papers and essays that seek to dismantle the claim of a world without work. One such paper is highlighted below.

In a June 2017 paper titled “Does Productivity Growth Threaten Employment?”, together with a talk at the European Central Bank (ECB), “Robocalypse Now?”, co-researchers David Autor and Anna Salomons set out 200 years of fears of mass unemployment driven by automation.

Autor and Salomons sought to test for evidence of employment-reducing technological progress. Harnessing data from 19 countries over 37 years, they characterize how productivity growth, an omnibus measure of technological progress, affects employment across industries and countries and, specifically, whether rising productivity ultimately diminishes employment, numerically or as a share of the working-age population. They focus on overall productivity growth rather than specific technological innovations because (a) heterogeneity in innovations defies consistent classification and comprehensive measurement, and (b) productivity growth arguably provides an inclusive measure of technological progress. The findings:

In brief, over the 35+ years of data that we study, we find that productivity growth has been employment-augmenting rather than employment-reducing; that is, it has not threatened employment.

Another way to consider whether the robots are taking all the jobs, at least in the short term, is summed up by the outgoing Chief Executive of General Electric, Jeff Immelt, who did not mince words regarding his feelings about the impending automation takeover. Speaking at the VivaTech conference in Paris, Immelt said:

I think this notion that we are all going to be in a room full of robots in five years … and that everything is going to be automated, it’s just BS. It’s not the way the world is going to work.

Self-Healing Graphene Holds Promise for Artificial Skin in Future Robots

With the first ever documented observation of the self-healing phenomena of graphene, researchers hint at future applications for its use in artificial skin.

Graphene, which is, in simple terms, a sheet of pure carbon atoms and currently the world’s strongest material, is one million times thinner than paper, so thin that it is actually considered two-dimensional. Notwithstanding its hefty price, graphene has quickly become one of the most promising nanomaterials due to its unique properties and versatile prospective applications.

The paper, published in Open Physics, reports an extraordinary yet previously undocumented self-healing property of graphene, which could lead to the development of flexible sensors that mimic the self-healing properties of human skin.

Skin, the largest organ in the human body, has long been known for its fascinating self-healing ability; until now, emulating this mechanism proved too much of a challenge, as man-made materials lack this aptitude. Due to unprecedented stretching, bending, and incidental scratches, artificial skin used in robots is extremely susceptible to ruptures and fissures. The study offers a novel solution in which a sub-nano sensor uses graphene to sense a crack as soon as it starts to nucleate and, surprisingly, even after the crack has spread a certain distance. According to the authors, this technology could quickly become viable for use in the next generation of electronics.

According to Dr. Swati Ghosh Acharyya, one of the researchers:

We wanted to observe the self-healing behavior of both pristine and defected single layer graphene and its application in sub-nano sensors for crack spotting by using molecular dynamic simulation. We were able to document the self-healing of cracks in graphene without the presence of any external stimulus and at room temperature.

The results revealed that self-healing occurred by spontaneous recombination of the dangling bonds whenever the crack opening lay within the limit of the critical crack opening displacement.

The researchers subjected single-layer graphene containing various defects, like pre-existing holes and differently oriented pre-existing cracks, to uniaxial tensile loading until fracture. Interestingly enough, once the load was relaxed, the graphene started to heal, and the self-healing continued irrespective of the nature of the pre-existing defects in the graphene sheet. No matter the length of the crack, the authors say, it healed, provided the critical crack opening distance lay within 0.3–0.5 nm, for both the pristine sheet and the sheet with pre-existing defects.

Simulating self-healing in artificial skin will open the way to a variety of daily-life applications, ranging from sensors through mobile devices to ultracapacitors. In the case of the latter, graphene-based devices would exploit graphene’s large surface area to increase electrical power by storing electrons on graphene sheets. Apparently, such supercapacitors would have as much electrical storage capacity as lithium-ion batteries but could be recharged in minutes instead of hours.

The original article is fully open access and available on De Gruyter Online.

 

 

Investments ramping up in Industrial Robots


In late December 2016, Rethink Robotics, a supplier of collaborative robots (co-bots), secured an additional US$18 million investment. The new round, despite falling somewhat short of the US$33 million sought according to their SEC filing, included funding from the Swiss-headquartered private equity investment firm Adveq, as well as contributions from all previous investors, including Bezos Expeditions, CRV, Highland Capital Partners, Sigma Partners, DFJ, Two Sigma Ventures, GE Ventures and Goldman Sachs.

I think that Rethink’s Baxter and Sawyer robots are setting a new standard in advanced robotics for businesses of all sizes. The only downside is that Rethink subcontracts the manufacturing of its robots, which gives it less control over delivery scheduling and has possibly hindered its overall growth, cash flow, and profitability considerably. This could explain, in a very hot growth market, the less-than-enthusiastic take-up by new investors and the limited appetite for considerably increased investment from existing investors. In the coming months, however, I would expect Rethink to secure the additional US$15 million they seek, perhaps via Asian manufacturing partners; the region is becoming increasingly important for Rethink as they endeavor to capture a larger share of the co-bot market.

In addition to Rethink’s new investment, a very interesting relative newcomer to the industrial robotic manufacturing scene, the Advanced Robotics Manufacturing (ARM) Institute, a U.S. national public-private partnership, has announced funding of US$250 million.

The U.S. Department of Defense awarded the public-private Manufacturing USA institute to American Robotics, a nonprofit venture led by Carnegie Mellon, with more than 230 partners in industry, academia, government and the nonprofit sector across the U.S. The institute will receive $80 million from the DOD, and an additional $173 million from the partner organizations.

Based in Pittsburgh, ARM is led by a newly established national nonprofit called American Robotics, which was founded by Carnegie Mellon University and includes a national network of 231 stakeholders from industry, academia, local governments and nonprofits.

The mission of ARM is essentially four-pronged: 1) to empower American workers to compete with low-wage workers abroad; 2) to create and sustain new jobs to secure U.S. national prosperity; 3) to lower the technical, operational, and economic barriers for small- and medium-sized enterprises as well as large companies to adopt robotics technologies; and 4) to assert U.S. leadership in advanced manufacturing.

ARM’s 10-year goals include increasing worker productivity by 30 percent, creating 510,000 new manufacturing jobs in the U.S., ensuring that 30 percent of SMEs adopt robotics technology, and providing the ecosystem where major industrial robotics manufacturers will emerge.

These investments keep robotics on course to be one of the main investment areas for improving manufacturing productivity and indeed increasing jobs and corporate profitability.

The ARM investment sounds very similar to the EU’s public/private initiative announced in June 2014, albeit that is a EUR 2.8 billion initiative with a less ambitious, but very worthy, target of adding 240,000 new jobs.

 

Photo: ARM Institute impact

 

New White House report – Artificial Intelligence, Automation, and the Economy

Overview

A new report released by the White House indicates that accelerating Artificial Intelligence (AI) capabilities will enable the automation of some tasks that have long required human labor. The report’s authors indicate that these transformations will open up new opportunities for individuals, the economy, and society, but they will also disrupt the current livelihoods of millions of Americans. At a minimum, some occupations, such as drivers and cashiers, are likely to face displacement from or a restructuring of their current jobs.

The challenge for policymakers will be to update, strengthen, and adapt policies to respond to the economic effects of AI.

Although it is difficult to predict these economic effects precisely, the report suggests that policymakers should prepare for five primary economic effects:

  • Positive contributions to aggregate productivity growth;
  • Changes in the skills demanded by the job market, including greater demand for higher-level technical skills;
  • Uneven distribution of impact, across sectors, wage levels, education levels, job types, and locations;
  • Churning of the job market as some jobs disappear while others are created; and
  • The loss of jobs for some workers in the short-run, and possibly longer depending on policy responses.

More generally, the report suggests three broad strategies for addressing the impacts of AI-driven automation across the whole U.S. economy:

  1. Invest in and develop AI for its many benefits;
  2. Educate and train Americans for jobs of the future; and
  3. Aid workers in the transition and empower workers to ensure broadly shared growth.

Key points from the report

The authors state that while it is unlikely that machines will exhibit broadly applicable intelligence comparable to or exceeding that of humans in the next 20 years, machines should be expected to continue to reach and exceed human performance on more and more tasks.

AI should be welcomed for its potential economic benefits. However, there will be changes in the skills that workers need to succeed in the economy, as well as structural changes in the economy itself. Aggressive policy action will be needed to help Americans who are disadvantaged by these changes and to ensure that the enormous benefits of AI and automation are developed by and available to all.

Today, it may be challenging to predict exactly which jobs will be most immediately affected by AI-driven automation. Because AI is not a single technology, but rather a collection of technologies that are applied to specific tasks, the effects of AI will be felt unevenly through the economy. Some tasks will be more easily automated than others, and some jobs will be affected more than others—both negatively and positively. Some jobs may be automated away, while for others, AI-driven automation will make many workers more productive and increase demand for certain skills. Finally, new jobs are likely to be directly created in areas such as the development and supervision of AI as well as indirectly created in a range of areas throughout the economy as higher incomes lead to expanded demand.

Strategy #1: Invest in and develop AI for its many benefits. If care is taken to responsibly maximize its development, AI will make important, positive contributions to aggregate productivity growth, and advances in AI technology hold incredible potential to help the United States stay on the cutting edge of innovation. Government has an important role to play in advancing the AI field by investing in research and development. Among the areas for advancement in AI are cyberdefense and the detection of fraudulent transactions and messages. In addition, the rapid growth of AI has also dramatically increased the need for people with relevant skills from all backgrounds to support and advance the field. Prioritizing diversity and inclusion in STEM fields and in the AI community specifically, in addition to other possible policy responses, is a key part in addressing potential barriers stemming from algorithmic bias. Competition from new and existing firms, and the development of sound pro-competition policies, will increasingly play an important role in the creation and adoption of new technologies and innovations related to AI.

Strategy #2: Educate and train Americans for jobs of the future. As AI changes the nature of work and the skills demanded by the labor market, American workers will need to be prepared with the education and training that can help them continue to succeed. Delivering this education and training will require significant investments. This starts with providing all children with access to high-quality early education so that all families can prepare their students for continued education, as well as investing in graduating all students from high school college- and career- ready, and ensuring that all Americans have access to affordable post-secondary education. Assisting U.S. workers in successfully navigating job transitions will also become increasingly important; this includes expanding the availability of job-driven training and opportunities for lifelong learning, as well as providing workers with improved guidance to navigate job transitions.

Strategy #3: Aid workers in the transition and empower workers to ensure broadly shared growth. Policymakers should ensure that workers and job seekers are both able to pursue the job opportunities for which they are best qualified and best positioned to ensure they receive an appropriate return for their work in the form of rising wages. This includes steps to modernize the social safety net, including exploring strengthening critical supports such as unemployment insurance, Medicaid, Supplemental Nutrition Assistance Program (SNAP), and Temporary Assistance for Needy Families (TANF), and putting in place new programs such as wage insurance and emergency aid for families in crisis. Worker empowerment also includes bolstering critical safeguards for workers and families in need, building a 21st century retirement system, and expanding healthcare access. Increasing wages, competition, and worker bargaining power, as well as modernizing tax policy and pursuing strategies to address differential geographic impact, will be important aspects of supporting workers and addressing concerns related to displacement amid shifts in the labor market.

Finally, if a significant proportion of Americans are affected in the short- and medium-term by AI-driven job displacements, US policymakers will need to consider more robust interventions, such as further strengthening the unemployment insurance system and countervailing job creation strategies, to smooth the transition.

I will add detailed comments and my thoughts as I digest the full report in the coming days.

 

 

 

New US Robot report – “This is going to create a whole new economy”

The document focuses on autonomous vehicles, eldercare, manufacturing and more


 

The new U.S. Robotics Roadmap calls for better policy frameworks to safely integrate new technologies, such as self-driving cars and commercial drones, into everyday life.

The detailed document also advocates for increased research efforts in the field of human-robot interaction to develop intelligent machines that will empower people to stay in their homes as they age. It calls for increased education efforts in the STEM fields from elementary school to adult learners.

The roadmap’s authors, more than 150 researchers from around the nation, also call for research to create more flexible robotics systems that accommodate the need for increased customization in manufacturing, for everything from cars to consumer electronics.

The goal of the U.S. Robotics Roadmap is to determine how researchers can make a difference and solve societal problems in the United States. The document provides an overview of robotics in a wide range of areas, from manufacturing to consumer services, healthcare, autonomous vehicles and defense. The roadmap’s authors make recommendations to ensure that the United States will continue to lead the field of robotics, in terms of research innovation, technology, and policies.

We also want to make sure that research solves real life problems and gets deployed,

said Henrik I. Christensen, a computer science professor at the University of California San Diego and the document’s lead editor.

We need to make sure that we are making an impact on people’s lives. 

Unmanned vehicles and policy

The advances in the field of self-driving cars have far outpaced the predictions researchers made in the 2013 edition of the roadmap. But autonomous vehicles still have several obstacles to overcome, the researchers said. “It is important to recognize that human drivers have a performance of 100 million miles driven between fatal accidents,” Christensen said. “It is far from trivial to design autonomous systems that have a similar performance.”

Self-driving cars need to become more like industrial robots, which can run autonomously for three years without human intervention, he added. Also, the many methods and technologies used in the field of self-driving vehicles need to be resolved into a single standard. “Systems integration might not get a lot of press, but it is essential,” Christensen said.

Finally, local, state and federal agencies need to formulate policies and regulations that ensure these cars can share the road safely with vehicles driven by people. Regulations and policies also need to be put in place for unmanned aerial vehicles, better known as drones or UAVs. When this is done, UAVs could revolutionize the way we ship goods by air, monitor the environment—and much more. They could help first responders during natural disasters and terrorist attacks.

Researchers also need to get better at controlling swarms of UAVs and robots. “Currently, it takes a small group of people to run complex UAVs. This ratio needs to be inverted so that one person can control a small group of UAVs and other autonomous robots. Human-robot interactions should resemble the relationship between an orchestra conductor and musicians,” Christensen said. “Individual players need to be smart enough to take cues from the conductor and play on their own.”

Health care and home companion robots

A major wave of companion robots is about to enter the market, as the population of developed countries ages. For example, 50 percent of the Japanese population is over 50 years old. “We need to help the elderly stay in their homes,” Christensen said. “And robots can help us get there.”

To reach this goal, robots will need to have a better understanding of their surroundings and become more reliable. Existing systems are equipped with basic navigation methods. But long-term autonomy with little or no human intervention needs to be the goal. In addition, robotic home companions will need to be able to perform a wider range of tasks.

It is also essential that robots be easy enough to control so that they can be used by everyone. That means that home care robots, for example, need user interfaces that are no more complicated than a TV remote.

“This needs to be a moon shot for robotics research,” Christensen said.

Manufacturing

In recent years, the need to customize products such as cars has increased dramatically. For example, a high-end vehicle can feature millions of different options, from the color of its seats to the configuration of its electronics. As a result, manufacturers have turned to increasingly sophisticated technology to drive assembly lines. This in turn has brought many factories back to the United States. In the past six years, the U.S. manufacturing sector has added 600,000 jobs. “Tremendous growth in robotics doesn’t have to mean job losses,” Christensen said.

But this expansion of robotic systems in industry must overcome two major obstacles, the roadmap states. Researchers need to develop user interfaces that will allow workers to operate robotic systems with little or no training. In other words, user interfaces need to become more like video games, Christensen said.

Also, robots’ manipulation skills need to improve dramatically, to match at least the dexterity of a young child. Right now, the most advanced robots have the grasping abilities of a one-year-old, Christensen said.

An Industrial Internet and the Internet of Things

For all applications, the core challenge is flexible integration of robotic systems with human operators and collaborators.  Researchers envision an environment where physical systems are linked wirelessly via smart sensors and smart chips, within an industrial Internet of Things. This will make it easier for robots to navigate their environment and work with people. At the same time it is important to design these systems to be secure so that they cannot be hijacked or used in cyber attacks.

Amazon is at the forefront of this movement and owns 40 percent of application program interfaces, or APIs, related to IoT—which are open source, Christensen said. “This is going to create a whole new economy,” he said.

Education

Robotic systems will dramatically change everyday life, both in the home and at work, in coming years. As a result, the public and the workforce need to be trained to interact with these systems. Training needs to happen at all levels, from kindergarten through 12th grade and in trade schools, before college. But most education efforts need to be focused on kindergarten through 12th grade. Too many young people are dropping out of high school and will be left behind by this new economy based on robotics and the Internet of Things, Christensen said.

“We need to empower people to use robots,” he said. “We need to realize that most of the interfaces we design today for robotic systems aren’t easy to use.”

A shared robotics infrastructure

Researchers are also calling for a common, shared research infrastructure for robotics to be built in the United States. The research network would expand existing sites, with a focus on testing autonomous driving, medical and health care robotics, micro- and nanorobotics, agriculture robotics, UAVs, and underwater robotics. Each site would need about $3 million to be revamped into a shared facility.

The full report is available here…

AI not yet, but Machine Learning and Big Data are rapidly evolving

Solve problems

In his book Adventures in the Screen Trade, the hugely successful screenwriter William Goldman opens with the sentence: “Nobody knows anything.” Goldman is talking about predictions of what might and what might not succeed at the box office. He goes on to write: “Why did Universal, the mightiest studio of all, pass on Star Wars? … Because nobody, nobody — not now, not ever — knows the least goddamn thing about what is or isn’t going to work at the box office.” Prediction is hard: “Not one person in the entire motion picture field knows for a certainty what’s going to work. Every time out it’s a guess.” Of course, history is often a good predictor of what might work in the future and when, but according to Goldman, time and time again predictions have failed miserably in the entertainment business.

It is exactly the same with technology, and Artificial Intelligence (AI) has probably fared worse than any other technology when it comes to predictions of when it will be available as a truly ‘thinking machine.’ Fei-Fei Li, director of the Stanford Artificial Intelligence Lab, even thinks: “today’s machine-learning and AI tools won’t be enough to bring about real AI.” And Demis Hassabis, founder of Google’s DeepMind (and in my opinion one of the most advanced AI developers), forecasts: “it’s many decades away for full AI.”

Researchers are, however, starting to make considerable advances in soft AI. Yet, with the exception of fewer than 30 corporations, there is very little tangible evidence that this soft AI, or Deep Learning, is currently being used productively in the workplace.

Some of the companies currently selling and/or using soft AI or Deep Learning to enhance their services include: IBM’s Watson, Google Search and Google DeepMind, Microsoft Azure (and Cortana), Baidu Search led by Andrew Ng, Palantir Technologies, possibly Toyota’s new AI R&D lab if it has released any product internally, Netflix and Amazon for predictive analytics and other services, the insurer and finance company USAA, Facebook (video), General Electric, the Royal Bank of Scotland, Nvidia, Expedia, Mobileye, and, to some extent, the ‘AI-light’ powered collaborative robots from Rethink Robotics.

There are numerous examples of other companies developing AI and Deep Learning products, but fewer than a hundred early-adopter companies worldwide. Essentially, soft AI and Deep Learning solutions, such as Apple’s Siri, Drive.ai, Viv, Intel’s AI solutions, Nervana Systems, Sentient Technologies, and many more, are still very much in their infancy, especially when it comes to making any significant impact on business transactions and systems processes.

Machine Learning

On the other hand, Machine Learning (ML), a subfield of AI that some call light AI, is starting to make inroads into organizations worldwide. There are even claims that: “Machine Learning is becoming so pervasive today that you probably use it dozens of times per day without knowing it.”

Yet according to Intel: “less than 10 per cent of servers worldwide were deployed in support of machine learning last year (2015).” It is highly probable that Google, Facebook, Salesforce, Microsoft, and Amazon alone would have taken up a large percentage of that 10 percent.

ML technologies are already appearing in everyday services. Location-awareness systems such as Apple’s iBeacon software connect information from a user’s Apple profile to in-store systems and advertising boards, allowing for a ‘personalized’ shopping experience and the tracking of (profiled) customers within physical stores. IBM’s Watson and Google DeepMind’s Machine Learning have both shown how their systems can analyze vast amounts of information (data), recognize sophisticated patterns, make significant savings on energy consumption, and empower humans with new analytical capabilities.

The promise of Machine Learning is to allow computers to learn from experience and understand information through a hierarchy of concepts. Currently, ML excels at pattern and speech recognition and predictive analytics, and it is therefore very useful in search, data analytics, and statistics, provided there is lots of data available. Deep Learning helps computers solve problems that humans solve intuitively (or automatically by memory), like recognizing spoken words or faces in images.
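As a toy illustration of what ‘learning from experience’ means in practice, here is a minimal sketch using scikit-learn’s bundled handwritten-digit data: the program is given labeled examples rather than hand-coded recognition rules, and its accuracy is then measured on images it has never seen. The library and model choice are simply one convenient option, not anything specific to the systems discussed above.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# 8x8 grayscale images of handwritten digits, with their true labels.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "experience" is the training set; no digit-recognition rules are coded.
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

# Performance against a key metric: accuracy on unseen images.
print("test accuracy:", model.score(X_test, y_test))
```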

Neither Machine Learning nor Deep Learning should be considered an attempt to simulate the human brain, which is one goal of AI.

Crossing the chasm – not without lots of data

If driverless vehicles can move around with decreasing problems, it is not because AI has finally arrived. It is not that we have machines capable of human intelligence; rather, we have machines that are very useful in dealing with big data and able to make decisions under uncertainty in perceiving and interpreting their environment. But we are not quite there yet! Today we have systems targeted at narrow tasks and domains, not the promised ‘general purpose’ AI, which should be able to accomplish a wide range of tasks, including those not foreseen by the system’s designers.

Essentially there’s nothing in the very recent developments in machine learning that significantly affects our ability to model, understand and make predictions in systems where data is scarce.

Nevertheless, companies are starting to take notice, investors are funding ML startups, and corporations recognize that utilizing ML technologies is a good step forward for organizations interested in gaining the benefits promised by Big Data and Cognitive Computing over the long term. Microsoft’s CEO, Satya Nadella, says the company is heavily invested in ML and that he is: “very bullish about making machine learning capability available (over the next 5 years) to every developer, every application, and letting any company use these core cognitive capabilities to add intelligence into their core operations.”

The next wave – understanding information

Organizations that have lots of data know that information is always limited, incomplete, and possibly noisy. ML algorithms are capable of searching the data and building a knowledge base to provide useful information; for example, ML algorithms can separate spam emails from genuine emails. A machine learning algorithm is an algorithm that is able to learn from data; however, the performance of machine learning algorithms depends heavily on the representation of the data they are given.
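The spam example can be made concrete. Below is a minimal sketch, assuming a hypothetical four-email toy corpus: the crucial ‘representation’ step turns raw text into word-count vectors, and choosing a different representation (dropping word order, adding bigrams, and so on) would change what the classifier can learn.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical toy corpus; a real filter trains on many labeled emails.
emails = [
    "win a free prize now", "cheap loans click here",     # spam
    "meeting agenda attached", "lunch tomorrow at noon",  # genuine
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = genuine

# Representation: turn raw text into word-count vectors. The learner only
# ever sees these vectors, so the representation bounds what it can learn.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails)

classifier = MultinomialNB().fit(X, labels)
print(classifier.predict(vectorizer.transform(["claim your free prize"])))
```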

Machine Learning algorithms often work on the principle most widely known as Occam’s razor: among competing hypotheses that explain the known observations equally well, one should choose the ‘simplest’ one. The simplest consistent hypothesis can miss nuances that a human expert would catch, which in my opinion is why we should use machines to augment human labor, not to replace it.
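As a rough illustration of Occam’s razor in model selection, the sketch below (synthetic data, arbitrary parameters) fits polynomials of increasing degree to noisy quadratic observations and scores each by cross-validation; the idea is to prefer the lowest-degree model that explains the data well, since higher degrees add complexity without generalizing better.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Synthetic data: a quadratic signal plus noise (arbitrary parameters).
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 60).reshape(-1, 1)
y = 0.5 * x.ravel() ** 2 + rng.normal(0, 0.5, 60)

# Every degree >= 2 can explain the observations; Occam's razor says to
# prefer the lowest-degree (simplest) hypothesis that scores well.
for degree in range(1, 7):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    score = cross_val_score(model, x, y, cv=5).mean()  # mean R^2 over folds
    print(f"degree {degree}: mean CV score {score:.3f}")
```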

Machine Learning and Big Data will greatly complement human ingenuity: a human-machine combination of statistical analysis, critical thinking, inference, persuasion, and quantitative reasoning all wrapped up in one.

“Every block of stone has a statue inside it and it is the task of the sculptor to discover it. I saw the angel in the marble and carved until I set him free.” ~ Michelangelo (1475–1564)

The key questions businesses and policy makers need to be concerned with as we enter the new era of Machine Learning and Big Data are:

1) Who owns the data?

2) How is it used?

3) How is it processed and stored?

Update 16th August 2016

There is a very insightful Quora answer by François Chollet, Deep Learning researcher at Google, where he confirms what I have been saying above:

“Our successes, which while significant are still very limited in scope, have fueled a narrative about AI being almost solved, a narrative according to which machines can now “understand” images or language. The reality is that we are very, very far away from that.”


When machines replace jobs, the net result is normally more new jobs

Two of the current leading researchers in labor economics studying the impact of machines and automation on jobs have released a new National Bureau of Economic Research (NBER) working paper, The Race Between Machine and Man: Implications of Technology for Growth, Factor Shares and Employment.

The authors, Daron Acemoglu and Pascual Restrepo, are far from the robot-supporting equivalent of Statler and Waldorf, the Muppets who heckle from the balcony, unless you count as heckling their point that so many have overstated, without factual support, the argument that robots will take all the jobs:

Similar claims have been made, but have not always come true, about previous waves of new technologies… Contrary to the increasingly widespread concerns, our model raises the possibility that rapid automation need not signal the demise of labor, but might simply be a prelude to a phase of new technologies favoring labor.

In The Race Between Machine and Man, the researchers set out to build a conceptual framework showing which tasks previously performed by labor are automated, while at the same time more ‘complex versions of existing tasks’ and new jobs or positions, in which labor has a comparative advantage, are created.

The authors make several key observations showing that, as ‘low-skilled workers’ are automated out of jobs, the creation of new complex tasks increases wages, employment, and the overall share of labor. As jobs are eroded, new jobs or positions are created which require higher skills in the short term:

“Automation always reduces the share of labor in national income and employment, and may even reduce wages. Conversely, the creation of new complex tasks always increases wages, employment and the share of labor.”

They show, through their analysis, that for each decade since 1980, employment growth has been faster in occupations with greater skill requirements:

During the last 30 years, new tasks and new job titles account for a large fraction of U.S. employment growth.

In 2000, about 70% of the workers employed as computer software developers (an occupation employing one million people in the US at the time) held new job titles. Similarly, in 1990 a radiology technician and in 1980 a management analyst were new job titles.

Looking at the potential mismatch between new technologies and the skills needed, the authors crucially show that these new highly skilled jobs account for a significant share of the total employment growth over the period measured, as shown in Figure 1:

From 1980 to 2007, total employment in the U.S. grew by 17.5%. About half (8.84%) of this growth is explained by the additional employment growth in occupations with new job titles.

Figure 1

Unfortunately, we have known for some time that labor markets are “Pareto efficient”; that is, no one could be made better off without making anyone worse off. Thus Acemoglu and Restrepo point to research showing that when wages are high for low-skill workers, this encourages automation. That automation then leads to promotion or new jobs and higher wages for those with ‘high skills.’

Because new tasks are more complex, the creation may favor high-skill workers. The natural assumption that high-skill workers have a comparative advantage in new complex tasks receives support from the data.

The data shows that those classified as high-skilled tend to have more years of schooling.

For instance, the left panel of Figure 7 shows that in each decade since 1980, occupations with more new job titles had higher skill requirements in terms of the average years of schooling among employees at the start of each decade (relative to the rest of the economy).

Figure 7

However, it is not all bad news for low-skilled workers: the right panel of the same figure also shows a pattern of “mean reversion,” whereby the average years of schooling in these occupations declines in each subsequent decade, most likely reflecting the fact that new job titles became more open to lower-skilled workers over time.

Our estimates indicate that, although occupations with more new job titles tend to hire more skilled workers initially, this pattern slowly reverts over time. Figure 7 shows that, at the time of their introduction, occupations with 10 percentage points more new job titles hire workers with 0.35 more years of schooling. But our estimates in Column 6 of Table B2 show that this initial difference in the skill requirements of workers slowly vanishes over time. 30 years after their introduction, occupations with 10 percentage points more new job titles hire workers with 0.0411 fewer years of education than the workers hired initially.

Essentially, low-skill workers gain relative to capital in the medium run from the creation of new tasks.

Overall, the study shows what many have said before: there is a skills gap when new technologies are introduced, and those with the wherewithal to invest in learning new skills, whether through extra education, on-the-job training, or self-learning, are the ones who will be in high demand as new technologies are implemented.