China is doing far more than talking about AI.
In 2017, China's national government announced (http://www.gov.cn/zhengce/content/2017-07/20/content_5211996.htm) that it wanted to make the country and its industries world leaders in AI technologies by 2030.
This facial recognition system (DeepFace) was developed in 2014 using deep learning, a machine-learning technique; machine learning is itself a subfield of AI.
The DeepFace system consists of a nine-layer artificial neural network (ANN) with more than 120 million connection weights and was trained on 4 million images uploaded by Facebook users.
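To make "layers" and "connection weights" concrete, here is a minimal sketch of a fully connected network in Python with NumPy. The layer sizes are invented for illustration; the real DeepFace architecture is far larger and uses convolutional and locally connected layers, which this toy example does not attempt to reproduce.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative layer sizes only (not DeepFace's): an input of 128 features,
# two hidden layers, and an 8-way output.
layer_sizes = [128, 64, 32, 8]

# Each layer contributes a weight matrix (the "connection weights")
# and a bias vector.
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    """Propagate an input through the network, layer by layer."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(0, x @ W + b)      # hidden layers: ReLU activation
    return x @ weights[-1] + biases[-1]   # final layer: linear scores

# Count the connection weights (trainable parameters) in this toy network.
n_params = sum(W.size + b.size for W, b in zip(weights, biases))

x = rng.standard_normal(128)
scores = forward(x)
print(scores.shape)  # (8,)
print(n_params)      # 10600 — DeepFace has over 120 million
```

Training would adjust these weight matrices by gradient descent on labeled face images; only the forward pass is sketched here.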
In the EU, the European Commission is taking a multi-pronged approach to fostering the responsible development and deployment of AI.
In addition to public investment in research and the promotion of public-private partnerships, the EU has brought together experts from various disciplines in a European AI High-Level Expert Group, of which I am a member. The group is tasked with developing ethical guidelines that will consider principles such as values, explainability, transparency, and bias, and with recommending policies, including funding and infrastructure, to support AI in Europe.
The significant attention paid to AI in the popular press in recent years has led to growing uncertainty among nonexperts about what AI can and cannot do, and about the consequences of ever more capable AI.
The general public wonders whether AI is going to take away their jobs or maybe even take over the world.
The CBI is detecting that a major shift is on the horizon as businesses of all sizes look to unlock the potential of AI.
Over the next five years, AI holds the top spot as the technology set to impact companies across all sectors.
It is estimated that in 2017 alone, companies globally completed around £15bn in mergers and acquisitions related to AI.
The temptation to rush into a project you have not clearly thought through is heightened when your C-suite colleagues take an active interest in AI.
If you are a CTO/CIO, avoid launching AI initiatives just so that your team can be seen to be doing something in this hot area.
AI should not be weaponised, and any AI must have an impregnable "off switch." Beyond that, we should regulate the tangible impact of AI systems (for example, the safety of autonomous vehicles) rather than trying to define and rein in the amorphous and rapidly developing field of AI.
I propose three rules for artificial intelligence systems that are inspired by, yet develop further, the "three laws of robotics" that the writer Isaac Asimov introduced in 1942: (1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) a robot must obey the orders given it by human beings, except when such orders would conflict with the previous law; and (3) a robot must protect its own existence as long as such protection does not conflict with the previous two laws.
Businesses, apps, smartphones, hospitals, defense, cars: everything is already moving towards automation through AI.
While there is a great deal that true AI will be able to achieve in the future, for now let's talk about present-day AI and some of its applications in different fields of life.
New jobs associated with the boost in global business revenues could exceed 800,000 by 2021, surpassing the number of jobs lost to AI-driven automation.
Underpinning this adoption of AI, 46 per cent of AI adopters report that more than 50 per cent of their CRM activities are executed in the public cloud.
Machine learning and natural language processing will be the other top application areas of AI.
They discussed what to call their work and finally concluded it should be called AI; without defining the term precisely, they named their research area accordingly.