Recently OpenGov had the opportunity to have a chat on artificial intelligence with Prof. Chengqi Zhang, Distinguished Professor at the University of Technology Sydney (UTS). He is also an Honorary Professor at the University of Queensland and an Adjunct Professor at the University of New South Wales. He has occupied the positions of the Chairman of the Australian Computer Society National Committee for Artificial Intelligence since November 2005, and the Chairman of IEEE Computer Society Technical Committee of Intelligent Informatics (TCII) since June 2014.
Prof. Zhang’s key areas of research are data mining and its applications. His work focuses on fundamental research, as well as industry collaborations to solve problems through AI applications. He has published around 300 refereed papers, edited 16 books, and published seven monographs.
Prof. Zhang started with an overview of the history of AI, demonstrating that AI technologies have not simply appeared out of the blue over the last few years. The journey started with a grand vision of emulating human intelligence. The Turing Test, proposed by Alan Turing in 1950, sought to test through interviews a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human.
Around 1960, the General Problem Solver was created, which could be applied to "well-defined" problems such as proving theorems in logic or geometry, word puzzles and chess.
The next stage was expert systems, in which the knowledge of human experts was fed into the system so that it could solve expert-level problems, ones difficult enough to require significant human expertise within a particular domain. The next stage of development was autonomous agents and multi-agent systems. Multi-agent systems have sensors, which were absent in expert systems. Then came data mining and machine learning, where the system learns from the real world.
All of these can be classified as ‘symbolic’ AI. Symbolic AI is the collective name for all methods in AI research that are based on high-level ‘symbolic’ (human-readable) representations of problems, logic and search. The problem and domain knowledge are explicitly encoded in the algorithms used.
The other stream is called connectionist AI, exemplified by neural networks. A neural network is an interconnected group of nodes, modelled on the vast network of neurons in the brain. Such systems can be used in areas where traditional rule-based programming is not very successful, and their performance improves progressively as they consider more examples and process more data.
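The contrast with rule-based programming can be made concrete with a toy sketch. The code below is a single artificial neuron (a perceptron), the simplest building block of a neural network; it is purely illustrative, and the task (learning logical AND) and all parameter values are chosen for this example rather than taken from anything Prof. Zhang described.

```python
# A single artificial neuron (perceptron): weighted inputs, a bias,
# and a step activation. Real neural networks interconnect many such
# nodes in layers; this toy shows the "learn from examples" idea.
def predict(weights, bias, inputs):
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

def train(examples, epochs=20, lr=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            # Nudge the weights in proportion to the prediction error.
            error = target - predict(weights, bias, inputs)
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Learn the logical AND function purely from labelled examples,
# rather than from explicitly encoded rules.
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train(examples)
print([predict(weights, bias, x) for x, _ in examples])  # [0, 0, 0, 1]
```

No rule for AND is ever written down; the behaviour emerges from repeated exposure to examples, which is the essence of the connectionist approach.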
Evolutionary computation is another example. In evolutionary computation, an initial set of candidate solutions is generated and iteratively updated. Each new generation is produced by removing less desired solutions, and introducing small random changes. In biological terminology, a population of solutions is subjected to natural selection (or artificial selection) and mutation. Over iterations, the population will gradually evolve to increase in fitness, in this case the chosen fitness function of the algorithm.
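The selection-and-mutation loop described above can be sketched in a few lines. This is a minimal, hypothetical example: the fitness function (counting ones in a bit-string) and all population sizes are invented stand-ins for whatever a real application would use.

```python
import random

random.seed(42)

# Toy evolutionary algorithm: evolve bit-strings towards all ones.
GENES, POP, GENERATIONS = 20, 30, 60

def fitness(candidate):
    # Hypothetical fitness function: the number of 1s in the string.
    return sum(candidate)

def mutate(candidate, rate=0.05):
    # Introduce small random changes (mutation).
    return [1 - g if random.random() < rate else g for g in candidate]

# Initial set of candidate solutions, generated at random.
population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    # Selection: keep the fitter half, remove less desired solutions.
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP // 2]
    # Next generation: survivors plus mutated copies of them.
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]

best = max(population, key=fitness)
print(fitness(best))
```

Because the fittest individuals survive unchanged and only copies are mutated, the best fitness in the population never decreases, and over the generations it climbs towards the maximum of 20.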
Prof. Zhang said that the connectionist form of AI works very well in certain areas, but for reasons we cannot fully explain.
These two streams of symbolic and connectionist AI are now coming together in data-driven AI. All the earlier AI was knowledge-driven. This is a big shift.
The algorithms have been around for a long time, but the computing power was not available. Now we have big data, the computing power and deep learning algorithms. Consequently, the applicability of AI is growing by leaps and bounds.
“For example, if you look at facial recognition now, success rate is fairly high, like 97 per cent or more. That you can use in real life. That’s why AI now is not only in the laboratory. It is impacting our lives in the real world. That’s why the Chinese government put out a national strategy. You cannot afford to fall behind,” Prof. Zhang said.
Another example is translation, where Google or Baidu can now do 90% of the job. Prof. Zhang pointed out that, thanks to science fiction, people usually think of robots when they hear of AI. In reality, robotics is a subset of AI, and AI in many manifestations is already impacting our lives: speech recognition, natural language processing, facial recognition and product recommendations.
Businesses are also benefiting from the use of AI. For instance, large supermarket chains like Walmart can maintain extraordinarily low inventory levels because they can predict purchase patterns.
The ‘black box’ problem of deep learning
We do not understand how deep learning algorithms arrive at their conclusions. It is a black box, which hampers trust among users, whether organisations (such as national security agencies in government) or individuals (autonomous cars or AI-assisted medical care). It can also pose ethical dilemmas and legal issues. For instance, if an AI-based system is used for predictive policing and sentencing, its opacity can pose a major problem.
We asked Prof. Zhang for his views on the subject. He replied that whether the black box nature is a problem or not depends on the application.
For example, if an AI system is being used to improve efficiencies or boost revenue by 5% or 10%, then the black box nature of deep learning might not matter much. But if it is a matter of human lives, in an area such as medicine, then transparency becomes important.
Prof. Zhang said that we might actually be able to simulate the thinking of the human brain through quantum computing, if one day we understand the brain and have a working quantum computer. That could be the next generation of AI, and it would be explainable.
Case study: Optimising budget expenditure on disease prevention for the Australian Department of Health
A team under Prof. Zhang worked with the Department of Health in Australia to find the optimal setting for budget expenditure on disease prevention.
If the government spends more on frequent health testing, it can save far larger treatment expenditures in the future through earlier detection of health issues. However, this can also lead to excessive, unnecessary testing. And policymakers cannot wait years for feedback on a policy; by then the damage may already have been done.
So, AI systems were used to conduct simulations to evaluate policy before implementing it in the real world. The process starts with data mining. Once the information, the cohorts and classifications are in place, machine learning is applied and predictions are made.
The results might not be 100% accurate, but they indicate which policies would be more beneficial. The accuracy of the predictions is used as feedback for the system.
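The evaluate-before-implement loop can be sketched in miniature. Everything below is hypothetical: the budget figures, the candidate policies and the deliberately simple nearest-neighbour "model" are invented stand-ins for the real data mining and machine learning the team used.

```python
# Hypothetical (testing_budget, total_cost) observations; the last
# pair is held out to measure prediction accuracy.
data = [(1.0, 9.0), (2.0, 6.5), (3.0, 5.2), (4.0, 5.0), (5.0, 5.6)]
train, held_out = data[:-1], data[-1]

def predict(budget, observations):
    # 1-nearest-neighbour prediction: a deliberately minimal stand-in
    # for the machine-learning step.
    nearest = min(observations, key=lambda point: abs(point[0] - budget))
    return nearest[1]

# Evaluate candidate policy settings in simulation, before any
# real-world rollout.
candidates = [1.5, 2.5, 3.5, 4.5]
best_policy = min(candidates, key=lambda b: predict(b, train))
print(best_policy)  # 4.5 -- the lowest predicted total cost

# Feedback: how accurate is the model on unseen data?
error = abs(predict(held_out[0], train) - held_out[1])
print(round(error, 1))  # 0.6
```

The held-out error is the feedback signal the passage mentions: as the model's predictions improve, more trust can be placed in its policy rankings.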
The results could be further improved with additional open government data from more fields and improved cross-departmental data sharing. However, it is important to take care of privacy and security while opening up or sharing data.
Privacy concerns often prevent data custodians in governments from opening up data. Prof. Zhang said that both policy measures and technology will play a role in preserving privacy. Policy can help define access control and impose responsibilities and penalties for violation.
Aggregated, anonymised data can be linked with data from other sources, resulting in the recovery of personally identifiable information. Technologies like differential privacy can help prevent re-identification of individuals from linked data sets. Differential privacy aims to maximise the accuracy of queries on statistical databases while minimising the chances of identifying individual records.
It works by adding randomised noise to the data, protecting individual entries without significantly changing aggregate results, and ensuring that an attacker can learn virtually nothing more about an individual than they would if that person's record were absent from the dataset.
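The noise-adding idea can be illustrated with the Laplace mechanism, a standard differential privacy technique (named here as an example; the article does not specify which mechanism any particular system uses). The records and the epsilon value below are invented for illustration.

```python
import math
import random

random.seed(7)

def laplace_noise(scale):
    # Sample Laplace(0, scale) via inverse transform sampling.
    u = random.random() - 0.5
    sign = 1 if u >= 0 else -1
    return -scale * sign * math.log(1 - 2 * abs(u))

def private_count(records, condition, epsilon=0.5):
    # A counting query has sensitivity 1: adding or removing one
    # person changes the count by at most 1. Noise scale is
    # sensitivity / epsilon, so smaller epsilon means more privacy.
    true_count = sum(1 for r in records if condition(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical patient ages: the noisy count stays useful in
# aggregate, yet reveals almost nothing about any one record.
ages = [23, 37, 41, 52, 58, 61, 64, 70]
noisy = private_count(ages, lambda age: age >= 60)
print(noisy)  # roughly 4, give or take the added noise
```

Because an attacker sees only the noisy answer, the result would look essentially the same whether or not any single person's record were in the dataset, which is exactly the guarantee described above.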
AI vs humans?
The above use case is an example of AI assisting in a policy decision. Prof. Zhang cautioned that AI cannot create a policy framework; that still has to be done by humans.
AI seems to be getting cleverer by the day, and current AI systems can create simple models. For complicated models that need significant creative thinking, humans are still required. For example, an AI may generate five or ten models, and human input would then be needed to improve on them. The machine might get it 95% right, but we humans still need to do a final check. AI can help humans do a better job, but it cannot replace them.
Prof. Zhang was a guest speaker at a recent OpenGov Breakfast Insight session on Artificial Intelligence.