February 27, 2020
"When you look at the list of leading research institutes and universities in artificial intelligence research, Japan is nowhere to be found," points out Sadaoki Furui, Chair of the Board of Trustees of the Toyota Technological Institute at Chicago (TTI-Chicago). "Japanese researchers in this field have very little presence." His scathing comments came during the Nagoya University Informatics Symposium 2020, "Value Creation by Artificial Intelligence Technology and the Mission of Informatics," which took place at the end of January. In his special lecture, "The Teaching and Research of Artificial Intelligence in Japan, Seen from the US Perspective," Furui, who is renowned for his research into speech information processing, stated: "Universities must do more to nurture young researchers and give them opportunities to make their mark."
Meanwhile, Associate Professor Minao Kukita of the Graduate School of Informatics, the last speaker of the symposium, approached the subject from a philosopher's perspective. His talk, titled "Artificial Intelligence Is a Message," highlighted the concern that constant exposure to that message might change people's perception of humanity.
They both raise some important issues relating to artificial intelligence (AI), albeit from completely different perspectives. These issues also represent major challenges for universities: what role will universities, and Meidai in particular, be required to play?
Why are Japanese universities not among the top players in AI research? The subtitle of Furui's talk hints at the problem: "Get out of the silos!" TTI-Chicago is a graduate university specializing in informatics, with particular strength in computer science and AI research. Drawing on his in-depth knowledge of US research universities and their AI research, Furui singles out the segmented, silo-like structure of Japanese universities as the root cause. In Japan, specialisms and laboratories operate independently, with very little communication between them. This does not work well in the field of AI, where frontier research tends to be carried out by teams rather than individuals. Preprint repositories receive as many as 100 new AI papers a day, and it is essential that researchers work in groups so that they can divide up the reading, discuss their findings, and write papers together. Researchers working on their own can never compete against the sheer manpower of organized teams.
Data science education is the foundation of AI research, and Japan's Ministry of Education, Culture, Sports, Science and Technology (MEXT) is trying to address this issue at multiple levels, but the scale of the efforts of American universities is beyond comparison. For example, the Massachusetts Institute of Technology (MIT), one of the leaders in the field and second only to Stanford University in the number of published papers on AI, has invested the equivalent of 100 billion yen in data science teaching across the entire university. This massive investment, on a scale incomparable to anything we have seen in Japan, is said to have been funded largely through donations. The money has enabled MIT to hire 50 new faculty members: around half are AI experts working in wide-ranging fields such as physics and chemistry, while the other half are computer science experts.
I remember MIT doing something like this before. In the 1990s, the institute predicted that the 21st century would be an age of life sciences and spent many years preparing for that future by making life science learning compulsory across all its schools and departments, including the School of Engineering. When MIT sees something as essential foundational knowledge, it makes sure that all its students are taught it. This time, with data science, the scale is even bigger than the 1990s precedent, a sign of the strength of MIT's commitment.
There must be many bright young people in Japan who also have the potential to lead the field, says Furui, who stresses the urgency for universities to provide the right environment for them, especially at the doctoral level. It is important, he says, that these talents are released from silos and allowed to conduct collaborative research in the real sense of the word, across the boundaries of departments and even universities.
Hiroshi Murase, Dean of the Graduate School of Informatics, concedes Furui's point that Japanese universities are facing challenges, not least the decline in the number of students enrolling in doctoral courses, but argues that Japan can approach research in its own unique way. We are in the midst of what is called the third AI boom, with the US and China leading the way in massive worldwide investment in the field. If Japan simply follows their lead, it cannot possibly match their financial muscle. However, there is no guarantee that the current trend will continue unchanged; the history of academia is littered with booms followed by cool-downs, only to be replaced by new booms. Murase considers it important that research efforts be based on long-term visions that look beyond the current boom and towards the future of technology.
Take the example of two Japanese researchers who were there at the very beginning of the current AI boom but have come to prominence only recently. The spotlight fell on them when Yann LeCun of New York University, one of the driving forces behind the boom, revealed that his major sources of inspiration were papers he had stumbled upon around 1980, when hardly any papers on AI were coming out of the US or Europe after the first boom of AI research had subsided. They were written by mathematical engineering researcher Shun-ichi Amari, Professor Emeritus of the University of Tokyo and an early pioneer of human cognition modeling, and by Kunihiko Fukushima, who was working on the development of image pattern recognition technology at the NHK Science & Technology Research Laboratories. Fukushima's technology never came anywhere near the commercial product stage, as computers did not have enough processing power back then, but he quietly persisted with the research despite the lack of interest, and it laid the foundation of today's image recognition technology.
Despite that, it was a group led by Geoffrey Hinton, Professor Emeritus of the University of Toronto and a mentor of LeCun, that took up Amari's and Fukushima's ideas, continued the work based on them, and arrived at the concept of deep learning, which triggered the current boom in AI research.
Murase, who joined NTT's research institute in 1980, in the depths of the AI research winter, and worked on image and pattern recognition, remembers how Fukushima cut a solitary figure at academic conferences, persisting alone with research few others cared about. The two pioneers' achievements led nowhere in Japan.
Murase moved to Meidai in 2003 and has since seen the highs and lows of the information technology field. In 2006, when the scandal of senior high schools not teaching compulsory subjects came to light, it was revealed that information studies had been one of the first subjects dropped by schools. There was also a period when information studies departments at universities were unpopular across the world. The reversal of fortunes from those days to today's boom is remarkable.
Given the difficulty of predicting how technologies will develop, it is essential that at least 20-30% of research effort be spent on fields that are unfashionable and out of the limelight. This is a role universities must take up, and one, Murase says, that is particularly important for a university like Meidai, with its unique location in Nagoya, an industry-intensive city some distance from Tokyo.
Murase's talk of Meidai's unique role reminds me of what Professor Emeritus Hiroyuki Sakaki of the University of Tokyo, a former president of the Toyota Technological Institute and a renowned expert in semiconductor research, once said. He was commenting on the development of the blue LED by University Professor Isamu Akasaki and Professor Hiroshi Amano. The material they chose was gallium nitride (GaN), which many researchers had deemed too difficult to work with and given up on. Apparently, a professor at the University of Tokyo had also been working on this material until around 1980, but when he retired, none of his pupils was willing to take up his mantle, and the research died out at the University of Tokyo. This coincided with Akasaki's return to Meidai from Matsushita Electric's research lab in 1981. Akasaki, ably assisted by the young Amano, persisted with the research until it bore fruit. Sakaki attributes this success to "Meidai's flexibility that allowed a professor with an industry background to come on board, which was rare back then, and supported his long-term research."
This example, I believe, highlights Meidai's unique environment that nurtures original research.
Another point Murase highlights as we consider the future of AI research is that AI is not a purely engineering matter. AI has the potential to transform our society dramatically. This means that all fields of study associated with different aspects of society have an important role to play. For example, how do we guide AI's learning? This is a question closely aligned with human learning, one that educationalists have long been exploring. Another example is labor: the way people work may change if AI is tasked with demand forecasting and workers are required to adjust their work patterns as dictated by AI. How can workers be protected in such a situation? This may well become a matter for legal systems. The Nagoya University Graduate School of Informatics, established in 2017, is designed to be a place for academics from humanities and social sciences disciplines to conduct research alongside engineering researchers.
Kukita - the philosopher I mentioned earlier who raised the issue of AI's message in his talk - is one of them.
Kukita sees AI as a revolutionary tool for the statistical risk analysis of humans; to him, it is a game-changing tool that makes it possible to predict each person's risk- and/or benefit-generating potential from human big data. The big US tech giants collectively known as GAFA (Google, Apple, Facebook, Amazon) are voraciously gathering information and making massive investments in research precisely because even a small improvement in the accuracy of predictions can yield enormous profits.
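To make this kind of statistical scoring concrete, here is a toy sketch. It is not any company's actual model; the features, weights, and bias are entirely made up for illustration. It shows only the basic shape of such systems: a handful of numbers describing a person is collapsed by a logistic function into a single "risk" probability.

```python
import math

def risk_score(features, weights, bias):
    """Logistic model: map a person's feature vector to a probability in (0, 1)."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical features: [late payments, years at current address, recent applications]
weights = [0.8, -0.3, 0.5]   # made-up coefficients, for illustration only
bias = -1.0

applicant = [2, 5, 1]
print(f"predicted risk: {risk_score(applicant, weights, bias):.2f}")
```

In a real system the weights would be fitted to millions of records rather than set by hand, but the structure is the same: the person is reduced to whatever the features happen to capture, and the output is a probability, never a certainty.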
In Kukita's words, as AI plays this role, it sends out a message: "other people are a risk to you, and this risk can be estimated in advance and avoided." When we are constantly exposed to this implicit message without realizing it, he fears, human society itself may start treating people as risks and viewing them in terms of the profits and losses they might bring. This may even lead to the exclusion of individuals judged to be detrimental to society. The problem is that AI can make mistakes, which is only natural, since its judgments are based on statistical probability. Being human means we sometimes buck expectations, but once AI makes a decision, it can be difficult to overturn.
This issue is already manifesting itself in the US. Mathematician Cathy O'Neil, whose experience of working at a hedge fund made her aware of the danger of allowing decisions to be made based on big data, warns about it in her book titled Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. The message of the title, a pun on weapons of mass destruction, is clear.
One of the examples in the book concerns public schools in Washington D.C., which decided to use AI to improve their education standards. Teachers who scored poorly in the evaluation conducted by AI were dismissed, but among those fired was a teacher whom everyone rated highly. The teacher appealed the decision, but the reason for the dismissal was never explained, and the decision was not reversed. A follow-up investigation eventually found that the teacher's students had come from an elementary school that artificially inflated children's scores; measured against that inflated baseline, the students' improvement under the teacher looked worse than it really was. In the end, the teacher was hired by a wealthy private school that did not use AI for teacher evaluation, and the public school system lost a talented teacher. A weapon of math destruction destroyed the career of a teacher, and cash-strapped public schools ended up worse off. O'Neil points out in the book that decisions made by weapons of math destruction tend to make the rich richer.
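The mechanics of that distortion are easy to see with a toy calculation. The formula and numbers below are purely illustrative, not the district's actual model: a naive "value-added" metric credits the teacher with the difference between a student's starting and ending scores, so an artificially inflated starting score turns genuine growth into an apparent decline.

```python
def value_added(start_score, end_score):
    # Naive value-added metric: attribute all score change to the current teacher.
    return end_score - start_score

true_start = 60       # what the student actually knew on arrival
inflated_start = 75   # the score reported by the previous school
end_of_year = 70      # the score after a year with the new teacher

print(value_added(true_start, end_of_year))      # genuine growth under the teacher
print(value_added(inflated_start, end_of_year))  # apparent decline blamed on the teacher
```

The teacher taught the same class either way; only the baseline changed. That is why a single upstream data error could end a career without anyone downstream being able to explain the decision.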
Meanwhile in Japan, a recent news report revealed that a job information company had been providing employers with predictions of the probability that applicants would turn down their job offers. Problems like these are certain to keep cropping up.
What possible changes will AI bring to our society? What do we need to do to ensure that it does not make our society unjust? Kukita believes that it is the job of universities to gather the insights of diverse experts in order to explore these questions and warn the wider society about potential risks.
Another aspect we need to consider is what sort of education students need as they face the age of AI. Last November, Professor Kazuhisa Todayama, a philosopher and Director of the Nagoya University Institute of Liberal Arts and Sciences, gave a thought-provoking talk titled "Education for Future Generations Facing a Life with Artificial Intelligence." The first point he made was that, while it is difficult to predict the future, it is likely that many jobs will be lost to AI and that the nature of the remaining jobs will change. For example, once AI becomes proficient at diagnosing diseases, doctors may be left only with the role of keeping up patients' morale and taking responsibility for decisions. In fields such as research and art, only the most maverick geniuses among researchers and artists may survive as AI outperforms everyone else. AI and machines may eventually be able to do most ordinary jobs.
What will the role of universities be as we face such a future? To Todayama, it is "to produce people who are able to design and bring about a society as good as it can be while coexisting with AI." In such a future, he suggests, humans may actually be able to be more human.
This is a challenge that universities must tackle with all their collective strength, breaking down the barriers between academic disciplines.
Todayama ended his talk with the words of Alan Kay, who is widely regarded as the father of the personal computer: "The best way to predict the future is to invent it."
I would like to follow his example here.
Let's make Meidai a university that invents the future!