Home

In Information Science I look at the interactions between people and technology: how technology shapes individual lives and social groups, and how the ways people use technology can, in turn, shape new developments.

Accompanying the progressive diffusion of the word “information” into so many fields, the emergence of studies of these information-related disciplines was only a matter of time. In fact, all of the information-related disciplines we see today sprang from inquiries into various types of information. Accordingly, three classical schools formed in the early history of information studies, during the 1950s–1980s: the information science originating in computer science, the information science originating in library science, and the information science originating in telecommunications. (In Japan, by contrast, the origin of modern information science is closely connected with journalism.) There are two reasons these are called the classical schools of information science: (a) they used the term “information science”; and (b) in most cases the term was used on its own, with no determiner or qualifier before it, in their articles, books, societies, conferences, and the names of departments and colleges. Naturally, researchers in these fields were often labelled “information science researchers”. The key problem, then, is how to identify which of them is the real information science.

“The most important problem for AI today is abstraction and reasoning.” — Francois Chollet, AI researcher at Google, quoted in Understanding the limits of deep learning, April 2nd, 2017

Turing-test-passing Artificial Intelligence (AI) is already here in various forms, and it will continue to evolve and be a boon for the tech sector, but we still have a long way to go. The AI Roadmap Institute outlines 29 Unsolved Problems in AI, many of which could be addressed with better abstraction. At the same time, AI is being developed for purely profit-squeezing, exploitative purposes and weaponized for war and law enforcement (as has already been the case since the introduction of the ignominious, oxymoronic “smart bombs”). A chorus of hype is drowning out the few critical voices that could help shape the peaceful manifestation of AI and develop its commercial and educational forms.

That’s where we — The Abs-Tract Organization — come in. Abstraction is the central concept in computer science. I generally define abstraction as “a conceptual process of complexity reduction that highlights the essential properties or first principles of a given object or idea,” while in computer science it takes more specific forms and refers to the nesting and ordering of information. Programmers will not shut up about it, but virtually all of that conversation is bounded by its own technical terminology and therefore cut off from philosophical abstraction. The core principle is the same, but there needs to be more consilience between abstraction in AI and the abstraction of critical thinking. We must bridge the conversation about abstraction between computer science and philosophy in order to humanize AI. Self-driving cars will save millions of dollars and lives, but do we know where we’re going?
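To make the computer-science sense of abstraction concrete, here is a minimal illustrative sketch in Python (the MessageChannel interface and its subclasses are hypothetical examples, not drawn from any particular codebase): an abstract interface exposes only the essential operation, while the concrete details stay nested beneath it.

```python
from abc import ABC, abstractmethod

class MessageChannel(ABC):
    """Abstract interface: callers see only the essential operation."""

    @abstractmethod
    def send(self, text: str) -> None:
        """Deliver a message; how that happens is hidden behind this interface."""

class EmailChannel(MessageChannel):
    def send(self, text: str) -> None:
        # Concrete detail the caller never needs to see.
        print(f"Emailing: {text}")

class SMSChannel(MessageChannel):
    def send(self, text: str) -> None:
        print(f"Texting: {text}")

def notify(channel: MessageChannel, text: str) -> None:
    # The caller reasons at the abstract level of "something that can send".
    channel.send(text)

if __name__ == "__main__":
    for channel in (EmailChannel(), SMSChannel()):
        notify(channel, "Abstraction hides the inessential.")
```

Nothing in notify changes when a new channel is added; separating the essential operation from the incidental details is the same move that philosophical abstraction makes with concepts.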

“I want to suggest that if a real artificial intelligence (AI) is going to be built, sociologists will have to play a major part in it.” — Randall Collins, Sociological Insight, 1992

The AI industry should emphasize and integrate existing knowledge from sociology in order to know what to teach AI and how to program it to think and act. Simultaneously, programmers and policy-makers alike need to learn how to abstract different types of content, not merely their own discourse, in order to come to the right consensus. As I’ve argued in The New Reproach of Abstraction, this process is blocked by a rebuke of complexity thinking, one that extends to reproaching sociology as well and takes various anti-intellectual forms.

How can we create artificial intelligence if we haven’t even mastered intelligence? As we’ve already seen with Microsoft’s Hitler-loving chatbot Tay, if AI takes its cues from public discourse, it’s going to be evil. It somehow needs to be smarter than us, smart enough not to destroy us, as humans are prone to do. Imagine a conflict-resolution AI that could synthesize a debate between Noam Chomsky and Sam Harris to such an extent that they would both concede ground and find consensus. Deep Blue is feeble-minded compared to such an AI.

But we can hardly program AI to reconcile our ideological spats if we don’t even understand how to do so ourselves. That’s why basic analogue abstraction has to be mastered first. Abstraction is not yet explicit enough in education and think tanks to catalyse the dramatic shift in perspective that it implies. The above quotes by Francois Chollet and Randall Collins point to where the necessary innovation lies (particularly in abstraction and sociology), and to the type of think tank The Abs-Tract Organization strives to be: one dedicated to understanding abstraction as a varied but universal cognitive problem-solving process, in order to help humanize AI and solve all social problems abstractly. The flow diagram below is a simplified illustration of the higher efficiency of abstract problem-solving.

Source: Information 2011, 2, 510–527; DOI: 10.3390/info2030510