The already vast quantity of information available on the web grows exponentially every year. According to IBM, we produce more than 2.5 quintillion bytes, or 2,500,000 terabytes, of data every day. In fact, according to their website, more than 90% of the data available on the Internet today was generated in the last couple of years alone (IBM, 2017). This enormous and opaque sea of information is not only difficult to navigate; it is also strictly guarded and centralized in the hands of a few megacorporations (Google, Microsoft, Amazon, etc.), which exercise enormous control over the content created, with little to no transparency.
Consider how the technologies behind the online services that help us search through this information have become tightly kept trade secrets. While the algorithms at work behind the scenes are unknown to us, they are, arguably, also quite archaic. Search engines reach only approximately 25% of the data available on the web, while offering users the possibility to search for information only by keyword, through the filter of their own rigid, static and secret search algorithms. Simply put, users are not entirely in control: the technology is. This technology also appears to be working not in the best interests of the people, but in those of the big tech companies. As Pierre Lévy puts it in his book, The Semantic Sphere 1: Computation, Cognition and Information Economy, how can we best find the needle of relevant information in a gigantic haystack of digital data? How do we measure its value in a way that is transparent? How can we find the information that has the greatest value to us? How can we, as human beings, use the resources already available in order to enhance our collective intelligence? (Lévy, 2011, p. 157)
Because of their inherent nature, none of the current search engines will help you answer these questions effectively. The information is there, but it has not been properly mapped. We have reached a point very similar to one centuries ago, explains Lévy, when traditional methods of organizing information went through drastic changes, leading to the libraries we know today (Lévy, 2011, p. 109). While the size of our collective memory grows exponentially, new and more effective methods of organizing information must be developed. For Lévy, it is imperative that this collective memory be organized and presented in a way that makes it explicit, if we are indeed to make use of our collective intelligence.
There can be no intelligence without memory, argues Lévy. While the Internet brings us access to the vast majority of contemporary human knowledge, “nothing offers us a readable image of the functioning of our collective intelligence” (Lévy, 2011, p. 162). It is for this reason that the creation of a metalanguage for the explication of knowledge in the human sciences has become necessary. This metalanguage, or pivot between languages, would allow for the automatic categorization of ideas across natural languages, drawing from the data already available on the web today. For Lévy, this would be made possible by the creation of the Information Economy Meta Language (IEML), a project Lévy has been working on and developing, in collaboration with other experts, for the last decade or so.
To understand the potential and protocols behind the Information Economy Meta Language, one must look at Pierre Lévy’s visionary book published in 1994, Collective Intelligence: Mankind’s Emerging World in Cyberspace. In the book, the philosopher describes a near future where humans’ potential for collective intelligence, enabled by new information technologies, will make collaboration between people possible without the usual hindrances caused by physical space or traditional societal hierarchies. In his words, “cyberspace could become the most perfectly integrated medium within a community for problem analysis, group discussion, the development of an awareness of complex processes, collective decision-making, and evaluation.” (Lévy, 1997, p. 59) While at the time some decried such an idea as utopian, looking around us today, nothing could be further from the truth. Simply put, evidence of collective action enabled by technology abounds. One good example is, of course, Wikipedia. Although Wikipedia relies on a dedicated core of supporters to keep the website running, it is hard to deny that the site demonstrates humans’ potential to work together constructively and horizontally.
IEML: How does it work?
The first chapters of the book, The Semantic Sphere 1: Computation, Cognition and Information Economy, present us with the general nature and structure of information. Lévy explains that the use of our “digital memory common to all humanity”, whose nature I described earlier, has been drastically hindered by a number of technical and sociological factors. These problems involve issues of classification, linguistic differences and cultural fragmentation, hindering our capacity to efficiently tap into this vast reservoir of knowledge. This matters because, as Lévy argues, “the level of human development of a community and the cognitive power of the creative conversation that drive it are interdependent” (Lévy, 2011, p. 192). As a solution, Lévy proposes the creation of a semantic coding, a new digital language that would possess qualities the natural languages do not inherently have. This semantic coding would act as a pivot between natural languages, enabling the automatic organization of ideas according to their relevance to their respective communities.
As a system, the IEML semantic sphere would “make the greatest possible number of operations on concepts and their semantic relationships automatically calculable” (Lévy, 2011, p. 345), making automatic connections between ideas and concepts possible. Such a system for encoding meaning allows for the creation of a Hypercortex, a sort of scientific observatory, one that is “capable of reflecting human collective intelligence by using the storage and calculation power of the digital medium” (Lévy, 2011, p. 275), all the while drawing from the already vast quantity of data on the web today.
For Lévy, it is through this interchangeability of semantics, enabled by the common metalanguage, that a form of open universalism becomes possible, one no longer blocked by the obstacles of the past. We can see here how this links back to the premise of Lévy’s book on collective intelligence. It is indeed difficult to imagine collaboration on a global scale, in real time, when everyone uses a different system of symbols to communicate. According to Lévy, the semantic sphere would enable such collaboration to take place, while allowing everyone the possibility to communicate in their respective languages. Meanwhile, says Lévy, we should not underestimate the potential that computers have to augment, rather than replace, our intelligence. Rather than solely focusing on artificial intelligence and machines that think for us, he argues, we should also aim for these machines to “increase our individual and social power in information processing, communication and reflection” (Lévy, 2011, p. 195), much as we already do with other multimedia applications. The effect of such an augmented-intelligence program enabled by IEML would be to “increase the reflexivity of human intelligence” (Lévy, 2011, p. 199) as well as its symbolic cognition, both individually and collectively (Lévy, 2011, p. 204).
The general properties of the semantic sphere contain both actual (implicit meaning) and virtual (transparent to calculation) dependencies. This semantic machine, argues Lévy, can be seen as an “automatic process making the conceptual addressing of the world of ideas scientifically possible” (Lévy, 2011, p. 343). At its core, the metalanguage differs from natural language in important ways: it allows a system of coordinates of the mind to emerge “in the form of calculable transformation group” (Lévy, 2011, p. 228), differentiating concepts from words and sentences.
Although natural languages may possess terms or words that have no equivalent in another language, Lévy argues that this won’t be a problem. A concept from any language can be translated into an IEML ‘text’ (or USL) in the form of a node, while being linked with its relevant percept, or URL, within the digital realm. In return, the semantic machine can route back to the USL the corresponding sentences associated with the concept, as well as their respective term definitions in the dictionary. The benefit, as opposed to natural languages, is that it can perform this semantic organization automatically, due to the intricacies of the IEML language. Concepts encoded with their respective links within the metalanguage (USL) thus become capable of “precise correspondence with natural languages”, as well as other regular languages (Lévy, 2011, p. 235).
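To make the pivot idea more concrete, here is a minimal sketch in Python of how a concept keyed by a language-independent code could route words between natural languages and link them to web resources. Everything in it is invented for illustration: the USL-like codes, the tiny vocabulary, and the example.org URLs are placeholders, not actual IEML notation or data.

```python
from typing import Optional

# Toy pivot table: each concept is keyed by a language-independent code
# (standing in for an IEML USL) and carries natural-language expressions
# plus links to "percepts" (web resources). All entries are hypothetical.
PIVOT = {
    "usl:M:.e.-": {  # invented code for the concept "memory"
        "translations": {"en": "memory", "fr": "mémoire", "es": "memoria"},
        "percepts": ["https://example.org/articles/collective-memory"],
    },
    "usl:S:.a.-": {  # invented code for the concept "knowledge"
        "translations": {"en": "knowledge", "fr": "savoir", "es": "saber"},
        "percepts": ["https://example.org/articles/knowledge"],
    },
}

def translate(word: str, src: str, dst: str) -> Optional[str]:
    """Route a word from one natural language to another via the pivot code."""
    for entry in PIVOT.values():
        if entry["translations"].get(src) == word:
            return entry["translations"].get(dst)
    return None  # no concept found for this word in the source language

def percepts_for(word: str, lang: str) -> list:
    """Return the web resources linked to the concept behind a word."""
    for entry in PIVOT.values():
        if entry["translations"].get(lang) == word:
            return entry["percepts"]
    return []

print(translate("mémoire", "fr", "es"))  # memoria
```

The point of the sketch is only that translation never goes word-to-word: every lookup passes through the concept’s code, which is what makes the organization of meaning calculable rather than dependent on any single natural language.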
In sum, much like a digital cartographer of our collective mind, the IEML semantic machine would map each network and theoretically translate its nodes into all natural languages. The IEML semantic machine, argues Lévy, can thus be seen “as the missing link of cognitive modelling” (Lévy, 2011, p. 270), as it provides us with a “system of semantic coordinates that can unify the nature of the mind within the computational framework of a transformation group” (Lévy, 2011, p. 267). Humans would still have an important role to play in making this endeavor possible. Lévy believes that collective interpretation games will be used to “integrate data into models of cognitive systems”, according to their relevance to their respective communities, within the Hypercortex. Ideally, this ecosystem of ideas, or hermeneutic memory, must reflect collective human intelligence without imposing “epistemological, theoretical or cultural bias” (Lévy, 2011, p. 295).
In relation to my own research on collective action, in class this last semester I asked professor Lévy about collective activism and what may one day be the next step in terms of organization after the advent of social media. As we know, social media are especially good at bringing people together for a specific cause but often very bad at maintaining support for the long term. Once the protest day is over, maintaining collaboration, organizing ideas, and making sure everyone has access to the same ideas and information can be difficult. One application of IEML might be that, in the future, activists have an easier time collaborating with a system that helps connect ideas together automatically. International campaigns, which draw people from diverse cultures speaking different languages, would have an easier time working on the same level thanks to the applications of the metalanguage. While it is still early to consider, as the third phase of the IEML project should be commencing this summer, we can already imagine how such a system could one day improve our capacity to work together collectively without the impediments of the past.