Architecture is everything. It shapes every aspect of how technology is designed, produced, and used. Even the most subtle aspect of architecture can have dramatic impact. In the modern world, technology is an extremely powerful shaper of society. Yet this power is usually under-appreciated. Engineering is a pragmatic field. It is guided by the popular architecture of the day, for better or worse. Yet if we reshape the architecture of technology, we can be a major force for social transformation.
The InfoCentral project has many tightly-related goals, because it is the expression of an enormous vision. At our roots is a desire to create uncompromising technology design and architecture that helps people and improves society.
InfoCentral seeks to create a new communications medium, architected for the age of social computing. It is the substrate for a new generation of dynamic media of thought and communication, promising enormous social impact. It elevates collaboration, community-building, and social networking to first-class features, woven into the architecture of the next generation of Internet software rather than depending on proprietary third-party services. By doing so, it greatly increases expressiveness. All information becomes interactive and socially networked by default. The social and information architecture represented by InfoCentral is absolutely necessary for information systems that manage our increasingly urbanized, globalized, hyper-connected world. At the core is a need for contextualization – the ability for ideas to be seen and understood in a more holistic context. There is a greater chance for civility if all parties are given a voice, not only to share and collaboratively refine their ideas, but to engage the other sides directly, in a manner of point and counterpoint, statement and retraction. Because contextualization is the default, the sort of “echo chambers” of isolated thinking and ideology that exist on the Internet today can be virtually eliminated. As in science, refined consensus is more powerful than any individual authority claim.
InfoCentral seeks to create an information environment that largely replaces standalone applications, mobile apps, and all forms of Web pages and services. It is based upon a philosophy of working as close as possible to information itself, rather than wrapping it in applications. The information environment is built from small, modular components that do one thing well and are infinitely composable into new interactions.
InfoCentral seeks to create an information architecture that makes pervasive computing feasible. I define pervasive computing, simply, as the turning point at which most computing technology has become seamlessly interoperable by default. This basic property manifests all of the other futuristic academic visions associated with the term, from the Internet of Things to advanced AI to predictions of a more cooperative, less hierarchical society. The current Web has brought us to a certain level of interoperability, in terms of providing a relatively standard UI language, but it has entirely failed to create interoperability at the semantic and logic levels needed for pervasive computing. The fact that proprietary self-contained applications have returned so strongly with the rise of mobile computing is proof of this.
The first step is to give every entity of information strong identity – a global identity that never changes once created and that does not rely upon fragile authority structures. For this, the best choice is secure hash values. Revisions of existing information thereby get new identities. With strong identity, any information in the world can unambiguously refer to any other information, without worry that the information behind an identity will change in the future. Like the World Wide Web, all information is globally retrievable. In our proposed design, however, links cannot break, references cannot become stale, and information cannot arbitrarily expire. We propose a layered, decentralizable, multi-method hash ID dereferencing scheme, largely based on smart propagation of entity hosting metadata (i.e. not a single, infeasible global DHT).
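The idea can be sketched in a few lines. This is a minimal illustration, not a prescribed implementation: the `entity_id` helper, the JSON canonicalization, and the choice of SHA-256 are all assumptions made for the sake of the example.

```python
import hashlib
import json

def entity_id(content: dict) -> str:
    """Derive a permanent identity from an entity's canonical byte form."""
    canonical = json.dumps(content, sort_keys=True, separators=(",", ":")).encode()
    return hashlib.sha256(canonical).hexdigest()

v1 = {"type": "note", "text": "Hello"}
v2 = {"type": "note", "text": "Hello, world"}  # a revision of v1

id_v1 = entity_id(v1)
id_v2 = entity_id(v2)

assert id_v1 != id_v2          # a revision is a new entity with a new identity
assert id_v1 == entity_id(v1)  # identical content always yields the same ID
```

Because the identity is derived from the content itself, a reference by hash can never silently point at changed information; it either resolves to exactly the bytes that were named, or it does not resolve at all.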
To make information fluid, we must structure it differently. There must no longer be files, folders, spreadsheets, or database tables. All data will live in a massive graph of (typically small) information entities linked together in myriad ways. For example, a “person” entity has a “home address” entity which has a “street address” entity which has a “street name” entity. This street name entity, which is a shared public record, is used by all address entities for that street. The duplication, ambiguity, and input errors of unstructured text fields are eliminated. Updates can happen in one place.
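The address example above can be sketched with a toy content-addressed store. Everything here – the `put` helper, the entity field names, and “Maple Ave” itself – is hypothetical, chosen only to show how small entities link by hash ID:

```python
import hashlib
import json

store = {}  # toy content-addressed store: hash ID -> entity

def put(entity: dict) -> str:
    """Store an entity under its content hash and return the hash ID."""
    eid = hashlib.sha256(json.dumps(entity, sort_keys=True).encode()).hexdigest()
    store[eid] = entity
    return eid

# One shared public record, referenced by every address on that street.
street_name = put({"type": "street-name", "name": "Maple Ave"})
street_addr = put({"type": "street-address", "number": "412", "street": street_name})
home_addr   = put({"type": "home-address", "street-address": street_addr, "city": "Springfield"})
person      = put({"type": "person", "name": "Alice", "home-address": home_addr})

# Following the links resolves the shared street-name entity.
home        = store[store[person]["home-address"]]
street      = store[home["street-address"]]
name_entity = store[street["street"]]
```

Every address entity on the street points at the same street-name record, so there is one authoritative place for that fact rather than thousands of free-text copies.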
We must also add fluidity to metadata, the unbounded domain of information about information. For this, we create and collect annotation entities, which refer to and describe or enhance other information entities (including other annotations). There can be many collections of annotations for an entity, from diverse sources. Anyone can create public or private annotations, for any purpose. Consider Wikipedia, except that, instead of text articles about subjects, we have graphs of information around conceptual entities, to which anyone might have something useful to add. Unlike Wikipedia, there is, by design, no need for a central site to manage the content. The street name entity mentioned earlier might be annotated with historical information about the street (from a historical society), the city’s maintenance schedule for it (from the planning commission), and various discussions among neighbors about cleanup work and gardening (from a social network). This demonstrates the power of strong identity at work. Once there is a permanent placeholder identity for a concept, related information can be arbitrarily collected around it, even though it comes from many independent sources.
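An annotation is itself just another small entity whose body points at a target’s hash ID. The sketch below is illustrative only; the `annotation` shape, the `target`/`source` field names, and the sample sources are assumptions, not a defined schema:

```python
import hashlib
import json

def entity_id(entity: dict) -> str:
    """Permanent identity for an entity, per the content-hash scheme."""
    return hashlib.sha256(json.dumps(entity, sort_keys=True).encode()).hexdigest()

street = {"type": "street-name", "name": "Maple Ave"}
street_id = entity_id(street)

# Independent parties annotate the same permanent identity.
annotations = [
    {"type": "annotation", "target": street_id, "source": "historical-society",
     "body": "Paved in 1921; originally a streetcar route."},
    {"type": "annotation", "target": street_id, "source": "planning-commission",
     "body": "Resurfacing scheduled for next spring."},
]

def annotations_for(target_id: str, pool: list) -> list:
    """Collect every annotation in a pool that points at a given identity."""
    return [a for a in pool if a.get("target") == target_id]

found = annotations_for(street_id, annotations)
```

The annotators never coordinate with each other; the shared hash ID alone is what gathers their contributions around the concept.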
This new breed of graph-structured, interwoven, many-layered, radically-collaborative information requires a new software model. Classic software applications (whether desktop, web, or mobile) are self-contained systems that tend to corral a set of information and limit its use elsewhere, by virtue of needing to manage the application’s context. Instead, we need information to be fully detached from software and endlessly re-usable in all sorts of novel contexts. The solution is small generic software modules that each do one thing or understand one type of information, but do not dominate the information they operate upon. In aggregate, these modules can be wired together to replace all of the functionality of traditional applications. Like fluid information itself, they can be endlessly re-used in other contexts. Imagine “LEGO™ bricks” in place of injection molding. In this analogy, useful module wirings serve as assembly plans. We might have a module that handles postal addresses and another that deals with timelines, for our street history review. Another module might be a schedule manager. The city will use this to arrange maintenance tasks, which were originated via yet another set of modules. There are no word processors, spreadsheets, or database apps. The new paradigm is to focus on creating and linking clean information. Software then follows, adapting itself to the context. Granted, this is an incredibly simplified explanation, but the idea of “dynamic information environments” woven from small modules will fundamentally change how we build and use software.
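One very rough way to picture such wiring is a registry of tiny modules, each understanding one entity type, with a dispatcher that routes each piece of information to whichever module knows it. The registry, decorator, and entity types below are all invented for this sketch; a real dynamic information environment would be far richer:

```python
# Toy module registry: each module handles one entity type and knows
# nothing about the others; the dispatcher wires them together.
modules = {}

def module(entity_type):
    """Register a function as the handler for one entity type."""
    def register(fn):
        modules[entity_type] = fn
        return fn
    return register

@module("postal-address")
def render_address(e, render):
    return f"{e['number']} {e['street']}"

@module("note")
def render_note(e, render):
    return e["text"]

@module("timeline")
def render_timeline(e, render):
    # Delegate each event to whatever module understands it,
    # without owning or corralling that information.
    return "; ".join(f"{year}: {render(ev)}" for year, ev in e["events"])

def render(entity):
    """Dispatch an entity to its module."""
    return modules[entity["type"]](entity, render)

history = {
    "type": "timeline",
    "events": [
        (1921, {"type": "note", "text": "street paved"}),
        (2015, {"type": "postal-address", "number": "412", "street": "Maple Ave"}),
    ],
}
```

Adding support for a new kind of information means registering one more small module; nothing already wired together needs to change.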
In a world of fluid information that everybody can access and interact with, privacy and security will often be provided by encrypting information entities themselves. To limit who may read a certain information or annotation entity, we can encrypt it and give the keys to only those desired. For example, I may refer someone to my home address object, which reveals my city, but not provide the key to my encrypted street address entity. Because information is now broken down into minimized pieces, it is easy to assign fine-grained access, whether through cryptographic methods or traditional access control lists. There is much to say about security architecture, but, to the end user, the critical concept is exchanging keys and information directly with other users (or amongst groups). This replaces relying upon third parties to manage access to private information – which they themselves can see fully. In the InfoCentral model, you are always in control of your information and are able to make it as public or private as needed. This trade-off will be explicit, as suitable for the context. However, we intentionally do not prescribe a particular security scheme or set of security technologies. This area needs flexibility, as threats and countermeasures will continue to evolve.
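The home-address example can be made concrete. To keep this sketch standard-library-only, a SHA-256 keystream XOR stands in for a real authenticated cipher (a production system would use something like AES-GCM); the entity fields and “Springfield” address are, as before, invented for illustration:

```python
# ILLUSTRATION ONLY: this toy keystream cipher is NOT secure cryptography;
# it merely shows the shape of selective, per-entity disclosure.
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """XOR data with a SHA-256-derived keystream (symmetric: also decrypts)."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

# The home-address entity is public; the inner street-address entity is not.
street_key = secrets.token_bytes(32)
street_plain = b'{"number":"412","street":"Maple Ave"}'
street_cipher = keystream_xor(street_key, street_plain)

home_address = {
    "type": "home-address",
    "city": "Springfield",                         # readable by everyone
    "street-address": street_cipher.hex(),         # opaque without the key
}

# Only parties who were given street_key can recover the inner entity.
recovered = keystream_xor(street_key, bytes.fromhex(home_address["street-address"]))
```

Anyone can see that I live in Springfield; only those I hand `street_key` to can read the street address, with no third party mediating that access.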
Once our information is fluid, secure, and able to be managed without traditional software applications (or websites), we will no longer depend upon massive centralized social network Web applications. Though some inherent advantages to large public social networks will remain, these can be lightweight, decentralized shared resources. The benefit will be more than just regaining some of our privacy and eliminating ties to the advertising economy. The next generation of social networking technology will also be far more powerful and flexible. Unlike current social networks, where we are wholly dependent upon the operators to enhance our experience over time, it will be possible to create custom social networks, both public and private, with special features needed by different communities. These social “sub-networks” can then be layered and inter-networked, to build deeper interactions among groups and with global public networks. There will no longer be a need for separate, internal-only social media tools in business, education, government, etc. It will be possible to keep some things private without losing the global context and connectivity. From the user’s perspective, everything will be seamlessly integrated. Gone will be the days of managing dozens of sites, apps, and logins, each with their own segregated function and silo of information.
In conjunction with social networking, the next step of the vision is to dramatically improve the modes and processes of having online discussions and collaborations. Today’s social media, from blogs to forums to chatrooms, lends itself to wasteful banter and repetition. There is no way to have a single global comprehensive exploration of a topic, so millions of redundant, often low-quality conversations happen among groups of friends and colleagues. If a truly fresh idea emerges, there’s no easy way for it to rise in prominence beyond the group. Likewise, the Internet has been rightfully accused of creating echo chambers for bad ideas and breeding grounds for conspiracy theories, alternative science, religious cults, and irrational zealotry of all sorts. The solution is contextualization and community. Using the “permanent concept identity” principle once again, we can build interaction around any well-defined talking point or assertion of truth. Millions of participants can build a graph of information around the discussion entity, with points and counterpoints, facts and verifications, statements and retractions, private and public side-discussions, polls and questionnaires, articles, illustrations, and any other form of media deemed useful. All of this information can be linked, cross-referenced, consolidated, and evaluated by the communities involved. By default, nothing stands alone. Everything is contextualized by the continual process of collaboratively annotating the graph of data and opinions. As consensuses form, even amidst persistent disagreements, the best quality information will rise to the top, through reputation-building. If there are multiple viewpoints, each will become highly refined. Feedback mechanisms will also allow ranking, categorizing, and filtering of the content, to prevent abuse and allow user-customized views. Ultimately, the ramifications of this new mode of communication will be revolutionary for every aspect of society.
Perhaps it will even help usher in an overarching shift toward rationality, tolerance, and scholarly candor.
The same functionality used for exhaustive discussions and explorations can also be used to bring delightfully intuitive, unlimited-scale collaboration to any form of media or information management – literature, music, visual arts, film, journalism, software code, design and architecture, research, engineering, business management – to name just a few. Other planned features will offer new ways for intellectual labor to be financially rewarded, from novel forms of socially-networked patronage and gifting to consulting networks and new, low-barrier open markets for creative labor. Think of “open source” production methods applied to just about any form of information – but with stronger contracts possible, where beneficial. Finally, the system can be used to build and strengthen real-life communities, with tools for social discovery, outreach, continual education, needs and resources awareness, minimum-overhead administration, and even facilitation of conflict resolution.
InfoCentral’s goals sound lofty, even ridiculous, yet all of the core knowledge needed to build what I have just described already exists, thanks largely to recent research breakthroughs. We need only a plan to integrate it and a willingness to let go of stale paradigms. InfoCentral is dedicated to pursuing this.
This is a placeholder website. More info coming soon. Author may be reached via: contact at [this domain name]
copyright 2015, Chris Gebhardt