| Characteristic | Classic Unix model | Modern web model | Infocentric model |
|---|---|---|---|
| Data identity / referencing | Naming (files) | Naming (URIs) | Secure hashes only |
| Standard data model | Plain text | JSON, XML documents | General data graph |
| Persistence model | Mutable files | Mutable resources | Immutable entities |
| Dynamic data model | Text streams / pipes | Resource polling | Graph updates (append) |
| Metadata model | File metadata | Statements about URIs | Statements about HIDs |
| Metadata management | Filesystems | Undefined | Reference collections |
| Programming paradigms | Mostly imperative | Multi-paradigm | Declarative |
| Typing disciplines | Mixed | Mostly dynamic | Static |
| Execution model | Processes | Request-driven | Functions |
| Composition model | IPC | Scripts, REST, QBE | Declarative/Functional |
| Resource-sharing model | Client-Server | Client-Server | Fully-decoupled |
| Resource-sharing paradigm | File shares | Publishing | Reference collection |
| Access control model | Authentication | Authentication | Cryptographic (PKI) |
| Standard user interface | Command shell | HTML5 browser | Multi-modal, pervasive |
This principle applies across all engineering domains, so we agree with it.
Programs, in the command-line shell sense, are replaced by small functional modules and/or pure functions that can be trivially wired together whenever their types match. The principle is thus still fulfilled, simply in a different manner.
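A minimal sketch of this kind of type-checked wiring (the functions and the `wire` combinator are illustrative assumptions, not InfoCentral APIs): composition succeeds only when the output type of one function matches the input type of the next, and the compiler rejects any mismatched wiring.

```typescript
// Hypothetical sketch: pure functions wired by matching types.
type Celsius = number;
type Fahrenheit = number;

const toFahrenheit = (c: Celsius): Fahrenheit => c * 9 / 5 + 32;
const describe = (f: Fahrenheit): string => `${f} degrees F`;

// Generic "wire" combinator: composes f and g only if B lines up.
const wire = <A, B, C>(f: (a: A) => B, g: (b: B) => C) =>
  (a: A): C => g(f(a));

const report = wire(toFahrenheit, describe);
console.log(report(100)); // "212 degrees F"
// wire(describe, toFahrenheit) would be a compile-time type error.
```

The point is that the "shell" becomes a type checker: invalid pipelines are impossible to construct rather than failing at runtime.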
This is part of a larger development methodology, but it is valid in essence. Semantic editors and hash referencing make refactoring prototypical code much easier.
There are aspects of this throughout the infocentric data model. Hash references are the ultimate portable link, though they are not as efficient as centralized, hierarchical, authoritative naming systems.
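A brief sketch of why hash references are portable (using Node's built-in `crypto`; the `sha256:` HID format shown is a simplifying assumption, not InfoCentral's actual scheme): the reference is derived from the content alone, so any party can independently compute and verify it, with no naming authority in the loop.

```typescript
import { createHash } from "crypto";

// Derive a hash identifier (HID) from content alone: no authority
// assigns it, and any holder of the bytes can verify it.
const hid = (content: string): string =>
  "sha256:" + createHash("sha256").update(content, "utf8").digest("hex");

const a = hid("immutable entity payload");
const b = hid("immutable entity payload");
const c = hid("different payload");

console.log(a === b); // true:  same content, same reference, anywhere
console.log(a === c); // false: any change yields a new identity
```

The trade-off noted above shows up here too: resolving such a reference requires lookup across whoever holds the content, whereas a hierarchical name encodes its own resolution path.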
At face value, this is an outdated principle based on the aesthetic that everything must be raw, human-readable, and hand-editable in case something goes wrong. But this has never matched reality: even "simple" text editors perform complex manipulations of binary UTF-8 data, involving encodings, character mappings, and so on.
The valid component of this principle is that there must be a simple base data model that all tools can operate upon, a neutral foundation that drives interoperability. In the InfoCentral designs, this is the Persistent Data Model. It is effectively the replacement for the "plain text files" of the Unix philosophy.
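As a sketch of what such a neutral base model might look like (the field names and `Entity` shape here are illustrative assumptions, not InfoCentral's actual schema): every tool agrees on a single immutable, hash-referenced graph entity type, much as Unix tools agree on lines of text.

```typescript
// Illustrative base model: an immutable graph entity whose outgoing
// edges reference other entities by hash ID (HID) rather than by name.
interface Entity {
  readonly type: string;                        // semantic type tag
  readonly data: Readonly<Record<string, unknown>>;
  readonly refs: ReadonlyArray<string>;         // HIDs of related entities
}

const note: Entity = Object.freeze({
  type: "example/Note",
  data: Object.freeze({ text: "hello" }),
  refs: ["sha256:..."],                         // placeholder HID
});

// Tools interoperate by operating on the shared model,
// never by parsing tool-specific file formats.
const summarize = (e: Entity): string => `${e.type} (${e.refs.length} refs)`;
console.log(summarize(note)); // "example/Note (1 refs)"
```

Any tool that understands `Entity` can process data produced by any other tool, which is exactly the interoperability role plain text played in Unix.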
This refers to re-using existing small components, rather than writing completely independent new tools for every use case. We wholeheartedly agree.
Shell scripts are replaced by functional / flow-based module wirings, which are accessible to a wider range of end users, somewhat akin to spreadsheets today. Modern semantic editors replace esoteric syntax and lengthy man pages.
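The spreadsheet analogy can be sketched as a tiny dataflow model (the `Sheet` class and cell names are hypothetical, purely for illustration): cells are either plain values or formulas over other cells, and the wiring is declared rather than scripted, so downstream results follow input changes automatically.

```typescript
// Sketch: spreadsheet-like declarative dataflow instead of a script.
type Cell =
  | { value: number }
  | { formula: (get: (name: string) => number) => number };

class Sheet {
  private cells = new Map<string, Cell>();
  set(name: string, cell: Cell) { this.cells.set(name, cell); }
  // Arrow property keeps `this` bound when passed to formulas.
  get = (name: string): number => {
    const c = this.cells.get(name);
    if (!c) throw new Error(`unknown cell ${name}`);
    return "value" in c ? c.value : c.formula(this.get);
  };
}

const s = new Sheet();
s.set("price", { value: 4 });
s.set("qty", { value: 5 });
s.set("total", { formula: get => get("price") * get("qty") });

console.log(s.get("total")); // 20
s.set("qty", { value: 6 });  // change one input...
console.log(s.get("total")); // 24: downstream recomputes automatically
```

Unlike a shell script, there is no imperative sequencing to read or debug; the user only states how values relate.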
"Captive interfaces" refers to monolithic applications that form their own little world, separate from the rest of the system and any other local software. If an application lacks the feature you need, you are out of luck or must work toward adding it. The general infocentric UI model is zero apps, fluid integration, and maximum type-driven interoperability. Users are never captive to particular pre-designed interfaces or interactions, and can always drill down to see what is happening behind the UI. All software can be modularly re-wired to meet new needs, and new interactions upon shared data can be created with similar ease.
The notion here is pipelines, or streams. In Unix designs, plain text is the content streamed between programs; in infocentric designs, strongly-typed semantic graph data is streamed between functions and modules. The philosophy is the same.
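The parallel can be sketched with a typed generator pipeline (the `Reading` type and stage names are hypothetical): where a Unix pipe like `cat | grep | wc` carries untyped text lines, each stage here consumes and produces statically-checked values.

```typescript
// Sketch: a typed pipeline analogous to `cat | grep | wc`, but with
// each stage's input/output types checked at compile time.
interface Reading { sensor: string; value: number }

function* source(): Generator<Reading> {         // like `cat`
  yield { sensor: "a", value: 3 };
  yield { sensor: "b", value: -1 };
  yield { sensor: "a", value: 7 };
}

function* positive(rs: Iterable<Reading>): Generator<Reading> {
  for (const r of rs) if (r.value > 0) yield r;  // like `grep`
}

const total = (rs: Iterable<Reading>): number =>
  [...rs].reduce((sum, r) => sum + r.value, 0);  // like `wc`

console.log(total(positive(source()))); // 10
```

Each stage processes items lazily as they arrive, just as Unix pipes do, but a stage that expects `Reading` values can never be fed the wrong shape of data.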