What are the fundamental requirements and building blocks of a distributed internet?

Autonomous Internet
Michel Bauwens

In preparation for the Contact summit in NYC 20 October 2011, we want to understand the current landscape of projects/initiatives building a distributed internet and the fundamental requirements so we can better coordinate efforts.

ANSWER ONE: Michel Bauwens

Here is a proposed definition of conditions, in non-technical language:

What is a p2p infrastructure:

1) A P2P communications and cooperation infrastructure is a technological and social infrastructure which allows any individual to voluntarily aggregate with others for purposes of communication or the creation of common value.

2) A P2P technological infrastructure allows any agent to initiate actions from any point within the network, on the basis of equality of communication (network neutrality) and without any censorship impeding free speech and the freedom of association and cooperation. It is a distributed infrastructure in which elements of decentralisation and centralisation can only serve the efficiency of the network, and not issues of control or profit.

3) A P2P ownership infrastructure is preferentially owned by the users and producers of value over the network. Private and public ownership of P2P communication platforms can only be justified by superior service provision which does not impede the p2p rights of usage defined in 1 and 2.

4) A P2P governance infrastructure is based on the full right of communities and participating individuals to govern their own infrastructures and activities. Governance by owners of infrastructure is subordinate to the prior usage and cooperation rights of users, including the right to autonomy and self-governance. The governance of P2P infrastructures recognizes participation by all legitimate stakeholders.

5) P2P revenue infrastructures recognize that value creation is primarily the result of user activities and cooperation, and have benefit- or revenue-sharing processes in place which recognize this value creation without impeding the continuation of peer-to-peer relational dynamics and value creation over the network.

I have attempted to outline a political strategy to achieve this in the article here: http://p2pfoundation.net/What_Di…

Here is a list of 25+ projects, maintained at http://p2pfoundation.net/Categor…

Projects to decentralize/distribute the internet:

  1. Appleseed [2] – distributed social network
  2. Bitcoin, a decentralized internet currency.
  3. Diaspora – will hopefully be a social networking community where users can run their own federated “pods”, thus owning their personal data and directly controlling what is shared with whom.
  4. The Dot-P2P Project, an alternative DNS hierarchy that resists censorship.
  5. The Freedom Box initiated by Eben Moglen and the Freedom Box Foundation: independent plug-in server
  6. GNUnet is a framework for secure peer-to-peer networking that does not use any centralized or otherwise trusted services
  7. GNU Social [3]
  8. Lorea [4] – distributed social networks, already running on 10 networks
  9. One Social Web [5] – distributed social network using xmpp
  10. One Swarm [6] – F2F (friend-to-friend) P2P sharing; a new peer-to-peer tool that provides users with explicit control over their privacy by letting them determine how data is shared
  11. Open BTS: working on a new kind of cellular network that can be installed and operated at about 1/10 the cost of current technologies, but that will still be compatible with most of the handsets that are already in the market.
  12. Open Media Vault [7]
  13. Open Moko, a project to create a ‘free' or open-source mobile telephony platform.
  14. Open PGP encryption is based on self-issued certificates which gain authority as a result of a web of trust expressed via user-maintained keyrings rather than a hierarchical certificate authority system that can be centrally compromised.
  15. Open Storage Pod, [8] open hardware project, small cubes to store terabytes
  16. Open WRT [9] – GNU/Linux based free firmware for gateways and routers.
  17. Own Cloud, data storage project from the wider KDE community
  18. Retro Share [10] – secure communications with friends
  19. Seeks Project [11] – “social websearch”
  20. Sovereign Computing Group [12] – similar project to Freedom Box, with a very interesting Manifesto.
  21. Sparkle Share, [13] open source ‘dropbox' replacement
  22. Status.Net is a microblogging system that allows users to run their own Twitter-like site and federate selected streams with other systems.
  23. The Tahoe Least-Authority File System, a highly fault-tolerant, secure internet filesystem.
  24. The Tor Project, an anonymizing overlay network.
  25. Unhosted: “Unhosted is a project for strengthening free software against hosted software. With our protocol, a website is only source code. Dynamic data is encrypted and decentralised, to per-user storage nodes. This benefits free software, as well as scalability, robustness, and online privacy.”
  26. YaCy is a search engine where many nodes share information to build a distributed index.
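The web-of-trust model behind OpenPGP (item 14 above) can be illustrated with a small sketch: a key is trusted not because a central authority vouches for it, but because a chain of user-issued signatures links it back to a key you already hold. The names and signature graph below are entirely hypothetical, and real implementations add ownertrust levels and multiple-path requirements; this is only a minimal illustration of the transitive idea.

```python
from collections import deque

# Hypothetical signature graph: each key is certified by signatures
# from other keys, not by a central certificate authority.
signatures = {
    "alice": {"bob", "carol"},   # alice has signed bob's and carol's keys
    "bob":   {"dave"},
    "carol": {"dave", "erin"},
    "dave":  set(),
    "erin":  set(),
}

def is_trusted(root: str, target: str, max_depth: int = 3) -> bool:
    """A key is trusted if a chain of signatures no longer than
    max_depth links it back to the key we hold (the trust root)."""
    seen, queue = {root}, deque([(root, 0)])
    while queue:
        key, depth = queue.popleft()
        if key == target:
            return True
        if depth < max_depth:
            for signed in signatures.get(key, ()):
                if signed not in seen:
                    seen.add(signed)
                    queue.append((signed, depth + 1))
    return False

print(is_trusted("alice", "dave"))   # True: reachable via bob or carol
print(is_trusted("dave", "alice"))   # False: no signature path back
```

Because authority emerges from many independent keyrings, there is no single point that can be compromised to forge trust for the whole network.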

ANSWER TWO: Colin Hawkett

Given the following desirable outcomes from the distributed internet –

  1. Cannot be shut down at a central location
  2. Difficult to shut down via ‘virus' type attack
  3. The threshold for being able to connect, participate and strengthen the network is low
  4. Can operate as separate parts in isolation if necessary
  5. Protects privacy and anonymity

Then the key requirements for achieving those outcomes are:

  • Commodity hardware – easily and affordably obtainable and installable without dependence on single-point-of-failure mega suppliers. One of the biggest problems in a distributed system is bridging the physical distance between nodes – e.g. intercontinental connection is very difficult to achieve without expensive centralised hardware (big wires, satellites, etc.). The more bandwidth we can get at range, the better our system becomes, and greater connectivity reduces the bandwidth required of each individual node.
  • Diversity – of routes and of protocols. The former is fairly clear, while the latter offers some protection against virus type attacks, and reminds us to keep innovating the protocol(s). Also offers some protection against privacy issues.
  • Every node must be capable of filling all roles. This doesn't mean each node does the same role, but it might. Another way of stating this requirement is that the smallest distributed net should be able to consist of as little as one node.
  • Must be latency tolerant. Unless we have solved the range problem, then getting from A -> B may take a lot of network hops.
  • Have a clearly defined reliability algorithm (more than likely using geographical redundancy), and a subsequent recovery mechanism should critical data be lost (e.g. DNS records may be lost if the wrong nodes go down in the wrong combination, and if this is a big problem then it becomes a target for attack).
  • Have a clearly defined consistency algorithm. If we are keeping replicas, how close are they to all being the same at any given point in time? In general the system must be able to cope with eventual consistency, and in the case of a network that becomes divided, some copies may diverge significantly from the master. This problem is closely related to the previous point.
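The reliability and consistency points can be made concrete with a toy quorum-replication sketch (all names and numbers below are illustrative, not a real protocol): with N replicas, requiring W acknowledged writes and R reads such that R + W > N guarantees that every read quorum overlaps the most recent write quorum, so the freshest value is always visible even though individual replicas lag behind.

```python
# Toy quorum replication: N replicas, write quorum W, read quorum R.
# If R + W > N, any read set overlaps the latest write set, so the
# newest version is always among the replicas consulted on a read.
N, W, R = 5, 3, 3

replicas = [{"version": 0, "value": None} for _ in range(N)]

def write(value, version):
    # A real system picks any W live replicas; here we use the first W.
    for replica in replicas[:W]:
        replica.update(version=version, value=value)

def read():
    # Read from R replicas (here the *last* R, the worst-case overlap
    # with our write set) and keep the highest version seen.
    return max(replicas[-R:], key=lambda r: r["version"])["value"]

write("dns-record-v1", version=1)
write("dns-record-v2", version=2)
print(read())  # dns-record-v2: the overlapping replica has the newest write
```

Lowering W makes writes survive more node failures but pushes the system toward eventual consistency; the trade-off between the two bullet points above is exactly this choice of quorum sizes.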

And those are the easy ones 🙂 – here are some harder ones –

  • Distributed governance – who owns the master data? Who decides which protocols we use? Who decides which hardware is appropriate? Who punishes misuse? On whose authority? Distributed tech is only part of the story. Who watches the watchers?
  • Identity & Trust – who holds your identifying information? Where? How are you authorised to maintain it? Is the mechanism I use as reliable as the mechanism you use? How is your reputation determined? Where is that information held? How do we trust the holders of the information?
  • Resilience – the system must have an immune system. If we look at biological distributed systems, the human body (for example) has a very complex and trusted internal network and very few external interfaces, which are heavily protected. Should one of those interfaces be damaged, others can adapt to fill its role. ‘Intruders' in the trusted internal system must be identifiable and destroyable. In effect, this point highlights that we must design for malicious intent. We must also have a quarantine mechanism to protect sections of the net from being destroyed by other, compromised sections.
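The quarantine idea can be sketched very simply (a hypothetical toy, assuming honest majorities among neighbours, which a real design would have to defend): peers report nodes they believe are compromised, and a node accused by more than a threshold fraction of its neighbours is cut off from the trusted network until it can be inspected.

```python
# Hypothetical quarantine rule: a node accused by more than half of
# its neighbours is isolated from the trusted network.
QUARANTINE_THRESHOLD = 0.5

neighbours = {
    "a": ["b", "c", "d"],
    "b": ["a", "c", "d"],
    "c": ["a", "b", "d"],
    "d": ["a", "b", "c"],
}
# Reports: accuser -> set of peers it believes are compromised.
reports = {"a": {"d"}, "b": {"d"}, "c": {"d"}, "d": set()}

def quarantined(node: str) -> bool:
    peers = neighbours[node]
    accusations = sum(1 for p in peers if node in reports.get(p, ()))
    return accusations / len(peers) > QUARANTINE_THRESHOLD

print([n for n in neighbours if quarantined(n)])  # ['d']
```

Note that the threshold itself becomes an attack surface – a colluding majority of neighbours could quarantine an honest node – which is precisely the distributed-governance problem raised in the first bullet above.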

No doubt there's plenty more – it's a ‘holy-grail' type problem. Nature has evolved a very resilient distributed system. We would do well to pay attention to that.

ANSWER THREE: Richard C. Adler

Speaking as an archivist, a robust metadata schema should receive serious consideration. How far this could go toward the creation of a truly semantic environment is one question. But it will also be important to consider whether the positive virtue of a metadata-rich next net will need to be balanced with the positive virtue of anonymity as it was provided by the ‘old' net (especially in light of Egypt, etc.).

There is a balance to be found here, and I find myself increasingly interested in trying to determine where the most likely points of leverage may lie in establishing one.

ANSWER FOUR: Paul B. Hartzog

At its most basic, the key problem to be solved involves intercommunication between two layers: 1) the hardware parts made of atoms, 2) the virtual parts made of bits.

Building distributed infrastructure composed of addressing, routing, and a diverse ecology of interoperable software and devices, is a bold new step, and one I'm quite excited about.

ANSWER FIVE: Patrick Anderson, Economic Systems Debugger

The most important issue we must address is the difficulty of co-owning the shared physical resources required to host production.

1.) The users themselves must have real ownership, and so must be the initial and ideally the only investors.

2.) The ‘return' the users will receive for this investment is at-cost access to the outputs of that production – in this case, internet connectivity.

3.) For late-coming users who want to buy any surplus product (network bandwidth, data storage, CPU cycles), we should charge a price above cost (profit) – as much as “the market will bear”.

4.) All profit collected from non-owners must be treated as that payer's investment in more physical sources – so that each and every user incrementally becomes a co-owner of the material assets required to host that production.

When any user pre-pays (whether as an up-front investment, or when paying a price above cost (profit)), they receive two items:

A.) A title of ownership over those co-owned physical assets.

B.) A book of “Scheduling Tickets” or “Allocation Tokens” that the owner uses to prove ownership of, and therefore collect, the outputs of that production.
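The profit-as-investment rule described in points 1–4 can be sketched in a few lines (all prices and names here are hypothetical, purely to illustrate the mechanism): owners buy at cost, while any surcharge paid by a non-owner is immediately booked as that payer's equity, so heavy users incrementally become co-owners.

```python
# Illustrative sketch of the co-ownership mechanism above.
COST_PER_UNIT = 1.00   # at-cost price of, say, one GB of bandwidth
MARKET_PRICE  = 1.50   # what late-coming non-owners are charged

shares = {"founder": 100.0}   # the initial investor owns the hardware

def buy(user: str, units: int) -> float:
    """Charge for `units`; route any profit into the payer's shares."""
    if user in shares:
        return units * COST_PER_UNIT              # owners pay at cost
    charge = units * MARKET_PRICE
    profit = charge - units * COST_PER_UNIT
    shares[user] = shares.get(user, 0.0) + profit  # profit becomes equity
    return charge

print(buy("newcomer", 10))   # 15.0 charged; 5.0 becomes the newcomer's equity
print(shares["newcomer"])    # 5.0
print(buy("founder", 10))    # 10.0: at-cost access for owners
```

On their next purchase the newcomer already holds shares and so pays at cost – which is exactly the convergence point 4 describes, where every persistent user ends up a co-owner.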

ANSWER SIX: Danielle Lanyard

The fundamental root of disruption is the same whether it occurs on the internet, on the stoop, or in the ivory tower: human beings must be driven by an ideal that deviates from the status quo, one that requires a disruption of the existing fabric in order to be realized. The energy of the ideal converts into will, and then, as it is realized, from disruption into actual change – whether it is Rosa Parks on the bus sparking the US civil rights movement, or Wael Ghonim's Facebook page spurring the uprising in Egypt.
