Poor Richard: PeerPoint – Open P2P Proposal

Poor Richard

PeerPoint

An Open P2P Requirements Definition and Design Specification Proposal

Google Doc created June 6, 2012

Last updated October 19, 2012

PeerPoint shares a vision of “Sovereign Computing”:

“To be the true owner of your information and of your computer's hardware resources, as well as to share these things in any way you want and only with whomever you want. To participate in the Internet free of the middleman, as an autonomous, independent and sovereign individual.” (Klaus Wuestefeld)

PeerPoint’s version of the sovereign individual is the peer. A peer is a critter of the bio-digital ecosystem. The bio-digital ecosystem includes nature, human culture, machine devices and the internet. The term “peer” can apply to a person or a machine, and either kind of peer can play different roles in various groups, networks, and communities. There are no “second class” peers, only trust relations between sovereign peers. A sovereign peer may choose to interact through any kind of network and with any entity, whether it be a trusted equal or an untrusted corporate giant. A peer always retains an autonomy of agency to consent to or reject any relationship. There is no particular entity, group, or service in the “internets” that a sovereign peer can't go around or do without.

Arguably life on the internet is already like that and always has been. The problem is that for most internet users their agency, or sovereignty, is severely compromised. They submit to many relationships and services without really being informed. Is it their own fault? No, because the deck is stacked against them. Important facts and choices are unknown, withheld, or obfuscated. There are either insufficient alternatives or so many choices that no one has time to evaluate them all. Out of necessity we put our trust in proxies (others who make decisions for us), and that trust is very often betrayed.

The PeerPoint project is intended to serve several communities of interest from average internet users to social entrepreneurs and technology innovators. The project will need to present different faces and appropriate on-ramps to these different communities. This document is only a beginning.

DOCUMENT (142 Pages):  PeerPoint

Text Only Below the Line.  For Graphics Download the Document

Abstract

The Arab Spring, the Occupy Movement, Los Indignados, and similar uprisings around the world demonstrate that a new, open society and open democracy is struggling to rise from the bottom up. But the internet has been captured by giant corporations whose business models are based on centralized infrastructure, proprietary technology, user surveillance and censorship, and unilateral terms of service. These developments threaten the success of our collective aspirations.

The PeerPoint Open P2P Requirements Definition and Design Specification Proposal describes an evolving, crowdsourced design specification for sovereign computing in the form of a suite of inter-operating peer-to-peer (p2p) applications including (but not limited to) social networking, real-time project collaboration, content management, distributed database management, voting, trust/reputation metrics, complementary currency, crowdfunding, and others. This specification overlaps with existing p2p projects but also goes substantially beyond anything currently in the pipeline.

The PeerPoint Open Design Specification is not meant to replace or downplay existing p2p initiatives. It is intended to complement such efforts by providing an open vehicle for cross-community collaboration involving users and developers alike and by working to facilitate technical interoperability and synergy between open P2P implementations. PeerPoint aims to involve a broad base of stakeholders to jointly:

1       identify detailed user requirements across a broad range of social and digital collaboration needs

2       document best practices and solution sets favored by the technical community

3       make informed and detailed correlations between the requirements and preferred solutions

The project is intended to clarify and prioritize what the user community needs from the technical community, in order to prevail in the social, political, and economic struggles facing our digital society now and in coming years. With the participation of the developer community it could also become an open reference and repository of best practices and preferred solution sets in p2p technology.

Members of p2p projects, interested programmers and designers, power users, activists, and others are encouraged to participate in the collaborative development of the open PeerPoint Design Specification and to adopt any part of the specs they can use in their own work.

Collaboration

The initial methods for collaboration are limited, and one of the first priorities is to expand them.

       PeerPoint (This Google Doc) Editing is open–see below.

       PeerPoint Candidate Software Components (Google Spreadsheet) This comparison matrix is very incomplete–please contribute to populating it.

       PeerPoint on GitHub (Bare bones at present)

       PeerPoint (This document as a web page in HTML format)

       Next Net Google Group > PeerPoint Discussion topic

Joining the Next Net Google group automatically enables edit permission for the present document.

As soon as an appropriate wiki-based platform is selected the project will move there. A GitHub wiki was created but it was too bare bones. A better wiki platform is desired and suggestions would be appreciated. We are considering the Referata-hosted Semantic MediaWiki platform. If you know of an existing Referata wiki that might welcome this kind of content please let us know.

To begin actively participating in the PeerPoint project, please read the PeerPoint topic thread at the Next Net Google Group.

Editing of this document by Next Net members is encouraged but please:

1       don’t delete existing material without permission (discuss at PeerPoint topic @ Next Net)

2       do add new text in a color other than black-on-white and add your name below in that color (I suggest including the RGB values so you can reproduce your selection later)

3       Or, if you prefer, make a copy of this doc and edit that, but please share what you come up with!

Collaborators:

       Poor Richard (black-on-white)

       Example (blue-on-gray – 0,0,225 / 204,204,204)

       David Bowman (green-on-gray – 12, 52, 61 / 204,204,204)

       Rebentisch (violet on white)

       Nathan (as such)

       Michael Maranda (green on black)

PeerPoint = Peer-to-Peer Everything

[This is a back-of-the-envelope first draft of top-level design specifications.]

PeerPoint is an evolving, crowdsourced design specification for a suite of integrated peer-to-peer (p2p) applications including (but not limited to) social networking, real-time project collaboration, content management, distributed database management, voting, trust/reputation metrics, complementary currency, crowdfunding, etc.

The PeerPoint Requirements and Design Specification is not meant to replace or supersede existing software and technology development efforts. It is intended to elicit and catalog the needs of the user community and to help inform and coordinate the work of the FLOSS/p2p/hacker community to promote more rapid convergence towards common standards and interoperable solutions. It is intended to be an open P2P development roadmap, collectively designed by all the stakeholders in a free, democratic future for the internet and its users.

Members of p2p projects, interested programmers and designers, power users, activists, and others are encouraged to participate in the collaborative development of the PeerPoint project.

The initial scope consists of:

       establish the PeerPoint project’s collaboration platform and process

       survey the needs and desires of various categories of users and other stakeholders

       define a taxonomy of users and user requirements with weights and priorities

       survey the FLOSS/p2p technology ecosystem

       define a taxonomy of FLOSS/p2p solutions and create a comparison matrix

       identify best (or dominant) p2p standards, methods and solutions

       create a recursive cross reference of user requirements and solution sets

       encode the above in a machine-readable linked-data ontology

All of these threads can be pursued in parallel with only minimal sequential dependencies.

PeerPoint is a design to Occupy the Internet.

The PeerPoint project might ultimately produce far more than a set of design documents. The PeerPoint design specifications could contribute to the development of an operational suite of improved application software specifically designed for social collaboration and activism. The PeerPoint project could also help to create new software development tools and new distributed network infrastructures.  But more than that, PeerPoint could help to shift the balance of power from central, corporate authorities to independent digital citizens.

At its most extreme potential, the PeerPoint specification could lead to an inexpensive (or free) self-contained, all-in-one, plug-and-play personal network appliance. Such an appliance would be connected between a user’s PC, home network, or mobile device and a network access point such as a connection to an internet service provider (ISP). It would support multiple access methods (phone lines, mobile devices, wifi, ethernet, etc.) for maximum connectivity. It might be accessed by remote mobile devices either over commercial cellular networks or over independent wireless mesh networks like those used by Occupy Wall Street.

With the PeerPoint approach, each user would retain ownership and control of all the data and content they created. PeerPoint users might connect to the internet via commercial ISPs, but those ISPs would, if the user so desired, only act as blind, passive carriers of PeerPoint encrypted communications and content.

That is the ultimate vision behind the PeerPoint project framework, but PeerPoint is not intended to be the “one true path” to that goal or any other. It is a complementary and parallel process among all the other R&D efforts in the diverse and evolving ecosystem of open, p2p technology.

Regardless of how far the PeerPoint project evolves, it is designed to create some value at each step along the way. The PeerPoint conceptual framework already adds some value to the conversation among internet stakeholders. The project will evolve outward from this initial nucleus with each incremental addition requiring minimal investment and adding some immediate value for stakeholders.

The Need

The social tools provided by Facebook, Twitter, Skype, etc. have been fun and fairly useful, but if we think about the serious and intensive collaborative effort it will take to shift an entire civilization onto a more principled, democratic, and sustainable footing, we are going to need more powerful and comprehensive digital work tools. Those tools need to belong to us and they need to meet the social and political needs of our time, not the needs of a few self-serving corporations or their shareholders.

Google, Facebook, Twitter, etc. are proprietary, for-profit platforms that exploit users to create content and value. But they provide value as well, so a “Facebook killer” must provide greater user value (functionality, privacy, etc.) than Facebook. For numerous reasons the services provided by the commercial companies do not adequately meet the creative, social, political, and financial needs of the 99%. They are not up to the tasks that participatory democracy, non-violent social change, and sustainable economic systems will demand of our internet communications and our evolving cooperative methods of creating, working, organizing, negotiating, and decision-making together, in groups large and small, regardless of the geographical distances between us. This new kind of group interaction over distances is what allows self-selected individuals to coalesce into powerful workgroups, forums, and movements. It is also what will enable direct participation in the legislative process to function at a large scale for the first time in human history.

The corporate internet business model is based on surveillance of our online activity, our thought, and our expression. By data mining the vast amounts of our information in their custody, they identify our patterns of thought and behavior. They do this ostensibly to sell us stuff and to make money, and so far we have accepted this as the cost of our “free” use of corporatized internet services. But what other, less benign uses can this surveillance and data mining be put to?

I have been hoping for somebody like the Linux community or Wikipedia community to step up and create an appliance-like p2p node that provides all the apps needed for secure (and when desired, anonymous) social networking, voting, collaboration, crowdfunding, etc. — something that comes complete, out of the box, with the apps pre-installed; that connects easily to your personal computer, home network, or mobile device, and solves all our needs for personal and social digital tools… But it ain’t happening. Most existing organizations and projects already have a particular vision, scope, or direction that stops somewhere short of the PeerPoint scope or heads in a different technology direction. On the other hand there is a variety of visionary writers and thinkers who imagine next generation networks or future netscapes in graphic terms, but who haven’t created detailed roadmaps to, or technical specifications for, those inspiring visions. PeerPoint is trying to work the middle space between the brick-and-mortar institutions and the visionaries — with the intent to lay some track between the two.

What to what?

Peer to peer (p2p) theory can be applied to many different domains. The three domains most important to the PeerPoint project are p2p culture, p2p production, and p2p technology. There are a number of different (and sometimes conflicting) versions or interpretations of these domains.

Peer-to-peer (P2P) is not restricted to technology, but covers every social process with a peer-to-peer dynamic, whether these peers are humans or computers. Peer-to-peer as a term originated from the popular concept of P2P distributed application architecture that partitions tasks or workloads between peers. This application structure was popularized by file sharing systems like Napster, the first of its kind in the late 1990s. The concept has inspired new structures and philosophies in many areas of human interaction. The P2P human dynamic affords a critical look at current authoritarian and centralized social structures. Peer-to-peer is also a political and social program for those who believe that in many cases, peer-to-peer modes are a preferable option.

An encyclopedic wiki on every p2p topic imaginable is maintained at the Foundation for P2P Alternatives.

The PeerPoint project will use the following summaries of the three domains as a jumping off point, but participants are encouraged to consider these as open definitions. No attempt to establish a PeerPoint p2p orthodoxy is implied.

1. P2P Culture (or p2p social process)

P2P culture is a post-capitalist socio-economic framework that includes but transcends capitalism and encompasses many hybrids of open and closed, public and private, and hierarchical and egalitarian associations.

P2P emphasizes cooperation, openness, fairness, transparency, information symmetry, sustainability, accountability, and innovation motivated by the full range of human aspirations including, but definitely not limited to, personal financial gain.

P2P is a “post-capitalist framework” because many peers are quite happy to abandon capitalism's euphemisms and reductio ad absurdums altogether. However, others consider aspects of capitalism to have played a role in lifting millions from poverty and would rather adapt it to changing social and ecological needs than to abandon it for something novel. I think it is entirely possible to craft new forms of natural, ecological, and democratic capitalism which “do no harm”, and I think there is ample room in the p2p community for such a “diversity of tactics.”

P2P social process can operate in almost any economic context if two specific rules are respected. P2P capitalism, p2p Marxism, p2p anarchy, or p2p whatever must honor:

1       the political and legal equality of every peer

2       the fully informed consent of every peer

The relative degree to which these rules are followed is the relative degree of p2p-correctness, regardless of any other characteristics of the socioeconomic environment. It is entirely up to the self-identified capitalist, Marxist, anarcho-syndicalist, or whatever, to accept or reject these rules, in which case they are (or are not, respectively) a p2p capitalist, p2p Marxist, etc.

However, the simplicity of these two rules is deceptive because they have many corollaries and implications. And they don't solve the problem of competing or conflicting rights and interests among peers–we still require courts, legislatures, and social contracts for that.

In an ideology-agnostic nutshell, you could say the P2P social framework is about cooperative individualism (this is precisely how Michel Bauwens describes peerism in “The Political Economy of Peer Production”).

Individuals are interdependent but retain a self-identity, dignity, and an autonomous intellectual and moral agency. Any system which diminishes that diminishes itself.

A peer is a self-directed individual, voluntarily consenting to various cooperative social contracts or arrangements. Whether cooperation is one to one, one to many, many to one, or many to many, all cooperators are peers. If they are not peers, the enterprise probably should not be called cooperation. Instead it would be some variety of coercion, manipulation, or exploitation.

A person's success at being a peer and cooperating with other peers depends largely on how well they absorb the ideas of composability, subsidiarity, intersubjectivity and enlightened self-interest.

The mixture of individuality (selfishness) and sociality (cooperation) in each person may reflect the multilevel interaction of individual and group selection in evolution. This often carries a level of cognitive and cultural dissonance that each peer and peer group must grapple with.

2. P2P Production (or Peer Production)

Per Wikipedia: “Peer production (also known by the term mass collaboration) is a way of producing goods and services that relies on self-organizing communities of individuals who come together to produce a shared outcome… In these communities, the efforts of a large number of people are coordinated to create meaningful projects. The information age, especially the Internet, has provided the peer production process with new collaborative possibilities and has become a dominant and important mode of producing information. Free and open source software are two examples of modern processes of peer production. One of the earliest instances of networked peer production is Project Gutenberg, a project that involves volunteers that make “etexts” from out-of-copyright works available online. Modern examples are Wikipedia, an online encyclopedia, and Linux, a computer operating system. For-profit enterprises mostly use partial implementations of peer production. Amazon built itself around user reviews, Google is constituted by user-generated content (i.e. Youtube). Peer production refers to the production process on which the previous examples are based. Commons-based peer production is a subset of peer production.”

3. P2P Technology

Per Wikipedia: “A peer-to-peer computer network is one in which each computer in the network can act as a client or server for the other computers in the network, allowing shared access to files and peripherals without the need for a central server. P2P networks can be set up in the home, a business or over the Internet… P2P networks can be used for sharing content such as audio, video, data or anything in digital format. P2P is a distributed application architecture that partitions tasks or workloads among peers. Peers are equally privileged participants in the application. Each computer in the network is referred to as a node. The owner of each computer on a P2P network would set aside a portion of its resources – such as processing power, disk storage or network bandwidth – to be made directly available to other network participants, without the need for central coordination by servers or stable hosts. With this model, peers are both suppliers and consumers of resources, in contrast to the traditional client–server model where only servers supply (send), and clients consume (receive).”


Important concepts common to p2p culture, p2p production, and p2p technology

All of the following concepts are highly recursive and interwoven so it is difficult to organize them. The following outline could be arranged in many alternate ways.

1       Individual sovereignty (need definitions–what it is & isn’t in p2p context)

a       Interdependence

b       Equality of agency

2       Cooperation (need definitions–what it is & isn’t in p2p context)

a       Intersubjectivity

b       Reciprocity

c       Meritocracy

d       Enlightened self-interest

3       Openness (need p2p definitions)

a       Transparency

b       Security

c       Anonymity

d       Informed Consent

e       Open participation

4       Commons (need p2p definitions)

a       Physical & virtual

b       enclosure

c       boundaries

i         logical

ii       physical

iii     social

iv     political

v       geographic

d       Geography

i         locality

ii       bioregions

e       Sustainability

i         Renewable

1       cradle to cradle metrics

ii       Resilient

1       Diversity

2       Capacity

3       Aware & Adaptive

iii     Climax, steady-state, homeostasis

iv     Externality

v       Conservation

1       metrics

2       Efficiency

3       recycling & reuse

f        Access

i         openness

1       public/private

ii       Scarcity and rivalry

iii     Contestability: There are alternatives in principle to the dominant solution, even if everyone takes the dominant solution. In economics the term is often abused in a Thatcherist sense, meaning that antitrust is unneeded so long as dominant players are in principle contestable. What is meant here is contestability of technical solutions, e.g. no one is forced to use Internet Explorer anymore.

g       Production

i         supply chains

ii       adding value

iii     value chains

iv     recycling

v       scale & scope

vi     economy and efficiency

h       Distribution

i         exchange

1       valuation

2       reciprocity

3       symmetrical

a       one to one

b       many to many

c       fair trade

4       asymmetrical

a       one to many

b       net gain or loss

ii       free?

1       gifts

2       sharing

iii     co-consumption

iv     markets

1       composite networks

a       nodes

b       structures (ring, star, cluster, etc.)

c       relational algorithms

2       metrics & accounting

3       profit

4       externalities

5       intangibles

6       currencies

7       barter

8       trust/reputation

9       regulation

a       free vs fair

5       Composability

a       Per Wikipedia: Composability is a system design principle that deals with the inter-relationships of components. A highly composable system provides recombinant components that can be selected and assembled in various combinations to satisfy specific user requirements. In information systems, the essential features that make a component composable are that it be:

       self-contained (modular): it can be deployed independently – note that it may cooperate with other components, but dependent components are replaceable

       stateless: it treats each request as an independent transaction, unrelated to any previous request. Stateless is just one technique; managed state and transactional systems can also be composable, but with greater difficulty.
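
As a rough illustration of composability in code (a minimal sketch; the component names and request format here are hypothetical, not part of any PeerPoint spec), each component below is a self-contained, stateless function, so the same parts can be recombined to satisfy different requirements:

    from functools import reduce
    from typing import Callable

    # Each component is stateless: it maps a request to a new request
    # without remembering anything between calls.
    Component = Callable[[dict], dict]

    def sign(request: dict) -> dict:
        # Attach a placeholder signature derived from the request contents.
        return {**request, "signature": hash(str(sorted(request.items())))}

    def encrypt(request: dict) -> dict:
        # Mark the payload as encrypted (a stand-in for real cryptography).
        return {**request, "payload": "enc(%s)" % request["payload"]}

    def compose(*components: Component) -> Component:
        # Build a new component out of existing ones; order stays explicit.
        return lambda request: reduce(lambda r, c: c(r), components, request)

    sign_then_encrypt = compose(sign, encrypt)   # one assembly of the parts
    encrypt_only = compose(encrypt)              # another assembly
    print(sign_then_encrypt({"payload": "hello"}))

Because neither component keeps state, either assembly can be deployed independently or swapped for a replacement, which is exactly the property the definition above is after.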

6       Subsidiarity

 

a       (Christianity / Roman Catholic Church) (in the Roman Catholic Church) a principle of social doctrine that all social bodies exist for the sake of the individual so that what individuals are able to do, society should not take over, and what small societies can do, larger societies should not take over

b       (Government, Politics & Diplomacy) (in political systems) the principle of devolving decisions to the lowest practical level

c       Per Wikipedia: The concept of subsidiarity is applicable in the fields of government, political science, cybernetics, management, military (Mission Command) and, metaphorically, in the distribution of software module responsibilities in object-oriented programming. Subsidiarity is, ideally or in principle, one of the features of federalism, where it asserts the rights of the parts over the whole.

——————————

Sidebar for developers:

One approach to the PeerPoint design process would be to start with an existing foundation platform like the FreedomBox and extend the spec outward from that. If a FreedomBox were used as a starting platform, the PeerPoint application package would be added on top of the FreedomBox security stack.

The PeerPoint apps don’t yet exist as an integrated package, or even as individual apps that are adequate to replace Facebook, Twitter, Google Docs, Google Search, Google Earth, YouTube, Kick-Starter, etc. etc. All this functionality is envisioned for the PeerPoint eventually.

It will be necessary to include interfaces/connectors to the most popular proprietary client-server applications like Google and Facebook so that PeerPoint adopters can choose to abandon those systems (or not) in their own good time. This contingency is important because some users will adopt PeerPoint entirely for its collaboration facilities rather than its security or privacy features.

Initially the specified solution set would consist of a first tier of essential apps that must be tightly integrated in their interfaces/connectors, protocols, and data structures. After defining the first tier, development of the specs would continue on a second-tier of applications. Work on the second-tier specs could be much more distributed and parallel since the final specs for all the basic interfaces, protocols and data structures of the first tier modules would be available to all interested designers and developers.

A minimalist approach to the solution side of the PeerPoint design spec would be to identify existing p2p applications that could be stitched together with the least amount of effort and then create specs for the glue, string, and middleware required to hang it all together.

However, p2p architecture has some additional wrinkles or permutations that might expand the range of potential PeerPoint components beyond the classical or “pure” p2p applications.

Peer-to-peer can mean client-to-client or server-to-server, and within one node it can include client-server, too. Multiple clients and/or servers can reside on a node and act as a team. Stand-alone or conventional free/open (non-p2p) client-side applications can potentially be modified to communicate with remote peers.
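
A minimal sketch of that idea in Python (localhost ports and the echo protocol are stand-ins for illustration, not a proposed PeerPoint design): each node runs a server thread while remaining free to act as a client of any other peer:

    import socket
    import threading
    import time

    def serve(port: int) -> None:
        # Server role: accept connections and echo back what peers send.
        listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        listener.bind(("127.0.0.1", port))
        listener.listen()
        while True:
            conn, _ = listener.accept()
            with conn:
                conn.sendall(b"echo: " + conn.recv(1024))

    def query(port: int, message: bytes) -> bytes:
        # Client role: send a message to another peer and return its reply.
        with socket.create_connection(("127.0.0.1", port)) as conn:
            conn.sendall(message)
            return conn.recv(1024)

    # Two peers on one machine; each is simultaneously client and server.
    threading.Thread(target=serve, args=(9001,), daemon=True).start()
    threading.Thread(target=serve, args=(9002,), daemon=True).start()
    time.sleep(0.2)  # give the listeners a moment to start
    print(query(9001, b"hello from peer 2"))
    print(query(9002, b"hello from peer 1"))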

———————————————————

The common requirements for each PeerPoint app are:

       world class, best-of-breed

       free/libre/open-source software (FLOSS) license

       consistent with peer-to-peer (p2p) architecture including composability and subsidiarity

       consistent, granular, user-customizable security and identity management across apps

       integrated with other apps via a common distributed data store and/or “data bus” architecture and/or application programming interfaces (APIs)

       semantic web and linked data enabled (Tim Berners-Lee’s Semantic Web of Data)

       consistent, user-customizable large, medium, and small-screen (mobile device) user interfaces and display formats

       each app must be able to connect/interface with its corresponding major-market-share counterparts (Facebook, Google, Twitter, etc.)

       GPS enabled (inclusion of geo-location services). The Open Geospatial Consortium (OGC), an international voluntary consensus standards organization, originated in 1994. In the OGC, more than 400 commercial, governmental, nonprofit and research organizations worldwide collaborate in a consensus process encouraging development and implementation of open standards for geospatial content and services, GIS data processing, and data sharing.

First tier services & applications (this needs to be expanded and organized into user requirements, systems requirements, and proposed solution sets for the services and apps in each category)

1       integrated development tools: comparison of open source code repository/hosting facilities, Comparison of IDEs, application life-cycle management (ALM), open source ALMs, open-source collaborative IDEs: Cloud9, Collide

2       identity management

3       semantic web ontology specs, APIs, and libraries

4       security & anonymity platform (FreedomBox, Freenet, I2P or better)

5       a system library that is really good at security, p2p service discovery, storing & transmitting data, etc. Application developers can build on this to make p2p applications.

6       a library of p2p middleware and APIs for interfacing with conventional apps and between p2p apps

7       ubiquitous trust/reputation metrics (like/dislike, trust/distrust, P2P Metrics, Connect.me)

8       distributed data store (distributed hash tables, CouchDB and/or Freenet or better; see the sketch after this list)

9       asynchronous comms (email, microblogging, chat, voicemail, etc.) (Syndie or better)

10    real-time communication (IM, voice, videoconference, etc.)
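
As a rough sketch of the distributed hash table idea behind item 8 (peer names and keys here are hypothetical), consistent hashing lets any peer compute which node owns a key, so data can be located without a central index:

    import hashlib
    from bisect import bisect

    def ring_position(name: str) -> int:
        # Map a node ID or data key onto a fixed circular hash space.
        return int(hashlib.sha1(name.encode()).hexdigest(), 16)

    class HashRing:
        # Each key belongs to the first node at or after its ring position.
        def __init__(self, nodes):
            self.ring = sorted((ring_position(n), n) for n in nodes)

        def node_for(self, key: str) -> str:
            positions = [p for p, _ in self.ring]
            index = bisect(positions, ring_position(key)) % len(self.ring)
            return self.ring[index][1]

    ring = HashRing(["peer-a", "peer-b", "peer-c"])
    print(ring.node_for("alice/profile"))   # deterministic owner of the key
    print(ring.node_for("bob/photos/42"))   # any peer computes the same answer

When a node joins or leaves, only the keys adjacent to it on the ring move, which is part of what makes this approach practical for churning p2p networks.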

Second tier services & apps

11    social networking: socialswarm.net list of distributed projects, Wikipedia: distributed social network apps

12    crowdsourcing: content collaboration & management (semantic wiki engine, wiki farm platform, Etherpad, Google Docs, LibreOffice, or better)

13    project management/workflow or integrated collaboration environment (ICE) (Bettermeans, ChiliProject)

14    enterprise resource planning

15    user-customizable complementary currency and barter exchange (Community Forge or better, Bitcoin or better)

16    crowdfunding (http://www.quora.com/Is-there-an-open-source-crowdfunding-platform, Selfstarter)

17    accounting & financial reporting

18    voting (LiquidFeedback or better)

19    universal search across all PeerPoint data/content and world wide web content (YaCy or better)

Third tier services & apps

       resource sharing (cpu, graphics card, storage, bandwidth, etc.)

       a full-blown, collaborative WYSIWYG publishing platform like WordPress

       thinktank farming

       3D hypergrid browser and developer tools (Hippo, OpenSim, OpenCobalt or better)

       3D game engines

       studio-quality graphics, audio, and multi-media production: video editing software list, comparisons

       computer-aided design (CAD) tools

       data analysis: Freebase

       data visualization: data visualization software list, Freebase, Prezi (proprietary), Gource, yWorlds

       Personal Health Record (PHR) system

       Disaster Preparedness & recovery

Digital Commons

One contribution PeerPoint can make to the digital commons and the ethics of sharing is to incorporate a computing resource-sharing capability into its system design. Every personal computer, tablet, smart phone, etc. is idle or operating far below its capacity most of the time.

Added up, this unused capacity is equivalent to many supercomputers sitting idle. Those idle virtual supercomputers could be used in the public interest if the personal computing devices connected to the internet were designed to share their idle capacity for public purposes. Users might also be given the option to designate various percentages of their idle capacity to different uses, causes, groups, etc.

BOINC: Open-source software for volunteer computing and grid computing. Use the idle time on your computer (Windows, Mac, or Linux) to cure diseases, study global warming, discover pulsars, and do many other types of scientific research. It's safe, secure, and easy.

Peer Publica

Once PeerPoint is up and running with the first tier applications we may be able to organize the 99% well enough to begin rapid development of the more complex second-tier applications and to start building or buying alternative network infrastructure.

Our new public internet won’t be owned by corporations or by the state. It will be owned by the people, an instrument of the people to invoke the people’s will and help bring both government and corporations under civic control.

Obstacles

“We are not progressing from a primitive era of centralized social media to an emerging era of decentralized social media, the reverse is happening…. Surveillance and control of users is not some sort of unintended consequence of social media platforms, it is the reason they exist….Free, open systems, that neither surveil, nor control, nor exclude, will not be funded, as they do not provide the mechanisms required to capture profit….we do not have the social will nor capacity to bring these platforms to the masses, and given the dominance of capital in our society, it’s not clear where such capacity will come from. …Eliminating privilege is a political struggle, not a technical one.” (emphasis added) – Dmytri Kleiner

I partly agree, but I think we have both a political struggle and a technical struggle rolled into one.

The integral organizing and collaboration tools described in PeerPoint are tools (maybe I should even call them weapons) that we need now to conduct our political struggle, not later. The community that brought us Linux, Wikipedia, Project Gutenberg, and LibreOffice (the integrated suite of open source applications that replaces Microsoft Office), is capable of bringing us a PeerPoint or something equivalent if it understands the imperative nature of the need.

If anyone doubts this, look at Wikipedia’s impressive List of Open Source Software.

But the free/libre/open source software (FLOSS) and hacker development community is largely self-motivated and idiosyncratic, with many islands of genius and inspiration separated by vast seas of minutiae and trivia. Or to put it another way, the FLOSS & hacker community is like an orchestra tuning up or playing without a score. It is a cacophony of individual efforts, most with relatively narrow scope compared with that of PeerPoint. The bulk of the community does not yet seem to perceive its enlightened self-interest in our existential struggle for open source, p2p society and open source p2p government. The digital space for activism lags far behind the social space that it should mirror. Maybe the “digital libertarians” in the software development community feel they can outwit Big Brother better on their own terms as individuals. Perhaps we need to help open their “Doors of Perception” wider.

The PeerPoint Design Specification is not intended to replace or supersede existing software and technology development efforts. It is a complementary program designed to help coordinate the work of the FLOSS/hacker/p2p community towards a future point of convergence and interoperability. It is essentially a statement of what the progressive user community desperately needs from the technical community in order to prevail in the social, political, economic, and environmental struggles that confront us. It is intended to be a description of needs and potential solutions collectively designed by all the participating stakeholders in a free, democratic future for the internet environment.

Criticisms of PeerPoint

(see Next Net Google Group > PeerPoint Discussion topic)

Some have accused the PeerPoint project of being too ambitious and naive–they've seen & done it all before and have a smug, superior attitude. They argue that the correct approach is more of the same process they are accustomed to–don’t make a “master plan”, just put your head down and code, code, code.

 

But it is exactly the laissez-faire technocratic approach that produced the present state of affairs in which the internet is now colonized and dominated by huge corporate predators. Digital anarchists, libertarians, cynics and other self-interested technocrats may be the naive, unwitting pawns of the powerful actors they intend to defy or hold in contempt. They almost seem to hold social justice and participatory democracy in contempt as well, or at least to view them with apathy.

We need to admit that the world got a whole lot closer to going down the tubes on our watch. Despite the best of intentions, we all get a really big-assed #FAIL.

 

Einstein's definition of insanity is doing the same thing over and over and expecting different results. By that definition anybody who thinks that old-style FLOSS and independent, freelance, DIY, ad hoc, iterative development is going to pull us through the crises and the threats we now face is not just naive–they are buried inside a mystery wrapped in a conundrum, locked within an enigma. They are lost in space.

 

The threats to privacy, liberty, democracy, and equality have steadily grown worse despite all our BRILLIANT efforts up till now, so only a different strategy can be expected to reverse that trend.

 

That strategy is not a continued, exclusive reliance on autonomous, self-organizing, emergent systems. That's all well and good but not, by itself, enough. We need to try something else as well. That something else might even be something that was tried in the past and discredited because it was ineffective then. It might be large-scale collective organization and design.

 

Critics of PeerPoint have suggested that on its best day it would be a vain effort to imitate the W3C (perhaps because PeerPoint can be seen as a standardization effort). On its worst day it would be no more than an over-ambitious pipe dream. But they aren't the only ones who don't want another W3C. What we want is more like a combination of the Linux Foundation and the Wikimedia Foundation. Not that I'm knocking the W3C (peace be with them) but I am proposing something fresher and more agile–more like an on-going re-mix+mashup+hackathon…

 

The critics also say [Strawman phrase w/o reference] that nothing good was ever designed by a “committee” implying that I have proposed some kind of bureaucratic nightmare. They point to giant, government-sponsored boondoggles they were part of in the past. My friend Fabio had a better rebuttal than I could have given:

 

“Design by committee may not work, but design (and build, review, adjust, adapt, discover, unfold, involving everyone during the whole thing) by community does work and is proven to produce life-affirming architecture, in contrast to deadening architecture produced by the default “efficient”, commercial endeavor. A committee and a community. Both are groups of people. So is a mob, or an army, or a corporation. What's the difference?”

Complex structures may emerge from simple social actions which do not intend to create the complex structures, but we don’t need to depend on fortunate cases of emergence and serendipity alone. Design and planning are useful, too.

In the past, large-scale, collective design often stalled, bogged down, or failed because it was forced to adopt centralized, top-down planning and organization methods. Now we can do things in a much more distributed, horizontal, and agile manner. (It’s called peer-to-peer culture, or as Fabio put it–community.)

 

Grand designs also failed due to organizational structures and designs that were monolithic (and hierarchical with geographic + political boundaries). Now we can create organizations and designs that are modular and composable, and which obey subsidiarity.

 

Finally, many parties to conflict have won or lost based on their access to technology. The famous metaphor is “bringing a knife to a gun-fight.” It reminds me of the scene where Indiana Jones faces the menacing swordsman. (Does anybody know how to embed a video in a Google Doc?) Somehow ignoring the thesis of Guns, Germs, and Steel, some PeerPoint critics argue that revolutions are not about tools or technology, they are just about people and social relations. The politically correct position in some circles is that technology doesn't make revolutions, people make revolutions. Tell that to an Afghan tribesman and see if he will discard his AK-47 or his satellite phone.

 [The war example is good: it is often not the ingenuity of leaders but technological progress that shifts the weights. There is a phrase by Ernst Jünger that a machine gun outweighs the patriotic heroism of a 1914 volunteer battalion.]

As Elinor Ostrom wrote in her last words to the world before her death on June 12, 2012,

 

“The goal now must be to build sustainability into the DNA of our globally interconnected society. Time is the natural resource in shortest supply…We have a decade to act before the economic cost of current viable solutions becomes too high. Without action, we risk catastrophic and perhaps irreversible changes to our life-support system. Our primary goal must be to take planetary responsibility for this risk, rather than placing in jeopardy the welfare of future generations.”

 

The bottom line is that the PeerPoint Open Design Specification project is meant to promote a more rapid and coherent development of our next generation of non-violent weapons of social revolution.

 

Let those who don't think we need a non-violent social revolution shut the hell up and get out of the way.

We have lots of programmers, but not lots of time.

 

At the very least we need to offer something like an X-Prize (or an X-P2P Prize) and we need to be ready and willing to fund and provision projects that fall within PeerPoint’s conceptual design scope. That could begin right now with FreedomBox, a base on which a PeerPoint might be constructed.

So pony up, folks. Like the old auctioneer says, “What’s it worth? You tell me.”

Poor Richard




PeerPoint Requirements Definition

PeerPoint Requirements will be divided into the following (frequently overlapping) topics:

Tier 1

1       integrated development tools

2       identity management

3       semantic web ontology

4       security & anonymity

5       system library

6       library of p2p middleware and APIs

7       distributed data store

8       trust/reputation metrics

9       asynchronous communication

10    real-time communication

Tier 2

11    social networking

12    crowdsourcing: content collaboration

13    project management/workflow

14    enterprise resource planning

15    complementary currency and exchange systems

16    crowdfunding

17    accounting and financial reporting

18    voting

19    search

Tier 3

20    thinktank farming

21    computing resource sharing (cpu, graphics card, storage, bandwidth, etc.) (Parallella)

22    3D hypergrid browsing

23    3D game engines

24    computer-aided design (CAD) tools

25    data analysis and visualization

26    Personal Health Record (PHR) system

27    Disaster Preparedness & recovery

I. PeerPoint Requirements: Integrated Development Tools

       comparison of open source code repository/hosting facilities

       Comparison of integrated development environments

       application life-cycle management (ALM)

       open source ALMs

II. PeerPoint Requirements: Identity Management (IM)

The first step in defining the problem space of identity management is to define identity. What is it? From The Free Dictionary (tfd.com):

identity: 1. The collective aspect of the set of characteristics by which a thing is definitively recognizable or known

Wikipedia defines Digital identity as “a set of data that uniquely describes a person or a thing (sometimes referred to as subject or entity) and contains information about the subject's relationships to other entities.” The social identity that an internet user establishes through digital identities in cyberspace is referred to as online identity. A critical problem in cyberspace is knowing with whom you are interacting. In essence, the problem is that “on the Internet, nobody knows you're a dog.”

According to Wikipedia, “an online identity, internet identity, or internet persona, is a social identity that an Internet user establishes in online communities and websites. It can also be considered as an actively constructed presentation of oneself. Although some people prefer to use their real names online, some internet users prefer to be anonymous, identifying themselves by means of pseudonyms, which reveal varying amounts of personally identifiable information. An online identity may even be determined by a user's relationship to a certain social group they are a part of online. Some can even be deceptive about their identity. In some online contexts, including Internet forums, MUDs, instant messaging, and massively multiplayer online games, users can represent themselves visually by choosing an avatar, an icon-sized graphic image. Avatars, digital representations of oneself or a proxy that stands in for a person in virtual worlds, are how users express their online identity. As other users interact with an established online identity, it acquires a reputation, which enables them to decide whether the identity is worthy of trust. Some websites also use the user's IP address to track their online identities using methods such as tracking cookies.”

PeerPoint IM Terms and Definitions

       entity: anything that has a definite, recognizable identity, whether a person, group, organization, place, object, computer, mobile device, concept, etc.

Identity conceptual view (credit: Wikipedia)

       attribute: any characteristic, property, quality, trait, etc. that is inherent in or attributed to an entity. An entity has one or more attributes and an attribute has one or more values. For example “the sky (entity) has color (attribute) of blue (value).” This entity-attribute-value (EAV) model is sometimes called a “triple” as in the Resource Description Framework (RDF); see the sketch following these definitions. An attribute (which is also a kind of entity) may have attributes of its own. These are often logically nested in a hierarchical fashion. For example, an address may be an attribute of a company but also an entity with attributes of street, city, state, etc. An entity may have multiple instances of the same attributes, such as multiple aliases or addresses. (Different programming languages, protocols, frameworks, and applications may organize the entity-attribute-value model differently; or use different terms such as object for entity or property for attribute; but this is probably the most generic approach.)

Rdf-graph3 (Photo credit: Wikipedia)

       identity: a definitive and recognizable set of attribute-value pairs (or entity-attribute-value triples) for a particular entity. The set of attribute-value pairs may be partial or exhaustive, depending on the intended purpose of the identity construct.

       identification (ID): a dataset (value, record, file, etc.) which represents the most concise amount of information required to specify a particular entity and distinguish it from others. An ID may be local to a particular context, such as a company employee ID or inventory number, or it may be universal. Examples of universal ID are Global Trade Item Numbers (GTIN) and uniform resource identifiers (URI). The ID typically consists of a smaller quantity of data than the full identity dataset and only represents or refers to the full identity.
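
A minimal sketch of the EAV/triple model above using the Python rdflib library (the example.org URIs and the skyColor/address attributes are hypothetical, chosen only to mirror the examples in these definitions):

    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import FOAF, RDF

    EX = Namespace("http://example.org/")  # hypothetical vocabulary
    g = Graph()

    alice = URIRef("http://example.org/alice")
    g.add((alice, RDF.type, FOAF.Person))          # entity and its class
    g.add((alice, FOAF.name, Literal("Alice")))    # attribute with a value
    g.add((alice, EX.skyColor, Literal("blue")))   # user-defined attribute

    # A value can itself be an entity with attributes of its own (nesting):
    address = URIRef("http://example.org/alice/address")
    g.add((alice, EX.address, address))
    g.add((address, EX.city, Literal("Springfield")))

    print(g.serialize(format="turtle"))  # the same triples, serialized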

Identity management problem space

The PeerPoint requirements will explore various parts of the Identity Management problem space, all of which overlap or interpenetrate each other:

1       description

2       classification

3       identity provisioning and discovery (directory services, including identity & directory linking, mapping, and federation)

4       authentication (validation/verification of ID, security certificates, security tokens, security token service)

5       authorization (access control, role-based access control, single sign-on)

6       security (anonymity, vulnerabilities, risk management)

1. Identity Description

Description is meant here in its most general sense as the entire set of attributes and values that describe an entity, and not simply a “description” box or field in a record. This is the aspect of identity management which establishes the attributes and values by which an entity is typically recognizable or known in a particular context. A description can attempt to be exhaustive, but in most cases it is only as complete as required for its intended purpose in a given application.

PeerPoint requirements:

       Identity management functions should be consistent across all PeerPoint applications, so the requirements should be implemented as part of a PeerPoint system library from which all applications, middleware, APIs, etc. can call the necessary functions. Interfaces or connectors must be provided for non-PeerPoint-compatible systems.

       There are many methods in existing software applications, protocols, and frameworks to describe the identity of entities. The PeerPoint identity management solutions must inter-operate with as many of these as possible. For that reason the PeerPoint descriptions of entities must be as generic, modular, composable, and extensible (open-ended) as possible.

       PeerPoint user interfaces (UI) must allow users to extend and customize entity descriptions in as intuitive a manner as possible without reducing or destroying the interoperability of the descriptions with those of other platforms. One approach is to provide user input forms with the most common or universal attributes for various types of entities, combined with fields for additional user-defined attribute-value pairs as well as simple tags.

       In both standardized and customizable parts of entity descriptions, the UI should provide as much guidance as possible about the most typical names and/or value ranges for attributes without locking the user in to these “preferred” or popular choices.

One of the most basic entities in social networking systems is the person, member, or user account. The identity description for such an entity is commonly called a “user profile.” User profiles are also found in most applications that involve online collaboration. The most primitive form of user account consists of a user ID (or UID) and a password, where both the ID and password are simple alphanumeric strings. But increasingly, user accounts for social and collaborative applications include elaborate user profiles. Facebook is a good example, having one of the most extensive user profiles of any internet application.

Below is a partial screenshot of Poor Richard's Facebook Profile:

The information in a Facebook User Profile is organized into numerous logical categories. Some not shown include the user's friends, Facebook groups to which the user belongs, and a personal library of documents and images. Other profile sections include free-form text.

Many of the profile data categories such as “Arts and Entertainment” may include unlimited numbers of “likes” or tags. These are added via an intuitive interface in which the user begins typing something such as a-r-e-t-h-a- -f-r-a-n-k… and as the user types, a list of matching tags is displayed and continuously updated with each keystroke, showing possible matches from the Facebook database. If no match is found by the end of typing, the entered tag label is displayed as-is with a generic icon. Facebook's database of entities in the various categories is created and maintained primarily by Facebook users who create Facebook “pages” for people, groups, companies, products, movies, authors, artists, etc.

Other social network sites have profile features not found in the Facebook User Profile. Google+ adds a feature to the “friends” data category called “circles” and a homepage feature called “hangouts”. Google+ users can organize friends into user-defined categories called circles that inter-operate with other Google apps, and can create live audio-video chat groups with user-defined membership.

LinkedIn has additional profile data categories for resumes/CVs, employment references, recommendations, and testimonials.

In addition to users, on various social networks accounts may be created for special-interest groups, fan clubs, companies, organizations, and topic pages of all kinds. The structures of the profiles for different types of accounts on different networks vary widely.

Very limited, generic profiles are also hosted by services such as Gravatar and About.me.

Sample Gravatar profile:

OpenID Simple Registration is an extension to the OpenID Authentication protocol that allows for very lightweight profile exchange. It is designed to pass eight commonly requested pieces of information when an End User goes to register a new account with a web service.

A Personal Data Service (PDS) is “a personal, digital identity management service controlled by an individual. It gives the user a central point of control for their personal information (e.g. interests, contact information, affiliations, preferences, friends). The user's data attributes being managed by the service may be stored in a co-located repository, or they may be stored in multiple external distributed repositories, or a combination of both. Attributes from a PDS may be accessed via an API. Users of the same PDS instance may be allowed to selectively share sets of attributes with other users.” (Wikipedia)
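
A toy sketch of the PDS idea in Python (the class and method names are invented for illustration): attributes live in a store the user controls, and other parties can read only what the user has explicitly shared:

    class PersonalDataService:
        def __init__(self):
            self._attributes = {}  # attribute name -> value
            self._grants = {}      # peer id -> set of readable attributes

        def set(self, attribute, value):
            # The user is the only writer of their own attribute store.
            self._attributes[attribute] = value

        def share(self, peer, *attributes):
            # Selectively grant another peer read access to some attributes.
            self._grants.setdefault(peer, set()).update(attributes)

        def read(self, peer, attribute):
            # The API enforces the user's sharing decisions on every read.
            if attribute not in self._grants.get(peer, set()):
                raise PermissionError("%s may not read %r" % (peer, attribute))
            return self._attributes[attribute]

    pds = PersonalDataService()
    pds.set("email", "alice@example.org")
    pds.set("interests", ["p2p", "gardening"])
    pds.share("bob", "interests")
    print(pds.read("bob", "interests"))  # allowed by the grant
    # pds.read("bob", "email") would raise PermissionError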

Gravatar and OpenID SR are simple examples of what PeerPoint will call a meta-profile. More elaborate meta-profile systems are evolving, such as:

       data.fm is “an open source, cloud-based PDS with a centralized underlying attribute store as well as an API to enable bi-directional attribute updates from external websites and services. The APIs are based on standards and include WebDAV, SPARQL and Linked Data. Data formats exchanged include RDF, XML, JSON.” (Wikipedia) This web data platform supports several generations of standards and recommendations: DAV, AJAX, JSONP, CORS, Read/Write Linked Data, RDF/XML/JSON content negotiation, SPARQL 1.1, and WebID

       MyProfile intends to provide a solution for managing the numerous accounts and profiles that users have on the Internet. Its main purpose is to provide a unified user account, or simply ‘user profile’, which as opposed to current ‘silo’ profiles, would really be under the user’s control, on a device controlled by the user. Features will include:

       DATA CONTROL – It's your data and only you should control it. The data are hosted on a device controlled only by you (it could be your home computer or a plug computer).

       PRIVACY – You decide who should have access and to which resources. Access Control List ontologies will be used to define how access is granted or denied.

       WebID – Authenticate to services using WebID. No more usernames and passwords to remember! WebID provides high security using cryptographic certificates.

       LINKED DATA – Take advantage of the full potential of Linked Data. Your profile is accessible as an RDF file, allowing you full access to the Semantic Web.

PeerPoint requirements:

       the capability to create and maintain meta-profiles for any type of entity

       intuitive user interface for creating, customizing, and maintaining meta-profiles

       allow the creator of a profile to determine where any portion of it is stored and with whom any portion of it is shared

       capability to synchronize the PeerPoint meta-profile with profiles in non-PeerPoint applications

2. Identity Classification: “people, places and things”

Different kinds of entities have different kinds of descriptions, so an important part of the identity management problem is the problem of sorting things into various categories. Sorting things into categories or classes is often called categorization or classification. Classification systems are often called taxonomies. Examples might include the index of an encyclopedia, a library card catalog, or a glossary of internet terms.

In the case of information systems, the term ontology means “a rigorous and exhaustive organization of some knowledge domain that is usually hierarchical and contains all the relevant entities and their relations.” (tfd.com) Wikipedia says: “An ontology renders shared vocabulary and taxonomy which models a domain with the definition of objects and/or concepts and their properties and relations. Ontologies are the structural frameworks for organizing information and are used in artificial intelligence, the Semantic Web, systems engineering, software engineering, biomedical informatics, library science, enterprise bookmarking, and information architecture as a form of knowledge representation about the world or some part of it. The creation of domain ontologies is also fundamental to the definition and use of an enterprise architecture framework.”

Another related term in information systems is namespace, often used in relation to wiki structures and directory services.

Semantic ontologies are often implemented as systems of structured metadata that can be added to web pages, embedded in HTML or XML, or embedded in scripts or other code that runs in browsers or other clients, servers, or peer nodes.

In identity management, two of the main systems of categories, or taxonomies, would be categories of entities and categories of attributes. Attributes are themselves categories of values (the attribute “color” is a category of colors: red, blue, green, etc.).
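A toy illustration of that last point, with made-up entity classes and attribute vocabularies:

    # An attribute is itself a category of values: "color" names the set
    # of colors a thing may take. Classes and vocabularies are invented.

    ENTITY_CLASSES = {"person", "group", "organization", "place", "device"}

    ATTRIBUTES = {
        # attribute name -> the category (set) of values it may take
        "color": {"red", "blue", "green"},
        "role":  {"member", "moderator", "admin"},
    }

    def valid(attribute, value):
        """A value is valid if it belongs to the attribute's value category."""
        return value in ATTRIBUTES.get(attribute, set())

    assert valid("color", "red")
    assert not valid("color", "loud")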

Examples of high-level categories of entities might include:

       people

       groups

       organizations

       places

       internet technologies

       devices

Examples of very high-level categories of attributes could include:

       Material properties

       Chemical properties

       Physical properties

       Mental properties

       Economic attributes

These taxonomies become semantic web ontologies when they are defined in machine-readable formats and protocols such as:

       Extensible Markup Language (XML)

       JSON (JavaScript Object Notation) is “a text-based open standard designed for human-readable data interchange. It is derived from the JavaScript scripting language for representing simple data structures and associative arrays, called objects. Despite its relationship to JavaScript, it is language-independent, with parsers available for many languages. The JSON format is often used for serializing and transmitting structured data over a network connection. It is used primarily to transmit data between a server and web application, serving as an alternative to XML.” (Wikipedia)

       Resource Description Framework (RDF)

       Web Ontology Language (OWL)

       Attention Profiling Mark-up Language (APML) is an XML-based format for expressing a person's interests and dislikes. APML allows users to share their own personal Attention Profile in much the same way that OPML allows the exchange of reading lists between News Readers. The idea is to compress all forms of Attention Data into a portable file format containing a description of ranked user interests.

       Simple Object Access Protocol (SOAP)

       Description of a Project (DOAP) (an RDF schema and XML vocabulary to describe software projects)

       Service Provisioning Markup Language (SPML) is an XML-based framework, being developed by OASIS, for exchanging user, resource and service provisioning information between cooperating organizations

       Friend of a friend (FOAF), a machine-readable ontology describing persons, their activities and their relations to other people and objects.

       The WebID Protocol (formerly known as FOAF+SSL) is “a decentralized secure authentication protocol utilizing FOAF profile information as well as the SSL security layer available in virtually all modern web browsers. Contrary to the usual SSL utilization patterns, it does not require the dedicated Certificate authority to perform the user authorization. Useful identities can be minted for users easily by authorities, but a FOAF-based web of trust connecting all the user's activity on the World Wide Web can then be established gradually, without formal key signing parties, to make the identity more trustworthy and hard for anyone (even the original issuing authority) to forge.” (Wikipedia)
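Stripped of the TLS plumbing, the check the quoted description implies reduces to one comparison: the public key presented in the client's self-signed certificate must also appear in the profile published at the claimed WebID URI. A minimal sketch, with certificate parsing and profile fetching stubbed out as placeholders:

    # Core WebID comparison step only; everything else is stubbed. The
    # key values are placeholders, not real cryptographic material.

    def fetch_profile_keys(webid_uri):
        # A real system would dereference the URI and read the public
        # keys (modulus/exponent pairs) out of the RDF profile document.
        return {("00af3b", 65537)}           # placeholder (modulus, exponent)

    def key_from_certificate(cert):
        # A real system would take this from the client's TLS certificate.
        return ("00af3b", 65537)

    def webid_authenticated(webid_uri, cert):
        """Authenticated iff the cert's key is listed in the WebID profile."""
        return key_from_certificate(cert) in fetch_profile_keys(webid_uri)

    print(webid_authenticated("https://example.org/alice#me", cert=None))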

Linked Data

One great advantage of machine-readable ontologies is the ability to semantically link data across the web.
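For instance, a FOAF-style profile serialized as JSON-LD can use “@id” URIs to point at resources described elsewhere, so one dataset links into another instead of copying it. A sketch using only the Python standard library; the profile URIs are illustrative:

    import json

    profile = {
        "@context": {"foaf": "http://xmlns.com/foaf/0.1/"},
        "@id": "https://alice.example.org/profile#me",
        "@type": "foaf:Person",
        "foaf:name": "Alice",
        # A reference to another resource, not an inline copy: any
        # client can dereference this URI to learn more about Bob.
        "foaf:knows": {"@id": "https://bob.example.net/profile#me"},
    }

    print(json.dumps(profile, indent=2))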

Linked Data Platform Use Cases And Requirements (W3C)

       1.1 Use Cases

       1.1.1 Maintaining Social Contact Information

       1.1.2 Keeping Track of Personal and Business Relationships

       1.1.3 System and Software Development Tool Integration

       1.2 Requirements

Linking open-data community project

The goal of the W3C Semantic Web Education and Outreach group's Linking Open Data community project is to extend the Web with a data commons by publishing various open datasets as RDF on the Web and by setting RDF links between data items from different data sources. In October 2007, datasets consisted of over two billion RDF triples, which were interlinked by over two million RDF links. By September 2011 this had grown to 31 billion RDF triples, interlinked by around 504 million RDF links. There is also an interactive visualization of the linked data sets to browse through the cloud.

Dataset instance and class relationships

Clickable diagrams that show the individual datasets and their relationships within the DBpedia-spawned LOD cloud (figures available in the downloadable document) are:

       Instance relationships amongst datasets

       Class relationships amongst datasets

3. Identity provisioning and discovery (directory services, including identity & directory linking, mapping, and federation)

(PeerPoint requirements to be determined)

“A directory service is the software system that stores, organizes and provides access to information in a directory. In software engineering, a directory is a map between names and values. It allows the lookup of values given a name, similar to a dictionary. As a word in a dictionary may have multiple definitions, in a directory, a name may be associated with multiple, different pieces of information. Likewise, as a word may have different parts of speech and different definitions, a name in a directory may have many different types of data.” (Wikipedia)
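That map-with-multiple-values idea takes only a few lines to express; the entries below are invented for illustration:

    from collections import defaultdict

    directory = defaultdict(list)
    directory["alice"].append(("mail", "alice@example.org"))
    directory["alice"].append(("webid", "https://alice.example.org/#me"))
    directory["alice"].append(("device", "alice-laptop"))

    def lookup(name, kind=None):
        """Return all values bound to a name, optionally filtered by type."""
        return [value for (k, value) in directory.get(name, [])
                if kind is None or k == kind]

    print(lookup("alice"))            # every binding for the name
    print(lookup("alice", "webid"))   # just the WebID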

       List of directory services

       OpenLDAP (openldap.org) is “a free, open source implementation of the Lightweight Directory Access Protocol (LDAP). LDAP is a platform-independent protocol. Several common Linux distributions include OpenLDAP Software for LDAP support. The software also runs on BSD-variants, as well as AIX, Android, HP-UX, Mac OS X, Solaris, Microsoft Windows (NT and derivatives, e.g. 2000, XP, Vista, Windows 7, etc.), and z/OS.” (Wikipedia)

       Friend of a friend (FOAF) search engine (foaf-search.net) “You can use the input field above to search through 6 million interconnected persons, organisations and places in the semantic web. Enter the name, e-mail, nick, homepage, openid, mbox-hash or URI of the person, organisation or place you are searching. Friend of a friend (FOAF) is a decentralized social network using semantic web technology to describe persons and their relations in a machine readable way. The Friend of a friend vocabulary can also be used to describe groups, organisations and other things. Everybody can create a Friend of a friend profile describing himself and whom he knows. This profile can be published anywhere on the web. Many social networking websites publish the openly accessible information of their members with Friend of a friend. DBpedia uses it to publish data about persons in Wikipedia. If you want to create a profile right away, you can use FOAF-a-Matic. More information can be found on the FOAF project website, on Wikipedia or in the specification.”

4. Authentication (validation/verification of ID, security certificates, security tokens, security token services)

(PeerPoint requirements to be determined)

In an article on Digital identity Wikipedia observes, “Currently there are no ways to precisely determine the identity of a person in digital space. Even though there are attributes associated to a person's digital identity, these attributes or even identities can be changed, masked or dumped and new ones created. Despite the fact that there are many authentication systems and digital identifiers that try to address these problems, there is still a need for a unified and verified identification system in cyberspace.”

       W3C links on authentication: Libraries, Protocols, Services, and APIs

       WebID Authentication Delegation (W3C)

       List of authentication protocols (Wikipedia)

       Public-key infrastructure (PKI) is “a set of hardware, software, people, policies, and procedures needed to create, manage, distribute, use, store, and revoke digital certificates. A PKI is an arrangement that binds public keys with respective user identities by means of a certificate authority (CA). The user identity must be unique within each CA domain.” (Wikipedia)

       CAPTCHA, which verifies that a user of a website is human, to prevent automated abuse

       Extensible Authentication Protocol (EAP) is an authentication framework frequently used in wireless networks and Point-to-Point connections.

       Identity verification service is “an online service used to establish a mapping from a person's online identity to their real life identity. These services are used by some social networking sites, Internet forums, dating sites and wikis to stop sockpuppetry, underage signups, spamming and illegal activities like harassment and scams.” (Wikipedia)

       The Certification Authority Browser Forum, also known as CA/Browser Forum, is a voluntary consortium of certification authorities, vendors of Internet browser software, and suppliers of other applications that use SSL certificates. In April 2011, the CA/Browser Forum released “Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates” for public consultation. The intent is that all browser and relying party application software developers will incorporate the Baseline Requirements into their accreditation and approval schemes as requirements for all applicants who request that a self-signed root certificate be embedded as a trust anchor. This would extend common standards for issuing SSL/TLS certificates beyond EV to include all Domain-validated (DV) and Organisation-validated (OV/IV) certificates.

5. Authorization (access control, role-based access control, single sign-on)

(PeerPoint requirements to be determined)

       W3C links on authorization (incomplete)

       WebID authorization delegation (W3C)

       Cross-Origin Resource Sharing (CORS) (W3C) User agents commonly apply same-origin restrictions to network requests. These restrictions prevent a client-side Web application running from one origin from obtaining data retrieved from another origin, and also limit unsafe HTTP requests that can be automatically launched toward destinations that differ from the running application's origin. In user agents that follow this pattern, network requests typically use ambient authentication and session management information, including HTTP authentication and cookie information. This specification extends this model in several ways… (A sketch of the mechanism follows this list.)

       Access Control Service, or Windows Azure AppFabric Access Control Service (ACS), is a Microsoft-owned (proprietary; included here only as an example of functionality) cloud-based service that provides an easy way of authenticating and authorizing users to gain access to web applications and services, while allowing the features of authentication and authorization to be factored out of the application code.
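A minimal sketch of the CORS mechanism described above, using only the Python standard library: the server opts specific foreign origins in to cross-origin reads by echoing the Access-Control-Allow-Origin header. The allow-list and port are made up:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    TRUSTED_ORIGINS = {"https://app.example.org"}    # illustrative allow-list

    class CORSHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            origin = self.headers.get("Origin", "")
            self.send_response(200)
            if origin in TRUSTED_ORIGINS:
                # Without this header a conforming browser blocks the
                # cross-origin page from reading the response body.
                self.send_header("Access-Control-Allow-Origin", origin)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(b'{"hello": "world"}')

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), CORSHandler).serve_forever()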

6. Security (privacy, anonymity, vulnerabilities, risk management)

(PeerPoint requirements to be determined)

Security can never be 100%.  It is often based on trust and reputation, which we need for a web without gatekeepers.

Privacy and anonymity can be thought of as forms of security.

Security by obscurity is an important principle. 

Many writers take the view that there is no such thing as anonymity on the internet (search “no such thing as anonymity on the internet” for sources) due to data mining and pattern analysis technologies. Kat Orphanides writes: “Even if you disable cookies, your browser could easily share enough information to give you a unique signature on the web. I've been testing the computers I use on the Electronic Frontier Foundation's Panopticlick website, which reports the identifying information your browser is sharing and compares it against data it has already collected from other users. So far, every system I've tested has been uniquely identifiable.”

Perhaps if identity can be discovered heuristically, that’s the way PeerPoint should go, rather than using certificates, tokens, etc. On the other hand, perhaps part of PeerPoint’s requirements should be methods for obfuscating such identifying patterns to preserve anonymity when that is a user’s desire. Is it possible to distinguish between legitimate (e.g. political) and illegitimate (e.g. criminal) reasons for anonymity?
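The fingerprinting concern can be put in back-of-the-envelope terms: the identifying information carried by a combination of browser attributes is the sum of each attribute's surprisal. The frequencies below are invented for illustration, not measured data:

    import math

    # Hypothetical fraction of browsers sharing each observed value.
    observed = {
        "user_agent":  1 / 2000,
        "screen_size": 1 / 15,
        "timezone":    1 / 25,
        "font_list":   1 / 50000,
    }

    bits = sum(-math.log2(freq) for freq in observed.values())
    print(f"~{bits:.1f} bits of identifying information")

    # About 33 bits are enough to single out one person among ~8 billion,
    # which is why spoofing or normalizing such attributes, as suggested
    # above, can matter for anonymity.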

Freedom not Fear (freedomnotfear.org and eff.org)

       We are a coalition of more than 150 organizations that share a common goal.

       We want freedom of speech in a digitalized world and a free and uncensored internet to express ourselves.

       We want privacy in the knowledge society, not surveillance.

       We want to live in freedom, not in fear.

Identity Management Resources

       Glossary of Semantic Technology Terms (mkbergman.com)

       The Five Stars of Web Identity (melvincarvalho.com)

       The Laws of Identity (identityblog.com)

       AAA commonly stands for authentication, authorization and accounting. It refers to a security architecture for distributed systems, which enables control over which users are allowed access to which services, and how much of the resources they have used. Two network protocols providing this functionality are particularly popular: the RADIUS protocol, and its newer Diameter counterpart. (Wikipedia)

       Apache Shindig is the reference implementation of the OpenSocial API specifications, a standard set of Social Network APIs which includes:

       Profiles

       Relationships

       Activities

       Shared applications

       Authentication

       Authorization

       The National Strategy for Trusted Identities in Cyberspace (NSTIC)

       No hub. No center. (identityblog.com)

       Federated Identity Management in Cloud Computing (clean-clouds.com)

       Reimagining Active Directory for the Social Enterprise  (msdn.com)

       The Personal Identity Consortium was founded in 2010 by Kaliya “Identity Woman” Hamlin to catalyze a thriving personal data ecosystem. Projects include:

       Standards Engagement and Development: To succeed, an effective personal data ecosystem needs to use open standards to allow many different services to interoperate. We track developments in many open standards efforts and are proactively engaged in several standards technical committees. We report on our activity in the Personal Data Journal.

       The Open Group and MIT Experts Detail New Advances in ID Management (sys-con.com)

       What is OpenID Connect? “OpenID Connect is a suite of lightweight specifications that provide a framework for identity interactions via RESTful APIs. The simplest deployment of OpenID Connect allows for clients of all types including browser-based, mobile, and javascript clients, to request and receive information about identities and currently authenticated sessions. The specification suite is extensible, allowing participants to optionally also support encryption of identity data, discovery of the OpenID Provider, and advanced session management, including logout.”

       Open Data Protocol (OData) “is an open web protocol for querying and updating data. The protocol allows for a consumer to query a datasource over the HTTP protocol and get the result back in formats like Atom, JSON or plain XML, including pagination, ordering or filtering of the data. Many of the building blocks that make up OData are standardized via Atom and AtomPub. The OData specification is available under the Microsoft Open Specification Promise (OSP). Microsoft has released an OData software development kit (SDK) consisting of libraries for .NET, PHP, Java, JavaScript, webOS, and the iPhone.” (Wikipedia)

       Security Assertion Markup Language (SAML)

       MIT Core ID Project Site “The increased dependence today of citizens on the IT and telecoms infrastructure for their day-to-day activities points to the crucial need for an “identity infrastructure” that offers an ecosystem in which digital identities can be created, managed and destroyed in a practical manner. Such an identity ecosystem must support digital identities which maintain the privacy of the human person associated with the identity, and allow the human person to personalize their identity according to their needs.”

       The Jericho Forum Identity Commandments (collaboration.opengroup.org) “define the principles that must be observed when planning an identity eco-system. Whilst building on “good practice”, these commandments specifically address those areas that will allow “identity” processes to operate on a global, de-perimeterised scale; this necessitates open and interoperable standards and a commitment to implement such standards by both identity providers and identity consumers.”

       Access governance: Identity management gets down to business; NetIQ integrates former Novell IDM tools (securitybistro.com)

       OIX Open Identity Exchange “Building trust in online identity”

       How to steal a facebook identity (blog.mostof.it)

       AWS Identity and Access Management (IAM) (Amazon Web Services)

       xID In accordance with The Standards of LIFE for Information, the xID specification uses a distributed storage model that allows data to be held in separated silos that are as close to the people they serve as is practical, given the security requirements. It also specifies the nature of a transaction between trusted and untrusted systems that returns verification results without exposing or compromising the contents of the identity record.

The xID system is concerned solely with identity, and does not store any other data than the xID records. Related data, such as medical records or legal records, are stored separately, and include xID certificate references.

       Windows Identity Foundation (WIF) is a Microsoft framework for building identity-aware applications. It provides APIs for building ASP.NET or WCF based security token services as well as tools for building claims-aware and federation capable applications.

       Identity, Persistence, and the Ship of Theseus (and Reddit comments). Clojure, Working Models and Identity: While some programs are merely large functions, e.g. compilers or theorem provers, many others are not – they are more like working models, and as such need to support what I'll refer to in this discussion as identity. By identity I mean a stable logical entity associated with a series of different values over time. Models need identity for the same reasons humans need identity – to represent the world. How could it work if identities like ‘today' or ‘America' had to represent a single constant value for all time? Note that by identities I don't mean names (I call my mother Mom, but you wouldn't). So, for this discussion, an identity is an entity that has a state, which is its value at a point in time. And a value is something that doesn't change. 42 doesn't change. June 29th 2008 doesn't change. Points don't move, dates don't change, no matter what some bad class libraries may cause you to believe. Even aggregates are values. The set of my favorite foods doesn't change, i.e. if I prefer different foods in the future, that will be a different set. Identities are mental tools we use to superimpose continuity on a world which is constantly, functionally, creating new values of itself.

       Connect.me Connect.Me is the first P2P reputation and discovery network that works across Facebook, Twitter, LinkedIn, and other providers. As you build your reputation, you can use it to curate the social web by vouching for the people, content and businesses you trust. Connect.Me is not just a new app, it’s the beginning of a larger movement to put people back in control of the social web.

       Netention is a tool for describing one's current life situation (“is”), and potential future situations (“will be”) – as linked data objects. A semantic “story” of human life consists of statements detailing the aspects about which an individual is concerned or interested. Netention collects a community of peoples' stories, and interlinks them with automatically discovered opportunities that are mutually inter-satisfying – essentially suggesting to its participants how they could realize the desired futures they have described. Mailing list: http://www.automenta.com/global-survival-group

LAWS OF IDENTITY IN BRIEF

1. User Control and Consent

Digital identity systems must only reveal information identifying a user with the user’s consent.

2. Limited Disclosure for Limited Use

The solution which discloses the least identifying information and best limits its use is the most stable, long-term solution.

3. The Law of Fewest Parties

Digital identity systems must limit disclosure of identifying information to parties having a necessary and justifiable place in a given identity relationship.

4. Directed Identity

A universal identity metasystem must support both “omnidirectional” identifiers for use by public entities and “unidirectional” identifiers for private entities, thus facilitating discovery while preventing unnecessary release of correlation handles.

5. Pluralism of Operators and Technologies

A universal identity metasystem must channel and enable the interworking of multiple identity technologies run by multiple identity providers.

6. Human Integration

A unifying identity metasystem must define the human user as a component integrated through protected and unambiguous human-machine communications.

7. Consistent Experience Across Contexts

A unifying identity metasystem must provide a simple consistent experience while enabling separation of contexts through multiple operators and technologies.

III. PeerPoint Requirements: Security and Anonymity

TrustCloud: A Framework for Accountability and Trust in Cloud Computing (hp.com)

CryptoParty Handbook: This 392-page, Creative Commons licensed handbook is designed to help those with no prior experience to protect their basic human right to privacy in networked, digital domains. By covering a broad array of topics and use contexts, it is written to help anyone wishing to understand and then quickly mitigate many kinds of vulnerability using free, open-source tools.

IV. PeerPoint Requirements: semantic web ontology

V. PeerPoint Requirements: system library

VI. Library of P2P Middleware and APIs

VII. PeerPoint Requirements: distributed data store

misc: Storage Quota Management API (W3C)

VIII. Trust/reputation Metrics

PeerPoint Requirements (a toy sketch combining these signals follows the list):

– user ratings/reports of peer nodes

– white/black lists of peers (by individuals, groups, communities, institutions, etc)

– hierarchical ID/trust certificate authorities (groups, communities, trusted institutions, states, etc.)

– A heuristic method for predicting trustworthiness of a potential peer (“You may like these peer nodes…”)
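A toy sketch of how those signals might combine; the thresholds, weights and peer names are arbitrary placeholders, not a proposed PeerPoint algorithm:

    from statistics import mean

    ratings = {             # peer id -> ratings received from other peers (1-5)
        "peer-a": [5, 4, 5],
        "peer-b": [2, 1, 3],
    }
    whitelist = {"peer-c"}  # always trusted (e.g. a known institution)
    blacklist = {"peer-b"}  # never trusted

    def trust(peer_id):
        """Return a trust score in [0, 1]; lists trump computed ratings."""
        if peer_id in blacklist:
            return 0.0
        if peer_id in whitelist:
            return 1.0
        scores = ratings.get(peer_id)
        if not scores:
            return 0.5                   # unknown peer: neutral prior
        return (mean(scores) - 1) / 4    # map the 1-5 scale onto 0-1

    for peer in ("peer-a", "peer-b", "peer-c", "peer-d"):
        print(peer, round(trust(peer), 2))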

“What we want is peers that are trusted by entities like ourselves, and/or have engaged in transactions that are beneficial to entities like ourselves, not those that allegedly trust entities that we trust and have allegedly engaged in transactions like those that we have engaged in.” (James, p2p-hackers@lists.zooko.com, p2p-hackers Digest, Vol 69, Issue 12)

“The dataset I wish to collect and make public doesn't say anything about what the transactions actually are. The optimizing factors are success rate and transfer rate. [The Slope One and Singular Value Decomposition] algorithms are typically employed for “recommendation systems” such as the one seen on Amazon, i.e. “based on your behavior we think you'll like products X, Y, and Z”, where recommendations are driven by a large corpus of user data. I am attempting to perform a similar calculation, except in this case it's “based on my behavior I think I'll like peers X, Y, and Z”, and the calculation is driven by a large corpus of peer interaction metadata the system collects and distributes by design.” (Tony, p2p-hackers@lists.zooko.com, p2p-hackers Digest, Vol 69, Issue 12)

Resources:

PeerTrust: In an open peer-to-peer information system, peers often have to interact with unfamiliar peers and need to manage the risk that is involved with the interactions. PeerTrust aims to develop a trust mechanism for such systems so that peers can quantify and compare the trustworthiness of other peers and perform trusted interactions based on their past interaction histories, without trusted third parties.

TrustCloud: A Framework for Accountability and Trust in Cloud Computing (hp.com)

Building Trust in P2P Marketplaces  (p2pfoundation.net) Transparency is key

On the Web, vast amounts of data are created every day. Most of the companies I examined in my thesis are looking for ways to make this data available and useful to users, for instance by calculating so-called “trust scores” with the help of algorithms. These scores, which are based on data from social networks and other sources (that provide things like damage reports, peer reviews and transaction history) are supposed to help strangers judge each other’s trustworthiness. This information facilitates and accelerates the process of building trust between strangers on the Web. Since you take your trust score with you whatever platform you are on, it encourages good behavior. A person who has worked hard to build up an online reputation will not want to jeopardize that. My research also showed that it is crucial for companies offering these systems to remain as transparent as possible about how their trust scores are derived. Since trust is complex and every platform requires different dimensions of trust, every person should be able to understand the score and decide themselves whether they want to trust a person or not. Being a good driver is very different, for example, from being a friendly and reliable CouchSurfing host.

Another issue with creating trust and identity systems in general is data privacy. The functioning of these trust systems heavily depends on the users’ willingness to give a third party their data in return for building their online reputation, and not everyone is willing to do that. Especially in countries outside the U.S. people seem to be reluctant to reveal their personal data to third parties. It’s thus crucial for companies working on trust systems to gain the trust of users and P2P platforms. As Simon Baumann, PR Spokesperson at the German ride-sharing company Carpooling.com noted, “the question is always, how trustworthy the trust system is.”

Breiifly, Breiifly Blog

Legit aims to be the Credit System of the Sharing Economy. We correlate data across marketplaces, creating a holistic picture of a user's reputation. Legit measures real, transaction-based accountability without relying on social network data. Overall behavior improves when everyone is held accountable. Real-time alerts keep you up to date so you can act before damage is caused. Plus, the good reputation that users build on other marketplaces empowers them on yours.

Slope One is a family of algorithms used for collaborative filtering, introduced in a 2005 paper by Daniel Lemire and Anna Maclachlan. Arguably, it is the simplest form of non-trivial item-based collaborative filtering based on ratings. Their simplicity makes it especially easy to implement them efficiently, while their accuracy is often on par with more complicated and computationally expensive algorithms. (Wikipedia)
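Since Slope One is named above as a candidate for the heuristic “peers you may like” requirement, here is a minimal weighted Slope One sketch recast in peer-rating terms; the ratings are invented:

    from collections import defaultdict

    ratings = {                        # rater -> {peer: score}
        "u1": {"peer-a": 5, "peer-b": 3, "peer-c": 2},
        "u2": {"peer-a": 3, "peer-b": 4},
        "u3": {"peer-b": 2, "peer-c": 5},
    }

    # Average pairwise deviations dev[i][j] = mean(r_i - r_j) over co-raters.
    dev = defaultdict(lambda: defaultdict(float))
    freq = defaultdict(lambda: defaultdict(int))
    for user_ratings in ratings.values():
        for i, ri in user_ratings.items():
            for j, rj in user_ratings.items():
                if i != j:
                    dev[i][j] += ri - rj
                    freq[i][j] += 1
    for i in dev:
        for j in dev[i]:
            dev[i][j] /= freq[i][j]

    def predict(user, target):
        """Weighted Slope One prediction of user's rating for target."""
        num = den = 0.0
        for j, rj in ratings[user].items():
            if j != target and freq[target][j]:
                num += (dev[target][j] + rj) * freq[target][j]
                den += freq[target][j]
        return num / den if den else 0.0

    print(predict("u2", "peer-c"))     # u2 has never rated peer-c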

IX. Asynchronous Communication

X.  Real-time Communication

————————-


PeerPoint Comment/Discussion

       The Next Net: PeerPoint Discussion. Join the discussion!

       Free Network Foundation Forum > Networking > The Outernet (a related topic)

Excerpts from PeerPoint discussions:

       Nathan comments: If you take a bit of paper and draw a tiny fraction of the web on it, a bunch of interconnected dots, nodes and arcs, then have a good ol' stare at it, you'll quickly recognise every architecture you can conceive in there: centralization, decentralization, neighbourhoods, client-server relations, hubs, p2p networks – clients can only make requests, servers can receive requests, peers can do both.

If you stick a uniform interface (e.g. HTTP) in front of anything (data store, web service, data transformation service) and address those things using uniform names (e.g. URIs), and have them communicate using uniform media types (e.g. RDF, CSV, JSON w/ schemas), then the boundaries are broken down, and universality and generality prevail.

The web can be seen as a bunch of interconnected agents, communicating for one reason or another – it exactly models the same connections we have in the real world; it's a social system – the human world is just a bunch of interconnected agents communicating for one reason or another.

IMHO, the PeerPoint document describes the web. It begins to capture what's possible when you realise you can couple a server and client together, and more importantly it has the right social and ethical reasons behind it.

Turing discovered that if you standardize the input into a machine, you don't have to break it down and rebuild it every time you want to do a new task. Perhaps now people are realizing that we don't have to break down and rebuild our apps every time we want them to do a new task; all we have to do is standardize the input and output.

Imagine what would be possible with a standardized WebDAV-like protocol for uniform data (RDF/linked data in various forms), and a standardized API for using that data – pretty much everything. Especially when you consider that 95%+ of what every web developer and programmer ever does is just data transformation; take that out of the equation and you have a world of developers with 95% of their time free to innovate, create and discover. (more…)
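Nathan's uniform-interface point fits in a few dozen lines: one resource under one URI, with the representation chosen by content negotiation. A sketch using only the Python standard library and made-up data, not anything from an actual PeerPoint codebase:

    import csv, io, json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    PEERS = [{"name": "alice", "role": "peer"}, {"name": "bob", "role": "peer"}]

    class UniformHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path != "/peers":          # one uniform name for the data
                self.send_error(404)
                return
            # Uniform media types: the same data, negotiated per request.
            if "text/csv" in self.headers.get("Accept", ""):
                buf = io.StringIO()
                writer = csv.DictWriter(buf, fieldnames=["name", "role"])
                writer.writeheader()
                writer.writerows(PEERS)
                body, ctype = buf.getvalue().encode(), "text/csv"
            else:
                body, ctype = json.dumps(PEERS).encode(), "application/json"
            self.send_response(200)
            self.send_header("Content-Type", ctype)
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8001), UniformHandler).serve_forever()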

       Nathan comments: OpenLink Data Sources (ODS) is layered on top of Virtuoso… Each module is not only already packaged with existing UIs, but due to its heritage, each module is also available via SOAP and REST, meaning you can build your own applications and UIs over the top of it – as browser apps, on client, server or on peers. IMHO ODS-Briefcase is one of the most wonderful modules available for it; it's basically a really nice RESTful WebDAV-enabled data store package, with full support for multiple auth* protocols right up to WebID, and which recognises different data types. For instance it allows RDF that's been PUT/POSTed to be sponged straight in to the very powerful SPARQL-enabled triple store running behind the scenes. E.g. it understands your data and serves as both a CRUD store, and a more advanced store which you can query extremely fast, using very powerful query languages like SPARQL.

data.fm is “the other project” which is truly way ahead of the field at the minute, it's a RESTful, multi-auth* enabled store which supports querying, CRUD, automatic media type transformation, data browsers and even tabulator panes to view data. It's also open source and you can run your own instances very easily. Highly highly recommended.

Tabulator is also worth mentioning here; it's one of TimBL’s long-running code-based projects and is simply wonderful too – very well designed, and extensible in every way – Tim of course also understands data inside out, and the webizing of systems.

The three projects above are very much complementary, all interlinked; Kingsley (OpenLink) knows TimBL (tabulator/wem/semweb) knows Joe (creator of data.fm, from Tim's team at MIT). It may be fair to say that each of the projects wouldn't be quite what they are today without the presence of the others.

IMHO, the most valuable thing anybody in this group can do is to take the time to fully understand:

1) Virtuoso+ODS and Kingsley's blog posts

2) Tabulator + TimBLs Design Issues

3) Data.fm and its correlations to 1+2

Those three can be seen as the reference implementations of the next generation of the web, one which can easily be P2P too, and which continues to be built, standardized and innovated around. (more…)

       Fabio C comments: What's the center, the heart, the core of the “revolutionary” P2P technology that's being sought? Is it how it addresses? Is it how it routes? Is it how it executes remote code? Is it administration-free? Is it replicated, resilient? Is it administratively decentralized? Is the network decentralized? Is it low latency? Is it dependable? Can we have secrecy, authenticity and all that? Does it scale? Is it “cheap” to produce? Can it be produced in a decentralized fashion with 3D printers? Does it protect against free-riding? Is it anonymous? Etc. — I think all of these questions are secondary. They are all part of the final answer, but what's the central question? I don't think any of these is illustrative of the central question, the central issue…(more…)

       Sepp comments: Clearly, p2p needs a tech infrastructure that isn't available today. You can point to all the bits and pieces and to usenet as long as you want, you still don't have a workable system. What Poor Richard is advocating is to take those pieces and sew them together into a useful, workable, user-friendly suite of applications that can run on a real decentralized network, one where the edge is king, where our own computers are the powerhouse. I wonder why… [some don’t] see and appreciate that vision. Perhaps a case of having worked on bits and pieces for so long that it seems there is nothing else to do but continue doing those bits and pieces and hoping that somehow, by some miracle, they meld into something useful…(more)

       Paul H comments: Wikipedia and Linux are two prime examples of substantive creations greater than Poor Richard's proposal that were done almost entirely by volunteers with no desire for compensation other than the joy of creating something awesome. I believe the time is ripe for PeerPoint or something equivalent. In fact it's long overdue. Because of the growing dissatisfaction with Facebook, and efforts to control and censor the net, there are more people wanting, and willing to build something like this than ever before.

       Sepp comments: I believe the positive point to sell this could be that we're able, with PeerPoint, to make our own space in which to talk, make plans, tell friends about what's happening. No longer do we have to do these things in the presence and under the watchful eyes of the corporations and the government. It is like having a house. We'll have our own space where we can work, communicate, entertain friends and interact with family – all on line. That freedom does not exist today. We're always going through a provider, or a social networking site, or a search engine, all of them seeking to profit from our transit or our stay in their territory and of course any email and phone call is open to being collected and analyzed by government and other intelligence agencies. PeerPoint can be OUR space on line.

       Fabio C comments: PeerPoint is geared towards “Occupying” the Internet. This statement is clear: there's a desire to capture the “magic” of the Occupy movement, the deeper, quality stab at it that it achieved, whatever it actually is, and contribute to it. You will feel that connection when you use and/or help develop this system. It will feel that the better world that seems concretely (and joyously) closer with Occupy will also seem so when contributing to that other form of Occupy that is PeerPoint.  And since Occupy was framed at times as something ill-defined, difficult to describe, aimless and purposeless — a key indicator it's probably interesting and worth digging deeper — that nevertheless attracted throngs of vibrant people (purposeless? right…) and then proceeded to show patterns that reflect things of longing in my deeper self — more positive signs — I perhaps hoped it was possible to translate to both the pattern of participation in the “creation” of something like PeerPoint, whatever actual roles end up being there. That is, there's some substrate of equality and shared purpose that underlies it, a hope that we'll see each other using different lenses, like in Occupy.

       Paul H comments: With the right protocols, and an evolving open-sourced platform, I could see PeerPoint eclipsing what Facebook does now, and offering a lot more freedom and functionality than what is now available, not to mention privacy that is totally controlled at the user end. There could be everything from the most open bazaars to private/personal networks for just your friends. Cryptography should be built in from the start. Why is it, after all this time, that PGP and every consumer-level cryptographic program remain so damn hard to set up? Again I'm no expert, but I understand it well enough to know that it does not have to be hard. Two seniors, with little computer experience and on different sides of the country, should be able to start communicating privately with less than a minute of easy-to-walk-through setup.

       Mark R comments: Given the reasons for building a new, peer-to-peer internet in the first place, it would be reasonable to offer choice as to whether to request that someone else keep a mirror of your transactions, and that the choice should be granular; for instance, I do want my health records and some other records kept, but if Mitt and Rove are in charge of the country, I don't want my political activities or beliefs available to them, so I might gladly join such a system and toggle politics off the store list.

Resources

       PeerPoint Candidate Software Components (this comparison matrix is very incomplete–please contribute to populating it)

       PeerPoint on GitHub

       PeerPoint (This Google Doc)

       PeerPoint (This document as a web page in HTML)

       Freedom In the Cloud: Software Freedom, Privacy, and Security for Web 2.0 and Cloud Computing, transcript of a speech given by Eben Moglen at a meeting of the Internet Society's New York branch on Feb 5, 2010 (softwarefreedom.org)

       Sharecropping the long tail (roughtype.com) MySpace, Facebook, and many other businesses have realized that they can give away the tools of production but maintain ownership over the resulting products. One of the fundamental economic characteristics of Web 2.0 is the distribution of production into the hands of the many and the concentration of the economic rewards into the hands of the few. It’s a sharecropping system, but the sharecroppers are generally happy because their interest lies in self-expression or socializing, not in making money, and, besides, the economic value of each of their individual contributions is trivial. It’s only by aggregating those contributions on a massive scale – on a web scale – that the business becomes lucrative. To put it a different way, the sharecroppers operate happily in an attention economy while their overseers operate happily in a cash economy. In this view, the attention economy does not operate separately from the cash economy; it’s simply a means of creating cheap inputs for the cash economy.

       NSA whistleblower: They’re assembling information on every U.S. citizen (rawstory.com)

       Software That Lasts 200 Years by Dan Bricklin (bricklin.com) The structure and culture of a typical prepackaged software company is not attuned to the long-term needs of society for software that is part of its infrastructure. This essay discusses the ecosystem needed for development that better meets those needs.

       Brownfield software development is a term commonly used in the IT industry to describe problem spaces needing the development and deployment of new software systems in the immediate presence of existing (legacy) software applications/systems. This implies that any new software architecture must take into account and coexist with live software already in situ. … Brownfield development adds a number of improvements to conventional software engineering practices. These traditionally assume a “clean sheet of paper” or “green field” target environment throughout the design and implementation phases of software development. Brownfield extends such traditions by insisting that the context (local landscape) of the system being created be factored into any development exercise. This requires a detailed knowledge of the systems, services and data in the immediate vicinity of the solution under construction. (Wikipedia)

       Architectural Styles and the Design of Network-based Software Architectures by Roy Thomas Fielding “In order to identify those aspects of the Web that needed improvement and avoid undesirable modifications, a model for the modern Web architecture was needed to guide its design, definition, and deployment.”

       Organizing P2P organizations

       Collaborative Design Strategies for Community Technology   (Open Technology Institute)

       Life in Code and Software: Mediated Life in a Complex Computational Ecology (Issues regarding user experience, perspective, and requirements, both individually and socially)

       The New Toolkit  (futurestreetconsulting.com) — “Anthropologists have appropriated the word ‘toolkit’ to describe the suite of technologies that accompanies a particular grouping of humans….Hyperconnectivity, hyperdistribution, hyperintelligence and hyperempowerment have propelled human culture to the midst of a psychosocial phase transition, similar to a crystallization phase in a supersaturated solution, a ‘revolution’ making the agricultural, urban and industrial revolutions seem, in comparison, lazy and incomplete.  Twenty years ago none of this toolkit existed nor was even intimated.  Twenty years from now it will be pervasively and ubiquitously distributed, inextricably bound up in our self-definition as human beings.  We have always been the product of our relationships, and now our relationships are redefining us.”

       American ISPs to launch massive copyright spying scheme on July 1

       The Curious Case of Internet Privacy (MIT Technology Review) By Cory Doctorow, June 6, 2012. Free services in exchange for personal information. That's the “privacy bargain” we all strike on the Web. It could be the worst deal ever.

       What Digital Commoners Need To Do Michel Bauwens on the strategic phases in the construction of a peer to peer world

       Four Design Principles for True P2P Networks

       Ten Principles for an Autonomous Internet

       W3C Design Issues: Architectural and philosophical points, Tim Berners-Lee.

       Linked Data Tim Berners-Lee. The Semantic Web isn't just about putting data on the web. It is about making links, so that a person or machine can explore the web of data.  With linked data, when you have some of it, you can find other, related, data.

       Read-Write Linked Data, Tim Berners-Lee. There is an architecture in which a few existing or Web protocols are gathered together with some glue to make a world wide system in which applications (desktop or Web Application) can work on top of a layer of commodity read-write storage. The result is that storage becomes a commodity, independent of the application running on it.

       How Long Before VPNs Become Illegal? (torrentfreak.com)  

       Data Snatchers! The Booming Market for Your Online Identity (Computerworld)

       I Know What You Downloaded on BitTorrent…. (torrentfreak.com)

       Eben Moglen: Why Freedom of Thought Requires Free Media and Why Free Media Require Free Technology (video) http://youtu.be/sKOk4Y4inVY

       What Facebook Knows The company's social scientists are hunting for insights about human behavior. What they find could give Facebook new ways to cash in on our data—and remake our view of society.

       Consent of the Networked: The Worldwide Struggle For Internet Freedom (Amazon.com) The Internet was going to liberate us, but in truth it has not. For every story about the web’s empowering role in events such as the Arab Spring, there are many more about the quiet corrosion of civil liberties by companies and governments using the same digital technologies we have come to depend upon. Sudden changes in Facebook’s features and privacy settings have exposed identities of protestors to police in Egypt and Iran. Apple removes politically controversial apps at the behest of governments as well as for its own commercial reasons. Dozens of Western companies sell surveillance technology to dictatorships around the world. Google struggles with censorship demands from governments in a range of countries—many of them democracies—as well as mounting public concern over the vast quantities of information it collects about its users. In Consent of the Networked, journalist and Internet policy specialist Rebecca MacKinnon argues that it is time to fight for our rights before they are sold, legislated, programmed, and engineered away. Every day, the corporate sovereigns of cyberspace make decisions that affect our physical freedom—but without our consent. Yet the traditional solution to unaccountable corporate behavior—government regulation—cannot stop the abuse of digital power on its own, and sometimes even contributes to it. A clarion call to action, Consent of the Networked shows that it is time to stop arguing over whether the Internet empowers people, and address the urgent question of how technology should be governed to support the rights and liberties of users around the world.

       Social Media, Inc.: The Global Politics of Big Data

       How connective tech boosts political change (CNN Intl.)

       Video: The case for p2p in under 2 minutes (Opera Unite promo)

       Video: The Gettysburg Address (actually very relevant)

       WebRTC: Real-time Audio/Video and P2P in HTML5 (video) WebRTC brings webcam access, p2p, and rich audio/video communication capabilities to the browser. In this talk, we'll give an overview of the WebRTC technologies available today, show how to build WebRTC apps, and discuss the potential this technology adds to the Web Platform.

       Disaster Preparedness

Facebook is eating the world, except for China and Russia: World map of social networks http://tnw.to/b06w

Organizations

       Foundation for Peer to Peer Alternatives (also known as the P2P Foundation) is an organization with the aim of studying the impact of peer to peer technology and thought on society. It was founded by Michel Bauwens and coordinated by Franco Iacomella. (Wikipedia) It maintains an encyclopedic wiki on every p2p topic imaginable, publishes a blog and curates numerous web sites, newsletters and news feeds.

       Wikimedia Foundation, Inc. A non-profit that operates online collaborative wiki projects including Wikipedia, Wiktionary, Wikiquote, Wikibooks, Wikisource, Wikimedia Commons, Wikispecies, Wikinews, Wikiversity, Wikimedia Incubator, and Meta-Wiki.

       “Imagine a world in which every single human being can freely share in the sum of all knowledge. That's our commitment. And we need your help. The Wikimedia Foundation, Inc. is a nonprofit charitable organization dedicated to encouraging the growth, development and distribution of free, multilingual content, and to providing the full content of these wiki-based projects to the public free of charge. The Wikimedia Foundation operates some of the largest collaboratively edited reference projects in the world, including Wikipedia, a top-ten internet property.” (Wikimedia Foundation)

       “Wikipedia is the #5 site on the web and serves 482 million different people every month – with billions of page views. Google might have close to a million servers. Yahoo has something like 13,000 staff. We have 679 servers and 131 employees.” (Jimmy Wales, Wikipedia Founder)

       Linux Foundation A non-profit technology consortium chartered to foster the growth of Linux. Founded in 2007 by the merger of the Open Source Development Labs (OSDL) and the Free Standards Group (FSG), the Linux Foundation sponsors the work of Linux creator Linus Torvalds and is supported by leading Linux and open source companies and developers from around the world. The Linux Foundation promotes, protects, and standardizes Linux “by providing a comprehensive set of services to compete effectively with closed platforms”. (Wikipedia)

       KDE is “an international free software community producing an integrated set of cross-platform applications designed to run on Linux, FreeBSD, Microsoft Windows, Solaris and Mac OS X systems. It is known for its Plasma Desktop, a desktop environment provided as the default working environment on many Linux distributions… The goal of the community is to provide basic desktop functions and applications for daily needs as well as tools and documentation for developers to write stand-alone applications for the system. In this regard, the KDE project serves as an umbrella project for many standalone applications and smaller projects that are based on KDE technology. These include Calligra Suite, digiKam, Rekonq, K3b and many others. KDE software is based on the Qt framework.” (Wikipedia)

       FreedomBox Foundation The project currently describes a FreedomBox as “a personal server running a free software operating system, with free applications designed to create and preserve personal privacy.” The developers aim to create and preserve personal privacy by providing a secure platform for building federated social networks. This shall be done by creating a software stack that can run on plug computers that can easily be located in individual residences or offices. By promoting a decentralized deployment of hardware, the project hopes that FreedomBoxes will “provide privacy in normal life, and safe communications for people seeking to preserve their freedom in oppressive regimes.” FreedomBox is p2p but the current spec includes only a few end-user applications like email. To subscribe to the FreedomBox Discussion List visit http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/freedombox-discuss

       The Free Network Foundation is an educational and outreach organization. We are looking to enable operators all over the world to have a standardized software/hardware stack for operating their local portion of the global FreeNetwork. (Current Projects)

       Free Software Foundation advocates for free software ideals as outlined in the Free Software Definition, works for adoption of free software and free media formats, and organizes activist campaigns against threats to user freedom like Windows 7, Apple's iPhone and OS X, DRM on music, ebooks and movies, and software patents. We promote completely free software distributions of GNU/Linux, and advocate that users of the GNU/Linux operating system switch to a distribution which respects their freedom. We drive development of the GNU operating system and maintain a list of high-priority free software projects to promote replacements for common proprietary applications. We build and update resources useful for the free software community like the Free Software Directory, and the free software jobs board. We also provide licenses for free software developers to share their code, including the GNU General Public License.

       Apache Software Foundation (ASF) is a non-profit corporation to support Apache software projects, including the Apache HTTP Server. The ASF is a decentralized community of developers producing software distributed under the terms of the Apache License and is therefore free and open source software (FOSS). The Apache projects are characterized by a collaborative, consensus-based development process and an open and pragmatic software license. Each project is managed by a self-selected team of technical experts who are active contributors to the project. The ASF is a meritocracy, implying that membership to the foundation is granted only to volunteers who have actively contributed to Apache projects. The ASF is considered a second generation open-source organization. (Wikipedia)

       World Wide Web Consortium (W3C) is the main international standards organization for the World Wide Web (abbreviated WWW or W3). Founded by Tim Berners-Lee at MIT and currently headed by him, the consortium is made up of member organizations which maintain full-time staff for the purpose of working together in the development of standards for the World Wide Web. As of 29 March 2012, the World Wide Web Consortium (W3C) has 351 members. W3C also engages in education and outreach, develops software and serves as an open forum for discussion about the Web. (Wikipedia)

       W3C Read Write Web Community Group (RWW) — Focus on the Read-Write aspect of the WWW via use of the WebID protocol and ACLs. “Q2 of 2012 sees the Read Write Web with a few maturing social platforms, and some focus starting to shift to the challenges associated with building an application framework. Some great discussions on the scope of the group have yielded a list of topics added to the wiki … ”

       Semantic Web Education and Outreach Interest Group The W3C Semantic Web Education and Outreach (SWEO) Interest Group has been established to develop strategies and materials to increase awareness among the Web community of the need and benefit for the Semantic Web, and educate the Web community regarding related solutions and technologies.

       Internet Engineering Task Force (IETF) “develops and promotes Internet standards, cooperating closely with the W3C and ISO/IEC standards bodies and dealing in particular with standards of the TCP/IP and Internet protocol suite. It is an open standards organization, with no formal membership or membership requirements. All participants and managers are volunteers, though their work is usually funded by their employers or sponsors; for instance, the current chairperson is funded by VeriSign and the U.S. government's National Security Agency.” (Wikipedia)

       Object Management Group (OMG) is a consortium, originally aimed at setting standards for distributed object-oriented systems, and is now focused on modeling (programs, systems and business processes) and model-based standards. Products include the Common Object Request Broker Architecture (CORBA) standard and Data Distribution Service for real-time systems (DDS), a specification for publish/subscribe middleware for distributed systems. (Wikipedia)

       Electronic Frontier Foundation (EFF) is an international non-profit digital rights advocacy and legal organization based in the US. Its mission is to:

       Engage in and support educational activities which increase popular understanding of the opportunities and challenges posed by developments in computing and telecommunications.

       Develop among policy-makers a better understanding of the issues underlying free and open telecommunications, and support the creation of legal and structural approaches which will ease the assimilation of these new technologies by society.

       Raise public awareness about civil liberties issues arising from the rapid advancement in the area of new computer-based communications media.

       Support litigation in the public interest to preserve, protect, and extend First Amendment rights within the realm of computing and telecommunications technology.

       Encourage and support the development of new tools which will endow non-technical users with full and easy access to computer-based telecommunications. (Wikipedia)

       semanticweb.org The Semantic Web is the extension of the World Wide Web that enables people to share content beyond the boundaries of applications and websites. It has been described in rather different ways: as a utopian vision, as a web of data, or merely as a natural paradigm shift in our daily use of the Web. Most of all, the Semantic Web has inspired and engaged many people to create innovative semantic technologies and applications. semanticweb.org is the common platform for this community. You can extend semanticweb.org. Make sure that your favourite semantic tool, event, or ontology is here!

       OpenLink Software, Inc. develops and deploys standards-compliant middleware products. OpenLink Software is creator and owner of the Universal Data Access drivers suite (comprising OpenLink ODBC Drivers, OpenLink JDBC Drivers, OpenLink OLE-DB Providers, OpenLink ADO.NET Providers, and OpenLink XMLA Providers); the Virtuoso Universal Server; the iODBC driver manager; the OpenLink AJAX Toolkit for RIA development; OpenLink Data Spaces; and other leading-edge middleware products.

       Unlike Us (Institute of Network Cultures) The aim of Unlike Us is to establish a research network of artists, designers, scholars, activists and programmers who work on ‘alternatives in social media’.

       Social Swarm is an open think tank initiated by the German privacy and digital rights NGO FoeBuD.

       Weightless SIG White space spectrum provides the scope to realise tens of billions of connected devices worldwide, overcoming the traditional problems associated with current wireless standards: capacity, cost, power consumption and coverage. The forecast demand for this connectivity simply cannot be accommodated by existing technologies, and this is stifling the potential offered by the machine-to-machine (M2M) market. To reach this potential a new standard is required, and that standard is called Weightless.

       The Co-Intelligence Institute works to further the understanding and development of co-intelligence. It focuses on catalyzing co-intelligence in the realms of politics, governance, economics and conscious evolution of ourselves and our social systems. We research, network, advocate, and help organize leading-edge experiments and conversations in order to weave what is possible into new, wiser forms of civilization.

       Open Technology Institute The Open Technology Institute formulates policy and regulatory reforms to support open architectures and open source innovations and facilitates the development and implementation of open technologies and communications networks. OTI promotes affordable, universal, and ubiquitous communications networks through partnerships with communities, researchers, industry, and public interest groups and is committed to maximizing the potential of innovative open technologies by studying their social and economic impacts – particularly for poor, rural, and other underserved constituencies. OTI provides in-depth, objective research, analysis, and findings for policy decision-makers and the general public.

       Open Knowledge Foundation We promote open knowledge because of its potential to transform the world for the better. Whether you’re an organisation seeking software solutions for making your data more openly available or you want some advice on what licenses you should apply to your data, we can help.

       Comunes is a non-profit collective dedicated to facilitating the use of free/libre web tools and resources to collectives and activists alike, with the hopes of encouraging the Commons. The Manifesto explains our approach. Projects include:

       Ourproject.org is a web-based collaborative free content repository. It acts as a central location for offering web space and tools (hosting, mailing lists, wiki, ftp, forums…) for projects of any topic. Active since 2002, nowadays it hosts 1,000 projects and its services receive around 1,000,000 monthly visits.

       Kune is a platform for encouraging collaboration, content sharing & free culture. It aims to improve, modernize and extend what Ourproject.org does, in an easier manner and with expanded features for community-building. It allows for the creation of online spaces of collaborative work, where organizations and individuals can build projects online, coordinate common agendas, set up virtual meetings and join people/orgs with similar interests. Check it out here!

       Move Commons (MC) is a simple web tool for initiatives, collectives and NGOs to declare and visualize the core principles they are committed to. The idea behind MC follows the same mechanics as Creative Commons tagging of cultural works, providing a user-friendly, bottom-up labeling system for each initiative with 4 meaningful icons and some keywords. It aims to boost the visibility and diffusion of such initiatives, building a network among related initiatives/collectives across the world and allowing mutual discovery. Thus, it can facilitate the climb to critical mass. In addition, newcomers can easily understand a collective's approach from its website, and/or discover collectives matching their field/location/interests with a simple search.

       Eclipse Foundation is a not-for-profit, member-supported corporation that acts as the steward of Eclipse, an open source community focused on “building an open development platform comprised of extensible frameworks, tools and runtimes for building, deploying and managing software across the lifecycle.” The most well-known of the Eclipse projects is the Eclipse platform, a multilanguage software development environment and IDE. The Eclipse Foundation's stated aim is to “cultivate both an open source community and an ecosystem of complementary products and services.” (Wikipedia)

       ProgrammableWeb (not p2p-specific) Find: APIs, Mashups, Code, and Coders. The latest on what's new and interesting with mashups, Web 2.0 APIs, and the Web as Platform. It's a directory, a news source, a reference guide, a community.

       Community Forge: a non-profit association that designs, develops and distributes free, open-source software for building communities with currencies.

      

       Agile Knowledge Engineering and Semantic Web (AKSW) A research group hosted by the University of Leipzig and the Institute for Applied Informatics (InfAI). It consists of the three subgroups Emergent Semantics, Machine Learning and Ontology Engineering, and Semantic Abstraction. AKSW has launched a number of high-impact R&D projects:

       Triplify tackles the chicken-and-egg problem of the Semantic Web by providing a building block for the “semantification” of Web applications.

       SoftWiki – distributed, end-user-centred Requirements Engineering for evolutionary software development

       OntoWiki is a Semantic Data Wiki as well as an Application Framework providing support for agile, distributed knowledge engineering scenarios.

       DBpedia is a community effort to extract structured information from Wikipedia and make this information available on the Web.

       DL-Learner tackles the problem of learning concepts / class expressions in Description Logics / OWL from examples.

       Project Danube This is an open-source project offering software for identity and personal data services on the Internet. The core of this project is an XDI-based Personal Data Store – a semantic database for your personal data, which always remains under your control. Applications on top of this database include the Federated Social Web, the selective sharing of personal data with organizations, and experimental peer-to-peer communication architectures. The efforts of this project reflect ongoing discourse about political and social questions about anonymity vs. veronymity, centralization vs. decentralization, and the appropriate handling of personal data online.

       For the XDI² (XDI-Squared) library, see here. Or try the XDI web tools here.

       The Personal Identity Consortium was founded in 2010 by Kaliya “Identity Woman” Hamlin to catalyze a thriving personal data ecosystem. We are:

       connecting the entrepreneurs building new businesses around user-centric personal data;

       advocating for individuals having the tools and rights to access and manage their own data; and

       helping business sectors that depended on and made money in the old personal data ecosystem to transform their practices to make money in the new one.

       The Startup Circle was founded in June of 2011. This community’s mission is connecting startups, particularly personal data startups. We are focused on proactively supporting the development of shared understanding and shared language, which are critical precursors for high-performance collaboration.

       Industry Collaborative is in development  for technologists and business leaders from established companies in banking,  telecom, cable, web, advertising, finance, device manufacturing, media and other industries seeking to understand opportunities, launch pilot projects and ultimately offer services in the ecosystem. Companies in these industries currently engage by subscribing to the Personal Data Journal.

       Vision Development & Industry Outreach We hold true to one core non-negotiable: People are ultimately in control of the sum of their data.  There is a huge diversity of services and business models that can thrive in this ecosystem. We are focused on supporting these interoperable ecosystem visions becoming reality and communicating them to neighboring industries.

       Standards Engagement and Development To succeed, an effective personal data ecosystem needs to use open standards to allow many different services to interoperate. We track developments in many open standards efforts and are proactively engaged in several standards technical committees. We report on our activity in the Personal Data Journal.

       Foundation for a Free Information Infrastructure The FFII is a global network of associations dedicated to information about free and competitive software markets, genuine open standards, and patent systems with fewer barriers to competition. FFII's contributions enabled the rejection of the EU software patent directive in July 2005, working closely with the European Parliament and many partners from industry and civil society. CNET awarded the FFII the Outstanding Contribution to Software Development prize for this work. FFII continues to defend your right to a free and competitive software market and informational freedom.

       TIO Libre is a community of corporate service providers and experts who share the goal of providing freedom and loyalty in Web-based outsourcing services. The TIO Libre name derives from the notion of Total Information Outsourcing. TIO consists of implementing the information system of an organisation using only Web-based services such as Web Mail, Web ERP, Web CRM, Web Marketing, Web Translation, etc. TIO is currently based on technologies such as Web 2.0, Enterprise 2.0, Cloud Computing, Software as a Service (SaaS), and Service Oriented Architecture (SOA). TIO accelerates the adoption of new business applications by SMEs and small organisations at reduced cost. The Libre term in “TIO Libre” refers to the notion of Freedom and Loyalty in business. Whenever a provider of TIO services uses technical or legal methods to prevent its clients from migrating to another TIO provider, clients are no longer Free. Whenever a provider of TIO services takes the data of its clients and provides it to government agencies or to competitors, the provider is no longer Loyal.

       Association for Progressive Communications APC’s mission is to empower and support organisations, social movements and individuals in and through the use of information and communication technologies (ICTs) to build strategic communities and initiatives for the purpose of making meaningful contributions to equitable human development, social justice, participatory political processes and environmental sustainability.

       ReadWriteWeb blog

       emergent by design, a blog exploring the co-evolution of humanity and our technologies

       Ian Clarke's Locutus of Blog: the blog of a developer of Freenet and other p2p projects

       Symantec IT Whitepapers (not p2p-specific)

Related software, projects, and technologies

       Lists and comparisons of p2p infrastructure and software

       PeerPoint – Candidate Software Components (incomplete draft)

       List of anonymous P2P networks and clients (Wikipedia)

       http://p2pfoundation.net/Distributed_Social_Network_Projects

       http://p2pfoundation.net/Category:P2P_Infrastructure

       http://p2pfoundation.net/Category:NextNet

       http://p2pfoundation.net/Category:Autonomous_Internet

       http://p2pfoundation.net/Category:Standards

       Social Swarm software evaluations

       https://gitorious.org/social/pages/ProjectComparison

       GNU Social/Project Comparison

       http://we-need-a-free-and-open-social-network.wikispaces.com/Distributed+Social+Network+Projects

       https://en.wikipedia.org/wiki/Distributed_social_network

       not p2p-specific:

       List of free software web applications (Wikipedia)

      

       Free Software Directory

       Portal:Free software (Wikipedia)

       Technology Mashup Matrix  

       ProgrammableWeb (see the full description above)

       Comparison of software tools related to the Semantic Web or to semantic technologies in general

       List of ontologies (considered one of the pillars of the Semantic Web)

       Comparison of microblogging services and social network services that have status updates

       List of microblogging services

       List of Linux distributions

       List of formerly proprietary software

       List of free software project directories

       List of open source software packages

       List of trademarked open source software

       Ontology Part of the PeerPoint Open Design process is defining a vocabulary, an ontology, a taxonomy, or folksonomy for p2p application design. Wikipedia says: an ontology is a “formal, explicit specification of a shared conceptualisation”. An ontology renders shared vocabulary and taxonomy which models a domain with the definition of objects and/or concepts and their properties and relations. Ontologies are the structural frameworks for organizing information and are used in artificial intelligence, the Semantic Web, systems engineering, software engineering, biomedical informatics, library science, enterprise bookmarking, and information architecture as a form of knowledge representation about the world or some part of it. The creation of domain ontologies is also fundamental to the definition and use of an enterprise architecture framework.

       HTML5 is a markup language for structuring and presenting content for the World Wide Web, and is a core technology of the Internet originally proposed by Opera Software. It is the fifth revision of the HTML standard and, as of June 2012, is still under development. Its core aims have been to improve the language with support for the latest multimedia while keeping it easily readable by humans and consistently understood by computers and devices (web browsers, parsers, etc.). (Wikipedia)

       The Semantic Web Ontology for Requirements Engineering (SWORE) is an ontology that has been developed to describe a requirements model within the SoftWiki methodology. The SoftWiki methodology supports wiki-based, distributed, end-user-centered requirements engineering for evolutionary software development. The core of SWORE consists of classes that represent essential concepts of nearly every requirements engineering project. It supports the core concepts Requirement, Source, Stakeholder and Glossary. It is aligned with external vocabularies like DC-Terms, SIOC, FOAF, SKOS, DOAP and the tagging ontologies Tags and MUTO.

       Semantic Web (Web 3.0) is a collaborative movement led by the World Wide Web Consortium (W3C) that promotes common formats for data on the World Wide Web. By encouraging the inclusion of semantic content in web pages, the Semantic Web aims at converting the current web of unstructured documents into a “web of data”. It builds on the W3C's Resource Description Framework (RDF). According to the W3C, “The Semantic Web provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries.” (Wikipedia)

Semantic Web Stack:

       Web 3.0 architecture: (melvincarvalho.com)

       Application Architectures:

(Wikipedia) An application is a compilation of various functionalities all typically following the same pattern. Applications can be classified in various types depending on the Application Architecture Pattern they follow. A “pattern” has been defined as: “an idea that has been useful in one practical context and will probably be useful in others”. To create patterns, one needs building blocks. Building blocks are components of software, mostly reusable, which can be utilised to create certain functions. Patterns are a way of putting building blocks into context and describe how to use the building blocks to address one or multiple architectural concerns. Applications typically follow one of the following industry-standard application architecture patterns: [Note: peer-to-peer can mean client-to-client or server-to-server, and within one node it can include client-server, too. Multiple clients and/or servers can reside on a node and act as a team. Stand-alone or conventional free/open (non-p2p) client-side applications can potentially be modified to communicate with remote peers as well.]

       client/server is a computing model that acts as a distributed application which partitions tasks or workloads between the providers of a resource or service, called servers, and service requesters, called clients. Often clients and servers communicate over a computer network on separate hardware, but both client and server may reside in the same system. A server machine is a host that is running one or more server programs which share their resources with clients. A client does not share any of its resources, but requests a server's content or service function. Clients therefore initiate communication sessions with servers, which await incoming requests.

The client/server characteristic describes the relationship of cooperating programs in an application. The server component provides a function or service to one or many clients, which initiate requests for such services.
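
To make the pattern concrete, here is a minimal sketch of an echo service in TypeScript on Node.js; the port number and messages are invented for illustration. The server shares a resource (the echo function), and the client initiates the session.

    // Minimal client/server sketch (Node.js). Port and payload are illustrative.
    import * as net from "net";

    // Server: shares its resource (an echo service) with any client.
    const server = net.createServer((socket) => {
      socket.on("data", (data) => socket.write(data)); // answer each request
    });
    server.listen(9000);

    // Client: initiates the session and requests the service.
    const client = net.createConnection({ port: 9000 }, () => client.write("hello"));
    client.on("data", (data) => {
      console.log(data.toString()); // prints "hello"
      client.end();
      server.close();
    });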

       Collaboration [p2p]: Users working with one another to share data and information (a.k.a. user-to-user)

       Information Aggregation: Data from multiple sources aggregated and presented across multiple channels (a.k.a. user-to-data)

       Replicated Servers: Replicates servers to reduce burden on central server.

       Layered Architecture: A decomposition of services such that most interactions occur only between neighboring layers.

       Pipe and Filter Architecture: Transforms information in a series of incremental steps or processes.

       Subsystem Interface: Manages the dependencies between cohesive groups of functions (subsystems).

       Reactor: Decouples an event from its processing.

       Event-Centric: Data events (which may have initially originated from a device, application, user, data store or clock) and event detection logic which may conditionally discard the event, initiate an event-related process, alert a user or device manager, or update a data store.

       Enterprise Process-Centric: A business process manages the interactions between multiple intra-enterprise applications, services, sub-processes and users.

       Bulk Processing: A business process manages the interactions between one or more bulk data sources and targets.

       Extended Enterprise: A business process manages the interactions between multiple inter-enterprise applications, services, sub-processes and users.

       Model–View–Controller (MVC) is “a type of [software architecture]  that separates the representation of information from the user's interaction with it. The model consists of application data and business rules, and the controller mediates input, converting it to commands for the model or view. A view can be any output representation of data, such as a chart or a diagram. Multiple views of the same data are possible, such as a pie chart for management and a tabular view for accountants. In addition to dividing the application into three kinds of component, the MVC design defines the interactions between them.

       A controller can send commands to its associated view to change the view's presentation of the model (for example, by scrolling through a document). It can send commands to the model to update the model's state (e.g. editing a document).

       A model notifies its associated views and controllers when there has been a change in its state. This notification allows the views to produce updated output, and the controllers to change the available set of commands. A passive implementation of MVC omits these notifications, because the application does not require them or the software platform does not support them.

       A view requests from the model the information that it needs to generate an output representation.

With the responsibilities of each component thus defined, MVC allows different views and controllers to be developed for the same model. It also allows the creation of general-purpose software frameworks to manage the interactions.
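
The triangle described above is small enough to sketch directly. The following minimal TypeScript example uses illustrative names (it is not drawn from any particular framework): the controller converts input into a model command, and the model notifies its view, which re-renders from the model's state.

    // Minimal MVC sketch; all names are illustrative.
    type Listener = () => void;

    class Model {
      private value = 0;
      private listeners: Listener[] = [];
      subscribe(l: Listener) { this.listeners.push(l); }
      getValue(): number { return this.value; }
      setValue(v: number) {
        this.value = v;
        this.listeners.forEach((l) => l()); // model notifies its views
      }
    }

    class View {
      constructor(private model: Model) {
        model.subscribe(() => this.render()); // change notification triggers re-render
      }
      render() {
        console.log(`rendered value: ${this.model.getValue()}`); // view pulls from the model
      }
    }

    class Controller {
      constructor(private model: Model) {}
      handleInput(input: string) {
        this.model.setValue(Number(input)); // input converted to a model command
      }
    }

    const model = new Model();
    new View(model);
    new Controller(model).handleInput("42"); // prints "rendered value: 42"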

       Three-tier application architecture is a client–server architecture in which the user interface, functional process logic (“business rules”), computer data storage and data access are developed and maintained as independent modules, most often on separate platforms. Apart from the usual advantages of modular software with well-defined interfaces, the three-tier architecture is intended to allow any of the three tiers to be upgraded or replaced independently in response to changes in requirements or technology. Three-tier architecture has the following three tiers:

       Presentation tier: This is the topmost level of the application. The presentation tier displays information related to such services as browsing merchandise, purchasing, and shopping cart contents. It communicates with other tiers by outputting results to the browser/client tier and all other tiers in the network.

       Application tier (business logic, logic tier, data access tier, or middle tier) The logic tier is pulled out from the presentation tier and, as its own layer, it controls an application’s functionality by performing detailed processing. The middle tier may be multi-tiered itself (in which case the overall architecture is called an “n-tier architecture”).

       Data tier: consists of database servers where information is stored and retrieved. It keeps data neutral and independent from application servers or business logic. Giving data its own tier also improves scalability and performance.

Comparison with the MVC architecture: At first glance, the three tiers may seem similar to the model–view–controller (MVC) concept; however, topologically they are different. A fundamental rule in a three-tier architecture is that the client tier never communicates directly with the data tier; in a three-tier model all communication must pass through the middle tier. Conceptually the three-tier architecture is linear. However, the MVC architecture is triangular: the view sends updates to the controller, the controller updates the model, and the view gets updated directly from the model.
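
The linear rule is equally easy to sketch. In this hypothetical TypeScript fragment, the presentation tier calls only the logic tier, and only the logic tier touches the data tier:

    // Three-tier sketch; tier names and data are hypothetical.
    const prices: Record<string, number> = { "sku-1": 9.99 };

    const dataTier = {
      fetchPrice: (sku: string): number => prices[sku] ?? 0, // storage access only
    };

    const logicTier = {
      // the business rule lives in the middle tier
      priceWithTax: (sku: string): number => dataTier.fetchPrice(sku) * 1.2,
    };

    const presentationTier = {
      // never calls dataTier directly; all requests pass through logicTier
      show: (sku: string) =>
        console.log(`Total: ${logicTier.priceWithTax(sku).toFixed(2)}`),
    };

    presentationTier.show("sku-1"); // prints "Total: 11.99"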


       OpenLink Data Spaces (ODS) is a new-generation Distributed Collaborative Application platform for creating presence in the semantic web via Data Spaces derived from Weblogs, Wikis, Feed Aggregators, Photo Galleries, Shared Bookmarks, Discussion Forums and more. Data Spaces are a new database-management technology frontier that deals with the virtualization of heterogeneous data and data sources via a plethora of data-access protocols. As Unified Data Stores, Data Spaces also provide solid foundation for the creation, processing and dissemination of knowledge, making them a natural foundation platform for the emerging Data-Web (Semantic Web, Layer 1). Why are Data Spaces important? They provide a cost-effective route for generating Semantic Web Presence from Web 2.0 and traditional Web data-sources, by delivering an atomic data container for RDF Instance Data derived from data hosted in Blogs, Wikis, Shared Bookmark Services, Discussion Forums, Web File Servers, Photo Galleries, etc. Data Spaces enable direct and granular database-style interaction with Web Data.

nathan writes: “ODS is layered on top of Virtuoso. Each module is not only already packaged with existing UIs, but due to its heritage, each module is also available via SOAP and REST, meaning you can build your own applications and UIs over the top of it – as browser apps, on clients, servers or peers. IMHO ODS-Briefcase is one of the most wonderful modules available for it; it's basically a really nice RESTful WebDAV-enabled data store package, with full support for multiple auth* protocols right up to WebID, and which recognises different data types. For instance it allows RDF that's been PUT/POSTed to be sponged straight into the very powerful SPARQL-enabled triple store running behind the scenes. E.g. it understands your data and serves as both a CRUD store and a more advanced store which you can query extremely fast, using very powerful query languages like SPARQL.”


       OpenLink Virtuoso Universal Server is “a middleware and database engine hybrid that combines the functionality of a traditional RDBMS, ORDBMS, virtual database, RDF, XML, free-text, web application server and file server functionality in a single system. Rather than have dedicated servers for each of the aforementioned functionality realms, Virtuoso is a “universal server”; it enables a single multithreaded server process that implements multiple protocols. The open source edition of Virtuoso Universal Server is also known as OpenLink Virtuoso.” (Wikipedia)


       KDE Platform is a set of frameworks by KDE that serve as a technological foundation for all KDE applications. The Platform is released as a separate product in sync with KDE’s Plasma Workspaces as part of the KDE Software Compilation 4. While the Platform is mainly written in C++, it includes bindings for other programming languages.

       KDE Software Compilation 4 is based on Qt 4, which is also released under the GPL for Windows and Mac OS X. Therefore KDE SC 4 applications can be compiled and run natively on these operating systems as well. KDE SC 4 includes many new technologies and technical changes. The centerpiece is a redesigned desktop and panels collectively called Plasma, which replaces Kicker, KDesktop, and SuperKaramba by integrating their functionality into one piece of technology; Plasma is intended to be more configurable for those wanting to update the decades-old desktop metaphor. There are a number of new frameworks, including Phonon (a new multimedia interface making KDE independent of any one specific media backend), Solid (an API for network and portable devices), and Decibel (a new communication framework to integrate all communication protocols into the desktop). Also featured is a metadata and search framework, incorporating Strigi as a full-text file indexing service, and NEPOMUK with KDE integration.

       NEPOMUK (Networked Environment for Personal, Ontology-based Management of Unified Knowledge) is an open-source software specification that is concerned with the development of a social semantic desktop that enriches and interconnects data from different desktop applications using semantic metadata stored as RDF. Initially, it was developed in the NEPOMUK project and cost 17 million euros, of which 11.5 million was funded by the European Union. The Zeitgeist framework, used by GNOME and Ubuntu's Unity user interface, uses the NEPOMUK ontology, as does the Tracker search engine. The Java-based implementation of NEPOMUK was finished at the end of 2008 and served as a proof-of-concept environment for several novel semantic desktop techniques. It features its own frontend (PSEW) that integrates search, browsing, recommendation, and peer-to-peer functionality. The Java implementation uses the Sesame RDF store and the Aperture framework for integrating with other desktop applications such as mail clients and browsers. A number of artifacts have been created in the context of the Java research implementation: WikiModel

       NEPOMUK-KDE is featured as one of the newer technologies in KDE SC 4. It uses Soprano as the main RDF data storage and parsing library, while ontology imports are handled through the Raptor parser plugin and the Redland storage plugin, and all RDF data is stored in OpenLink Virtuoso, which also handles full-text indexing. On a technical level, NEPOMUK-KDE allows associating metadata to various items present on a normal user's desktop such as files, bookmarks, e-mails, and calendar entries. Metadata can be arbitrary RDF; as of KDE 4, tagging is the most user-visible metadata application.

       data.fm is “an open source PDS (personal data store) with a centralized underlying attribute store as well as an API to enable bi-directional attribute updates from external websites and services. The APIs are based on standards and include WebDAV, SPARQL and Linked Data. Data formats exchanged include RDF, XML, JSON.” (Wikipedia) melvincarvalho writes: “I should mention data.fm, which is developed at Tim Berners-Lee's lab at MIT. I run this locally as my ‘personal data store’ and it can handle 1 million hits a month no problem. (more…)” Nathan writes: “Yes definitely, data.fm is ‘the other project’ which is truly way ahead of the field at the minute; it's a RESTful, multi-auth*-enabled store which supports querying, CRUD, automatic media type transformation, data browsers and even tabulator panes to view data. It's also open source and you can run your own instances very easily. Highly highly recommended. (more…)”

       Tabulator is a generic data browser and editor. Using outline and table modes, it provides a way to browse RDF/Linked Data on the web. RDF is the standard for inter-application data exchange. It also contains a feature-rich RDF store written in JavaScript. Developed by Tim Berners-Lee and the MIT CSAIL DIG group. (Wikipedia) Nathan writes: “Tabulator is … one of TimBL’s long running code based projects and is simply wonderful too – very well designed, and extensible in every way – Tim of course also understands data inside out, and the webizing of systems. (more…)”

       RetroShare is free software for encrypted, serverless email, instant messaging, BBS and filesharing based on a friend-to-friend network built on GPG. It is not strictly a darknet since peers can optionally communicate certificates and IP addresses from and to their friends. After authentication and exchanging an asymmetric key, ssh is used to establish a connection. End-to-end encryption is done using OpenSSL. Friends of friends cannot connect by default, but they can see each other if the users allow it. Features include:

       File sharing and search: It is possible to share folders between friends. File transfer is carried out using a multi-hop swarming system. In essence, data is only exchanged between friends, although the ultimate source and destination of a given transfer are possibly multiple friends apart. A search function performing anonymous multi-hop search is another source of finding files in the network. Files are represented by their SHA-1 hash, and http-compliant file links can be exported, copied and pasted into/out of RetroShare to publish their virtual location on the RetroShare network.

       Communication: RetroShare offers several services to allow friends to communicate. A private chat and a private mailing system allow secure communication between known friends. A forum system allowing both anonymous and authenticated forums distributes posts from friends to friends. A channel system offers the possibility to auto-download files posted in a given channel to every subscribed peer.

       User interface: The core of the RetroShare software is based on an offline library, to which two executables are plugged: a command-line executable, that offers nearly no control, and a graphical user interface written in Qt4, which is the one most users would use. In addition to functions quite common to other file sharing software, such as a search tab and visualization of transfers, RetroShare gives users the possibility to manage their network by collecting optional information about neighbor friends and visualize it as a trust matrix or as a dynamic network graph.

       Anonymity: The friend-to-friend structure of the RetroShare network makes it difficult to intrude and hardly possible to monitor from an external point of view. The degree of anonymity can still be improved by deactivating the DHT and IP/certificate exchange services, making the Retroshare network a real Darknet. (Wikipedia)

melvincarvalho wrote: “One system I really like technically is RetroShare. It's open source, has first class developers, who really know their stuff, and an active, working community. One team has already ported libretroshare into a browser. Imagine realtime, secure, encrypted chat straight in your browser, plus a ton of other features. There's even a little chess game you can plug in to the framework so you can challenge your friends. Once you see this working it's a real paradigm shift that makes you think ‘why doesn't every browser do this?'”

       Friend of a Friend (FOAF) Wiki Friend of a friend (FOAF) is a decentralized social network using semantic web technology to describe persons and their relations in a machine readable way. The Friend of a friend vocabulary can also be used to describe groups, organisations and other things. Everybody can create a Friend of a friend profile describing himself and whom he knows. This profile can be published anywhere on the web. Many social networking websites publish the openly accessible information of their members with Friend of a friend. If you want to create a profile right away, you can use FOAF-a-Matic.
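
As an illustration, a minimal FOAF profile in Turtle might look like the following (the URIs and names are hypothetical):

    @prefix foaf: <http://xmlns.com/foaf/0.1/> .

    <http://example.org/alice#me>
        a foaf:Person ;
        foaf:name "Alice" ;
        foaf:homepage <http://example.org/alice> ;
        foaf:knows <http://example.org/bob#me> .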

       JSON-LD, or JavaScript Object Notation for Linked Data, is a method of transporting Linked Data using JSON. It has been designed to be as simple and concise as possible, while remaining human readable. Furthermore, it was a goal to require as little effort as possible from developers to transform their plain old JSON to semantically rich JSON-LD. Consequently, an entity-centric approach was followed (traditional Semantic Web technologies are usually triple-centric). This allows data to be serialized in a way that is often indistinguishable from traditional JSON. (Wikipedia)
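
For comparison, the same hypothetical FOAF data expressed entity-centrically in JSON-LD reads almost like ordinary JSON; the @context maps plain keys onto the FOAF vocabulary:

    {
      "@context": {
        "foaf": "http://xmlns.com/foaf/0.1/",
        "name": "foaf:name",
        "knows": { "@id": "foaf:knows", "@type": "@id" }
      },
      "@id": "http://example.org/alice#me",
      "@type": "foaf:Person",
      "name": "Alice",
      "knows": "http://example.org/bob#me"
    }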

       Turtle (Terse RDF Triple Language) is a serialization format for Resource Description Framework (RDF) graphs. A subset of Tim Berners-Lee and Dan Connolly's Notation3 (N3) language, it was defined by Dave Beckett, and is a superset of the minimal N-Triples format. Unlike full N3, Turtle doesn't go beyond RDF's graph model. SPARQL uses a similar N3 subset to Turtle for its graph patterns, but uses N3's brace syntax for delimiting subgraphs. Turtle is popular among Semantic Web developers as a human-friendly alternative to RDF/XML. A significant proportion of RDF toolkits include Turtle parsing and serializing capability. Some examples are Redland, Sesame, Jena and RDFLib.

       Notation3, or N3 as it is more commonly known, is a shorthand non-XML serialization of Resource Description Framework (RDF) models, designed with human-readability in mind: N3 is much more compact and readable than XML RDF notation. The format is being developed by Tim Berners-Lee and others from the Semantic Web community. N3 has several features that go beyond a serialization for RDF models, such as support for RDF-based rules. Turtle is a simplified, RDF-only subset of N3.
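
The subset relationships are easiest to see side by side. Here is the same hypothetical statement, first as a fully spelled-out N-Triples line, then abbreviated with a Turtle prefix; both forms are also valid N3:

    # N-Triples: one fully-qualified triple per line
    <http://example.org/alice#me> <http://xmlns.com/foaf/0.1/knows> <http://example.org/bob#me> .

    # Turtle: the same triple, abbreviated with a prefix
    @prefix foaf: <http://xmlns.com/foaf/0.1/> .
    <http://example.org/alice#me> foaf:knows <http://example.org/bob#me> .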

       Resource Description Framework (RDF) is a family of World Wide Web Consortium (W3C) specifications originally designed as a metadata data model. It has come to be used as a general method for conceptual description or modeling of information that is implemented in web resources, using a variety of syntax formats.

       RSS (originally RDF Site Summary, often dubbed Really Simple Syndication) is a family of web feed formats used to publish frequently updated works—such as blog entries, news headlines, audio, and video—in a standardized format. An RSS document (which is called a “feed”, “web feed”, or “channel”) includes full or summarized text, plus metadata such as publishing dates and authorship. RSS feeds benefit publishers by letting them syndicate content automatically. A standardized XML file format allows the information to be published once and viewed by many different programs. They benefit readers who want to subscribe to timely updates from favorite websites or to aggregate feeds from many sites into one place. RSS feeds can be read using software called an “RSS reader”, “feed reader”, or “aggregator”, which can be web-based, desktop-based, or mobile-device-based. The user subscribes to a feed by entering into the reader the feed's URI or by clicking a feed icon in a web browser that initiates the subscription process. The RSS reader checks the user's subscribed feeds regularly for new work, downloads any updates that it finds, and provides a user interface to monitor and read the feeds. RSS allows users to avoid manually inspecting all of the websites they are interested in, and instead subscribe to websites such that all new content is pushed onto their browsers when it becomes available. (Wikipedia)
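
A minimal RSS 2.0 document with hypothetical content shows the channel/item structure and per-item metadata described above:

    <?xml version="1.0" encoding="UTF-8"?>
    <rss version="2.0">
      <channel>
        <title>Example Blog</title>
        <link>http://example.org/blog</link>
        <description>Channel-level metadata describes the feed as a whole.</description>
        <item>
          <title>First post</title>
          <link>http://example.org/blog/first-post</link>
          <pubDate>Mon, 01 Oct 2012 12:00:00 GMT</pubDate>
          <description>Each item carries its own metadata.</description>
        </item>
      </channel>
    </rss>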

       FeedSync for Atom and RSS, previously Simple Sharing Extensions, are extensions to RSS and Atom feed formats designed to enable the synchronization of information by using a variety of data sources. It is licensed under the Creative Commons Attribution-ShareAlike License (version 2.5) and the Microsoft Open Specification Promise. The scope of FeedSync for Atom and RSS is to define the minimum extensions necessary to enable loosely-cooperating applications to use Atom and RSS feeds as the basis for item sharing – that is, the bi-directional, asynchronous synchronization of new and changed items amongst two or more cross-subscribed feeds. Note that while much of FeedSync is currently defined in terms of Atom and RSS feeds, at its core what FeedSync strictly requires is the following (sketched in code after the list):

       A flat collection of items to be synchronized

       A set of per-item sync metadata that is maintained at all endpoints

       A set of algorithms followed by all endpoints to create, update, merge, and conflict resolve all items
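
Those three requirements can be sketched in TypeScript as follows. The field names here are hypothetical, not the spec's actual element names; the point is that every endpoint keeps per-item metadata and applies the same deterministic merge rule, so replicas converge:

    // Hypothetical per-item sync metadata (not FeedSync's actual element names).
    interface SyncItem {
      id: string;        // stable identity shared by all endpoints
      updates: number;   // incremented on every local modification
      updatedBy: string; // endpoint that made the latest change (tie-breaker)
      payload: string;
    }

    // Deterministic merge: every endpoint applies the same rule.
    function merge(local: SyncItem, remote: SyncItem): SyncItem {
      if (remote.updates !== local.updates) {
        return remote.updates > local.updates ? remote : local;
      }
      return remote.updatedBy > local.updatedBy ? remote : local; // break ties consistently
    }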

       The Open Data Movement aims at making data freely available to everyone. There are already various interesting open data sets available on the Web. Examples include Wikipedia, Wikibooks, Geonames, etc. The goal of the W3C SWEO Linking Open Data community project is to extend the Web with a data commons by publishing various open data sets as RDF on the Web and by setting RDF links between data items from different data sources. RDF links enable you to navigate from a data item within one data source to related data items within other sources using a Semantic Web browser. RDF links can also be followed by the crawlers of Semantic Web search engines, which may provide sophisticated search and query capabilities over crawled data. As query results are structured data and not just links to HTML pages, they can be used within other applications.

       Web Data Commons Extracting Structured Data from the Common Web Crawl. More and more websites have started to embed structured data describing products, people, organizations, places, events into their HTML pages. The Web Data Commons project extracts this data from several billion web pages and provides the extracted data for download. Web Data Commons thus enables you to use the data without needing to crawl the Web yourself.

       Semantic MediaWiki (SMW) is a free, open-source extension to MediaWiki – the wiki software that powers Wikipedia – that lets you store and query data within the wiki's pages. Semantic MediaWiki is also a full-fledged framework, in conjunction with many spinoff extensions, that can turn a wiki into a powerful and flexible “collaborative database”. All data created within SMW can easily be published via the Semantic Web, allowing other systems to use this data seamlessly.

       OntoWiki is a free, open-source semantic wiki application, meant to serve as an ontology editor and a knowledge acquisition system. OntoWiki is form-based rather than syntax-based, and thus tries to hide as much of the complexity of knowledge representation formalisms from users as possible. In 2010 OntoWiki became part of the technology stack supporting the LOD2 (Linked Open Data) project. It enables intuitive authoring of semantic content, with an inline editing mode for editing RDF content, similar to WYSIWYG for text documents. (Wikipedia) OntoWiki demos:

       Distributed, End-user Centered Requirements Engineering for Evolutionary Software Development                                               

       The Semantic Web Ontology for Requirements Engineering (SWORE)

       DBpedia is a technology to extract structured information from Wikipedia and to make this information available on the Web. DBpedia allows you to ask sophisticated queries against Wikipedia, and to link other data sets on the Web to Wikipedia data. The DBpedia Ontology is a shallow, cross-domain ontology, which has been manually created based on the most commonly used infoboxes within Wikipedia. The ontology currently covers over 320 classes which form a subsumption hierarchy and are described by 1,650 different properties. With the DBpedia 3.5 release, we introduced a public wiki for writing infobox mappings, editing existing ones as well as editing the DBpedia ontology. This allows external contributors to define mappings for the infoboxes they are interested in and to extend the existing DBpedia ontology with additional classes and properties.
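
For example, a SPARQL query along the following lines, run against the public DBpedia endpoint, lists large cities by population. The query shape is standard SPARQL; treat the exact class and property names as indicative of the DBpedia ontology rather than guaranteed:

    PREFIX dbo: <http://dbpedia.org/ontology/>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

    SELECT ?city ?label ?population WHERE {
      ?city a dbo:City ;
            rdfs:label ?label ;
            dbo:populationTotal ?population .
      FILTER (lang(?label) = "en")
    }
    ORDER BY DESC(?population)
    LIMIT 10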

       The Product Types Ontology: High-precision identifiers for product types based on Wikipedia. (Creative Commons license)

       GoodRelations is a standardized vocabulary (also known as “schema”, “data dictionary”, or “ontology”) for product, price, store, and company data that can (1) be embedded into existing static and dynamic Web pages and that (2) can be processed by other computers. This increases the visibility of your products and services in the latest generation of search engines, recommender systems, and other novel applications. GoodRelations is now fully compatible with the HTML5 microdata specification and can be used as an e-commerce extension for the schema.org vocabulary. GoodRelations Snippet Generator: Create a GoodRelations markup snippet for copy-and-paste into your HTML

       VisualDataWeb This website provides an overview of our attempts to a more visual Data Web. The term Data Web refers to the evolution of a mainly document-centric Web toward a more data-oriented Web. In its narrow sense, the term describes pragmatic approaches of the Semantic Web, such as RDF and Linked Data. In a broader sense, it also includes less formal data structures, such as microformats, microdata, tagging, and folksonomies.

       The Data Hub is a community-run catalogue of useful sets of data on the Internet. You can collect links here to data from around the web for yourself and others to use, or search for data that others have collected. Depending on the type of data (and its conditions of use), the Data Hub may also be able to store a copy of the data or host it in a database, and provide some basic visualisation tools. This site is running a powerful piece of open-source data cataloguing software called CKAN, written and maintained by the Open Knowledge Foundation.

       WebSocket “is a web technology providing for bi-directional, full-duplex communications channels over a single TCP connection. The WebSocket API is being standardized by the W3C, and the WebSocket protocol has been standardized by the IETF as RFC 6455.”
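
A minimal browser-side use of the WebSocket API in TypeScript looks like this (the endpoint URL and messages are hypothetical):

    // Open a full-duplex channel over a single TCP connection.
    const ws = new WebSocket("wss://example.org/peer");

    ws.onopen = () => {
      ws.send("hello"); // either side may send at any time once open
    };
    ws.onmessage = (event: MessageEvent) => {
      console.log("received:", event.data); // the server can also push unprompted
    };
    ws.onclose = () => console.log("connection closed");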

       Web Notifications API: This W3C specification provides an API to display notifications to alert users outside the context of a web page.
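
A minimal TypeScript sketch of the API (the title and message are invented): ask the user's permission, then raise a notification outside the page's own UI. Note that requestPermission originally took a callback; current browsers return a promise.

    async function notify(message: string): Promise<void> {
      const permission = await Notification.requestPermission();
      if (permission === "granted") {
        new Notification("PeerPoint", { body: message }); // shown outside the page
      }
    }

    notify("A peer wants to connect.");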

       Clojure is a dynamic programming language that targets the Java Virtual Machine (and the CLR, and JavaScript). It is designed to be a general-purpose language, combining the approachability and interactive development of a scripting language with an efficient and robust infrastructure for multithreaded programming. Clojure is a compiled language – it compiles directly to JVM bytecode, yet remains completely dynamic. Every feature supported by Clojure is supported at runtime. Clojure's approach to Identity and State:

       Imperative programming: An imperative program manipulates its world (e.g. memory) directly. It is founded on a now-unsustainable single-threaded premise – that the world is stopped while you look at or change it. You say “do this” and it happens, “change that” and it changes. Imperative programming languages are oriented around saying do this/do that, and changing memory locations. This was never a great idea, even before multithreading. Add concurrency and you have a real problem, because the “world is stopped” premise is simply no longer true, and restoring that illusion is extremely difficult and error-prone. Multiple participants, each of which acts as though they were omnipotent, must somehow avoid destroying the presumptions and effects of the others. This requires mutexes and locks, to cordon off areas for each participant to manipulate, and a lot of overhead to propagate changes to shared memory so they are seen by other cores. It doesn't work very well.

       Functional programming: Functional programming takes a more mathematical view of the world, and sees programs as functions that take certain values and produce others. Functional programs eschew the external ‘effects' of imperative programs, and thus become easier to understand, reason about, and test, since the activity of functions is completely local. To the extent a portion of a program is purely functional, concurrency is a non-issue, as there is simply no change to coordinate.

       Working Models and Identity: While some programs are merely large functions, e.g. compilers or theorem provers, many others are not – they are more like working models, and as such need to support what I'll refer to in this discussion as identity. By identity I mean a stable logical entity associated with a series of different values over time. Models need identity for the same reasons humans need identity – to represent the world. How could it work if identities like ‘today' or ‘America' had to represent a single constant value for all time? Note that by identities I don't mean names (I call my mother Mom, but you wouldn't). So, for this discussion, an identity is an entity that has a state, which is its value at a point in time. And a value is something that doesn't change. 42 doesn't change. June 29th 2008 doesn't change. Points don't move, dates don't change, no matter what some bad class libraries may cause you to believe. Even aggregates are values. The set of my favorite foods doesn't change, i.e. if I prefer different foods in the future, that will be a different set. Identities are mental tools we use to superimpose continuity on a world which is constantly, functionally, creating new values of itself.
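
Clojure realizes this with immutable values plus managed references such as atoms. The idea transposes loosely into TypeScript as follows (illustrative names; this sketches the concept, not Clojure's actual machinery): a value never changes, while an identity is a reference that points to a succession of values.

    // Values are immutable; an identity holds a succession of values.
    type Value = ReadonlyArray<string>;

    class Identity {
      constructor(private current: Value) {}
      deref(): Value { return this.current; }
      swap(f: (v: Value) => Value): Value {
        this.current = f(this.current); // point at a new value; the old one is untouched
        return this.current;
      }
    }

    const favoriteFoods = new Identity(["bread"]);
    const before = favoriteFoods.deref();
    favoriteFoods.swap((foods) => [...foods, "olives"]); // builds a new value
    console.log(before);                 // ["bread"]: the old value still holds
    console.log(favoriteFoods.deref());  // ["bread", "olives"]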

       GNU Privacy Guard GnuPG is the GNU project's complete and free implementation of the OpenPGP standard as defined by RFC 4880. GnuPG allows you to encrypt and sign your data and communication, and features a versatile key management system as well as access modules for all kinds of public key directories. GnuPG, also known as GPG, is a command line tool with features for easy integration with other applications; example commands appear after this list. A wealth of frontend applications and libraries are available. Version 2 of GnuPG also provides support for S/MIME.

       GnuPG is Free Software (meaning that it respects your freedom). It can be freely used, modified and distributed under the terms of the GNU General Public License.

       GnuPG comes in two flavours: 1.4.12 is the well known and portable standalone version, whereas 2.0.19 is the enhanced and somewhat harder to build version.

       Project Gpg4win provides a Windows version of GnuPG. It is nicely integrated into an installer and features several frontends as well as English and German manuals.

       Project GPGTools provides a Mac OS X version of GnuPG. It is nicely integrated into an installer and features all required tools.

       Project Aegypten developed the S/MIME functionality in GnuPG 2.
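
Typical GnuPG usage from the command line looks like the following; the recipient address and file names are hypothetical, while the flags are standard gpg options:

    # Encrypt and sign a file for a recipient whose public key is in your keyring
    gpg --encrypt --sign --recipient alice@example.org notes.txt

    # Decrypt the result on the receiving end
    gpg --decrypt notes.txt.gpg

    # Clearsign a message, then verify the signature
    gpg --clearsign message.txt
    gpg --verify message.txt.asc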

       OpenPGP is a non-proprietary protocol for encrypting email using public key cryptography. It is based on PGP as originally developed by Phil Zimmermann. The OpenPGP protocol defines standard formats for encrypted messages, signatures, and certificates for exchanging public keys. OpenPGP has become the standard for nearly all of the world's encrypted email. By becoming an IETF Proposed Standard (RFC 4880), OpenPGP may be implemented by any company without paying any licensing fees to anyone. The OpenPGP Alliance brings companies together to pursue a common goal of promoting the same standard for email encryption and to apply the PKI that has emerged from the OpenPGP community to other non-email applications.

       A darknet is a distributed P2P filesharing network where connections are made only between trusted peers (sometimes called “friends”, F2F) using non-standard protocols and ports, or using onion routing. (Wikipedia)

       Tor: FOIA Documents Show Tor Undernet Beyond the Reach of Federal Investigators

        FreedomBox Foundation We're building software for smart devices whose engineered purpose is to work together to facilitate free communication among people, safely and securely, beyond the ambition of the strongest power to penetrate. They can make freedom of thought and information a permanent, ineradicable…

       freenet Share files, chat on forums, browse and publish, anonymously and without fear of blocking or censorship. Then connect to your friends for even better security.

             

       YaCy – The Peer to Peer Search Engine

       Apache Wave is a software framework for real-time collaborative editing online. Google Inc. originally developed it as Google Wave to merge key features of communications media such as email, instant messaging, wikis, and social networking. Communications using the system can be synchronous or asynchronous. Software extensions provide contextual spelling and grammar checking, automated language translation, and other features. (Wikipedia)

       Apache Hadoop is an open source software framework that supports data-intensive distributed applications, licensed under the Apache v2 license. It enables applications to work with thousands of independent computers and petabytes of data. Hadoop was derived from Google's MapReduce and Google File System (GFS) papers. Hadoop is a top-level Apache project being built and used by a global community of contributors, written in the Java programming language. (Wikipedia)

       Eclipse is a multi-language software development environment comprising an integrated development environment (IDE) and an extensible plug-in system. It is written mostly in Java. It can be used to develop applications in Java and, by means of various plug-ins, other programming languages including Ada, C, C++, COBOL, Haskell, Perl, PHP, Python, R, Ruby (including the Ruby on Rails framework), Scala, Clojure, Groovy and Scheme. It can also be used to develop packages for the software Mathematica. Development environments include the Eclipse Java development tools (JDT) for Java, Eclipse CDT for C/C++, and Eclipse PDT for PHP, among others. (Wikipedia)

       Light Table: a new IDE

       Apache Subversion (often abbreviated SVN, after the command name svn) is a software versioning and revision control system distributed under an open source license. Developers use Subversion to maintain current and historical versions of files such as source code, web pages, and documentation. (Wikipedia)

       OpenJDK (Open Java Development Kit) is a free and open source implementation of the Java programming language.

       Ourproject.org is a web-based collaborative free content repository. It acts as a central location for offering web space and tools for projects of any topic, focusing on free knowledge. It aims to extend the ideas and methodology of free software to social areas and free culture in general. Thus, it provides multiple web services (hosting, mailing lists, wiki, ftp, forums…) to social/cultural/artistic projects as long as they share their contents with Creative Commons licenses (or other free/libre licenses). Active since 2002, nowadays it hosts 1,000 projects and its services receive around 1,000,000 monthly visits.

       Kune is a platform for encouraging collaboration, content sharing & free culture. It aims to improve, modernize and extend what Ourproject.org does, in an easier manner and with expanded features for community-building. It allows for the creation of online spaces of collaborative work, where organizations and individuals can build projects online, coordinate common agendas, set up virtual meetings and join people/orgs with similar interests. It combines the characteristics of online social networks with collaborative software, aimed at groups and boosting the sharing of contents among orgs/peers. Demo site. Differences between N-1/Lorea/Elgg and Kune/Apache Wave.

Kune Addendum via Samer @ unlike-us@listcultures.org, Wed, 31 Oct 2012:

“We thought you might be interested in the new release of the collaborative federated social network Kune, codename “Ostrom” http://kune.cc/ Kune is focused on real-time collaboration (not just communication), on building (not just sharing). This new release is fully multi-lingual, supporting 12 languages, and with multiple improvements. It is named “Ostrom” as a homage to the Nobel laureate in Economics Elinor Ostrom, who demonstrated how the Commons can be managed by their communities in a better and more successful way than by the State or the Market.”

“Kune aims to be a free/libre decentralized social network, so you would stop using Facebook. It provides real-time simultaneous collaborative edition of documents so you can stop using Google Docs and wikis. It allows you to communicate in discussion lists, so you stop using mailing lists and Google/Yahoo/MSN Groups. It provides group calendars so you forget about Google Calendar. It provides chat compatible with Gmail/Jabber chat accounts of your friends. It allows galleries of images, videos, maps or any rich contents, so you can stop using Flickr/Youtube. It provides multiple other tools for collaboration such as polls, doodles, or add-ons (same way as the Firefox add-ons). It is an advanced mail inbox so you use less and less your classical e-mail. And eventually, it will allow publishing contents to the general public so you would be able to create your own customized group web-pages without needing any CMS (WordPress, Drupal, etc).”

       Move Commons (MC) is a simple web tool for initiatives, collectives and NGOs to declare and visualize the core principles they are committed to. The idea behind MC follows the same mechanics as Creative Commons tagging of cultural works, providing a user-friendly, bottom-up labeling system for each initiative with 4 meaningful icons and some keywords. It aims to boost the visibility and diffusion of such initiatives, building a network among related initiatives/collectives across the world and allowing mutual discovery. Thus, it can facilitate the climb up to critical mass. In addition, newcomers could easily understand a collective's approach from its website, and/or discover collectives matching their field/location/interests with a simple search. Although a few initiatives already have their MC, it is still a beta version under development, with the support of the Medialab-Prado Commons Lab.

        Other Comunes Collective projects

       Alerta! is a community-driven alert system

       Plantaré is a community currency for seed exchange

       The World of Alternatives is a proof-of-concept initiative that aims to collectively classify and document the alternatives of our “Another World is Possible” in Wikipedia

       Karma is a proof-of-concept gadget for a decentralized reputation rating system

       Massmob is a proof-of-concept gadget for calling and organizing meetings and smart mobs

       Troco is a proof-of-concept gadget of a peer-to-peer currency

       SourceForge is a web-based source code repository. It acts as a centralized location for software developers to control and manage free and open source software development… As of July 2011, the SourceForge repository hosts more than 300,000 projects and has more than 2 million registered users… SourceForge offers free access to hosting and tools for developers of free/open source software, competing with other providers such as RubyForge, Tigris.org, BountySource, Launchpad, BerliOS, JavaForge, GNU Savannah, GitHub and Gitorious. (Wikipedia)

       Git is a distributed revision control and source code management system with an emphasis on speed. Also GitHub.

       Fossil: Simple, high-reliability, distributed software configuration management

       GNUnet is a framework for secure peer-to-peer networking that does not use any centralized or otherwise trusted services. A first service implemented on top of the networking layer allows anonymous, censorship-resistant file-sharing. Better-than-average documentation for developers.

       Tonika is an administration-free platform for large-scale open-membership (social) networks with robust security, anonymity, resilience and performance guarantees

       Sneer is a free and open source sovereign computing platform. It will (in future) enable you to share hardware resources (CPU, disk space, network bandwidth) with your friends. The current version will host your own social network, information and media, and let you create Snapps (sovereign applications) using Eclipse.

       WebBox: Supporting Decentralised and Privacy-respecting Micro-sharing with Existing Web Standards – WebBox inverts the standard development approach: not focusing on applications, but focusing on standardized data formats so that everything saves to, requests data from, and is controlled by a single data store. Then it doesn't matter what applications you're running: they all understand and interact with the same data.

       Hocnet is a concept for a competitively decentralized internet. Instead of allowing a small group to have oligopolistic control over the Internet, or attempting to solve the problems that come with completely decentralized networks, Hocnet attempts to present a solution with the advantages of both approaches by allowing centralization where needed, but preventing it from becoming oligopolistic by utilizing competition, with low barriers both to entry and to changing provider. Hocnet is primarily a system to locate, compensate, and receive goods from the individual providers of each service that the network needs to operate, thus commoditizing network access. Some services, such as routing, will have competing centralized providers. Other services, such as passing bandwidth, will be provided profitably by everyone using the network. Competition will both lower prices and optimize use of the network by providing large monetary incentives for improved routing and QoS.

       Tethr.us will provide satellite-based Internet access, GSM service (SMS), and wifi connectivity in one box. It will be portable, easy to hide and very, very easy to use anywhere in the world.

       Tribler (4th generation p2p technology – A BitTorrent client/social extension)                                                                 

       Fast content search

       Wiki-style channels

       Video-on-demand support

       Fully decentralized

       Reputation system

       Integrated P2P database

       Barter system

       Tribler is an application that enables its users to find, enjoy and share content (video, audio, pictures, and much more). Tribler has three functions:

       1. Find content

       Keyword search

       Through its improved search functionality you can search the content of other Tribler users, as well as content from big video web portals such as YouTube and Liveleak.

       Browse in different categories

       You can browse through different categories such as video, audio, pictures, etc. You can also see what is most popular and what has been made available recently. All these functionalities will definitely help you find something you like.

       See what friends and taste neighbors like – users can “vote” on content they like

       By making friends and getting in touch with users with similar taste you can find content that you might find interesting. You can also show your friends what you like and what they definitely should see.

       2. Consume content. Because of the integrated video and audio player you can almost immediately start watching your favorite video(s) or listen to your favorite song(s).

       3. Share content. Tribler is a social application. This means you can make friends with other users and show everyone what you like and dislike. And by sharing your content you also help other Tribler users enjoy their favorite content.

       Tribler video (Stanford University): http://youtu.be/JQiLaKdzD0E

       Askemos combines incorruptible privilege delegation and non-repudiable replication of communicating processes into a trustworthy network. Physical machines under the control of their operators execute application processes under permanent multilateral audit. The aim of Askemos is to enable reliable and justiciable data processing. A modelling framework for Societal Infrastructure Software. See comments on Dan Bricklin's essay “Software that lasts 200 years” for a discussion of applicable software-ecologic principles. A web-like but peer-to-peer distributed application server featuring a freely programmable layer instead of the special-purpose applications usually found in overlay networks. Implementations of an Askemos peer can be obtained from ball.askemos.org. Note that the Askemos.org site is a work in progress concerned exclusively with the rationale and abstract specification, including data formats, protocols, service interfaces, etc. – not the actual implementations. The network's honest majority of hosts provides users with exclusive control, and thus real ownership, of processes. Askemos models a “virtual constitutional state” where physical hosts bear witness to the interactions of virtual agents (akin to citizens). Self-verifying identifiers can confirm that original documents have not been tampered with. The real potential for using Askemos is in identity and time stamp services and in information management for public administration and libraries attaching metadata and archives, with the goal of establishing robust systems that can endure for centuries.

       Peerscape is an experimental peer-to-peer social network implemented as an extension to the Firefox web browser. (inactive since 2010?)

       Extensible Messaging and Presence Protocol (XMPP)

       WebRTC 1.0: Real-time Communication Between Browsers

       CJDNS is a collection of networking tools primarily consisting of CJDRoute

       OpenWrt is a Linux distribution primarily targeted at routing on embedded devices. It comprises a set of about 2000 software packages, installed and uninstalled via the opkg package management system. OpenWrt can be run on CPE routers, residential gateways, smartphones (e.g. Neo FreeRunner), pocket computers (e.g. Ben NanoNote), and ordinary computers… The project incorporates a wiki and a forum… (Wikipedia)

       OpenVPN is an open source software application that implements virtual private network (VPN) techniques for creating secure point-to-point or site-to-site connections in routed or bridged configurations and remote access facilities. It uses a custom security protocol that utilizes SSL/TLS for key exchange. It is capable of traversing network address translators (NATs) and firewalls. It was written by James Yonan and is published under the GNU General Public License (GPL). OpenVPN allows peers to authenticate each other using a pre-shared secret key, certificates, or username/password. When used in a multiclient-server configuration, it allows the server to release an authentication certificate for every client, using signatures and a Certificate authority. It uses the OpenSSL encryption library extensively, as well as the SSLv3/TLSv1 protocol, and contains many security and control features.

       Distributed Hash Tables (DHT) (Wikipedia): “a class of decentralized distributed system that provides a lookup service similar to a hash table; (key, value) pairs are stored in a DHT, and any participating node can efficiently retrieve the value associated with a given key. Responsibility for maintaining the mapping from keys to values is distributed among the nodes, in such a way that a change in the set of participants causes a minimal amount of disruption. This allows a DHT to scale to extremely large numbers of nodes and to handle continual node arrivals, departures, and failures.” List of DHTs (Wikipedia)
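
To make the (key, value) idea concrete, here is a toy, single-process sketch in Python of the placement rule a DHT uses: each node gets an ID, and a key lives on the node whose ID is XOR-closest to the key's hash (the Kademlia-style distance metric). This illustrates the principle only, not any particular DHT implementation; all names are invented.

```python
import hashlib

def node_id(name: str) -> int:
    # Derive a 160-bit identifier from a name (hypothetical scheme).
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")

class ToyDHT:
    """Single-process sketch: each (key, value) pair lives on the node
    whose ID is XOR-closest to the hash of the key."""
    def __init__(self, node_names):
        self.nodes = {name: {} for name in node_names}

    def _closest(self, key: str) -> str:
        kid = node_id(key)
        return min(self.nodes, key=lambda n: node_id(n) ^ kid)

    def put(self, key, value):
        self.nodes[self._closest(key)][key] = value

    def get(self, key):
        return self.nodes[self._closest(key)].get(key)

dht = ToyDHT(["alice", "bob", "carol"])
dht.put("song.ogg", "some value")
assert dht.get("song.ogg") == "some value"
```

When a node joins or leaves, only the keys whose closest node changes need to move, which is the “minimal amount of disruption” property the definition above refers to.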

       List of Distributed Data Stores (DDS) (Wikipedia)

       HBase is an open source, non-relational, distributed database modeled after Google's BigTable and is written in Java. It is developed as part of the Apache Software Foundation's Apache Hadoop project and runs on top of HDFS (Hadoop Distributed Filesystem), providing BigTable-like capabilities for Hadoop. That is, it provides a fault-tolerant way of storing large quantities of sparse data. HBase features compression, in-memory operation, and Bloom filters on a per-column basis as outlined in the original BigTable paper. Tables in HBase can serve as the input and output for MapReduce jobs run in Hadoop, and may be accessed through the Java API but also through REST, Avro or Thrift gateway APIs. HBase is not a direct replacement for a classic SQL database, although recently its performance has improved, and it is now serving several data-driven websites, including Facebook's Messaging Platform. (Wikipedia)

       CouchDB Apache CouchDB™ is a distributed database that uses JSON for documents, JavaScript for MapReduce queries, and regular HTTP for an API. CouchDB is a database that completely embraces the web. Query, combine, and transform your documents with JavaScript. CouchDB works well with modern web and mobile apps. You can even serve web apps directly out of CouchDB. And you can distribute your data, or your apps, efficiently using CouchDB’s incremental replication. CouchDB supports master-master setups with automatic conflict detection. CouchDB comes with a suite of features, such as on-the-fly document transformation and real-time change notifications, that makes web app development a breeze. It even comes with an easy to use web administration console. You guessed it, served up directly out of CouchDB! We care a lot about distributed scaling. CouchDB is highly available and partition tolerant, but is also eventually consistent. And we care a lot about your data. CouchDB has a fault-tolerant storage engine that puts the safety of your data first.
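
Because CouchDB's API is plain HTTP and JSON, talking to it needs nothing more than an HTTP client. A minimal sketch, assuming a local CouchDB in its default development configuration on port 5984 (the database and document names are made up):

```python
import requests  # any HTTP client will do; CouchDB speaks plain HTTP

BASE = "http://localhost:5984"

requests.put(f"{BASE}/peerpoint")            # create a database
requests.put(f"{BASE}/peerpoint/post-1",     # create a document
             json={"title": "Hello", "tags": ["p2p"]})
doc = requests.get(f"{BASE}/peerpoint/post-1").json()  # read it back
print(doc["title"], doc["_rev"])  # CouchDB adds _id and _rev for replication
```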

       OpenStack is an Infrastructure as a Service (IaaS) cloud computing project by Rackspace Cloud and NASA. Currently more than 150 companies have joined the project, among which are AMD, Intel, Canonical, SUSE Linux, Red Hat, Cisco, Dell, HP, IBM and Yahoo!. It is free open source software released under the terms of the Apache License. OpenStack integrates code from NASA's Nebula platform as well as Rackspace's Cloud Files platform. (Wikipedia)

       Tahoe-LAFS (Tahoe Least-Authority Filesystem) is “an open source, secure, decentralized, fault-tolerant, peer-to-peer distributed data store and distributed file system. It can be used as an online backup system, or to serve as a file or web host similar to Freenet, depending on the front-end used to insert and access files in the Tahoe system. Tahoe can also be used in a RAID-like manner to use multiple disks to make a single large RAIN pool of reliable data storage. The system is designed and implemented around the “Principle of Least Authority” (POLA). Strict adherence to this convention is enabled by the use of cryptographic capabilities which grant the minimal set of privileges necessary to accomplish a given task to requesting agents. A RAIN array acts as storage—these servers do not need to be trusted for confidentiality or integrity of the stored data.” (Wikipedia)
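
The capability idea is easy to picture in code: derive the encryption key from the file itself, store the ciphertext under a separate index, and hand out the (key, index) pair as the “read capability”. The sketch below is a toy version of that principle, not Tahoe's actual formats, and it omits erasure coding and the RAIN layer entirely:

```python
import base64, hashlib
from cryptography.fernet import Fernet  # any symmetric cipher would do

storage = {}  # stands in for untrusted storage servers

def put(content: bytes):
    # Toy convergent key: derived from the plaintext itself.
    key = base64.urlsafe_b64encode(hashlib.sha256(content).digest())
    ciphertext = Fernet(key).encrypt(content)
    index = hashlib.sha256(ciphertext).hexdigest()  # storage location
    storage[index] = ciphertext
    return (key, index)  # the read capability: grants exactly read access

def get(capability):
    key, index = capability
    return Fernet(key).decrypt(storage[index])

cap = put(b"attack at dawn")
assert get(cap) == b"attack at dawn"
```

Whoever holds the capability can read the file; the storage servers, holding only ciphertext, cannot. That is the least-authority property in miniature.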

       Open Computing Language (OpenCL) is a framework for writing programs that execute across heterogeneous platforms consisting of central processing units (CPUs), graphics processing units (GPUs), and other processors. OpenCL includes a language (based on C99) for writing kernels (functions that execute on OpenCL devices), plus application programming interfaces (APIs) that are used to define and then control the platforms. OpenCL provides parallel computing using task-based and data-based parallelism. OpenCL is an open standard maintained by the non-profit technology consortium Khronos Group. It has been adopted by Intel, Advanced Micro Devices, Nvidia, and ARM Holdings. OpenCL gives any application access to the graphics processing unit for non-graphical computing. Thus, OpenCL extends the power of the graphics processing unit beyond graphics. (Wikipedia)

       Parallella The goal of the Parallella project is to democratize access to parallel computing.

       ns-3 is a discrete-event network simulator, targeted primarily for research and educational use. ns-3 is free software, licensed under the GNU GPLv2 license, and is publicly available for research, development, and use. The goal of the ns-3 project is to develop a preferred, open simulation environment for networking research: it should be aligned with the simulation needs of modern networking research and should encourage community contribution, peer review, and validation of the software. ns-3 is a C++ library which provides a set of network simulation models implemented as C++ objects and wrapped through Python. Users normally interact with this library by writing a C++ or a Python application which instantiates a set of simulation models to set up the simulation scenario of interest, enters the simulation main loop, and exits when the simulation is completed.

       OpenLink Data Spaces What are Data Spaces? Basically, they provide structured data access partitioning just as ‘machine names' provide name-oriented partitioning for DNS and ‘table names' do the same for database systems. A Data Space is a named, structured data cluster within a distributed data network where each item of data (each “datum”) has a unique identifier. Fundamental characteristics of data spaces include: each Data Item is a Data Object endowed with a unique HTTP-based identifier; Data Object identity is distinct from its content, structure, and location (address); Data Object representation is delivered via structured content constrained by a schema (in this case the entity-attribute-value model); creation, update, and deletion privileges are controlled by the Data Space owner.
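
The entity-attribute-value bookkeeping behind a data space can be pictured in a few lines. A minimal sketch, with all identifiers invented for illustration:

```python
# Toy entity-attribute-value store: every datum hangs off an HTTP-based ID.
triples = set()

def assert_fact(entity: str, attribute: str, value):
    triples.add((entity, attribute, value))

def describe(entity: str) -> dict:
    # The representation of a Data Object, separate from its identifier.
    return {a: v for (e, a, v) in triples if e == entity}

alice = "http://example.org/dataspace/people/alice#this"
assert_fact(alice, "name", "Alice")
assert_fact(alice, "knows", "http://example.org/dataspace/people/bob#this")
print(describe(alice))  # {'name': 'Alice', 'knows': 'http://...bob#this'}
```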

       Netsukuku: Fractal address system for a p2p cloud The Netsukuku project is based on the idea of exploiting the potential of WiFi connectivity, linking the PCs of wireless communities to act as routers, forming a network that could become as large or larger than the current Internet. Netsukuku is an ad-hoc network forming software built around an address system designed to handle massive numbers of nodes while requiring minimal CPU and memory resources. It could be used to build a world-wide distributed, fault-tolerant, anonymous, and censorship-resistant network, fully independent of the Internet. Netsukuku does not rely upon backbones, routers or internet service providers nor any other centralized system, although it may take advantage of existing systems of this nature to augment unity and connectivity of the existing Netsukuku network. (recent progress)

       Network Simulation Tools

       Babel is a loop-avoiding distance-vector routing protocol for IPv6 and IPv4 with fast convergence properties. It is based on the ideas in DSDV, AODV and Cisco's EIGRP, but is designed to work well not only in wired networks but also in wireless mesh networks.
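
The distance-vector core of such a protocol fits in a few lines: each node keeps its best-known cost to every destination and “relaxes” those costs against what its neighbors advertise. The sketch below shows one update step only; Babel's actual contribution (loop avoidance via feasibility conditions and sequence numbers) is omitted:

```python
INF = float("inf")

def relax(table: dict, cost_to_neighbor: float, advertised: dict) -> bool:
    """One distance-vector update: adopt a route through the neighbor
    whenever it is cheaper than what we already know."""
    changed = False
    for dest, cost in advertised.items():
        if cost_to_neighbor + cost < table.get(dest, INF):
            table[dest] = cost_to_neighbor + cost
            changed = True
    return changed

table = {"A": 0.0}                                  # we are node A
relax(table, 1.0, {"A": 1.0, "B": 0.0, "C": 2.0})   # neighbor B's vector
print(table)  # {'A': 0.0, 'B': 1.0, 'C': 3.0}
```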

       Simple Web Discovery (SWD)

       KadOH – Javascript P2P framework to build P2P applications for browsers and node.js. By implementing the basis of the Kademlia DHT, KadOH lets you build distributed web applications for mobile and desktop devices. With its flexible and extensible design, you can easily adapt KadOH to fit your needs. KadOH is available under the MIT License. KadOH abstracts many different transport protocols to provide P2P connections. In the browser we support XMPP over BOSH and Socket.io shipped with a node.js router, and you can go for UDP and native XMPP in a node.js application. We plan to support WebRTC soon! Today we try to push the modularization ideas to move toward a framework-oriented system, providing tools to build and test distributed applications (mainly DHT based). [Currently] building a fully decentralized Twitter-like demo application based on KadOH. Documentation wiki.

       Backbone.js gives structure to web applications by providing models with key-value binding and custom events, collections with a rich API of enumerable functions, views with declarative event handling, and connects it all to your existing API over a RESTful JSON interface.

       BOINC: Open-source software for volunteer computing and grid computing. Use the idle time on your computer (Windows, Mac, or Linux) to cure diseases, study global warming, discover pulsars, and do many other types of scientific research. It's safe, secure, and easy.

       GPU: a Global Processing Unit. GPU is a Gnutella client that allows users to share CPU resources. GPU allows the creation of computer alliances. The CPU-time sharing system does not recognize privileges between users. Each person agrees to provide network resources as needed and in return is able to get CPU cycles from other clients on the network system. Plugins extend the capabilities of client nodes… Right now, this client allows rendering of Terragen movies. An experimental climate simulator is included, too.

       Bitcoin: a decentralized electronic cash system that uses peer-to-peer networking, digital signatures and cryptographic proof so as to enable users to conduct irreversible transactions without relying on trust. Nodes broadcast transactions to the network, which records them in a public history, called the blockchain, after validating them with a proof-of-work system.
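
The proof-of-work half of that description is simple to sketch: find a nonce such that the hash of the block contents falls below a difficulty target, so producing a block is expensive while verifying one takes a single hash. A toy version (real Bitcoin hashes a binary block header with double SHA-256 and uses a far harder target):

```python
import hashlib

def mine(block_data: bytes, difficulty_bits: int = 20):
    # Search for a nonce whose hash has `difficulty_bits` leading zero bits.
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()
        nonce += 1

nonce, digest = mine(b"tx1;tx2;prev=000abc")
print(nonce, digest)  # finding this took ~2^20 hashes; checking it takes one
```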

       Community Forge: a non-profit association that designs, develops and distributes free, open-source software for building communities with currencies.

       Trusted Platform Module (TPM) is both the name of a published specification detailing a secure cryptoprocessor that can store cryptographic keys that protect information, as well as the general name of implementations of that specification, often called the “TPM chip” or “TPM Security Device”. The TPM specification is the work of the Trusted Computing Group.

       ChiliProject is a web-based project management system. It supports your team throughout the complete project life cycle, from setting up and discussing a project plan, through tracking issues and reporting work progress, to collaboratively sharing knowledge. It is provided under the GNU General Public License, version 2.

       dotbot wants to make the internet as open as possible. Currently only a select few corporations have a complete and useful index of the web. Our goal is to change that fact by crawling the web and releasing as much information about its structure and content as possible. View preliminary internet statistics and free downloadable index.

       data.fm: data cloud. This Read/Write Linked Data service is free (and open-source) for educational and personal use.

       ownCloud is a flexible, open source file sync and share solution. Store your files, folders, contacts, photo galleries, calendars and more on a server of your choosing. Access that folder from your mobile device, your desktop, or a web browser. Access your data wherever you are, when you need it. Sync Your Data – Keep your files, contacts, photo galleries, calendars and more synchronized amongst your devices. One folder, two folders and more – get the most recent version of your files with the desktop and web client or mobile app of your choosing, at any time. Share your data with others, and give them access to your latest photo galleries, your calendar, your music, or anything else you want them to see. Share it publicly, or privately. It is your data, do what you want with it.

       PageKite makes local websites or SSH servers publicly accessible in mere seconds, and works with any computer and any Internet connection. The fast, reliable way to make localhost part of the Web. It's also 100% Open Source.

       identi.ca is an open source social networking and micro-blogging service. Based on StatusNet, a micro-blogging software package built on the OStatus (formerly OpenMicroBlogging) specification, Identi.ca allows users to send text updates (known as “notices”) up to 140 characters long. While similar to Twitter in both concept and operation, Identi.ca provides many features not currently implemented by Twitter, including XMPP support and personal tag clouds. In addition, Identi.ca allows free export and exchange of personal and “friend” data based on the FOAF standard; therefore, notices can be fed into a Twitter account or other service, and also ported in to a private system similar to Yammer.

       Friendica What if social networks were more like email? What if they were all inter-connected, and you could choose which software (and even which provider) to use based purely on what they offered you? Now they are! Friendica is bringing them all together. All of these can be included in your Friendica “social stream” where you may interact with them using a familiar conversational interface – and perhaps arrange them into private conversation groups. (Note: Two-way and private communications are not yet available on all networks, and in a few cases these abilities are not possible due to limitations in the underlying communications formats.) Friendica is also the most technically advanced and feature-rich decentralised Facebook alternative currently available for the indie web. Friendica doesn't require that you use any other services for social communication. We provide fully distributed communications protocols (“DFRN” and “Zot!”) for securely sharing with your friends across the internet. These have all the privacy and communication features you expect, military-grade message encryption, and more. This network is infinitely scalable, and nobody can ever own it. They're your photos. Your posts – to be shared with who you wish and only with those you wish. Friendica is decentralised, open source, secure, private, modular, extensible, unincorporated, and federated.

       Rizzoma is free and open source. Communicate and collaborate in real-time. All existing communication and collaboration tools display messages chronologically and in a linear way, making context fragmented and difficult to comprehend. Rizzoma allows communication within a certain context, permitting a chat to instantly become a document where topics of a discussion are organized into branches of a mind-map diagram and minor details are collapsed to avoid distraction.

       Smallest Federated Wiki New project from Ward Cunningham, wiki inventor. The wiki innovates in three ways: it shares through federation, composes by refactoring, and wraps data with visualization. Follow our open development on GitHub or just watch our work-in-progress videos here.

       sharedearth.net is a social network for humans to build trust and share possessions and skills, helping us rewrite our cultural and economic stories to reflect our emerging experience of universal interconnection.

       Social VPN is a free and open-source P2P Social Virtual Private Network (VPN) that seamlessly networks your computer with the computers of your friends so that:

       Your computer can communicate directly to computers of your friends, and all communication is encrypted and authenticated. In other words, you are in full control of who you connect to and all your communications are private.

       This private network is configured with no hassle. The social VPN does all the configuration automatically for you. All you and your friends need to do is run this software and log in to your XMPP backend (such as Google chat, or Jabber.org).

       You and your friends can communicate, share and collaborate in countless ways, with existing applications, like iTunes, Windows shared folders, and remote desktop. You can share files and folders, stream music and video, play multi-user games, access remote desktops, and run a Web server private to your friends.

       If you own multiple computers at different places, you can also use the Social VPN to seamlessly access your files, desktop, etc. remotely – creating your own personal VPN.

       OpenSocial is “a public specification that defines a component hosting environment (container) and a set of common application programming interfaces (APIs) for web-based applications. Initially it was designed for social network applications and was developed by Google along with MySpace and a number of other social networks. In more recent times it has become adopted as a general use runtime environment for allowing untrusted and partially trusted components from third parties to run in an existing web application. The OpenSocial Foundation has also moved to integrate or support numerous other open web technologies. This includes OAuth and OAuth 2.0, Activity Streams, and Portable Contacts, among others. Applications implementing the OpenSocial APIs will be interoperable with any social network system that supports them, including features on sites such as Hi5.com, 99factors.com, MySpace, orkut, Netlog, Sonico.com, Friendster, Ning, and Yahoo!.” (Wikipedia)

       Apache Shindig is the reference implementation of the OpenSocial container and OpenSocial API specifications, a standard set of Social Network APIs (a hedged request sketch follows the list below), which includes:

       Profiles

       Relationships

       Activities

       Shared applications

       Authentication

       Authorization
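
As an illustration of what a “standard set of Social Network APIs” means in practice, the sketch below fetches a profile and a friend list over the OpenSocial RESTful protocol. The base URL and anonymous access are assumptions about a locally running Shindig; the /people/{user}/{group} paths and the "entry" envelope follow the OpenSocial REST convention, but check them against the version you deploy:

```python
import requests

BASE = "http://localhost:8080/social/rest"  # assumed local Shindig instance

profile = requests.get(f"{BASE}/people/@me/@self").json()     # who am I?
friends = requests.get(f"{BASE}/people/@me/@friends").json()  # relationships

print(profile["entry"]["displayName"])  # fields follow the Person schema
for person in friends["entry"]:
    print(" knows:", person["displayName"])
```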

       TEXTUS: an open source platform for working with collections of texts and metadata. It enables users to transcribe, translate, and annotate texts, and to manage associated bibliographic data.

       co-ment(R) Free / open source software Web-based text annotation system. COMT is the core engine of co-ment(R), the leading Web service for text annotation. It is an autonomous software package released under the GNU Affero GPL version 3. COMT enables you to install and run a text-annotation Web service. COMT operates a workspace shared among a group of users. In this workspace, one can create, upload, submit for comment, revise, and export texts and their comments. User rights are defined for the whole set of texts in the workspace and can be specialized for each text. You can install COMT to run your own service or use co-ment as a hosted service.

       Microsoft SharePoint (non-p2p, but representative of other proposed aspects of PeerPoint architecture) is a web application platform developed by Microsoft. First launched in 2001, SharePoint has historically been associated with intranet content management and document management, but recent versions have significantly broader capabilities. SharePoint comprises a multipurpose set of web technologies which are useful for many organizations, backed by a common technical infrastructure. By default, SharePoint has a Microsoft Office-like interface, and it is closely integrated with the Office suite. The web tools are designed to be usable by non-technical users. SharePoint can be used to provide intranet portals, document & file management, collaboration, social networks, extranets, websites, enterprise search, and business intelligence. It also has capabilities around system integration, process integration, and workflow automation. Enterprise application software (e.g. ERP or CRM packages) often provides some SharePoint integration capability, and SharePoint also incorporates a complete development stack based on web technologies and standards-based APIs. As an application platform, SharePoint provides central management, governance, and security controls for implementation of these requirements. The SharePoint platform integrates directly into IIS – enabling bulk management, scaling, and provisioning of servers, as is often required by large organisations or cloud hosting providers. In 2008, the Gartner Group put SharePoint in the “leaders” quadrant in three of its Magic Quadrants (for search, portals, and enterprise content management). SharePoint is used by 78% of Fortune 500 companies. Between 2006 and 2011, Microsoft sold over 36.5 million user licenses. Microsoft has two versions of SharePoint available at no cost, but it sells premium editions with additional functionality, and provides a cloud service edition as part of their Office 365 platform (previously BPOS). The product is also sold through a cloud model by many third-party vendors. (Wikipedia)

       Bettermeans (workflow) lets you use the same decision-making rules and self-organizing principles behind open source to run your project.

       Selfstarter is an open source starting point for building your own ad-hoc crowdfunding site. It was put together by Lockitron after they were turned down by Kickstarter.

       Stack Exchange Network (proprietary w/ open API) is a group of question and answer websites, each covering a specific topic, where questions, answers, and users are subject to a reputation award process. This process allegedly promotes knowledgeable users, best answers, and important questions.

       Reddit (proprietary – included here as an example of functionality) is a social news / bookmarking website where registered users submit content, in the form of either a link or a text “self” post. Other users then vote the submission “up” or “down”, which is used to rank the post and determine its position on the site's pages and front page.

       PSYC is, as benchmarks show, the fastest yet extensible text-based protocol we are aware of, providing a messaging infrastructure for human conversation and social exchange of possibly binary data. It has learned from protocols such as IRC and XMPP and chose an approach that should scale globally by generalizing the multicast concept beyond programmable chatrooms to presence awareness, event notification, news- and friendcasting. In commercial settings PSYC is also being used for telephony and audio/video. PSYC provides trust metrics for a distributed social graph and publish/subscribe data. In combination with pseudonymous routing technology it turns into secure share, a platform for maximum-privacy social networking and applications.

       MediaCommons Press, an in-development feature of MediaCommons, promoting the digital publication of texts in the field of media studies, ranging from article- to monograph-length.

       Y Worlds: a cooperative dedicated to the rethinking and reworking of everything complex. We share a pervasive sense that our current system of organizing, explaining and grappling with anything complex is broken and in need of some revolutionary thinking and doing. We ask you to become part of this enterprise. A new form of language. A new way to work with computers centered around real-time generative visualization. A new inclusive web-based world that people can build together for tangible benefit, and live within. A new set of principles to anchor what we do and why. New criteria for telling the truth that counter the deluge of misinformation.

       Nathan’s P2P routing proposal: “We all have a /hosts file on our OS; we could just point ‘hello.nn’ to an IP address – in one step the DNS system is circumvented and taken out of the equation. Step 2: we make a quick app that updates the hosts file from a data source. Step 3: we webize DNS records in a standard format (linked data) and have the app from step 2 read those records and update our hosts file. It’s small and doesn’t scale out of the box, but we’d quickly have an alternative to DNS, a shared understanding, and free ‘domains’. From there you just scale up: make a net-mounted DHT for these records and so forth – others more skilled in that area can do that. The main takeaway is that it’s possible, really easy to start doing, and that HTTP and other protocols will all work out of the box thanks to the abstractions built into URIs as names.

… it’s nice to consider a P2P web where DNS is replaced by open DHTs, and where each node on the network is both client and server (peer), serving and responding to HTTP requests. If you think about it, HTTP is a nigh-on-perfect async communication method for peers to communicate with each other, POSTing back replies when they are ready. Tip: the last section of the REST dissertation has a complementary note on this, where Roy mentions that adding a simple message identifier header would allow async communication, with messages being returned in whatever order was fastest rather than whatever order they were requested in.”
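
Steps 2 and 3 of the proposal are small enough to sketch outright: fetch webized name records from some agreed data source and rewrite a managed section of the hosts file. Everything here is an assumption for illustration – the feed URL, its JSON format, and the marker line:

```python
import json, urllib.request

SOURCE = "https://example.net/dns-records.json"  # hypothetical linked-data feed
HOSTS = "/etc/hosts"   # needs root; path differs on Windows
MARK = "# --- p2p names below ---"

def update_hosts():
    # Step 2/3: read name records (e.g. {"hello.nn": "203.0.113.7"})
    # and rewrite only the managed section of the hosts file.
    records = json.load(urllib.request.urlopen(SOURCE))
    with open(HOSTS) as f:
        kept = f.read().split(MARK)[0].rstrip()
    lines = [f"{ip}\t{name}" for name, ip in records.items()]
    with open(HOSTS, "w") as f:
        f.write(kept + "\n" + MARK + "\n" + "\n".join(lines) + "\n")

if __name__ == "__main__":
    update_hosts()
```

Run it periodically (the “quick app” of step 2) and every program on the machine, browsers included, resolves the shared names with no DNS involved.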

P2P Application Examples

Opera Unite (video) Opera Unite is not free/open software, but this is an example of functionality that belongs in PeerPoint. Another Opera Unite demo. Opera Unite was dropped, but Opera 12 also includes features for p2p peeps to die for.

Syndie

This information about Syndie is included as an example of an application designed for a peer-to-peer world. PeerPoint applications, in addition to being integrated with one-another, would ideally be designed for use in many network environments by people with many different security and anonymity requirements.

Syndie is an open source system for operating distributed forums (Why would you use Syndie?), offering a secure and consistent interface to various anonymous and non-anonymous content networks.

On the whole, Syndie works at the *content layer* – individual posts are contained in encrypted zip files, and participating in the forum means simply sharing these files. There are no dependencies upon how the files are transferred (over I2P, Tor, Freenet, gnutella, bittorrent, RSS, usenet, email), but simple aggregation and distribution tools will be bundled with the standard Syndie release.

Syndie Technical features

While its structure leads to a large number of different configurations, most needs will be met by selecting one of the options from each of the following three criteria:

       Forum types:

       Single author (typical blog)

       Multiple authors (multiauthor blog)**

       Open (newsgroups, though restrictions may be included so that only authorized** users can post new threads, while anyone can comment on those new threads)

       Visibility:

       Anyone can read anything

       Only authorized* people can read posts, but some metadata is exposed

       Only authorized* people can read posts, or even know who is posting

       Only authorized* people can read posts, and no one knows who is posting

       Comments/replies:

       Anyone can comment or send private replies to the author/forum owner

       Only authorized** people can comment, and anyone can send private replies

       No one can comment, but anyone can send private replies

       No one can comment, and no one can send private replies

* reading is authorized by giving people the symmetric key or passphrase to decrypt the post. Alternately, the post may include a publicly visible prompt, where the correct answer serves to generate the correct decryption key.

** posting, updating, and/or commenting is authorized by providing those users with asymmetric private keys to sign the posts with, where the corresponding public key is included in the forum's metadata as authorized to post, manage, or comment on the forum. Alternately, the signing public keys of individual authorized users may be listed in the metadata.
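
Both authorization mechanisms map directly onto standard primitives: a shared symmetric key for read access, and per-author signing keys checked against the forum metadata for post access. A toy sketch using the Python cryptography library (not Syndie's actual key formats or post structure):

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# * Read authorization: whoever holds the symmetric key can decrypt posts.
read_key = Fernet.generate_key()  # distributed to authorized readers
post = Fernet(read_key).encrypt(b"post body, pages, attachments")

# ** Post authorization: authors sign with private keys whose public
#    halves are listed in the forum metadata.
author = Ed25519PrivateKey.generate()
signature = author.sign(post)
forum_metadata = {"authorized_posters": [author.public_key()]}

# Any peer can check a post against the metadata without any secrets.
for pub in forum_metadata["authorized_posters"]:
    pub.verify(signature, post)  # raises InvalidSignature if forged
print(Fernet(read_key).decrypt(post))
```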

Individual posts may contain many different elements:

       Any number of pages, with out-of-band data for each page specifying the content type, language, etc. Any formatting may be used, as it's up to the client application to render the content safely – plain text must be supported, and clients that can should support HTML.

       Any number of attachments (again, with out-of-band data describing the attachment)

       A small avatar for the post (but if not specified, the author's default avatar is used)

       A set of references to other posts, forums, archives, URLs, etc (which may include the keys necessary to post, manage, or read the referenced forums)


Other specifications similar/related to PeerPoint

FreedomBox

Roadmap

1       Requirements Specification

2       Design Specification

3       Implementation Phase

4       Polishing, Testing, Verification, Validation Phase

User Requirements

1       WishList

2       Use Cases

a       Sharing pictures with friends

b       Social networker

c       Political Activist

d       Non-computer savvy person

e       Making data backup

f         Developer

g       User's web site becomes visible after plugging device into network behind NAT router

Software Requirements

1       Software requirements

a       Physical layer requirements

b       System features

c       Interface requirements

i         User interfaces

ii        Hardware interfaces

iii      Software interfaces

iv      Communications interfaces

d       Other Non functional requirements

i         Performance Requirements

ii        Safety Requirements

iii      Security Requirements

iv      Software Quality Attributes

v       Communications protocols

vi      Error handling

e       Other requirements

i         Database requirements

ii        Internationalization requirements

iii      Legal requirements

iv      Reuse objectives for the project

Freedombox Mailing List Poll

Date: Thu, 31 May 2012 07:39:43 -0500
From: Nick Daly
To: freedombox-discuss@lists.alioth.debian.org
Subject: [Freedombox-discuss] What You Want from a FreedomBox!

Hi folks, the votes are in (people have stopped replying to the original
thread), so here's how the votes have broken down. 

In tallying these votes, I do not claim to have perfectly interpreted
everyone's words, nor do I claim to have made no mistakes.  The emails
to this list themselves are the raw data, so my inaccuracies should be
self-evident.  I counted each vote in each email once (I did *not* count
one vote per email) and attempted to include all sub-threads and
side-threads of the “What do you want in a FreedomBox?” email chain.

The data are sorted by number of votes for each category, then by number
of votes per service, and finally alphabetically, when services share
votes.

Social Sharing/Connections/Network tool (20):

– Email: XXXXX
– Jabber: XXXXX
– Social Media Network: XXX
– Etherpad: XX
– VOIP/Video Chat: XX
– Plans: X
– Real-time messaging: X
– Social bookmarks: X

Privacy Device (20):

– Censorship Circumvention: XXXXXX
– Privacy Device: XXXXX
– Ad-free Internet: XXXX
– Anonymous Internet: XXX
– Anonymous Chat: XX

Self-publishing tool (13):

– Photo Sharing: XXXXX
– Wiki: XXXX
– Blog: XXX
– Website: X

Backup tool (12):

– Dropbox: XXXXXX
– Backup tool: XXXXX
– Crypto-key recovery: X

Personal Information Manager (6):

– Personal Information Manager: XXXXX
– Identity Provider: X

Connectivity Device (5):

– IPv6/IPsec Router: XXXX
– Mesh Network: X

Media Device (4):

– Media Device: XXX
– Podcast Downloader: X

The Other Category (uncounted):

– Ripple (??): X

– Data Gathering System: X

– PeerPoint (??): X

– WebBox: X

– Shell Account (??): X

– E-Currency Wallet: X

Just thought you'd like to know and comment on how this all turned out.
This will also help inform the direction of the next hackfest.  Please
pick up a project and coordinate with other interested members of the
list to start integrating the service into the FreedomBox.

As a side-note, I was *really* surprised by the results.  I didn't
expect to see the privacy category get as many votes as the social
category, nor did I expect email to be quite so popular.

Nick

The Sovereign Computing Manifesto The purpose of sovereign computing is to bring to the Internet the kinds of freedoms we have in real life, but have lost online.


       First Freedom: Own Name                             

       Second Freedom: Nicknames

       Third Freedom: Trust

       Fourth Freedom: Privacy

       Fifth Freedom: Expression

       Sixth Freedom: Hardware

       Seventh Freedom: Software

The Free Network Movement (Free Network Foundation) Free network definition: Our intention is to build communications systems that are owned by the people that use them, that allow participants to own their own data, and that use end-to-end encryption and cryptographic trust mechanisms to assure privacy. We call such systems ‘free networks' and they are characterized by the following five freedoms:

       Freedom 0) The freedom to participate in the network.

       Freedom 1) The freedom to determine where one's bits are stored.

       Freedom 2) The freedom to determine the parties with whom one's bits are shared.

       Freedom 3) The freedom to transmit bits to one's peers without the prospect of interference, interception or censorship.

       Freedom 4) The freedom to maintain anonymity, or to present a unique, trusted identity.

Free Network Platform Components

       FreeNetwork Overview

       FreedomBox: personal server

       FreedomNode: end user “home” equipment

       FreedomTower: local infrastructure wireless hub (construction docs)

       FreedomLink: inter-site overlay networking

       FreedomTunnel: optional centralized services (deployment notes)

       FreedomNoc: network operating center blueprint

       FreedomLab: research and development

Ends and Means of the Free Network Movement

       1 Introduction

       2 Vision

       2.1 Material Peer-to-peer

       2.2 The Five Freedoms

       2.2.1 Access

       2.2.2 Transmission

       2.2.3 Storage

       2.2.4 Authentication

       2.2.5 Consignment

       2.3 Overview

       2.4 Components

       3 Context

       3.1 Stakeholders

       3.1.1 Tier 3 Networks

       3.1.2 Tier 2 Networks

       3.1.3 Tier 1 Networks

       3.2 Initiatives

       3.2.1 Federated Social Web

       3.2.2 Nodal Computers

       3.2.3 Distributed Social Networks

       3.2.4 Distributed Global Names

       3.2.5 Wireless Mesh

       4 Strategic Roadmap

       4.1 Sovereign Computing

       4.2 The Neighborhood Network

       4.3 Autonomous Systems

       4.4 Backbones of our Own

       4.5 A Human Right

       5 Conclusion

We The Data is an extensive definition of the problem space. We The Data asked a crowd of experts arguably this century's most important questions: How can we make our data work for us and not against us? Where must we focus our know-how and creativity so the power in our data is democratized and made vibrant with meaning and value for every individual creating it? What emerged is a nexus of Core Challenges that, if solved together, we believe will catalyze the most positive change. Our goal is to spark synergy among people and organizations who are tackling a nexus of interdependent Core Challenges and collectively giving rise to the Gutenberg press of our era: flows of data that are at once more fluid and more trustworthy, new and more accessible tools for analysis and visualization, and vehicles of communication and collaboration that help communities come together to gain a voice, mobilize resources, coordinate action, and create the ventures of the future.

The Global Square specs partially overlap with PeerPoint

SecuShare currently only includes social networking, file sharing, and IM apps, but this link compares features of existing tools and should be useful for developing more detailed specs.

Video on design issues and existing projects in the social space: http://vimeo.com/39256857

Social Swarm: Criteria for software evaluation; List of candidate software

Safebook: a Privacy Preserving Online Social Network (pdf) This specification covers social networking only, but has a good discussion of p2p architecture. Additional documentation is available at http://www.safebook.us . Safebook beta code was apparently acquired by MatchUpBox, whose site appears to be under development. The MatchUpBox graphic below indicates content management functionality, but no further specs seem to be available yet.

Value Network Infrastructure

(Sensorica, Greener Acres) [Note: the user-facing, front-end functionality is similar to PeerPoint, but in the general features section below, a peer-to-peer back-end architecture is not specified. This is a major difference at the software engineering level.]

What is an infrastructure?

The Wikipedia definition

Our working definition: an infrastructure is a coherent set of tools used by an individual or a group to fulfill certain tasks in order to achieve certain goals.

Value networks need an infrastructure in order to function. This infrastructure is intended to facilitate value creation, exchange, transformation and consumption.

Example: online collaborative communities need tools for communication, coordination, project management, data storage and sharing, etc. All these tools can be made interoperable, and can be integrated together into an infrastructure, offering a unified work environment, user-friendly interfaces, etc.

General features of the infrastructure

1       Open source – access to the source code, so that everyone can trust the system and help to improve it.

2       User-friendly – easy to use tools, reducing learning barriers, intuitive environment, transparent

3       Organic – giving access to members to modify/improve it

4       Scalable – able to support millions of users

5       Fractal – allows easy exchange of data between different value networks and allows their coalescence into super-networks

6       Easy to maintain – being open source, development and maintenance is delegated to an open community, modular

7       Free – or very low cost, allowing everyone in the world to use it, reducing economic barriers

8       Portable – interface with all imaginable digital devices, with mobile and location-based applications 

Some other considerations

The value network is also a social “animal”, something that gives members feedback, asks members for involvement, etc. There must be some Artificial Intelligence in there, to analyse data about activity, social data, about value and how it flows, about what’s needed, what’s urgent, etc. Have active systems, automated agents send out alerts to the right members, according to their roles and their reputation. This thing must also be able to interact with those who come in contact with it for the first time. Value networks have a social skin! [tibi]

Important systems/modules

1       Individual Profile

2       Self-organization

1       The value accounting and exchange system

2       The reputation system

3       The role system

4       Decision making

5       Normative system

6       Feedback system

1       Metrics

2       Visualization

1       Mapping

1       geographical mapping

2       process mapping

1       Others

1       Incentive system

2       Education system

1       Value production

1       Inventory/materials management system

2       Project management

1       alert system – take info from project management and send out emails + general posts, alerts.

1       Shared database

2       Communications

1       In-network communications

2       External communications

1       Coordination

2       Collaboration

1       SENSORICA labonline and open labonline networks

2       Virtual working space

1       Value Distribution

1       Service system

2       External communication

Use Case Examples

– Use Case: The Indignados Movement, Lorea, N-1

– Use Case: ALEC envy. We need to copy, hack, and re-mix parts of the ALEC model into a new model that is a venue for creating public-interest open source legislation. The right-wing ALEC is run like a criminal conspiracy. An open Citizen’s Legislative Exchange Council (CLEC) can be run like a democratic cooperative. The old ALEC is sick in the original sense of the word, but a new public-interest CLEC could be sick in a street way, yo.


– Use Case: Next Net Infrastructure & Roadmap. A roadmap for transition to a distributed, decentralized infrastructure that would exist under a commons-based co-ownership model instead of corporate or government control. The Free Network Movement presented their manifesto for the big-picture, five-stage process of transition (lightly edited):

Stage 1: The Co-op

Stage one consists of the emergence of network access cooperatives. [A mesh network] allows us to share a single internet connection amongst many physically disparate locations. We and many others are able to purchase Internet access cooperatively, thus driving down the amount that each of us pays. This struggle for collective purchasing will happen in many towns and cities, in city blocks and subdivisions, in residential towers and intentional communities. The obvious economic advantage to the end user (reduced cost) makes this an easy sell to the people.

Stage 2: The Digital Village

The unseen benefit of the aforementioned co-ops is that they wrest the terminal nodes of the network away from the control of the telco/ISP hegemony. This provides for the opportunity of network applications that are truly peer-to-peer. At first, this will only happen within each isolated cooperative community. Imagine a town that makes shared use of a few pipes, whose flow of information is distributed across the last mile via mesh. Now imagine each node of that mesh network is a Diaspora pod running a codebase that is specifically designed for use in mesh networks. There is still a reliance on the big pipes for access to the wider internet, but to pass each other messages and participate in social networking, at the town level, a truly peer-to-peer architecture will be in place. Thus arises the digital village. What used to be just a co-op for purchasing access has suddenly become a community that is able to share information directly with one another. It takes only a little more imagination to see that Diaspora is one of many applications that could run on this architecture.

Stage 3: Towards Unity

Using packet tunnelling (e.g. Freenet or Tor) in concert with the existing global network, we can simulate the contiguity of geographically disparate digital villages. Suddenly, people all over the world are able to share with one another directly. Specify a user@a_node@a_network and you’ve got a unique address for each network user. Of course, the corporate giants still own the backbone at this stage, which is why we can only say *towards* unity.
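
A tiny sketch of that addressing scheme (the separator and field names are read straight off the text above; nothing here is a settled format):

```python
from typing import NamedTuple

class MeshAddress(NamedTuple):
    user: str
    node: str
    network: str

def parse(addr: str) -> MeshAddress:
    # user@a_node@a_network, as described above
    user, node, network = addr.split("@")
    return MeshAddress(user, node, network)

print(parse("alice@pod7@springfield.mesh"))
```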

Stage 4: A Backbone of our Own

Stage 4 is when the dream of true co-ownership becomes a reality. In this stage, the corporate-owned fiber backbone is replaced with a community-owned backbone. This could be accomplished via a constellation of telecommunications satellites or the construction of HF or Whitespace radios. Satellite dishes or TV-Band towers would replace the pipes that used to come from the ISP, and their connectivity could be distributed throughout every digital village. The only cost that anyone would ever have to pay for network access would be the cost of a mesh node (which could be integrated into a PC, or a shareable stand-alone unit). Not everyone will be able to afford a node, which is why the roadmap doesn’t end with Stage 4.

Stage 5: A Human Right

Once the Mesh Interface for Network Devices is global, energies can be focused towards providing a node to anyone who wants one. We believe that access to the network is a human right, and this is our vision for supplying it to all of humanity.

A Few Notes:

A common counter-argument to this proposal is that mesh technologies don’t scale beyond a few thousand nodes. Our rebuttal is that they won’t have to. The federation of digital villages means that no single mesh would have to grow larger than some optimal number. Furthermore, there is reason to believe that mesh routing protocols will improve rapidly in the near future. The wide release of B.A.T.M.A.N. will provide for a significant improvement in performance over O.L.S.R.

Resources:

       Community Broadband Networks: a project of the Institute for Local Self-Reliance

       Broadband Properties Municipal Fiber Portal

       The National Broadband Plan

– Use Case: Engaging For the Commons – Global Pull Platform by Helene Finidori. This is a use case that demands a sophisticated technology platform like PeerPoint.


In January 2011, the Secretary General of the UN, Ban Ki-moon, called for revolutionary thinking and action to ensure an economic model for survival. A year later, the Global Sustainability Panel he created to this effect published its recommendations report for Rio+20: Resilient People, Resilient Planet: A Future Worth Choosing. The vision of the GSP as expressed in the report revolves around choice, influence, participation and action, and calls for a political process “able to summon both the arguments and the political will necessary to act for a sustainable future.”…

Whether one agrees or not with the principles of political economics put forward by the UN, “activating” human agency and political will and addressing the root causes of power imbalance and resistance to change is at the heart of tomorrow's paradigm shift.

This has been my research subject during the past year, which led me to draft an action-oriented strategy and process methodology for generating engagement, accountability and outcomes in the political, economic, social and environmental spheres, which may contribute to enabling this activation. Inspired by Elinor Ostrom's “Governing the Commons: The Evolution of Institutions for Collective Action”, the objective is to turn around the tragedy of the commons by empowering individuals and communities, nurturing public wisdom and collective debate, helping push issues onto public agendas, and influencing policy and corporate behavior in a systemic and dynamic perspective.

A group of us is now working to pull together the best elements available or in the making on the web to create a global pull platform to engage for the commons and enable a form of evolutionary activism as part of an emergent collective response in the context of a citizen/actor network and a peer-to-peer commons of knowledge.

The principles of the platform.

The platform is structured around commons – issues of a social, environmental, or economic nature, such as those included in this framework for reliable prosperity – treated as social objects: the nodes around which social networks are created, conversations and repeated interactions are initiated, new territories explored, meaning and intents shared, learning achieved.

People subscribe to individual issues, then designate the actors who they think may have an influence – positive or negative – on the status of an issue. This ‘appointment of actors’ by ‘citizen-followers’ creates a pull dynamic. Bringing together the parties capable of impacting progress on an issue and those to whom they are accountable will yield conversations, knowledge flow, and feedback loops beneficial to learning, progress visualization, and evaluation. The goal is to create a context favorable to collaboration and the exchange of ideas and know-how. The pull dynamic is intended to stimulate political action and on-the-ground response, and ultimately advance the governance of the commons.

 

The process consists of letting people/organizations:

       Select, follow, learn as a ‘citizen' about the causes, issues, commons they care for and the actors involved

       Keep informed and track progress and status of these issues

       Self-assign an actor role and communicate/report on their own activity, its impact, and the status of the issue. Self-assignment is a declaration of engagement at various possible levels (governance institution, activist, champion, observer, know-how or knowledge resource)

       Share practical solutions, proposals for tasks and collaboration, volunteer for tasks and collaboration

       Find solutions and potential collaborators for action

       Select or refer designated actors to acknowledge or request their engagement and action at various levels (governance institution, activist, free rider, champion, observer, know-how or knowledge resource).

       As a selected or designated actor, participate in the conversation, report on activity and impact (or if not, become the object of the action…)

       As a citizen-follower, evaluate and rate actors' activity, impact and progress, and the trust placed in them.

       As a citizen-follower organize for collective action

       As an actor, garner follower participation

       Initiate and participate in conversations, debates, deliberations

The ecosystem is composed of:

       Common’s spaces: carefully curated knowledge base, space for learning, evaluating, debating, deliberating, and planning collective action, crowdsourcing solutions. This would include planetary as well as local commons or issues.

       Common’s graph: shows network of followers and stakeholders, possibilities for collaboration, critical mass, power structures and possible leverage points for grassroots action or civil participation.

       Progress & Impact or Situation Dashboard: shows activity, status, impact and progress. Informed by reporting from stakeholders and evaluations by followers, as well as real time indicators provided by independent observers.

Graphs, spaces and dashboards of various commons can be combined at various levels for bigger-picture views.

The platform creates a context for the following:

       Curate the knowledge flow and increase learning about issues, and physical as well as political solutions through visibility of activity and impact

       Connect and interrelate people, stakeholders, issues, and knowledge.

       Help situate an issue in its physical, metaphysical, political and social space and its network of interdependence and navigate within.

       Define boundaries of an issue/common through its graph of followers and actors, and help define the natural levels of governance or stewardship of a common or issue.

       Help situate self and others in the multidimensionality of an issue's space (geography, graph, stakes, interests, roles, positions, possibilities…) and navigate within.

       Identify roles and interdependence between actors and issues.

       Visualize the emergent bigger picture, and adopt systemic or transversal approaches.

       Communicate and discern expectations, communicate and evaluate outcomes, identify and act upon gaps

       Discern patterns of possibilities and leverage points, as well as who can generate best impact for specific challenges.

       Stimulate stepping up to task, collaboration between stakeholders and collective response.

 

The design map (included in the full PeerPoint document) gives an idea of the types of modules that would be integrated together. The platform requires the integration of the best existing networks, tools, process methodologies and user interfaces in terms of learning and action research, curation and issue framing, evaluation and moderation, trustnets, debate and deliberation, e-government/governance, collaboration, crowdsourcing, crowdfunding, collective action planning, data collection, and visualization; with a focus on wisdom and integrity stewardship…

Such an ecosystem would need to be open source and supported by legitimate institutions willing to forward civil participation.

From a Systemic and Dynamic Perspective

In systemic terms the dynamics at play are the following:

Power Dynamics: users -citizens > pull (designate) stakeholders -actors- > seek accountability/evaluate status > push activity > visualize progress > identify gaps / form expectations : a dynamic + feedback loop >> increase learning & informed action >> building engagement culture >> engagement to participate

Action Dynamics: stakeholders -actors > entrusted & challenged to act by users -citizens> acknowledge expectations & gaps > pull & pool resources & solutions to act > report action & progress: a dynamic + feedback loop >> increase access, community & capability >> building a mindful action culture >> empowerment & enablement to act

From a user perspective.

A Pull Network emerging from Connecting Citizens, Issues & Stakeholders

The Citizen

       As social entities, individuals or groups, the users, ‘citizens', designate the issues, topics, commons they care about and wish to follow -i.e. that they would like to learn about and where they would like to see some action engaged, some progress made, with various degrees of engagement on their part. By doing so they become a follower of this issue. Selected issues can be quite diverse, in domain or geography:  they can be very global, such as the pollution of oceans or poverty or obesity worldwide or very local such as the preservation of a river or biodiversity or traditional seeds in one particular area, or the economic insertion of a disenfranchised community in a particular suburb…

The Actor

       For each issue, the citizens also designate/refer and thereby ‘follow’ the actors that they believe can have an impact, whether positive or negative, on the progress of the issue or the governance of the common. By doing so, they bring the actors into their community of followers to create an Issue/citizen/actor network.

       This designation/referral works at the level of stimulating an actor to rise to meet a challenge. It is at the very heart of the pull dynamic. Designated actors such as governments, corporations, governance institutions, NGOs, activists, social entrepreneurs, free riders, champions, independent observers are challenged and entrusted by their ‘citizen-followers' to deliver outcomes and produce impact and subsequently become accountable for their actions and results.

       Citizens have the ability to self-designate as actors to indicate their presence/activity as an actor and to share resources, ideas and know-how. An expected effect of this dynamic is to unleash ‘agency’ and turn an increasing number of citizens into actors by providing them with access to possibilities and capacity in the areas that they have chosen and that they care for the most.

The Social Graph – Visualizing & Navigating the Network

       By this dual followship process, each issue/common has a network of followers and actors which can be visualized in the common's social graph. The scope and variety of the followers and actors show the reach (from the local to the planetary) and depth (the various possible ramifications and interdependences) of an issue. It outlines its boundaries: the natural levels of governance or stewardship of a common or issue and the possible perimeters for pooling resources. The graph shows the critical mass of followers & actors, its density and diversity. The entities who appear in the core represent key constituencies; others interact or watch, and can further be pulled in. The ‘proximity’ and interdependence between the players, the potential for synchronicity and synergies, and the insights on power structures and possible leverage points create a context for action to emerge and for negative reinforcement loops to be inverted.

       The aggregation of issues produces a Global Graph that makes it possible to visualize further interrelations and interactions, to navigate between the issues, the various players, and the various levels of intervention from the smallest local level to the planetary, to see how some players are involved in several issues and can be ‘activated' as such, and ultimately to undertake action of a more systemic nature.

The Dashboard – Reporting, data collection and visualization

       Designated actors become entrusted with, and accountable for, their actions toward their ‘followship' of citizens. They are encouraged to work on outcomes and to report on actions engaged and the general progress of an issue.

       Informed citizens evaluate the impact of the actors they selected and their level of confidence in outcomes.

       This ‘internal’ reporting and evaluation informs a common's Progress & Impact or Situation Dashboard, and participates in the documentation of the issue.

       External indicators from independent sources also feed the dashboard.

       Visualizing data makes it possible to:

       show ‘evolution in the making’: how small ‘local' actions add up to create large impacts, and how big goals can be carried out from very small distributed initiatives.

       acknowledge the status and evolution of issues, as much as possible in real time, contributing to learning and informed action.

       help actors engage in more effective political participation and on-the-ground problem-solving.

       highlight the gaps between expectations and outcomes and detect deceptive action

       push things further onto political agendas

The Learning & Action Space

       The Common's Learning & Action Space is the environment where the density, diversity and synchronicity of the network can be valuably exploited, where the data, actors, resources that will have been pulled together to generate optimized outcomes can be put to work. These spaces will need to be widely and wisely moderated and curated in order to avoid oversimplification or hijacking to the benefit of special interests…

       Citizens and actors learn from each other and from the knowledge base, discuss the issue and undertake individual, collaborative and collective action. This is a space where exchange, dialogue, deliberation and facilitation take place at the practical, social and political level; where users-citizens are able to design their discovery/learning/action journey; where actors can share know-how on solutions on the ground; where they are able to find parties to exchange, discuss and negotiate with; where resources can be shared; where solutions can be spread and diffused, co-created or crowd-sourced; where civil participation in policy making and governance can be garnered… informed and bootstrapped by all that is described above.

– Use Case: Creating Sustainable Societies: The Rebirth of Democracy and Local Economies by John Boik, Ph.D. outlines a “Framework of a Principled Society” (p2pfoundation.net). This is the kind of use case that would be well-served by the PeerPoint platform:

“A Principled Society is envisioned as a local entity, but its core elements would be designed to overcome several major weaknesses seen at the national level. In this way, Principled Societies would be extensible to wider implementation in the future. The proposed framework consists of three core elements:

1. A new type of local currency system, called a Token Exchange System. Tokens are an electronic form of currency that circulates within a Society, in conjunction with the dollar. They are used by businesses and individuals to purchase goods and services, as well as fund local development and community services.

2. A new type of socially responsible corporation, called a Principled Business. A Principled Business is a cross between a nonprofit and a for-profit corporation. Like a nonprofit, it fulfills a social mission. Like a for-profit, it is self-sustaining and does not rely on donations. Principled Businesses compete with one another for interest-free loans offered by a Society. They coexist alongside standard businesses.

3. A new type of governance system based on collaborative direct democracy, called a Collaborative Governance System. Members collaborate in the creative problem-solving process of developing new rules. In a Principled Society, members are the legislature. For efficiency, councils would execute day-to-day operations and make minor decisions. Major issues would be decided by the entire membership in a user-friendly, efficient, online process.

The Internet application that would act as the infrastructure for a Principled Society is both practical and technologically achievable. It could be developed as a no-frills initial version perhaps with three to ten years of effort, given adequate funding and community interest. Each year thereafter, further enhancements could follow. From the beginning, the effort will be organic, and hopefully involve many thousands as momentum grows. Each interested person can contribute in small or large ways to move the project forward.”

– Use Case: ThinkFree Cloud Office

This is not open or p2p, but it is included as a use case for functionality desired in PeerPoint.

ThinkFree Office is an office program that enables you to create documents, spreadsheets and presentations. Using ThinkFree Online (web office), you can enjoy the office program through a web browser without installing a separate office program on your PC. ThinkFree Mobile allows you to view and edit office documents using your smartphone. ThinkFree Server connects to your company's business system to provide a cloud environment in which you and your coworkers can work on the same document together. In short, ThinkFree products provide a perfect ‘cloud office' environment, the keyword of today's IT.

Wireless

How To Set up Small Campus / Small Enterprise Network – VillageTelco

A village telco consists of a mesh network made up of Wi-Fi mini-routers combined with an analogue telephone adaptor (aka the ‘Mesh Potato')

OpenBTS at Burning Man: Best Full Story « Public Intelligence Blog

Today I bring you a story that has it all: a solar-powered, low-cost, open source cellular network that’s revolutionizing coverage in underprivileged and off-grid spots. It uses VoIP yet works with existing cell phones. It has pedigreed founders. Best of all, it is part of the sex, drugs and art collectively known as Burning Man.

Open-Mesh creates ultra-low-cost, zero-config, plug & play wireless mesh network solutions that spread an Internet connection throughout a hotel, apartment, office, neighborhood, village, coffee shop, shopping mall, campground, marina and just about anywhere else you can imagine.

The Open Source Wireless Coalition (OSWC)

is a global partnership of open source wireless integrators, researchers, implementors and companies dedicated to the development of open source, interoperable, low-cost wireless technologies. OSWC member organizations have pioneered open source wireless research and development and are global leaders in the field. Charter organizations include: Acorn Active Media Foundation, Austin Wireless, BGWireless, CUWiN Foundation, FreeNetworks.org, Freifunk, FunkFeuer, HRFreeNet, Ile Sans Fil, Less Networks, Metrix Communication, NYCwireless, Seattle Wireless, and Wireless Lancaster.

Coalition members have extensive experience creating wireless solutions in municipalities worldwide — from rural villages in Ghana to major metropolitan areas in Europe and the United States.

Essentials of wireless mesh networking (2009 book)

Aircrack-ng is a network software suite consisting of a detector, packet sniffer, WEP and WPA/WPA2-PSK cracker, and analysis tool for 802.11 wireless LANs. It works with any wireless network interface controller whose driver supports raw monitoring mode (for a list, visit the website of the project) and can sniff 802.11a, 802.11b and 802.11g traffic. The program runs under Linux and Windows. (wikipedia)

B.A.T.M.A.N. is a routing protocol which is … intended to replace OLSR. B.A.T.M.A.N.'s crucial point is the decentralization of the knowledge about the best route through the network – no single node has all the data. Using this technique, the need for spreading information concerning network changes to every node in the network becomes superfluous. The individual node only saves information about the “direction” it received data from and sends its data accordingly. Hereby the data gets passed on from node to node and packets get individual, dynamically created routes. A network of collective intelligence is created.
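
A minimal sketch of that forwarding idea, in Python and with an invented count-based metric (the real protocol keeps a sliding window of originator-message sequence numbers): each node records only which neighbor delivers a given originator's messages most reliably, and forwards toward that neighbor.

    from collections import defaultdict

    class Node:
        """Keeps one best 'direction' per originator - no full route table."""
        def __init__(self, name):
            self.name = name
            # originator -> {neighbor: number of originator messages heard}
            self.heard = defaultdict(lambda: defaultdict(int))

        def receive_ogm(self, originator, via_neighbor):
            # An originator message (OGM) arrived through this neighbor.
            self.heard[originator][via_neighbor] += 1

        def next_hop(self, destination):
            # Forward toward whichever neighbor delivered the most OGMs.
            counts = self.heard[destination]
            return max(counts, key=counts.get) if counts else None

    n = Node("A")
    for _ in range(8):
        n.receive_ogm("D", via_neighbor="B")   # reliable direction
    for _ in range(3):
        n.receive_ogm("D", via_neighbor="C")   # lossy direction
    print(n.next_hop("D"))                     # -> B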

Using an Android as a webserver

The Guardian Project aims to create easy to use apps, open-source firmware MODs, and customized, commercial mobile phones that can be used and deployed around the world, by any person looking to protect their communications and personal data from unjust intrusion and monitoring. While smartphones have been heralded as the coming of the next generation of communication and collaboration, they are a step backwards when it comes to personal security, anonymity and privacy.

Austrian Programmers Build Free Bridge to Internet

(youtube) A group of computer programmers and hackers in Austria is creating a low-cost way of spreading Internet access across communities. “FunkFeuer”, which means “radio fire” in German, uses everyday technology to create a wireless network, called a “mesh,” that can transmit data from person to person, without involving companies or governments. (up to 200 km point-to-point)

2.4 GHz High-Power WLAN Outdoor CPE Gigabit Router

Security/Privacy

From: freebirds

Subject: [Freedombox-discuss] PSN, ARM's Trust Zone and TPM

On June 27, 2012, Ben the Pyrate asked:

I'm a little confused about all this concern I've been seeing about UUIDs. Could someone explain this to me? How exactly does it hurt your privacy/anonymity if your CPU has a UUID? Or, asked another way, what is the attack vector? What would a hacker or government or other adversary need to do in order to track someone by their UUID? Please help me to understand this threat.

Best regards,

Ben the Pyrate

My answer:

In 1999, Intel announced that its Pentium III processors have a processor serial number (PSN), whereas Intel had concealed that its earlier processor, the Pentium II, had a PSN. See: http://findarticles.com/p/articles/mi_m0BNO/is_2000_June/ai_62263364/ and http://bigbrotherinside.org/ and http://www.theregister.co.uk/1999/03/16/finding_your_pentium_ii_psn/.

Intel installed the PSN for digital rights management. I will discuss digital rights management under my paragraph on the Trusted Platform Module (TPM).

“It (PSN) allows software manufacturers and websites to identify individuals more precisely.” From: http://www.geek.com/glossary/P/psn-processor-serial-number/

“But what I thought was the most interesting was that the processor serial number still gets reported to the Windows operating system.” From: http://discussions.virtualdr.com/archive/index.php/t-100736.html

“Pentium III's serial number could be read by external programs.” http://www.hardwarecentral.com/archive/index.php/t-52051.html

Privacy groups protested against the PSN's invasion of privacy. The EU and China intended to ban the Pentium III. See http://en.wikipedia.org/wiki/Pentium_III

Therefore, Intel developed software that would disable the PSN for users whose BIOS did not give an option to disable it. Disabling means that the PSN would not be visible online. However, neither the BIOS option nor Intel's software worked: the PSN leaked and was visible online. See http://articles.cnn.com/keyword/pentium-iii and http://bigbrotherinside.org/.

The PSN also leaked because malware hacked Intel's disabling. Intel asked Symantec for a patch. The patch did not work.

Intel misrepresented that it would discontinue inserting the PSN and in its place use the TPM (Trusted Platform Module), whereas Intel continued to insert a PSN in its next processor, the Pentium 4. See http://www.hardwarecentral.com/archive/index.php/t-49252.html

TPM's invasion of privacy is discussed at http://www.gnu.org/philosophy/can-you-trust.html, and see the section “How can TC be abused?” at http://www.cl.cam.ac.uk/~rja14/tcpa-faq.html

TPM is a microchip on the motherboard; the TPM is not in the processor. TPM has a universally unique identifier (UUID). In addition to its own visible UUID, the TPM creates a composite UUID containing the serial numbers of other hardware, such as the internal hard drive. Websites, governments, IT administrators and hackers can see these UUIDs.

For example, if a consumer purchases an e-book or software and then changes his or her internal hard drive or copies it onto another computer, the e-book will not play.

Governments, hackers and information brokers can track the activity and geolocation of computers by their UUIDs. Websites that read the UUIDs can sell this tracking information, along with other tracking information, to information brokers, who resell it to investigators, who resell it to abusers.

There is more than one version of TPM. “Meanwhile, there are spin-offs and enhancements whose security characteristics were embedded even more strictly. Examples are Intel's LaGrande Technology, ARM's TrustZone, and starting in 2006, AMD's Presidio is expected to hit the market.”

Besides being tracked by use of a credit card, consumers can be tracked by the UUID when they do online banking.

ARM's TrustZone is marketed for:

       Secured PIN entry for enhanced user authentication in mobile payments & banking

       Anti-malware that is protected from software attack

       Digital Rights Management

       Software license management

       Loyalty-based applications

       Access control of cloud-based documents

       e-Ticketing Mobile TV

http://mobile.arm.com/products/processors/technologies/trustzone.php?tab=Why+TrustZone?

Marvell uses ARM processors. ARM processors supporting TrustZone include: ARM Cortex-A15, ARM Cortex-A9, ARM Cortex-A8, ARM Cortex-A7, ARM Cortex-A5 and ARM1176. I could not tell by reviewing Marvell's website which ARM core the Kirkwood 88F6281 or the Sheeva processor in the DreamPlug has. Could you please ask Marvell?

Hackers had it easy when one OS dominated the world. One article discussed that hackers are performing fewer software attacks and are instead attacking processors. Hacking the processor at the kernel level gives complete remote control of the computer. A PSN makes the processor visible online. A PSN makes the processor vulnerable to hacks.

Firmware rootkits that infect the BIOS are not always erased by flashing the BIOS. See articles on the Mebromi firmware rootkit.

A mesh network, OpenVPN and proxies such as Tor do not fully grant privacy: the PSN and/or TPM's UUID are visible offline. I cannot cite references on this. I have been hacked offline, first via my wifi card and, after I removed my wifi card and bluetooth card, via my PSN. Yes, computers can be hacked via their wifi cards even though the computers are offline. See http://www.usatoday.com/tech/news/computersecurity/hacking/2006-08-02-wireless-hackable_x.htm

There are plenty of articles on hacking bluetooth due to bluetooth's MAC address being visible.

The old methods of tracking computers were the IP address and the MAC address of the wifi card. If these were completely sufficient, there would be no reason for the PSN and TPM. The fact that they exist means that they enable tracking of computers via hardware.

Don't give a false sense of security by promising privacy unless you are also offering hardware privacy. Except for the MAC address on wifi cards, we had hardware privacy prior to the Pentium II's PSN. FreedomBox can ask Marvell and/or other manufacturers to “downgrade” to the early 1990s and give us back our hardware privacy.

Unorganized addenda

88+ Projects & Standards for Data Ownership, Identity, & A Federated Social Web « emergent by design (venessa)

Next Net Infrastructure & Roadmap for Municipal Broadband Networks « emergent by design (venessa)

GNU social – Project Comparison – Open wiki – Gitorious

New Social Web Project: wiki home, Links, Information Center, Distributed Social Network Projects

Choke Point Project – towards a distributed internet infrastructure:

– create an interactive data visualization to identify choke points, showing vulnerabilities

– document the related open projects and point to articles with analysis and strategy

– release datasets and tools used to track down Internet choke points.

Netention – Netention Semantic Editor Feature Requirements

#wethedata WE ARE DATA. The Arab Spring and Zipcar are part of the same data revolution. How? Right now, data may be what we intentionally share, or what is gathered about us – the product of surveillance and tracking. We are the customer, but our data are the product. How do we balance our anxiety around data with its incredible potential? How do we regain more control over what happens to our data and what is targeted at us as a result? We The Data have the power to topple dictators, or empower them. We The Data can broaden economic opportunity to new, as yet unimagined kinds of entrepreneurs, or further consolidate economic power in the hands of a few large corporations. We The Data can create new forms of social cooperation and exchange, or give us more of the same corporate obsession with better targeted advertising.  It’s up to us: #wethedata

Automenta – Software that works with us, instead of for us. A future that promises accelerated automation, personal and group empowerment, open knowledge, and the evolving ergonomics of human-computer interaction. We openly disclose the designs of our innovations in order to encourage community participation with world-class corporations, engineers, and scholars in peer-reviewable development processes.

Projects

       Spacegraph

       Netention

       Global Survival System

       JCog

       Atomize

       CortexIt

       CritterGod

       Site Strobe

       Intelligent Command Shell

       Team Biofeedback Sensor Network

Towards an Interlinked Semantic Wiki Farm. Abstract (pdf): This paper details the main concepts and the architecture of UfoWiki, a semantic wiki farm – i.e. a server of wikis – that uses form-based templates to produce ontology-based knowledge. Moreover, the system allows different wikis to share and interlink ontology instances with each other, so that knowledge can be produced by different and distinct communities in a distributed but collaborative way. Key words: semantic wikis, wiki farm, linked data, ontology population, named graphs, SIOC

OntoWiki is “a free, open-source semantic wiki application, meant to serve as an ontology editor and a knowledge acquisition system. It is a web-based application written in PHP and using either a MySQL database or a Virtuoso triple store. OntoWiki is form-based rather than syntax-based, and thus tries to hide as much of the complexity of knowledge representation formalisms from users as possible. OntoWiki is mainly being developed by the Agile Knowledge Engineering and Semantic Web (AKSW) research group at the University of Leipzig, a group also known for the DBpedia project among others, in collaboration with volunteers around the world. In 2009 the AKSW research group got a budget of 425,000€ from the Federal Ministry of Education and Research of Germany for the development of OntoWiki. In 2010 OntoWiki became part of the technology stack supporting the LOD2 (Linked Open Data) project. Leipzig University is one of the consortium members of the project, which is funded by a €6.5m EU grant.” (Wikipedia http://en.wikipedia.org/wiki/OntoWiki)

cjdns is “a networking protocol and reference implementation, founded on the ideology that networks should be easy to set up, protocols should scale up smoothly, and security should be ubiquitous. The belief that security should be ubiquitous and unintrusive like air is part of cjdns' core. The routing engine runs in user space and is compiled by default with stack-smashing protection, position-independent code, non-executable stack, and remapping of the global offset table as read-only (relro). The code also relies on an ad-hoc sandboxing feature based on setting the resource limit for open files to zero; on many systems this serves to block access to any new file descriptors, severely limiting the code's ability to interact with the system around it.” (Wikipedia)
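
A minimal sketch of that sandboxing trick, shown here in Python for brevity (cjdns itself does this in C via setrlimit): once the open-file limit is dropped to zero, descriptors that are already open keep working, but any attempt to obtain a new one fails.

    import resource

    # Drop both the soft and hard limits on open files to zero. Already-open
    # descriptors keep working; the kernel refuses to allocate any new ones.
    resource.setrlimit(resource.RLIMIT_NOFILE, (0, 0))

    try:
        open("/etc/hostname")        # any new descriptor is now denied
    except OSError as err:
        print("sandboxed:", err)     # stdout was opened earlier, so this works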

partial list of services:

       HypeIRC – IRC Network

       EzCrypt – An encrypted pastebin

       HypeDNS – Temporary DNS service

       mesh.neoretro.net – Public NTP server

       Hypediscuss – General Forums

       Uppit – Reddit Clone

       Urlcloud – File Hosting

6 mechanisms that will help create the global brain

Ross Dawson, July 10, 2012 at 12:46 am


One of the many reasons humanity is at an inflection point is that the age-old dream of the “global brain” is finally becoming a reality.

I explored the idea in my book Living Networks, and at more length in my piece Autopoiesis and how hyper-connectivity is literally bringing the networks to life.

Today, my work on crowdsourcing is largely focused on the emerging mechanisms that allow us to create better results from mass participation.

Some of the best work being done in the space is at the MIT Center for Collective Intelligence. A few of their researchers (including founder Thomas Malone) have just written a short paper, Programming the Global Brain.

Ryan Merkley: Online video — annotated, remixed and popped

Videos on the web should work like the web itself: Dynamic, full of links, maps and information that can be edited and updated live, says Mozilla Foundation COO Ryan Merkley. On the TED stage he demos Popcorn Maker, a new web-based tool for easy video remixing. (Watch a remixed TEDTalk using Popcorn Maker — and remix it yourself.)

Quantellia

What if you could see the future, and then change it? Quantellia allows you to understand how today’s decisions affect tomorrow. The company’s award-winning World Modeler™ software platform allows users to rapidly create an interactive decision simulation that illustrates how decisions flow through a cause-and-effect model to impact outcomes. With the ability to draw from a variety of enterprise and/or web-based data sources in real time, World Modeler™ is the next evolutionary step in decision support software, going beyond presentation of the current situation to an integrated prediction of the future impact of today’s decisions. Quantellia offers its platform to a network of professional decision modelers, and also offers line-of-business applications like its new Decision Engineering for Enterprise Program Management (DEEPM™) solution. www.quantellia.com (NOTE: interesting technology, but not open source ~PR)

Guifi Net

an attempt to create an alternative autonomous internet infrastructure, mostly based in the Catalan region of Spain

Roger Baig Viñas:

How is Guifi related to the internet: is it complementary or alternative, and if the latter, why do we need it?

guifi.net can be seen as both things at the same time. On the one hand it can be considered a complement to the Internet, because the guifi.net network can be used to extend the “network of networks” coverage; on the other hand it is an alternative to it: guifi.net users no longer need to connect to the Internet, i.e. to use an ISP, for digital communications among themselves. Therefore, the all-too-common and artificial picture of two neighbors each connecting to their own ISP just to exchange a file need not occur between them again.

Ramon Roca adds:

Guifi “is absolutely complementary. Actually we see it as an extension of the internet up to the end user, enabling self-service access. With regard to commercial ISPs, guifi.net wants to become an alternative, although because of how things are currently regulated, there might be a need to set up gateways to the internet. We need it if we want the internet to reach end users without their having to go through a commercial ISP.”

To date, November 2008, guifi.net has about 5500 working nodes, most of them linked to each other. Geographically the main activity is centered in Catalonia, essentially because the project was born there, but everyone is encouraged to expand the network's coverage by contributing a link.

MOAT: Meaning Of A Tag

MOAT (Meaning Of A Tag) provides a Semantic Web framework for publishing semantically-enriched content from free-tagged content.

While tags are widely used in Web 2.0 services, their lack of machine-understandable meaning can be a problem for information retrieval. In particular, people can use tags that have different meanings depending on the context (e.g. “apple”), but can also use different tags to express the same thing (e.g. “semweb”, “semantic_web”). Moreover, as tags are not related to each other, finding content might be an issue, especially when browsing the long tail.

MOAT aims to solve this by providing a way for users to define meaning(s) of their tag(s) using URIs of Semantic Web resources (such as URIs from DBpedia, Geonames … or any knowledge base). Thanks to those relationships between tags and URIs of existing concepts, they can annotate content with those URIs rather than free-text tags, leveraging content into the Semantic Web by linking data together. This means modeling facts such as “In this blog post, I use the tag “apple” and I refer to <http://dbpedia.com/resource/Apple_Records>, not the fruit nor the computer brand”. Moreover, these tag meanings can be shared between people, providing an architecture of participation to define and exchange meanings of tags (as URIs) within a community of users.

To achieve this goal, MOAT relies on an architecture that can be deployed for any organisation or community and that involves a lightweight ontology, a MOAT server, and some third-party clients. The ontology can also be used stand-alone, as a model to define meaning for your tags in blog posts, tagged pictures … In case you're looking for a practical implementation of MOAT and do not want to browse technical details, have a look at LODr.
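
A minimal sketch of the kind of statement MOAT manages, using placeholder post and property URIs (the property name below is invented for illustration, not MOAT's actual ontology term): the free-text tag “apple” on a particular post is bound to the DBpedia resource the author actually meant.

    # All URIs below are placeholders; MOAT defines its own ontology terms.
    post = "http://example.org/blog/2012/10/my-post"
    tag = "apple"
    meaning = "http://dbpedia.org/resource/Apple_Records"

    # One N-Triples-style statement binding the post's tag to its meaning.
    print('<%s> <http://example.org/ns#tagMeaning> <%s> .  # tag: %s'
          % (post, meaning, tag))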

Hybrid Approaches to Taxonomy & Folksonmy

www.slideshare.net

       The tired debate – Taxonomy vs. Folksonomy: control vs. democracy; top-down vs. bottom-up; arduous process vs. “just do it”; accurate vs. good enough; restrictive vs. flexible; static vs. evolving; expensive to maintain vs. low cost (“crowdsourced”)

       The relevance problem: search results should be relevant to what a searcher wants, but technology can only determine whether a result is relevant to a search term.* Taxonomies and folksonomies are two approaches to the problem of relevance, with the common goal of describing content, each with particular gaps. (*Billy Cripe: Folksonomy, Keywords & Tags: Social & Democratic User Interaction in Enterprise Content Management http://www.oracle.com/technology/products/content-management/pdf/OracleSocialTaggingWhitePaper.pdf)

PLURALITY

A 14-minute film

Directed by: Dennis Liu

Written by: Ryan Condal

Produced by: Jonathan Hsu, Dennis Liu

Rick Falkvinge writes:

“This short film …had me absorbed from the get-go. When it was over, it felt like 30 seconds had passed. That in itself is remarkable – but the short film also communicates a very chilling insight into where we’re going. The movie is about ever-increasing surveillance, and how it always ends up where we don’t want it – with quite a few surprises baked in.

In the movie, DNA scanners are everywhere and link your DNA, via centralized access control lists, to everything. Predictably, it started out as a convenience, until legislation stipulated that law enforcement can and shall have access to all of it. The plot twists towards the end are gripping.”

IEML (Information Economy MetaLanguage) is a symbolic system able to exploit the computational power, the capacity of memory and the ubiquity of the digital medium. It is:

(1) an artificial language that translates itself automatically into natural languages,

(2) a metadata language for the collaborative semantic tagging of digital data,

(3) a new addressing layer of the digital medium (conceptual addressing) solving the semantic interoperability problem,

(4) a programming language specialized in the design of semantic networks,

(5) a semantic coordinate system of the mind (the semantic sphere), allowing the computational modeling of human cognition and the self-observation of collective intelligences.


Namecoin is an alternative Domain Name System based on Bitcoin technology (the Namecoin network reaches consensus every few minutes as to which names/values have been reserved or updated). Each user has their own copy of the full database, which attempts to reduce censorship at the DNS level. The use of public-key cryptography also means that only the owner is allowed to modify a name in the distributed database. The first and most popular Namecoin project is Dot-bit.

Namecoin is a peer-to-peer generic name/value datastore system based on Bitcoin technology (a decentralized cryptocurrency). It allows you to:

       Securely register and transfer arbitrary names, no possible censorship!

       Attach values to the names (up to 1023 bytes)

       Trade and transact namecoins, the digital currency NMC.

There are plenty of possible use cases. Read more about Namecoin.

What is Dot-BIT

Dot-BIT, the first project using namecoin, is building a domain name system (DNS) using the .bit TLD. Our goal is to spread .bit domains by providing resources and tools to the community, from developers to end users.
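
A minimal sketch of what a .bit registration amounts to, with invented values: a Namecoin name in the “d/” domain namespace mapped to a small JSON value that .bit resolvers translate into DNS records. The field names shown are illustrative rather than the authoritative Dot-BIT schema.

    import json

    # Hypothetical .bit registration: the Namecoin name is the key, the JSON
    # document is the value stored in the shared blockchain.
    name = "d/example"                            # resolves as example.bit
    value = {
        "ip": "192.0.2.10",                       # illustrative address
        "map": {"www": {"ip": "192.0.2.10"}},     # illustrative subdomain record
    }

    print(name, "->", json.dumps(value))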

Mozilla Persona is a completely decentralized and secure authentication system for the web based on the open BrowserID protocol. To ensure that Persona works everywhere and for everyone, Mozilla currently operates a small suite of optional, centralized services related to Persona.

Why should you and your site use Persona?

1       Persona completely eliminates site-specific passwords, freeing users and websites from the burden of creating, managing, and securely storing passwords.

2       Persona is easy to use. With just two clicks a Persona user can sign into a new site like Voost or The Times Crossword, bypassing the friction associated with account creation.

3       Persona is easy to implement. Developers can add Persona to a site in a single afternoon.

4       Best of all, there's no lock-in. Developers get a verified email address for all of their users, and users can use any email address with Persona.

5       Persona is built on the BrowserID protocol. Once popular browser vendors implement BrowserID, Persona will no longer need to rely on Mozilla's services to log users in.

http://www.w3.org/2012/10/31-identity-minutes.html

Talk about WebRTC Identity work, WebID, OpenID, and an experimental API from Mozilla to deliver encrypted text to the DOM without letting cleartext be under the control of the web app.

Signed RDF

(W3C Spec) Assertions may be signed to facilitate decisions that require trust. Simple signatures include checksums or other assertions about independently verifiable characteristics of a resource. The simplest example of a signature is a statement that the associated assertions apply only to the version of the resource labeled with a given creation date. Stronger signatures will include cryptographic measures to increase the likelihood of detection of falsification of or inadvertent changes to the signed assertions or the resource(s) to which they apply.

Fen Labalme > Next Net: On Fri, Nov 2, 2012 at 9:44 AM, Fabio Barone wrote:

that's why I think the proposal of RDF-enriched folksonomies has merit and may address these issues. It's not a platform. It's not a unified language/ontology. It's not trying to change/save the world. And yes, it wouldn't be perfect and solve all problems. It's leaving it to folks to make new meaning out of data – changing perceptions, making new links, and maybe changing the world…

Totally agree.  Just add a digital sig (or any unique hash) to each RDF triple <http://www.w3.org/TR/WD-rdf-syntax-971002/#signing> and you start to enable trust and reputation.
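
A minimal sketch of that suggestion, with invented URIs: canonicalize one RDF triple as an N-Triples line, then hash and sign it so a verifier can check who asserted it. HMAC stands in for a real public-key signature purely to keep the example self-contained.

    import hashlib, hmac

    # One assertion ("repute") serialized as a single N-Triples line.
    triple = ('<http://example.org/people/alice> '
              '<http://example.org/ns#recommends> '
              '<http://example.org/shops/bobs-repair> .')

    secret = b"signer-private-key"   # stand-in; a real system would use a keypair

    digest = hashlib.sha256(triple.encode()).hexdigest()    # unique hash
    signature = hmac.new(secret, triple.encode(), hashlib.sha256).hexdigest()

    print("hash:", digest)
    print("sig: ", signature)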

The OpenPrivacy initiative (OpenPrivacy.org) is an Open Source collection of software frameworks, protocols and services providing a cryptographically secure and distributed platform for creating, maintaining, and selectively sharing user profile information.

In effect, OpenPrivacy is the first open platform that enables user control over personal data while simultaneously – and at user discretion – providing marketers with access to higher quality profile segments. The resulting marketplace for anonymous demographic profiles will create opportunities for a new breed of personalized services that provide people and businesses with timely and relevant information. Throughout the system, information may be shared with guaranteed personal privacy, creating at last a level playing field for the user, marketer and infomediaries.

Several projects are in the works, listed with the most-developed initiative first:

Sierra, a reference implementation of the Reputation Management Framework (RMF)

OpenPrivacy's core project is designed to ease the process of creating community with reputation-enhanced pseudonymous entities. The RMF is primarily a set of four interfaces: Nym Manager, Communications Manager, Storage Manager and Reputation Calculation Engine (RCE). Sierra is a reference implementation that meets these interfaces.

Talon

A simple yet powerful component system for Java. Sierra is being developed using Talon, and we expect that Talon will soon be able to use Sierra's reputation manager to drive component selection.

Zero-knowledge proof of sibling nym relationships by parent

paper forthcoming

Reputation Capital Exchange

A secure mechanism for mapping between RCEs that use different trust metrics. This is accomplished by first attaching an OpenPrivacy-style Nym to the local namespace user name, and then by authenticating a match between these secure nyms.

Reptile

An open source/free software Syndicated Content Directory Server (SCDS) that provides a personalized news and information portal with privacy and reputation accumulation.

User Content License – Reversing the Privacy Policy Circle

Adding an HTTP header prior to the request being transferred from client to server that contains a user copyright notice for any data transferred from the client. (While not directly related to the concept of anonymous profile data, we think it's a cool hack!)

OpenPrivacy > Next Net

I agree with @tawnuac who would “enrich or publish data to RDF” as I think that centralized taxonomies will always fall short for some segment of the population.  But we all belong to multiple communities, each of which may have its own “language” for describing elements of the world they are most interested in.  As such, we choose to place more attention on car care from our automotive club and child care from our family and school moms we trust.  Therein lies the key: the trust we place in different people and communities is domain specific, and as mentioned above, the very definition of “domain” may vary from person to person.  In the above examples, “car care” communities will vary depending upon whether I drive a '70 VW microbus or a '12 Lexus RX Hybrid, and “child care” communities will vary if I have a toddler or a tween.  And of course, the levels of trust we assign even to similar communities is a very personal matter.

So my thinking is that at the (decentralized) core of a decentralized community will be a collection of personalized and community-centric trust metrics.  Switching point of view, I believe what we need to design are secure, open source “Reputation Calculation Engines” (RCE) that operate on collections of digitally signed RDF triples (or “reputes”).  Note that the digital signatures can come from anonymous or pseudonymous sources, but they are essential in calculating reputation to prevent spoofs, floods, etc.  An RCE will in general ignore – or provide less weight to – reputes that come from short-lived anonymous sources, and apply greater reputation strength to sources signed from reputable sources.

Note that any addressable object in this reputation-based economy – from signatures to car repair companies to RCEs – can have its own domain-specific reputes attached to it.  I expect there will be some very well-known RCEs, and Google-like search engines that can point you to those most trusted, but as we are all different, we can each have our private RCEs that assign X reputation to RCE1 and Y reputation to RCE2 within any domain, and further increase reputation for some signatures and decrease others.  IMO, only when each of us is in charge of who we trust – and we don't have trust dictated to us – can a decentralized, privacy-enhanced system work.
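
A minimal sketch of the weighting rule described above, with invented numbers: an RCE that discounts reputes from short-lived anonymous sources and weights each repute by its signer's own reputation.

    def rce_score(reputes):
        """Aggregate signed reputes into one score in [0, 1].

        Each repute is (value, signer_age_days, signer_reputation); the
        weights and thresholds are illustrative, not a published RCE spec.
        """
        total = weight_sum = 0.0
        for value, age_days, signer_rep in reputes:
            weight = signer_rep
            if age_days < 30:            # short-lived source: heavy discount
                weight *= 0.1
            total += value * weight
            weight_sum += weight
        return total / weight_sum if weight_sum else 0.0

    # Two established signers rate a car-repair shop highly; a day-old nym
    # tries to flood it with a zero rating but carries almost no weight.
    print(rce_score([(0.9, 400, 0.8), (0.8, 900, 0.9), (0.0, 1, 0.2)]))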

Open Wireless Movement (EFF) helps foster a world where the dozens of wireless networks that criss-cross any urban area are open for us and our devices to use.

What is the Open Wireless Movement?

Imagine a future with ubiquitous open Internet.

We envision a world where, in any urban environment:

       Dozens of open networks are available at your fingertips.

       Tablets, watches, and other new devices can automatically join these networks to do nifty things.

       The societal expectation is one of sharing, and, as a result, wireless Internet is more efficient.

       The false notion that an IP address could be used as a sole identifier is finally a thing of the past, creating a privacy-enhancing norm of shared networks.

We're working with a coalition of volunteer engineers to build technologies that will let users open their wireless networks without compromising their security or sacrificing bandwidth. And we're working with advocates to help change the way people and businesses think about Internet service.

Hypothes.is is an open-source software project that aims to collect comments about statements made in any web-accessible content, and filter and rank those comments to assess each statement's credibility. It's been summarized as “a peer review layer for the entire Internet.” The project is to write software and establish a system which will allow annotation of web pages, using comments contributed by individuals and a reputation system for rating the comments. The plan is that the comments will be stored in the Internet Archive. Normal use is planned to be with a browser plug-in, and the plan is that links to specific comments will also be viewable without needing a plug-in.

HyperThread – An experimental way to view threads on App.net: a graphical visualization of social discussion threads.

App.net is an ad-free, subscription-based social feed and API. App.net aims to be the backbone of the social web through infrastructure that developers can use to build applications and that members can use for meaningful interactions. App.net launched in August of 2012. It’s owned and operated by Mixed Media Labs, founded by CEO Dalton Caldwell and CTO Bryan Berg.

App.net’s core values

       We are selling our product, NOT our users

       We will never sell your personal data, content, feed, interests, clicks, or anything else to advertisers. We promise.

       You own your content

       App.net members always have full control of their data and the fundamental right to easily back-up, export, and delete ALL of their data, whenever they want.

Tahoe-LAFS-on-S3 is a reliable and scalable cloud storage back-end for use with the Tahoe-LAFS.org client software. Tahoe-LAFS is a Free Software, Open Source cloud storage system. It encrypts and cryptographically integrity-checks your files for provider-independent security. That means that the confidentiality and integrity of your files cannot be violated by anyone – not even employees of the storage service provider.

What is it good for?

Securely backing up your data off-site. The “tahoe backup” command inspects your local filesystem for files that have changed since the last time you ran it. It uploads each file that has changed and it creates a directory in Tahoe-LAFS to hold the current version of each of the files. You can browse or access old versions just by browsing the old snapshot directories.
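
A toy model of what the “tahoe backup” behavior described above amounts to (not Tahoe's actual implementation, which also encrypts and uploads to storage servers): walk the local tree, copy only files whose modification time changed since the previous run, and record each run as a timestamped snapshot directory. The paths are hypothetical.

    import os, shutil, time

    def backup(src, snapshots_root, last_index):
        """Copy changed files into a new timestamped snapshot directory."""
        snap = os.path.join(snapshots_root, time.strftime("%Y-%m-%dT%H%M%S"))
        new_index = {}
        for root, _, files in os.walk(src):
            for name in files:
                path = os.path.join(root, name)
                mtime = os.path.getmtime(path)
                new_index[path] = mtime
                if last_index.get(path) != mtime:        # new or changed file
                    dest = os.path.join(snap, os.path.relpath(path, src))
                    os.makedirs(os.path.dirname(dest), exist_ok=True)
                    shutil.copy2(path, dest)             # stand-in for an upload
        return new_index                                 # feed into the next run

    # The first run sees every file as changed and snapshots everything.
    index = backup("/home/me/Documents", "/home/me/snapshots", {})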

Where is the data stored?

Your data, encrypted, is stored on Amazon's Simple Storage Service (S3), which is a convenient, reliable, and widely understood platform for storage.

Doodle is a free Internet calendar tool for time management and coordinating meetings. It is based in Zurich, Switzerland and has been operational since 2007. Users are polled to determine the best time and date to meet. Meeting coordinators (administrators) receive e-mail alerts for votes and comments. Registration is required to have this function. Doodle interacts with various external calendaring systems, such as IBM Lotus Notes. Through the use of a widget for Lotus Notes, users are able to create and manage Doodle polls within a Lotus Notes client application. Google Calendar, Yahoo Calendar, Microsoft Outlook and Apple iCal can be utilized with Doodle to track dates. Google Maps may also be used to share the location of the event. Similar popular competing products include Dudle (Free Software maintained by the TU Dresden), ScheduleOnce, Tungle, TimeBridge and WhenIsGood. There is also a privacy-enhanced version of Dudle.

Transmutable Work – Work in public! Emerge from behind your intellectual firewall and share your work. The source code is public, so feel free to find a nerd and a server and boot your own site.

       Django is the web stack

       Markdown is used for all of the user entered text

       AWS hosts the servers

       light on structure and heavy on flexibility

       uses markdown instead of markup-mangling WYSIWYG

       doesn't sell user data to megacorps

       doesn't flood the tubes with bac               

Captricity is the easiest, fastest, and most cost-effective way to capture data trapped on paper—such as thousands of hand-completed survey forms—and convert it into digital data that can be searched, stored, shared, and studied.               

When Open Data and Civic Hackers Meet for the First Time… I wouldn’t quite say it is romantic.  But when teams of software developers, designers, and data scientists get their hands on data sets they previously had no access to, the results are spectacular. That was the scene this past weekend at the hack-a-thon sponsored by two of the Code for America Accelerator companies (Captricity, of […]

Occupy the Comms – On the occasion of Agora 99 we are launching ‘Occupy the Comms’, the ultimate toolkit for popular news reporting. Occupy the Comms has been developed over the past five months by a dozen people in New York, California, Brussels, France, Madrid and elsewhere. The beta version has been online for a few weeks. So what is Occupy the Comms?

For the last decade and a half, step by step, the Internet has offered people all the necessary tools to report the news themselves. First came weblogs, then photo and video sharing; then social networking greatly enhanced the quick exchange of information. The latest development has been the live stream: the opportunity to broadcast video directly from your mobile phone.

Occupy the Comms is the next step in this evolution. It brings everything together. It allows everyone to participate in a horizontal way. And there’s no catch. Money is not an issue.

In short OtC works on three different levels. The first level is real time news, the second is editing, the third is all-round broadcasting.

The site is structured around groups. You create a group for a certain event. Automated bots can scan the Internet for all content related to that event, like live streams. The users watching those streams can collaborate by creating a pad that indexes what happens at what time and what additional information like photos, tweets and blog posts is available.

On the second level, contributors from around the world can use the primary information to create videos or articles that capture the event from any perspective in word and image. The site features a chat which enables online editors to work together on a project, to divide the tasks, and to minimise the time necessary to finish it.

On the third level, streams and edited content can be broadcast and mixed on specific channels like GlobalRevolution.TV, or any other channel you want to create yourself. Aside from those, they can be distributed through regular outlets like YouTube and Vimeo.

These are the basics. There are even more interesting features which make OtC a formidable weapon of 21st century news reporting.

How to make a great Open API.

Although APIs tend to hide data, they can certainly be valuable bridges for both reading from and writing to existing services. REST seems to be a very popular model, with HTTP POST arguably the dominant write protocol on the web today.
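
A minimal sketch of that dominant write pattern, posting JSON to a hypothetical endpoint with Python's standard library (the URL and payload are invented for illustration):

    import json
    import urllib.request

    # Hypothetical endpoint; a real open API would document its own URL.
    url = "https://api.example.org/v1/notes"
    body = json.dumps({"title": "hello", "text": "written via plain HTTP POST"})

    req = urllib.request.Request(
        url,
        data=body.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

    with urllib.request.urlopen(req) as resp:     # send the write request
        print(resp.status, resp.read().decode())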

Financial and political microtransactions are a necessity.

Many think [politics] would be improved by banning all money. I believe this is the wrong approach and is, in fact, dangerous over the long term. (I'd also add that a ban is very unlikely to be accomplished). Transparency and limits yes… but a ban on citizen participation no.

A better approach addresses two essentials:

       drastically lower the threshold for participation in the lobbying process…

       while also drastically lowering the cost of campaigning at ALL levels. 

That's why I'm convinced the political microtransaction is a necessity.

This isn't out of some ‘kumbayah' belief in the perfect wisdom of the masses. Rather, it arises out of a conviction that a better result will be achieved by allowing a more balanced input from those with “biases and self-interests” in conflict with those currently dominating the lobbying landscape.

This might seem paradoxical to many and is arguable. But I'll make this assertion: broadening monetary participation while drastically lowering costs will over time actually reduce the influence of money in politics… perhaps even to the point of irrelevancy.

Though I hold the patent… my goal is the broadest possible participation and to prevent any narrow control of these critical capacities. Give me back my little 500 square foot home so recently taken… and I'd just as soon get back to painting. But like others who feel they cannot stay quiet while the Enlightenment dies… I feel compelled to do what I can… while I can.

The Tool: The Patent (here) was issued January 11, 2011. To get specific…

Claim 1:

1. A donation method, comprising: establishing a first escrow account for a first donor with a first threshold on a programmed electronic computer; removing funds from the first escrow account upon instructions from the first donor, the instructions having a transfer designation and the instructions being a contribution; comparing the funds to a second threshold donation level to determine if the funds are great enough for a donation to be made on a programmed electronic computer; aggregating the funds with the same transfer designation with the money from other donors to equal or surpass the threshold donation level; creating a sum of funds; transferring the sum of funds to the transfer designation, said transferring the sum of funds is depositing said sum of funds with a political candidate or cause; and reporting information about the first donor and the other donors upon transferring the sum of funds, said reporting information is done within the confines of jurisdictional requirements.

If you wade through that, it is simply like a cash card… but the user's information and instructions are separated from the funds, which go into Trust Account(s)… and ‘micro' designations can be made, pooled with the designations of others to the same recipient… and, on reaching a viable threshold (determined by a variety of cost-related factors), transferred to the recipient, with any reporting requirements reported and tracked.
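A toy sketch of that escrow-and-threshold flow; the threshold, fee rate, and all function names are illustrative assumptions, not the patented system's actual design:

from collections import defaultdict

THRESHOLD = 100.00   # viable payout level, set by cost-related factors
FEE_RATE = 0.02      # incurred transaction costs, passed through

escrow = defaultdict(float)   # donor -> escrowed funds
pools = defaultdict(list)     # recipient -> [(donor, amount), ...]

def deposit(donor, amount):
    escrow[donor] += amount

def designate(donor, recipient, amount):
    # Earmark a micro-donation: move it from escrow into the recipient's pool.
    assert escrow[donor] >= amount, "insufficient escrowed funds"
    escrow[donor] -= amount
    pools[recipient].append((donor, amount))

def try_payout(recipient):
    # Transfer only once the pooled designations reach the threshold.
    total = sum(amount for _, amount in pools[recipient])
    if total < THRESHOLD:
        return None
    report = list(pools[recipient])     # donor reporting, per jurisdiction
    pools[recipient].clear()
    return total * (1 - FEE_RATE), report

deposit("alice", 5.00)
designate("alice", "Candidate X", 0.50)   # a political micro-donation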

This system allows transfers of ANY size… but what it can do that others can't… is a very simple micro-transaction… and pass through incurred transaction costs.

So it CAN function, if desired, just like any other gift card or Internet wallet…

BUT… with a vital added capability… a simple micro-transaction.

While the utility of this transaction has sometimes been questioned…

The POLITICAL microtransaction, at least, escapes all those objections. (It's not a physical good, it's not digital content with free alternatives available, and the hassle is eliminated.)

I'd also contend that it's a fundamental of speech… designed for people.

[Editors note: a microtransaction can pertain to speech and participation in forms other than campaign financing. Open government can include venues for microtransactions in lobbying, public comment, opinion polling, and even legislation drafting and voting. ~PR]

The MetaCurrency Project is definitely a part of the movement of emergent currency systems, whether you think of them as complementary currencies, alternative currencies, local currencies, digital currencies, virtual currencies, reputation currencies or targeted currencies. We are building the tools to enable all of them. We even defined our Open Data approach for distributed, digitally-signed transaction chains over a year before Bitcoin was invented.

We're connected to the movement to enable a truly P2P, distributed internet without central points of control or failure. Our project embodies the goals of the movement of the 99% which seeks to reclaim the capacities for wealth generation from a privileged few. To fully meet our criteria, people need to be able to transact directly with each other with no segment of that interaction relying on a centrally controlled system.

       Non-centralized rules (unlike the rules for money today)

       Non-centralized database (unlike the 99.99% of databases that are centralized today)

       Non-centralized name resolution (instead of DNS)

       Non-centralized address space (to play the role of IPv4 or IPv6)

       Agreements are made by mutual consent

       All levels of participation are sovereign
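The "distributed, digitally-signed transaction chains" mentioned above can be pictured with a minimal sketch. Here HMAC stands in for a real public-key signature (e.g. Ed25519), and nothing below reflects the MetaCurrency Project's actual design:

import hashlib, hmac, json

def sign(key: bytes, payload: bytes) -> str:
    # Stand-in signature; a real system would use asymmetric keys.
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def append_transaction(chain, key, data):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"prev": prev_hash, "data": data}
    payload = json.dumps(body, sort_keys=True).encode()
    chain.append({**body,
                  "hash": hashlib.sha256(payload).hexdigest(),
                  "sig": sign(key, payload)})

chain = []
key = b"alice-secret"   # stand-in for a private signing key
append_transaction(chain, key, {"from": "alice", "to": "bob", "amount": 3})
append_transaction(chain, key, {"from": "bob", "to": "carol", "amount": 1})
# Tampering with an earlier entry changes its hash and breaks every later "prev"
# link, so peers can verify the history without any central database.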

Allevo is a software vendor and consultancy in financial transactions and payment processing, focusing on banks, micro-finance institutions and corporate treasury departments. Their core product is called qPayIntegrator. The open source version, called FinTP, is available for anyone to use and adapt. A long journey starts with small steps: their first step was to find a name for the open source community, FINkers United. The second step was the launch event, on May 24th 2012 in Bucharest.

GNU MediaGoblin is a free software media publishing system for images, video, and audio. We're designing to support decentralization and tons of extensibility. You can think of it as a federated replacement for things like Flickr, YouTube or SoundCloud that you or anyone can run. MediaGoblin is building the world's most beautiful and user-responsive media publishing future.

ACTION PLAN FOR COPYRIGHT REFORM AND CULTURE IN THE XXIst CENTURY

This document is the first draft of a common platform of civil society for the reform of copyright and accompanying measures to ensure the sustainable development of culture in the XXIst century. It was drafted in and following the Free Culture Forum 2012 in Barcelona by a small group of individuals, who participated in and took inspiration from the following existing proposals:

       The Free Culture Forum Charter and Guide for Sustainable Creativity

       The Communia recommendations and Public Domain Manifesto

       The Polish proposals prepared by Centrum Cyfrowe and the Modern Poland Foundation

       The Elements for the reform of copyright and related cultural policies of La Quadrature du Net

It is submitted for comments by interested citizens of all countries in view of subsequent revisions.

Anonymity vs Trust vs Cash (email thread p2p-hackers@lists.zooko.com)

From: Changaco

Subject: Re: [p2p-hackers] Bitcoin incentive on Kademlia networks

If you don't care about anonymity you “can” build a Web of Trust, in order to know who's who and base money on people. That's what the OpenUDC project is trying to do.

If you want anonymity, the only known option is proof-of-work, but that's just a nice way of naming a waste of time and energy on useless computations. That's how Bitcoin works, but I doubt people will want to waste that much CPU time just to share files.

From: “Zooko Wilcox-O'Hearn”

Date: Sat, 3 Nov 2012

Subject: Re: [p2p-hackers] Bitcoin incentive on Kademlia networks

changaco@changaco.net's statements that “money has to be based on something”, that Bitcoin is “based on” proof-of-work, and that people would need to waste CPU cycles in order to trade files (under danimoth's proposal) are all incorrect.

Money, to be useful as money, only has to be acceptable and valuable to enough people. It doesn't have to be “based on something”.

Bitcoin isn't really “based on” proof-of-work. It's mostly “based on” digital signatures. The proof-of-work part is really just to make it difficult (but not impossible) for attackers to perform a rewind attack. There are designs floating around which replace the proof-of-work with other mechanisms intended to deter rewind attack, and the properties of the resulting systems are almost the same as the properties of Bitcoin.

People would not have to burn CPU cycles in order to trade files in danimoth's proposal. Only the transaction-verification-servers (also called “miners” in Bitcoin) need to do any proof-of-work (in order to deter rewind attack). Normal users who want to send or receive Bitcoin do not need to do any proof-of-work.
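[Editor's note: for readers unfamiliar with the mechanism under discussion, here is a minimal sketch of a proof-of-work puzzle; the difficulty and data are illustrative, not Bitcoin's actual parameters.]

import hashlib

def proof_of_work(data: bytes, difficulty: int = 4) -> int:
    # Find a nonce such that sha256(data + nonce) starts with `difficulty`
    # zero hex digits. Finding it takes many hash attempts; verifying takes one.
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(data + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

print("found nonce:", proof_of_work(b"block of transactions"))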

From: ianG

Sent: Saturday, November 03, 2012

Subject: Re: [p2p-hackers] Bitcoin incentive on Kademlia networks

The essential solution to all trade imbalances relies on money. So if your problem is some form of asymmetric trading, you need a payment system of some form, and you need an exchange of some form.

Beyond this simple statement, however, is a sea of ideas in which one can easily drown. E.g., you've identified a simple exchange process, discovered a weakness, and then proposed a reputation system to cover the weakness. Adding a reputation system to solve issues is like a deus ex machina in systems; rep systems are little understood and generally or frequently crap, so chances are you'll end up building something that won't work, and wasting a lot of time in doing it.

Better to avoid that and come up with a payment system that doesn't need reputation – or at least one that doesn't lean so heavily on it.

As a field you can research it, but you have to be extremely skeptical, because much of what is written is unreliable at some level or other. For one example, everything written about gold is tainted by Central Bank marketing (for their own currency). This makes it very confusing if one just reads and assumes what is written is fact…

Alternatively, one can build it and try it. But the cycle times are long; it takes a year or so to write a decent money system and get it up and rolling.

Alternatively, you cut the Gordian knot and make everything free. The system has to work under this constraint. That works for some things (open source software, song sharing, etc.) but not for all things.

> 2. There have been previous experiments similar to what I'm proposing?

Mojo Nation tried to be an economically informed p2p system, but seemed to run out of grunt as a project. It failed because it tried to solve every problem, and drowned.

http://financialcryptography.com/mt/archives/000571.html

http://financialcryptography.com/mt/archives/000572.html

In contrast, the projects that spun out of it – BitTorrent? Tahoe? – reduced their problem set dramatically. Either way, you might find Mojo's design to be well worth studying; people say the design wasn't wrong.

> [1] Enforcing Collaboration in Peer-to-Peer Routing Services
>      (by Tim Moreton and Andrew Twigg)

That's an unfortunate turn of phrase there, which rather strikes at the heart of the problem you are trying to solve 🙂

Date: Sat, 3 Nov 2012

From: Changaco

Subject: Re: [p2p-hackers] Bitcoin incentive on Kademlia networks

On Sat, 3 Nov 2012 10:49:45 -0600 Zooko Wilcox-O'Hearn wrote:

> Money, to be useful as money, only has to be acceptable and valuable to enough people.

I agree with that. What I meant by “money has to be based on something” is that money creation has to be based on something you can't fake. Otherwise one can create as much money as one wants, and it's worth nothing.

Money creation is an important part of a monetary system, because when money is created it devalues the money previously created.

Unless I'm mistaken, the Bitcoin creation process is based on proof-of-work. The more processing power one has, the bigger the share of the monetary creation one gets. But the Bitcoin monetary mass is limited, just like the quantity of gold on Earth, so mining gets harder and harder until there is nothing left to extract.

> People would not have to burn CPU cycles in order to trade files in danimoth's proposal. Only the transaction-verification-servers (also called “miners” in Bitcoin) need to do any proof-of-work (in order to deter rewind attack). Normal users who want to send or receive Bitcoin do not need to do any proof-of-work.

Before being able to send Bitcoins one must receive some. How would a new user get Bitcoins?

Date: Sun, 4 Nov 2012

From: danimoth

Subject: Re: [p2p-hackers] Bitcoin incentive on Kademlia networks

On 03/11/12 at 10:09pm, Changaco wrote:

> Before being able to send Bitcoins one must receive some. How would a new user get Bitcoins?

Regarding my proposal, he has two options:

*) Share some resources (hdd space and bandwidth), and receive payments for these

*) Buy bitcoin from other people, exchanging other goods (dollars for example)

Free and Open-Source Text Mining / Text Analytics Software

       GATE, a leading open-source toolkit for Text Mining, with a free open source framework (or SDK) and graphical development environment.

       INTEXT, MS-DOS version of TextQuest, in public domain since Jan 2, 2003.

       LingPipe is a suite of Java libraries for the linguistic analysis of human language.

       Open Calais, an open-source toolkit for including semantic functionality within your blog, content management system, website or application.

       RapidMiner Text Mining.

       ReVerb: Open Information Extraction Software, extracts binary relationships like high-in(winter squash, vitamin c) without requiring any relation-specific training data.

       S-EM (Spy-EM), a text classification system that learns from positive and unlabeled examples.

       The Semantic Indexing Project, offering open source tools, including Semantic Engine – a standalone indexer/search application.

       Many of the following offer free limited or trial versions:

       WordStat Content Analysis and Text Mining – From Text to Discovery

       Ranks.nl, keyword analysis and webmaster tools.

       Vivisimo/Clusty web search and text clustering engine.

       Wordle, a tool for generating “word clouds” from text that you provide.

       ActivePoint, offering natural language processing and smart online catalogues, based on contextual search and ActivePoint's TX5(TM) Discovery Engine.

       Aiaioo Labs, offering APIs for intention analysis, sentiment analysis and event analysis. Aiaioo online demo.

       Alceste, a software for the automatic analysis of textual data (open questions, literature, articles, etc.)

       Angoss Text Analytics, part of KnowledgeStudio, allows users to merge the output of unstructured, text-based analytics with structured data to perform data mining and predictive analytics.

       Attensity, offers a complete suite of Text Analytic applications, including the ability to extract “who”, “what”, “where”, “when” and “why” facts and then drill down to understand people, places and events and how they are related.

       Basis Technology, provides natural language processing technology for the analysis of unstructured multilingual text.

       Clarabridge, text mining software providing end-to-end solution for customer experience professionals wishing to transform customer feedback for marketing, service and product improvements.

       ClearForest, tools for analysis and visualization of your document collection.

       Clustify, groups related documents into clusters, providing an overview of the document set and aiding with categorization.

       Compare Suite, compares texts by keywords, highlights common and unique keywords.

       Connexor Machinese, discovers the grammatical and semantic information of natural language.

       Copernic Summarizer, can read and summarize document and Web page text contents in many languages from various applications

       Crossminder, natural language processing and text analytics (including cross-lingual text mining).

       Dhiti, providing an API for text-mining; can work on a document collection and mine out topics and concepts in realtime.

       DiscoverText, a powerful and easy-to-use set of text analytic solutions for eDiscovery and research.

       dtSearch, for indexing, searching, and retrieving free-form text files.

       Eaagle text mining software, enables you to rapidly analyze large volumes of unstructured text, create reports and easily communicate your findings.

       Enkata, providing a range of enterprise-level solutions for text analysis.

       Entrieva, patented technology indexes, categorizes and organizes unstructured text from virtually any source.

       Expert System, using proprietary COGITO platform for the semantic comprehension of the language to do knowledge management of unstructured information.

       Files Search Assistant, quick and efficient search within text documents.

       IBM Intelligent Miner Data Mining Suite, now fully integrated into the IBM InfoSphere Warehouse software; includes Data and Text mining tools (based on UIMA).

       Intellexer, natural language searching technologies for developing knowledge management tools, document comparison software and document summarization software, custom built search engines and other intelligent software.

       ISYS Search Software, an enterprise search software supplier specializing in embedded search, text extraction, federated access solutions and text analytics.

       IxReveal, offering uReveal “plug-in” advanced analytic platform and uReka! desktop “search and analyze” consumer product, based on patented text analytics methods.

       KBSPortal, offers natural language processing as SAAS web service.

       Kwalitan 5 for Windows, uses codes for text fragments to facilitate textual search, display overviews, build hierarchical trees and more.

       KXEN Text Coder (KTC), text analytics solution for automatically preparing and transforming unstructured text attributes into a structured representation for use in KXEN Analytic Framework.

       Langsoft question-answering and content recognition/text attribution software, evaluation copy available.

       Lexalytics, provides enterprise and hosted text analytics software to transform unstructured text into structured data.

       Leximancer, makes automatic concept maps of text data collections

       Lextek Onix Toolkit, for adding high performance full-text indexing search and retrieval to applications.

       Lextek Profiling Engine, for automatically classifying, routing, and filtering electronic text according to user defined profiles.

       Linguamatics, offering Natural language processing (NLP), search engine approach, intuitive reporting, and domain knowledge plug-in.

       Megaputer Text Analyst, offers semantic analysis of free-form texts, summarization, clustering, navigation, and natural language retrieval with search dynamic refocusing.

       Monarch, data access and analysis tool that lets you transform any report into a live database.

       NewsFeed Researcher, presents live multi-document summarization tool, with automatically-generated RSS news feeds.

       Nstein, Enterprise Search and Information Access Technologies; on your public website, Nstein will guide your customers to the most relevant information more quickly than other solutions.

       Odin Text, actionable DIY Text Analytics, with a focus on market research.

       Power Text Solutions, extensive capabilities for “free text” analysis, offering commercial products and custom applications.

       Readability Studio, offers tools for determining text readability levels.

       Recommind MindServer, uses PLSA (Probabilistic Latent Semantic Analysis) for accurate retrieval and categorization of texts.

       SAS Text Miner, provides a rich suite of text processing and analysis tools.

       Semantex from Janya Inc., enterprise-class information extraction system, detecting entities, attributes, relationships and events.

       SPSS LexiQuest, for accessing, managing and retrieving textual information; integrated with SPSS Clementine data mining suite.

       SPSS Text Mining for Clementine enables you to extract key concepts, sentiments, and relationships from call center notes, blogs, emails and other unstructured data, and convert it to structured format for predictive modeling.

       SWAPit, Fraunhofer-FIT's text- and data analysis tool (updated version of DocMINER), offers visual text mining and retrieval capabilities, including search, term statistics, and summary; visualises semantic relationships among text documents.

       TEMIS Luxid®, an Information Discovery solution serving the Information Intelligence needs of business corporations.

       TeSSI®, software components that perform semantic indexing, semantic searching, coding and information extraction on biomedical literature.

       Texifter, streamlines the process of sorting large amounts of unstructured text with The Public Comment Analysis Toolkit (PCAT), DiscoverText and Sifter, off-the-shelf, enterprise-class business process applications.

       Text Analysis Info, offering software and links for Text Analysis and more

       Textalyser, online text analysis tool, providing detailed text statistics

       TextPipe Pro, text conversion, extraction and manipulation workbench.

       TextQuest, text analysis software

       Treparel KMX Text Analytics delivers fast and powerful search, clear visual insights and advanced analytics for information professionals, information consumers and in OEM partnerships.

       Readware Information Processor for Intranets and the Internet, classifies documents by content; provides literal and conceptual search; includes a ConceptBase with English, French or German lexicons.

       Quenza, automatically extracts entities and cross references from free text documents and builds a database for subsequent analysis.

       VantagePoint provides a variety of interactive graphical views and analysis tools with powerful capabilities to discover knowledge from text databases.

       VisualText™ by TextAI, a comprehensive GUI development environment for quickly building accurate text analyzers.

       VP Student Edition, a powerful text-mining and visualization tool for discovering knowledge in search results from science literature and other field-structured text databases.

       Xanalys Indexer, an information extraction and data mining library aimed at extracting entities, and particularly the relationships between them, from plain text.

       Wordstat, analysis module for textual information such as responses to open-ended questions, interviews, etc.

(Many of the commercial packages above offer free or limited trial versions.)

Desktop IRIS – CNET Download.com: http://download.cnet.com/Desktop-IRIS/3000-2379_4-75220760.html

From MobilVox: Desktop IRIS is an easy-to-use search program that can be downloaded and used by anyone. It allows you to intuitively find stored information on your desktop and network without imposing any restrictions on the number of files and folder locations indexed. The system uses the same security model as the desktop and network operating systems, allowing full search capabilities across a wide range of information sources. It takes full advantage of an expandable and collapsible tree pane for directory display and easy e-discovery, and it lets you search Outlook e-mail, contacts, calendar, and notes without having to remember dates, e-mail contents, recipient lists, or sender lists. It can also download information from any website, such as letters, articles, reports, or even a whole site. You can summarize retrieved documents to quickly extract the most relevant sentences, generate lexical analysis statistics about a retrieved document, open files or their containing folders directly from the results list, and filter files by type to more easily locate what you are looking for. It supports Boolean, proximity, range, wildcard, and fuzzy searches. You can easily download the program and start searching your desktop computer and Outlook e-mail right away.

Others from CNET:

       Google Desktop Search your hard drive for e-mail, files, and your Web and IM…

       Everything Search your Windows system very quickly using an index of files…

       Copernic Desktop Search… Search files, e-mails, and multimedia formats on your PC's hard…

       Copernic Agent Pers… Combine the power of leading search engines.

       Large Text File Viewer… Perform high-speed complex text search

       Copernic Summarizer (free trial/$25) can analyze a text of any length, on any subject, in any one of four languages, and create a document summary as short or as long as you want it to be. It can summarize Word documents, Web pages, PDF files, email messages and even text from the Clipboard. Once summaries have been generated, they can be printed, saved (in plain text, Microsoft Word, HTML and XML formats) or e-mailed, simplifying not only the way you store information but also how you share it with your friends and colleagues.

       Intellexer Summarizer (free trial/$25) is an innovative program for your computer that will create a short summary from any document or browsed Web page. You may read the summary instead of the whole document, saving time for fun and leisure. Many additional tools will make your life even easier. User review: I used Summarizer to create my degree work; I had little time to read all the books, web pages, scientific articles and other documents related to my theme, but I needed to know if they were worth reading. So I used Intellexer Summarizer to get summaries and general concepts of the required documents. It creates summaries of documents and web pages and provides a short summary (the length is adjustable) with a concept tree, so I can rearrange the summary accordingly. This let me reach the main idea of a huge scientific document without much effort. Secondly, Intellexer Summarizer provided me with a theme-oriented summary that was easy to work with. The third thing I would like to mention is that Summarizer is fast and fruitful: it creates concise summaries and facilitates my work with scientific documents and articles of different formats (PDF, TXT, HTML/HTM, DOC, DOCX, MHTML and others).

       Intellexer Summarizer SDK (free) is intended as a base for developing customized applications to manage documents and knowledge data. Its special advantage is its capability of analyzing text in natural language. Summarizer SDK can be integrated into an existing document circulation system. You can order the SDK and use it for your own purposes, or order the company's software development services to receive a ready-to-use solution.

ZyLAB Technologies (proprietary/commercial) Finding Relevant Information Without Knowing Exactly What You Are Looking For: http://www.zylab.com/TechnologyModules/TextMiningAnalytics.aspx

       Text analysis is the next step in search technology and refers to the process of extracting interesting and non-trivial information and knowledge from unstructured text. ZyLAB’s text analysis differs from traditional search in that, whereas search requires a user to know what he or she is looking for, text analysis attempts to discover information in a pattern that is not known beforehand. This is achieved through the use of advanced techniques such as pattern recognition, natural language processing, machine learning, and so on. By focusing on patterns and characteristics, text analysis can produce better search results and deeper data analysis, thereby providing quick retrieval of information that otherwise would remain hidden.

       ZyLAB Supports Every File Format http://www.zylab.com/Advantages/ComprehensiveFileFormatSupport.aspx

       ZyLAB software supports every native file format—even audio! While you may be most concerned with the 10 formats you use on a regular basis, critical information may be stored in any number of less common file types. Our comprehensive approach is made possible by supporting more than 700 formats out-of-the-box and leveraging our series of traditional and custom connectors to capture the data from any non-standard sources or formats. In any case, the native version is always preserved.

ZyLAB Delivers the XML Advantage

       ZyLAB is the only information management solution that archives your complete data pool in the open, non-proprietary Extensible Markup Language (XML) format. Our XML platform guarantees uniform handling of all enterprise data; our “X-to-XML” conversion tools assure that every file—even emails, bitmaps, and database and SharePoint content—benefits from the XML format. http://www.zylab.com/Advantages/XMLasStandard.aspx

XML benefits include:

       Digital sustainability—once your data is added to an XML archive it will never have to be converted again. It will be equally accessible 100 years from now, and the native file is always preserved.

       XML delivers the benefits of a database without the need for one, yet the XML archive from ZyLAB integrates with your databases (e.g. Oracle, MySQL, MS SQL) when appropriate.

       XML reduces costs for licensing, upgrades, storage, encryption and back-up tools, as well as the hassle of migrations.

       XML archives are scalable to growing volumes of information. Once an XML archive reaches capacity, simply add another file system in parallel.

       XML archives enhance and accelerate indexing, searching and retrieval across all enterprise information.

dtSearch Product review: http://www.searchtools.com/tools/dtsearch.html

Price: $999 per server for dtSearch Web and dtSearch Engine. Desktop tool available for $199, intranet tool for $800. CD/DVD tool dtSearch Publish available for $2,500. Platform: Windows, .NET, Linux. Features:

       Indexes dozens of file formats including HTML, TXT, XML, ZIP, MS Word, Excel, PowerPoint, Open Office, MP3, TIFF, Outlook and Exchange message stores, and more. The new version has expanded support for MS Office 2007, XMP metadata, and Microsoft XML Paper formats.

       Supports fielded data.

       Natively indexes Access databases, plus databases in XML, CSV, and DBF formats including FoxPro, dBASE, etc. Indexes SQL databases with an included application. Handles BLOB data (binary documents in fields.)

       Robot spider follows links to discover pages.

       Can index via HTTPS, Basic Authentication (user name and password), forms-based authentication.

       Handles over a terabyte of textual data.

       Performs scheduled incremental index updates.

       Uses natural language algorithms.

       Provides indexed, unindexed, fielded and full-text search options. Can search across multiple indices.

       Supports phrase, Boolean, proximity and phonic searches, fuzzy searching, stemming, synonyms, and wildcards. Offers variable term weighting options for search terms.

       Unicode support permits indexing of many languages. Features such as fuzzy searching and stemming are available for English, Danish, Dutch, Finnish, French, German, Italian, Norwegian, Portuguese, Spanish, Swedish, Belarusian, Bulgarian, Czech, Estonian, Greek, Hungarian, Latvian, Lithuanian, Polish, Russian, Slovak, Slovenian, Turkish and Ukrainian.

       Language recognition algorithms detect text in a variety of languages.

       Results are ranked by relevancy, and can be instantly re-sorted by several different variables.

       Keyword highlighting in search results.

       Converts many file types to HTML for display with highlighted hits.

       Parses text segments in data blocks recovered through undelete processes, and from corrupted documents for forensics information recovery.

       Includes ASP, ASP.NET interfaces, and programming API.

       Requires IIS (Internet Information Server).

Idealist text DB: You can get the Idealist3 installation files from a public Dropbox folder at http://dl.dropbox.com/u/62208205/IDEAL3.EXE

Ultra Recall is personal information, knowledge, and document organizer software for Microsoft Windows.                                                                                                                                                                               

       Capture documents, web pages, notes, and emails from almost any application, with automatic capture of content, text, and images.

       Organize information in ways that make sense to you via flags, favorites, annotations, reminders, categorizing, and custom attributes.

       Recall items quickly with highlighted search results, tagging, multiple navigation methods, history, and advanced searches.

       Useful for online research, journaling, to-do lists, note taking, document archiving, GTD, issue tracking, product evaluation, and more.

mifluz is part of the GNU project, released under the aegis of GNU. The purpose of mifluz is to provide a C++ library to store a full text inverted index. To put it briefly, it allows storage of occurrences of words in such a way that they can later be searched. The basic idea of an inverted index is to associate each unique word with a list of documents in which they appear. This list can then be searched to locate the documents containing a specific word.
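A toy illustration of that idea (mifluz itself is a C++ library with a disk-backed store; this sketch shows only the core word-to-documents mapping):

from collections import defaultdict

index = defaultdict(set)   # word -> set of document ids

def add_document(doc_id, text):
    for word in text.lower().split():
        index[word].add(doc_id)

def search(*words):
    # Documents containing all of the given words (set intersection).
    sets = [index[w.lower()] for w in words]
    return set.intersection(*sets) if sets else set()

add_document(1, "full text inverted index")
add_document(2, "inverted pendulum control")
print(search("inverted", "index"))   # -> {1}

The hard part, as the next paragraph notes, is keeping such a structure fast and updatable at the scale of millions of documents.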

Implementing a library that manages an inverted index is a very easy task when there is a small number of words and documents. It becomes a lot harder when dealing with large numbers of words and documents. mifluz has been designed with the following upper limits in mind: 500 million documents, 100 gigawords, 18 million document updates per day. In its present state, mifluz can store 100 gigawords using 600 gigabytes. The best average insertion rate observed to date is 4000 keys/sec on a 1-gigabyte index.

mifluz has two main characteristics: it is very simple (one might say stupidly simple 🙂) and it uses 100% of the size of the indexed text for the index. It is simple because it provides only a few basic functions. It does not contain document parsers (HTML, PDF etc.). It does not contain a full text query parser. It does not provide result display functions or other user-friendly features. It only provides functions to store word occurrences and retrieve them. The fact that it uses 100% of the size of the indexed text is rather atypical; most well-known full text indexing systems use only about 30%. The advantage mifluz has over most full text indexing systems is that it is fully dynamic (update, delete, insert), uses only a controlled amount of memory while resolving a query, has higher upper limits, and has a simple storage scheme. This is achieved by consuming more disk space.

Semantic Turkey: A Firefox Semantic Bookmarking and Annotation Extension. Semantic Turkey is a platform for Semantic Bookmarking and Ontology Development realized by the ART Research Group at the University of Rome, Tor Vergata. By adopting W3C standards for knowledge representation, such as RDF, RDFS and OWL, Semantic Turkey turns the popular Web Browser Firefox into a rich and extensible framework for knowledge acquisition, management and exchange. Users can adopt Semantic Turkey to keep track of relevant information from visited web sites and organize collected content according to imported or personally edited ontologies. Domain experts and ontology developers can now build ontologies starting from the very raw source of information which they find on the web, without any need to interconnect different heterogeneous tools and applications.

Semantic Turkey is built on top of several different technologies such as Java and JavaScript, XUL and XBL, and features a three-layered (data, business and interaction models) architecture, exploiting the AJAX paradigm for UI/business logic communication. By exploiting acclaimed modularization frameworks such as the OSGi-compliant Apache Felix and the Mozilla extension environment, Semantic Turkey can be easily extended with new plug'n'play applications, embracing the best of both worlds of Knowledge Engineering and Web Browsing. Depending on their needs, extension developers can thus rely on different RDF management libraries, such as Sesame or Jena, as well as reuse and integrate functionality from the full range of extensions in the Firefox Add-ons repository. Visit the Semantic Turkey main site for documentation and requirements for running Semantic Turkey.

BetterPrivacy: Remove or manage a new and uncommon kind of cookies, better known as LSOs. The BetterPrivacy safeguard offers various ways to handle Flash cookies set by Google, YouTube, Ebay and others…

Recommended comprehensive Flash-cookie article (topic: UC Berkeley research report)

http://www.wired.com/epicenter/2009/08/you-deleted-your-cookies-think-again/

Wikipedia LSO information:

http://en.wikipedia.org/wiki/Local_Shared_Object

See what Google finds:

http://google.com/search?q=flash-cookie+super-cookie

Privacy test:

http://nc.ddns.us/BetterPrivacy.htm (right column, Flash needed)

——————————————————————————————————–

PeerPoint — https://docs.google.com/document/d/1TkAUpUxdfKGr_5Qio2SlZcnBu_sgnZWdoVTZuD_Regs/edit# is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.


Michael Maranda:

I recommend taking wagn for a spin and engaging with Gerry Gleason on ideas/issues.   Can try it out on cloudstore

Poor Richard:

Thanks for the edits, David

Poor Richard:

Thanks for the edits, Rebentisch

Jeremy Taylor:

for those following my email thread, this is your primary problem that I have a solution to!

Poor Richard:

Jeremy, since you've read some of the PeerPoint proposal I'd be very interested in a brief synopsis of how you think CouchDB might fit in as a possible component of this project. I haven't been able to investigate it in any detail yet. Also, I'm going to be disappointed if you have a solution for any of the PeerPoint requirements and don't share just a little bit. Do you have a website where I can get more info?

Poor Richard:

good corrections

Poor Richard:

I couldn't find a quote. Can you?

arebentisch:

I read it recently in F.Kittler Grammophone, Film, Typewriter
