Sepp Hasslberger: Rebuilding the Internet as a Commons — Local Mesh First

Access, Autonomous Internet, BTS (Base Transceiver Station), Cloud, Design, Hardware, Innovation, P2P / Panarchy, Software, Spectrum
Sepp Hasslberger

The internet needs to be rebuilt from the bottom up: network locally first, and only then connect to the world “out there”. A local wireless network might be coming to your neighbourhood soon.

The Rise of the Network Commons, Chapter 1 (draft)

Armin Medosch

Continue reading “Sepp Hasslberger: Rebuilding the Internet as a Commons — Local Mesh First”

Yoda: CISCO $1B for Cloud — How Open? Safe?

Cloud
Got Crowd? BE the Force!

Cisco Unveils $1B Cloud Plan

EXTRACTS

On Monday, Cisco unveiled an investment worth more than $1 billion to launch the world’s largest global Intercloud over the next two years. This open network of clouds will be hosted across a global network of Cisco and partner data centers featuring application programming interfaces (APIs) for rapid application development, built out as part of an expanded suite of value-added application- and network-centric cloud services.

. . . . . . .

Riegel said the global Intercloud differentiates itself from Amazon Web Services in five ways. First, its focus will be on apps, not the infrastructure. Secondly, he explained, Cisco is the only company that can provide quality of service from the network all the way up to the application. Cisco has also designed the Intercloud for local data sovereignty in the post-Edward Snowden era, in which customers increasingly request that their data stay in their own home countries or in the particular countries where they do business.

The fourth big difference is that the Intercloud is firmly based on an open model: open-source innovation built on OpenStack. The last difference is that the Cisco network will incorporate real-time analytics.

Read full article.

Stephen E. Arnold: Pricing the Cloud – Amazon Web Services Not Worth It For Most

Cloud
Stephen E. Arnold

Calculating How Much Amazon Costs


Amazon Web Services is a good way to store code and other data, but it can get a little pricey. Before you upload your stuff to the Amazon cloud, check out Heap’s article, “How We Estimated Our AWS Costs Before Shipping Any Code.” Heap is an iOS and web analytics tool that captures every user interaction. The Heap team decided to build it because no existing product offered ad-hoc analysis of a user’s entire activity. Before starting work on the project, the team needed to estimate their AWS costs to decide whether the idea was a sustainable business model.


They needed to figure out how much data a single user interaction generated, and then work out where that data would be stored and on what. The calculations showed that, for the business model to work, a single visit would have to yield an average of one-third of a cent to be worthwhile for clients.
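As a rough illustration of this kind of estimate, here is a minimal back-of-the-envelope sketch in Python. Every constant is a hypothetical placeholder, not Heap’s actual figure; only the replication factor and the one-third-of-a-cent threshold come from the article:

```python
# Back-of-the-envelope model of AWS storage cost per site visit.
# All constants are hypothetical placeholders, NOT Heap's real figures.

EVENTS_PER_VISIT = 50              # assumed: interactions captured per visit
BYTES_PER_EVENT = 1_000            # assumed: serialized size of one event
REPLICATION_FACTOR = 2             # per the article: each data point is duplicated
STORAGE_USD_PER_GB_MONTH = 0.10    # assumed blended storage price
RETENTION_MONTHS = 12              # assumed retention window

def storage_cost_per_visit() -> float:
    """Dollars of storage cost attributable to one visit over the retention window."""
    gigabytes = EVENTS_PER_VISIT * BYTES_PER_EVENT * REPLICATION_FACTOR / 1e9
    return gigabytes * STORAGE_USD_PER_GB_MONTH * RETENTION_MONTHS

if __name__ == "__main__":
    cost = storage_cost_per_visit()
    threshold = 0.01 / 3  # one-third of a cent, in dollars (from the article)
    print(f"storage cost per visit: ${cost:.6f}")
    print(f"under the one-third-of-a-cent threshold? {cost < threshold}")
```

A real estimate would add compute, network egress, and request pricing, but the structure (events per visit, times bytes, times replication, times price) stays the same.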


CPU cores, compression, and reserved instances reduced costs, but some unexpected factors inflated them:

 

“1. AWS Bundling. By design, no single instance type on AWS strictly dominates another. For example, if you decide to optimize for cost of memory, you may initially choose cr1.8xlarge instances (with 244 GB of RAM). But you’ll soon find yourself outstripping its paltry storage (240 GB of SSD), in which case you’ll need to switch to hs1.8xlarge instances, which offer more disk space but at a less favorable cost/memory ratio. This makes it difficult to squeeze savings out of our AWS setup.

2. Data Redundancy. This is a necessary feature of any fault-tolerant, highly available cluster. Each live data point needs to be duplicated, which increases costs across the board by 2x.”
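To make the bundling trade-off above concrete, here is a minimal sketch in Python. The hourly prices are illustrative placeholders, and the hs1.8xlarge RAM and disk figures are approximate historical specs; only the cr1.8xlarge numbers appear in the article:

```python
# Illustrates the "no instance strictly dominates" bundling point above.
# Hourly prices are hypothetical; hs1.8xlarge specs are approximate
# historical values (only the cr1.8xlarge figures appear in the article).

from dataclasses import dataclass

@dataclass
class Instance:
    name: str
    usd_per_hour: float  # assumed on-demand price
    ram_gb: float
    disk_gb: float

instances = [
    Instance("cr1.8xlarge", 3.50, ram_gb=244, disk_gb=240),     # memory-optimized
    Instance("hs1.8xlarge", 4.60, ram_gb=117, disk_gb=48_000),  # storage-optimized
]

for inst in instances:
    ram_rate = inst.usd_per_hour / inst.ram_gb
    disk_rate = inst.usd_per_hour / inst.disk_gb
    print(f"{inst.name}: ${ram_rate:.4f}/GB-RAM/hr, ${disk_rate:.6f}/GB-disk/hr")

# cr1.8xlarge wins on cost per GB of RAM, hs1.8xlarge on cost per GB of disk:
# optimizing along one axis forces a worse ratio on the other, so there is
# no single "cheapest" instance type to standardize on.
```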


Heap’s formula is an easy and intuitive way to estimate pricing for Amazon Web Services. Can it be applied to other cloud services?


Whitney Grace, January 30, 2014


Sponsored by ArnoldIT.com, developer of Augmentext

Stephen E. Arnold: In the Cloud Big Data Meta Data Hack

Advanced Cyber/IO, Cloud, Data, IO Impotency, IO Mapping, IO Sense-Making
Stephen E. Arnold

Finally Some Cloudy News on Metadata

For Obama’s 2012 re-election campaign, his team broke down data silos and moved all the data to a cloud repository. The team built Narwhal, a shared data store interface for all of the campaign’s applications. Narwhal was dubbed “Obama’s White Whale” because it is an almost mythical technology that federal agencies have been trying to develop for years. While Obama may be hanging out with Queequeg and Ishmael, there is a more viable solution for the cloud, says GCN’s article “Big Metadata: 7 Ways To Leverage Your Data In the Cloud.”

Data silo migration may appear to be a daunting task, but it is not impossible. The article states:

Continue reading “Stephen E. Arnold: In the Cloud Big Data Meta Data Hack”

Patrick Meier: Best 15 Blogs of the Year from iRevolution [Big Data, Crisis Mapping, Disaster Response, Truth, Trust, Twitter]

Cloud, Crowd-Sourcing, Data, Design, Geospatial, Governance, Innovation, P2P / Panarchy, Resilience
Patrick Meier

The Best of iRevolution in 2013

iRevolution crossed the 1 million hits mark in 2013, so big thanks to iRevolution readers for spending time here during the past 12 months. This year also saw close to 150 new blog posts published on iRevolution. Here is a short selection of the Top 15 iRevolution posts of 2013:

Continue reading “Patrick Meier: Best 15 Blogs of the Year from iRevolution [Big Data, Crisis Mapping, Disaster Response, Truth, Trust, Twitter]”

Patrick Meier: Humanitarian Response in 2025

Cloud, Crowd-Sourcing, Culture, Data, Design, Geospatial, Governance, Innovation, Knowledge, Mobile, P2P / Panarchy, Resilience
Patrick Meier

Humanitarian Response in 2025

I’ve been invited to give a “very provocative talk” on what humanitarian response will look like in 2025 for the annual Global Policy Forum organized by the UN Office for the Coordination of Humanitarian Affairs (OCHA) in New York. I first explored this question in early 2012 and my colleague Andrej Verity recently wrote up this intriguing piece on the topic, which I highly recommend; intriguing because he focuses a lot on the future of the pre-deployment process, which is often overlooked.

Continue reading “Patrick Meier: Humanitarian Response in 2025”

Patrick Meier: Combining Radio, SMS, and Advanced Computing for Disaster Response

Cloud, Crowd-Sourcing, Design, Geospatial, Governance, Innovation, Mobile, Resilience
Patrick Meier

Combining Radio, SMS and Advanced Computing for Disaster Response

I’m headed to the Philippines this week to collaborate with the UN Office for the Coordination of Humanitarian Affairs (OCHA) on humanitarian crowdsourcing and technology projects. I’ll be based in the OCHA Offices in Manila, working directly with colleagues Andrej Verity and Luis Hernando to support their efforts in response to Typhoon Yolanda. One project I’m exploring in this respect is a novel radio-SMS-computing initiative that my colleague Anahi Ayala (Internews) and I began drafting during ICCM 2013 in Nairobi last week. I’m sharing the approach here to solicit feedback before I land in Manila.


The “Radio + SMS + Computing” project is firmly grounded in GSMA’s official Code of Conduct for the use of SMS in Disaster Response. I have also drawn on the Bellagio Big Data Principles when writing up the ins and outs of this initiative with Anahi. The project is first and foremost a radio-based initiative that seeks to answer the information needs of disaster-affected communities.

Continue reading “Patrick Meier: Combining Radio, SMS, and Advanced Computing for Disaster Response”