School in the Cloud allows learning to happen anywhere by helping children all over the world tap into their innate sense of wonder and ability to work together in Self Organised Learning Environments.
On Monday, Cisco unveiled an investment worth more than $1 billion to launch the world’s largest global Intercloud over the next two years. This open network of clouds will be hosted across a global network of Cisco and partner data centers, featuring application programming interfaces (APIs) for rapid application development and built out as part of an expanded suite of value-added application- and network-centric cloud services.
[…]
Riegel said the global Intercloud differentiates itself from Amazon Web Services in five ways. First, its focus will be on apps, not the infrastructure. Second, he explained, Cisco is the only company that can provide quality of service from the network all the way up to the application. Third, Cisco has designed the Intercloud for local data sovereignty in the post-Edward Snowden era, in which customers increasingly request that their data stay in their home countries or in the particular countries where they do business.
The fourth big difference is that the Intercloud is firmly based on an open model: open-source innovation built on OpenStack. The last difference is that the Cisco network will incorporate real-time analytics.
Amazon Web Services is a good way to store code and other data, but it can get a little pricey. Before you upload your stuff to the Amazon cloud, check out Heap’s article, “How We Estimated Our AWS Costs Before Shipping Any Code.” Heap is an iOS and Web analytics tool that captures every user interaction. The Heap team decided to build it because no existing product offered ad-hoc analysis or analyzed a user’s entire activity. Before they started working on the project, the team needed to estimate their AWS costs to decide whether the idea was a sustainable business model.
They needed to figure out how much data a single user interaction generated, and then where that data would be stored and what to store it on. The calculations showed that, for the business model to work, a single visit would have to yield an average of one-third of a cent to be worthwhile for clients.
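The back-of-envelope approach described above can be sketched in a few lines. All of the numbers below are illustrative assumptions, not Heap’s actual figures; the point is only the shape of the calculation (data per interaction × interactions per visit × replication × storage cost):

```python
# Back-of-envelope AWS storage-cost estimate per visit, modeled loosely on
# Heap's approach. Every constant here is an illustrative assumption.

BYTES_PER_EVENT = 1_000      # assumed payload of one captured interaction
EVENTS_PER_VISIT = 50        # assumed interactions in an average visit
REPLICATION = 2              # each live data point is duplicated (see below)
COST_PER_GB_MONTH = 0.10     # assumed blended storage cost in USD
RETENTION_MONTHS = 12        # assumed retention window

def cost_per_visit_usd() -> float:
    """Estimated storage cost, in USD, of retaining one visit's data."""
    gb_per_visit = BYTES_PER_EVENT * EVENTS_PER_VISIT * REPLICATION / 1e9
    return gb_per_visit * COST_PER_GB_MONTH * RETENTION_MONTHS

print(f"Estimated cost per visit: ${cost_per_visit_usd():.6f}")
```

Comparing the resulting number against the revenue each visit can plausibly generate (the one-third-of-a-cent threshold mentioned above) is what tells you whether the model is sustainable.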
CPU cores, compression, and reserved instances reduced costs, but some unexpected factors inflated them:
1. “AWS Bundling. By design, no single instance type on AWS strictly dominates another. For example, if you decide to optimize for cost of memory, you may initially choose cr1.8xlarge instances (with 244GB of RAM). But you’ll soon find yourself outstripping its paltry storage (240 GB of SSD), in which case you’ll need to switch to hs1.8xlarge instances, which offer more disk space but at a less favorable cost/memory ratio. This makes it difficult to squeeze savings out of our AWS setup.
2. Data Redundancy. This is a necessary feature of any fault-tolerant, highly available cluster. Each live data point needs to be duplicated, which increases costs across the board by 2x.”
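The bundling point can be made concrete with a quick comparison of cost-per-resource ratios. The cr1.8xlarge RAM and SSD figures come from the quote above; the hs1.8xlarge specs and both hourly rates are illustrative assumptions, since prices varied by region and era:

```python
# Illustrating the "AWS bundling" tradeoff: no instance type strictly
# dominates another. cr1.8xlarge RAM/SSD figures are from the article;
# hs1.8xlarge specs and all hourly prices are assumed for illustration.

instances = {
    # name: (ram_gb, storage_gb, assumed_hourly_usd)
    "cr1.8xlarge": (244, 240, 3.50),
    "hs1.8xlarge": (117, 48_000, 4.60),
}

for name, (ram_gb, disk_gb, hourly) in instances.items():
    print(f"{name}: ${hourly / ram_gb:.5f}/GB-RAM-hr, "
          f"${hourly / disk_gb:.6f}/GB-disk-hr")
```

With numbers in this ballpark, cr1.8xlarge wins on cost per GB of memory while hs1.8xlarge wins on cost per GB of disk, so optimizing for one resource forces you onto a worse ratio for the other.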
Heap’s formula is an easy and intuitive way to estimate pricing for Amazon Web Services. Can it be applied to other cloud services?
iRevolution crossed the 1 million hits mark in 2013, so big thanks to iRevolution readers for spending time here during the past 12 months. This year also saw close to 150 new blog posts published on iRevolution. Here is a short selection of the Top 15 iRevolution posts of 2013:
I’ve been invited to give a “very provocative talk” on what humanitarian response will look like in 2025 for the annual Global Policy Forum organized by the UN Office for the Coordination of Humanitarian Affairs (OCHA) in New York. I first explored this question in early 2012 and my colleague Andrej Verity recently wrote up this intriguing piece on the topic, which I highly recommend; intriguing because he focuses a lot on the future of the pre-deployment process, which is often overlooked.
I’m headed to the Philippines this week to collaborate with the UN Office for the Coordination of Humanitarian Affairs (OCHA) on humanitarian crowdsourcing and technology projects. I’ll be based in the OCHA Offices in Manila, working directly with colleagues Andrej Verity and Luis Hernando to support their efforts in response to Typhoon Yolanda. One project I’m exploring in this respect is a novel radio-SMS-computing initiative that my colleague Anahi Ayala (Internews) and I began drafting during ICCM 2013 in Nairobi last week. I’m sharing the approach here to solicit feedback before I land in Manila.
The “Radio + SMS + Computing” project is firmly grounded in GSMA’s official Code of Conduct for the use of SMS in Disaster Response. I have also drawn on the Bellagio Big Data Principles when writing up the ins and outs of this initiative with Anahi. The project is first and foremost a radio-based initiative that seeks to answer the information needs of disaster-affected communities.