Yoda: Post-Google Internet? Recursive InterNetwork Architecture (RINA) UPDATE 1

Advanced Cyber/IO

Tip of the Hat to Alert Reader.

An alternative (post-Google) Internet appears to be emerging. It is called the Recursive InterNetwork Architecture (RINA). It is more secure and reliable than the current Internet, which is based on the TCP/IP protocol suite. Dr. John Day of Boston University invented it.






Alert Reader adds:

Network architecture research: TCP/IP vs RINA


In early 2013 I had the good fortune of going to Barcelona for a
workshop on the Recursive InterNetwork Architecture (RINA), hosted by
i2CAT, produced by the Pouzin Society, and presenting work done under
the IRATI programme. RINA is a radical alternative to the incumbent
TCP/IP paradigm; the gulf is as wide as that between chemistry and
alchemy.

The intellectual thrust of the event was provided by John Day, a
veteran of the early ARPANET, and author of one of the most brilliant
books on computer networking ever written. The workshop was extremely
educational, and I would like to share a few of the highlights, to the
best of my own meagre understanding.

The Internet is the Flat Earth network

To make sense of RINA’s very existence, you have to first understand
its context, and the state of networking today. The Internet and
TCP/IP are a product of a complex political, economic, social and
technical process. The triumphant “standard story” of the open packet
Internet vanquishing the closed evil telco circuit empire is
incomplete, misleading, or just simply wrong. There are no original
stone tablets on which these specifications were handed down from the
mountain: it is an entirely man-made networking world.

Why is TCP/IP dominant today? Try a free government-funded ISP, a
gratis networking stack that didn’t need to earn its keep, a
totalitarian approach to networking where everything has to be on IP,
ongoing wars between IBM and telcos that hobbled better-engineered
rivals, and a whole bunch of political manoeuvring (read the article
and pay special attention to the words ‘or had other motives’). Don’t
take the victors’ PR at face value! TCP/IP is not derived from deep
foundational principles in the same way that computation is anchored
in the work of Gödel, Turing, Church and von Neumann.

Indeed, Internet Protocol is “Bandwidth Division Multiplexing”! It’s
just the new TDM, but with the time/space coin flipped over and a
mirror set of issues. Rather than great flow isolation but weak
multiplexing gain, we instead get weak flow isolation and great
multiplexing gain. What we want is both to be great! (This is indeed
possible – see my previous newsletter “Network of Probabilities”.) We
can now see how the flat Internet model is fundamentally constrained
and flawed. The existence of a working alternative, one that allows us
to peer over the networking horizon, proves that there are other ways
of seeing the world. However, rather than being round, the networking
world is recursive.

The Internet is not an inter-net

Our networking world is a product of trial and error. Unfortunately,
there are a lot more errors than we would like. In the process of its
birth, the Internet lost a layer, and ceased to be an inter-net. There
are no inter-network gateways that hide the implementation of one
network from the next. The most basic level of separation and
abstraction is missing: the Internet is not an inter-net, but a
concatenated network of networks.

As Day notes, that makes the Internet more like DOS than Windows. Sure
it’s a ‘success’ – and so was DOS. That doesn’t mean it’s the last
word or the end of the technology journey. You can see a summary of
these arguments in Day’s presentation “How in the Heck Do You Lose a
Layer!?” [PDF], or in the paper “Is the Internet an unfinished demo?”

What lies behind the Internet is an unconscious belief that networks
deliver packets between computers. This is obvious, right? The problem
is, it sees networking as a mechanistic activity, and fails to capture
its true nature as a form of distributed computing that is all about
moving information between computing processes, not network
interfaces.

You can see this play out in the way IP only partially delivers data,
as it addresses network interfaces, not the true destination
application process. This absence of a separating layer between
networks is the outcome of a basic mis-categorisation of what networks
are. It ends up resulting in complex hacks at every stage to fill in
the gaps and compensate for these errors.

Increasingly the work of the IETF and similar bodies is to create new
hacks to deal with the side-effects of problems from the old ones!
This presumes that the hacks work: packet fragmentation in IP has
never worked, and new hacks (Codel is the latest and greatest)
introduce unknown and unforeseen new hazards and failure modes.

Key features of RINA

RINA takes the polar opposite approach of ‘rough consensus (and
groupthink) and working code (with unknown failure modes)’. RINA is a
return to the fundamentals of networking architecture, based on strong
invariant design principles, and a rigorous and scientific approach to
cause and effect.

The core insight behind RINA is the observation of a simple recurring
pattern in all of distributed computing. “Communicate this for me from
here to over there” is a ‘what’, which is then followed by a bunch of
common functions that are the ‘how’. Those ‘how’ functions relate to
dividing the data stream up into datagrams and reassembling them. That
can be done in any way the lower layer sees fit, subject to the
contract it has with the upper layer.
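Those ‘how’ functions – dividing a stream into datagrams and reassembling them – can be sketched in a few lines of Python. This is an illustrative toy under my own naming, not RINA code; it only shows the contract: the upper layer hands over a stream, and gets the same stream back however the lower layer chose to carve it up.

```python
def segment(data: bytes, mtu: int) -> list[tuple[int, bytes]]:
    """Split a byte stream into numbered datagrams no larger than mtu."""
    return [(seq, data[i:i + mtu])
            for seq, i in enumerate(range(0, len(data), mtu))]

def reassemble(datagrams: list[tuple[int, bytes]]) -> bytes:
    """Rebuild the stream, tolerating out-of-order arrival."""
    return b"".join(chunk for _, chunk in sorted(datagrams))

# The 'what' only cares that what goes in comes out, even if the
# datagrams arrive in reverse order:
frags = segment(b"communicate this for me", mtu=8)
assert reassemble(list(reversed(frags))) == b"communicate this for me"
```

How the lower layer sizes, orders, or retransmits the datagrams is its own business, subject only to this contract.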

Network nodes at a shared layer can collaborate on the ‘how’ as part
of a ‘team’ structure (the Distributed Inter-process communication
Facility or DIF) which provides services to the ‘what’ (Distributed
Application Facility or DAF). It’s all unfamiliar and confusing, until
you see its simplicity and beauty. There is just a single layer that
recurses over and over, at different scopes, as we share distributed
state by copying information. The very thing that is missing from the
Internet – inter-network gateways – is actually the defining
characteristic of how scalable distributed computing should work.
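That single recursing layer can be caricatured in Python. The sketch below is a toy model under my own naming (a real DIF does far more: enrolment, flow allocation, routing); it shows only the shape of the recursion, where each scope offers the same interface to the scope above it and delegates to the wider scope beneath.

```python
class DIF:
    """A layer offering IPC to the layer above, using the layer below.
    The same structure recurses at every scope."""
    def __init__(self, scope: str, lower: "DIF | None" = None):
        self.scope = scope
        self.lower = lower          # wider-scope DIF underneath, if any

    def deliver(self, payload: str) -> str:
        # The 'how': this layer may encapsulate any way it likes,
        # provided it honours its contract with the layer above.
        framed = f"[{self.scope}|{payload}]"
        return self.lower.deliver(framed) if self.lower else framed

backbone   = DIF("backbone")
provider   = DIF("provider", lower=backbone)
datacentre = DIF("dc-fabric", lower=provider)
print(datacentre.deliver("hello"))
# nests one envelope per scope: [backbone|[provider|[dc-fabric|hello]]]
```

There is no fixed stack of five or seven layers here; you instantiate as many scopes as the problem needs, and each is the same kind of thing.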

Benefits of RINA over TCP/IP

The number of distinct protocols and mechanisms required to deliver
equivalent core functionality in TCP/IP and RINA compares as follows:

Protocols: Internet – 15; RINA – 3
Non-security mechanisms: Internet – 89; RINA – 15
Security mechanisms: Internet – 28; RINA – 7

The simplicity speaks for itself. The benefits of the RINA approach
are numerous:

Scalability. The recursive structure scales indefinitely. No more
router table size explosion.
Security. Each layer is a securable container, and most of your
firewalls, session border controllers and intrusion systems disappear.
No more port scanning, much less scope for mischief.
Performance. The overheads of routing are far lower, the algorithms
can be implemented simply in silicon.
Manageability. You can swap out protocols and mechanisms at lower
layers without upper layers knowing or caring. Reconfigure your data
centre whilst it is running!
Flexibility. You can implement any and all QoS mechanisms within the
architecture, not just ‘best effort’, and (if the mechanism supports
it) create a composable trading system for allocating resources
according to any policy you see fit.
Reliability. Multi-homing goes from being complex to trivial.
Reliability is ordinary, not outrageously hard.
Mobility. No more complexity to address mobility as a special case: it
falls straight out of the architecture. You can shred a lot of your
3GPP standards, too.
Cost. No more hacks-upon-hacks. This is the minimal ‘necessary and
sufficient’ amount of functionality needed.

The journey from theory to practice

The technology being built using the RINA approach is currently at an
early stage. There are prototype demos that work in the lab, being
built by multiple teams. Many questions and issues are still being
thrown up. A lot of PhD theses remain to be written. (Who will write
the last ever PhD thesis on optimising TCP, I wonder, and when?)

So there remains a lot of work to do to take RINA from the laboratory
demos now in progress to commercially viable products. We will have
to learn how to wrap TCP/IP networks, bridge these technologies, and
offer a viable escape route from the mistakes of the past. The early
markets still need to be identified. I can think of some examples, but
maybe I should talk to a patent lawyer before I tell you!

Perhaps the hardest thing for RINA will be to fully escape the failed
“network alchemy” approach of TCP/IP and fully adopt “network
science”. That means using algebra to model the entirety of the
success and failure modes of the system. This deeply contrasts with
the current approach of “think of an algorithm, try it, tinker around
a bit more, think of a theory to justify it, run it in a simulator,
and extrapolate those specific results to be a general truth in the
real world”.

If you cannot spot the logical leap, then consider these sentences:
“The sky in Arizona is blue, so the sky is always blue everywhere.” –
“This TCP/IP algorithm works now and here, so this TCP/IP algorithm
works always and everywhere.” Much networking research is riddled with
such basic methodological flaws.

Some controversial conclusions

There are some immediate – and undoubtedly controversial –
consequences of this work on RINA.

The first is that IPv6 is a waste of time and money. It is the wrong
answer to the wrong question. It fails to tackle the fundamental
problems of Internet Protocol: addressing the wrong thing (interfaces,
not applications); tightly coupling the whole system; confusing naming
and addressing; perpetuating hacks like DNS and Mobile IP to paper
over the gaps; and a host of other sins condemning us to networking
purgatory. Indeed, IPv6 will create a whole new slew of performance,
security and implementation problems we have yet to fully experience.
The absence of user benefit explains the slow take-up.

The long-term future of the Internet, without a “scientific networking
revolution” is a gradual increase in complexity and cost, and gradual
decrease in performance and security. There’s no sudden cliff and
collapse. Whilst everyone admires the elaborate baroque architecture,
the foundations are missing, and the cost of pinning the edifice
upright keeps rising. The current Internet is a digital Venice:
fabulous, fancy and flawed as a long-term solution to the needs of a
modern civilisation.

Welcome to Networking Science

This was perhaps the most profound professional event I have ever
attended. I was fortunate enough to be asked to present an abbreviated
version of my “Lean Networking” presentation. In the audience was John
Day, my colleague Neil Davies, as well as Louis Pouzin – the inventor
of the datagram (aka packet). It very much felt like presenting to
Feynman, Bohr and Oppenheimer – an experience I shall not forget in a
hurry.

Indeed, those conversations reminded me of this Wikipedia quote:
Feynman was sought out by physicist Niels Bohr for one-on-one
discussions. He later discovered the reason: most of the other
physicists were too in awe of Bohr to argue with him. Feynman had no
such inhibitions, vigorously pointing out anything he considered to be
flawed in Bohr’s thinking. Feynman said he felt as much respect for
Bohr as anyone else, but once anyone got him talking about physics, he
would become so focused he forgot about social niceties.

The design principles behind RINA [PDF], plus the 3 basic laws of ΔQ,
are effectively the foundational concepts of an emerging Networking
Science. RINA is the necessary and sufficient instantiation of those
design principles into an architecture. Likewise, Contention
Management (CM) is the necessary and sufficient means of implementing
control over allocation of ΔQ.

RINA requires letting go of a “flat (Earth)” model of networks; CM
requires letting go of a “work” model of networks. Both failed
paradigms are unconscious anthropomorphic models of packet systems –
“beads on a string” – that treat packets as if they were physical
objects, and misapply the mathematics of the physical world to a
virtual one.

Letting go of both sets of false belief — at the same time — is
something very few people have contemplated thus far. Some distributed
computing nuclear arms technology is about to enter a conventional
networking war. My personal plan is to open a high-end arms dealership
for exotic means of slaying the competition. You can be sure that only
the finest quality of intellectual weaponry will be on offer.

Alert Reader says:

In TCP/IP, IP addresses identify an interface. For example, if you
have three network cards in your desktop computer, then each card has
an IP address assigned to it, either manually or automatically through
DHCP. In RINA, it is nodes that are named, not interfaces. RINA also
does away with port IDs, which are another kind of interface; instead,
RINA has application names.
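The contrast can be made concrete with a toy model (illustrative only, not real APIs of either stack): a TCP/IP destination couples an interface address with a port number, so a multi-homed host has several distinct identities, while a name-based scheme names the node and the application once and leaves the choice of interface to the network.

```python
# Toy contrast: addressing interfaces vs naming nodes/applications.
ip_view = {
    # In the IP model, each NIC on the same desktop has its own identity.
    "eth0":  "192.168.1.10",
    "eth1":  "10.0.0.7",
    "wlan0": "172.16.4.2",
}
# A TCP/IP destination binds an interface address to a port:
tcp_dest = (ip_view["eth1"], 443)   # stale if traffic must move to eth0

rina_view = {
    # One node name and one application name; interfaces stay hidden.
    "node": "alice-desktop",
    "application": "web-server",
}
# If eth1 fails, tcp_dest names the wrong thing;
# rina_view names nothing that changed.
print(tcp_dest, rina_view["application"])
```

The point of the toy is only that the IP identifier changes when the path changes, while the name-based identifier does not; that is the root of the multi-homing and mobility complexity discussed later in this piece.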

The current Internet has a flat global address space. It really isn’t
a network of networks: if it were, each network would have its own
namespace. In RINA, applications do not know or have access to node
names, and all connections are authenticated. This makes IP spoofing
very difficult. Hackers cannot attack what they cannot see. Even if a
system is compromised with malware, the malware will have no way to
communicate outside of the system.

System administrators are able to secure applications and operating
systems, but can do very little to secure the network. If you have
been keeping up with the news, you will have noticed several major
data breaches involving banks, hospitals, insurance companies, and
credit rating agencies. If we had RINA, I believe these breaches could
have been prevented.

Alert Reader Adds:

There are other technologies that are also pretty interesting that are
“open source”. There is RISC-V which is an open source instruction set
architecture. This is an alternative to x86 and ARM instruction sets.
This is a project that was started by Dr. David Patterson of U.C.
Berkeley. He wrote the book on computer hardware architecture. In a
computer, there may be several micro-processors and micro-controllers.
There is at least one on the motherboard. One in the hard-drive. One
in the graphics card. And one in the network card. It would be better
if all of these chips had the same instruction set architecture from
both a design and cost point of view. Having an open standard
instruction set architecture simplifies the design of a computer. In
computing, complexity is the enemy of security.

Regarding operating systems, there is Redox OS. This is a Linux-like
operating system with a micro-kernel as opposed to a monolithic
kernel. Operating systems that are micro-kernel based are
more modular than monolithic ones. This means that the various
components of the operating system have been separated and are
independent of the other components of the operating system. The
components include the kernel itself, the drivers, the network
stack (the TCP/IP suite), and administration tools such as the shell and
terminal. If any one of these components crashes it will not take down
the entire operating system. They can be respawned as new processes.
There have been other attempts at bringing micro-kernel based
operating systems to the market/public but Redox is the most promising
for two reasons. Firstly, one of the design goals of Redox is to be as
compatible with Linux as possible. Secondly, Redox is based on the
Rust programming language, a systems programming language like “C”
that can check for errors while you are writing the code.
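The respawn behaviour described above can be sketched as a minimal supervisor loop in Python. This is an illustration of the general idea only – Redox implements component isolation quite differently, in Rust, at the kernel boundary – and the example component command is hypothetical.

```python
import subprocess

def supervise(cmd: list[str], max_restarts: int = 3) -> int:
    """Run a component; if it crashes (non-zero exit), respawn it.
    The supervisor itself - like a microkernel - keeps running."""
    restarts = 0
    while True:
        code = subprocess.run(cmd).returncode
        if code == 0:
            return restarts          # clean exit: component finished
        restarts += 1
        if restarts > max_restarts:
            raise RuntimeError(f"{cmd[0]} keeps crashing")
        print(f"component crashed (exit {code}); respawning")

# e.g. supervise(["python3", "network_stack.py"])  # hypothetical component
```

A crashed driver or network stack is just a process to be restarted; only a fault in the tiny supervisor (the kernel) takes the whole system down.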

Lately, Linux has become less Linux-like. Most of the major
distributions have adopted Systemd starting with Redhat. Systemd does
not adhere to the Linux/Unix philosophy and lacks transparency. There
are a few Linux distributions that are trying to avoid Systemd, but it
is becoming more difficult to do as the Linux ecosystem is slowly
transformed by Systemd proponents.

Alert Reader Adds

This is related to RINA in the sense that RINA provides for a more
stable, reliable, and secure infrastructure than TCP/IP. Any type of
business depends on trust, whether physical or virtual. TCP/IP cannot
provide that trust. I think that we need to implement RINA
first and then cryptocurrencies, cloud computing, software defined
networking, network function virtualization, and the internet of
things. Software defined networking and network function
virtualization are fancy ways of saying remote system administration
and the automation of tasks related to system administration. To a
large organization with several thousand servers, SDN and NFV are a
boon. TCP/IP is broken beyond repair.

The following interview provides more insight.


A new kind of network: RINA progress update

An interview with Dr Eduard Grasa, i2CAT

Recursive InterNetwork Architecture (RINA) is a “return to
fundamentals” redesign of data networking. (For an introduction, see
my 2013 article “TCP/IP vs RINA”.) I interviewed Eduard Grasa of
Fundacio i2CAT, a research centre in Barcelona. Eduard is a member of
the European network research community who is actively leading
cutting-edge RINA research.

[Disclosure: My associates at Predictable Network Solutions Ltd are
part of the PRISTINE consortium. I am on the external advisory board
of the ARCFIRE project.]

MG: What is i2CAT and what is your role?

EG: The goal of i2CAT is to perform innovation and technology
transfer within the Internet infrastructure and applications space. We
bring together universities, companies, government and citizens. Our
focus is more on applied research, innovation and prototype
development, rather than basic research (although there are some
exceptions).

We have lots of freedom to explore new ideas. We decided to get
involved in the RINA initiative, since it has the kind of long-term
big impact that we seek. My personal involvement in RINA came after I
completed my PhD here at i2CAT. I subsequently read John Day’s book on
the fundamentals of networking, and knew i2CAT had to be involved in
RINA somehow. Right now we are a great team of four people dedicated
to RINA R&D activities.

Who should care about RINA and why?

RINA should interest users, operators, vendors and policy makers. All
of these stakeholders should be concerned about society having a
reliable underlying platform for distributed computing.

At the end of the day, networks are just an enabler, not an end goal
in themselves. They help applications to deliver experiences of value
to users. RINA is part of a process of deepening our understanding, so
we can make networks as simple as possible. This frees up resources
for more and better applications.

At present the people who are most interested in RINA are network
researchers (although we see growing interest from funding and
standards bodies). They have been working with TCP/IP for 40 years,
despite its inflexibility. However, TCP/IP has been taken as being a
general framework, whereby all problems have to be solved within an
“all-IP” approach: Internet of Things, telephony over packet data,
e-health, etc.

What’s the attraction of RINA to network researchers?

At present, each solution has to be developed for a specific context.
There is a lot of pain as the supposedly general framework of TCP/IP
doesn’t fit their application needs. As a result, they offer proposals
for improving TCP/IP. This makes it ever more complex over time.

A different framework like RINA makes it easier to transfer general
concepts to the real world. As it is a more general framework, it
helps to reduce complexity. This means fewer bugs and fudges. So the
real value of RINA is as a means of solving real-world problems of
reliability, performance, capability and cost.

With TCP/IP you can combine RFCs to get point solutions to be
RINA-like; but when you put everything together you get an expensive
mess. Our working hypothesis is that RINA is indeed a general theory
of computer networking.

For instance, there are innovative researchers working on the ∆Q
performance calculus and resulting contention management technology.
That technology was first implemented in the TCP world. Its
developers are
looking at how it could have a better and simpler implementation in
the RINA world. This is a win for everybody.

At what stage of development is RINA?

To gauge the development of RINA you have to differentiate the overall
general reference model and its specification from specific
(prototype) implementations. We are architects responsible for that
general model. As researchers we are not professional implementers of
products for sale and public use.

What motivates us is how RINA, being much newer than TCP/IP, offers a
“playground” for new ideas. With the overall specification we have
made good progress, advanced enough to demonstrate interoperability
between RINA implementations. This is comparable to “vanilla” TCP/IP
internetworking, with more advanced experimental specifications being
currently drafted in European research projects.

We currently have three RINA prototypes:

A software-based simulator
An academic, Java-based research implementation by Boston University
A Linux-based implementation here in Europe funded by the EU

One way of measuring RINA’s progress is through “technology readiness
levels” (TRLs). These run from one (merely a concept never tried)
through to ten (a fully matured market). TCP/IP is clearly at TRL10.
Without the aim of being super-precise, but to provide some references
to the reader:

Software Defined Networking (SDN) is TRL9,
Network Function Virtualisation (NFV) around TRLs 7-8, and
some technologies labelled as “5G enablers” are around TRLs 4-5.

Our current work on RINA prototypes will soon reach TRL5. Once we’ve
done the demonstrations at scale over the next couple of years, it
will reach TRL6.

The EU is backing RINA through its R&D investments. What work are
those projects undertaking?

There are three projects:

IRATI (2013-2014): This was a RINA implementation for Linux, doing
data networking over Ethernet (bypassing IP). The use case was to
apply RINA at the data centre space. It’s an open source prototype
that continues to be used.
PRISTINE (2014-2016): This is a bigger project than IRATI. It is
looking at the programmability of RINA, and how a single architecture
offers solutions to a number of areas: congestion management, resource
allocation, routing, security, and network management. It also
expands the number of use cases: SaaS-style cloud networking, telco
applications (e.g. NFV), and distributed clouds. There is an SDK and
an improved prototype, so more people can use it without needing to
take a PhD first.
ARCFIRE (2016-2017): This latest project has two main goals. Firstly,
to take the prototype to TRL6. RINA is then a proven general design
for delivery of services on any access technology. Secondly, we show
that RINA is a simpler design for operator networks. This is framed as
a “5G” goal, but we see it as broader: a design for a better kind of
network.

The EU has dedicated pots of funding for technologies that are seen as
“strategic”, such as 5G, SDN, and NFV. RINA is not yet at that
level; it is one of a number of point investments. That said, it is
closely related to SDN and 5G. We are raising its visibility to make
the EU more aware of its potential impact and strategic importance.

What are the benefits that RINA has demonstrated so far?

So far RINA has been demonstrated through concept analysis,
simulations, and proofs of concept at a small scale. We know that it
offers a complexity collapse in designing and building networks. This
brings down the skill level required to design and operate the network.

There are specific benefits in every aspect of networking. For
example, take security, where there are many inefficiencies
contributing to the insecurity of TCP/IP. This framework exposes
network addresses to applications, so it’s easy to get information on
the internals of an IP-based network. The mechanism to attack the
network is built-in by design! In contrast, RINA is a set of
completely isolated containers that can communicate in a controlled
way. How those associate can be secured, hiding the network’s internal
structure from applications.

Also, as one well-known security expert says, “complexity is the enemy
of security”. If you can’t fully understand a system and reason about
it, you can’t know where the vulnerabilities lie. Just look at the
number of new RFCs and protocols coming out of the IETF each year: it
just goes up and up. This results in a very complex security problem
due to the lack of any clear structure.

Do you have any other examples of RINA’s benefits?

Yes. Another example is bounding routing table sizes. The Internet
is misnamed: it is really a “catenet” that concatenates networks, not
a true internetwork with intermediate gateways. That means it has a
flat structure, so internal and inter-network routing are done in a
single layer. All applications also have to be in that single layer.

In order to scale, you have to make that layer get bigger, so routing
tables go up in size. This makes routers more expensive and brittle.
Whilst people may see the Internet as being big now, projections show
it’s tiny compared to what we want to achieve. Yet we already have a
lot of scaling problems.
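The scaling argument can be put as back-of-envelope arithmetic. This is my own toy model, assuming equal-sized scopes: a flat layer of N destinations needs on the order of N routing entries per router, while a recursive structure of k levels, each of scope m with m^k = N, needs only about k·m.

```python
import math

def flat_entries(n_destinations: int) -> int:
    """Flat single layer: every destination may appear in the table."""
    return n_destinations

def recursive_entries(n_destinations: int, levels: int) -> int:
    """k levels of equal scope m, with m**k = N: about k*m entries."""
    m = math.ceil(n_destinations ** (1 / levels))
    return levels * m

n = 1_000_000
print(flat_entries(n))          # 1000000 entries in one flat layer
print(recursive_entries(n, 3))  # 300: three levels of scope ~100 each
```

The toy ignores real-world aggregation tricks on both sides, but it shows why a recursive structure keeps per-router state roughly logarithmic in the size of the whole network rather than linear.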

We see proposed solutions like IPv6, but these all have the same
fundamental problems, such as trying to scale a single layer. The
constraints of the real world make it hard to deploy those
technologies in an economic manner.

Mobility and multi-homing are also areas where RINA adoption will
bring considerable simplification, again due to the current Internet’s
incomplete naming and addressing scheme. One of the results that
ARCFIRE will be showing is how a RINA mobile access network is more
efficient than current state-of-the-art designs (such as LTE).

What are RINA’s benefits that you anticipate in future?

I see RINA offering increasing simplicity and lower overhead, with
more security. Networks will over time become ever more responsive to
application needs. That support is missing today, as applications lack
a clean way in which to express their requirements.

RINA’s generality also means we can reason in abstract about networks,
so it is far easier to secure. We can perform more sophisticated
architecture and policy changes, and make their outcome predictable.
This means we can make the network both more dynamic and cheaper at
the same time.

Indeed, RINA is a framework to address many efficiency issues, such as
power consumption. One way to get to low power use is to put lots of
effort into electronics design (e.g. hardware with less energy
consumption). The other way is to make more efficient use of what
already exists. RINA’s smaller routing tables allow us to use 1/10 to
1/20 of the memory.

What are the technical challenges facing RINA going forward?

RINA is a network architecture, not a protocol (i.e. a style of
construction, not a building). It tries to separate the general theory
(“mechanism”) from specific solutions (“policies”). The key problem is
to get this abstraction right. In doing so, we face two pressures.

One is to go faster, so as to make RINA visible soon and to get
critical mass. The other is to slow down, as we can’t grow too fast.
The risk is we get divergent solutions, or “polluted” versions of RINA
that aren’t right due to invalid assumptions. So we need to balance
these desires, so as to keep this abstraction “neat”.

We must also understand what sort of policies are the best ones for
specific problems. So I might be an engineer who covered the basic
concepts in university. I now need to apply those ideas to my data
centre. The RINA community can offer me the right mechanisms, policies
and trade-offs, and caution me against unwise or infeasible choices.

So we need practice at how to instantiate the general theory to the
point solutions to develop this body of experience and expertise.
After all, if you need to update the general theory when you
instantiate it, you don’t yet have a general theory! We certainly
don’t want to recreate another mess like TCP/IP.

That means RINA is not a closed end state, as that’s not science. It
will and must evolve since it is a working hypothesis for a general
theory of computer networking. At the end of the day, we are not here
to promote RINA, but rather to understand how computer networks
actually work. That’s real science! In ten years’ time, our progress
will be measured by how well we learn to see things differently.

What are the non-technical barriers that RINA faces?

The key problem is inertia: TCP/IP has a whole industry behind it,
with a lot of investment and momentum. Changing minds is complicated.
People in universities who are learning computing get taught IP as
“the” way we do things; it’s all Cisco routers and subnets. They don’t
get taught abstract models, or how to question these approaches.

When you come along and say “this is wrong for X, Y, Z reasons!”,
people find it a big surprise. “Come on, how come it’s wrong? We have
WiFi and the Internet!”, they say. So we need a critical mass of
people offering a credible alternative. Our job is not to convince
people for RINA’s own sake, but to offer them the deep understanding
as to why the current model is a money pit. Once they understand that,
then they will join our movement.

Another challenge is that you need the right people who can think
beyond the short term. We have to develop a new theory, with
supporting protocols and implementations. It’s so easy to carry on
with the existing approach, even if it doesn’t take us to where we
want to go.

How does RINA relate to SDN, network virtualisation and 5G?

With SDN there are as many (if not more) SDNs as there are engineers
writing SDN code! In contrast, RINA is a single flexible framework
that can adapt to any application’s requirements. It is the “perfect”
incarnation of SDN, and also much cheaper.

RINA makes the functions of a layer programmable, but only what’s
important. Meanwhile, SDN can originate yet another complexity mess:
it’s the “machine code” of network programming, while RINA is like a
high-level language that makes it “safer” and “simpler”. There are
more restricted definitions of SDN that are basically platform
implementation approaches that separate out packet processing “faster”
parts (e.g. forwarding done in silicon) from the rest (route
computation done in general x86 hardware). RINA routers can leverage
the same implementation strategy where it makes sense.

Virtualisation in RINA is both nowhere and everywhere: the Distributed
IPC Functions (DIFs) are the layers that abstract (and hence
virtualise) the underlying connectivity. Today you get taught the
five-layer model in network engineering courses. When you have those
fixed layers you virtualise each one independently. But if you have
(only) as many layers as you need, you don’t need to define
“virtualisation”; it’s already built in as a first-class part of the
architecture.

5G for some people is just a new radio interface for mobile access
networks, maybe with some updates in the mobile network core. Seen
more holistically, it is about being able to support any application
on any access technology with just one network infrastructure. For
example, a polyservice network is a potential incarnation of 5G. With
flexible layers we can adapt the policies of the layers (flow control,
routing etc.) to the nature of the bearer and/or underlying layers.
This is what we are looking into in the ARCFIRE project.

Where do you see the early commercial applications of RINA and when?

Ah, that needs a crystal ball that nobody has! Nonetheless, the RINA
community sees two broad approaches to its initial adoption: On top of
IP, and under IP. (RINA can also be deployed alongside IP.)

For “under IP”, the data centre is one example. We can also replace
telecoms protocols like MPLS. This lets us carry IP (as well as many
other network technologies) on top of one well-behaved network with
traffic engineering and security. Another example is Metropolitan Area
Networks (MANs) offering carrier Ethernet services. We can substitute
for their “queue and mark” structures.

For “on top of IP”, RINA acts as a virtualisation solution, with
network overlays on top of an IP substrate. You can add as many layers
as you want to do services like VPNs, SD-WAN, or distributed cloud to
the home.

I would envisage deployed and working commercial applications (at a
small scale) around 2020, using RINA for strategic competitive
advantage.

Who should get involved, and what’s the next step?

RINA opens up new “green field” markets, and is of immediate interest
to network researchers, operators, vendors and standards bodies. We
are coordinating these institutions through an informal group, the
Pouzin Society (named after Louis Pouzin, a co-inventor of packet
switching).

The role of the Pouzin Society is to disseminate the science and
technology to the R&D community. It also curates the RINA
specifications, so you can think of it as an embryonic standards body.
As we grow and reach critical mass, we will eventually reach a size
that justifies formal funding to do these tasks in a more professional
way.

A specific target group at present are standards development
organisations (SDOs). We are starting to work with some of them,
making the point that RINA is a better platform for new ideas than
TCP/IP. We are in the process of engaging ISO and ETSI as our initial
targets.

In order to make the first step, I would encourage readers to visit
the Pouzin Society website, where we have a number of videos and
papers. We are planning a workshop in Europe around October.

See Especially:

Below is a presentation by Dr. John Day about RINA.


This is a presentation by Eduard Grasa about RINA.


This is a presentation by Vladimir Vesely about RINA.


See Also:

If you are interested in RINA, you might also be interested in the
economists Henry George and David Ricardo.









