Topic: Security Vulnerabilities: Up and Down the OSI Stack


Just to review, the OSI stack model has seven layers. They are: (1) physical, (2) data link, (3) network, (4) transport, (5) session, (6) presentation, and (7) application.

By default, when thinking about network security, there is something of a tendency to focus on issues at Layer 3.

However, in reality, we need to look both up and down the stack to address the security risks we face today.

Let's begin by looking down the stack.

1) Down the OSI Stack

It is a fundamental rule that higher layers cannot be secured unless the lower layers are also secure. Yet in recent years there has been limited attention to insecurities at the physical and data link layers, despite changes in network operational practice such as nation-wide layer two networks and national and regional optical networks.

Currently known and familiar threats at the lower levels of the OSI stack include ARP-spoofing man-in-the-middle (MITM) attacks at layer two, and physical-layer attacks such as passive optical taps or the interception of wireless network signals. While these attacks are well known, little research is currently focused on addressing them. That needs to be corrected.
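To make the layer two threat concrete, here is a minimal sketch of the detection logic behind an ARP-spoofing monitor: keep a table of observed IP-to-MAC bindings and flag any reply that silently rebinds an IP address to a new hardware address. The capture layer (a raw socket or libpcap feed) is assumed and omitted; only the illustrative detection logic is shown.

```python
# Sketch of ARP-spoof detection: alert when an IP address's hardware
# binding changes, the signature of a classic ARP MITM attack.

class ArpWatcher:
    def __init__(self):
        self.bindings = {}  # ip -> first MAC observed for that ip

    def observe(self, ip: str, mac: str):
        """Record an observed ARP reply; return an alert if the binding moved."""
        known = self.bindings.get(ip)
        if known is None:
            self.bindings[ip] = mac
            return None
        if known != mac:
            return f"possible ARP spoof: {ip} moved {known} -> {mac}"
        return None

watcher = ArpWatcher()
watcher.observe("192.0.2.1", "aa:bb:cc:00:00:01")          # first sighting
alert = watcher.observe("192.0.2.1", "de:ad:be:ef:00:02")  # rebinding
print(alert)
```

A production monitor would also handle legitimate rebindings (DHCP churn, NIC replacement), which is part of why this remains a research problem rather than a solved one.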

Less familiar attacks which may be relevant to the lower levels of the OSI stack (such as the physical layer) over the next five to fifteen years include:

Addressing those known and other reasonably anticipated threats will require a substantial program of additional research, including:

2) Up the OSI Stack

Even as we look "down the stack" to ensure that all higher layers are built upon a sound foundation, we note that there is also increased miscreant interest "up the OSI stack," particularly at the application layer.

As noted by the SANS Institute in its Top 20 Security Risks report, nearly half of the 4,396 total vulnerabilities reported in SANS @RISK data from November 2006 to October 2007 relate to web application vulnerabilities such as SQL injection, cross-site scripting, cross-site request forgery, and PHP remote file inclusion (see ).

This change of emphasis reflects miscreant efforts to obtain sensitive financial information such as credit card numbers and other personally identifiable information; in the government and commercial sectors, the same information-centric focus appears in counterintelligence and the protection of proprietary competitive information.

Arguably, proper application of encryption to data in transit and data at rest, along with improved application development practices to eliminate things like SQL injection issues, should largely mitigate these risks, and yet we know that is not the case.
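As an illustration of the "improved application development practices" point, the following sketch contrasts string-built SQL, which is injectable, with a parameterized query. It uses Python's in-memory sqlite3 module; the table and column names are illustrative.

```python
# Sketch: SQL injection via string concatenation vs. a parameterized query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

hostile = "' OR '1'='1"  # classic injection payload

# Vulnerable: attacker input is spliced into the SQL text itself,
# so the OR clause matches every row and leaks the secret.
rows = conn.execute(
    f"SELECT secret FROM users WHERE name = '{hostile}'").fetchall()
print(len(rows))  # 1

# Safe: the driver sends the value out-of-band as data, never as SQL,
# so the hostile string is just an unmatched user name.
rows = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (hostile,)).fetchall()
print(len(rows))  # 0
```

The fix is purely a matter of development practice, which is exactly why the persistence of such vulnerabilities is as much a process and education problem as a technical one.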

Phishing, a social engineering attack on confidential data, continues to be a problem, for example. Because system integrity can be undercut by users volunteering their passwords, we need additional research into human factors so we can better understand how to keep human participants in complex security systems from serving as the "weakest link."

Similarly, SSH and SSL/TLS encryption, along with two-factor authentication (the use of both something you know, such as a password, and something you have, such as a hardware cryptographic token), should make technical credential-capture attempts largely futile. Yet we know that end-to-end strong encryption and two-factor authentication are still the exception rather than the rule, nominally because of economic and ease-of-use issues.
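The "something you have" half of two-factor authentication can be sketched with the standard TOTP construction (RFC 6238, built on RFC 4226 HOTP), in which token and server derive a short code from a shared secret and the current 30-second time step. The secret below is the RFC 4226 test key; a real deployment would provision a per-user secret.

```python
# Sketch of TOTP/HOTP code generation: HMAC-SHA1 over the counter,
# dynamically truncated to a short decimal code (RFC 4226 / RFC 6238).
import hmac, hashlib, struct

def totp(secret: bytes, timestep: int, digits: int = 6) -> str:
    msg = struct.pack(">Q", timestep)                      # 8-byte counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                             # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# In real use: timestep = int(time.time()) // 30, with the secret shared
# between the user's token and the verifying server.
print(totp(b"12345678901234567890", 1))  # RFC 4226 test vector -> "287082"
```

The cryptography is simple; as the surrounding text argues, the unsolved problems are provisioning, usability, and deployment at scale.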

We urgently need research into how we can eliminate continued reliance on simple passwords transmitted in plain text, an outdated and deeply insecure foundation security technology that is still rife across the Internet.

We also don't know how to deploy two-factor authentication at scale. Users don't want to tote a bandoleer of hardware tokens wherever they go, with perhaps one token for access to routers and other network devices, another for access to servers, and still others for commercial-sector tasks such as personal banking and stock brokerage. Federated approaches based on Shibboleth have great potential in this area, but deployment and adoption have been slow to date.

Or consider messaging security: while PGP/GNU Privacy Guard has the potential to substantially improve the privacy and integrity of a ubiquitous application (email), uptake of that technology has been virtually non-existent beyond a small number of technical elites. We need to understand how to overcome those obstacles.

We also know that spam is now rampant, constituting roughly 90% of all email, and it would not be an exaggeration to say that within five to fifteen years 99%, 99.9%, or an even greater percentage of all email may be spam unless effective countermeasures are taken.

When we reach the point where only one message in a thousand, or one in ten thousand, is "real" (non-spam), will email remain viable as a foundation collaboration technology supporting scientific research? Ironically, because email is such a familiar technology, because the sheer volume of spam is so overwhelming, and because the email environment has degraded so gradually, scant attention tends to be paid to combating spam as a research topic. Failure to reverse that tendency may cost us one of the Internet's foundation collaboration applications.
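One line of anti-spam research the text alludes to is statistical filtering. The toy sketch below scores a message with naive-Bayes word likelihoods learned from tiny hand-labelled corpora; real filters train on millions of messages, and the word lists here are purely illustrative.

```python
# Toy naive-Bayes spam scorer: compare smoothed log-likelihoods of a
# message under a "spam" model and a "ham" (legitimate mail) model.
import math

spam_corpus = ["win cash now", "cheap pills now", "win win prize"]
ham_corpus  = ["meeting agenda attached", "draft paper attached", "lunch today"]

def word_counts(corpus):
    counts = {}
    for msg in corpus:
        for w in msg.split():
            counts[w] = counts.get(w, 0) + 1
    return counts

spam_counts, ham_counts = word_counts(spam_corpus), word_counts(ham_corpus)
vocab = set(spam_counts) | set(ham_counts)

def log_likelihood(msg, table):
    total = sum(table.values())
    # Laplace smoothing so an unseen word does not zero the whole score.
    return sum(math.log((table.get(w, 0) + 1) / (total + len(vocab)))
               for w in msg.split())

def is_spam(msg):
    return log_likelihood(msg, spam_counts) > log_likelihood(msg, ham_counts)

print(is_spam("win cash prize"))        # True
print(is_spam("meeting lunch agenda"))  # False
```

Filters of this kind drove an arms race with spammers (word obfuscation, image spam), which is part of the argument that spam deserves sustained research attention rather than treatment as a solved nuisance.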


Over the next five to fifteen years, we believe substantial additional work will be needed to understand potential security threats at layers one and two, and to identify solutions which may mitigate those risks.

Substantial additional research will also be required to address application security issues; psychological and economic issues related to computing and network security; scalable deployment of strong authentication and encryption; and gradually worsening "mundane" security problems, such as spam, with potentially profound ultimate consequences.