Tuesday, September 4, 2012

In 2011 I wrote a small position paper in which I argued that IT (or ICT, if you wish to be trendy) systems are complex systems. That paper was a consequence of a risk assessment process I had to do, and it summarized what I was thinking about risk analysis at the time. I firmly believed then, as I do now, that risk analysis, as it is currently done, is not the right way to achieve IT security. It involves too many possibilities, it is too subjective, too dependent on the specific situation and environment, too slow, and there is no way to test it, let alone measure how good it is. Just to be clear, I'm not for the abolition of risk assessment, because currently it is the only thing we have, but I strongly believe that we could and should do much better.
This post is an update to that paper. I decided not to write a new version, but to add to it through this blog.
First, let me say that in the paper I missed one important component: people. People are a very important part of IT systems, strongly intertwined with them as users, administrators, even attackers. In general, any person that comes into contact with the system is part of it. I had been tinkering with that thought for some time, but after I watched Igor Nikolic's talk at TEDxRotterdam, I was certain. So, based on that, I can very confidently claim that an IT system is a complex system. Now, this may look like I'm reinventing the wheel, since it has long been known that people are the weakest link in security. But despite this fact, people and technology were treated, and still are treated, separately. Not only are they treated separately, but even individual persons and individual IT components are treated separately (as in the risk assessment process).
I'll also mention two references that I think are related and important for this topic. The first one is Complexity and Emergent Behaviour in ICT Systems. That one was written in 2004, so it beat me by 8 years. :( Ah, well, I suppose I should have done my research a bit more thoroughly. But then, after reading it, it doesn't seem to me that there is much overlap between what I'm claiming and what they do, nor are we talking about the same things. They are definitely talking about the complexity of ICT systems, but for them, ICT systems are large-scale systems. I didn't get the impression that they are talking about the information systems of companies. Overlap could happen if we are talking about large enterprises, but I'm talking about information systems of all sizes. They also talk a lot about complex systems in general, and survey the research on them.
The second reference is an analysis of supposedly emergent phenomena on the Internet: Internet Failures: an Emergent Sea of Complex Systems and Critical Design Errors?. This one is interesting because it dissects whether certain perceived behaviors are or are not emergent behavior. I agree with the conclusions of that paper, especially the one about the failure of the root DNS not being emergent behavior. :)
Friday, February 8, 2008
New Internet architecture, my take on it, no. 1
Reading all those papers about new Internet architectures simply gives me no peace. What is the solution? It is probably simple in concept, though, as always, the devil is in the details. Look at the Internet now: when packet switching was first proposed, it looked like a lunatic's idea, and now it's so normal we take it for granted without even thinking about it. So, it's a strange feeling that I'm probably looking at the solution, and thinking about it, without being aware of it.
So, let me make attempt number one!
What about building the Internet in an onion-layered style? The innermost layer, the 0th layer, forms the core and is the most trusted and protected part of the network. It is not possible for outer layers to access anything inside the inner layers (here we could perhaps take inspiration from Bell-LaPadula and similar models). The infrastructure of the Tier 1 NSPs could form this 0th layer. Each layer offers transport services to the one above it: the (N-1)-th network layer carries traffic for the N-th layer. This model would protect the inner layers from the outer ones, as the outer layers would have no access to the inner layers of the network. Something similar is already done with MPLS, but MPLS is deployed inside an autonomous system, not as a global concept.
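Just to make the rule concrete, here is a minimal sketch of how such a no-access-to-inner-layers rule could look, assuming a simple numeric trust ordering with 0 as the core. All the names here (Layer, request_transport) are hypothetical illustrations, not a worked-out design:

```python
# A minimal sketch of the onion-layer trust rule described above.
# Layer 0 is the trusted core; higher numbers are outer, less trusted
# layers. The only interaction allowed is a layer asking the layer
# directly beneath it for transport; outer layers can never reach the
# internal state of any inner layer.

class LayerAccessError(Exception):
    pass

class Layer:
    def __init__(self, level: int, name: str):
        self.level = level          # 0 = core, larger = further out
        self.name = name
        self._internal_state = {}   # never visible to outer layers

    def request_transport(self, requester: "Layer", payload: bytes) -> bytes:
        # Bell-LaPadula-like rule: only the layer directly above may
        # request service, and it sees only the transport result,
        # never this layer's internals.
        if requester.level != self.level + 1:
            raise LayerAccessError(
                f"{requester.name} (level {requester.level}) may not "
                f"use {self.name} (level {self.level}) directly")
        return self._forward(payload)

    def _forward(self, payload: bytes) -> bytes:
        # Placeholder for the layer's real switching/forwarding logic.
        return payload

core = Layer(0, "core (Tier 1)")
regional = Layer(1, "regional (Tier 2)")
access = Layer(2, "access (Tier 3)")

regional.request_transport(requester=access, payload=b"ok")    # allowed
try:
    core.request_transport(requester=access, payload=b"nope")  # two levels in
except LayerAccessError as e:
    print(e)
```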
There could be several layers corresponding to the current Tier 1, 2 and 3 ISPs, each successive layer with more and more participants and, accordingly, less and less trustworthy. Lower layers could form a kind of isolation layer between all the participants and thus protect them from configuration errors or malicious attacks. Note that this could be problematic, as it means that lower layers would not only encapsulate higher layers' traffic, but also inspect it, or assemble and disassemble it. That could be hard to do, so it's questionable whether and how it is achievable.
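The encapsulation concern could be sketched like this; the framing format below is invented purely for illustration:

```python
# Each inner layer wraps the outer layer's traffic in its own framing
# on ingress and unwraps it on egress, which means inner layers must
# be able to assemble/disassemble (and potentially inspect) what they
# carry. The frame layout is made up for this sketch.

import struct

MAGIC = 0xA5

def encapsulate(level: int, payload: bytes) -> bytes:
    # 1-byte magic, 1-byte layer level, 2-byte length, then payload.
    return struct.pack("!BBH", MAGIC, level, len(payload)) + payload

def decapsulate(frame: bytes) -> tuple[int, bytes]:
    magic, level, length = struct.unpack("!BBH", frame[:4])
    if magic != MAGIC:
        raise ValueError("not a frame for this layer")
    return level, frame[4:4 + length]

# Traffic entering the core gets wrapped once per layer it crosses...
user_data = b"GET /article"
frame = encapsulate(2, user_data)   # access layer framing
frame = encapsulate(1, frame)       # regional layer framing
frame = encapsulate(0, frame)       # core framing

# ...and unwrapped symmetrically on the way out.
for expected in (0, 1, 2):
    level, frame = decapsulate(frame)
    assert level == expected
print(frame)                        # b'GET /article'
```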
Each layer could use its own communication protocol, best suited to the purpose and environment it works in. For example, the core layer needs fast switching, as huge speeds with extremely low loss rates can be expected in the years to come, so packet formats adjusted to that purpose should be used there. The outer, user-facing layers would probably need more features, for example quality of service, access decisions and the like. Furthermore, a lossy network, e.g. a wireless network, might be used, so some additional features would be necessary.
Requests to lower layers could be communicated within the packet format itself, as ATM did: its cells had different formats when entering the network and inside the network, the so-called UNI and NNI formats.
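Here is a sketch of how an edge (UNI-like) header could be rewritten into a minimal core (NNI-like) header at the boundary; both formats and all field names are invented for illustration, not a proposed layout:

```python
# The edge (user-facing) header carries QoS and access fields, while
# the core header is stripped down for fast switching; the boundary
# node rewrites one into the other.

from dataclasses import dataclass

@dataclass
class EdgeHeader:            # richer, UNI-like format at the network edge
    src: str
    dst: str
    qos_class: int           # e.g. 0 = best effort, 7 = real time
    access_token: bytes      # edge-only: dropped before the core

@dataclass
class CoreHeader:            # minimal, NNI-like format inside the core
    path_label: int          # precomputed label, MPLS-style fast lookup
    qos_class: int

def edge_to_core(hdr: EdgeHeader, label_table: dict[str, int]) -> CoreHeader:
    # The boundary node would validate the access token (omitted here),
    # resolve the destination to a core path label, and forward a much
    # smaller header optimized for switching speed.
    return CoreHeader(path_label=label_table[hdr.dst], qos_class=hdr.qos_class)

label_table = {"content-net-7": 42}
edge = EdgeHeader("user-1", "content-net-7", qos_class=3, access_token=b"t0k")
print(edge_to_core(edge, label_table))  # CoreHeader(path_label=42, qos_class=3)
```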
We could further envision the (N-1)-th layer of the onion being used for content distribution. This layer's task would be to distribute content using services from the (N-2)-th layer. Content could be anything you can think of: different documents (OpenOffice, PDF), video, audio, Web pages, e-mails, even keystrokes and events for remote work and gaming. These are very different in nature, with probably many more yet to be invented, so this layer should be extensible. It could also take care of access decisions and the like. Note that the content layer doesn't work with parts of objects, but with whole ones. So, if a user requests a movie, the movie is completely transferred to the content network responsible for the user at their current location.
This could make servers less susceptible to attacks as they wouldn't be directly visible to the users!
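A toy sketch of these whole-object semantics, and of the indirection that hides origin servers from users, might look as follows; all names are hypothetical:

```python
# The content layer never hands out partial objects: a request pulls
# the complete object to the content node responsible for the user's
# current location before it is served. The user only ever talks to
# the local content node, never to the origin directly.

class ContentNode:
    def __init__(self, region: str):
        self.region = region
        self.store: dict[str, bytes] = {}   # name -> complete object

    def serve(self, name: str, origin: "ContentNode") -> bytes:
        if name not in self.store:
            # Transfer the *whole* object into this region first.
            self.store[name] = origin.store[name]
        return self.store[name]

origin = ContentNode("core")
origin.store["movie/blade-runner"] = b"...entire movie bytes..."

local = ContentNode("user-region")
data = local.serve("movie/blade-runner", origin)  # full copy now local
assert "movie/blade-runner" in local.store
```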
Finally, the N-th layer could be the user layer. In this layer, users connect to the network and request or send content addressed in a variety of ways. For example, someone could request a particular newspaper article from a particular date. The content network would search for the nearest copy of that content and use the core network to transfer the object to the user. Someone else could request a particular film, and the content network would find it and present it to the user.
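And a sketch of name-based requests at the user layer, with an invented naming scheme and a single-number "distance" standing in for real topology and load:

```python
# The user names the content (not a host); the content network locates
# the nearest replica, and the lower layers then transfer the object.

replicas = {
    "news/some-paper/2008-02-08/article-12": {
        "cache-zagreb": 1, "cache-vienna": 3, "origin-london": 9,
    },
}

def resolve_nearest(name: str) -> str:
    # Pick the replica with the smallest distance to the user; a real
    # design would use topology and load, not a single number.
    locations = replicas[name]
    return min(locations, key=locations.get)

print(resolve_nearest("news/some-paper/2008-02-08/article-12"))
# -> 'cache-zagreb'; the core network then carries the object from there.
```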
Just as a note, I watched Van Jacobson's lecture at Google, and this is along the lines of what he proposes.