Reading all those papers about new Internet architectures simply doesn't give me peace. What is the solution? Probably it is simple as a concept, though, as always, the devil is in the details. Look at the Internet now: when packet switching was first proposed it looked like a lunatic's idea, and now it's so normal we take it for granted and don't even think about it. So, I have the strange feeling that I'm probably looking at the solution, and thinking about it, without being aware of it.
So, let me make attempt number one!
What about building the Internet in an onion-layered style? The innermost layer, the 0th layer, forms the core and is the most trusted and protected part of the network. It's not possible for outer layers to access anything inside inner layers (maybe we could take inspiration from Bell-LaPadula and similar models here?). The infrastructure of the Tier 1 NSPs could form this 0th layer. Each (N-1)th network layer offers transport services to the Nth layer above it. This model would protect inner layers from the outer ones, as outer layers would have no access to the inner parts of the network. Something similar is already done with MPLS, but MPLS is deployed inside an autonomous system, not as a global concept.
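The access rule above can be sketched in a few lines. This is just a toy model with invented names (`LayeredNetwork`, `may_access`), not a protocol design: it encodes the Bell-LaPadula-flavored idea that a layer may only use the services of the layer directly beneath it, and can never reach deeper into the core.

```python
# Toy sketch (hypothetical names) of the onion access rule:
# layer 0 is the trusted core; higher numbers are outer,
# less-trusted layers.

class LayeredNetwork:
    """Models which layers may use which services.

    The Nth layer may only request transport services from the
    (N-1)th layer; everything further inward is invisible to it.
    """

    def __init__(self, num_layers):
        self.num_layers = num_layers

    def may_access(self, requester_layer, target_layer):
        # Outer layers see only the layer directly beneath them.
        return target_layer == requester_layer - 1

net = LayeredNetwork(num_layers=4)
print(net.may_access(3, 2))  # True: user layer uses the layer below
print(net.may_access(3, 0))  # False: user layer cannot touch the core
```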
There could be several layers corresponding to the current Tier 1, 2, and 3 ISPs, each layer with more and more participants and, accordingly, more and more untrustworthy. Lower layers could form a kind of isolation layer between all the participants and thus protect them from configuration errors, or malicious attacks. Note that this could be problematic, as it means that lower layers not only encapsulate traffic from higher layers, but also inspect it, or assemble and disassemble it. That could be hard to do, so it's questionable whether and how this is achievable.
Each layer could use its own communication protocol, best suited to the purpose and environment it works in. For example, the core layer needs fast switching, as huge speeds with extremely low loss rates can be expected in the years to come, so packet formats best adjusted to that purpose should be used there. The outer, user-facing layers would probably need more features, for example quality of service, access decisions, and the like. Furthermore, a lossy network might be in use, e.g. a wireless network, so some additional features would be necessary.
Requests to lower layers could be communicated within the format of the packets themselves, as ATM did: its cells had a different format when entering the network than inside the network, the so-called UNI and NNI formats.
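To make the ATM example concrete, here is a sketch of the two header layouts. The field widths are the standard ATM ones (at the UNI the 5-byte cell header carries a 4-bit GFC and an 8-bit VPI; at the NNI those 12 bits are all VPI), but the packing functions themselves are purely illustrative, and the HEC byte is omitted for brevity.

```python
# Illustrative packing of the first 32 header bits of an ATM cell.

def uni_header(gfc, vpi, vci, pt, clp):
    # UNI: 4-bit GFC + 8-bit VPI + 16-bit VCI + 3-bit PT + 1-bit CLP.
    word = (gfc << 28) | (vpi << 20) | (vci << 4) | (pt << 1) | clp
    return word.to_bytes(4, "big")

def nni_header(vpi, vci, pt, clp):
    # NNI: 12-bit VPI + 16-bit VCI + 3-bit PT + 1-bit CLP; the GFC
    # bits are reclaimed inside the network for a wider VPI space.
    word = (vpi << 20) | (vci << 4) | (pt << 1) | clp
    return word.to_bytes(4, "big")

# At the edge, VPI fits in 8 bits; inside the network it can be wider:
print(uni_header(gfc=0, vpi=5, vci=42, pt=0, clp=0).hex())    # 005002a0
print(nni_header(vpi=300, vci=42, pt=0, clp=0).hex())         # 12c002a0
```

The point is exactly the one made above: the same cell carries different control information depending on whether it is crossing the user-network boundary or traveling inside the network.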
We could further envision the (N-1)th layer of the onion being used for content distribution. This layer's task would be to distribute content using services from the (N-2)th layer. Content could be anything you can think of, e.g. different documents (OpenOffice, PDF), video, audio, Web pages, mail, even keystrokes and events for remote work and gaming. These are very different in nature, with probably many more yet to be invented, so this layer should be extensible. It could take care of access decisions and the like. Note that the content layer doesn't work with parts of objects, but with whole ones. So, if a user requests a movie, that movie is completely transferred to the content network responsible for the user at their current location.
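The whole-object semantics can be sketched as follows. The class and its names are made up for illustration; the point is simply that this layer's interface stores and hands out complete objects, never byte ranges.

```python
# Hypothetical sketch of the content layer's whole-object interface.

class ContentNode:
    def __init__(self):
        self.store = {}  # object name -> complete object bytes

    def put(self, name, data):
        # Only complete objects enter the store.
        self.store[name] = bytes(data)

    def get(self, name):
        # A request yields the whole object or nothing;
        # partial reads are not part of this layer's interface.
        return self.store.get(name)

node = ContentNode()
node.put("movie:casablanca", b"...entire movie bytes...")
print(node.get("movie:casablanca") is not None)  # True: whole object
print(node.get("movie:missing"))                 # None: not cached here
```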
This could make servers less susceptible to attacks, as they wouldn't be directly visible to the users!
Finally, the Nth layer could be the user layer. In this layer the user connects to the network and requests or sends content addressed by a variety of means. For example, someone could request a particular newspaper article from a particular date. The content network would search for the nearest copy of this content and use the core network to transfer the object to the user. Someone else could request a particular film, and the content network would search for it and present it to the user.
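The "nearest copy" step can be sketched like this. The topology, node names, and object names are all invented; the sketch just shows the resolution step where the content network picks the closest replica of a named object and leaves the actual transfer to the core.

```python
# Invented example data: which nodes hold each named object,
# and how far (in hops) each node is from the requesting user.
replicas = {
    "article:some-paper/2008-05-01/front-page": {"node-a": 2, "node-b": 7},
    "film:casablanca": {"node-b": 7},
}

def nearest_copy(name, replicas):
    holders = replicas.get(name)
    if not holders:
        return None  # the content network would have to search further out
    # Pick the replica with the fewest hops to the user.
    return min(holders, key=holders.get)

print(nearest_copy("article:some-paper/2008-05-01/front-page", replicas))
print(nearest_copy("film:unknown", replicas))  # None: no known replica
```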
Just as a note: I watched VJ's lecture at Google, and this is on the track of what he proposes.