Showing posts with label internet. Show all posts

Sunday, June 29, 2014

Private addresses in IPv6 protocol

It is almost common wisdom that 172.16.0.0/12, 192.168.0.0/16, and 10.0.0.0/8 are private network addresses that should be used when you don't have an assigned address, or when you don't intend to connect to the Internet (at least not directly). With IPv6 becoming ever more popular, and necessary, the question is which addresses are used for private networks in that protocol. In this post I'll try to answer that question.

The truth is that in IPv6 there are two types of private addresses: link-local and unique local addresses. Link-local IPv6 addresses, as the name suggests, are valid only on a single link, for example, on a single wireless network. You'll recognize those addresses by their prefix, fe80::/10, and they are configured automatically by appending the interface's unique ID. IPv4 also has link-local addresses, though they are not used as frequently. Still, maybe you noticed one when your DHCP didn't work and you suddenly had an address starting with 169.254.0.0/16: that was a link-local IPv4 address being configured. The problem with link-local addresses is that they cannot be used when you try to connect two or more networks. They are valid only on a single network, and packets carrying those addresses are not routable! So, we need something else.

Unique local addresses (ULA), defined in RFC4193, are closer to IPv4 private addresses. That RFC defines the ULA format and how to generate them. Basically, those are addresses with the prefix FC00::/7. These addresses are treated as normal, global, addresses, but are valid only inside some restricted area and cannot be used on the global Internet. This is the same as saying that 10.0.0.0/8 addresses can be used within private networks, but are not allowed on the global Internet. You choose how this conglomerate of networks will be connected, which prefixes are used, etc.
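Both kinds of private address are easy to recognize programmatically; for example, Python's ipaddress module already knows the fe80::/10 and FC00::/7 prefixes:

```python
import ipaddress

# fe80::/10 -- valid only on a single link, never routed
assert ipaddress.ip_address("fe80::1").is_link_local

# fc00::/7 -- unique local: private, and not globally routable
ula = ipaddress.ip_address("fd12:3456:789a::1")
assert ula.is_private and not ula.is_global
```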

There is a difference, though. Namely, it is expected that a ULA will be unique in the world. You might ask why that is important when those addresses are not allowed on the Internet anyway. But it is important. Has it ever happened to you that you had to connect two private IPv4 networks (directly via a router, via VPN, etc.) and, coincidentally, both used, e.g., the 192.168.1.0/24 prefix? Such situations are a pain to debug, and require renumbering or some nasty tricks to make them work. So, uniqueness is an important feature.

So, the mentioned RFC actually specifies how to generate a ULA with a /48 prefix and a high probability of the prefix being unique. Let's first see the exact format of a ULA:
| 7 bits |1|  40 bits   |  16 bits  |          64 bits           |
+--------+-+------------+-----------+----------------------------+
| Prefix |L| Global ID  | Subnet ID |        Interface ID        |
+--------+-+------------+-----------+----------------------------+
As you can see, the prefix is 7 bits long, but the L bit must be set to 1 if the address is generated according to RFC4193; L set to 0 isn't specified and is left for the future. So, the first 8 bits are fixed to the value 0xFD. Now, the main part is the Global ID, which is 40 bits long. It must be generated in such a way that it is unique with high probability. This is done in the following way:
  1. Obtain current time in a 64-bit format as specified in the NTP specification.
  2. Obtain identifier of a system running this algorithm (EUI-64, MAC, serial number).
  3. Concatenate the previous two and hash the concatenated result using SHA-1.
  4. Take the low order 40 bits as a Global ID.
The prefix obtained can now be used for a site, and the Subnet ID can be used for multiple subnets within the site. There are Web-based implementations of the algorithm you can use either to get a feeling for the generated addresses, or to generate a prefix for your concrete situation.
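The four steps above can be sketched in a few lines of Python. This is a simplified illustration of the RFC4193 algorithm, not a reference implementation (in particular, how you obtain the EUI-64 of your machine is up to you; here it is simply a function argument):

```python
import hashlib
import time

def generate_ula_prefix(eui64: bytes) -> str:
    """Sketch of the RFC4193 algorithm for generating a /48 ULA prefix."""
    # 1. Current time in 64-bit NTP format (seconds since 1900, 32.32 fixed point).
    ntp_epoch_offset = 2208988800  # seconds between 1900 and the Unix epoch
    now = time.time() + ntp_epoch_offset
    seconds = int(now)
    fraction = int((now - seconds) * 2**32)
    ntp_time = seconds.to_bytes(4, "big") + fraction.to_bytes(4, "big")
    # 2. + 3. Concatenate the time with the system identifier and hash with SHA-1.
    digest = hashlib.sha1(ntp_time + eui64).digest()
    # 4. The low-order 40 bits of the hash become the Global ID.
    global_id = digest[-5:]
    # Prefix 0xFD (FC00::/7 with L=1) + 40-bit Global ID = /48 site prefix.
    h = global_id.hex()
    return f"fd{h[0:2]}:{h[2:6]}:{h[6:10]}::/48"
```

Running it twice, even with the same EUI-64, gives different prefixes because the timestamp changes, which is exactly the point: the prefix is random-looking and unique with high probability.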

Occasionally you'll stumble upon so-called site-local addresses. These were defined starting with the initial IPv6 addressing architecture in RFC1884, and were kept in subsequent revisions of the addressing architecture (RFC2373, RFC3513), but were finally deprecated in RFC3879. Since they were defined for so long (8 years), you might stumble upon them in some legacy applications. They are recognizable by their prefix FEC0::/10. You shouldn't use them any more; use ULAs instead.

Thursday, December 6, 2012

Pearls of our ignorant journalists, no. 8...

So, here is a new pearl in which our ignorant journalists relay (I won't say write!) without understanding things they know nothing about. The occasion this time is the ITU's attempt to introduce control over the Internet, which I have already written about, and, the way things are going, will write about again. But let's get back to the topic of this post, which is the way our journalists wrote about that event.

The first thing I saw was the news item about it on Monitor, and, as always, Monitor linked to other outlets covering the topic in more detail. But let's first look at what Monitor wrote. Here is a copy/paste for reference:
Members of the UN's International Telecommunication Union have agreed to adopt a new internet standard that will make it easier for telecommunications companies to monitor data on the internet. A special DPI inspectorate will be established which, according to the UN, will protect copyright. But experts warn that by digging through the data, the inspectorate will violate users' privacy.
So, the first piece of nonsense they wrote is that the ITU is adopting a new internet standard! Now, all together: the ITU does not adopt Internet standards! And if that isn't clear, I suggest repeating it a few more times, and I suggest the journalist write it out a few times as well! Internet standards can only be proposed by the IETF and approved by the RFC Editor, and no one else!

The second gaffe is even better: "A special DPI inspectorate will be established...". I burst out laughing at this one! Well, all right, first I couldn't believe they had written it, and then I burst out laughing! Namely, DPI stands for Deep Packet Inspection and denotes examining everything the packets carry. The procedure itself doesn't sound like anything special, but you need to know some basics of computer networking to understand how much it deviates from the standard handling of packets during their transfer through the network. Finally, and most importantly for this post, it is a technique or a method, whichever you prefer, not some body being established. So the journalist understood the English word "inspection" in the sense of a state inspectorate, not in the general sense of "examination/checking". And he confirmed this with the last sentence: "But experts warn that by digging through the data, the inspectorate will violate users' privacy!" Hahahaha... I'd love to see the inspectorate capable of monitoring all the traffic on the internet!

OK, I then decided to click through to Večernji list to check what they had written. Well, they too blather about a "standard for the Internet!". I mean, that phrase is so meaningless that even with the best will in the world I can't come up with a sensible interpretation! Furthermore, just like Monitor (or the other way around), they also treat it as some kind of inspectorate; how else to interpret the following sentence: "The UN claims that the introduction of the inspectorate called DPI...".

Oh yes, then there is a statement like: British computer expert Tim Berners-Lee, also called the "father of the internet"... Let's establish one thing: the Internet has existed since roughly the 1970s, and the gentleman in question invented the Web in 1992. So how can he be the "father of the internet" if the internet existed some 20 years before him?

OK, then the journalist continues with:
That is a serious breach of privacy. Someone carries out a broad inspection on your connection, reads all the data and all the web pages, stores them under your name, and can hand over the address and phone number to the Government when asked, in connection with selling to the highest bidder - said Berners-Lee.
Even parsing this took me a while, and I still failed! First, what does "broad inspection" mean? And if there is a "broad inspection", is there a "narrow inspection"? And what would the difference be? It further says that someone reads "all the data" and "all the web pages", and I can't understand how "all the web pages" differ from data? Fine, let's say they wanted to emphasize, for the benefit of the broad masses, that web pages are included too. But then comes the real nonsense, "...stores them under your name...", which I also can't make any sense of, and the utterly meaningless icing on the cake: "...and can hand over the address and phone number to the Government when asked, in connection with selling to the highest bidder".

Then a bit of dramaturgy in the form of the headline "The USA is losing control of the internet". And the continuation in a tabloid tone:
The Secretary-General of the ITU rejected the criticism, adding that such a proposal poses no "threat to freedom of speech". - This is our chance to draw a world map and connect what is currently unconnected, while ensuring that it is an investment to create the infrastructure needed for exponential growth in voice, video and data traffic - said Hamadoun I. Toure, adding that it is a "golden opportunity to enable affordable internet access for everyone, including the billions of people around the world who cannot get online today".
The first part I marked in bold I simply cannot comprehend. But there is probably some truth in this paragraph, in the sense that Toure really said it, because I can believe the last part in bold could have come out of the mouth of some ITU bureaucrat.

You can find the other posts in this "series" here.

Friday, November 30, 2012

Internet Freedom - Well done EU!

If you think that the Internet brought revolution only to individuals (and maybe to the various businesses that market themselves over the Internet), then you miss one important link: telecommunication companies. Before the Internet they were in charge of everything related to communication, and they did whatever they wanted, supposedly in the name of the customers. If they thought that something wasn't good, then no matter what people wanted, they weren't getting it. And we shall not forget the pricing, which generated huge revenues. But after the tremendous success of the Internet, things changed drastically. Some of the underlying reasons you can read about in my other post, but the key is that control was given to the users, not the network (i.e. the telecoms). Now telecoms are what they should be: data carriers only.

All good, but the problem is that there are no huge profits in data transfer, at least not as there used to be, and telecoms don't just sit and wait. And so, every now and then we hear of some brilliant idea coming from the telecommunications industry by which they either try to bring back the good old days, or try to offer something that doesn't make sense. Just in case you didn't know, ATM was one such idea that, fortunately, was a big failure! Even more interesting is a comment on this blog post from a guy (or guys) trying to reimplement some protocols from mobile telephony. They criticize specifications produced by telecoms (and the related industry) for introducing new things not because they are necessary, but because they are patented and thus allow manipulation!

But these days there is another "very interesting" idea. Probably not many people know that the ITU is trying to introduce mechanisms to regulate the Internet. Fortunately, the EU, along with the US, isn't approving of that. I approve of that stance wholeheartedly, and I cannot describe how outraged I am when I think about the telecoms and the ITU!

But it is probably enough to point out who is proposing the regulation for the real motives to be clear. Also interesting are the demands by some countries that Google and other Internet companies pay them in order to be allowed to distribute content to their citizens. This is absurd, because who forces users to access Google?

And the ITU is also something I really dislike, a lot! It is a bureaucratic institution that produces standards for telecommunications, a dinosaur of the past. If you, as a single person, want to propose something, or just take part in some activity, you first have to be a member of some member state's standardization body, which isn't free. Then you have to be delegated as a representative to the ITU, and only then can you take part in some activity. And now we come to the best part: the specifications produced were quite pricey. Truth be told, they are now distributing specifications free of charge, but if it weren't for the Internet, we would still have to pay for them. Contrast that with the IETF, where membership and participation are open to everyone who wants to take part. Also, all the specifications produced by the IETF are available to anyone for free. Now, I'm not claiming that the IETF is perfect, but I certainly do claim that it is much better than the ITU.

And while I'm at the ITU/IETF comparison: several years ago I called our Ministry to ask for funding to attend an IETF meeting. Apparently, this particular Ministry was willing to fund such trips, or so it was written on their Web pages. The only caveat was that this didn't include the IETF, for the simple reason that it isn't as bureaucratic as the ITU. To cut the story short, the bureaucrat I talked with didn't understand what I was talking about, nor was he interested in finding out. And it ended without a grant...

Tuesday, February 7, 2012

A bit more of history...

Well, I already wrote about a post in which Rob Landley explains how the directory hierarchy within Unix is actually an artefact of the shortcomings of the technology available at a specific point in time. And, what's more interesting, not of technology in general: in this case the creators of Unix simply didn't have larger disks available to them!

Today I stumbled on another post (also here, and see here about the password "problem" in general) which describes probably the first password hack. Reading that post I learned several more things. First, how mail was actually invented before the Internet, but I also read a more detailed history of the CTSS system. CTSS was a very influential operating system and a precursor to Multics, another very influential operating system. I think that knowing at least Multics could be regarded as basic knowledge for anyone calling himself a computer scientist or anything similar.

It was also interesting to read how the people who created mail were afraid of the US postal service. Basically, they thought that it could be regarded as competition to the regular postal service and that they could be fined. I believe that the US Postal Service, as well as AT&T, were monopolies unimaginable from today's standpoint. The situation was similar in Europe, too. This became clear to me while reading a book about Internet history. First, the fact that AT&T didn't think the ARPANet would ever work, and thus didn't interfere, actually helped the ARPANet a lot. On the other hand, the French PTT actually killed the CYCLADES network, which had all the prerequisites to become the first true Internet.

And while I'm at mail: it is widely believed that the first spam was sent in 1978, but according to the post about mail, the first spam message was a Vietnam anti-war message sent by an MIT engineer who abused his privileges in order to be able to send a message to everyone.

All this, very influential operating systems, the electronic mail service, the Internet, spam, all of it happened at MIT during the '60s and '70s. Somehow I wish I had had a chance to take part in that, but then again, we still have a chance to do something else. :)

Finally, have you ever wondered when the term device driver was invented? Here is an explanation. Also, here are descriptions of early C compilers and a history of the C programming language.

Friday, October 7, 2011

The first use of the term "protocol" in networking...

I'm just reading the book Where wizards stay up late - The origins of the Internet, and in it I found a statement about the first use of the term protocol to denote the rules to be followed for computers to be able to exchange information, i.e. to communicate.

Everything happened in 1965, when Tom Marill, a psychologist by formal education, proposed to ARPA an experiment of connecting two machines: the TX-2 at MIT's Lincoln Laboratory and the SDC Q-32 in Santa Monica. Marill founded a company within which he started the experiment, but the investor backed out and so Marill turned to ARPA. ARPA agreed to finance the experiment, but since Marill's company (Computer Corporation of America - CCA) was too small, ARPA also suggested that Lincoln Laboratory head the project. This was accepted, and Larry Roberts, another Internet pioneer, was appointed project head. For the connection itself, a rather primitive modem was used that was able to send 2000 b/s over a four-wire full-duplex service leased from Western Union. Marill set up a procedure that composed messages from characters, sent them to the other machine, and checked whether the messages arrived (i.e., waited for an acknowledgment). If there was no acknowledgment, the message was retransmitted. Marill referred to this set of procedures for sending messages as a "message protocol", and that is, as far as I know, the first use of the word in such a context. What's interesting is that a colleague apparently asked Marill why he was using that word, because it reminded him of diplomacy. Today, protocol is the standard word for the mechanisms and rules computers use to exchange data.
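Marill's send-and-wait-for-acknowledgment procedure is essentially what we today call a stop-and-wait protocol. A minimal sketch of the idea (the function names, loss model, and retry limit are mine, not from the book):

```python
import random

def lossy_channel(loss_rate=0.5, seed=42):
    """Simulate a link on which the acknowledgment sometimes never arrives."""
    rng = random.Random(seed)
    def channel(message):
        return rng.random() >= loss_rate  # True means an ACK came back
    return channel

def send_with_retransmission(message, channel, max_tries=5):
    """Stop-and-wait: send, wait for an ACK, retransmit on silence."""
    for attempt in range(1, max_tries + 1):
        if channel(message):
            return attempt  # number of transmissions that were needed
    raise RuntimeError(f"no acknowledgment after {max_tries} tries")
```

On a perfect link the message goes through on the first attempt; on a lossy one the sender simply keeps retransmitting, which is all Marill's "message protocol" needed to do.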

Anyway, if you know of some earlier use of this word, or more details about this first protocol, I would be very interested to hear about it.

Finally, let me say that Where wizards stay up late - The origins of the Internet is a great book about the Internet and how it was created. It is targeted at less technically knowledgeable people and I strongly recommend it. You can buy a copy on Amazon, but there are also services specializing in selling used books, e.g. AbeBooks. Maybe I'll talk a bit more about the book in some later post.

Sunday, September 25, 2011

Privacy on the Internet - Introduction and basics

Prompted by posts of this kind, I have decided to start writing a bit about protecting privacy on the Internet, in Croatian, and as understandably as possible for people whose profession isn't the Internet. There are plenty of posts in English, but I think there are far fewer in Croatian, if any at all. My primary intention is to explain the ways people can be tracked while surfing, although I'll touch on other activities and topics too. For example, one of the things I'd like to show are the traces left behind by your use of the computer, which allow other users to discover what you have been doing. Of course, along the way I also intend to give some instructions on how to protect your privacy.

Let's clear one thing up right away: this post is not a call for panic or anything like that. The large systems that track us are tracking an enormous number of other users at the same time, so in a way we are protected by the simple rule of the crowd (not to say herd or flock, which are natural defense mechanisms against predators). However, things change if someone singles us out. In that case that someone is specifically tracking our activities, and then things become considerably more dangerous.

One note regarding these texts. I use Linux and don't have the Windows operating system on my computer. That means that in certain situations I won't be able to demonstrate some things, or I'll do so in later posts, once I manage to get hold of a Windows machine to check how it is done there. Also, the browser examples will be oriented primarily towards Firefox (which I use the most) and partly Chrome. I'll mention Internet Explorer very rarely.

And before I start with the explanations, if you'd like me to explain something I mentioned in more detail, leave me a comment.

Basic tracking mechanisms

I'll start with the question of what the basic ways of tracking users are. It is very important to clear up these mechanisms before we go on to the other details, since most of the "problems" actually arise because of them. So, there are two primary mechanisms for tracking users: the first are cookies and the second are IP addresses. Since IP addresses are simpler and less robust, I'll start with them, and then deal with cookies in more detail.

IP addresses

IP addresses are a more rudimentary way of tracking users, and these days not a particularly robust one. Every computer that connects to the Internet must have an IP address, so your computer has one too. However, because of some additional mechanisms (NAT), multiple computers share an IP address, either simultaneously or one after another in time. For that reason an IP address is not a reliable mechanism, although it is occasionally used. Two examples of situations where this mechanism is used:
  1. If you have visited web sites that let you download various files (uploading.com, filesonic.com, depositfiles.com and the like). They track you by IP address, and per IP address they allow only a certain, limited, number of downloads.
  2. When you register on some web sites (games, for example), they use your IP address to limit how many times you can register. So this is the answer if you have ever wondered how the web server on the other side knows you have already registered.
As I said, this is a fairly simple and not particularly robust mechanism, and it is very easy to circumvent, at least if you use ADSL or UMTS. All you have to do is drop and re-establish the connection (or turn the router off and on) and you'll get a new address! Besides, on ADSL, T-Com and the other telecoms change your address once a day anyway, to prevent you from running your own web/mail server.

Cookies

Cookies are a very important mechanism that essentially makes today's Web possible. The mechanism is an integral part of web servers and web browsers (Firefox, Opera, Internet Explorer, Chrome, ...). Cookies were invented to overcome one fundamental problem of the Web and to make the various applications used today possible. Without them there would be no Amazon, eBay, GMail and many others.

The Web, as originally conceived, offered no way at all to link two page loads by the same user. In other words, if you open a page, log in there, and then open that same page again, in the original protocol, i.e. the way the Web was originally built, there is no way for the web server to know that you are the user who logged in a moment ago. And that is exactly what cookies are for! When you log in, the web server hands your browser a cookie, and from that moment on the browser sends that same cookie with every access to that server.

Two questions about cookies can now be raised:

Q1: If I log in, get a cookie, and then someone steals that cookie, can that someone pretend to be me and do me harm?
Q2: When I go to another server, will it get my cookies that don't belong to it and be able to do whatever it wants with them!?

These two questions are best answered together. Cookies really are extremely important, and if you steal a user's cookie, you can effectively impersonate them! That is why a number of protective mechanisms were introduced whose job is to protect cookies. To begin with, a cookie is bound to a web server, and the browser will hand a cookie only to the server that gave it the cookie in the first place (which answers the second question). Furthermore, cookies usually have a limited lifetime, either a fixed period or for as long as the browser is running. Next, cookies are protected during transfer over the network so that nobody can see them, or steal them. As yet another protective measure, a cookie can be tied to an IP address.

Of course, if things were ideal there would be no problems, but they aren't. Some servers are badly configured, or use bad applications, and put their users at risk. We'll talk more about that.

So, let me now explain how cookies work during login (using a simplified GMail example):
  1. Your browser connects to the server.
  2. The server gives you a cookie and sends a page asking you to enter your username and password.
  3. The browser takes your username and password from you (that is, you type them in).
  4. The browser sends the username, password and cookie to the server.
  5. The server checks the username and password, and if they are correct it links the cookie to you in its database, then sends a response saying you are successfully logged in.
  6. You click on an e-mail message. The browser sends the message's identifier and the cookie.
  7. The server accepts the identifier and checks whether the cookie belongs to you, i.e. whether you are logged in. If you are, it sends back the contents of the message and you can see it.
If you delete the cookie after step 5, the browser sends only the message identifier, without the cookie. The server will see that the message belongs to some user, but it doesn't know you (because you haven't presented a cookie) and will refuse to show the message!
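The login flow above can be sketched as a tiny server-side session store. This is a deliberately simplified illustration of the principle, not how GMail actually implements it; all the names are mine:

```python
import secrets

USERS = {"alice": "s3cret"}   # username -> password (illustration only)
SESSIONS = {}                 # cookie value -> logged-in username

def login(username, password):
    """Steps 4-5: verify credentials and bind a fresh cookie to the user."""
    if USERS.get(username) != password:
        return None
    cookie = secrets.token_hex(16)
    SESSIONS[cookie] = username
    return cookie

def fetch_message(cookie, message_id):
    """Steps 6-7: serve the message only if the cookie maps to a user."""
    user = SESSIONS.get(cookie)
    if user is None:
        return "403: not logged in"
    return f"message {message_id} for {user}"
```

Deleting the cookie corresponds to losing the key into SESSIONS: the server still has the messages, but it can no longer connect the request to you, so it refuses to serve them.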

Enough theory, let's move on to practice...

Practice

IP addresses

We've talked a lot about IP addresses and cookies; time to actually see them!

As for IP addresses, to see which IP address you are currently using while surfing the Internet, visit this page! Pay attention to the line IP information. Right after it there are four numbers, and those four numbers are the IP address you are currently using. Now try an experiment to see how the address changes. Turn your ADSL router off and on (or drop and re-establish the UMTS connection) and visit that page again. You'll notice that the number is now different! That is why an IP address isn't very good for tracking users, though it certainly has some potential. For example, on that page you can also see that you are located fairly accurately, down to the level of the city.

However, IP addresses have another shortcoming if you use ADSL. If you have several devices connected to the ADSL router (a desktop computer, a laptop, a smartphone) and you visit that page on each of them, it will always show the same IP address! So the question is: how are you told apart when the servers send you a response?

There is an analogy for this, not the best one, but fairly good. It's like having several people in a house. All the members of the household have the same postal address, but only once a letter arrives at the house is it handed to the person it is meant for. The person's name is used for that distribution. In the case of the Internet, the one distributing traffic inside the house is the ADSL router; it knows who should get what. And it knows that because it gives each member of the household a separate IP address! Confusing, right? :) Yet another address!? However, that internal address is visible only to the router and inside the house, not to everyone outside! More on that later, but the important thing to remember is that if one member of the household visits some pages, the server outside doesn't know which member it is; only the ADSL router knows. And that is one more reason why IP addresses aren't good for tracking users.
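In protocol terms, the in-house addresses from the analogy are the private ranges mentioned at the beginning of this page (192.168.0.0/16 and friends). A quick check with Python's ipaddress module (my example, not part of the original experiment):

```python
import ipaddress

# The router hands out addresses like this inside the house...
assert ipaddress.ip_address("192.168.1.10").is_private

# ...while the single address the outside world sees is a public one.
assert not ipaddress.ip_address("8.8.8.8").is_private
```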

Cookies

If you want to see cookies, it's easy. In the Firefox browser, click Edit, then Preferences, and there click Privacy (this works in the English version 6.0.2). On that page you'll then see a link remove individual cookies. Click it. What happens is that a small window opens listing all the cookies the browser currently has stored. The first thing you'll notice is that they are grouped by domain, and the second is that you have cookies from domains you haven't visited in a long time, or aren't currently on! That's because cookies can be kept for quite a long time, and various sites use that to track you. In the lower part of the window you'll see the details when you click on a cookie. Here is a screenshot, so I'll explain on it in more detail:



In this case, the selected cookie is one I got from a server in the bing.com domain. Furthermore, that cookie lasts until 19 August 2013!!! That is, two years from now. In any case, you can see that by tracking cookies it is possible to track which domains I have visited.

Now you can do an experiment showing that when you log in, you get a cookie by which the server recognizes you. We'll do it with the GMail example. So, open gmail.com and log in to your GMail account. Then close the window with GMail open. As a test, open a new window and type in the address gmail.com. What happens is that your mail list opens automatically and you are not asked to log in! That's because the cookie that makes this possible has been remembered.

Now open the cookie list and look for the cookie belonging to the account.google.com domain. Do that by typing account.google.com into the Search field and pressing enter. Select all the cookies and delete them (the Remove Cookies button), then open a new window again and type gmail.com. This time you will be asked to log in! In short, you removed the cookie and the server couldn't link you to the previous visit.

That's it for now... I hope it wasn't too much. Next time we'll dig deeper into this, and look at the other mechanisms by which you are tracked.

Sunday, July 13, 2008

The critique of dshield, honeypots, network telescopes and such...

To start, it's not that those mechanisms are bad, but what they present is only a part of the whole picture. Namely, they give a picture of opportunistic attacks. In other words, they monitor the behavior of automated tools, script kiddies and similar attackers, those that do not target specific victims. The result is that not much sophistication is necessary in such cases: if you cannot compromise one target, you don't waste your time but move on to the next potential victim.

Why do they analyse only opportunistic attacks? Simple: targeted attacks are aimed at victims with some value, while honeypots/honeynets and network telescopes work by using unallocated IP address space, so there is no value in those addresses. What would be interesting to see are attacks on high-profile targets, and on the surrounding addresses!

As for dshield, which might collect logs from some high-profile sites, the data collected is too simple to make any judgements about the attackers' sophistication. What's more, because of the anonymization of the data, this information is lost! Honeypots, on the other hand, do allow such analysis, but their data is not collected from high-profile sites.

In conclusion, it would be useful to analyse data on attacks on popular sites, or from honeypots placed in the same address range as those interesting sites. Maybe even a combination of those two approaches would be interesting to analyse.

That's it. Here are some links:

dshield
honeynets

Sunday, July 6, 2008

Reputations for ISP protection

Several serious problems this year made me think about the security of the Internet as a whole. Those particular problems were caused by misconfigurations in the BGP routers of different Internet providers. The real problem is that there are too many players on the Internet that are treated equally even though they are not equal. This causes all sorts of problems, and it is hard to expect that they will be solved any time soon.

The Internet, at the level of autonomous systems, is a kind of peer-to-peer network, and similar problems in those networks are solved using reputations. So, it's natural to try to apply a similar concept to the Internet. And indeed, there are a few papers discussing the use of reputations on the Internet. Still, there are at least two problems with them. The first one is that they require at least several players to deploy them, even more if they are going to be useful at all. The second one is that they are usually restricted in scope, e.g. they try to solve only some subset of BGP security problems.

The solution I envision assumes that ISPs differ in quality and that each ISP's quality can be determined by measuring its behavior. Then, based on those measurements, all the ISPs are ranked. Finally, this ranking is used to penalize misbehaving ISPs. The penalization is done by using DiffServ to lower the priority of the traffic, so that when some router's queues start filling up, packets are dropped, but first those of the worst ISPs. This can be expanded further, as each decision made can use the trustworthiness of the ISP in question. E.g., when calculating BGP paths, the trustworthiness of an AS path can be determined and taken into account when setting up routes. Furthermore, all the IDSes and firewalls can have a special set of rules and/or lower thresholds for the more problematic traffic. I believe the possibilities are endless. It should be noted that this system is envisioned to be deployed by a single ISP in some kind of a trust server, and that this ISP will monitor other ISPs and appropriately modulate traffic entering its network!
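The measure-rank-penalize pipeline can be sketched in a few lines. This is only a toy illustration of the idea; the reputation formula, the AF class cutoffs, and all names are mine, not from any paper:

```python
def reputation(misbehavior_events, observation_days):
    """Toy reputation score: fraction of observed days without an incident."""
    clean_days = max(observation_days - misbehavior_events, 0)
    return clean_days / observation_days

def dscp_for(rep):
    """Map a reputation score to a DiffServ AF class (illustrative cutoffs)."""
    if rep >= 0.9:
        return "AF11"   # well-behaved ISPs keep the lowest drop precedence
    if rep >= 0.5:
        return "AF12"   # middling ISPs get a higher drop precedence
    return "AF13"       # the worst ISPs are dropped first under congestion
```

The trust server would recompute the scores from its measurements and the border routers would mark incoming packets accordingly, so congested queues shed the worst ISPs' traffic first.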

In time, when this system is deployed by more and more ISPs (well, I should better say IF :)), there will be additional benefits. First, communication between the trust servers of different ISPs could be established in order to exchange recommendations (as is already proposed in one paper). But the biggest benefit could be the incentive for ISPs to start thinking about the security of the Internet, their own security, and the security of their customers. If they don't, then their traffic and their services will have lower priorities on the Internet, and thus their service will be worse than that of their competitors, which will be reflected in their income!

Of course, it's not as easy as it might seem at first glance. There are a number of problems that have to be solved, starting with the first and most basic one: how practical/useful is this really for network operators? Then, there is the problem of how exactly to calculate reputation. And once the reputation is determined, how will routers mark the packets? They would have to match each packet by its source address in order to determine the DS codepoint, but routers are already overloaded, and this could prove unfeasible.
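As an illustration of that marking problem, here is a toy longest-prefix match from source address to DSCP value (the prefixes and values are invented for the example). A real router would need to do the equivalent per packet in the forwarding plane, which is exactly the scaling concern:

```python
# Sketch: mark packets by source prefix. The prefix-to-DSCP table is a
# hypothetical product of the reputation system; a linear scan only works
# for tiny tables, which illustrates why per-packet marking is costly.
import ipaddress

prefix_dscp = {
    ipaddress.ip_network("192.0.2.0/24"): 26,     # well-ranked ISP
    ipaddress.ip_network("198.51.100.0/24"): 10,  # poorly-ranked ISP
}

def mark(src_ip, default=0):
    addr = ipaddress.ip_address(src_ip)
    best = None
    # Longest-prefix match over the reputation table
    for net, dscp in prefix_dscp.items():
        if addr in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, dscp)
    return best[1] if best else default

print(mark("198.51.100.7"))  # source inside the poorly-ranked ISP's prefix
```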

I started to write a paper that I planned to submit to HotNets08, but I'm not certain if I'm going to make it before the deadline, as I have some other, higher-priority work to do. The primary reason for submitting this paper is to get feedback that is necessary in order to continue developing this idea. But maybe I'll get some feedback from this post, who knows? :)

Update (2008-12-29): I missed the deadline because of an omission, but the paper is available on my homepage, under the work-in-progress section of the publications page. Maybe I'll work on it a bit more and send it to some relevant conference next year. Are there any suggestions or opinions about that?

Friday, February 8, 2008

New Internet architecture, my take on it no. 1

Reading all those papers about a new Internet architecture simply doesn't give me peace. What is the solution? It is probably simple as a concept, though, as always, the devil is in the details. Look at the Internet now. When packet switching was first proposed, it looked like a lunatic's idea, and now it's so normal we don't even think about it and take it for granted. So, it's a strange feeling that I'm probably looking at and thinking about the solution but am not aware of it.

So, here is attempt number one!

What about building the Internet in an onion-layered style? The innermost layer, the 0th layer, forms the core and is the most trusted and protected part of the network. It's not possible for outer layers to access anything inside inner layers (maybe we could take inspiration from Bell-LaPadula and similar models here?). The infrastructure of the Tier 1 NSPs could form this 0th layer. The (N-1)-th network layer offers transport services to the N-th layer. This model would protect inner layers from the outer layers, as outer layers would have no access to the inner layers of the network. Something similar is already done with MPLS, but MPLS is deployed inside an autonomous system, not as a global concept.
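The encapsulation idea can be sketched roughly like this (a toy model with made-up header fields, just to show that an inner layer wraps traffic handed down to it and strips its own header again on exit):

```python
# Sketch: onion-style layering, where each inner layer encapsulates the
# packets it receives from the layer above, so its own headers are never
# visible to outer layers. Header contents are illustrative only.

def push_down(packet, layer_header):
    """An inner layer wraps the packet handed down from the layer above."""
    return {"header": layer_header, "payload": packet}

def pop_up(packet):
    """On exit from a layer, its header is stripped before handing back up."""
    return packet["payload"]

# A layer-2 (user-side) packet entering the layer-0 core and leaving it
user_packet = {"header": {"layer": 2, "dst": "user-b"}, "payload": b"hello"}
in_core = push_down(user_packet, {"layer": 0, "core_path": "t1-a->t1-b"})
assert pop_up(in_core) == user_packet  # the core's header never leaks upward
```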

There could be several layers corresponding to the current Tier 1, 2, and 3 ISPs, each layer with more and more participants and, accordingly, less and less trustworthy. Lower layers could form a kind of isolation layer between all the participants and thus protect them from configuration errors or malicious attacks. Note that this could be problematic, as it means that lower layers not only encapsulate traffic from higher layers but also inspect it, or assemble and disassemble it. This could be hard to do, so it's questionable whether and how it is achievable.

Each layer could use its own communication protocol, best suited for the purpose and environment it works in. For example, in the core layer there is a need for fast switching, as huge speeds with extremely low loss rates can be expected in the years to come, so packet formats best adjusted to that purpose should be used. The outer, user-facing layers would probably need more features, for example quality of service, access decisions, and the like. Furthermore, a lossy network such as a wireless network may be in use, so some additional features would be necessary.

Requests to lower layers could be communicated within the format of the packets themselves, as ATM did, where its cells had different formats when entering the network and inside the network, the so-called UNI and NNI formats.

We could further envision the (N-1)-th layer of the onion being used for content distribution. This layer's task could be to distribute content using services from the (N-2)-th layer. Content could be anything you can think of, e.g. different documents (OpenOffice, PDF), video, audio, Web pages, mail, even keystrokes and events for remote work and gaming. Those are very different in nature, with probably many more yet to be invented, so this layer should be extensible. It could take care of access decisions and the like. Note that the content layer doesn't work with parts of objects, but with whole ones. So, if a user requests a movie, this movie is completely transferred to the content network responsible for the user at their current location.

This could make servers less susceptible to attacks, as they wouldn't be directly visible to the users!

Finally, the N-th layer could be a user layer. In this layer, the user connects to the network and requests or sends content addressed by a variety of means. For example, someone could request a particular newspaper article from a particular date. The content network would search for the nearest copy of this content and use the core network to transfer the object to the user. Someone else could request a particular film, and the content network would search for it and present it to the user.
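A toy sketch of such a nearest-copy lookup (the content names, replica sites, and distances are all invented; a real content network would of course use a far richer resolution mechanism):

```python
# Sketch: the content layer resolves a content name to the nearest replica.
# Distances are assumed to be measured from the requesting user's location.

replicas = {
    "news/2008-02-08/article-42": {"zagreb": 1, "vienna": 3, "frankfurt": 5},
}

def fetch_nearest(name, replicas_by_name):
    sites = replicas_by_name.get(name)
    if sites is None:
        return None  # the content network would then search further out
    # Pick the replica with the lowest distance to the user
    return min(sites, key=sites.get)

print(fetch_nearest("news/2008-02-08/article-42", replicas))
```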

Just as a note, I watched VJ's lecture at Google, and this is along the lines of what he proposes.

Tuesday, February 5, 2008

DDoS attacks, Internet, new Internet and POTS...

I was just thinking about the many initiatives (e.g. GENI) to design the Internet from scratch! It certainly requires us to break out of the current way of thinking, which has been with us for about 40 years now, and to find and propose something new. A good example of such a breakthrough was the Internet itself, i.e. the concept of a packet-switched network. As a side note, Van Jacobson has an idea of what this new thing might look like, and I recommend the reader find the lecture he gave at Google on Google Video.

While thinking about what this "new" thing is, I took DDoS attacks as an example. There are no DDoS attacks in POTS, and they are a big problem for the Internet. So, how should this new mechanism work in order to prevent DDoS attacks? The key point of a DDoS attack (or more generally, a DoS attack) is that there are finite resources that are consumed by the attacker, and thus regular users cannot access those resources; they are denied service.

And while I was thinking about it, I actually realised that a DDoS attack is possible in POTS too, as there are also finite resources there. Ok, ok, I know, I managed to reinvent the wheel, but hey, I'm happy with it. :) So, if it is possible, why are there no DoS attacks in telephony? The key point is that end devices in POTS are dumb and thus not remotely controllable. If they were remotely controllable, then an attacker would be able to gain access to them and use a huge number of those devices to mount an attack on a selected victim. Maybe this attack would be even more effective than the one on the Internet, since the resources taken by end devices are not shared, even when the end devices don't use them.

It turns out that the DDoS attack is actually a consequence of giving more power to the user via more capable end devices. Furthermore, because those end devices are complex systems, it's inevitable that there will be many ways of breaking into and controlling them.

Of course, someone might argue that the problem is the ease with which IP packets can be spoofed. But this is actually easily solvable, at least in theory, if each ISP policed its access network for spoofed addresses. The more serious problem is actually a DoS attack made with legitimate IP packets. It is traceable if it comes from a single source, or a small number of sources, but the real problem is a network of compromised hosts (a botnet). There is no defence against those networks, as they look like legitimate users.
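That kind of ingress filtering (this is the idea behind BCP 38) boils down to a simple membership check at the access edge; a sketch with made-up customer prefixes:

```python
# Sketch: BCP 38-style ingress filtering at an ISP's access edge. A packet
# whose source address is outside the prefixes assigned to this access
# network is dropped as spoofed. The prefixes below are illustrative.
import ipaddress

customer_prefixes = [
    ipaddress.ip_network("203.0.113.0/24"),
]

def ingress_allowed(src_ip):
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in customer_prefixes)

assert ingress_allowed("203.0.113.10")   # legitimate customer source
assert not ingress_allowed("192.0.2.1")  # spoofed: not a customer prefix
```

Note that this only stops spoofing; as said above, it does nothing against a botnet sending perfectly legitimate packets.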

So, because we are limited by the real world and will always have only finite resources at our disposal, it turns out that the only way to get rid of DDoS is to restrict end devices, which by itself is impossible. Now, this is thinking within the current framework. But what if we could make a finite resource appear infinite, or somehow restrict end devices... This is something for further thinking...
