Friday, September 30, 2011

Captcha and a few variants...

I was downloading some stuff from the Internet, and as part of that process I had to solve a captcha in order to prove that I'm a human. A captcha can be thought of as a puzzle that has to be solved and, by assumption, only a human can solve it. But the reality is that there are automated ways to solve captchas, particularly badly designed ones. So, I was thinking a bit about captchas and decided to write about it...

There are several ways of circumventing captchas. Two that I heard of are very interesting. The first, and older, one is that some sites hosting pirated or sexual material require you to enter a captcha before accessing the material. But that captcha comes from another site that is being abused. Let me give a simple hypothetical example. Suppose a spammer wants to register as many mail addresses as possible with GMail. GMail actually has protection in the form of a captcha aimed at exactly that: preventing mass registrations. So what the spammer does is start providing some service to users, e.g. pornography downloads. But in order to download something, the user first has to solve a captcha, and the captcha to be solved is the one GMail presented to the spammer, which is relayed to the user.

The other form is even more bizarre. There are companies in India and China that employ humans to manually solve captchas. You are provided with an API through which you send a request; the request is routed to some human who solves it and sends back the result. What a combination of automation and manual labor! And a cheap one at that: a few dollars for a thousand captchas, something like that.

So, what can be done? Well, there is a reload button on the captcha that allows you to request another puzzle, so the captcha could contain a sentence instructing you to reload, and if you enter that particular captcha anyway, you are banned. This would help in two cases. The first is automated recognition that doesn't actually understand what's written in the captcha. The case where humans solve the captcha could be restricted by localization. Namely, if the captcha requires reloading, you would present that instruction in, e.g., Croatian, because the request comes from Croatia. But if someone forwards the captcha to India, the guy there wouldn't know the meaning of the sentence and so wouldn't be able to solve it. Another possibility is a sentence that requires you to enter only the third word, or to choose a synonym for a given word from several offered words.

Taking this idea a step further: apart from requiring the user to retype what's written in the captcha, it would also require him/her to understand what's written there and to perform some particular action based on it!
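As a sketch of what such an "understand and act" captcha check could look like (everything here, including the instructions and phrases, is hypothetical):

```python
import random

# Toy sketch of an "instruction captcha": instead of merely retyping the
# distorted phrase, the user must understand an instruction and act on it.
# All instructions and phrases here are hypothetical illustrations.

CHALLENGES = [
    # (instruction shown to the user, phrase shown, validator)
    ("enter only the third word", "quick brown foxes jump high",
     lambda phrase, answer: answer == phrase.split()[2]),
    # A "trap" challenge: the correct action is to reload, so any typed
    # answer marks the client as a bot or an abused relay.
    ("press reload instead of answering", "please press reload now",
     lambda phrase, answer: False),
]

def make_challenge(rng=random):
    instruction, phrase, _validator = rng.choice(CHALLENGES)
    return instruction, phrase

def check_answer(instruction, phrase, answer):
    for instr, phr, validate in CHALLENGES:
        if instr == instruction and phr == phrase:
            return validate(phrase, answer)
    return False
```

A relay that only retypes what it sees, or a remote solver who doesn't understand the localized instruction, fails such a check even with perfect character recognition.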

In the end, this isn't a perfect solution, but only a step in the game of cat and mouse which, for a short period of time, gives the advantage to the mouse (or the cat, depending on your point of view)...

Thursday, September 29, 2011

Why I think it is in RedHat's interest to help CentOS...

Today I was asked whether there are any security implications in selecting a particular licensing model from Microsoft. Frankly, I know nothing about that particular subject and, as far as I can remember from previous experience, it is something that requires a specialization of its own. To cut a long story short, I don't intend to waste my time studying Microsoft's licensing models! So, in the end I basically said that any option is valid from my perspective as long as we have access to security updates. No more, no less...

But since everything was about selecting the least expensive solution, I mentioned that it might be beneficial to introduce LibreOffice (OpenOffice) instead of Microsoft Office and/or Linux on certain workstations, because people don't use all the functionality of Windows and especially of Microsoft Office. It is true that LibreOffice isn't quite a match for Microsoft Office, but for people who only write a single page of something and then send it to a printer, it is too much to pay for a whole office suite! For those who access remote machines and do their work there, it is also too much to maintain a whole workstation with a full productivity suite on it. My idea was, unsurprisingly, rejected because of a slew of problems: compatibility between different office suites, support for functionality equivalent to Outlook, potential problems with user support, etc. Those might or might not be real problems, but in the end I was asked what distribution I would recommend if there were a (partial) migration.

I said, almost without thinking: the latest version of Ubuntu LTS! Let me first clarify that I'm actually a die-hard user of Fedora, and also of CentOS, and I use them as much as I can. But I also stand firmly on the ground and I'm aware of the problems associated with that route. First, you'll probably ask why I didn't recommend RHEL. Well, the reason is simple: it costs money, and the price cut wouldn't be large enough to justify such a transition. Scientific Linux, as I already blogged, has a problem with its name. If I said "Use Scientific Linux!" I would probably be rejected with a comment along the lines of "Wow, we are not a scientific institution!". And as for CentOS, well, no timely security updates. Period. OK, to be honest, I do install CentOS on servers in the good hope that things will get better, but only on a small scale, and I'm usually directly in charge of those servers. Note that I didn't mention Fedora as an alternative. Well, the reasons have been beaten to death by now, so I won't go into that.

So this leaves me with Ubuntu or Debian. The clear winner is definitely Ubuntu, more specifically Ubuntu LTS. The reasons in its favor are strong: first, it is quite user friendly; second, it has long-term support (LTS!); third, it is almost as advanced as Fedora, but without Fedora's short support timeframe. Finally, there is the possibility of obtaining a support contract.

And what's the conclusion? The conclusion is that Ubuntu is slowly but surely being introduced into business environments, which might or might not pose a threat to RHEL... decide for yourself...

Tuesday, September 27, 2011

CentOS... something is happening!

I just noticed that RPM packages from RHEL 6.1 appeared on the mirrors. Actually, they announced it a few days earlier, but it was sooner than I was expecting. :)

There is a small catch. In order for yum to be able to pick up those packages, you'll have to add a new repository. Namely, the CentOS team decided to go with a mechanism they call continuous updates. That way they'll try to be faster, but time will tell whether it will work or not.

The quickest way to do that is to run the following command:

rpm -ivh

or for 64-bit systems:

rpm -ivh

This will install the necessary data for yum. Then just run 'yum update' and that's it!

Still, we'll have to wait a bit more for the full 6.1, and especially for FreeIPA 2, which I'm waiting for!

Sunday, September 25, 2011

Privacy on the Internet - Introduction and basics

Prompted by posts of this type, I decided to start writing a bit about protecting privacy on the Internet, to do so in Croatian, and to make it as understandable as possible for people whose profession is not the Internet. There are plenty of posts on this topic in English, but I think there are far fewer in Croatian, if there are any at all. My primary intention is to explain the ways in which people can be tracked while surfing, although I'll touch on other activities and topics as well. For example, one of the things I would like to show are the traces that remain as a result of your computer use, and which allow other users to discover what you've been doing. Of course, along the way I also intend to give some instructions on how to protect your privacy.

Let's clear one thing up right away: this post is not a call for panic or anything like that. The big systems that track us are tracking an enormous number of other users at the same time, so in a way we are protected by the simple rule of the crowd (not to say herd or flock, which are natural defense mechanisms against predators). However, things change if someone singles us out. In that case that someone specifically tracks our activities, and then things become considerably more dangerous.

One note regarding these texts. I use Linux and don't have the Windows operating system on my computer. That means that in certain situations I won't be able to demonstrate some things, or I'll do it in later posts, once I manage to get hold of a Windows computer to check how it is done there. Also, browser-related examples will be oriented primarily towards Firefox (which I use the most) and partly towards Chrome. I'll mention Internet Explorer very rarely.

And before I start with the explanations: if you would like me to explain in more detail something I mentioned, leave me a comment.

Basic tracking mechanisms

I'll start with the question of what the basic ways of tracking users are. It is very important to clarify these mechanisms before we get to the other details, since most of the "problems" actually arise because of them. There are two primary mechanisms for tracking users: the first are cookies and the second are IP addresses. Since IP addresses are simpler and less robust, I'll start with them, and then deal with cookies in more detail.

IP addresses

IP addresses are the more rudimentary way of tracking users, and these days not a particularly robust one. Every computer that connects to the Internet must have an IP address, so your computer has one too. However, because of some additional mechanisms (NAT), multiple computers share one IP address, either simultaneously or one after another over time. For that reason the IP address is not a reliable mechanism, although it is used from time to time. Two examples of situations in which this mechanism is used:
  1. If you have visited Web sites that let you download various files (file-hosting services and similar). They track you by IP address and allow only a certain, limited number of downloads per IP address.
  2. When you register on some Web sites (for example, games), they limit how many times you can register based on your IP address. So, this is the answer if you've ever wondered how the Web server on the other side knows you've already registered.
As I said, this is a fairly simple and not very robust mechanism, and it is very easy to get around, at least if you use ADSL or UMTS. All you have to do is drop and re-establish the connection (or turn the router off and on) and you'll get a new address! Besides, on ADSL, T-Com and the other telecoms change your address once a day anyway, to prevent you from running your own Web/mail server.


Cookies

Cookies are a very important mechanism that essentially makes today's Web possible. The mechanism is an integral part of Web servers and Web browsers (Firefox, Opera, Internet Explorer, Chrome, ...). Cookies were invented to overcome one fundamental limitation of the Web and to enable the various applications that are used today. For example, without them there would be no Amazon, eBay, GMail, or a whole series of others.

Namely, the Web as originally conceived had no way at all of linking two page loads by the same user. In other words, if you open some page, log in there, and then open that same page again, in the original protocol, i.e. the way the Web was originally built, there is no way for the Web server to know that you are the user who just logged in. And that is exactly what cookies are for! When you log in, the Web server hands your browser a cookie, and from that moment on the browser sends that same cookie with every access to that server.

Two questions can now be asked about cookies:

Q1: If I log in, get a cookie, and then someone steals that cookie, can that someone then pretend to be me and do me harm?
Q2: When I go to another server, will it also get my cookies, which don't belong to it, and be able to do whatever it wants with them?

It is best to answer these two questions together. Cookies really are extremely important, and if you steal a user's cookie, you can effectively impersonate them! That's why a series of protective mechanisms was introduced whose job is to protect cookies. To begin with, a cookie is bound to a Web server, and the browser will hand a cookie only to the server that gave it that cookie in the first place (which answers the second question). Furthermore, cookies usually have a limited lifetime, either a fixed time span or for as long as the browser is running. Next, cookies are protected during transmission over the network so that nobody can see, and steal, them. As one more protective measure, a cookie can be bound to an IP address.
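As an illustration of the first two protections (binding to the issuing server and limited lifetime), here is a toy model of a browser's decision whether to send a cookie back; it is a simplified sketch, not how any real browser is implemented:

```python
from datetime import datetime

# Toy model of a browser's cookie jar: a cookie is sent back only to the
# domain that set it, and only while it hasn't expired.

class Cookie:
    def __init__(self, domain, value, expires):
        self.domain = domain      # the server that set the cookie
        self.value = value
        self.expires = expires    # datetime after which the cookie is dead

def cookies_to_send(jar, request_domain, now):
    """Return the cookie values the browser would attach to a request."""
    return [c.value for c in jar
            if c.domain == request_domain and now < c.expires]
```

With a jar holding a cookie for a hypothetical mail.example.com that expires in 2013, a request to mail.example.com before the expiry gets the cookie attached, while a request to any other domain, or made after the expiry, gets nothing.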

Of course, if things were ideal there would be no problems, but they aren't. Some servers are badly configured, or use bad applications, and put their users at risk. We'll talk more about that.

So, let me now explain how cookies work during login (using a simplified example of GMail):
  1. Your browser connects to the server.
  2. The server gives you a cookie and sends a page asking you to enter your username and password.
  3. The browser takes the username and password from you (i.e., you type them in).
  4. The browser sends the username, the password and the cookie to the server.
  5. The server checks the username and password and, if they are correct, associates the cookie with you in its database, then sends a response saying you are successfully logged in.
  6. You click on some e-mail message. The browser sends the message identifier and the cookie.
  7. The server receives the identifier and checks whether the cookie belongs to you, i.e. whether you are logged in. If so, it sends back the content of the message and you can see it.
If you delete the cookie after step 5, the browser will send only the message identifier, without the cookie. The server will then see that the message belongs to some user but won't know who you are (since you didn't present the cookie), and it will refuse to display the message!
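The seven steps above can be sketched as a tiny server-side model (a hypothetical simulation with made-up names, not GMail's actual code):

```python
import secrets

# Toy server-side model of the login flow described above: the server hands
# out a cookie, ties it to the user after a successful login, and later uses
# the cookie alone to recognize the user.

USERS = {"alice": "s3cret"}                 # username -> password
MESSAGES = {"msg-1": ("alice", "Hello!")}   # message id -> (owner, body)
sessions = {}                               # cookie -> username (or None)

def visit():
    """Steps 1-2: the browser connects, the server hands out a fresh cookie."""
    cookie = secrets.token_hex(8)
    sessions[cookie] = None                 # issued, but not yet tied to a user
    return cookie

def login(cookie, username, password):
    """Steps 4-5: check credentials and bind the cookie to the user."""
    if cookie in sessions and USERS.get(username) == password:
        sessions[cookie] = username
        return True
    return False

def read_message(cookie, msg_id):
    """Steps 6-7: only the logged-in owner of the message may read it."""
    owner, body = MESSAGES[msg_id]
    if sessions.get(cookie) == owner:
        return body
    return None                             # cookie missing/deleted -> refused
```

Deleting the cookie corresponds to calling read_message with a cookie the server has never associated with you: the lookup fails and the message is refused.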

Enough theory, let's get some practice...


IP addresses

We've talked a lot about IP addresses and cookies; it's time to actually see them!

As for IP addresses, to see which IP address you are currently using while surfing the Internet, visit this page! Pay attention to the line IP information. Right after it there are four numbers, and those four numbers are the IP address you are currently using. Now try an experiment to see how the address changes. Turn your ADSL router off and on (or drop and re-establish the UMTS connection) and visit that page again. You'll notice that the number is now different! That is why an IP address isn't very good for tracking users, although it certainly has some potential. For example, on that page you can also see that you are located fairly accurately, down to the level of the city.

However, IP addresses have one more shortcoming if you use ADSL. If you have several devices connected to the ADSL router (a desktop computer, a laptop, a smartphone) and you go to that page on each of them, it will always show the same IP address! So the question is: how are you told apart when servers send you a response?

There is an analogy for this, not the best one, but reasonably good. It is the same as when there are several people in a house. All the household members share the same postal address, but only once a letter arrives at the house is it handed to the person it is meant for. The person's name is used for that distribution. In the case of the Internet, the one distributing traffic inside the house is the ADSL router; it knows who should get what. And it knows that because it gives each household member a separate IP address! Confusing, isn't it? :) Yet another address!? However, that internal address is visible only to the router and inside the house, not to everyone outside! More on that later, but the important thing to remember is that if one household member visits some pages, the server outside doesn't know which member it was; only the ADSL router knows. And that is one more reason why IP addresses aren't good for tracking users.
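The router's "postman" role is called network address translation (NAT). Here is a toy model of the idea, with hypothetical private ( and public ( addresses; real NAT tracks considerably more state:

```python
# Toy model of NAT: the router rewrites the private source address of an
# outgoing packet to its single public address and remembers the mapping,
# so a reply can be delivered back to the right device in the house.

PUBLIC_IP = ""

nat_table = {}       # public port -> (private ip, private port)
next_port = [40000]  # next free public port

def outgoing(private_ip, private_port):
    """Device -> Internet: replace the private source with the public address."""
    public_port = next_port[0]
    next_port[0] += 1
    nat_table[public_port] = (private_ip, private_port)
    return PUBLIC_IP, public_port        # what the outside server sees

def incoming(public_port):
    """Internet -> device: the router looks up who the reply is for."""
    return nat_table.get(public_port)    # None if nobody asked for this
```

The outside server only ever sees the public address, which is exactly why it cannot tell the household members apart.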


Cookies

If you want to see cookies, it's simple. In the Firefox browser, click Edit, then Preferences, and there Privacy (this works in the English version 6.0.2). On that page you will see the link remove individual cookies. Click it. A small window opens listing all the cookies the browser currently has stored. The first thing you'll notice is that they are grouped by domain, and the second is that you also have cookies from domains you haven't visited for a long time or whose pages you aren't currently on! That is because cookies can be kept for quite a long time, and different sites use that to track you. When you click on a cookie, you'll see its details in the lower part of the window. Here is a screenshot, so I'll use it to explain in more detail:

In this case the selected cookie is one I got from a server within the domain. Furthermore, that cookie lasts until 19.8.2013!!! That is, two years from now. In any case, you can see that by tracking cookies it is possible to track which domains I have been visiting.

Now you can do an experiment in which you can see that when you log in, you get a cookie through which the server recognizes you. We'll do it on the example of GMail. So, open GMail and log in to your account. Then close the window in which GMail is open. As a test, open a new window and enter the GMail address. What will happen is that your mail list opens automatically and you are not asked to log in! That is because the cookie that makes this possible was remembered.

Now open the cookie list and look for the cookies belonging to GMail's domain. Do this by typing the domain into the Search field and pressing Enter. Select all those cookies and delete them (the Remove Cookies button), then open a new window again and enter the GMail address. This time you will be asked to log in! In short, you removed the cookie and the server could not link you with your previous visit.

Well, that's it for now... hopefully it wasn't too much. In the sequels we'll go deeper into this, and also look at the other mechanisms by which you are tracked.

Friday, September 23, 2011

Faster than light?!

This is really hot! For the past few days there has been news about neutrinos going faster than light. I spent the past few hours on this, i.e. watching the webcast, searching the Net and reading blogs. I wanted to read/hear opinions from all possible sides, not just one, and especially not from some half-informed journalists. I also wanted to understand as much as possible, which is unfortunately not much. :)

As I said, I watched the webcast from CERN in which the researchers presented their findings. These findings were obtained within the OPERA experiment/project. Unfortunately, I started watching somewhere in the middle of the presentation, and I'm also not a trained physicist, so much of the information presented was very vague to me. Nevertheless, it seems that the researchers themselves don't believe the results yet and are trying to find the error, which they have been trying to do for the past several years. Furthermore, they published their findings so that other researchers could test the results, and hopefully repeat them. The problem is that there are only two laboratories in the world that could verify the results. One is in America, Fermilab, but it seems they don't have equipment precise enough. The other one is in Japan and currently doesn't work because of the tsunami. Oh, and it turns out that a few years ago measurements were performed in the USA that also got faster-than-light results, but the error margin was too big for the results to be relevant.

Also, what you'll find is that faster-than-light neutrinos don't match the observed behavior of supernova SN1987A. Namely, in that case neutrinos arrived only three hours before the photons, i.e. the light. According to the results obtained from the OPERA experiment, they should have arrived a few years earlier. The reason they arrived earlier at all is that neutrinos barely interact with matter, while photons do. Still, the energies of the generated neutrinos are different, and much larger in the OPERA experiment.
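A back-of-the-envelope check of that "few years" figure, using the commonly quoted round numbers (an early arrival of about 60 ns over the roughly 730 km CERN-Gran Sasso baseline, and about 168,000 light years to SN1987A):

```python
# Rough consistency check: if neutrinos really beat light by OPERA's margin,
# how early should SN1987A's neutrinos have arrived? All inputs are the
# commonly quoted round numbers, so treat the result as an estimate only.

C = 299_792_458.0            # speed of light, m/s
BASELINE_M = 730_000.0       # approx. CERN -> Gran Sasso distance, m
EARLY_NS = 60.0              # approx. measured early arrival, nanoseconds

light_time_s = BASELINE_M / C                  # ~2.4 ms for the baseline
excess = (EARLY_NS * 1e-9) / light_time_s      # fractional speed excess

SN1987A_LY = 168_000.0       # approx. distance in light years
early_years = excess * SN1987A_LY              # expected early arrival

print(f"speed excess: {excess:.2e}, early arrival: {early_years:.1f} years")
```

The fractional speed excess comes out around 2.5e-5, which over 168,000 light years accumulates to roughly four years, so the "few years earlier" expectation above is consistent with OPERA's numbers.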

The value that was measured was an early arrival of about 60 nanoseconds relative to light, over the roughly 730 km path from CERN to Gran Sasso.
Here are a few more links:
Do neutrinos move faster than the speed of light?
This Extraordinary Claim Requires Extraordinary Evidence!
Can Neutrinos be Superluminal? Ask OPERA!
Neutrino experiment sees them apparently moving faster than light
CERN Press Release
Faster-than-light neutrino claim bolstered
And what would happen if it were possible to travel faster than light? Here are some explanations:
Faster-than-Light Travel

Commercial support for open source in Croatia

A thought has been nagging me for quite some time. It seems to me that the problem with open source in Croatia is the lack of adequate support. By adequate support I mean a company big enough to be able to provide services across the whole territory of the Republic of Croatia, and with fairly strict SLAs. Potential customers, primarily large companies (large by Croatian standards), cannot afford to depend on companies with only a few employees. For that reason those small companies very rarely get work that touches the core business of large companies, and mostly provide support to other small companies. Supporting small companies means a relatively large amount of work, small earnings, and limited growth.

Of course, some large companies do offer support for open source, or plan to offer it. But the problem with large companies is that they are already "married" to the big vendors (Cisco, Microsoft, IBM, ...) and primarily push their products and solutions, not open source. The reason is twofold. First, it is much easier to charge a high maintenance price if the equipment itself already costs a lot. For example, if you sell equipment (hardware and software) worth a million kuna, hardly anyone will question an annual maintenance fee of 100,000 kn. However, if you sell open source (which is free in itself) and charge that same maintenance fee, everyone will look at you askance and ask how you can charge that much for something that is free!? Second, responsibility is always shifted onto someone else. Say you sell Cisco equipment and it doesn't work. Then you excuse yourself by saying it's not your problem, that you've filed a request with Cisco and are waiting for an answer. Essentially, this is a variation on the saying from the '80s that nobody ever got fired for buying IBM. All in all, playing it safe.

As for the small companies, they in turn have two problems. The first is that they do exclusively adaptation of open source. In other words, they install the software, do the appropriate configuration (no programming!) and that's it. Those that do development, and profile and distinguish themselves that way, are rare, if they exist at all. The reason is simple: there are too many components, and each of them requires its own expertise. Put that together and you get the result that it is impossible for small companies to do any more advanced adaptation, in the sense of programming, of the large majority of the things they offer. The second problem is that these are mostly small companies which, consequently, work for other small companies. The result is that they have a lot of work and little profit.

So, what do I think should be done? These small companies should be merged, together with a few more that do programming and hardware, so as to get a company of around 100 to 150 employees. That company should then compete in tenders, offering solutions to large companies. The way to do it would be by buying the small companies (with appropriate terms for the employees and the management structure), and perhaps by recruiting people away in cases where a company doesn't want to agree to a sale. For that, of course, you need capital, which I personally don't have, so all of this remains one big theory...

Thursday, September 22, 2011

Implementing IF, AND, OR, etc. in iptables...

I saw that some people reached my blog while searching for OR, AND, IF and similar operators in iptables. These operators are indeed implemented within the subsystem, but not in the usual sense, that is, they are not so obvious. In other words, they are implicit in the way you write rules. But if you understand how that works, writing iptables rules becomes much easier.

Before continuing, you have to understand how packets are processed by the netfilter/iptables framework. Firewall rules are a data-driven language, but actually things are very simple. All packets traverse different parts of the Linux kernel. At certain points those packets are stopped (figuratively speaking) and a set of rules is "executed" that can alter, drop or pass the packet. If the packet is passed (or modified and passed), it goes on to other parts of the kernel where potentially another set of rules is invoked.

The points where packets are "stopped" are the chains: PREROUTING, INPUT, FORWARD, OUTPUT, POSTROUTING. In each chain there are different tables, but I'll ignore those for the moment as they are not important for this post.

The set of rules at each point (chain) is added to or deleted from using the iptables command. The iptables command needs an argument that defines which chain the rule is added to. That argument is the option -A followed by the name of the chain. For example, to add something to the INPUT chain, you would write:
iptables -A INPUT ...
OK, let us now start with a simple example. What if you want to do some processing on every packet that has a given source address, say, i.e.
if (src(ip) == { ... }
To achieve that functionality, just write the rule as follows:
iptables -A INPUT -s do_something_with_packet
The part do_something_with_packet I'll explain later, but basically it is the part that does something useful with a packet matched by this rule.

Now, what if you want to add an additional constraint, e.g. that the destination address is, i.e.

if (src(ip) == and dst(ip) == { ... }
Well, what you'll write is the following:
iptables -A INPUT -s -d do_something_with_packet
Easy, isn't it? All the constraints you wish to bind with the AND operator are just written one after another; the AND operator is implicit. OK, now you may ask: but if I want OR, what then? For example, something like the following:

if (src(ip) == or dst(ip) == { ... }
Believe it or not, it's simple. Write it like this:
iptables -A INPUT -s do_something_with_packet
iptables -A INPUT -d do_something_with_packet
I suppose you have figured out that when you add iptables rules one after the other, they are bound by OR, while when you write several constraints in a single command, they are bound by AND. Put differently, reading from left to right is AND, and from top to bottom is OR.

Tuesday, September 20, 2011

OpenSSH and how to get around port 25 filters on local networks...

OpenSSH is a very capable tool and I have been using it for years. And even though I don't consider myself a beginner, but rather an advanced user, every now and then I learn something new about this great tool. Here are two links to sites that I found very interesting:
  1. SSH Can Do That? Productivity Tips for Working with Remote Servers
  2. 9 Awesome SSH Tricks
Be sure to also read the comments there, because they are useful too.

What I'm going to describe is how I use ssh's tunneling capabilities to send email via a remote server when the local network blocks port 25 outside of the local network. Blocking port 25 is quite a frequent scenario, and a useful security practice, to prevent, or at least lower the quantity of, outgoing spam from the local network. It was probably massively introduced around the time of the Slammer worm. Anyway, for easier understanding, here is a figure that tries to illustrate this particular scenario:

Network topology
In the given figure I'm using a laptop, and what I want to do is send an email message using MY_HOME_MAIL_SERVER as the outgoing mail server. But the exit router (or firewall) on LAN1, where I'm attached, blocks any access to port 25 anywhere outside of LAN1. At the same time, it allows outgoing ssh connections.

The general idea is to redirect the mail client to connect to localhost on port 25 and, using ssh, to transfer this connection to the remote mail host's local port 25. Note that for this scenario to work you must not be running a local mail server, or you have to redirect the local mail client. The next premise is that the remote server allows ssh access. If it doesn't, then you have to find a host that does. I'll deal with that scenario later; let us first go through the simpler one.

To create the tunnel that will transfer the local connection to the remote host, run the following command as the root user:

ssh -L user@MY_HOME_MAIL_SERVER

What this command does is bind to local port 25 (TCP); anything that connects to that address is forwarded to the other side, where it connects to IP address and port 25, i.e. to the local instance of the mail server on MY_HOME_MAIL_SERVER. You need to run this command as root because of the local bind to a privileged port (25).

One more thing you need to do is trick your mail client into connecting to localhost instead of MY_HOME_MAIL_SERVER. How to do this depends on how you configured your mail client. In case you entered the symbolic name of MY_HOME_MAIL_SERVER into the mail client, you can change it to or, better, change /etc/hosts and put the following line there: MY_HOME_MAIL_SERVER

Don't forget to remove this line once you are finished. Otherwise, when you remove the ssh tunnel, you won't be able to send mail any more!

Let me try to visualize what you did. Some time later I'll draw a figure, but for now let me try with words. With ssh you created a pipe that goes from the laptop to MY_HOME_MAIL_SERVER. At its start, on the laptop, the pipe listens on port 25 at the local address At its end, the pipe simply hands whatever comes through to localhost, port 25, i.e. to the mail process running on MY_HOME_MAIL_SERVER.

Finally, what if you don't have ssh access to MY_HOME_MAIL_SERVER? Well, in that case you have to find some computer which you can ssh to, and which can connect to port 25 of MY_HOME_MAIL_SERVER. Note that it can be any server on the Internet. To make things work now, you use almost the same ssh command, but with slightly different arguments:

ssh -L user@YOUR_SSH_SERVER

Note that MY_HOME_MAIL_SERVER is the IP address or DNS name of your mail server, while YOUR_SSH_SERVER is the IP address or DNS name of the server you use as a middle hop.

And that's it. :) Actually very simple. But personally I'm not satisfied with the visualization, so I'll improve it when I find more time and inspiration. :)

Thursday, September 15, 2011

CentOS 6.1...

... or not!

OK, before I continue, first a disclaimer. This is my personal view of the whole situation around CentOS. If you disagree, that's OK. If you wish, you can argue via the comments, but please keep to the point and give arguments for your statements. Don't troll. Oh, and if you spot grammatical and similar errors, please email me corrections.

So, this whole situation is very frustrating! First, there was a long, long wait for 6.0 to appear. According to this Wikipedia article, about 240 days! Then 6.0 finally appeared, and everyone was very happy. But now there are no updates and no 6.1. And it's already about 120 days behind. Not to mention some serious bugs present in a fully patched CentOS 6.0 installation. And note well: if you plan to use CentOS and security is important (e.g. an Internet-facing Web server), don't use CentOS 6. If you desperately need CentOS, use version 5.

On the other hand, Scientific Linux has managed to track a Prominent North American Distributor quite well. Obviously, no matter what the CentOS developers claim, it is possible to be faster.

Now, whenever someone says on the mailing lists or forums "Hey, this is a problem", two things happen (actually three, because there are also those who agree). First, there are those who constantly repeat that you get as much as you pay for! Second, there are those who repeat that "... the CentOS developers have to work on CentOS and it's not good for them to waste time arguing about the development process, so don't say those things here!" But there is one big BUT! And that is that at least some of the people who complain offer help at the same time! Also, if something is as important as CentOS is to many people and companies, the developers cannot behave as if it doesn't concern them. Well, they can, but then they risk the failure of the project!

And the failure of CentOS is not only a problem for CentOS and its users, but also for RHEL. The reason is simple: the majority that use CentOS won't buy RHEL anyway, so they'll search for an alternative. The first alternative, Scientific Linux, has a main problem in its name; it doesn't sound like something that a serious business would run, no matter how stupid this argument actually is. Anyway, what's the next alternative? Oracle's Unbreakable Linux. And this means Oracle's distribution will have more users, and thus will be more commercially successful. But even if CentOS users do not go with Oracle's Linux, the only alternative they have is to go with Ubuntu or Debian (not to mention other Unixes like FreeBSD), and those are completely different types, which means that the ones that go that route won't ever return to RHEL types.

For completeness, I have to say that some things did improve. For example, based on the critiques they received, the Atrium site was introduced to track development. Well, they improved microscopically, because if you look into this site you'll find that it's not used. The calendar is empty. There are some sporadic comments, many of them old, and that's it. Yes, I know, the developers started to talk a bit more about what's happening in different ways, e.g. Twitter. But that's not enough!


So, where is the problem? I don't know, to be honest, because the project is anything but transparent. But there could be several problems.

Lack of people

The main problem is probably the lack of people working on CentOS. There are, as far as I understand, core developers and a QA team. On the main CentOS page there is a list of people with their functions (select Information, The CentOS Team, Members). There are 11 people listed, of which 6 don't participate in core development (infrastructure, wiki, Web, QA), four are listed as supporting older versions of CentOS (2, 3, 4), and this leaves one person working on CentOS 5 and 6. The information there is certainly old (maybe the Web guy left?), but nevertheless, for such an important project more people are necessary.

To solve this problem the CentOS team has to introduce more people into the project. But how will they do that when all of them (i.e. the CentOS team) are heavily busy trying to catch up with RHEL and don't have enough time to do anything else? The best way, IMHO, is to talk to vendors that use CentOS and ask them to provide paid engineers. And, with those people, try to create a community that will recruit new project members.

Decision/Strategic problem

Under this title I'll put the decision that was made during the RHEL 6 Beta program. The decision was not to do anything and to wait for the RHEL release. The reason was to help RHEL better test the new version. Well, that's certainly a good intent, but development of CentOS 6 should have proceeded immediately, because there was no way the CentOS team could deliver a beta version of CentOS in parallel with RHEL's beta version. In the end, they lost a lot of precious time!

Collaboration with Scientific Linux and others

This subject was beaten to death, and nothing happened. I don't know where the problem is. The CentOS developers didn't say anything about it: did they try to approach the SL developers? Were there any discussions? What do they think? Nothing! What rings in my head is the perceived problem of compatibility, and here we are at the next problem.

Strict compatibility

Actually, this is not a problem per se. CentOS has a mission to be as close to upstream as possible, and this is actually an advantage. But this advantage is turning into a great disadvantage, since CentOS is late because of it. The one reason cited why CentOS couldn't work with SL is that SL doesn't care much about strict compatibility. And this, I believe, is a fallacy. Both projects have to remove RH's trademarks and such from packages, and at least here there is a possibility to cooperate.

Next, because of strict compatibility no package updates can be provided ahead of upstream. The reason is that the package name (and version) would have to be changed in that case, and this could bring two problems. The first is that, when upstream releases an update, there could be divergence and collision. And second, package names/versions would differ, which might confuse some third-party software made explicitly for RHEL.

As for this, I don't understand why yum can not be patched (or a plugin provided) to allow packages to have the same name, but with the release date also taken into account. Also, why couldn't there be pre-releases of CentOS, named for example 6.1.test0, 6.1.test1? With all the packages the same, but with different release dates that would be taken care of by yum?
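To illustrate the idea (this is purely a sketch of the proposal, not an existing yum feature, and the function name is my own): when two builds share the same package name and version, ordinary version comparison declares them equal, so the tie could be broken by the build timestamp:

```shell
#!/bin/sh
# Hypothetical tie-break: given two builds with identical name/version,
# prefer the one with the later build timestamp (seconds since epoch).
pick_build() {
    # $1 = build time of candidate A, $2 = build time of candidate B
    if [ "$1" -gt "$2" ]; then echo A; else echo B; fi
}

pick_build 1316000000 1315000000   # A was built later, so A wins
```

Yum itself would of course have to learn to carry and compare this timestamp, but the comparison logic is this simple.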

Finally, people that use CentOS don't need support because they know how to do it themselves. And if they don't, who cares?


In conclusion, I'll say that CentOS isn't going in the right direction. The CentOS team has to do something, and do it as quickly as possible. Maybe the most important thing is to hire a capable project manager that will change all this and open up more.

Tuesday, September 13, 2011

iOS versus Android...

Well, there are two approaches to developing a smart phone OS. The first one is Apple's way, and that is to control everything. The other one is Google's way, and that is to control (almost) nothing. My personal opinion is that Apple's way is better for people that don't know anything about computers, and thus someone else has to make choices for them. Of course, Google's way is better for those that want freedom and/or know what they are doing. In the long term, I believe Google's approach is better, and for two reasons. The first one is that more companies (i.e. people) can innovate more. The second one is that anyone can develop applications for Android, while in case you want to develop for iOS you have to own a Mac.

Currently Apple has a larger stake, and in the long term this is not sustainable. So, the tactic it uses is to sue everyone for patent infringement and keep its monopolistic position. Do I need to say that I hate Apple because of that?! Oh yeah, and I hate software patents, too! Anyway, today I stumbled upon the following link where this guy lists what Apple took from Android. Great read...

Saturday, September 10, 2011

Implementing Turing machine using iptables...

Ok, today I decided that I'm going to try to implement a Turing machine using iptables. From the start it was obvious to me that I'd use a stream of IP packets as the tape. So, after reading and thinking a bit, I decided to implement this Turing machine. The reason I selected that particular one is that the tape always moves to the right, and that simplifies things a lot. More precisely, I decided to implement the example given in the linked Wikipedia article.

The next problem, after deciding which Turing machine to implement, was where to keep the internal state of the Turing machine. For that part I found that I can have a mark value (a 32-bit number, i.e. the state) per connection. In other words, this means that testing the current state is performed with the following test:

-m connmark --mark <desiredstate>

and to set new state I have to use:

-j CONNMARK --set-mark <newstate>

It also means that the packets that make up the tape should belong to a single connection, e.g. a single TCP connection.

Also, I had to decide where to keep the content of a cell. There are different possibilities, but for now the simplest one is to use the MARK target, i.e. the per-packet mark. So, to test the value of the cell I use:

-m mark --mark <cellvalue>

and to set it, I use

-j MARK --set-mark <cellvalue>
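Combining the two kinds of tests, a complete instruction becomes one rule that matches both the machine state (connection mark) and the cell content (packet mark). A minimal sketch, with arbitrary example values for state and symbol; the rule is only printed here, since actually loading it requires root:

```shell
#!/bin/sh
# Dry run: print the rule instead of loading it.  Drop the "echo"
# (and run as root) to load the rule for real.
rule() { echo iptables "$@"; }

# If the machine is in state 1 (connection mark) and the current cell
# (packet mark) holds symbol 0, write symbol 1 into the cell.
rule -A FORWARD -m connmark --mark 1 -m mark --mark 0 -j MARK --set-mark 1
```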

The last bit is initialization, i.e. setting the initial state, and halting. To set the initial state I used the connection state tracking feature of iptables, i.e. the following rule:

-m state --state NEW -j CONNMARK --set-mark <initialstate>

Finally, to halt the machine, I set the connection mark to a special value for which no iptables rule will be triggered again.

So, let me show the initial version of a shell script that makes this a bit "higher level". First, some initializations, i.e. tape and state definitions:

# Tape is a stream of IP packets belonging to a single TCP connection
TAPE="-s <source-ip> -d <destination-ip> -p tcp --dport 80 --sport 10000"

# States, assigned arbitrary integer values
STATE_A=1
STATE_B=2
STATE_C=3

# Halt state
HALT=4          # Halt state, no instruction will be executed in this state

# Tests when Turing machine is in a particular state
IN_STATE_A="-m connmark --mark $STATE_A"
IN_STATE_B="-m connmark --mark $STATE_B"
IN_STATE_C="-m connmark --mark $STATE_C"

# Actions to change a state of a Turing machine

Next, a few pseudo-instructions:

INITIALIZE_STATE_A="-m state --state NEW -j CONNMARK --set-mark $STATE_A"

# Reading a symbol from a tape, i.e. packet
READ_SYMBOL_0="-m mark --mark 0"
READ_SYMBOL_1="-m mark --mark 1"

# Writing a symbol to a tape, i.e. packet
WRITE_SYMBOL_0="-j MARK --set-mark 0"
WRITE_SYMBOL_1="-j MARK --set-mark 1"

# Short-hand "pseudo-instruction" to add instruction to a Turing machine

Finally, all the instructions:

# If in state A and symbol 0 was read write symbol 1 and go to state B

# If in state A and symbol 1 was read write symbol 1 and go to state C

# If in state B and symbol 0 was read write symbol 0 and go to state A

# If in state B and symbol 1 was read write symbol 1 and go to state B

# If in state C and symbol 0 was read write symbol 1 and go to state B

# If in state C and symbol 1 was read write symbol 1 and halt

# Initialize Turing machine

Note that I have to use two iptables commands to implement writing a symbol and transitioning to a new state. The reason is that I can have only one target per iptables rule. Certainly, this could be hidden by making those variables and pseudo-instructions fancier, but for now this will do...

This particular Turing machine didn't require me to have a transition to a new state without moving the tape. But there is a solution for this too. All rules have to be placed in a single user-defined chain, and then, when no tape movement is required, just use -g (iptables' goto target) to start all the rules from the beginning. In effect, all the rules will be executed again on the same packet, which means on the same cell. So elegant, isn't it? :)
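For completeness, here is a hedged sketch of what the complete instruction table could look like. It restructures the rules into one user-defined chain per (state, symbol) pair, which is my own choice (the chain names, and the use of -g for dispatch, are assumptions, not from the script above): after a goto into an instruction chain, evaluation does not return to the dispatch chain, so later instructions cannot accidentally match the freshly written state and symbol. The script only prints the rules (loading them requires root):

```shell
#!/bin/sh
# "ipt" just echoes each rule (a dry run); replace echo with the real
# iptables binary, as root, to load the rules for real.
ipt() { echo iptables "$@"; }

STATE_A=1; STATE_B=2; STATE_C=3; HALT=4

# New connections start in state A.
ipt -A TURING -m state --state NEW -j CONNMARK --set-mark $STATE_A

# Dispatch: route each (state, symbol) combination to its own chain.
# A connection whose mark is $HALT matches no rule here: the machine
# has halted.
ipt -A TURING -m connmark --mark $STATE_A -m mark --mark 0 -g INSTR_A0
ipt -A TURING -m connmark --mark $STATE_A -m mark --mark 1 -g INSTR_A1
ipt -A TURING -m connmark --mark $STATE_B -m mark --mark 0 -g INSTR_B0
ipt -A TURING -m connmark --mark $STATE_B -m mark --mark 1 -g INSTR_B1
ipt -A TURING -m connmark --mark $STATE_C -m mark --mark 0 -g INSTR_C0
ipt -A TURING -m connmark --mark $STATE_C -m mark --mark 1 -g INSTR_C1

# Two commands per instruction, as noted above: write the cell (packet
# mark), then switch state (connection mark).
# A,0: write 1, go to B            A,1: write 1, go to C
ipt -A INSTR_A0 -j MARK --set-mark 1
ipt -A INSTR_A0 -j CONNMARK --set-mark $STATE_B
ipt -A INSTR_A1 -j MARK --set-mark 1
ipt -A INSTR_A1 -j CONNMARK --set-mark $STATE_C
# B,0: write 0, go to A            B,1: write 1, stay in B
ipt -A INSTR_B0 -j MARK --set-mark 0
ipt -A INSTR_B0 -j CONNMARK --set-mark $STATE_A
ipt -A INSTR_B1 -j MARK --set-mark 1
ipt -A INSTR_B1 -j CONNMARK --set-mark $STATE_B
# C,0: write 1, go to B            C,1: write 1, halt
ipt -A INSTR_C0 -j MARK --set-mark 1
ipt -A INSTR_C0 -j CONNMARK --set-mark $STATE_B
ipt -A INSTR_C1 -j MARK --set-mark 1
ipt -A INSTR_C1 -j CONNMARK --set-mark $HALT
```

The per-instruction chains are exactly what makes rule ordering irrelevant: within a chain both targets run unconditionally on the packet that was dispatched there.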

The origin of money and debt...

Today I came across an interesting interview. I thought it was about clarifying the concept of state debt, which is constantly talked about, but in the end it turned out to be about something broader, which among other things includes the question of how money came to be.

In school we were taught that the original way of trading was barter, and that money was invented as a consequence of barter's inadequacy (you have a cow and need my chickens, but I don't need a cow :)). Only after money was invented do lending and credit appear. This hypothesis about the origin of money goes back at least to Adam Smith and his work The Wealth of Nations.

However, the page I mentioned at the beginning contains an interview with David Graeber. I would still recommend taking a quick look at what Wikipedia says about him, since it seems to me that his general views strongly influence his thinking, or the other way around, it doesn't matter. In that interview he claims the order was actually reversed, i.e. lending and credit came first, then money, and when a monetary system collapsed people resorted to barter. When I, as a layman, think about it a bit, it seems very logical. After all, "compensations" (debt offsets) have been very popular in Croatia in recent years, and the reason is that the financial system doesn't really work (again, I conclude this as a layman). But it seems to me that the main problem in determining which thesis is correct is that traces of money and a financial system are already found in the oldest written records, which date back to 3200 BC and originate from Mesopotamia. Consequently, money originated even earlier, and there are no written traces of that.

The second part of the interview focuses on the current debt situation in the world. However much we in Croatia may think that we live on credit and that this is outrageous, it seems things are no better in the West, especially in America. One thing I agree with is that it seems quite natural to me that people always complain about living in the hardest of times, while in fact every time is hard in its own way. Furthermore, according to him, cycles of debt crises have repeated constantly, but these are cycles lasting about 500 years. More recently, though, these cycles have been getting shorter, and the old methods of writing off debts can no longer be applied, for various reasons.

Reading this interview I learned quite a few other things, which is actually what makes reading such texts worthwhile. First, about the sociologist Marcel Mauss, who wrote the very influential book The Gift. In that book Marcel Mauss analyzes what could have spurred the development of money if it wasn't barter, and according to his thesis it was the gift. If I understood correctly, the basic thesis is that if I give someone a gift, that someone feels obliged to return the favor. Of course, this doesn't hold in absolutely every situation. Interestingly, that analysis has also been extended to open source.

Then there are the various economic theories of money (commodity theory of money, monetary circuit theory, chartalist theory of money) and, in general, the problem of the theory of money, which to me as a layman looks nonexistent, but is real and very problematic.

Finally, there are quite a few comments at the end that should also be read in order to see the other side (if anyone presented it). Anyway, I intend to do that someday...

Thursday, September 1, 2011

CAs are broken... but... there may be a fix...

Everyone has by now heard of the security breach at DigiNotar. The Internet is full of stories about it! I won't go into the details of what happened. Instead, I'll try to pinpoint what the actual problem is and, based on that, I'll try to outline a possible solution.

Let us start with the problem. The problem is that every single CA is actually a single point of failure of the whole distributed system. Do you need a fraudulent Google certificate? No problem: attack the weakest CA you can find, or try to attack several of them, and there you go. Now, I can hear you say: remove the weakest CA! Well, it's not so easy. Applying this rule recursively, you'll end up with one CA, or no CAs at all. This is not a solution either. And this adds another dimension to the problem: the fewer CAs there are, the more fragile the Internet becomes, because each CA is a highly likely target anyway. And you know the main premise of security: you are never ever absolutely secure!

So, what is the solution? I believe that the solution is to keep the system as it is, but to introduce signatures from multiple CAs in a single certificate. This won't completely resolve the problem, but it will make life harder for attackers. Besides, absolute security doesn't exist, as I already mentioned.

From the implementation standpoint, it is possible to do this either by changing the certificate structure, or by changing implementations so that they can check multiple certificates. In case multiple certificates are used, it's obviously necessary to have some common information that will allow all those certificates to be related.

The validity of a certificate (or certificates) could be calculated probabilistically. Additionally, some independent measure of correlation between CAs could be defined, so that the validity of a single site that uses this system can be evaluated based on this correlation measure (meaning, the less correlated the signing CAs are, the more valid it is).
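A toy calculation of the probabilistic idea (my own sketch, assuming the signing CAs fail independently): if p_i is the estimated probability that CA i is compromised, then the probability that every signing CA is compromised at once, i.e. that a fraudulent certificate could carry all the required signatures, is the product of the p_i:

```shell
#!/bin/sh
# Multiply the per-CA compromise probabilities: the more independent
# CAs sign, the smaller the product, so the more trustworthy the
# certificate.  The probability values used below are made up.
cert_risk() {
    # Arguments: per-CA compromise probabilities, e.g. 0.01 0.02
    echo "$@" | awk '{ p = 1; for (i = 1; i <= NF; i++) p *= $i;
                       printf "%.6f\n", p }'
}

cert_risk 0.01            # single CA:  0.010000
cert_risk 0.01 0.02       # two CAs:    0.000200
cert_risk 0.01 0.02 0.05  # three CAs:  0.000010
```

Correlated CAs (same software stack, same parent company) would break the independence assumption, which is exactly why a correlation measure is mentioned above.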

Note that if a single CA goes into bankruptcy, or is removed from the trusted CA list, that doesn't mean that everyone has to issue a new certificate immediately.

I would say that CAs implemented this way would be somewhere between the current CA system and PGP's Web of Trust.

About Me

scientist, consultant, security specialist, networking guy, system administrator, philosopher ;)
