Tuesday, October 30, 2012

CFP: MIPRO ISS

Starting this year I'm going to be a vice chair of the Information Systems Security event that is part of the larger MIPRO conference. I took this role because I believe a relevant security event is missing in this region and that this conference (I'll say conference, not event, from now on, and by that I'll refer to the ISS event) can fill the void. Furthermore, I believe there is a lot of room for improvement, which is of course mandatory if this conference is to become regional, and I have some ideas about what to do and how. But it will take me some time to articulate what I intend to do. In the meantime, the CFP was published [PDF].

I don't find conferences appropriate for publishing finished work; journals are better for that purpose. Conferences are, on the other hand, ideal for presenting work in progress in order to solicit feedback so that, in the end, you improve the quality of your research. I especially invite students (undergraduate, graduate and postgraduate) to submit work from their diploma theses or PhDs. Findings of weaknesses (vulnerabilities) somewhere are also of great interest, and I invite you to present such findings at the conference. Of course, in that case you should first be careful to notify those in whose systems you found a vulnerability, so that they have time to react.

Round table: Who still needs cash?

Today, October 30, 2012, I took part in a round table titled Who still needs cash? The announcement said the presenters would be: László Szetnics (MasterCard Europe), Luka Tomašković (Zagrebačka banka) and Tihomir Mavriček (Hrvatski Telekom). The panelists were Igor Strejček (Erste & Steiermärkische banka), Neven Barbaroša (Hrvatska narodna banka), Marijana Gašpert (Fina) and Marina Đukanović (Kolektiva). The moderator was Marina Ralašić from the magazine Banka. One more woman took part, but as far as I can see her name is not mentioned in the announcement; she was apparently added later. I took part because electronic money interests me from a technical, more precisely a security, perspective. This is my review of that round table.

You have probably noticed that I keep putting the words round table in italics. The reason lies in the fact that there was no round table at all, neither in the literal nor in the figurative sense, so there was almost no discussion. Conclusions, even less. As for goals, one could only dream. True, some interesting information could be heard, but my impression is that this was the only benefit, i.e. that beyond being interesting it had no additional significance. In short, I'm convinced this was a journalists' PR event. Here I'll add a digression. I hadn't thought about the title of this round table before, but only now do I notice the word "cash", which sticks out quite a bit, and I wonder why it didn't say "gotovina" (the Croatian word for it)? Maybe because it isn't an exact synonym? Who knows; journalists from the magazine Banka organized it, so as journalists they presumably know Croatian well, it's part of their profession, isn't it?

Everything started with relatively short presentations: first László Szetnics, then Luka Tomašković and finally Tihomir Mavriček. László Szetnics talked about the grand visions MasterCard has and at one point even claimed that everything is secure. Nice of him, but I'm quite skeptical. Luka Tomašković seemed much more realistic and mentioned some experiences and data that Zaba has. For example, Zaba holds 90% of the mobile banking market, and 20% of mobile phone owners have a smartphone. The last presentation was given by Tihomir Mavriček, who swore by NFC (short for Near Field Communication), which Luka Tomašković correctly characterized as a means of communication, not some special means of payment. Mavriček also showed a promotional film for an NFC payment service. It was obviously staged (which he himself commented on), though it was interesting to see some familiar faces in it (former students of the faculty where I work, some from when I was a student myself). Later one could hear that NFC payment was introduced in the second half of July, that the trial period was supposed to last until October 31, but that it was extended until March 2013. There are currently 6 points of sale, and the number of users is between 200 and 300, growing toward 300, all of them T-Com employees. During the "discussion" a gentleman from the audience asked Mavriček how he explains the fact that Apple dropped NFC from its newest product. Mavriček answered with a counter-question: which company holds the most NFC patents? The answer, of course, is Apple. He presumably wanted to establish that Apple is working intensively on it. However, his argument is a double-edged sword. Maybe it also means that Apple, after quite a lot of work and quite a lot of experience with NFC (which resulted in the patents), concluded that the technology is unprofitable, or that for some other reason it isn't worth building into the iPhone? Nobody mentioned that possibility.

Since I've already dwelt so long on T-Com's presentation (which, by the way, was bursting with animations, a true "hard core" marketing presentation), I can't help criticizing the use of the word intelligent. Namely, the phrase Intelligent environment appeared on those slides. The senseless (that is, marketing) misuse of that word really upsets me. There is at least an approximate consensus on what intelligence means (the ability to cope with a new situation), but I can't connect that in any way with the payment method T-Com is trying to introduce, except that smartphones are involved, so maybe that's the link to intelligence! In short, people should pay attention to what the words they use mean, and not use them just because they sound nice!

After Tihomir Mavriček's presentation, the moderator first asked each panelist one question, and then the "discussion" started (although I could hardly call it a discussion). In that part some interesting facts could be heard:
  1. ATM transactions are not charged in Croatia, but they are in neighboring countries.
  2. Banks need cash to fill their ATMs, and they obtain it primarily from merchants (I hope I remember this second part correctly).
  3. Abroad, discounts are given for paying by card, while here discounts are given for paying in cash.
I must say that I had always explained the cash discount by the overly expensive fees for POS transactions, so the question of why it is the other way around here surprised me. Still, during the "open part" a gentleman from the audience said loud and clear that transaction fees here are very high and that this is the main brake on wider use of electronic payments!

It was interesting to hear Marina Đukanović from Kolektiva. It seems Kolektiva initially offered payment only by credit card, and later, due to market pressure and problems with cards, introduced payment via Internet banking (i.e. via payment slips). Regarding the card problems, she pointed out that users blamed Kolektiva even though the problems weren't theirs, which is basically a natural human reaction, i.e. it's hard to expect anything else. She also pointed out that people don't want to type in credit card numbers, that they "top up" PayPal accounts and then pay with those, i.e. that they lack trust. She then turned to the need for security education. One of her statements, if I remember correctly, was that "mobile payment" is actually "backward payment"; namely, it's complicated. Later, during the discussion, asked about the age groups of Kolektiva's users, she said they have statistics showing that most users are between 18 and 35, although the 35 to 55 group is growing; the overall range of their users is 18 to 65. At one point I felt the urge to comment on her remark about trust and PayPal. Namely, I know something about security, and I can say that I trust hardly anyone, if anyone at all, enough to type in my card number. In that sense, no amount of education will help me, because if I already have to trust someone, I'll trust PayPal and pay everyone else through it. What happens to my number after I give it to Kolektiva I no longer know, nor can I influence it, and given the examples of break-ins on the Internet, it's hard for me to believe Kolektiva would be secure.

In conclusion, I have to say several things. First, at no point was it defined what "electronic money" means, and that hung in the air the whole time. Based on the presentations and the overall story, one could conclude that it means any payment that goes over some (any) electronic channel, but that most people mean electronic payment methods. Second, security was only mentioned in passing, while privacy was not mentioned with a single word. Finally, I personally think there was no discussion at all, in the sense of conflicting opinions, and there were no conclusions either.

You can find the magazine Banka's report here. The presentations are also available on the same page.

Installing ossec client on CentOS 6...

OK, I did this already, but I managed to forget it. That isn't strange, after all; it's not like you add new machines every day. Anyway, here are the steps needed to install the OSSEC client on a CentOS machine, more specifically CentOS 6. I decided to write this post in case someone else needs these instructions, but certainly for myself, so that next time I don't have to think much. Note that I like to install RPM packages because it is easier to update them than to compile from source, and also because someone else worries about new releases. Additionally, for security reasons it's not good to install a development environment on production machines that don't need it. OK, here we go.

First, make sure that you have EPEL repository added. The easiest way to do this is using the following command (note, bold is what you type, the rest is what you get from the machine):
# rpm -Uvh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
Retrieving http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
warning: /var/tmp/rpm-tmp.7IMdWB: Header V3 RSA/SHA256 Signature, key ID 0608b895: NOKEY
Preparing...                ##################################### [100%]
1:epel-release   ##################################### [100%]
Second, fetch the necessary packages. I didn't want to install Atomicorp's repository, so I only fetched the ossec packages using wget; ossec-hids and ossec-hids-client are what you need. Select the newest versions you can find. Next, install them using the yum command:
# yum localinstall ossec-hids-client-2.6-15.el6.art.x86_64.rpm ossec-hids-2.6-15.el6.art.x86_64.rpm
I assumed that yum is executed in the same directory where you placed the downloaded packages. Also, if you downloaded some other versions, change the names appropriately.
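For completeness, fetching the packages might look roughly like this; the URL is only a placeholder since I'm not listing the exact mirror, so substitute the location where you found the RPMs:

```
# Fetch both RPMs into the current directory; the URL below is
# illustrative only -- replace it with the actual download location
wget http://example.com/path/to/ossec-hids-2.6-15.el6.art.x86_64.rpm
wget http://example.com/path/to/ossec-hids-client-2.6-15.el6.art.x86_64.rpm
```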

Open ossec's configuration file, /var/ossec/etc/ossec-agent.conf, and change the line with the <server-ip></server-ip> element. It has to point to your server's IP address. You can also add files to be monitored in addition to the existing ones, or remove some of the existing ones if they are not used on the machine on which you are installing the ossec client.
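As a sketch, the relevant fragment of ossec-agent.conf should look something like the following; the IP address here is, of course, just an example:

```xml
<ossec_config>
  <client>
    <!-- replace with your OSSEC server's IP address -->
    <server-ip>192.168.10.1</server-ip>
  </client>
</ossec_config>
```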

Now, go to the OSSEC server and run the agent management tool there. It is probably in /var/ossec/bin:
# ./manage_agents


****************************************
* OSSEC HIDS v2.5-SNP-100907 Agent manager.     *
* The following options are available: *
****************************************
   (I)mport key from the server (I).
   (Q)uit.
Choose your action: I or Q: A

- Adding a new agent (use '\q' to return to the main menu).
  Please provide the following:
   * A name for the new agent: centos6.domain.local
   * The IP Address of the new agent: 192.168.10.41
   * An ID for the new agent[030]: <just press ENTER>
Agent information:
   ID:030
   Name:centos6.domain.local
   IP Address:192.168.10.41

Confirm adding it?(y/n): y
Agent added.
Note that the tool doesn't display all the options you have at your disposal. The next thing you need to do is extract the key that you'll import into the client. This is also done using the manage_agents tool, so either start it again or, in case you didn't exit after adding the agent, just continue:
 ****************************************
* OSSEC HIDS v2.5-SNP-100907 Agent manager.     *
* The following options are available: *
****************************************
   (I)mport key from the server (I).
   (Q)uit.
Choose your action: I or Q: e

Available agents:
   ID: 002, Name: somehost, IP: 10.0.10.1
   ID: 030, Name: centos6.domain.local, IP: 192.168.10.41
Provide the ID of the agent to extract the key (or '\q' to quit): 030
Agent key information for '030' is:
<here a very long string will be printed>
** Press ENTER to return to the main menu.
Again, the option to export the key isn't listed in the help message! Anyway, copy the very long string that is printed (the agent's key); then you can quit the tool and log out from the OSSEC server.

Now go to the ossec client, change directory to /var/ossec/bin and run the manage_agents tool:
# ./manage_agents


****************************************
* OSSEC HIDS v2.6 Agent manager.     *
* The following options are available: *
****************************************
   (I)mport key from the server (I).
   (Q)uit.
Choose your action: I or Q: I

* Provide the Key generated by the server.
* The best approach is to cut and paste it.
*** OBS: Do not include spaces or new lines.

Paste it here (or '\q' to quit):
<very long string copied here!>

Agent information:
   ID:030
   Name:centos6.domain.local
   IP Address:192.168.10.41

Confirm adding it?(y/n): y
Added.
Finally, restart ossec client:
# /etc/init.d/ossec-hids restart
Shutting down ossec-hids:                      [  OK  ]
Starting ossec-hids:                           [  OK  ]
You should see your new client in OSSEC's Web interface, which should confirm that it is running OK.
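If you prefer the command line to the Web interface, the server also has an agent_control tool (in /var/ossec/bin, same as manage_agents) that can be used to check agent status; the grep on the client side is my own habit, not something the documentation prescribes:

```
# On the OSSEC server: list all registered agents and their status
/var/ossec/bin/agent_control -l

# On the client: look for connection messages in the agent's log
grep ossec-agentd /var/ossec/logs/ossec.log | tail
```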

Monday, October 29, 2012

yum and fastestmirror plugin...

A few hours ago I lost my nerve because when I started yum to update my system, the download was painfully slow, somewhere around 20 kB/s. That is outrageous because I was on a 100 Mbps link, which is probably the slowest link in the chain that ends up somewhere in GEANT. Thus, things should be much faster than that! The best speed that can be achieved is somewhere around 50 Mbps, and what I was getting wasn't even remotely close. This wasn't something I was prepared to accept as-is, so I decided to see what was happening.

Yum has a plugin, fastestmirror. The purpose of that plugin is to determine the fastest available mirror and make yum download from it, not from some random one. Usually this plugin works very well, but this time it didn't. I tried to reset everything with
yum clean all
and then again
yum update
But it didn't help. Googling around, I quickly determined that the first command doesn't remove fastestmirror's data. What is necessary is to remove the cache file stored in /var/cache/yum/x86_64/17/timedhosts.txt (this is the location on 64-bit Fedora 17). Well, guess what, this didn't help either. Namely, the fastestmirror plugin determines which mirror is the best by measuring how much time is necessary to establish a connection with a mirror, after which it immediately disconnects. This is all fine until a mirror starts to apply throttling, effectively capping the maximum transfer speed. And this was exactly what happened to me.
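So, put together, the reset attempt (which, as I said, didn't help) boils down to:

```
# Drop yum's cached metadata
yum clean all
# Also drop fastestmirror's timing cache; the path below is for
# 64-bit Fedora 17, adjust the arch and release parts as needed
rm -f /var/cache/yum/x86_64/17/timedhosts.txt
# ...and try again
yum update
```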

It used to be possible to send a SIGINT signal to yum (pressing Ctrl+C), on which yum would switch to another mirror. But this doesn't work any more: when you press Ctrl+C, yum exits. Now, this is expected behavior, but the previous one was actually useful! So, there should be some way to tell yum to switch to the next mirror.

In the end I solved this by looking at which mirror(s) yum was using. This is printed when yum starts, e.g.:

Loading mirror speeds from cached hostfile
 * fedora: gd.tuwien.ac.at
 * fedora-debuginfo: fedora.inode.at
 * rpmfusion-free: mirrors.coreix.net
 * rpmfusion-free-debuginfo: mirrors.coreix.net
 * rpmfusion-free-updates: mirrors.coreix.net
 * rpmfusion-free-updates-debuginfo: mirrors.coreix.net
 * rpmfusion-nonfree: mirrors.coreix.net
 * rpmfusion-nonfree-debuginfo: mirrors.coreix.net
 * rpmfusion-nonfree-updates: mirrors.coreix.net
 * rpmfusion-nonfree-updates-debuginfo: mirrors.coreix.net
 * rpmfusion-nonfree-updates-testing: mirrors.coreix.net
 * rpmfusion-nonfree-updates-testing-debuginfo: rpmfusion.blizoo.mk
 * updates: gd.tuwien.ac.at
 * updates-debuginfo: fedora.intergenia.de
The problem was Fedora's main repository, which was downloaded from gd.tuwien.ac.at. So, I edited fastestmirror's configuration file /etc/yum/pluginconf.d/fastestmirror.conf and added the following line:
exclude=.at
That excluded a few more mirrors than I intended, but it definitely solved my problem.
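A more surgical alternative, if you want to keep other .at mirrors, is to exclude just the offending host; as far as I can tell, exclude takes a comma-separated list of substrings matched against mirror hostnames:

```
# /etc/yum/pluginconf.d/fastestmirror.conf
[main]
enabled=1
exclude=gd.tuwien.ac.at
```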

Sunday, October 28, 2012

Research paper: "Before We Knew It..."

The paper I'll comment on in this post was presented at ACM's Conference on Computer and Communications Security, held on Oct. 16-18, 2012. The paper tries to answer the following question: how long, on average, does a zero-day attack last before it is publicly disclosed? This is one of those questions that, when you see them, are so obvious, yet for some strange reason they didn't occur to you. And what's more, no one else tried to tackle them! At the same time, this is a very important question from a security defense perspective!

Anyway, having an idea is one thing; realizing it is completely another. And in this paper the authors did both very well! In short, it is an excellent paper with a lot of information to digest, so I strongly recommend that anyone in the security field study it carefully. I'll put here some notes on what I found novel and/or interesting while reading it. Note that for someone else something else in the paper may be interesting or novel, so this post is definitely not a replacement for reading the paper yourself. Also, if you search a bit on the Internet, you'll find that others covered this paper too.

Contributions

The contributions of this paper are:
  • Analysis of the dynamics and characteristics of zero-day attacks, i.e. how long it takes before zero-day attacks are discovered, how many hosts are targeted, etc.
  • A method to detect zero-day attacks by correlating anti-virus signatures of malicious code that exploits certain vulnerabilities with a database of binary file downloads across 11 million hosts on the Internet.
  • Analysis of the impact of vulnerability disclosure on the number of attacks and their variations; in other words, what happens when a new vulnerability is disclosed and how exactly that impacts the number and variations of attacks.
Findings and implications

The key finding of this research is that zero-day attacks are discovered, on average, 312 days after they first appear. In one case it took 30 months to discover the vulnerability that was exploited. The next finding is that zero-day attacks, by themselves, are quite targeted; there are of course exceptions, but the majority of them hit only several hosts. Next, after a vulnerability is disclosed there is a surge of new exploit variants as well as in the number of attacks; the number of attacks can be up to five orders of magnitude higher after disclosure than before.

During their study, the authors found 11 previously unknown zero-day attacks. But be careful: that is not a statement that they found previously unknown vulnerabilities. It means the vulnerabilities were known, but up to this point (i.e. this research) it wasn't known that they had been used for zero-day attacks.

So, here is my interpretation of the implications of these findings. It means that right now there are at least a dozen exploits in the wild that no one is aware of. So, if you are a high-profile company, you are in serious trouble; although, as usual, whether you are, or will be, attacked depends on many things. Next, when a vulnerability is disclosed and no patch is available, you have to be very careful, because at that point there is a surge of attacks.

My message/question to dairy farmers...

Lately there is constant writing about dairy farmers, and various news is published, like this one for instance. And yes, it's true: dairies and retailers "eat up" a significant part of the profit, and crumbs are left for the producers. However, this whole situation puzzles me, and annoys me, so much that I decided to write this post, if only to vent, since nothing is going to change anyway.

We live in capitalism, and everything is based on supply and demand, and of course on one's place in the chain from producer to consumer. But what our dairy farmers are doing is completely wrong! They beg the state to solve their problem, and I don't agree with that! The state cannot dictate to commercial companies what and how they will do business; and on the other hand, should the state pay producers instead of the commercial companies? Our milk producers should wake up and understand the context they are in. This is no longer Yugoslavia, but an entirely different time.

Dukat, Megle and the like have an extensive milk collection network and can thus reach any producer, and there aren't many such processors! The producers, on the other hand, passively wait for the processor to come to them: for the truck that buys the milk to drive up, and that's it. In this way they reduce their own demand and have only themselves to blame!

To cut the philosophy short, my question (and astonishment) is: if things are so bad, why don't they join forces? An association would allow them to start processing and distributing milk themselves. If they don't want to go that far, by associating they could organize milk transport over longer distances and thus increase their potential number of buyers, even if that means neighboring countries. There are still plenty of small dairies around Croatia, and there are also the famous milk vending machines (although I don't know how those work). So there is potential in that sense. And not only that: I personally would sooner buy milk from some small producer, provided I knew it was local and of good quality, than from the big ones.

Addendum: And yes, why don't they join forces and start selling milk at auctions? Dairies would come to an auction and compete for a one-year contract with the producers. Of course, the dairies could then start importing milk instead, but there the state should step in with levies or some other mechanism to prevent that. Finally, this could be advertised under "Kupujmo Hrvatsko" (Buy Croatian) to motivate consumers to buy Croatian products.

Thursday, October 25, 2012

Blogger: A Nightmare...

Well, I have to say that Blogger is becoming a nightmare for me. I already wrote about the missing versioning feature, but now I have four more complaints.

First, try to open some existing draft post and press ^Z repeatedly. What will happen is that at some point your complete post will disappear!? Your reflex will be to close the edit tab without saving changes. But that is a big mistake! The autosave feature has already kicked in and, no matter that you've said it's OK to lose changes, the post is gone for good. In other words, the autosave feature should save an independent copy of the post, not overwrite the existing one until I click the Save button!

And while I'm at it, the second thing that annoys me is that I sometimes click on some draft just to see what I wrote in it, and the autosave feature saves the unchanged version but also updates the timestamp, so the post comes to the top. I don't want it on top if I didn't change anything in it!

Third, if I change something and later I'm not satisfied, there is no way to revert the changes. There is no way to make snapshots of a post.

Finally, the editor itself is catastrophic! It is simple, that's true, but that is the only positive thing about it. Sometimes it will not allow you to add a space, and frequently what you see in edit mode will differ from what you see in preview mode, not to mention different font sizes! One thing it does is very annoying: if you click on one paragraph and make formatting changes (e.g. bold, font size) and then go to a later part of the text (e.g. two paragraphs lower), it carries the formatting over even though that part is formatted completely differently!

Google! Can we get a better blogging tool?! Otherwise, I'll seriously start considering a switch to WordPress...

Disclaimer: I have to say that I'm using Firefox 16 with various plugins, among others NoScript and RequestPolicy, which might influence Blogger's behavior. But I, as an ordinary user, don't have time to investigate this, and I think NoScript is an important component of my protection.

Sunday, October 14, 2012

Microprocessor architectures...

One of the things that interests me a lot, even though only as a spectator, is the architecture of modern processors and the techniques used to make them as fast as possible. I recently stumbled on a post analyzing AMD's new processor (micro)architecture, Bulldozer, which brought me to this PDF document that summarizes in one place the characteristic details of modern processor architectures. Highly recommended reading! On the pages of the guy who wrote that PDF document you'll also find a lot of other interesting stuff, mostly low level, for assembly programmers or those who care a lot about every bit of performance.

And while I'm at these low-level details, I'd like to mention another site that is also a very good source of information: Ulrich Drepper's homepage. He's one of the maintainers of glibc, and on his homepage there is a lot of documentation describing the inner workings of glibc. But the thing I wanted to mention now, related to the topic of microprocessor architecture, is the paper titled Understanding CPU Caches.

Request for Blogger feature: Versioning for posts....

Yesterday and today I was looking for a way to have posts on Blogger versioned, both before and after they are published. What I want is the ability to:
  • track changes, especially after the post is published, so that readers of the blog can request a diff to see what I've changed (even though I usually only correct spelling and grammar errors),
  • fork a certain post, rewrite it and publish it as a new one, with automatic tracking of those forks so that readers know about them.
Also, I decided that my students will have to write a blog about what they are doing for their diploma theses (and other things they have to do during their studies under my supervision). But I require that they not publish a post until I review it. And there is a problem: the support for a review process within Blogger is very weak. Now, I know we can use LibreOffice or MS Office for that, or just plain simple files, but I think that would add another layer of complexity to a process that is already not so simple. Not to mention editing.

So, does anyone know about some plugin, or about how to request a feature on Blogger?

Saturday, October 13, 2012

Connection vs. connectionless vs. protocol vs. service vs. ...

I was searching for an example of a connection-oriented unreliable service and an associated, or implementing, protocol, and I stumbled on a post written by someone claiming to be a CCIE that doesn't distinguish between the terms service and protocol, or at least the post was written in such a way that there is no distinction. Now, there are pages on Wikipedia that explain those terms, but I was compelled to write my own post about them and to make the distinction and characteristics clear. Also, because a connection-oriented service is frequently associated with TCP, and a connectionless one with UDP, the characteristics of those protocols are often attributed to connection-oriented and connectionless services as well. But this is wrong, and let me also explain what is wrong and why.

Network layers and services

To understand the difference between a service and a protocol, you have to know that network functionality is divided into independent layers, stacked on one another. This is, obviously, true for all layers except the first and the last one. This division is necessary because, for example, the apparently simple operation of opening a Web page is actually very complex and includes a lot of functionality, at the bottom of which is the problem of sending and receiving bits over wireless, copper and/or fiber links. Those areas alone are so complex that people specialize not only in, e.g., wireless communications, but in more specific parts like, e.g., antenna design. Anyway, the main purpose of each layer is to encapsulate some functionality and provide a service to the higher layer (note that I'm referring to the layer immediately above) without the higher layer being aware of what's happening in the layer below, or knowing the number of layers below. This is the same principle used in software design, where applications are divided into modules to make them manageable. This use of layers and services is iterative (or recursive, depending on how you look at it), meaning that a layer that offers a service to the higher layer at the same time uses the service of a lower layer to accomplish its goals. Again, the first and last layers are somewhat specific, but I won't go into that.

Now we come to the important fact that each layer provides a service to the layer above and uses services from the layer below. So, a service is just that: some functionality offered to a higher layer, where the higher layer doesn't know how the service is implemented. Note that the layer that offers a service is also called the service provider, while the one that uses a service from a lower layer is called the service user.

Actually, this is enough knowledge about layering in networks to understand the distinction between a service and a protocol, but for completeness I'll mention a few more things about layering. First, there is an (almost) infinite number of ways this layering could be done, not only with respect to the exact number of layers, but also with respect to the specific functionality placed in each layer. The most popular layering model is the ISO/OSI Reference Model, which has exactly 7 layers, with each layer having prescribed functionality. It is called a reference model because it is almost exclusively used as a reference for all other possible models. In other words, concrete networks like the Internet, or even Ethernet, have a different number of layers and/or different functionalities in their layers, so they are frequently mapped onto the ISO/OSI Reference Model for the purpose of discussion and better understanding.

One final, very important thing. Layers don't implement functionality; they are an abstract concept, so they don't exist as something material. What implements functionality and offers services are entities that logically belong to a certain layer. In other words, there are software and hardware modules, written by programmers or designed by hardware engineers, that exist in computers, implement some functionality, and are connected to other software/hardware modules that use them or that they use. For those software/hardware modules, by looking at how they are connected and what they do, we say that they belong to a certain network layer. And when I say that a layer implements something, what I actually mean is that some entity in that layer implements it; likewise, when I say that a layer uses a service of a lower layer, what is actually meant is that an entity in that layer uses a service of an entity in the lower layer. There is a bit of ambiguity in those statements, but it is easier to write, and I think that with this clarification it isn't so confusing.

Protocols

The fact that each layer has the services of a lower layer at its disposal, and doesn't know how the lower layer works, nor does the lower layer know how the higher one works, basically means that communication happens between the same layers on different machines (or within the same one, which is actually a special case). So, to establish communication I, as an entity in, say, the 3rd layer, communicate with an entity in the 3rd layer on some other machine, and we exchange information in order to allow communication between users in the 4th layer. The same goes for the 4th and other layers, too. But now we have a problem. Namely, the communicating entity in one layer on one machine is programmed by one company (say Microsoft), while the entity on another machine, in the same layer, is programmed by someone implementing the analogous entity in Linux. Clearly, the two programmers probably don't know each other and possibly never will. So, how do we make sure that their software will work, i.e. talk to each other? The answer is: by defining a protocol. Human language is actually a protocol, albeit a very complex and ambiguous one. Nevertheless, if two secretaries don't speak the same language and share the same set of concepts that the language refers to, they will never be able to pass messages between their bosses (users)!

So, a protocol allows two (or more) entities within a layer to exchange information, establish communication, and transfer data between their users. A protocol thus implements a service, or is used to implement a service! More about that later. A protocol includes the following elements:
  1. The data units exchanged, called Protocol Data Units, or PDUs. The format of every data unit exchanged has to be rigorously defined!
  2. Behavior, usually defined and implemented using state machines. Behavior is how the entity responds to information it receives from its peer (the other entity) and from its users, and also what it expects from the lower layer and how it uses it.
Note that each entity actually communicates with three other entities. The first one is a user in a higher layer - the service user; the second one is an entity in a lower layer whose services are used to transfer data - the service provider; and finally, there is the peer with whom communication is established.
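The behavioral part can be made concrete with a toy state machine (states and events are invented for illustration; no real protocol is modeled): in each state, the entity reacts only to the events allowed in that state, whether they come from its user or from its peer.

```python
# Allowed transitions: (current state, event) -> next state.
# Anything not listed here is a protocol violation.
TRANSITIONS = {
    ("CLOSED",      "user_connect"): "CONNECTING",
    ("CONNECTING",  "peer_accept"):  "ESTABLISHED",
    ("ESTABLISHED", "user_close"):   "CLOSED",
    ("ESTABLISHED", "peer_close"):   "CLOSED",
}

class ProtocolEntity:
    def __init__(self):
        self.state = "CLOSED"

    def event(self, name):
        key = (self.state, name)
        if key not in TRANSITIONS:
            raise ValueError(f"event {name!r} not allowed in state {self.state}")
        self.state = TRANSITIONS[key]
        return self.state

e = ProtocolEntity()
e.event("user_connect")   # -> CONNECTING
e.event("peer_accept")    # -> ESTABLISHED
```

Two independently written implementations will interoperate as long as both follow the same transition table and the same PDU formats; that shared table is exactly what a protocol specification pins down.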

Connection and connectionless services

We saw in the section Network layers and services what a service is. Now we can say that there are two primary types of services. The first one is modeled on how telephones work and is called connection oriented service, while the second one is modeled on how the post office works and is called connectionless service. It is interesting to note that connectionless is actually older, i.e. the telegraph system is connectionless and was in use before the telephone was invented, but connection oriented is more dominant, and before the advent of digital computers it was basically the only type in use.

The key difference between the two is that an entity that uses a connection oriented service from a lower layer entity first has to establish a connection, i.e. to say with whom it is going to communicate on the other end, but without transferring any data yet. This is called the connection establishment phase. Also, when the user is finished with the data transfer, or communication, it has to explicitly tear down the communication channel with its peer entity on the other end. This is called the connection teardown phase. In between those two phases, data is transferred. Because of this, the identifier (i.e. address) of the other end is transferred only once, during the connection establishment phase.

If you think a bit about this, you'll immediately see the similarity between a telephone call and this service. In a telephone call you first establish a connection by dialing your peer's number, then you talk (i.e. transfer data), and finally you hang up. Also, during a telephone call you are the user, and the telephone company offers you a service in which you don't know what's happening within the telephone system. You only know, and care, that you have an established communication channel with your peer entity, i.e. the person on the other end. Now, maybe your spouse told you to invite your friends to dinner. In that case, your spouse is your user and you are providing a service to him/her.

On the other hand, a connectionless service has only a data transfer phase, i.e. no connection establishment or teardown; you just send data. Obviously, when sending data you have to tell your service provider to whom the data should be sent, and this has to be done each time you send something. Again, we said that the post office works that way and letters are sent that way, i.e. each one of them carries an address, and all the letters you've sent are mutually independent!
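The two service types map directly onto the BSD socket API, so a short sketch can show the difference: a TCP (connection oriented) socket calls connect() once and then just sends, while a UDP (connectionless) socket names the destination in every sendto() call. Only the UDP side is actually exercised below, over loopback, so the sketch runs without any network setup.

```python
import socket

# Connectionless service: every datagram carries the destination address.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))        # port 0 = let the OS pick a free port
recv.settimeout(5)
addr = recv.getsockname()

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"first", addr)        # address given on every send
send.sendto(b"second", addr)       # ...again: the datagrams are independent

print(recv.recvfrom(1024)[0])      # b'first'
print(recv.recvfrom(1024)[0])      # b'second'

# Connection oriented service, by contrast, would look like:
#   s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
#   s.connect(addr)   # connection establishment: address given once
#   s.send(b"data")   # data transfer: no address needed any more
#   s.close()         # connection teardown
```

Note how the API mirrors the phases described above: connect()/close() are establishment and teardown, and between them send() never mentions the address again.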

Relation between connectedness and protocol

Note again that, while talking about types of service, we never once talked about how things work, only how they appear to work. And that's the main point. Namely, service is one thing, protocol is another; a service can be connection oriented or connectionless, but a protocol is, well, just a protocol. Now, the terms connection oriented protocol and connectionless protocol are used extensively in the literature, but this connectedness attribute is actually bound to the service the protocol implements, not to the protocol itself.

Let us, as an example, take protocols from the Internet: IP, TCP and UDP. TCP and UDP are transport layer protocols (meaning they are part of the transport layer in the ISO/OSI RM). IP, on the other hand, is a network layer protocol and is used for communication between network layer entities. In networking texts entities are almost exclusively called by the same name as the protocol they use, so we have a TCP entity that uses the TCP protocol to communicate with other TCP entities, called simply TCP, and a UDP entity that uses the UDP protocol to communicate with other UDP entities, called simply UDP. It is similar for the IP protocol/entity. This might sometimes be ambiguous, but from the context it should be clear whether the authors are talking about entities or protocols.

Let's start with IP. IP offers a connectionless service to its service user and uses a connectionless service from its service provider. This means that each of IP's protocol data units (called a datagram, or packet, or IP packet) carries a destination address and data, and in order for two IP entities to communicate it is not necessary to establish a connection. Actually, there is no way a connection could be established with the IP protocol. Furthermore, entities using the IP protocol offer a connectionless service to their users, in our case TCP and UDP. And IP also uses a connectionless service from the lower layers. The reason for this is that connectionless is the least common denominator; it expects the least from the network, and that's one reason why IP is a connectionless protocol. If the underlying network is connection oriented, as e.g. ATM is, then it only has to expose a connectionless service that will be used by IP. And if, in the implementation of that service, it is necessary to establish and tear down a connection for each packet, then so be it. It will work, though not particularly efficiently.

The next entity is TCP. It offers a connection oriented service to its users, and uses a connectionless service from its service provider, the IP entity. But TCP's service is more than that; it is also reliable (more about that in the next section). Now, take note that TCP uses IP for communication (more precisely, it uses the services provided by the IP entity), which is connectionless! So, TCP offers a connection oriented service on top of a connectionless service. This is actually very hard to achieve.

Finally, there is the UDP entity, which offers a connectionless service to its users and uses a connectionless service from its service provider, the IP entity. UDP is actually a very thin layer in terms of functionality, because it adds almost nothing to what IP already provides. In a way, it only relays data.

Note that what each entity offers to its users (i.e. service) doesn't necessarily correspond to what it gets from its service provider.


Relation between connectedness and QoS

OK, the final thing to discuss is reliability, or more generally the Quality of Service (QoS) offered by a service. As I said in the introduction, because connection oriented service is mostly associated with TCP, the characteristics of TCP are associated with connection oriented service. The same goes for UDP. But that's not true. The connectedness of a service and the guarantees it provides for certain parameters of communication (called QoS) have nothing to do with each other. It is perfectly feasible to have a connection oriented unreliable service, just as it is to have a connectionless reliable service.

Now, reliability is a somewhat vague term here. In the case of TCP it means that the service guarantees that all the data that was sent will arrive, in the order sent, without duplicates. In case it cannot fulfill those requirements, the service will be disconnected with an appropriate error indication. Note that the fulfillment of those guarantees is part of the protocol's operation, and there are different mechanisms to achieve it, like sequence numbers, acknowledgments, timeouts, retransmissions, etc. Also note one more important thing. There is no guarantee that there will be no errors in the stream, i.e. that no bit will be accidentally flipped. TCP doesn't detect that. And if you think errors might appear only while data travels through the network, think about bugs in software and their possible consequences for the data...
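The mechanisms just listed can be illustrated with a minimal stop-and-wait sketch (invented for illustration, far simpler than TCP itself): sequence numbers let the receiver discard duplicates, and the sender retransmits until it sees the expected acknowledgment, even over a "network" that randomly drops frames.

```python
import random

random.seed(1)                                   # make the run reproducible

def lossy_deliver(frame, loss=0.3):
    """Return the frame, or None if the 'network' lost it."""
    return None if random.random() < loss else frame

def transfer(messages):
    received, expected = [], 0
    for seq, data in enumerate(messages):        # sender side: one frame at a time
        while True:                              # retransmit until ACKed
            frame = lossy_deliver((seq, data))
            if frame is None:
                continue                         # "timeout" -> retransmit
            r_seq, r_data = frame                # receiver side
            if r_seq == expected:                # new frame: accept it
                received.append(r_data)
                expected += 1                    # duplicates are silently dropped
            ack = lossy_deliver(r_seq)           # the ACK may get lost too...
            if ack == seq:
                break                            # ...but the sender keeps trying
    return received

print(transfer(["a", "b", "c"]))                 # ['a', 'b', 'c']
```

Despite the losses, every message arrives exactly once and in order, which is precisely the guarantee the text describes; TCP uses the same ideas, just with windows, adaptive timeouts and much more machinery.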

Anyway, connectedness and QoS are separate things that can be combined in different ways.

Croatian terminology

This is actually a note for Croatian readers. When I was thinking about whether to write this post, I wasn't sure whether to write it in Croatian or English. In the end, I decided to write it in English (obviously), but one of the reasons I considered Croatian is terminology. I insist on using Croatian translations when they are available, and I don't like when someone in Croatia speaks half in Croatian and half in English. Even worse is when someone writes half English, half Croatian. OK, some level of mixing is acceptable (especially in spoken language), but there are quite good translations and I don't see why it would be necessary to use the English equivalents in speech.

So, I refer Croatian readers to the dictionary with all the translations.

Monday, October 8, 2012

How the Internet banking token works...

Partly because of work, partly out of pure curiosity, I have been trying for quite some time to find out exactly how the tokens used in Internet banking work, for example Zaba's. Note the emphasis on the word "exactly", since I know in principle how they work, and some ballpark information can be found by googling. However, that is neither satisfying nor sufficient. On the Internet it is relatively easy to find the manufacturer and the specific token that Zaba and others use, although there are many kinds of tokens, but finding out exactly how those tokens work is, for some reason, not so easy. If you ask why I would want to know exactly how they work, the answer is that besides curiosity, security is at stake. Namely, I'm interested in the extent to which the manufacturer has predefined what is done and how, and the extent to which programmers design the protocols themselves. Building correct and secure protocols is an extremely hard problem with which even the professionals who work on it have considerable trouble, and if it is done by a programmer who has never studied protocols before, there is a high probability of a mistake. This becomes especially important considering that various kinds of mTokens are being introduced which are purely software components, so programmers have complete freedom to implement them however they like.

How the token is used

Tokens in Internet banking are used for two purposes. The first is authentication, in other words, proving that we are who we claim to be. For that, the token generates a number (APPL1) that has to be typed into a field of the Web application, proving who we are. In a way this is similar to a password. Instead of a username, the token's serial number is used; it is bound to our account, i.e. the server application looks up our data by the token number. So, with this login process (typing in the token number and the number the token generates) we prove that we possess the token and, indirectly, that we own a certain account. Possession of the token is the first authentication factor! Obviously, by losing the token or having it stolen, someone would gain full access to our account, so to prevent that the token is protected by a PIN, a four-digit number that should be known only to the token's owner! That is the second authentication factor. So, for successful authentication you need to have the token and know the PIN that unlocks it. That is so-called two-factor authentication (2FA). Since the PIN is a relatively small number of only four digits, allowing 10000 combinations, guessing it is not that hard; it takes time, but it is possible. To protect the token from PIN-guessing attacks, it locks itself automatically after three consecutive failed attempts. This system is fairly secure, since the number the token generates and that has to be typed into the application changes all the time, and it is very hard to guess what the next number will be! Since the generated authentication number can be used only once, it is also called a one-time password (OTP).

The second purpose is transaction authorization. Namely, if someone manages to insert themselves into the communication channel between the bank and the client (which is actually not that hard), an attack becomes possible in which the attacker modifies the data of a transaction or simply initiates transactions without the user's knowledge. Say the client is paying utility bills and for that purpose transfers XYZ kn from their account to the account of the company that issued the bill. The attacker can intercept the payment data on its way from the client to the bank, change the destination account to their own, change the amount at the same time, and then forward it to the bank. The user won't even know what happened. Moreover, after the authentication (which the attacker cannot easily bypass) the attacker can initiate any transaction whatsoever, and again the client won't be aware of having just been defrauded. So, without some additional protection the attacker can obviously steal all the funds from the client's account, and this is quite a serious threat.

One way to prevent this is to require the client to type in a one-time password before every transaction, i.e. to re-authenticate every time, in a way. This will indeed prevent the attacker from issuing orders without the user's knowledge, but it won't prevent the attacker from changing the transaction data without the user's knowledge. That is why a second approach is used (which also has a problem, see Update 1!). Namely, when the user enters the transaction data, it is sent to the bank. Based on the transaction data (account numbers, amounts and so on), the bank generates a unique number called a challenge. The user is then shown all the data from the order (i.e. the order is displayed again) together with the number generated by the bank. The user has to type that number into their token (APPL2), which generates a response based on it. The response, together with all the transaction data, is then returned to the bank. The application in the bank checks again that the generated unique number matches the transaction data and that the user typed in the expected response (note that the application in the bank knows which number it expects). If everything is OK, the transaction goes through; if not, it is rejected. This makes the attacker's attempts at changing the transaction data significantly harder. For the protection to be effective, a certain amount of attention is required from the user as well.

Implementation

The reason I decided to write this post is that I finally managed to find out how the generation of the one-time password is implemented. Namely, it is described in the RFC document TOTP: Time-Based One-Time Password Algorithm (RFC6238). That RFC builds on the RFC titled HOTP: An HMAC-Based One-Time Password Algorithm (RFC4226). Both algorithms use the HMAC function (defined in RFC2104), which generates a new one-time password from a secret key and an additional parameter. Now, the difference between TOTP and HOTP is precisely in that additional argument, while everything else is exactly the same. In HOTP the additional argument is a counter, while in TOTP it is the current time in seconds divided by some constant (say 30). The reason TOTP was introduced is improved security. Namely, with HOTP the counter is incremented after each use, so if the user didn't use the token for a while, the OTP would stay the same, and an attacker could keep guessing it and would at some point succeed. With TOTP, as time passes the one-time password changes (every 30 seconds if division by 30 is used), so the attacker now has a moving target, which is much harder to hit. What I especially like about the HOTP RFC is its security analysis, and both RFCs contain Java code implementing the algorithm they describe. And guess what? I found that code in the mToken of a certain bank. How and what, I'll keep quiet about, at least for now. :)
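Since both RFCs spell the algorithm out precisely, the whole construction fits in a few lines. Here is a minimal Python transcription (the RFCs' reference code is in Java; the key below is the RFCs' own test secret, nothing bank-related):

```python
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """HOTP per RFC 4226: HMAC-SHA1 over an 8-byte counter, then truncate."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                 # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, t: float = None, step: int = 30, digits: int = 6) -> str:
    """TOTP per RFC 6238: same as HOTP, but the 'counter' is floor(time/step)."""
    t = time.time() if t is None else t
    return hotp(key, int(t // step), digits)

# Test vectors from the RFCs (secret = ASCII "12345678901234567890"):
key = b"12345678901234567890"
print(hotp(key, 0))                # 755224   (RFC 4226, Appendix D)
print(totp(key, t=59, digits=8))   # 94287082 (RFC 6238, Appendix B, SHA-1)
```

Note how small the difference between the two really is: `totp()` is a one-line wrapper around `hotp()`, exactly as described above.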

But there is one BUT here. The recommended minimum length of the one-time password according to the RFC is 6 digits, while the token uses 4 digits. I assume the difference is in the final step where the modulo operation is performed, but I'm not sure, and I'm especially not sure how much this affects security (it should be run through the analysis from the RFC, which I will certainly do as soon as I find the time).

Regarding the one-time password, the only thing left to say is what happens with that secret key. Well, the key is generated on the authentication server (a component I haven't written about, but it sits in the bank), and that number is written into the token via those LEDs on top of the token. I suspect this is done according to the protocol described in RFC6030, or that the protocol used is at least similar to the one described in that RFC. By the way, the recommended size of the shared key according to the RFC is 128 bits, but if a forum post is to be believed, values of 256 bits are used in Croatia.

As for the implementation of challenge and response, the first question is which data is included in generating the challenge. I assume that decision is left to those building the application. Namely, the token doesn't care how the number being typed in came to be; it simply generates a new number from the given number. There is some ambiguity here, but it is possible that some variation of HOTP is used. What both the application and the token must have in common is the algorithm used to generate the number. I assume the procedure defined in RFC6287 - OCRA: OATH Challenge-Response Algorithm is used, specifically the procedure from section 7.1. In short, HOTP is used again, but with some inputs changed; concretely, instead of the counter (C), the digest sent by the server is used. It is possible that a timestamp is also used (that should be checked), but the PIN is definitely not used. Namely, the PIN must be known only to the user, and since the server doesn't know it, it cannot use it to generate the data!
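To make the idea concrete, here is a hedged sketch of such a challenge-response computation: it simply reuses the HOTP construction with the server's challenge in place of the counter, roughly in the spirit of the simplest OCRA mode. The key, the challenge value, and the choice of inputs are all made up for illustration; the bank's actual parameters are unknown.

```python
import hashlib
import hmac
import struct

def response(key: bytes, challenge: int, digits: int = 6) -> str:
    """HMAC over the challenge instead of a counter, truncated HOTP-style.

    Illustrative only: a real OCRA suite fixes the exact input encoding,
    hash and length, which we don't know for the bank's tokens.
    """
    mac = hmac.new(key, struct.pack(">Q", challenge), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

key = b"12345678901234567890"    # shared secret provisioned into the token
challenge = 12345678             # server-derived from the transaction data

token_answer = response(key, challenge)           # user types this into the form
assert token_answer == response(key, challenge)   # server recomputes and compares
```

The security of the whole scheme rests on the challenge being bound to the transaction data and on the key never leaving the token and the server, which is also why the PIN, known only to the user, cannot be part of this computation.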

In lieu of a conclusion

The token seems to be a fairly secure system. More precisely, it is secure with respect to the things defined by the relevant standards (RFCs), but vulnerabilities are possible in the programmers' implementation. The token still doesn't protect against some possible abuses; for example, non-repudiation is not well ensured, because the challenge is a very small number and a similar transaction with the same challenge can be generated by brute force. Additionally, a certain amount of in-transit modification of the account number by an attacker is also possible, although not trivial. In that respect smart cards offer a much better solution, but at the cost of more required resources (readers, installation of additional software).

Update 1 [20121011]
Unfortunately, I have to correct myself. The challenge-response mechanism does not protect against MITM (or MITB) attacks. Namely, an attacker who inserts themselves into the communication channel can modify the data while the order is being sent to the server; the server then generates a challenge and returns the order together with the challenge to the user. However, the attacker puts the original data back into the order, doesn't touch the challenge, and displays that to the user. The user types in the response, and the order, together with the response, is sent to the server. However, the attacker modifies the order in transit so that it again contains the wrong data. The server then carries out the transaction. This attack cannot be detected based on the challenge alone, since the user doesn't know whether it is correct for the data currently displayed in the form!

Sunday, October 7, 2012

Word as a typewriter...

So, one of the things I have to do is a risk assessment of using a certain location as a backup data center. Maybe I'll write about that in another post. What I want to write about here is the City of Zagreb administration's ignorance of how to use Word. Namely, one of the important documents I use in the risk assessment is the assessment of the vulnerability of the population, material and cultural goods and the environment to catastrophes and major accidents for the City of Zagreb (Procjena ugroženosti stanovništva, materijalnih i kulturnih dobara i okoliša od katastrofa i velikih nesreća za područje Grada Zagreba). All cities are legally obliged to adopt such assessments (honestly, I don't feel like reading the law, so it is possible that villages and who knows who else have the same obligation). A simple Google search will quickly turn up such assessments for other cities as well. I have to say these are interesting, not to mention important, documents.

However, the reason I decided to write this post is specifically the City of Zagreb's document, which was written in Word. Looking through it a bit, I realized it was written by using Word as a typewriter; in other words, someone laboriously formatted every paragraph by hand. OK, maybe they got by with some copying and the like, but there is no trace of styles anywhere. The chapter numbering was done manually (and there is an error, since it jumps from 1.1.2 to 1.1.3.1!), the lists were also numbered manually, and the figures are formatted disastrously. I'm used to students using Word that way, but that professionals do the same is beyond me. All the more so because Word is designed so that, used properly, it saves significant amounts of time in text processing.

What specifically threw me off was the need for a table of contents. Namely, I wanted to look at the contents to get an overview and an impression of the document as a whole. But that was simply impossible with the existing version. For that reason I decided to reformat the document a bit, which I eventually did, and at some point I will also put it on the Web so others can fetch it.

My message to the city administration is: send your people to a Word course, because these are obviously self-taught users who probably spent a good part of their lives banging away on typewriters and were then handed a computer as a typewriter replacement!

Friday, October 5, 2012

Reset FreeIPA admin password...

Well, the other day I forgot the password for the admin user on a FreeIPA2 installation. But since I had root on that same machine, I didn't panic. Instead, I fired up Google to see how to reset it. That actually wasn't so easy. For example, if you use the search keywords 'freeipa admin reset', you'll get posts about replicas, KDC, and who knows what else. In the end, I managed to dig up this post. So, I ran the given command:
[root@ipa1 ~]# LDAPTLS_CACERT=/etc/ipa/ca.crt ldappasswd \
           -ZZ -D 'cn=directory manager' -W \
           -S uid=admin,cn=users,cn=accounts,dc=domain,dc=com

New password:
Re-enter new password:
Enter LDAP Password:
Result: Constraint violation (19)
Additional info: Password reuse not permitted
control: 1.3.6.1.4.1.42.2.27.8.5.1 false MIQAAAADgQEI
ppolicy: error=8 (New password is in list of old passwords)
But something wasn't right. It asked me to enter the LDAP password, which is the directory manager's password, but when I entered what I thought the password should be, it complained about password reuse. On the other hand, if I entered some random string, it clearly said that the credentials are invalid:
[root@ipa1 ~]# LDAPTLS_CACERT=/etc/ipa/ca.crt ldappasswd \
            -ZZ -D 'cn=directory manager' -W \
            -S uid=admin,cn=users,cn=accounts,dc=domain,dc=com

New password:
Re-enter new password:
Enter LDAP Password:
ldap_bind: Invalid credentials (49)
So, I decided to reset the directory manager's password too. That was easier to find; it is explained here. Just be careful when following that text, since it is written for two different versions of the directory server and you have to follow the one that's right for your version. After you reset the directory manager's password, go back and reset FreeIPA's admin password. When it asks 'Enter LDAP Password:', type in the directory manager's password you've just changed.

About Me

scientist, consultant, security specialist, networking guy, system administrator, philosopher ;)
