Tuesday, December 20, 2011

Problem with inactive agent in OSSEC Web Interface

I was just debugging the OSSEC Web interface. Namely, it incorrectly showed that one host was not responding even though there were log entries showing otherwise. The problem was that this particular host had been transferred to another network and, thus, its address had changed.

I figured out that the list of available agents in the Web interface is generated from the files found in the /var/ossec/queue/agent-info directory. There you'll find one file per agent; the file name consists of the agent name and IP address separated by a single dash. To decide whether an agent is connected or not, the PHP code of the Web interface (which itself lives in the /usr/share/ossec-wui directory) obtains the timestamp of the file belonging to a particular agent: if that timestamp is newer than 20 minutes, it declares the agent OK; otherwise, it shows it as inaccessible.
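If you just want to see the same information from the shell, a quick check along these lines should work (a sketch; the directory path and the 20-minute threshold are the ones described above):
cd /var/ossec/queue/agent-info
for f in *; do
    # an agent is considered active if its file was modified in the last 20 minutes
    if [ -n "$(find "$f" -mmin -20)" ]; then
        echo "$f: active"
    else
        echo "$f: inactive"
    fi
done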

In this case it turned out that the old agent hadn't been removed using the manage_agents tool (selecting option R, for remove). So the old information remained, it was never updated, and thus the Web interface reported an inactive agent.

List all tabs across all windows in Firefox...

I have a lot of windows open at the same time, and in each window there are many tabs. This makes it a nightmare to find a specific tab; you have to go window by window and tab by tab. So, I just spent half an hour, maybe more, searching for a way to list all tabs that are open in all windows. It turns out that there is not much information. In the majority of cases you'll find news and tips on how to see all tabs in a single window (that one is easy) but not much more than that. I also found a post on how to do it in Safari, but not in Firefox. Finally, I came across this post, in which the poster is looking for a way to search all tabs in all windows. One of the responders mentioned a plugin called Tabhunter, which did the trick. So, to make this particular problem more visible in Google searches, I'm writing this post.

Sunday, December 11, 2011

Why I don't believe in God but strongly wish there is one...

... or in other words, why I'm an agnostic. This relates to the Christian way of thinking about God, and I suppose it extends to some other large religions as well. It certainly doesn't cover every possible religion, nor is it meant to. I'm not going to discuss every possible religion for the simple reason that I have neither the time, nor the will, nor the interest to do so.

First, let me say that I strongly believe that humans are inherently good, in the sense that they are empathetic, caring, willing to help, etc. What's more, I believe that all living beings are like that, not only humans! This is my personal belief, even though I think there is strong evidence in favor of such thinking. Namely, from biology and evolutionary psychology it is known that being empathetic increases the chances of survival! The next thing I believe is that this life is not meant to be enjoyable; rather, living beings suffer during their lifetimes. They are constantly under different threats. Of course, some suffer more and some, the lucky ones, less, but in the end we all suffer.

So, when some person is evil, I think the primary reason is that life made him or her that way. Many people never felt love, never had anything; how could they give something like that when they never received it and don't even know what it is!? Yes, I know, you may now say that the behavior is genetically determined, but then things are even worse, as it means we are "programmed" to be evil or good. And how can someone be guilty in that case?! So here we are; I ask, why would someone be punished for eternity for something that's not his or her fault? Based on this simple question I refuse to believe there is a God.

This, I hope, answers the first part of the title. As for the second part, I desperately wish there were a God for those who suffer, who help others without thinking of themselves, or who do any other good. In this life they will not receive anything, and if this life is all there is, then it's unfair! You can now say something along the lines that the whole is more important than the parts and thus it's unimportant what happens to the individual, but I don't agree.

To conclude, this is a very simplified view, but it fairly well represents what I think about life and the idea of a God.

Saturday, December 10, 2011

Higgs field vs. Higgs particle...

Well, it turns out that this week there will be a press conference at CERN presenting the results of the search for the Higgs particle. It could be that the physicists did not find it, or it could be that they found some sign of its existence. Anyway, while reading the article What if there is no Higgs boson, I found a blog with The Higgs FAQ 1.0. If you are not a physicist, like me, then I strongly recommend that you read that particular FAQ. It explains very well what all the fuss about the Higgs particle is, and basically stresses that the particle itself is not as important as the Higgs field! Actually, it seems to me that the Web site with The Higgs FAQ has a lot of material that is easy to understand for lay persons, so I also recommend that you look into it.

Friday, December 9, 2011

Evolution & Thunderbird

I cannot describe how much Evolution annoys me! I've been using it for years, maybe 10 or so by now, and I can say with quite a bit of confidence that it's full of bugs, at least on Fedora, and there was no release in the past few years that didn't have some quirk that made me go mad! And today it happened that it didn't want to create a meeting in a Google calendar, with the usual unhelpful error message about failed authentication. To make things even weirder, it did show my available calendars on Google, and that requires authentication, so it should work! After some searching on the Internet I found a workaround that involves removing the calendars from Evolution and re-adding them again. This, by itself, pushed me closer to looking for an alternative and switching. Anyway, I started to remove all Google calendars from Evolution, but then removing some of them didn't work!? Namely, the ones that were turned off because I hadn't provided a password for them. Even restarting Evolution didn't help. What helped in the end was changing the username; afterwards the removal was successful!

The reason I've been using Evolution for so many years is that it integrates a calendar, todo lists and memos with the mail client. I need at least the calendar function along with a mail client. I'm already using Thunderbird, but as a secondary mail client for some unimportant mail accounts, and I know that it has progressed quite nicely and, more importantly, it has the Calendar extension. So I started contemplating switching mail clients NOW! Well, everything is OK except that I have huge mail archives stored in Evolution and I have to import them into Thunderbird. It turns out there is no migration wizard and that it has to be done manually. Then it turned out that Thunderbird uses the mbox format, while Evolution uses the maildir format (it also used mbox until a year or so ago).

In essence, mbox uses one file for all mail messages, while maildir uses one file per message. Maildir has many advantages over mbox and has thus become the preferred way of storing mail messages on a file system. One reason I want maildir is that I'm doing backups, and when a mail message is stored in an mbox (which is itself a huge file containing many messages) the backup process will copy the whole file again.

Anyway, judging from the information I found on the Internet, this is a long-requested feature in Thunderbird. Thunderbird has supported pluggable mail stores since version 3.1, but the maildir format is not planned before version 11. In the end, I decided to wait a bit longer and then switch to Thunderbird.

Update

I forgot to mention one more bug. When I try to save an existing calendar into an iCal file, Evolution simply crashes, and this is 100% repeatable. I discovered this when I decided to clean up all the old calendars from Evolution, but first I wanted to save them, just in case.

Tuesday, December 6, 2011

Problems with resolver library...

I just had a problem that manifested itself in a very strange way. I couldn't open a Web page hosted on the local network, while everything else seemingly worked. The behavior was the same in Chrome and Firefox. In due course I realized that every application had this problem. On the other hand, resolving with nslookup worked flawlessly. This was very confusing. To add to the confusion, while running tcpdump it was obvious that no DNS requests were being sent to the network! So it was obvious that the problem was somewhere in the local resolver. At first I suspected nscd, which used to serve as a caching daemon on Fedora, but in Fedora 16 this daemon is not installed. So, how to debug this situation? A quick Google query didn't yield anything useful.

Reading the resolv.conf manual page, there is a section saying that you can use the options debug directive. But trying that yielded no output! Nor were there any results when using the same option via the RES_OPTIONS environment variable. This is strange and needs additional investigation into why it is so and, more importantly, into how to debug the local resolver.

In the meantime I figured out that the ping command behaves the same as the browser, and since ping is a much smaller program, it is easier to debug using the strace command. So, while running ping via strace, I noticed the following line in the output:
open("/lib64/libnss_mdns4_minimal.so.2", O_RDONLY|O_CLOEXEC) = 3
which immediately rang a bell that the problem could be nsswitch! And indeed, opening /etc/nsswitch.conf I saw the following line:
hosts:      files mdns4_minimal [NOTFOUND=return] dns myhostname
which basically says that if mdns4_minimal returns "not found", DNS is not tried. It seems that mdns4 is used whenever the domain name ends in .local, which was true in my case. So I changed that line to:
hosts:      files dns
and everything works as expected.
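By the way, a handy way to test name resolution the way applications actually do it, i.e. through the hosts line of /etc/nsswitch.conf rather than directly via DNS like nslookup, is getent (the host name below is just a placeholder):
getent hosts somehost.local
If that returns nothing while nslookup succeeds, the culprit is almost certainly in nsswitch.conf.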

Since I didn't explicitly install mdns, I decided to remove it. But then it became clear that wine (the Windows compatibility layer) depends on it, so I left it.

Tuesday, November 29, 2011

Compiling Zeitgeist data sources plugins on Fedora 15...

I'm constantly searching for something that will help me organize myself. I tend to do many things at once, and keeping track of all of them is hard. One of the possibilities I have wanted to try for a long time is Zeitgeist. Basically, Zeitgeist records all your activities, and they can be viewed later with the gnome-activity-journal application. This system isn't finished yet; for example, it lacks better integration with many applications. Also, better options for manipulating the collected data are necessary, but all in all, it sounds promising.

In Fedora 15 there are already packages for Zeitgeist as well as for gnome-activity-journal. The problem is that the plugins for Zeitgeist are not packaged. This means that on Fedora, Zeitgeist tracks only opened documents and not, e.g., what Web pages you were browsing. So I decided to compile the Zeitgeist data sources, and while doing so I ran into some problems for which I found questions posted on how to resolve them, but without any answers. So I decided to document what I had to do in order to get this package to work.

Prerequisites

Before configuring and compiling the package, make certain that you have all the appropriate development packages installed. Minimally those are:
  • libzeitgeist-devel
  • xulrunner-devel

Modifying system files

You'll need to edit some system files. I had a problem where pkg-config didn't recognize libxul. If you run the following command:
pkg-config --list-all
and you don't see libxul in the output, then you probably have the same problem. It turned out that the problem was the MPI .pc files (if you didn't install any MPI libraries/tools you won't have this problem). pkg-config reported an error in each of them and stopped further processing, with the result that it didn't parse the libxul file and, consequently, didn't report its existence. This, in turn, confused the ZDS configure script into thinking that there is no Firefox 4.0. On top of that, there is an error in the configure script itself leading it to assume that Firefox 3.6 is installed! All wrong.

So, you need to edit the following files: mpich2-c.pc, mpich2-cxx.pc, mpich2-f77.pc, and mpich2-f90.pc. All of them are placed in /usr/lib64/pkgconfig directory. In each file you'll find an if statement:
if test "no" = yes; then
    plib=something
else
    plib=
fi
Remove that if statement and also remove the string -l${plib} from the line that starts with Libs:. If it happens that you don't have some of those files, just skip to the next one.

There was also a problem with libxul.pc file itself. In it you'll find the following line:
Version: 2
The problem is that pkg-config doesn't treat that as 2.0 but strictly as 2. The ZDS configure script requires libxul >= 2.0, and because of this, that requirement isn't satisfied. Anyway, change the previous line to read as follows:
Version: 2.0
And that's it for the system changes.
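To verify that pkg-config is now happy with libxul, something like this should do:
pkg-config --modversion libxul
pkg-config --atleast-version=2.0 libxul && echo "libxul is >= 2.0"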

Obtaining, patching and compiling the source

Grab the Zeitgeist data sources (ZDS) tarball from the Launchpad site. I used version 0.8.0.1. Unpack the source and enter the newly created directory. All the directory references that follow are relative to that directory.

The only thing you need to patch is the declaration of which Firefox versions the extension works with, so that it also covers versions greater than 4.0. To do that, open the file firefox-40-libzg/extension/install.rdf and change the line that reads:
<em:maxVersion>4.0.*</em:maxVersion>
into
<em:maxVersion>9.0.*</em:maxVersion>
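If you prefer to do that non-interactively, a sed invocation along these lines should perform the same edit (check the file afterwards):
sed -i 's|<em:maxVersion>4\.0\.\*</em:maxVersion>|<em:maxVersion>9.0.*</em:maxVersion>|' firefox-40-libzg/extension/install.rdf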
Next, you need to configure the source. Before configuration, you have to tell the configure script to pass an additional switch to the C++ compiler, one that turns on some extensions used by XUL (e.g. the char16_t type):
export CXXFLAGS=-std=gnu++0x
Now, run configure, then make, and finally (as root user), make install:
./configure
make
su
make install
And that should be it!

Note for Fedora 16

After writing this post, I installed Fedora 16. The steps necessary to compile the data sources are almost the same. However, there is already a gedit plugin available in the Fedora repository. Whether this is a problem or not, I don't know yet. In any case, you can disable the gedit plugin during the configuration process.

One additional thing I had a problem with was the following error message:
emacs: error while loading shared libraries: libotf.so.0: cannot open shared object file: No such file or directory
This error message occurs only if you have the Emacs plugin enabled and don't have libotf installed. This is some weird problem in which one package (I think some MPI package) claims that it provides libotf, which makes Emacs' dependency look satisfied. But this libotf isn't the one that Emacs expects (and can find, for that matter), and thus Emacs cannot start. Anyway, just install the libotf package and that should be it.

Thursday, November 24, 2011

"Obnovljivi" izvori energije...

I found a very interesting article about renewable energy sources. Since the article is in English and very nicely written, I don't have much to add (apart from some supportive thoughts on nuclear energy). Anyway, I decided to write this post in Croatian, primarily for those who don't know English well.
I am one of those who consider nuclear energy the most environmentally friendly approach humankind knows for producing a sufficient amount of energy. I also think that the loudest opponents of nuclear energy are the ones who know the least about it. Mostly, they saw a mushroom cloud on television and now fantasize about mushroom clouds at every step.

The reason for such a position is very simple. We cannot go back to the caves, and there are more and more of us, so human society will have greater and greater energy needs. As for "renewable", supposedly ecological energy sources, I think that is just a myth. Whatever people do harms the environment, and some delude themselves that everything will be great if we, for example, install solar cells. Well, solar cells take up space, and they also need to be maintained - with water. The same goes for wind turbines, biodiesel, etc. In short, whoever is interested should read this article.

And finally, this does not mean I am in favor of destroying nature, of recklessly building nuclear plants (or anything else), of a complete ban on wind turbines and solar cells, and so on. I am for a rational approach to the problem, one in which solutions are not rejected a priori just because some quasi-experts talk nonsense. The truth is, as always, somewhere in the middle!

Oh, and the term "nature" (which I mentioned above) is a very interesting term whose exact meaning I actually don't know. What is "nature", and what is "natural" and "unnatural" (terms that are tied to the term nature)? It turns out that when one says "nature", one means this ultra-ultra-tiny part of the universe of ours:

And when one says "natural" and "unnatural", one usually means social norms, which - surprise, surprise - change as time passes and as society changes. And then it all gets packaged into one. Of course, the real situation is somewhat more complex, but I think that is the gist of it.

Time for me to stop. :) As for the picture itself, I downloaded it from somewhere on the Internet a long time ago, and I would be very grateful if someone could tell me who the author is!

Re-adding SATA disk to software RAID without rebooting...

It has now happened for the second time that, on one of the servers I maintain, one of the SATA disks was suddenly disconnected from the server. Looking into the log files, I found the following error messages:
kernel: mptbase: ioc0: LogInfo(0x31110d00): Originator={PL}, Code={Reset}, SubCode(0x0d00) cb_idx mptbase_reply
kernel: mptbase: ioc0: LogInfo(0x31110d00): Originator={PL}, Code={Reset}, SubCode(0x0d00) cb_idx mptscsih_io_done
kernel: mptbase: ioc0: LogInfo(0x31110d00): Originator={PL}, Code={Reset}, SubCode(0x0d00) cb_idx mptscsih_io_done
last message repeated 62 times
and then a lot of messages like the following one:
kernel: sd 0:0:1:0: SCSI error: return code = 0x00010000
kernel: end_request: I/O error, dev sdb, sector 1264035833
This triggered RAID to log the following type of messages:
kernel: raid5:md0: read error not correctable (sector 28832 on sdb2)
and finally to remove failed disk from array:
kernel: RAID5 conf printout:
kernel:  --- rd:3 wd:2 fd:1
kernel:  disk 0, o:1, dev:sda2
kernel:  disk 1, o:0, dev:sdb2
kernel:  disk 2, o:1, dev:sdc2
kernel: RAID5 conf printout:
kernel:  --- rd:3 wd:2 fd:1
kernel:  disk 0, o:1, dev:sda2
kernel:  disk 2, o:1, dev:sdc2
I still need to find out what happened, but in the meantime the consequence of those errors was that one disk was disconnected and removed from the RAID array, and I received the following mail from the mdmonitor process on the server:
This is an automatically generated mail message from mdadm
running on mail.somedomain

A Fail event had been detected on md device /dev/md0.

It could be related to component device /dev/sdb2.

Faithfully yours, etc.

P.S. The /proc/mdstat file currently contains the following:

Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdc2[2] sdb2[3](F) sda2[0]
      1952989696 blocks level 5, 256k chunk, algorithm 2 [3/2] [U_U]
     
unused devices:
Since this happened exactly at noon, which is a time when everybody uses the mail server, rebooting the server isn't really an option, not unless I absolutely have to. In this case I decided that I would reboot it after working hours, and in the meantime I could either just wait or try to rebuild the RAID. If I waited, there would be a risk of another disk failing, which would bring the server down. So, as this had already happened before, and I knew that the disk was OK and would be re-added after a reboot, I decided to try to do that immediately, on the live system.

So, the first thing is to request that the kernel rescan the SATA/SCSI bus in order to find "new" devices. This is done using the following command:
 echo "- - -" > /sys/class/scsi_host/host0/scan
After this, the disk reappeared, but the problem was that its name was now /dev/sde and not /dev/sdb. To always get the same name for the disk I would need to mess with udev, which I was not prepared to do right now. (And, by the way, I recently read about a patch that allows you to do just that, to rename an existing device, but I think it was rejected on the grounds that this kind of thing is better done in user space, i.e. by modifying udev rules.)
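To see which name the kernel assigned to the rediscovered disk, the kernel log and the partition list are enough:
dmesg | tail -n 20
cat /proc/partitions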

Now, the only remaining problem was to "convince" the RAID subsystem to re-add the disk. I thought that it would find the disk and attach it by itself, but eventually I just used the following command:
mdadm --manage /dev/md0 --add /dev/sde2
The command notified me that the disk was already a member of the array and that it was being re-added. Afterwards, the sync process started, which will take some time:
 # cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sde2[3] sdc2[2] sdb2[4](F) sda2[0]
      1952989696 blocks level 5, 256k chunk, algorithm 2 [3/2] [U_U]
      [=>...................]  recovery =  7.6% (74281344/976494848) finish=204.9min speed=73355K/sec
    
unused devices:
For transient errors like this one, it would be ideal if the RAID subsystem remembered only the changes and, when the disk is re-added, applied only those changes. But I didn't manage to find out how to do that, and I thought that functionality wasn't implemented at all.
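As far as I can tell, md's write-intent bitmap feature provides roughly this: with a bitmap enabled, re-adding a recently failed member resynchronizes only the regions that changed in the meantime. If I understand correctly, it can be enabled on an existing array (at the cost of a small write-performance penalty) with:
mdadm --grow --bitmap=internal /dev/md0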

Anyway, after the synchronization process finished, this was the content of the /proc/mdstat file:
#cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sde2[1] sdc2[2] sdb2[3](F) sda2[0]
      1952989696 blocks level 5, 256k chunk, algorithm 2 [3/3] [UUU]
     
unused devices:
As you can see, sdb2 is still there. Removing it isn't possible because there is no corresponding device node:
# mdadm --manage /dev/md0 -r /dev/sdb2
mdadm: cannot find /dev/sdb2: No such file or directory
[root@mail ~]# mdadm --manage /dev/md0 -r sdb2
mdadm: cannot find sdb2: No such file or directory
So, I decided to wait until reboot.
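One thing that might be worth trying in this situation (I haven't verified it on this particular system) is the keyword form of --remove, which the mdadm manual page documents for devices whose node has disappeared:
mdadm --manage /dev/md0 --remove detached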

Edit: I rebooted a few days later, and after the reboot everything returned to its normal state, i.e. as it was before the disk was removed from the array!

[201211114] Update: Again this happened almost exactly at noon. Here is what was recorded in log files:
Nov 14 12:00:02 mail kernel: mptbase: ioc0: LogInfo(0x31110d00): Originator={PL}, Code={Reset}, SubCode(0x0d00) cb_idx mptbase_reply
Nov 14 12:00:07 mail kernel: mptbase: ioc0: LogInfo(0x31110d00): Originator={PL}, Code={Reset}, SubCode(0x0d00) cb_idx mptscsih_io_done
Nov 14 12:00:08 mail kernel: mptbase: ioc0: LogInfo(0x31110d00): Originator={PL}, Code={Reset}, SubCode(0x0d00) cb_idx mptscsih_io_done
Nov 14 12:00:08 mail kernel: mptbase: ioc0: LogInfo(0x31130000): Originator={PL}, Code={IO Not Yet Executed}, SubCode(0x0000) cb_idx mptscsih_io_done
Nov 14 12:00:08 mail kernel: sd 0:0:2:0: Unhandled error code
Nov 14 12:00:08 mail kernel: sd 0:0:2:0: SCSI error: return code = 0x00010000
Nov 14 12:00:08 mail kernel: Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK,SUGGEST_OK
Nov 14 12:00:08 mail kernel: mptbase: ioc0: LogInfo(0x31130000): Originator={PL}, Code={IO Not Yet Executed}, SubCode(0x0000) cb_idx mptscsih_io_done
Nov 14 12:00:08 mail kernel: mptbase: ioc0: LogInfo(0x31130000): Originator={PL}, Code={IO Not Yet Executed}, SubCode(0x0000) cb_idx mptscsih_io_done
Nov 14 12:00:08 mail kernel: sd 0:0:2:0: Unhandled error code
Nov 14 12:00:08 mail kernel: sd 0:0:2:0: SCSI error: return code = 0x00010000
Nov 14 12:00:08 mail kernel: Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK,SUGGEST_OK
Nov 14 12:00:08 mail kernel: raid5: Disk failure on sdc2, disabling device. Operation continuing on 2 devices
Nov 14 12:00:08 mail kernel: sd 0:0:2:0: Unhandled error code
Nov 14 12:00:08 mail kernel: sd 0:0:2:0: SCSI error: return code = 0x00010000
Nov 14 12:00:08 mail kernel: Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK,SUGGEST_OK
Nov 14 12:00:08 mail kernel: raid5:md0: read error not correctable (sector 1263629840 on sdc2).
Nov 14 12:00:08 mail kernel: RAID5 conf printout:
Nov 14 12:00:08 mail kernel:  --- rd:3 wd:2 fd:1
Nov 14 12:00:08 mail kernel:  disk 0, o:1, dev:sda2
Nov 14 12:00:08 mail kernel:  disk 1, o:1, dev:sdb2
Nov 14 12:00:08 mail kernel:  disk 2, o:0, dev:sdc2
Nov 14 12:00:08 mail kernel: RAID5 conf printout:
Nov 14 12:00:08 mail kernel:  --- rd:3 wd:2 fd:1
Nov 14 12:00:08 mail kernel:  disk 0, o:1, dev:sda2
Nov 14 12:00:08 mail kernel:  disk 1, o:1, dev:sdb2
And then the system rescanned the bus by itself, but it didn't re-add the disk to the array:
Nov 14 12:00:44 mail kernel: mptsas: ioc0: attaching sata device: fw_channel 0, fw_id 6, phy 2, sas_addr 0x8a843926a69f9691
Nov 14 12:00:44 mail kernel:   Vendor: ATA       Model: WDC WD1001FALS-0  Rev: 0K05
Nov 14 12:00:44 mail kernel:   Type:   Direct-Access                      ANSI SCSI revision: 05
Nov 14 12:00:44 mail kernel: SCSI device sde: 1953525168 512-byte hdwr sectors (1000205 MB)
Nov 14 12:00:44 mail kernel: sde: Write Protect is off
Nov 14 12:00:44 mail kernel: SCSI device sde: drive cache: write back
Nov 14 12:00:44 mail kernel: SCSI device sde: 1953525168 512-byte hdwr sectors (1000205 MB)
Nov 14 12:00:44 mail kernel: sde: Write Protect is off
Nov 14 12:00:44 mail kernel: SCSI device sde: drive cache: write back
Nov 14 12:00:44 mail kernel:  sde: sde1 sde2
Nov 14 12:00:44 mail kernel: sd 0:0:4:0: Attached scsi disk sde
Nov 14 12:00:44 mail kernel: sd 0:0:4:0: Attached scsi generic sg2 type 0
So I had to issue the following command manually:
mdadm --manage /dev/md0 --add /dev/sde2

Translating the word "enterprise"...

In this post I'm discussing possible translations of the word enterprise into Croatian. There is no adequate translation (as far as I know), and I'm suggesting that the word enterprise actually means a large business system, so I'm starting from that premise.
The word enterprise is used a lot in English, and quite often I see people writing in Croatian use it untranslated, which is not really acceptable - at least not to me. The reason for this is simple: there is no standardized or generally accepted translation. So, my suggested translations of the word are:
  • veliki poslovni sustav (large business system)
  • veliki sustav (large system)
  • poslovni sustav (business system)
The reason for these variations is as follows. First, it is certainly some kind of system, and most often a business system of some sort (which is not always the case, as I'll show a bit later). Furthermore, it is definitely a large system, because the word is used to emphasize something bigger than usual. Of course, whether something is large or not depends on the measure. For example, by Croatian standards INA is a very large company, but on a world scale (or a Chinese one ;)) it is closer to a medium-sized company.

So, the conclusion is: when the word enterprise appears in an English text without additional qualifiers, it refers to a large business system ("veliki poslovni sustav").

But there are some exceptions to the above rule, which is why the alternative terms are needed. To start with, the term SME, i.e. Small and Medium Enterprises, is encountered quite often. So sometimes the point is precisely that these are not the largest possible systems but medium or small ones. In that sense the word enterprise cannot be translated as a large business system but only as a business system, so we can then speak of small and medium business systems, or business systems of small to medium size. Of course, this is again subject to relativity, as I already explained.

Finally, in software engineering, and in computing in general, the terms enterprise software architecture and plain enterprise software also appear. These terms refer to software intended for use in large business systems, or to its architecture, which, because of the complexity of the environment in which it is used, must itself be large, i.e. complex. So a potential translation is "arhitektura programske podrške velikim poslovnim sustavima" or "arhitektura programske podrške velikih sustava", i.e. "programska podrška velikim (poslovnim) sustavima". Admittedly, these are somewhat clumsy translations, but in the absence of a better one they will do.

Tuesday, November 22, 2011

Some HTML(5) stuff...

It is no exaggeration to say that HTML5 is hot stuff. Every now and then something cool pops up that is enabled by HTML5 and modern browsers. So, in this post I'm going to present the links I have collected over some period of time. There will be more than just HTML5 here, but the majority is definitely about HTML5.

First, I just finished watching this video. It's fairly basic, introductory HTML5 material, but with a lot of interesting things - for example, the definition of what a tag is and what an element is. The guy giving the presentation is from the Google Chrome team, so naturally he uses Chrome. What I liked very much is that the presentation runs inside the Web browser. I also like the idea of showing a small part of the next slide; it helps a lot because you know what's coming. I also tried (I think for the first time) the JavaScript console and Developer tools in Chrome, and they are very nice tools.

Let me summarize a few things from the HTML5 video that I remember:
  • The story of how the doctype declaration came about. It was introduced prior to HTML5 to distinguish old and new types of HTML documents. In HTML5 it is substantially simplified.
  • Ending tags are not required, and attributes don't have to be quoted.
  • Many tags are optional, like head, body, tbody.
  • The story of how the innerHTML property was introduced in IE and how it took almost 10 years to be implemented in Mozilla too.
  • The charset definition should be placed before the title, to avoid a possible cross-site scripting attack via UTF-7.
  • Parsing of HTML was never standardized; a parser is part of the HTML5 specification.
  • There is a reference implementation, html5lib.
The guy also gave a link to his presentation; it is here. But I suggest that you look into the source, because there you'll find a few more interesting things. I opened the source because I was curious how the presentation was made. Of the interesting things: first, there is a timeline of Web browsers from 1990 until today, very detailed and very interesting; second, there is a graph of Web browser layout engine usage share from 1994 to 2006.

Finally, here are links to some cool stuff that can be done using HTML/CSS/JavaScript:

arping command

arping is a very useful Linux command. Its purpose is to send ARP requests on the local network and to print the received responses. It's a bit like the ping command, but it works on layer 2 (the data link layer) instead of the network layer like ping does. Unlike with ping, you have to specify the egress interface, since otherwise arping doesn't know where to send the request and falls back to some predetermined interface, e.g. eth0.

Most frequently, you'll use it like this:
arping -I wlan0 192.168.1.1
which, in this particular case, tells the arping command to send requests asking for the link layer address corresponding to the IP address 192.168.1.1. The requests should go via the wlan0 interface.

Now you may wonder what's so special about this command. Well, the special part is that there is no way to block it, unlike the ping command. For example, Windows 7 by default blocks the ICMP echo requests used by ping, and thus you won't be able to check whether a host is alive using ping. With arping you can definitely determine whether it is alive or not. But there is a restriction: you can use it only on a local, Ethernet-like network! So there is no way it can be used across networks, at least not without some heavy trickery. An additional restriction is that you need administrative (root) privileges on the host, unlike ping, which can be run by any user.
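Since replies can't be suppressed by host firewalls, arping is also handy for a quick sweep of the local subnet. A simple (and slow) sketch, assuming the wlan0 interface and a 192.168.1.0/24 network:
# -c 1: send a single request; -w 1: wait at most one second for a reply
for i in $(seq 1 254); do
    arping -c 1 -w 1 -I wlan0 192.168.1.$i > /dev/null 2>&1 && echo "192.168.1.$i is up"
done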

My new policy for use of this blog...

Well, I just managed to clarify to myself several principles that I'm going to follow for using this blog. Those are, in no particular order:
  • I'm going to use this blog as a diary. This means that I'm going to start writing many posts, the majority of which won't be published for a long period of time. The rule for when a post gets published is simple: when it's good enough.
  • Posts in Croatian will be those about something that already has many posts in English, or that is only of interest to people living in Croatia (and the surrounding countries). The reason for writing in Croatian is that not everyone speaks/reads English and, besides, it is good to emphasize Croatian terminology now and then.
  • In English I'm going to write about things I deem of wide enough interest that writing them in Croatian would severely restrict their reach.
And I'll try to write an abstract in front of every post in the opposite language, i.e. in English when the post is in Croatian and vice versa.

SSH TAP tunnels: Using routing instead of bridging...

In the previous post about SSH tunneling, I used the bridging functionality of the Linux kernel to connect a remote network to my local laptop. But it is also possible to use routing in that case. The reason you might prefer routing over bridging is the quantity of traffic that might be flooding from the remote network to your interface. Because you are probably connected over a slower link than the bandwidth available on the local network itself, that could be a real problem, and routing helps here. Note that you lose the ability to use some protocols, most notably those relying on broadcasts, since the routing code won't forward those packets.

The idea is to use routing in combination with proxy ARP. Basically, everything stays the same except that you are not using bridging, and you need to set up some basic forwarding features on the remote host. So, here we go.

First, add an explicit route for your remote host. That way you won't lose access to it once the local network the remote host sits on starts being reached in a special way (through the tunnel):
ip route add remote_host via default_current_router
In case you don't know the IP address of your default router (default_current_router), you can find it out using the ip route sh command. And for remote_host you need to use the IP address with a /32 network mask!
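For example, if (purely hypothetically) the remote host's public address were 198.51.100.30 and your current default gateway 10.2.4.1, that would be:
ip route add 198.51.100.30/32 via 10.2.4.1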

Now, log in to the remote host using the usual ssh command that creates the tap tunnel (for the meaning of the parameters and what happens, see the previous post):
ssh -C -w any -o Tunnel=ethernet root@remotehost
After logging in, add to the local host the IP address from the remote network that you would normally be assigned when connecting directly to that network. In our example network this is the address 192.168.0.40/24 (see this previous post).
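Assuming the tunnel interface on the laptop came up as tap0, assigning that address boils down to:
ip addr add 192.168.0.40/24 dev tap0
ip link set tap0 up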

Now we have one interesting situation which I'm going to describe in some detail.

Start the arping command on the remote host like this (see this post about the arping command):
arping -I tap0 192.168.0.40
Basically, you won't receive a response. If you start the tcpdump command, you'll notice that ARP requests are arriving, but there are no responses. The reason is that your local machine sees an unexpected source address in the request and so ignores it.

You can turn on logging to confirm that those requests are really being ignored, with this command:
echo 1 > /proc/sys/net/ipv4/conf/tap0/log_martians
and then look into the log file (/var/log/messages). You'll notice messages similar to the following:
Nov 22 14:37:25 w510 kernel: [147552.433215] martian source a.b.c.d from e.f.g.h, on dev tap0
Nov 22 14:37:25 w510 kernel: [147552.433221] ll header: ff:ff:ff:ff:ff:ff:16:90:4a:17:9d:d1:08:06
As you can see, it's called a martian address. :)

Now you have two options. The first is to turn off reverse-path filtering, and the second is to assign an IP address to the tap0 interface on the remote host too. In general I don't suggest turning off rp filtering, but since in this case we are turning it off only on a special (and restricted) interface, I'll go that route, and it also saves one IP address. So, on the local machine do the following:
echo 0 > /proc/sys/net/ipv4/conf/tap0/rp_filter
If you now try arping from the local machine to the remote machine, trying to reach the remote machine's IP address, it won't work! You have to turn off rp filtering on the remote machine too, so execute the previous command there as well.

One more thing is needed for the L2 part to fully work. When the local machine asks, via tap0, for some address on the local network, the remote host has to announce itself and answer on that address's behalf. For this purpose proxy ARP is used, but it has to be turned on (on the remote host), with the following command:
echo 1 > /proc/sys/net/ipv4/conf/tap0/proxy_arp
If you now try to "arping" any IP address, you'll aways get MAC address of tap0 on remote machine, and that's exactly what we wanted. But, you also need to do that because when machines on remote network search for you, then remote host has to announce himself and relay/forward everything over the tunnel to you.
echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp
I'm assuming that the interface attaching the remote host to its local network is named eth0, and I'm also assuming that you didn't create a bridge device with eth0 attached to it (as was described in the previous post).

OK, time for the L3 part. Everything here has to be done on the remote machine. So, on the remote machine, first tell it that 192.168.0.40 is attached to the tap0 interface:

ip route add 192.168.0.40/32 dev tap0
Next, allow IP forwarding:
echo 1 > /proc/sys/net/ipv4/ip_forward
And that's it. All that's left is to automate the whole procedure.
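A minimal sketch of such a script for the remote-host side, collecting the commands from this post (the interface names and the address are the ones from the example network, so adjust them to your setup):
#!/bin/bash
# routed SSH tap tunnel - remote host side
TAP=tap0              # tunnel interface created by sshd
LAN=eth0              # interface attached to the local network
CLIENT=192.168.0.40   # address the road-warrior uses on the LAN

ip link set $TAP up
echo 0 > /proc/sys/net/ipv4/conf/$TAP/rp_filter
echo 1 > /proc/sys/net/ipv4/conf/$TAP/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/$LAN/proxy_arp
echo 1 > /proc/sys/net/ipv4/ip_forward
ip route add $CLIENT/32 dev $TAP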

Monday, November 21, 2011

SOPA, a.k.a. the Stop Online Piracy Act...

This is a post about SOPA. There are many posts, news items and other material about it on the Web, primarily in English, and a lot less in Croatian. So I decided to write a post in Croatian, but the links lead to English sites.
Sometimes I really consider myself lucky for not living in America. Unfortunately, the fact that I don't live there doesn't help me much, because whatever the Americans do affects everyone else in the world, whether we like it or not.

This time the reason for such an attitude is SOPA, a.k.a. the Stop Online Piracy Act, a bill that attempts to finish off piracy on the Internet. Don't get me wrong, piracy is a problem, but the solution to that problem is certainly not SOPA! Namely, this law would allow the American government, or some agency of it, to block any Web site (or other content on the Internet) based on the mere suspicion that it hosts pirated content! And to clarify, "block" means that the site cannot be accessed from within America if it is hosted outside, and if it is hosted inside, things get even more interesting. In short, it means that the owner of the Web site doesn't have to be notified and may not even know that his pages can no longer be accessed, never mind any possibility of responding to the reports and accusations - nothing, access is simply blocked! Obviously, this is something outrageous! In a way it creates a firewall separating America (because the law applies there) from the rest of the world. In a way, the Americans are creating their own Great Firewall of China, whose primary purpose is to control their own citizens rather than protect them from external threats.

And who is pushing this? Well, the MPAA, RIAA and the like - associations on the level of our ZAMP (in Croatia, at least; other countries have their equivalents) whose purpose is to protect authors, but which have become an end in themselves and whose purpose has in the meantime become squeezing money out of people! They definitely don't understand the modern age and are desperately trying to stop progress. That is a totally wrong approach, because they should be finding ways to earn money in the current situation. The problem is that, thanks to the Internet, they are no longer monopolists of distribution, and it's hard to give up a monopoly...

And if you think this isn't a problem, consider that at a recent trial it came to light that Warner Bros. had been sending piracy notices for material that had nothing to do with them! They simply had some application that "automatically" sent notices about anything suspicious! If someone had just told me this, I would have thought they were retelling a Monty Python sketch!

It has gone so far that Google, Facebook and some other large Internet companies bought a full-page ad in The New York Times warning about the harmfulness of this law.

This post (one of many) analyzes SOPA, and in particular reviews the testimonies of certain persons before Congress, who are said to be biased and obviously don't understand the Internet (and, with it, the times they live in). As a curiosity, the post also digs up the testimony of one of the MPAA's top people in the '80s, when they wanted to ban video recorders on the grounds that they were stealing their profits! In short, an unbelievable group of unbelievably clueless people.

And yes, good old Europe, however many problems of its own it has - and thank God there is no shortage of them - still doesn't follow America in everything (unlike some!). So they have clearly said that they consider SOPA not to be the solution! What else is there to say but AMEN!

Sunday, November 20, 2011

Programmer's blogs...

There was a very interesting question on Hacker News: What programming blogs do you read daily? Well, that's a very interesting question, so I read some of the comments to see whether there would be blogs interesting to me too. For the time being, here is only a (partial) list of links that were mentioned in the comments. Also, someone posted a link to a similar (but more restricted) question on StackExchange about PHP and MySQL blogs.

Checked, with comments:
  • None yet :)

Yet unchecked/unsorted:

Physics news...

There has been very interesting news in physics lately. First, there is the news about the OPERA team repeating their experiment (preprint here), this time with greater precision and with the same result, i.e. the neutrinos arrived sooner than expected. Now even those who were skeptical are becoming cautious; that is, everyone is now waiting for MINOS to repeat the experiment so that we get independent results. If MINOS confirms the OPERA results, that will certainly be a revolution! And while I'm on the subject of skeptics, I highly recommend the blog of the physicist Ethan Siegel; I find it very interesting and informative!

Then there is the news about another breakthrough, this time in quantum mechanics. Basically, it was believed that the wave function represents the probability of where a particle is. But this new result says that it is actually something that exists as such!

To continue, there is more news about the possibility that the LHC may have found a crack in modern physics. It's a mystery why matter "survived" while antimatter didn't. Results provided by the LHC (more precisely, the LHCb detector) show that there is a 0.8% difference in decay between matter and antimatter. This is about 8 times more than predicted by current theories. The result isn't conclusive yet, as it has to reach 5 sigma significance (currently it is only 3). But there was an earlier experiment done at Fermilab that showed a difference of about 0.46%. That result also had a statistical significance of only 3 sigma, but since there are now two independent experiments with almost the same result, this is becoming interesting.

Finally, it seems that the hunt for the Higgs boson is closing in on showing whether it exists or not.

And while I was reading the comments I found recommendations for several other good physics blogs:
So much interesting stuff... so little time... :(

Saturday, November 19, 2011

Interesting fact about card shuffling...

Every now and then I stumble upon something obvious, yet something that hadn't occurred to me. This time it was this post. It's about shuffling cards, which I and many others have done so many times. Yes, I knew that there are many combinations, but with a little bit of analysis it turns out that there are so many that every shuffle is very likely unique in the history of card shuffling. That conclusion is what made me say Wow! It is even more impressive when you say that a particular ordering of cards is unique in human history. Admittedly, that is a bit out of proportion, since modern cards, according to the post, appeared in Europe somewhere around the 14th century and, according to Wikipedia, were invented in China somewhere in the 9th century. But even if cards had been with humans from day one (however we interpret that), it wouldn't have been enough time to try all the combinations!

The total number of orderings (permutations) of a deck of 52 cards is 52!. This is exactly:
52! = 80658175170943878571660636856403766975289505440883277824000000000000
or approximately 8.0658×10^67. It is a huge number, even though substantially less than the number of atoms in the visible universe (10^80) :). OK, and with an overestimation of the number of shuffles done in the history of playing cards we end up with a number that is smaller by nearly 48 orders of magnitude, i.e. somewhere around 1.546×10^20 (note that I think there is an error in the calculation in the original post).
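If you want to reproduce the number yourself, a one-liner is enough (assuming Python is installed):
python -c 'import math; print(math.factorial(52))'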

And for the end, I have an interesting rhetorical question: can I patent a certain card ordering as an invention? :) Obviously, I cannot give decisive proof that no one else has ever produced that particular ordering, but this shows it is highly likely so! :)

NVidia Linux drivers...

Well, after several hard lockups during the past several months, and a few other bugs over an even longer time, I'll be sure next time I buy a computer to try very hard to get one with an ATI video card. I cannot describe how much I hate the binary driver from NVidia and, by extension, NVidia itself! Here is why I hate it:
  1. I have to install it manually. This means that every time either the kernel or the X driver is updated, I have to go to single user mode or runlevel 3 to recompile the NVidia driver. Yeah, I know, there are rpm packages in rpmforge or a similar repository, but for some reason I wasn't satisfied with them and I haven't used them for a long time now. Nevertheless, even if I were to use them, it wouldn't help with the next two problems!
  2. It locks up, and the lockups are hard, i.e. nothing but the power button helps. This happens regularly, without any warning signs; suddenly the system is frozen and nothing works! Nor is it possible to log in over the network to restart the computer!
  3. Some programs, most frequently LibreOffice, have problems redrawing the screen. At first I thought it was a bug in those programs, but now I'm convinced that the problem is in the video driver.
And not to forget, when I had an ATI card (in a Lenovo W500), dual monitors and rotation worked fantastically! And everything could be controlled from a small applet in the tray. With NVidia, nothing worked so flawlessly.

I tried to download a newer driver from the NVidia FTP site. That was 290.06 at the time I was looking. But it locked up the machine even more frequently. Then again, it is marked as beta, so that is, in some way, expected. So I went to the NVidia site to see which version is considered stable, and that was 285.05.09 - the one I had problems with in the first place and the one I was trying to replace!

The reason I went with the NVidia binary driver in the first place was gnome-shell. Namely, when Fedora switched to gnome-shell, it required 3D support, and nouveau didn't support the 3D capabilities of my graphics card (a Quadro FX 880M in a Lenovo W510). That meant using the fallback GUI, which wasn't usable for me, and besides, I wanted to try gnome-shell.

So, after all this, I decided to try the nouveau driver again. For that I had to disable the NVidia driver. At first I thought it would be simple, but it turned out not to be!

Disabling the NVidia proprietary driver

First I switched to runlevel 3 in order to turn off the graphics subsystem (i.e. X11):
init 3
OK, the first thing I did was blacklist the nvidia driver and remove nouveau from the blacklist. This is done in the /etc/modprobe.d/blacklist.conf file. In there, you'll find the following line:
blacklist nouveau
Comment out (or remove) that line, and add the following one:
blacklist nvidia
Since I didn't want to reboot the machine, I also manually removed the nvidia driver module from the kernel using the following command:
rmmod nvidia
To be sure that nvidia will not be loaded during boot, I also recreated initramfs image using:
dracut --force
Finally, I changed /etc/X11/xorg.conf. There you'll find the following line:
Driver "nvidia"
which I changed into
Driver "nouveau"
OK, now I tried to switch to runlevel 5, i.e. to turn X11 back on, but it didn't work. The problem was that NVidia had messed up the Mesa libraries. So it was clear that it isn't possible to have both the NVidia proprietary driver and nouveau at the same time, at least not that easily. So I decided to remove the NVidia proprietary driver. First I switched to runlevel 3 again, and then I ran the following command:
nvidia-uninstall
It started, complained about a changed setup, but eventually did what it was supposed to do. I also reinstalled the Mesa libraries:
yum reinstall mesa-libGL\*
And I again tried to switch to runlevel 5. This time I was greeted with the GDM login, but the resolution was probably the lowest possible!!! I thought I could log in and start the display configuration tool in order to change the resolution, but that didn't work! After some twiddling and googling, I first tried to add the following line to xorg.conf:
Modes "1600x900"
Note that 1600x900 is the maximum resolution supported on my laptop. I placed that line in the Screen section, Display subsection. After trying again, it still didn't work! Then I remembered that the new X server is autoconfigured, so I removed xorg.conf altogether, and this time the resolution was correct!
OK, I tried to log in, but now something had happened to gnome-shell. Nautilus worked (because I saw the desktop), and Getting Things Gnome also started, but in the background I was greeted with a gnome-shell error message saying that something went wrong, and all I could do was click OK (luckily moving windows worked, because otherwise, with Getting Things Gnome overlapping the OK button, I wouldn't have been able to reach it). When I clicked it, I was back at the login screen. So, what's wrong now!?

I connected from another machine to monitor the /var/log/messages file, and during the login procedure I noticed the following error message:
gnome-session[25479]: WARNING: App 'gnome-shell.desktop' respawning too quickly
OK, the problem is gnome-shell, but why!? I decided to enter graphics-less mode again (init 3) and to start X from the command line using the startx command. Now I spotted the following error:
X Error of failed request: GLXUnsupportedPrivateRequest
Actually, that is only part of the error (as I didn't save it), but it gave me a hint: a library was issuing some X server request that the X server didn't understand. And that is a private request, probably specific to each driver. So, NVidia didn't properly uninstall something. But what? I again went to check whether all packages are correct. mesa-libGL and mesa-libGLU were good (as reported by the rpm -q --verify command). Then I checked all the X libraries and programs:
for i in `rpm -qa xorg\*`; do echo; echo; echo $i; rpm -q --verify $i; done
Only xorg-x11-server-Xorg-1.10.4-1.fc15.x86_64 had some inconsistency, so I reinstalled it and tried again (go to graphics mode, try to log in, go back to text mode). It didn't work.

So, I didn't know what more to look at, and decided to reinstall the Mesa libraries once again. Then I checked again with 'rpm -q --verify', and this time I noticed something strange: the links were not right (signified by the letter L in the output of the rpm command). So I went to the /usr/lib64 directory, listed the libraries whose names start with libGL, and immediately spotted the problem. Namely, libGL.so and libGL.so.1 were symbolic links pointing to the NVidia libraries, not the ones installed with Mesa. It turned out that the reinstallation didn't recreate the soft links!!! Furthermore, NVidia had left behind at least five versions of its own libraries (that's how many times I had installed a new NVidia binary driver). So I removed the NVidia libraries, recreated the links to point to the right libraries, and tried to log in again. This time everything worked as expected!
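For reference, the cleanup amounted to something along these lines; the real name of the Mesa library (libGL.so.1.2 below) may differ on your system, so check what ls shows before recreating the links:
cd /usr/lib64
ls -l libGL.so*
# remove the links still pointing to the leftover NVidia libraries
rm -f libGL.so libGL.so.1
# recreate them so they point to the Mesa library (adjust the version!)
ln -s libGL.so.1.2 libGL.so.1
ln -s libGL.so.1 libGL.so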

So, we'll see now if this is going to be more stable or not.

Edit 1
OK, at least one of those lockups wasn't related to NVidia. It happened that I upgraded the kernel and the NVidia driver at the same time. Starting from that moment I saw a new kernel oops, and since I already had problems with NVidia, I wrongly concluded that it was NVidia's fault. But now I suspect the wireless network driver. So, I downgraded the kernel version to see what will happen.

Edit 2
More than a day of work without a problem. So it seems that it was the combination of the newer kernel (wireless) and the NVidia binary driver!

Edit 3 - 20111123
Everything has been working for several days now. Once I had a problem while suspending the laptop that made me reset the machine. Oh, and detection and control of a second monitor works much better than with the NVidia proprietary (i.e. binary) driver.

Thursday, November 17, 2011

Zimbra 7 and domain wide disclaimer...

In Zimbra 7 it is no longer necessary to hack scripts in order to get a domain-wide disclaimer - at least not if you want to add the same disclaimer to every mail that passes through your mail server. In case you want more control, like not attaching it to some mails, then you'll have to resort to scripts and hacking again.

So, to add a disclaimer, the following will do:
zmprov mcf zimbraDomainMandatoryMailSignatureText <text disclaimer>
zmprov mcf zimbraDomainMandatoryMailSignatureHTML <html disclaimer>
zmprov mcf zimbraDomainMandatoryMailSignatureEnabled TRUE
and finally, reload amavis configuration using
zmamavisdctl reload
Note that whenever you change any of the previous three attributes you have to restart amavis. Those attributes are read only when amavis starts, and they are used as input to the configuration-writing routines.

You can query the current values using gcf instead of mcf (don't forget to leave out the last argument, because you are not setting values!).

Well, now, there were some problems.

The first one is that your disclaimer probably has more than one line - in any case, more than you are prepared to type. In that case you can put it into a file and then use the backtick shell notation with the cat command to place the content of the file in the appropriate place. Don't forget to place double quotes around the whole thing so that spaces are not interpreted by the shell (like this: "`cat somefile`")! There are other methods too, but this hack worked for me. Well, almost, because when the text version of the disclaimer was attached to a text-only mail, everything after the first line was cut off. HTML, on the other hand, worked, but it also suffered from the second problem.
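In other words, something along these lines (the file names are, of course, placeholders):
zmprov mcf zimbraDomainMandatoryMailSignatureText "`cat /tmp/disclaimer.txt`"
zmprov mcf zimbraDomainMandatoryMailSignatureHTML "`cat /tmp/disclaimer.html`"
zmamavisdctl reload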

The second problem is that my disclaimer contains Croatian, which means it's encoded in UTF-8, and that, for some reason, doesn't play well with LDAP, or with some command in between that transfers the text/html disclaimer into LDAP. I noticed that other tools, e.g. vi/vim, even bash, also behaved strangely upon encountering some Croatian characters. Setting the environment variable LANG to en_US.utf8 before executing the zmprov command didn't work either! And the locale command showed that C was still being used. I managed to fix this by commenting out the following two lines in the ~zimbra/.bash_profile file:
#export LANG=C
#export LC_ALL=C
After that (don't forget to log out and back in), bash and vi started to work as expected, but Zimbra was still messing up the disclaimer. That is, until I restarted amavis again; then it worked. In case it still doesn't work for you, set the HTML attribute using the zmprov command and restart amavis again!

This locale setting is nothing new in Zimbra. I checked the previous version of Zimbra and it was there too. But until now I never had to put UTF-8 into LDAP, so I had never hit this particular error.

This left me with the last problem: the text disclaimer isn't working as it should. So I opened the amavis configuration file /opt/zimbra/conf/amavisd.conf to see what's actually going on. In there, I found that the file with the disclaimer should be in the /opt/zimbra/data/altermime directory and should be named _OPTION_.txt. Well, it is there, but not called _OPTION_.txt. It could be that some postprocessing is performed on the values in the amavis configuration file, but I ignored that, as it is not important. What is important is that the file with the disclaimer is called global-default.txt and it had the wrong content, i.e. only the first line of the disclaimer. For a quick test I replaced it with the disclaimer I wanted to have and, voila, it worked!!!
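The quick test itself was nothing more than overwriting that file (the source file name is, again, a placeholder):
cp /tmp/disclaimer.txt /opt/zimbra/data/altermime/global-default.txt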

So, why is there an error? I'll investigate later, since right now I have other things to do. :)

The UIF format and the MagicISO tool...

I just downloaded a file from the Internet and it turned out to be stored in the UIF format (at least that was the extension). In such cases I use the file command, which is installed on almost every Unix and which prints out what the file is - usually enough for me to know what to do next. In this case, however, it didn't recognize the file, which you can tell because the description it prints is just data. After a bit of googling I discovered that there is a converter from UIF to ISO, conveniently called uif2iso (and then I discovered that I already had that converter from before! :)).

In any case, I recommend you read the uif2iso.txt file that comes inside the archive. If you don't know English, here is a short summary. Namely, UIF is a format specific to the MagicISO tool. And everything would be fine if there weren't two problems: first, there already are standard formats for storing CD/DVD images (ISO, specifically). The second problem is even worse: the author of MagicISO (some Chinese guy) uses that format to force users to buy MagicISO. He is quite proactive about it, in the sense that every now and then he changes the format in order to break all the tools that can read it (uif2iso among them).

So, if you run into a UIF file, don't go and get MagicISO, use uif2iso instead.

To conclude, the lesson is simple. Stick to standard formats (and, in a way, applications) and don't buy all kinds of crap offered around the Internet!

By the way, this annoys me so much that I also wrote this post in an English version.

MagicISO and uif format...

I just stumbled upon a file stored in the uif format. Linux's file command doesn't recognize it, but a bit of google-fu quickly revealed a tool, uif2iso, that converts the uif format into an iso image. It is a command line tool that works on both Linux and Windows.
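If you want to try it, the whole check-and-convert step boils down to something like this (the file name something.uif is made up; if your uif2iso build expects different arguments, run it without any to get the usage):
file something.uif
uif2iso something.uif something.iso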

Now, I suggest that you read the uif2iso.txt file that is included in the archive. It turns out that the uif format was made up by some Chinese guy who produces the MagicISO tool. By "inventing" this totally new format he forces users to buy his tool. It also seems that the guy is quite proactive about it: every now and then he changes the format so that other tools that were able to read it don't work any more.

The moral of this story is quite clear! Use standard tools and standard formats!

I think it is important to spread information like this, so I wrote a Croatian version of this post too.

Monday, November 14, 2011

SSH VPNs: Bridged connection to LAN using tap

In the previous post I showed how to create SSH tunnels that end on the network layer (i.e. ppp and point-to-point based VPNs) and on the link layer (Ethernet type). Now I'm continuing with a series of scenarios that do configuration on the network layer (and a few on the link layer too).

This is the first scenario; in it, I want the remote host to look like it is directly attached to the local network. The network layout for this scenario is shown in the following figure:

As you can see, there is a local network with the address 192.168.0.0/24. On that local network there is a gateway (192.168.0.1) as well as a remote server (192.168.0.30) that we are going to use as the end point of the tunnel. What we want to achieve is that our laptop, which is somewhere on the Internet and has the IP address 10.2.4.60, behaves as if it were attached directly to the local network and uses the IP address 192.168.0.40. This kind of setup, where a single remote machine connects to some network via VPN, is often called the road-warrior scenario. To accomplish this we are going to use the bridging built into the Linux kernel. In a future post I'll describe a variant of this setup that uses forwarding (i.e. routing) to achieve the same thing.

Preparation steps

The idea (for this scenario) is to use bridging. But since manipulating the bridge and the active interface might (and very probably will) disconnect you, it is better to first configure the bridge with only a single interface, the one that connects the remote computer to the local network. In case you are using CentOS (RHEL/Fedora or some derivative), do the following on your remote server:
  1. Go into the directory /etc/sysconfig/network-scripts.
  2. Copy the file ifcfg-eth0 to ifcfg-br0 (I'm assuming that your active interface is eth0; if it is something else, change the name accordingly). Also make a backup copy of that file in case you want to revert the changes! Remove the UUID line if there is one.
  3. Edit the file ifcfg-eth0. Add the line BRIDGE=br0 and remove all lines that specify the IP address and related parameters (broadcast, netmask, gateway, DNS, ...). Also make certain that the BOOTPROTO parameter is set to none. Remove the UUID line if there is one.
  4. Modify ifcfg-br0. Change the type from Ethernet to Bridge, change the name from eth0 to br0 and finally add the line STP=off (see the sketch of both files right after this list).
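Just as a sketch, here is roughly how the two files could end up looking; the addressing matches the example network used in this post, so adjust it to your own:
# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
BRIDGE=br0
# /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.0.30
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
STP=off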
Now, restart the machine and see if everything is OK. First, you should be able to connect to the machine, and second, there should be a new interface, br0, which should have the IP address that eth0 had before. The interface eth0 itself should be up (UP, LOWER_UP flags), but it must not have an IP address assigned.
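A quick way to check all of that after the reboot (the exact output will differ; what matters is that br0 has the address and eth0 doesn't, and that eth0 shows up as a member of the bridge):
ip addr show br0
ip addr show eth0
brctl show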

One other thing you have to take care of is the firewall. In case you have a firewall configured (which you should!), during this test it's best to disable it and enable it again later. To disable the firewall on CentOS/Fedora (and similar distributions) do the following:
service iptables stop
This will turn off the firewall until you turn it on again, or until you restart the machine. To turn it off permanently (strongly not advised!) do the following:
chkconfig iptables off
SELinux might also get in your way. In case you at some point receive the following error message:
channel 0: open failed: administratively prohibited: open failed
it's an indication that the tap device hasn't been created because of missing privileges. The simplest way around it is to temporarily turn off SELinux (again, I don't advise doing that in a production environment!):
setenforce 0
If, at any point, something doesn't work, there are three things you can use to debug the problem. The first one is the option -v (or -vv) to the ssh command, which will make it verbose and print out what's happening. The next thing is to look into the log files on the remote machine (for sshd messages). Finally, you should know how to use the tcpdump tool.
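For example, something along these lines (the log file location is the usual one on CentOS; other distributions keep sshd messages elsewhere):
ssh -vv -w any -o Tunnel=ethernet remote_machine
tail -f /var/log/secure
tcpdump -nni tap0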

Creating and configuring tunnel

Ok, for a start, open two terminals on the laptop and in each one of them switch to the root user (su command). Then, in one of them execute the ssh command like this:
ssh -w any -o Tunnel=ethernet remote_machine
This will connect you to the remote machine (after successful authentication) and create two tap devices, one on the local machine and the other on the remote machine. I'll assume that their name is tap0 on both sides. Now, switch temporarily to the other terminal and execute the following commands:
ip link set tap0 up
tcpdump -nni tap0
The first command will bring the interface up and the second will run tcpdump. You'll be able to see traffic from the remote network as soon as we attach the other end of the tunnel to the bridge br0. The -nn options instruct tcpdump not to print symbolic names of anything, while the option -i binds tcpdump to the interface tap0. When you want to stop tcpdump, use the Ctrl+C keys.

Now, on the remote host execute the following commands:
brctl addif br0 tap0
ip link set tap0 up
The first command will add the tap device to the bridge (more colloquially, the switch), and the second one will activate the interface. The moment you execute the second command you'll notice that the tcpdump command, running in the second window, starts to print some output. This output is the traffic from the local network, transferred to the laptop. Of course, you'll see it only if there is some traffic on the local network.

One final step: add the IP address from the local network, 192.168.0.40, to the laptop. But before that, we have to add an explicit route for the remote host, or otherwise things will lock up: the laptop would think that the remote host is now directly attached, while it is not, and nothing would work. So, execute the following command on the laptop (terminate tcpdump or open a new window and switch to root):
ip route add 192.168.0.30/32 via your_default_route
Change the string your_default_route to your default router (check what it is using the 'ip route sh' command; it is the IP address in the same line as the word default). Finally, we are ready to add the IP address. On the laptop, in the terminal where you added the explicit route, execute the following command:
ip addr add 192.168.0.40/24 dev tap0
From that moment on, when you communicate with any host on the local network, the communication will go through the tunnel to the remote host, which will forward the traffic to the local network for you. Hosts on the network 192.168.0.0/24 won't notice that you are actually somewhere else on the Internet.

One final thing: the question of which destinations you want to reach via this tunnel. You can select all of them, or only a subset. In any case, you use routing to achieve that, i.e. you use the following command:
ip route add destination via gateway_on_local_network
Change destination to whatever you want to access via the tunnel. In case you want everything, then use the word default (it could happen that you first need to remove the existing default route!). Or you can set a network, or an IP address. Let's suppose that you want to reach Google via the local network. In that case find out the IP address (or network) of Google and use that instead of the word destination.

In place of gateway_on_local_network use gateway on the local network. In our case that is 192.168.0.1.
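Two concrete examples, sticking to our addressing: the first sends everything through the tunnel (note the removal of the old default route first), the second sends only one specific network through it (198.51.100.0/24 here is a made-up destination, put in whatever you looked up):
ip route del default
ip route add default via 192.168.0.1
ip route add 198.51.100.0/24 via 192.168.0.1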

Finally, to tear down connection, just kill ssh.

Automating access

As a last thing, I'll describe how to automate the whole procedure. If you want a fully automated solution, then first you have to configure passwordless login for ssh. Then create two scripts. The first one, called vpn_start.sh, will be placed on the remote machine in the directory /usr/local/sbin and will contain the following lines:
#!/bin/bash
ip link set tap0 up
brctl addif br0 tap0
Let's call the second script vpn_start.sh as well; this one goes on the laptop, also in the /usr/local/sbin/ directory. The content of that script should be:
#!/bin/bash
ssh -f -w any -o Tunnel=ethernet remote_machine /usr/local/sbin/vpn_start.sh
ip link set tap0 up
ip route add 192.168.0.30/32 via your_default_route
ip addr add 192.168.0.40/24 dev tap0
ip route add destination via gateway_on_local_network
Repeat the last command as many times as necessary and change all the parameters accordingly. Don't forget to make both scripts executable! Now, to run the configuration just execute the script on the laptop:
/usr/local/sbin/vpn_start.sh
And that's it! Of course, those scripts could be a lot fancier, but this will do just fine!
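As for the passwordless login mentioned above, a minimal sketch would be something like the following (assuming, as in the rest of this post, that you connect as root to the remote machine, which again is not something you want outside a lab):
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
ssh-copy-id root@remote_machine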

Friday, November 11, 2011

Tunneling everything with SSH... or how to make VPNs...

In the previous three posts I described how to use OpenSSH to tunnel traffic for different applications. What all those three techniques had in common was that they tunneled only TCP traffic and that the connection was always initiated from the local machine, i.e. it was not possible for the machine on the other side to initiate a transfer of data to us (actually, it is possible to circumvent some of those restrictions, but more about that in some other post!). Furthermore, the implicit assumption was that there is only one TCP connection from your host to the remote host. In case there are multiple connections opened on different ports, you'll need to run ssh as many times as there are connections.

In this post I'm going to describe how to tunnel all traffic, regardless of its type, from one machine to another. In a way, I'll show how to create VPNs using SSH. Actually, there are three ways to do that:
  • Tunneling using ppp protocol on top of SSH
  • Tunneling using tun devices natively supported by newer versions of ssh on, at least, Linux.
  • Tunneling using Ethernet-like tap devices, also supported on a Linux OS.
In all of those cases you'll need administrative privileges to implement them. In the end, whichever path you take (i.e. ppp or tun/tap), you'll end up configuring network parameters and the firewall. So, I'm going to break the description into several posts. In this first post I'll deal with the link-layer setup (ppp/tun/tap) and in the following posts I'll describe the network-layer configuration for different scenarios.

Link-layer setup

The basic goal of this step is to provide a network device on which the network layer will work.

Using ppp program and protocol

For this method there is a very good small howto document. Here I'll only repeat the relevant bits in case you don't want to read the whole document. First, you have to configure passwordless authentication to the remote host. This is easy to do and there are plenty of references on the Internet. Later, maybe I'll write one, too. :) Anyway, in the following text I'll assume that you are using the root account on both machines, i.e. you are root on the local machine and you are connecting to the remote machine under the username root. Beware that this is a very bad security practice, but for a quick test or in a lab environment it'll do.

Ok, after you have configured passwordless login for the root user, run the following command (note: this is a single line up to and including the IP addresses, but it may be broken into multiple lines because of the formatting in the browser!):
# pppd updetach noauth passive pty 'ssh REMOTE_HOST -o Batchmode=yes /usr/sbin/pppd nodetach notty noauth' 10.0.0.1:10.0.0.2
Using interface ppp0
Connect: ppp0 <--> /dev/pts/12
Deflate (15) compression enabled
local  IP address 10.0.0.1
remote IP address 10.0.0.2
#
As you can see, you get some information about the interface and your prompt is immediately back. You need to have the ppp package installed on both machines; if it is not, there will be an error message, something like command not found, and no connection will be established. Anyway, in this case everything was successful and we are notified that the remote side has the address 10.0.0.2 and the local side has the address 10.0.0.1. To verify that everything works, try to ping the remote side:
# ping -c 3 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_req=1 ttl=64 time=19.3 ms
64 bytes from 10.0.0.2: icmp_req=2 ttl=64 time=17.5 ms
64 bytes from 10.0.0.2: icmp_req=3 ttl=64 time=18.7 ms

--- 10.0.0.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 17.503/18.534/19.380/0.777 ms
If you see output like the one above, it means that the link is working and the next thing to do is to set up the network-layer parameters (apart from the IP addresses).

So, what happened? SSH created a communication channel between the two machines. This channel is then used by the pppd processes on both ends to create ppp interfaces (one per side). This interface is then used to transfer IP packets via the PPP protocol. What we are (mis)using here is that pppd expects a direct link between two points and sends IP packets through that link. A direct link is some sort of serial interface, for example a direct cable, a UMTS/EDGE or similar connection, etc. The exact means by which the two endpoints communicate is unimportant from pppd's perspective; what matters is only that whatever is sent on one end is received on the other, and vice versa. Because of this we could place SSH between the two pppd processes.

The way things are executed is as follows:
  1. The local pppd starts (the first pppd in the line), without an authentication phase (option noauth), in passive mode (option passive), meaning it waits for someone to connect. It also allocates one master/slave pty pair (option pty) in which it controls the master side, while the slave side is connected to the ssh process (the argument to the pty option). There are also two IP addresses at the end of the command line that are arguments to this instance of pppd. They instruct the local pppd process to assign the address 10.0.0.1 to itself and 10.0.0.2 to the remote end.
  2. The ssh process connects to REMOTE_HOST in batch mode (option -o BatchMode=yes). Basically, batch mode tells ssh not to ask for a password/passphrase because there is no user to enter it. The rest of the command, up to the closing single quote, is the command that ssh has to execute after successfully connecting to the remote host.
  3. The ssh that connected to the remote machine runs the second pppd process there. The options instruct that pppd process to use stdin/stdout instead of a terminal (option notty), not to detach from the controlling terminal (option nodetach), and not to require authentication (option noauth).
And that's it. Quite simple, as you can see. But as I said, it is not good security practice to allow root login from the network! Locally, you can run this command directly from the root account. So, for a production deployment you'll need to do three additional things (a rough sketch follows the list):
  1. Create separate account that will be used to connect to a remote machine.
  2. Configure sudo so that it allows that new account to run pppd binary without entering the password.
  3. Modify invocation of ssh command so that remote user is specified and pppd program is executed via sudo command.
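Roughly, that could look like this; the account name vpnuser is made up, the sudoers line (added via visudo) goes on the remote machine, and the pppd invocation is again a single line:
vpnuser ALL=(root) NOPASSWD: /usr/sbin/pppd
pppd updetach noauth passive pty 'ssh vpnuser@REMOTE_HOST -o Batchmode=yes sudo /usr/sbin/pppd nodetach notty noauth' 10.0.0.1:10.0.0.2
Depending on the distribution, you may also have to relax the requiretty setting in sudoers for that account, since there is no terminal in this setup.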
Using tun with Ethernet-like functionality

For this mode you need to have the following directive in the server configuration (/etc/ssh/sshd_config):
Tunnel any
After the change don't forget to reload the ssh configuration. What this configuration option tells the ssh daemon is to allow tunneling using tun and tap devices. In other words, we can make our tunnel look like an Ethernet interface, or like a point-to-point interface on the third (network) layer. The value any allows both. In case you want to restrict it to a certain kind, use either ethernet or point-to-point, depending on which one you need.
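On CentOS and similar distributions of that time, reloading the ssh daemon would be something like:
service sshd reload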

In this case we want Ethernet-like functionality, so assuming that you provided either the any or the ethernet parameter to the Tunnel option, as the root user run the ssh client as follows:
ssh -f -N -w any -o Tunnel=ethernet root@remotehost
Change the remotehost part to the host name (or IP address) you are connecting to. It is necessary to specify the Tunnel option in the ssh command because the default value is point-to-point. The -w option requests that ssh do the tunneling using a tap or tun device (depending on the value of the Tunnel option), in our case tap. After successfully logging in to the remote machine, check the interfaces with the ip command (or ifconfig). You should see on both hosts that there is a tap0 interface. If you already had tap interfaces, then the new ones will probably have the highest number. The options -f and -N cause ssh to go into the background after performing authentication, since a command line is not necessary for tunneling.
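A quick way to check that the device appeared on either side (the number may differ if you already had tap interfaces):
ip link show tap0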

To stop tap device, send a SIGTERM signal to ssh (using kill command, of course).

Using tun with only network layer functionality

As with the Ethernet-like functionality, you have to enable this mode in the sshd configuration file, and you also need to do this as root on both sides of the connection.

The procedure for creating point-to-point tunnels is similar to creating the Ethernet ones, only the argument is point-to-point instead of ethernet. Since the point-to-point type is the default, you don't have to specify the Tunnel option, i.e.
ssh -f -N -w any root@remotehost
After logging in to the remote host, ssh will go into the background and you'll see a new tun device created.
