Sunday, December 29, 2013

Gnome Shell's calendar and Thunderbird

After I installed Fedora 20, I noticed that the calendar in Gnome Shell's clock doesn't work, i.e. it doesn't show scheduled entries. Actually, this didn't work in the older version of Fedora either, but now I decided to make it work. Before describing what I tried and what I did, I have to describe my setup. First off, I'm using Thunderbird; Evolution proved too unstable for me, so I ditched it. Next, all my calendars are stored on Google, so there is no local calendar in Thunderbird. The reason I'm using Google calendars is to be able to sync with my mobile phone. The reason for a local client, instead of the Web client, is pure habit and convenience; I prefer desktop clients to Web-based ones. Finally, I don't mind having some additional software installed whether I use it directly or not, so I usually have Evolution installed alongside Thunderbird.

Gnome Shell's calendar integrates only with Evolution; Thunderbird isn't supported. This is actually expected, as Thunderbird is primarily a mail application, although with the Lightning plugin it is a very good calendar solution, too. So, there is no easy way to define Thunderbird as the default calendar application. After some quick googling, I found the following plugin for Thunderbird. Basically, it syncs Thunderbird's calendar with Evolution's, one way, and that's it. When I wanted to install it, I had a small, and unrelated, problem: I could not find how to install it in Thunderbird! There was no option in Thunderbird for managing add-ons! After some more googling I finally realized that the menu opened by the button on the upper right-hand side (the icon with three parallel lines) isn't actually the full menu. I managed to open the full Tools menu by pressing Alt+F.

After installing this plugin and restarting Gnome and then Thunderbird, it didn't work. So, next I tried the following. But first, I checked what the current setting was:
$ gsettings get org.gnome.desktop.default-applications.office.calendar exec
'evolution -c calendar'
Ok, now I changed the value using the following command:
$ gsettings set org.gnome.desktop.default-applications.office.calendar exec thunderbird
That didn't work either. It occurred to me that the problem might be that I'm using Thunderbird profiles, so I also tried to specify a profile in the command:
gsettings set org.gnome.desktop.default-applications.office.calendar exec 'thunderbird -P MyProfile'
Still, no luck! Then I went back to the plugin page and saw there that I have to create an initial Evolution profile. I tried that too, but again, no luck. Maybe the problem was that I'm not using local calendars but remote ones. But then I realized that there is a very easy solution: I created an Evolution profile that is connected to Google calendars and syncs with them. Google calendars are, in turn, connected to Thunderbird and everything works!

Yet, I didn't manage to get Thunderbird used when I click on the Open Calendar option in Gnome Shell's clock; Evolution is always used. Note that I tried two things, and neither helped. First, I tried to define Thunderbird as the default calendar application using the gsettings command as described above. Next, I tried to define Thunderbird as the default calendar application in the menu that is accessed by selecting the All Settings application, the Details button, and there the Default Applications menu option. Note that Thunderbird isn't shown as a possible calendar application. That is because its MIME type declaration doesn't specify it as such. To change that I used the procedure described here. In short, open the file /usr/share/applications/mozilla-thunderbird.desktop in a text editor and modify the MimeType line so that it reads:
MimeType=message/rfc822;x-scheme-handler/mailto;text/calendar;text/x-vcard;
Now, close the text editor and update the desktop database:
update-desktop-database -q
Finally, go back to the Details panel and you'll see that Thunderbird is now offered as a calendar application, too.
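Alternatively, the default handler can be queried and changed from the command line using xdg-utils; this is just a sketch (the desktop file name is the one edited above and may differ on other distributions):
$ xdg-mime query default text/calendar
$ xdg-mime default mozilla-thunderbird.desktop text/calendar
The first command prints the currently registered handler, the second registers Thunderbird for text/calendar.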

In the end, I lost a few hours investigating this and trying different solutions. Hopefully, someone will find this useful and will waste a lot less time.

Systemd waiting for external encrypted disks...

I have an external, encrypted disk that I periodically connect to my laptop. In order to have an easy-to-remember name instead of the UUID, I placed an entry in /etc/crypttab:
EXTDISK UUID=a809c218-f828-4149-bd9e-1c352a5f94df none
That way, when I connect the disk, it is automatically named EXTDISK and gets an entry in the /dev/mapper directory. Eventually, it is mounted under /run/media/<userid>/EXTDISK. Note that there are only three fields in the line, each separated from the other by spaces. The last field is the passphrase placeholder, but I didn't want the passphrase written on the disk, so I used the keyword none to signal that I want to type it each time the disk is opened.

The problem with this setup is that during the boot procedure systemd waits for this disk to appear and, since the disk isn't there, it has to time out. In the system logs there will be messages like the following ones:
Dec 29 13:32:15 w530 systemd: Dependency failed for Cryptography Setup for EXTDISK.
Dec 29 13:32:15 w530 systemd: Dependency failed for dev-mapper-EXTDISK.device.
Dec 29 13:37:27 w530 systemd: Expecting device dev-mapper-EXTDISK.device...
The solution is simple. In newer versions of /etc/crypttab there is a nofail option that should be added as part of the fourth field. Note that, if there are multiple options in the fourth field, they all have to be separated by commas and no spaces are allowed there. This option isn't listed in the manual page I linked in the introductory section of the post, so check your local crypttab manual page.
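With that option, the crypttab line from above would look like this (nofail goes into the fourth, options, field):
EXTDISK UUID=a809c218-f828-4149-bd9e-1c352a5f94df none nofail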

As a side note, while searching for a solution to this timeout problem, I needed at one point to know which physical devices are beneath the LUKS devices. My /dev/mapper directory looks like this:
# ls -l /dev/mapper/
total 0
crw-------. 1 root root 10, 236 Dec 29 13:27 control
lrwxrwxrwx. 1 root root       7 Dec 29 13:27 fedora-home -> ../dm-3
lrwxrwxrwx. 1 root root       7 Dec 29 13:27 fedora-root -> ../dm-2
lrwxrwxrwx. 1 root root       7 Dec 29 13:27 fedora-swap -> ../dm-1
lrwxrwxrwx. 1 root root       7 Dec 29 13:27 luks-a2c17ceb-222e-4cd2-3330-24a0a1111b43 -> ../dm-0
lrwxrwxrwx. 1 root root       7 Dec 29 13:27 luks-c7e8d2f7-1114-45c0-333b-fb8444222884 -> ../dm-4
What I was curious about were those two luks symlinks, for which I didn't know the underlying physical devices. It turned out it's easy to find out using the cryptsetup tool:
# cryptsetup status luks-a2c17ceb-222e-4cd2-3330-24a0a1111b43
/dev/mapper/luks-a2c17ceb-222e-4cd2-3330-24a0a1111b43  is active and is in use.
  type:    LUKS1
  cipher:  aes-xts-plain64
  keysize: 512 bits
  device:  /dev/sda2
  offset:  4096 sectors
  size:    999184384 sectors
  mode:    read/write
So, there is the answer: /dev/sda2.
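As a side note, lsblk from util-linux should give the same information at a glance, since it prints the device tree with every dm device nested under its parent partition:
# lsblk -o NAME,TYPE,MOUNTPOINT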

Sudden crashes during upgrades on Fedora 20

It has already happened several times that, when I start an upgrade via 'yum update', gnome-shell, or X, crashes and leaves the package database in an inconsistent state. One of those inconsistent states is the following one:
# yum check
Loaded plugins: langpacks, refresh-packagekit
apr-1.5.0-2.fc20.x86_64 is a duplicate with apr-1.4.8-2.fc20.x86_64
apr-util-1.5.3-1.fc20.x86_64 is a duplicate with apr-util-1.5.2-4.fc20.x86_64
apr-util-ldap-1.5.3-1.fc20.x86_64 is a duplicate with apr-util-ldap-1.5.2-4.fc20.x86_64
bijiben-3.10.2-1.fc20.x86_64 is a duplicate with bijiben-3.10.1-1.fc20.x86_64
cifs-utils-6.2-5.fc20.x86_64 is a duplicate with cifs-utils-6.2-4.fc20.x86_64
duplicity-0.6.22-1.fc20.x86_64 is a duplicate with duplicity-0.6.21-1.fc20.x86_64
ghc-numbers-3000.2.0.0-1.fc20.x86_64 is a duplicate with ghc-numbers-3000.1.0.3-3.fc20.x86_64

...

Error: check all
Now, you could try the yum-complete-transaction command which, as its name suggests, should complete the transaction:
# yum-complete-transaction
Loaded plugins: langpacks, refresh-packagekit
There are 1 outstanding transactions to complete. Finishing the most recent one
The remaining transaction had 46 elements left to run
--> Running transaction check
---> Package apr.x86_64 0:1.4.8-2.fc20 will be erased
---> Package apr-util.x86_64 0:1.5.2-4.fc20 will be erased
---> Package apr-util-ldap.x86_64 0:1.5.2-4.fc20 will be erased
---> Package bijiben.x86_64 0:3.10.1-1.fc20 will be erased
---> Package cifs-utils.x86_64 0:6.2-4.fc20 will be erased
---> Package duplicity.x86_64 0:0.6.21-1.fc20 will be erased
---> Package ghc-numbers.x86_64 0:3000.1.0.3-3.fc20 will be erased
--> Processing Dependency: ghc-numbers = 3000.1.0.3-3.fc20 for package: ghc-numbers-devel-3000.1.0.3-3.fc20.x86_64
...
--> Running transaction check
---> Package ghc-numbers-devel.x86_64 0:3000.1.0.3-3.fc20 will be erased
--> Finished Dependency Resolution
Dependencies Resolved
Transaction size changed - this means we are not doing the
same transaction as we were before. Aborting and disabling
this transaction.
You could try running: package-cleanup --problems
                       package-cleanup --dupes
                       rpm -Va --nofiles --nodigest
Transaction files renamed to:
  /var/lib/yum/transaction-all.2013-12-29.10:53.49.disabled
  /var/lib/yum/transaction-done.2013-12-29.10:53.49.disabled
This didn't help, as evidenced by running the 'yum check' command again. Note that running the suggested commands, package-cleanup and rpm, with the given options will not help either.

I solved this using the 'yum reinstall' command. Namely, for each line that 'yum check' showed, e.g.
apr-1.5.0-2.fc20.x86_64 is a duplicate with apr-1.4.8-2.fc20.x86_64
I ran 'yum reinstall' with the first package name from the line, i.e.
yum reinstall apr-1.5.0-2.fc20.x86_64
The problem with this approach is that it is too tedious; you have to do it package by package. Now, if you try something like this:
yum reinstall `yum check | cut -f1 -d" "`
you'll receive a message like the following one:
Error:  Multilib version problems found. This often means that the root
       cause is something else and multilib version checking is just
       pointing out that there is a problem. Eg.:
     
         1. You have an upgrade for gnutls which is missing some
            dependency that another package requires. Yum is trying to
            solve this by installing an older version of gnutls of the
            different architecture. If you exclude the bad architecture
            yum will tell you what the root cause is (which package
            requires what). You can try redoing the upgrade with
            --exclude gnutls.otherarch ... this should give you an error
            message showing the root cause of the problem.
     
         2. You have multiple architectures of gnutls installed, but
            yum can only see an upgrade for one of those architectures.
            If you don't want/need both architectures anymore then you
            can remove the one with the missing update and everything
            will work.
     
         3. You have duplicate versions of gnutls installed already.
            You can use "yum check" to get yum show these errors.
     
       ...you can also use --setopt=protected_multilib=false to remove
       this checking, however this is almost never the correct thing to
       do as something else is very likely to go wrong (often causing
       much more problems).
     
       Protected multilib versions: gnutls-3.1.18-1.fc20.i686 != gnutls-3.1.17-3.fc20.x86_64
Looking a bit into what is happening here didn't reveal the cause. This is a problem with multilib programs/libraries, and it seems that yum isn't processing them correctly, due either to a bug or, more probably, to some configuration setting. Luckily, in this case there is a way around the problem: first remove the old versions, and then reinstall the new ones, just in case. Because we are doing two operations that depend on the state of the RPM database _before_ we begin, we first save the list of problematic packages into a file:
yum check > yum_check
Now, we first remove the old versions of the packages:
yum remove `cut -f6 -d" " yum_check`
That started all right, but again the GUI crashed!? Worse, I wasn't paying attention to which package it happened on! Going through the whole process again, there were no problems!? Obviously a bug, but where it lies is a whole different story. Anyway, after removing the old packages, the new ones can be reinstalled using the following command:
yum reinstall `cut -f1 -d" " yum_check`
And that's it, the package database should be good again.
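As a possible shortcut, the yum-utils package also ships a --cleandupes mode of package-cleanup that is meant to remove the older duplicate entries in one go (note that this is different from the --dupes option mentioned above, which only lists them). I haven't verified it on this particular breakage, so treat it as a hint rather than the procedure:
package-cleanup --cleandupes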

One cautionary note! When you issue the 'yum remove' command, check very carefully what yum is going to remove. It also removes dependencies, and it might remove some critical components and make your system unusable! As a rule of thumb, in this particular case, no dependencies should be listed for removal!

Tuesday, December 24, 2013

Upgrade to Fedora 20

This is a log of my attempts to upgrade from Fedora 19 to Fedora 20 using fedup. The upgrade process wasn't really flawless, but that was for three reasons. The first one is that on my laptop I have too many packages installed, which creates problems by itself; on almost every update I have some problems with dependencies, so problems during the upgrade were to be expected, too. The second problem was that I didn't pay attention to some details. Now, it might be argued that less skilled people will pay even less attention, so this will be a problem for them too. Finally, there was a third problem as well, and that was bugs in the fedup package.

Ok, theoretically, to upgrade Fedora 19 to 20 you only have to do the following:
# fedup --network 20
and after that command finishes, reboot, and the upgrade process will start. Finally, after one more reboot, you are in the new environment/OS. Of course, the process is a long one because everything is downloaded from the Internet. There is an option of using an ISO image and downloading only updates/external repositories, but I didn't try it.

Problems

The first problems were related to available disk space. When fedup started downloading, I realized that there could be problems with space. On the root (/) partition I had only a few gigabytes available, so I moved two directories, /var/tmp/fedora-upgrade and /var/lib/fedora-upgrade, to the /home partition, where there was much more space, and created symlinks at the original locations so that fedup would find them.
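In case it isn't clear, what I did was something along the following lines (the target directory names under /home are just examples):
# mv /var/tmp/fedora-upgrade /home/fedora-upgrade-tmp
# ln -s /home/fedora-upgrade-tmp /var/tmp/fedora-upgrade
# mv /var/lib/fedora-upgrade /home/fedora-upgrade-lib
# ln -s /home/fedora-upgrade-lib /var/lib/fedora-upgrade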

After starting fedup again, it downloaded everything necessary (continuing from where it had stopped) and then tested the transaction. One problem was the available space on the root (/) partition. That problem I solved by uninstalling some extra large packages. The easiest way to find out which packages take the most space is the following command:
# rpm -q --qf '%10{SIZE} %{NAME}\n' -a | sort -n
It prints out all packages with their sizes, sorted from the smallest to the biggest one. So, I removed a few of the biggest ones. I also removed some packages from the /opt directory that I had installed manually.
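For example, to see only the twenty biggest packages, the same output can be piped through tail:
# rpm -q --qf '%10{SIZE} %{NAME}\n' -a | sort -n | tail -20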

Ok, after I restarted fedup it didn't download everything again, but after some checks of the downloaded packages it tested the upgrade transaction. The following problems were reported:
WARNING: potential problems with upgrade
  ffmpeg2theora-0.29-4.fc19.x86_64 (no replacement) requires ffmpeg-libs-1.2.4-2.fc19.x86_64 (replaced by ffmpeg-libs-2.1.1-1.fc20.x86_64)
  gcc-python3-plugin-0.12-15.fc19.x86_64 (replaced by gcc-python3-plugin-0.12-15.fc20.x86_64) requires gcc-python3-plugin-0.12-15.fc19.x86_64 (replaced by gcc-python3-plugin-0.12-15.fc20.x86_64)
  perl-HTML-Template-2.95-1.fc19.noarch (no replacement) requires 4:perl-5.16.3-266.fc19.x86_64 (replaced by 4:perl-5.18.1-288.fc20.x86_64)
  rubygem-audited-activerecord-3.0.0-3.fc19.noarch (no replacement) requires 1:rubygem-activerecord-3.2.13-1.fc19.noarch (replaced by 1:rubygem-activerecord-4.0.0-1.fc20.noarch)
  gnome-shell-extension-xrandr-indicator-3.8.4-1.fc19.noarch (no replacement) requires gnome-shell-extension-alternative-status-menu-3.8.4-1.fc19.noarch (replaced by gnome-shell-extension-common-3.10.1-1.fc20.noarch)
  gcc-python3-debug-plugin-0.12-15.fc19.x86_64 (replaced by gcc-python3-debug-plugin-0.12-15.fc20.x86_64) requires gcc-python3-debug-plugin-0.12-15.fc19.x86_64 (replaced by gcc-python3-debug-plugin-0.12-15.fc20.x86_64)
  1:NetworkManager-wimax-0.9.8.8-2.fc19.x86_64 (no replacement) requires 1:NetworkManager-0.9.8.8-2.fc19.x86_64 (replaced by 1:NetworkManager-0.9.9.0-20.git20131003.fc20.x86_64)
  gcc-python2-debug-plugin-0.12-15.fc19.x86_64 (replaced by gcc-python2-debug-plugin-0.12-15.fc20.x86_64) requires gcc-python2-debug-plugin-0.12-15.fc19.x86_64 (replaced by gcc-python2-debug-plugin-0.12-15.fc20.x86_64)
  gcc-python2-plugin-0.12-15.fc19.x86_64 (replaced by gcc-python2-plugin-0.12-15.fc20.x86_64) requires gcc-python2-plugin-0.12-15.fc19.x86_64 (replaced by gcc-python2-plugin-0.12-15.fc20.x86_64)
  gazpacho-0.7.2-13.fc19.noarch (no replacement) requires python-kiwi-gazpacho-1.9.26-6.fc19.noarch (replaced by python-kiwi-glade-1.9.38-2.fc20.x86_64)
  python-iguanaIR-1.0.3-2.fc19.x86_64 (no replacement) requires iguanaIR-1.0.3-2.fc19.x86_64 (replaced by iguanaIR-1.0.5-2.fc20.x86_64)
Finally, it asked for a reboot to start the upgrade. But I decided instead to remove the offending packages, which were those with the fc19 tag in the package name. Then I started fedup again. It again found some problems:
WARNING: problems were encountered during transaction test:
  broken dependencies
    gazpacho-0.7.2-13.fc19.noarch requires python-kiwi-gazpacho-1.9.26-6.fc19.noarch
    perl-HTML-Template-2.95-1.fc19.noarch requires perl-4:5.16.3-266.fc19.x86_64
    python-iguanaIR-1.0.3-2.fc19.x86_64 requires iguanaIR-1.0.3-2.fc19.x86_64
    kmod-VirtualBox-3.11.10-200.fc19.x86_64-4.3.4-1.fc19.x86_64 requires kernel-3.11.9-200.fc19.x86_64, kernel-3.11.7-200.fc19.x86_64, kernel-3.11.10-200.fc19.x86_64
    gcc-python3-plugin-0.12-15.fc20.x86_64 requires gcc-python3-plugin-0.12-15.fc20.x86_64
    gnome-shell-extension-xrandr-indicator-3.8.4-1.fc19.noarch requires gnome-shell-extension-common-3.8.4-1.fc19.noarch
    rubygem-audited-activerecord-3.0.0-3.fc19.noarch requires rubygem-activerecord-1:3.2.13-1.fc19.noarch
    ffmpeg2theora-0.29-4.fc19.x86_64 requires ffmpeg-libs-1.2.4-2.fc19.x86_64
    gcc-python2-plugin-0.12-15.fc20.x86_64 requires gcc-python2-plugin-0.12-15.fc20.x86_64
    kmod-VirtualBox-3.11.9-200.fc19.x86_64-4.3.4-1.fc19.x86_64 requires kernel-3.11.9-200.fc19.x86_64, kernel-3.11.7-200.fc19.x86_64, kernel-3.11.10-200.fc19.x86_64
    gcc-python3-debug-plugin-0.12-15.fc20.x86_64 requires gcc-python3-debug-plugin-0.12-15.fc20.x86_64
    kmod-VirtualBox-3.11.7-200.fc19.x86_64-4.3.2-1.fc19.3.x86_64 requires kernel-3.11.9-200.fc19.x86_64, kernel-3.11.7-200.fc19.x86_64, kernel-3.11.10-200.fc19.x86_64
    NetworkManager-wimax-1:0.9.8.8-2.fc19.x86_64 requires NetworkManager-1:0.9.8.8-2.fc19.x86_64
    gcc-python2-debug-plugin-0.12-15.fc20.x86_64 requires gcc-python2-debug-plugin-0.12-15.fc20.x86_64
Continue with the upgrade at your own risk.
I tried again to remove some of the offending packages reported in the previous output, and started fedup again:
# fedup --network 20
setting up repos...
getting boot images...
.treeinfo.signed                          | 2.0 kB  00:00:00
setting up update...
finding updates 100% [===========================================================]
verify local files 100% [===========================================================]
testing upgrade transaction
rpm transaction 100% [===========================================================]
rpm install 100% [===========================================================]
setting up system for upgrade
Finished. Reboot to start upgrade.
I didn't manage to get rid of all the transaction problems; more specifically, there were problems with the kmod-VirtualBox packages. But then I gave up trying to fix that and decided to continue with the upgrade. At that moment I made the next mistake: I rebooted before fedup finished. Namely, the last thing it printed was 'Finished. Reboot to start upgrade.' but it didn't give the prompt back. I mistakenly thought it was waiting for me to reboot, but it was actually still working! So, nothing happened after the reboot and I was again in Fedora 19. After starting fedup again and checking what was happening, I finally realized that fedup was still working (and consuming a lot of CPU), so I waited and finally got the prompt back. Then I restarted the computer and selected the Fedora upgrade boot option, but nothing happened. I rebooted, removed the rhgb and quiet options from the upgrade boot entry, and the last message I saw was:
[    5.733054] input: TPPS/2 IBM TrackPoint as /devices/platform/i8042/serio1/serio2/input/input7
Waiting finally dropped me into the dracut shell with a message that there was no root. It turned out that this message was obscuring the passphrase prompt! Namely, I was asked for the passphrase to unlock the disk, but I didn't see the prompt and everything stopped. After a few reboots the message was partially visible and I finally realized what the problem was.

Then something else happened: the upgrade process didn't start and I ended up in Fedora 19 again. It also took me some time to realize that fedup "forgot" to put the following options into the boot entry:
upgrade systemd.unit=system-upgrade.target
and it did so repeatedly! So, I manually added them and the upgrade finally started. Or at least it seemed so. It stopped suddenly, but I noticed that it complained about not finding the file package-list, and then I realized that the symlink was wrong because during the upgrade all the partitions are mounted under /sysroot. Namely, because I had moved the packages to another partition and symlinked them, the upgrade process couldn't find them. The symlink looked like this:
/var/lib/system-upgrade -> /home/system-upgrade
But during the upgrade process, /home is mounted under the /sysroot directory. So, I had to run fedup again and, after it had prepared the system for the upgrade, before rebooting, I changed the symlink so that it looks like this:
/var/lib/system-upgrade -> /sysroot/home/system-upgrade
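Concretely, something like this, before rebooting into the upgrade:
# rm /var/lib/system-upgrade
# ln -s /sysroot/home/system-upgrade /var/lib/system-upgrade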
Note that, after rebooting into the upgrade, if you want to try again you have to start fedup again. Namely, immediately before the upgrade process starts, the boot option is removed from grub. This is a safety measure, but it is annoying in case you are trying again.

Anyway, that symlink issue was the last problem, and the system was upgraded from Fedora 19 to 20. It took me almost half a day to get to that point, and I wouldn't call it the most pleasant experience. In other words, I think those things should be polished a bit better than they are.

One last thing: I had problems with fedup 0.7 and I had to upgrade to fedup 0.8. This was somewhere early in the whole process, but since I started taking notes only later, when I stumbled upon more problems, I didn't note exactly where it happened, so I'm mentioning it here.

Monday, December 16, 2013

GnuCash - SMS payments and similar forms of credit

Keeping track of parking expenses that you paid via SMS is another interesting detail. Namely, you could enter them directly under parking expenses (if you have introduced such an account; I have!), but the problem is that you haven't actually paid at that moment. You will pay only when the bill for telecommunication services arrives from your mobile operator. The second problem is that this bill, besides the telecommunication services themselves, will also include the parking charge and everything else you paid via SMS. And here I come back to my wish to know how much money goes to various items per year; in this particular case I want to know how much I spent on parking!

With that in mind, I decided to introduce an account for credit via SMS messages, and for that reason I renamed the Kreditne kartice (Credit cards) account to Krediti (Credits): right-click on the account whose name you want to change, select the Edit Account option in the menu that opens, then change the name and confirm the change by clicking OK. After that, I opened a new account, SMS plaćanja (SMS payments), so that part now looks as shown in the following picture:


Now, within the SMS plaćanja account I just enter what I paid and how much. For example, if I paid for parking in zone 3 (2kn + 0.69kn fee), I will enter it in the following way:


That's all as far as recording expenses is concerned. From now on you just enter all expenses made via SMS messages.

Keeping track of SMS credit billing

When the bill for telecommunication services arrives, the accounts have to be settled. First and foremost, I entered the bill that came from the mobile operator. Let's say the bill was a round 200kn. I enter that bill into the Telekomunikacijske usluge (Telecommunication services) account (I previously named it Mobilni telefon, but this seems more sensible to me :)):


Now, it is fine that this money was taken from the checking account (standing order), but I also have to reconcile the parking items paid via SMS messages. The procedure for that is as follows.

First, right-click on the SMS plaćanja account and, in the menu that appears, select the Reconcile... option. The following dialog then opens:


In that dialog, under Ending Balance, enter the total sum that was charged via SMS. In this particular case we have one SMS parking payment whose price was 2.69kn and it has come up for billing, so 2.69 should be entered. If we had another payment that had also come up for billing, we would add it as well and enter the new sum into Ending Balance.

Then a new window appears:


In that window, on the right-hand side under Funds Out, are the items that were purchased but not yet billed. We now select what has come up for billing. Everything that has come up for billing must have a mark in the R column. When the selection is complete, the Difference item at the bottom must have the value 0! At that moment, the green button in the toolbar, right below the Help menu item, becomes enabled. Click on that button and a new dialog appears right after:


Here we have a problem, since we are not allowed to select the telecommunication services account. Namely, the money went from the checking account to pay the telecommunication bill, and from there it went to pay for the SMS service. However, there is a simple workaround. Select anything among the offered options (Transfer From) and click OK; I temporarily selected the checking account. After clicking OK, I opened the SMS plaćanja account under Krediti and "manually changed" the Transfer column to Telekomunikacijske usluge. I also added the Description right away, which I had forgotten to do in the previous dialog. Now this account, SMS plaćanja, looks like this:


And that's it as far as keeping track of credit via SMS messages is concerned.

Friday, December 6, 2013

Modeling a simple system using multi agent simulation environments

Note: This isn't finished yet, but because I'm referencing this post in another post, I decided to publish it.

I'll probably participate in a project whose characteristics are such that I suggested the best way to proceed was to use a multi-agent type of simulation. The problem was that there are many different, and popular, multi-agent simulation environments and I had to choose one that would fit this project's use case best. More specifically, the candidate multi-agent simulation environments were MASON, NetLogo and Repast, among others that were constantly mentioned on the Internet, and I decided to evaluate them. Note that there are others, too. Lists of available software can be found here, here, and here. But, if you google a bit, you'll probably find many others.

In any case, the requirements I had in mind when starting evaluation process were:
  1. Free licence. Preferably BSD like license, but LGPL, or even GNU, is OK.
  2. GUI that will allow easy experimenting with model.
  3. Ability to model agents with very complex behavior.
  4. Ability to do distributed simulations is definitely a big plus.
  5. NOT exclusively Microsoft based, i.e. C# or something similar.
To be able to better evaluate those tools, I set myself the task of implementing something simple in the three different multi-agent environments (MASON, NetLogo, Repast) and trying to determine which one best suits my needs with respect to the requirements. Note that there are already existing comparisons, but I wanted to gain some first-hand experience of what it is like to use them. So, in order to do that, I modelled the following system in each one of them and recorded my experience in due course:
The system consists of N identical agents performing some task emulated by using sleep or similar statement/function. Task processing by an agent has an exponential distribution with average processing time of 30 minutes. New tasks arrive according to Poisson distribution with average of one task each 45 minutes. It is necessary to determine average time each task spends in a system and average time waiting in a queue for processing.
For a start I'll set N to 1. So, note that this is a simple M/M/N queue. I'm going to complicate it a bit in due course, but this is what I'm going to start with. The reason why I chose the M/M/1 queue is that I'm able to compare simulation results with calculations.
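As a quick sanity check of what the simulation should converge to for N = 1, here is my own back-of-the-envelope calculation using the standard M/M/1 formulas:
  arrival rate    λ = 1/45 jobs per minute
  service rate    μ = 1/30 jobs per minute
  utilization     ρ = λ/μ = 2/3
  time in system  W = 1/(μ - λ) = 90 minutes
  time in queue   Wq = W - 1/μ = 60 minutes
So the simulated averages should settle around 90 and 60 minutes, respectively.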

The posts describing use of specific environments are:

  1. Mason
  2. Repast
  3. NetLogo

While searching for the tutorials, examples and documentation about those simulation environments I wished to try, I found a lot of useful resources. Here are some:

  1. Open Agent Based Modeling Consortium
  2. Comparison of many more agent simulation environments using a single scenario
  3. Agent Based Modeling - a site with lot of resources



Friday, November 29, 2013

Modeling a simple system in Mason...

In this post I'm describing how to implement a simple agent model in the Mason multi-agent simulation environment. See the introductory post for additional details about this endeavour.

Installing Mason

Mason installation is easy. Just download the newest archive and unpack it somewhere on the disk. That's all that has to be done. In the following text I'm working within this unpacked installation, and anything done is done within that directory. It doesn't have to be that way, but it is easier for a start.

Running simulation

The next thing is how to run a Mason simulation, and it turns out to be easy. As an example I'll show you how to run the Tutorial2 example. This example simulates Conway's Game of Life and has a GUI that can be used to control the simulation. So, go to the directory where you unpacked the archive you downloaded in the previous step and then enter the sim/app/tutorial1and2 subdirectory. The Java file is already precompiled but we'll nevertheless compile it again, because it is easy and instructive. To compile Tutorial2, issue the following command:
CLASSPATH=../../../jar/mason.17.jar javac Tutorial2.java
Note that the Mason framework is in mason.17.jar and that you have to point the Java compiler at it using the CLASSPATH variable. The previous command shouldn't give you any messages. To run the compiled example, issue the following command:
CLASSPATH=../../../jar/mason.17.jar:. java sim.app.tutorial1and2.Tutorial2
All in all, compiling and running models built using Mason framework is relatively straightforward.

Evolving the target system

The idea I'll pursue in this section is to gradually build a simulation system. The simulation system will be represented by one class that will instantiate and control all the other classes. Those other classes I'll call agents. There will be an agent that represents a job, one for server(s) and one for a queue that will hold jobs until the server is free to take them.

The simplest possible simulation

We'll start with the simplest possible simulation in Mason, and that is the following one:
package hr.fer.zemris.queue;

import sim.engine.*;

public class QueueSystem extends SimState
{
    public QueueSystem(long seed)
    {
        super(seed);
    }

    public static void main(String[] args)
    {
        doLoop(QueueSystem.class, args);
        System.exit(0);
    }
}
To compile it, you have to place it in the hr/fer/zemris/queue directory (corresponding to the package statement at the beginning of the source). I'll assume that this directory is in Mason's top-level directory. The name of the Java file has to be QueueSystem.java. In order to compile it, issue the following command:
CLASSPATH=jar/mason.17.jar javac hr/fer/zemris/queue/QueueSystem.java
and run it in the following way:
$ CLASSPATH=jar/mason.17.jar:. java hr/fer/zemris/queue/QueueSystem
MASON Version 17.  For further options, try adding ' -help' at end.
Job: 0 Seed: -1713501367
Starting hr.fer.zemris.queue.QueueSystem
Exhausted
Don't forget the dot at the end of the CLASSPATH variable's value, or else you'll get an error about being unable to find a class.

This simulation is a very simple one and, as expected, it doesn't do anything useful. All it does is call the doLoop method of the SimState class, which will instantiate a QueueSystem object. In our case, we didn't specify anything for the simulation to do, so nothing happens.

In the following text this simulation will be extended so that it creates and coordinates other agents.

First agent

Ok, let's create an agent. Our initial agent will, again, be very simple. It will only print that it was instantiated, and nothing else. So, here it is:
package hr.fer.zemris.queue;

import sim.engine.*;

public class Server implements Steppable
{
    public Server()
    {
        System.out.println("Instantiated one Server");
    }

    public void step(final SimState state)
    {
        System.out.println("step() method called");
    }
}
Note that we have to define the step() method, because it is required by the Steppable interface. But, for the moment, it doesn't do anything.

Ok, to compile this agent, use the usual command:
CLASSPATH=jar/mason.17.jar javac hr/fer/zemris/queue/Server.java
Again, I assumed that you are positioned in Mason's root directory, the agent is placed within the hr/fer/zemris/queue directory, and the file is called Server.java.

Note that you cannot run agents directly, at least not in this form (i.e. without a main method). So, we'll instantiate and schedule the execution of our agent in the main class that represents the whole simulation. The change is simple; in the QueueSystem class add the following method:
public void start()
{
    super.start();

    Server server = new Server();
    schedule.scheduleOnce(server);
}
Now, recompile QueueSystem.java and run it:
$ CLASSPATH=jar/mason.17.jar:. java hr/fer/zemris/queue/QueueSystem
MASON Version 17. For further options, try adding ' -help' at end.
Job: 0 Seed: -1710667392
Starting hr.fer.zemris.queue.QueueSystem
Instantiated one Server
step() method called
Exhausted
Note the two new lines in the output. The first one is printed when the constructor of our simple agent is called. The second one is output when the agent's step() method is called. Note that the step method was called only once, and that is because we used the scheduleOnce() method, which schedules a single occurrence of an event. Try changing scheduleOnce() into scheduleRepeating() and see what changes.

There is also the question of when this event is called. We used a simple version of the schedule methods that schedules execution 1 time unit in the future, i.e. at getTime() + 1.0. Well, at least the documentation says so! Try to check it yourself. Hint: to get the current time in the agent's step() method, use the state.schedule.getTime() method.
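To check it yourself, something along these lines in the agent's step() method (just a quick illustration of the hint above) will show the simulation time at which the agent runs:
public void step(final SimState state)
{
    // print the current simulation time each time this agent is stepped
    System.out.println("step() called at time " + state.schedule.getTime());
}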

Creating jobs

Jobs are a bit different. They are not created at the start of the simulation, but are instead created dynamically according to the Poisson distribution. So, what I'm going to do is create a class named JobFactory that will create Jobs. Each job will be represented by the following class:
package hr.fer.zemris.queue;

import sim.engine.*;

public class Job
{
    public double createTime;
    public double processingTime;
    public double finishTime;
}
Note that Job isn't an agent! It doesn't have a step() method, nor is it subclassed from one of Mason's classes. What I decided is that the Job class will only have fields to keep statistical data, and that's it.

To create jobs, I wrote the JobFactory agent. Here it is:
package hr.fer.zemris.queue;

import sim.engine.*;
import sim.util.distribution.*;
import ec.util.MersenneTwisterFast;

public class JobFactory implements Steppable
{
    private Poisson poisson;
    private Exponential exponential;
    private QueueSystem queueSystem;

    public JobFactory(double lambda, double mu, QueueSystem qs)
    {
        MersenneTwisterFast randomGenerator = new MersenneTwisterFast();
        poisson = new Poisson(lambda, randomGenerator);
        exponential = new Exponential(mu, randomGenerator);
        queueSystem = qs;
    }

    public void step(final SimState state)
    {
        double currentTime = state.schedule.getTime();
        double nextEventTime = currentTime + poisson.nextDouble();

        Job job = new Job();
        job.createTime = currentTime;
        job.processingTime = exponential.nextDouble();
        queueSystem.pushNewJob(job);

        state.schedule.scheduleOnce(nextEventTime, this);
    }
}
So, how does this JobFactory agent work? First, we have a constructor. The constructor instantiates two classes, Poisson and Exponential, that will be used to generate random numbers from the respective distributions. The first two parameters of the constructor define the distributions' mean values. The third parameter is used for sending newly created jobs into the system queue.

Note that, apart from generating new jobs according to the Poisson distribution, we also have to specify how long a single job will be processed within the server. I think the natural place to determine this is when the job is created, since it is a characteristic of the job itself.

I thought about sending Job objects directly to the server agent. But the problem with that approach is that the server has to schedule itself in case there are no other jobs waiting, i.e. the job immediately enters the server. Namely, the server has to wake up when a job is finished and remove it from the system.

But, in order to be able to do scheduling, I had to have access to the SimState object, which is accessible only from the step() method. Now, I could save this object internally, but it would be a hack; I would have to somehow provoke step() to be executed immediately at the beginning. Oh, yes, I could also pass the SimState object via the constructor. But in the end, I gave up on pursuing this approach, as I haven't been able to find anyone else doing it (neither in the examples directory nor on the Internet).

The second part of the JobFactory class, and its workhorse, is the step() method. What this method does is create a new Job object, initialize its processing time (job.processingTime), and add it to the queue of jobs waiting for the server (via a call to the queueSystem.pushNewJob method). Finally, this method draws a new random number from the Poisson distribution, which defines when the next job will be created, and schedules itself at that point in time.

Ok, our simulation class, QueueSystem, has to have a method for accepting new jobs. This method is named pushNewJob, and the code is the following:
public void pushNewJob(Job job)
{
    jobQueue.add(job);

    if (jobQueue.size() == 1)
        schedule.scheduleOnce(schedule.getTime() + job.processingTime, server);
}
jobQueue is a linked list, i.e. a FIFO queue, that is used to hold jobs while they are being processed by the Server or waiting for it. The job at the front of the queue is the one currently being processed by the Server. Maybe I should have written the code a bit differently, i.e. so that the Server holds the job it processes in some internal attribute, but I did it this way and I didn't bother to rewrite it.

Apart from adding the new job to the queue, there is one additional thing I had to do. If there is no job in the queue, that means the server is idle and it is not scheduled for execution! So, the if statement checks this condition and, if the server is idle, schedules its execution for the moment the job will be finished. Otherwise, the server will execute at some point, take the next job and schedule itself. We'll come to that part a bit later.

Two more things haven't been specified with respect to QueueSystem, namely the jobQueue itself and the activation of JobFactory. The Server isn't activated until there is a job, and that is handled by the pushNewJob method.

So, in order to take care of that, here is the new start() method of the QueueSystem simulation class:
public void start()
{
    double alpha = 3;
    double beta = 5;

    super.start();

    jobQueue = new LinkedList<Job>();

    server = new Server(jobQueue);

    jobFactory = new JobFactory(alpha, beta, this);
    schedule.scheduleOnce(jobFactory.getFirstInvocationTimeStamp(), jobFactory);
}
So, what's going on in this method? First, there are the alpha and beta parameters for the M/M/1 queue. Next, I'm initializing the FIFO queue, jobQueue. It is defined as a QueueSystem class attribute as follows:
Queue<Job> jobQueue;
Then, the server agent is instantiated. Note that I'm passing the queue to the server; that is necessary since the server has to take jobs from the queue. I'm also instantiating the JobFactory agent. Finally, I'm scheduling the initial run of JobFactory.

There is a small problem. Namely, I have to schedule the first invocation according to the Poisson distribution. It is not correct to invoke it immediately, at least not in the form I wrote it. And this class, QueueSystem, doesn't have access to the Poisson distribution in order to get the first random number. It would also be an error to create another Poisson distribution. So, I added a method to the JobFactory class/agent that returns the first random number. It is the following method:
public double getFirstInvocationTimeStamp()
{
    // draw the first inter-arrival time from the Poisson distribution
    return poisson.nextDouble();
}
and you should place it in JobFactory agent/class.

Ok, the final piece of the puzzle: the Server agent. First, the constructor is now a bit different; namely, it has to take a reference to the queue:
public Server(Queue jq)
{
    jobQueue = (LinkedList)jq;
}
The step() method is also a bit more involved:
public void step(final SimState state)
{
    // remove the job that has just finished processing from the front of the queue
    Job job = jobQueue.remove();
    job.finishTime = state.schedule.getTime();

    // update the running averages incrementally and print them every skipSteps jobs
    jobs++;
    systemTimeAvg = systemTimeAvg + (job.finishTime - job.createTime - systemTimeAvg) / jobs;
    jobNumberAvg = jobNumberAvg + (jobQueue.size() - jobNumberAvg) / jobs;
    currentStep++;
    if (skipSteps == currentStep) {
        System.out.println(systemTimeAvg + " " + jobNumberAvg);
        currentStep = 0;
    }

    if (jobQueue.size() > 0) {
        // another job is waiting; schedule ourselves for the moment it finishes
        job = jobQueue.peek();
        state.schedule.scheduleOnce(state.schedule.getTime() + job.processingTime, this);
    }
}
What does this method do? First, it pops the job from the front of the queue, the job that was being processed within the server. Then, it updates and prints some statistics. Finally, it checks if there is another job in the queue and, if there is, it schedules an invocation of itself for when that particular job has to finish.

Basically, that's it.

Wednesday, November 27, 2013

GnuCash - recording debit card payments

Recording a debit card payment is almost identical to recording a cash payment; the only difference is that, when entering where the money is drawn from, you have to put the checking account, since a debit card withdraws money from your checking account.

So, suppose you paid for lunch in a restaurant with a debit card. To enter it, you first have to decide under which account it will be recorded. You can open a separate account for it or, as in my case, keep it within the Hrana (Food) account. So, when you enter the receipt into that account, it will look roughly like the next picture:


So, the only difference is that the checking account is listed as the source of funds.

Tuesday, November 26, 2013

Lecture about agile software development

On November 22nd and 25th, 2013, I gave a lecture about agile software development to a group of employees of Croatian Telecom. The lecture was part of their internal education, in which they learn about different IT and business technologies. The goal of the lecture was not to teach them the specifics of agile software development but to give them an overview of agile development, to contrast it with more traditional methods, more specifically the waterfall development model, and to compare them. With that in mind, I ended up with a presentation that has the following parts:
  1. Introduction and motivation
  2. Why is software development hard?
  3. Traditional methodologies
  4. Principles of Agile software development.
  5. Some specific agile software development methodologies.
  6. Problems encountered when introducing agile methods.
  7. Experiences gained during introduction of agile methods from some companies.
  8. History of agile methods.
  9. Conclusion.
The presentation is in Croatian and available here. In case there is demand, I'll translate it into English. I plan to enhance and improve the presentation (there is a lot of room to do so). If I upload a newer version I'll expand this post; note that the filename will also change (it includes the release date). Accompanying the presentation are the references I deemed the most interesting for those employees. They can be downloaded as a single ZIP file. Note that those references are in English!

I'm using some pictures from the Internet in the presentation and I hope that I didn't break any license or copyright agreements. The same holds for the papers in the ZIP file, which are given purely for educational purposes. In case I did something wrong, please notify me.

Sunday, November 24, 2013

GnuCash - recording cash payments

Suppose you bought a sandwich for breakfast at a bakery in the morning. You got a receipt and now you have decided to enter it into GnuCash. The receipt was for 14kn.

To enter this item into GnuCash, start the program and then choose where you will keep these expenses. I decided to track food expenses separately and opened a corresponding account for that:


I'm aware that this account isn't exactly the most precise one, but as I said, I'll solve problems as I go. To enter the receipt, double-click on the account, in this case Hrana (Food), and a tab opens where you enter what was bought. It is possible to keep a record of every purchased item, but I decided to keep a record only of receipts; I'm also thinking of photographing every receipt with a camera (I don't have a scanner!) and storing the pictures on the computer. :) We'll see how that goes. In the meantime, this is what the entered receipt looks like:


Notice that I also entered the receipt number. I do that so I can link each entry to its receipt and don't have to search by amount later. It would be even better if I could enter the JIR or ZKI somewhere, but let's not overdo it. ;)

Notice also that the Transfer column shows where the money for the payment came from. I paid in cash, from my wallet, so the option Assets:Novčanik - Kune was selected.

The state after this transaction is now the following:


As you can see, I must now have 186kn in my wallet. If it's not there, I have a problem. :)

That's it as far as entering receipts is concerned. Basically, every other receipt paid in cash from the wallet is entered in the same way.

Personal bookkeeping with GnuCash - Introduction and setting up the environment

For some time now I've been thinking that it would be good to keep track of where I spend my money. Every now and then I catch myself wondering where I managed to spend my salary and, of course, I can't remember. Another question that bothers me from time to time is: how much money goes to utilities per month, per year? I went through several phases here. The first phase was regularly downloading statements from the bank; in LibreCalc I calculated a simple sum of the spending each month, which I then compared with my salary. That was OK, but not precise enough. For example, I didn't know how much I had spent on utilities per month over the last year (or more). So I moved to the next phase. The second phase was keeping records in LibreCalc, where each month I entered how much I spent on what. That was better, but I still wasn't satisfied because my ability to manage and analyze the data was relatively limited. It wasn't limited because of the application itself (LibreCalc), but because of the way I kept the records. Finally, that prompted me to move to the third phase: using GnuCash. Actually, I had one excursion into that program much earlier, but I wasn't persistent and, since it is necessary to become familiar with some basic bookkeeping concepts, I quickly gave up. This time, though, I decided to try to use the application more seriously.

A relatively good introduction to GnuCash that I found on YouTube helped with that. Based on that video I decided on a simple-to-complex approach, i.e. to start using the program in the simplest possible way, just as the video shows, and over time to expand the amount of data as the need arises. Additionally, I decided to use the customization options so that the data reflects the specifics of Croatia, i.e. what I need, as well as possible.

The reason I chose GnuCash is that it is free, runs on all the important platforms (Windows, Linux, Mac OS X) and is often mentioned as a very good program. Besides personal bookkeeping, it can also be used for business accounting, and it has a Python API so it can be extended.

Anyway, after installing GnuCash and starting it for the first time, the New Account Hierarchy Setup dialog opens.


After clicking the Forward button, you are offered a choice of default currency. There isn't much doubt here; in Croatia the default currency is the kuna, i.e. HRK. Finally comes the choice of account structure. There I selected the 'A Simple Checkbook' option.


Finally, I clicked Forward two more times and then Apply. Then I had to choose the place on disk where the file will be stored. Beware that GnuCash automatically keeps old versions of the file and at the same time keeps a record of changes (logs). So, create a separate directory in which this data will be stored and save the file there.

After that, I deleted everything I didn't like, and translated and added certain things. Concretely, the changes I made were the following:
  • I translated Checking Account as Tekući račun and moved it directly under the Assets account. For private individuals in Croatia, the checking account is the fundamental thing.
  • I deleted the Current Assets account.
  • Furthermore, under Assets I added the accounts Žiro račun and Novčanik - Kune. If you have cash in a foreign currency, also add an account Novčanik - EUR (or whatever currency is in question). Here you would also add the various funds in which you keep money, if you have any. Basically, you add every place where you keep money. In the extreme case, if you keep money in a sock, you add that here, too.
  • I translated the word Expenses into Troškovi and then added the accounts Auto, Internet, Mobilni telefon, Hrana and Režije there. Under Auto I additionally added Gorivo, Servis, Registracija and Ostalo, and under Režije Struja, Voda and so on.
  • Finally, I turned the word Income into Prihodi and added the subaccount Plaća there. If over time I have some other income, I will add it later. The same goes for expenses.
The changes came down to editing an existing account or adding a new one. Adding a new account is done by right-clicking on the account under which you want to add the new one and then selecting the New Account... option. Changing an existing account is done similarly: right-click on the account you want to change and select the Edit Account... option.

After all that editing, what I had was the following:


I could have edited individual elements further, but for now I stopped here. Over time I will improve and change it.

The important thing is that with GnuCash you start keeping records of the state of your accounts and your expenses from one specific moment onward. That moment can be arbitrary, but the point is that somewhere you have to enter how much of everything you have at that initial moment. I chose to start the bookkeeping from the day I started using GnuCash. The Equity account serves for defining the initial state. Within it there is an Opening Balances account. There I added entries describing how much I have on the checking account, the foreign currency account, in my wallet and so on. Since I had some foreign currency, and Opening Balances is in kuna, I also added a Devize subaccount and entered the euros there. Maybe there is some way to enter different currencies into a single subaccount, but at the moment I don't know how to do it. Besides, GnuCash is quite flexible and it is possible to reorganize later, so there is no need to strive for perfection right from the start; the process can be improved over time.

As an example, I'll assume I started with 3000kn on the checking account, 50kn on the žiro account, and 100EUR on the foreign currency account. Additionally, I had 200kn and 10EUR in my wallet. So, first I added all the kuna I have. This is done by double-clicking on the Opening Balances account and then entering the corresponding items. After entering them, the state was the following:


The next thing to enter were the euros. Since I don't know how to enter another currency, specifically euros, into accounts defined in kuna (Opening Balances is in kuna), I opened a new category under Equity named Početno stanje - EUR. At the same time, I renamed the Opening Balances account to Početno stanje - Kune.


Into that account I then entered the initial values in euros:


So, that's it as far as setting up the initial state is concerned. What follows is collecting receipts and keeping records of what you spend and what you earn. I'll write about that in other posts:
  1. Recording an incoming salary.
  2. Recording expenses on a MasterCard card.
  3. Paying with cash in a store.
  4. Paying with a debit card in a store.
  5. Settling credit card charges.
Finally, a few notes:

  1. I would appreciate it if a bookkeeper corrected me regarding the terms I have used; also, any advice about using this tool better would be very welcome. Over time some practices will surely crystallize, but it's better to skip the wandering if possible. :)
  2. If there is a GnuCash expert out there, I would also like them to comment a bit on this post, and on later posts as well!
  3. When you start keeping track of expenses in GnuCash, or any similar tool, you have to do it thoroughly; otherwise, the state of your accounts (and wallet) will not match what is written in the program, and the whole exercise becomes pointless.

Tuesday, October 1, 2013

URL redirection in Google searches...

It annoys me a lot when I search for a PDF document using Google, because I need a link to the specific document and what I get is a URL that is rewritten so that I cannot right-click and select Copy link location... Note that I disabled PDF opening within the browser, but PDFs are not the only thing I'm having a problem with. Sometimes, when links are short enough, I can manually copy a link from beneath the linked title, but in general the links are too long and thus shortened using three dots.

It took me some time to find a solution, but in the end it turned out to be relatively simple. What I had to do was turn off Web History, and the links were not obfuscated any more. To turn off Web History, when search results are shown, click on the gear icon in the upper right corner and select Web History from the drop-down menu. Then click on the gear again and select Settings. Finally, click on the Turn off button.

Now, I would like to have a history of what I was doing (for my personal use, of course!), but when the price is this annoyance with links, I opted to turn it off.

UPDATE: It turns out that this doesn't help! Google still obfuscates links and I'm still searching for a solution...

I realized that, at first, it did help, but then the links became obfuscated again. So, I continued searching the Internet and found the following page that explains what's going on (there is also another page for the same script). On the script's page you can click on the Install button in the upper right corner, which will install a Greasemonkey script that disables link obfuscation. Obviously, you have to have Greasemonkey installed. In case you are using Chrome, there is a link that takes you to the extension's home page on the Chrome web store.

So, I think I finally solved this annoyance. And yes, I turned on Web history tracking on Google again.

Tuesday, September 17, 2013

DHCPNAK messages in log file

When I was checking log files I spotted the following log entries that were strange:
Sep  7 11:32:20 srv dhcpd: DHCPREQUEST for 1.1.1.151 from 00:40:5a:18:83:56 via eth0
Sep  7 11:32:20 srv dhcpd: DHCPACK on 1.1.1.151 to 0:4:5:1:8:5 via eth0
Sep  7 11:32:20 srv dhcpd: DHCPREQUEST for 1.1.1.151 from 0:4:5:1:8:5 via 1.1.1.10
Sep  7 11:32:20 srv dhcpd: DHCPACK on 1.1.1.151 to 0:4:5:1:8:5 via 1.1.1.10
Sep  7 11:32:20 srv dhcpd: DHCPREQUEST for 1.1.1.151 from 0:4:5:1:8:5 via 1.1.0.10: wrong network.
Sep  7 11:32:20 srv dhcpd: DHCPNAK on 1.1.1.151 to 0:4:5:1:8:5 via 1.1.0.10
The problem is that the DHCP request is received three times; two of them get a positive answer (DHCPACK) while one gets a negative response (DHCPNAK), and dhcpd logs the error message 'wrong network'.

The important thing in this specific scenario is the network configuration, which looks roughly as follows:
  +----+            +-----+              +----+
  |    |------------|     |--------------|    |
  +----+            +-----+              +----+
  Client      Firewall/DHCP relay      DHCP server
1.1.1.151    1.1.1.10     1.1.0.10       1.1.0.4
Not much can be inferred from the log entries alone. The only thing that can be seen is that the third DHCPREQUEST came from 1.1.0.10, which isn't on the same network as the client requesting the IP address. Sniffing the network gave a bit more information on what's happening. Analyzing the network trace led to the following conclusions:

  1. There are three DHCPREQUEST messages with the same transaction ID and the same destination (1.1.0.4, i.e. the DHCP server), and in all of them the client IP address field within the DHCP request is set to 1.1.1.151.
  2. The first DHCPREQUEST comes directly from the client. It has source IP 1.1.1.151, and there is no relay field (i.e. the value is 0.0.0.0). Also, the client MAC address field within the DHCP request holds the MAC address of the given client.
  3. The second DHCP request comes from the DHCP relay on the firewall. It has the source set to 1.1.0.10, and the relay field is properly set to 1.1.1.10, i.e. the IP address from the client's network.
  4. The third DHCP request also comes from the DHCP relay on the firewall, but this time the relay field is set to 1.1.0.10. This contradicts the client's IP address, and the DHCP server rejects the request.
So, the conclusion is that the client sends the request directly to 1.1.0.4. This request is forwarded by the firewall to the server, but it is also intercepted by the DHCP relay on the firewall, which creates two proxy requests and sends them to the DHCP server too, one of which is rejected.

The interesting thing, not visible in the logs, is that the DHCP relay, upon receiving the NAK from the DHCP server, generates a new NAK that is broadcast on the network where the DHCP server lives.

So, the conclusion is that the firewall is wrongly configured: it should not forward DHCP requests if there is a relay agent running. Furthermore, those NAKs aren't seen by the client, only by the DHCP relay, which reflects them back to the DHCP server.
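The post doesn't name the firewall or the relay implementation, so the following is only a hypothetical sketch of a fix, assuming a Linux firewall running ISC dhcrelay and assuming eth0 is its client-facing interface:
# Hypothetical: bind the relay only to the client-facing interface, so it stops
# producing the extra proxy request with the server-side relay address (1.1.0.10):
dhcrelay -i eth0 1.1.0.4

# Hypothetical: stop the firewall from routing the client's DHCP packets to the
# server itself, so only the relayed request (giaddr 1.1.1.10) reaches dhcpd:
iptables -I FORWARD -p udp --dport 67 -j DROP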

Thursday, September 12, 2013

Adding Zimbra disclaimer using shell scripts...

While Zimbra 8 (and 7, too) has domain-wide disclaimer support built in, there are two shortcomings that forced me to fall back to the old way of doing it:
  1. There is no support for skipping the disclaimer if one already exists, and
  2. There is no support for excluding some addresses from getting the disclaimer.
The second problem I managed to solve by patching the Amavis script. That approach adds extra maintenance effort (primarily during upgrades), but it works. Solving the first problem the same way was more work than I was prepared to invest, so I had to abandon the domain-wide disclaimer provided by Zimbra. There was also a third problem. Namely, for all mail messages sent from Outlook, Zimbra added two extra characters at the end of the HTML disclaimer, namely the characters "= A". Why that is, I don't have the slightest clue. I suspect it has something to do with encoding and decoding messages on the way through the mail system, but the exact reason is unknown to me.

So, I set out to solve all those problems. First I tried the old way, namely modifying the postfix subsystem. It turned out that it didn't work; just for reference, I describe what I did at the end of this post. The next option was modifying amavis, but that turned out to be too complicated and error prone, as I said in the introductory paragraph. Finally, I decided to put a proxy script in front of altermime that is called by amavis and checks whether there is already a disclaimer; if there isn't, it calls altermime. Note that this way there was no need to change amavis, which means a lot from the maintenance perspective. So, here is what I did.

First, I created the following simple script in /opt/zimbra/altermime directory:
#!/bin/bash
# Log a timestamp and the arguments we were called with, then hand over to the real binary.
echo "`date +%Y%m%d%H%M%S` $@" >> /tmp/altermime-args
exec /opt/zimbra/altermime-0.3.10/bin/altermime-bin "$@"
All it does is log how it was called and then call the real altermime. Note one more important thing here: in order to be able to put this script in front of altermime, I had to name it altermime, and I renamed the altermime binary to altermime-bin. If you are doing this on a live system, be very careful how you make this switch. I suggest that you first create the script as altermime.sh, check that it works, and then use the following command to make the switch:
mv altermime altermime-bin && mv altermime.sh altermime
OK, this way I was able to find out how altermime is actually called. This is what I saw in the /tmp/altermime-args file:
20130912100915 --input=/opt/zimbra/data/amavisd/tmp/amavis-20130912T100229-30384-pc8afS_K/email-repl.txt --verbose --disclaimer=/opt/zimbra/data/altermime/global-default.txt --disclaimer-html=/opt/zimbra/data/altermime/global-default.html
That's just one line of the output. As can be seen, the first argument specifies the file with the mail message, and the remaining ones specify the disclaimers to be added. So, in order not to add a disclaimer if there is already one, I modified the altermime.sh script to have the following content:
#!/bin/bash
# Strip the --input= prefix from the first argument to get the mail file, and only
# call the real altermime if the message doesn't already contain the disclaimer.
grep -q "DISCLAIMER:" "${1#--input=}" 2>/dev/null
if [ ! "$?" = 0 ]; then
    exec /opt/zimbra/altermime-0.3.10/bin/altermime-bin "$@"
fi
Again, be careful if you are modifying this script on a live system.

Now, in order to control where the disclaimer is added, you can modify this simple shell script (see the sketch below). One more thing you should be aware of: this approach impacts performance because, instead of running one process, it now runs at least three per mail message, and there are a few extra file accesses.
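For example, here is a hypothetical extension of altermime.sh (not something I actually deployed) that also skips the disclaimer for senders listed in an exclusion file; the exclusion file path and the From: header matching are my assumptions:
#!/bin/bash
# Hypothetical variant of altermime.sh: skip the disclaimer if one is already
# present, or if the sender appears in an exclusion list (one address per line).
MSG="${1#--input=}"
EXCLUDE="/opt/zimbra/altermime/exclude-senders.txt"   # assumed location

# A disclaimer is already there, so do nothing.
grep -q "DISCLAIMER:" "$MSG" && exit 0

# The sender is on the exclusion list, so do nothing.
SENDER=$(grep -i -m1 '^From:' "$MSG" | grep -o -E '[[:alnum:]._%+-]+@[[:alnum:].-]+')
if [ -n "$SENDER" ] && grep -qix "$SENDER" "$EXCLUDE" 2>/dev/null; then
    exit 0
fi

# Otherwise, add the disclaimer as before.
exec /opt/zimbra/altermime-0.3.10/bin/altermime-bin "$@"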

Finally, as a side note, I managed to get rid of those strange characters added to Outlook's email messages. I just edited the HTML file that contains the disclaimer a little bit, and those characters were gone. That's definitely a bug somewhere, but who knows where...

The old way that didn't work

As I said, the first approach I tried was to use the procedure from the Wiki, but it didn't work. Anyway, for reference, here is what I tried to do. Note that, since Zimbra already ships with altermime, there is no need to install it; it is in the /opt/zimbra/altermime/bin directory and you can safely use it. OK, now to the changes:

First, change a line in master.cf.in that reads
smtp    inet  n       -       n       -       -       smtpd
into
smtp    inet  n       -       n       -       -       smtpd        -o content_filter=dfilt:
and also add the following two lines:
dfilt   unix  -       n       n       -       -       pipe
        flags=Rq user=filter argv=/opt/zimbra/postfix/conf/disclaimer.sh -f ${sender} -- ${recipient}
Note that the last line specifies that your script is called disclaimer.sh and that it is placed in the /opt/zimbra/postfix/conf directory. This script, when run, runs with the privileges of the user filter. Also, be careful where you put those lines; namely, put them after the following three lines:
%%uncomment SERVICE:opendkim%%  -o content_filter=scan:[%%zimbraLocalBindAddress%%]:10030
%%uncomment LOCAL:postjournal_enabled%% -o smtpd_proxy_filter=[%%zimbraLocalBindAddress%%]:10027
%%uncomment LOCAL:postjournal_enabled%% -o smtpd_proxy_options=speed_adjust
The reason is that those lines logically belong to the first smtp line, and if you add dfilt in front of them, you'll mess things up, probably very badly, depending on your luck!
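Putting the pieces together, the relevant part of master.cf.in should end up looking roughly like this (my reconstruction of the ordering described above, with the %%uncomment%% template lines kept as they were):
smtp    inet  n       -       n       -       -       smtpd        -o content_filter=dfilt:
%%uncomment SERVICE:opendkim%%  -o content_filter=scan:[%%zimbraLocalBindAddress%%]:10030
%%uncomment LOCAL:postjournal_enabled%% -o smtpd_proxy_filter=[%%zimbraLocalBindAddress%%]:10027
%%uncomment LOCAL:postjournal_enabled%% -o smtpd_proxy_options=speed_adjust
dfilt   unix  -       n       n       -       -       pipe
        flags=Rq user=filter argv=/opt/zimbra/postfix/conf/disclaimer.sh -f ${sender} -- ${recipient}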

If you had Zimbra's domain wide disclaimer enabled, then disable it using:
zmprov mcf zimbraDomainMandatoryMailSignatureEnabled FALSE
as a zimbra user, and then restart amavis:
zmamavisdctl restart
still as a zimbra user.

Finally, to activate the custom script that adds the disclaimer, run the following command as the zimbra user:
zmmtactl restart
After I did all that, it didn't work. :D But then I realized that there are two content_filter options on smtp, which probably doesn't work, and so I resorted to proxying altermime instead.
