Monday, October 31, 2011

Examples of ovaldi checks: sysctl variables

After I described how to compile ovaldi on CentOS and a simple test to verify it is working, in this post I'm going to describe how to use ovaldi to check the values of sysctl variables. More precisely, I'm going to check that IPv4 forwarding is turned off. The general idea behind this example is to give you a starting point on which you can build more complicated checks. Note that there is an even more general idea behind it: you can create your own security benchmarks that check whether certain security criteria are met, and if they are not, you can be alerted by an automatic monitoring process built on top of ovaldi.

In the text that follows I'm referencing the following file. That is a simple and complete file that will check the value of the net.ipv4.ip_forward sysctl variable. After you've downloaded this file, and assuming that you have your environment properly configured (see the posts I referenced at the beginning), you can run ovaldi to do the check:
ovaldi -m -o sysctl-test.xml -a /opt/oval/share/ovaldi/xml
ovaldi will create the usual output files after running this command: ovaldi.log, results.xml, results.html and system-characteristics.xml. You can open each of them in a browser, and you should check their content. system-characteristics.xml is particularly interesting because there you can find what information was collected about the system. Those are the values that OVAL definitions XML files can check. Note that ovaldi collects only the referenced data, not everything it could possibly collect.
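If you want to cross-check what ovaldi collected, the same value can be read directly from the kernel. A quick manual check (a sketch; the exact output format may differ slightly between distributions) looks like this:
# Query the running kernel; 0 means IPv4 forwarding is disabled,
# which is exactly the state the OVAL definition expects
sysctl net.ipv4.ip_forward
# net.ipv4.ip_forward = 0
# The same value can also be read through /proc:
cat /proc/sys/net/ipv4/ip_forward
# 0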

Let us now dissect the OVAL definitions file, sysctl-test.xml, a bit. The basic structure of this file is:
<oval_definitions ...>
    <definitions>...</definitions>
    <tests>...</tests>
    <objects>...</objects>
    <states>...</states>
</oval_definitions>
Basically, what an OVAL definition does is define a series of tests, each one describing what the expected state of a certain object is. Tests themselves can be combined in many different ways using AND and OR operators and nesting.

In our simple example the object whose state we are interested in is the ip_forward variable. So, if you look into the XML file, inside the <tests> element, you'll find the test that ties the object and its expected state together:
<sysctl_test id="oval:hr.sistemnet.oval:tst:1"
   version="1"
   comment="forwarding is disabled"
   check="at least one"
   xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#unix">
  <object object_ref="oval:hr.sistemnet.oval:obj:1" />
  <state state_ref="oval:hr.sistemnet.oval:ste:1" />
</sysctl_test>
The xmlns attribute is important. I had some problems with an undefined element until I got that one right. In other words, all the objects, their states and the tests are defined in XML schema documents in the /opt/oval/share/ovaldi/xml directory. But when using them, be certain to correctly specify the namespace in which they are defined, or otherwise ovaldi will complain that you are using an unknown test, object and/or state.

This particular test references the object that has to be checked and the state in which this object has to be. The object itself is defined in the <objects> element as follows:
<sysctl_object id="oval:hr.sistemnet.oval:obj:1" version="1"
         xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#unix">
    <name>net.ipv4.ip_forward</name>
</sysctl_object>
As you can see, within the name element you specify which sysctl variable you wish to check. The second part of the test is the state in which the object has to be. We want our object to have the value 0, meaning forwarding is disabled. That check is performed using the following within the <states> element:
<sysctl_state id="oval:hr.sistemnet.oval:ste:1"  version="1"
      xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#unix">
    <value>0</value>
</sysctl_state>
Obviously, you just place the desired value within the value element.

So, we saw that a test consists of an object that has to be in a particular state. The test itself is referenced in a definition element (which is placed within the <definitions> element). For that purpose you use the criterion element:
<criteria operator="AND">
    <criterion test_ref="oval:hr.sistemnet.oval:tst:1" comment="forwarding is disabled" />
</criteria>
As you can probably guess, multiple criterion elements can be specified, and in that case they will be bound with the AND operator. You can nest criteria and criterion elements to get very complex tests.

Testing ovaldi on CentOS 6...

In the previous post I described how to compile the ovaldi tool for CentOS. In the meantime I tested that installation and found a few more bugs in the rpm handling code:
  • The query format was wrong, i.e. the tag used was %{SIGGPG:pgpsig} but it should actually be %{SIGPGP:pgpsig}. I tested this on CentOS 6 and Fedora 15 and on both the second form is the right one; the first form returns (none).
  • After obtaining the signature key from rpm, the code wrongly calculated the starting offset of the key, so you ended up with a space at the beginning and the last digit cut off. (NOTE: This has been fixed in Ovaldi 5.10.1.1 so I removed that part from my patch!)
I corrected both of those, and the changes are included in the provided patch. If you downloaded the patch (or binaries) before this post was published, download them again.

Since I had problems with rpm, I extracted the problematic part of the code into a separate program and used it to test its functionality. You can obtain the test program here. If you compile it and run it, you'll note that it functions exactly like the following rpm query command:
rpm -q --qf '%{SIGPGP:pgpsig}' <packagename>
To compile it, use the following command:
gcc -o rpmq rpmq.c -lrpm
On Fedora you'll also need the -lpopt option added at the end.

OVAL definitions for RedHat's security advisories can be found at the following address. I downloaded rhsa.tar.bz2, which includes all the advisories, unpacked it and then modified the OVAL description com.redhat.rhsa-20111409.xml. This particular description checks for a vulnerable openssl. To see if the check would detect the vulnerability, I downgraded openssl to the original version shipped with CentOS, i.e. openssl-1.0.0-4.el6.x86_64. Furthermore, I also had to heavily modify the aforementioned OVAL description because CentOS doesn't have the same packages as RedHat, nor does it use the same signing key. The version I ended up with can be obtained here (I hope RedHat won't be mad at me for this! :))

Running that description within ovaldi on a vulnerable system produces the following output:
$ ovaldi -m -o org.centos.cesa-20111409.xml

----------------------------------------------------
OVAL Definition Interpreter
Version: 5.10 Build: 1
Build date: Oct 30 2011 21:40:11
Copyright (c) 2002-2011 - The MITRE Corporation
----------------------------------------------------

Start Time: Mon Oct 31 00:14:16 2011

 ** parsing org.centos.cesa-20111409.xml file.
    - validating xml schema.
 ** checking schema version
     - Schema version - 5.3
 ** skipping Schematron validation
 ** creating a new OVAL System Characteristics file.
 ** gathering data for the OVAL definitions.
      Collecting object:  FINISHED                         
 ** saving data model to system-characteristics.xml.
 ** running the OVAL Definition analysis.
      Analyzing definition:  FINISHED                      
 ** applying directives to OVAL results.
 ** OVAL definition results.

    OVAL Id                                 Result
    -------------------------------------------------------
    oval:org.centos.cesa:def:20111409        true          
    -------------------------------------------------------


 ** finished evaluating OVAL definitions.

 ** saving OVAL results to results.xml.
 ** running OVAL Results xsl: /opt/oval/share/ovaldi/xml/results_to_html.xsl.

----------------------------------------------------

Basically, it detects that the vulnerability is present (clearly indicated by the result field, which I made bold to be more visible!). After updating CentOS and running the test again, the result is negative, as expected, i.e.
$ ovaldi -m -o org.centos.cesa-20111409.xml

----------------------------------------------------
OVAL Definition Interpreter
Version: 5.10 Build: 1
Build date: Oct 30 2011 21:40:11
Copyright (c) 2002-2011 - The MITRE Corporation
----------------------------------------------------

Start Time: Mon Oct 31 00:16:55 2011

 ** parsing org.centos.cesa-20111409.xml file.
    - validating xml schema.
 ** checking schema version
     - Schema version - 5.3
 ** skipping Schematron validation
 ** creating a new OVAL System Characteristics file.
 ** gathering data for the OVAL definitions.
      Collecting object:  FINISHED                         
 ** saving data model to system-characteristics.xml.
 ** running the OVAL Definition analysis.
      Analyzing definition:  FINISHED                      
 ** applying directives to OVAL results.
 ** OVAL definition results.

    OVAL Id                                 Result
    -------------------------------------------------------
    oval:org.centos.cesa:def:20111409        false         
    -------------------------------------------------------


 ** finished evaluating OVAL definitions.

 ** saving OVAL results to results.xml.
 ** running OVAL Results xsl: /opt/oval/share/ovaldi/xml/results_to_html.xsl.

----------------------------------------------------
This time, too, ovaldi produced the following files: ovaldi.log, results.xml, results.html and system-characteristics.xml.

With this I'm now pretty sure that ovaldi works on CentOS. Still, more extensive testing is absolutely necessary, but for the time being this is, I think, a great step forward.

So, here are some conclusions from this exercise:
  • CentOS doesn't have assigned CPE values past version 5, so some procedure has to be initiated in that respect.
  • RedHat's OVAL descriptions can not be used directly, for two reasons. First, the legality is questionable, and second, the changes needed are not straightforward.
  • Editing OVAL XML description files is very hard and error prone. Furthermore, ovaldi itself is not very helpful. For example, if you don't get IDs and references right, it will complain, but the diagnostic information is basically useless.
Just as a note, when I had a problem where some test, object or something else was referenced but not defined, I used the following quick hack to find the offending ID:
for i in `grep _ref org.centos.cesa-20111409.xml | cut -f2 -d\"`; do grep -q id=\"$i org.centos.cesa-20111409.xml || echo $i; done
which printed the offending ID.

That concludes this post. In some future post I'll describe the structure of an OVAL description in more detail; in the meantime you can find some older information on my homepage.

Sunday, October 30, 2011

Compiling OVALDI for CentOS 6

Note: Take a look at the newer version of this post. Things are simpler now.

I described in an earlier post the purpose of OVAL and the benefits it gives to a user. Here I'm going to describe how to set up the OVAL interpreter on CentOS 6. The problem is that there is no prepackaged OVAL interpreter for CentOS 6. Actually, there is, but it is only for the 32-bit versions of CentOS 4 and 5, and it is an older version, not the latest one. So, here I'm going to describe how to build it from source. The build process consists of building the XML parser Xerces, then the XSLT processor Xalan, and finally building the interpreter itself. There are certain prerequisites you need to have in order for OVAL to build; I'll mention those too.

I'll assume that you created a working directory for this purpose and that you run all the following commands within that directory. When necessary, I'll reference that directory as $WORKDIR; when you see that string, replace it with the full path of your working directory. Also, I'm going to install the OVAL interpreter into the directory /opt/oval. The reason I'm not placing it into one of the "system" directories like /usr/bin, /usr/lib and similar is to avoid a clash with the versions of Xalan and Xerces shipped with the distribution itself.

In case you trust me enough, here is an archive of the final content of the directory /opt/oval, so you can unpack it and skip to the Running ovaldi section.

Installing prerequisites

Xerces
Download version 2.8.0, or whatever is the latest version of Xerces 2. Don't use Xerces 3 because the API was changed with respect to version 2 and OVAL won't build with it! In the following text I'll reference version 2.8.0; if there is a newer one, replace the version numbers as necessary.

After downloading a package it is good practice to check its MD5 sum (or SHA1). In this case md5sum gives the following output:
$ md5sum xerces-c-src_2_8_0.tar.gz
5daf514b73f3e0de9e3fce704387c0d2  xerces-c-src_2_8_0.tar.gz
which matches the one given on the download page.

Now, unpack the archive using the following command:
$ tar xzf xerces-c-src_2_8_0.tar.gz
and you'll get the directory xerces-c-src_2_8_0/. Go into that directory and then into the src/xercesc subdirectory. Before configuring the distribution, set the environment variable XERCESCROOT to the top level directory of the unpacked archive, i.e.
export XERCESCROOT=$WORKDIR/xerces-c-src_2_8_0
Now, start the configuration process:
./runConfigure -p linux -c gcc -x c++ -b 64 -P /opt/oval
In that command, option -p specifies the platform on which you are performing the build, option -c specifies the C compiler to use, -x specifies the C++ compiler, option -b determines the bit width of the platform (32 or 64 bit) and option -P specifies the installation directory. All the other options have appropriate default values. Note that you must specify c++ instead of g++! If you specify g++, then while building Xalan you'll get the following errors:
$XERCESCROOT/lib/libxerces-c.so: undefined reference to `stricmp(char const*, char const*)'
$XERCESCROOT/lib/libxerces-c.so: undefined reference to `strnicmp(char const*, char const*, unsigned int)'
The problem is that with g++ the configuration process fails to recognize that GNU's compiler (which doesn't have the stricmp and strnicmp functions) is being used, so it doesn't include the replacement functions!

If everything went without an error, start the build process by issuing the make command:
make
and finally, install Xerces (you should switch to the root user to run the following command):
make install

Xalan
Go to the download page and get the most recent version of Xalan. I was using 1.10, which was the latest one at the time this post was written. So, after downloading it, and checking the signature(!), unpack it with the following command:
tar xzf Xalan-C_1_10_0-src.tar.gz
This will create a new directory, xml-xalan/. Before building Xalan, you should apply a patch to it. The problem is that the gcc developers made some changes to header files (removed unnecessary includes) in the recent version available on CentOS, which means that some prerequisite includes now have to be explicitly specified. The problem manifests itself with the following error messages:
/home/zavod/sgros/work/xml-xalan/c/src/xalanc/XalanDOM/XalanDOMString.cpp: In member function ‘xalanc_1_10::XalanDOMString& xalanc_1_10::XalanDOMString::assign(const xalanc_1_10::XalanDOMString&, xalanc_1_10::XalanDOMString::size_type, xalanc_1_10::XalanDOMString::size_type)’:
/home/zavod/sgros/work/xml-xalan/c/src/xalanc/XalanDOM/XalanDOMString.cpp:251: error: ‘memmove’ was not declared in this scope
/home/zavod/sgros/work/xml-xalan/c/src/xalanc/XalanDOM/XalanDOMString.cpp: In static member function ‘static xalanc_1_10::XalanDOMString::size_type xalanc_1_10::XalanDOMString::length(const char*)’:
/home/zavod/sgros/work/xml-xalan/c/src/xalanc/XalanDOM/XalanDOMString.cpp:780: error: ‘strlen’ was not declared in this scope
So, download the patch and enter the xml-xalan directory. Then, run the following command:
$ patch -p1 < ../xml-xalan.gcc-4.4.patch
patching file c/src/xalanc/TestXPath/TestXPath.cpp
patching file c/src/xalanc/XalanDOM/XalanDOMString.cpp
patching file c/src/xalanc/XalanExe/XalanExe.cpp
patching file c/src/xalanc/XMLSupport/FormatterToHTML.cpp
patching file c/src/xalanc/XSLT/ElemNumber.cpp
This assumes that you've downloaded the patch into the same place where you downloaded Xalan itself (i.e. $WORKDIR).

Now, enter the subdirectory named c/. Before configuring the build process, define the variable XALANCROOT. You should set it to $WORKDIR/xml-xalan/c with the following command:
export XALANCROOT=$WORKDIR/xml-xalan/c
Also, note that Xalan depends on Xerces, and for Xalan to be able to find Xerces you need to have the environment variable XERCESCROOT set, or Xerces has to be in some system directory that is searched by default (e.g. /usr/include and similar directories). If you followed this post without interruption, you probably have it defined already. Now, initiate the configure process using the runConfigure command:
./runConfigure -p linux -c gcc -x c++ -b 64 -P /opt/oval
The options used are the same as for Xerces. Initiate the build process using make, and after the build finishes, install it using the 'make install' command, switching to the root user first.

Necessary development packages
As a final prerequisite, check that the following development packages are installed: pcre-devel, libgcrypt-devel, rpm-devel, openldap-devel, libblkid-devel, and libselinux-devel. The simplest way to do that is to initiate the install process; yum will react appropriately if they are already installed.
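For example, they can all be pulled in with a single yum command run as root (the package names are exactly the ones listed above):
yum install pcre-devel libgcrypt-devel rpm-devel openldap-devel libblkid-devel libselinux-devel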

Building and installing Ovaldi
Now go to the download page of Ovaldi and download the latest version. Version 5.10.1.1 is the latest one at the time of writing this post. So download it and unpack it. This will create the directory ovaldi-5.10.1.1-src. Also, download the following patch. Note that this patch is made so that ovaldi can be compiled on CentOS 6; it is not applicable to other distributions, nor will it allow ovaldi to be compiled on other platforms (though, however unlikely, it might :)).

Now, enter the ovaldi-5.10.1.1-src directory and apply the patch:
patch -p1 < ../ovaldi-5.10.1.1-centos6.patch
There are three changes in the patch file. The first one is the addition of the /opt/oval/include and /opt/oval/lib directories to the main Makefile. The second is a set of changes to the RPM part of the code, since the API has changed in recent versions of RPM. More specifically, I introduced a compatibility switch (-D_RPM_4_4_COMPAT) and also replaced the int_32 type with int32_t.

The third change resolves the following error message, already reported on some forums:
Error running rpm query in child process: blah: -q: unknown option
There is also an additional patch that isn't always necessary, which is why I separated it. Namely, I placed ovaldi in the /opt/oval directory, while ovaldi by default expects its shared files to be within /usr/share/oval. So, this patch changes that:
patch -p1 < ../ovaldi-5.10.1.1-sharepath.patch
Since for some unknown reason (I didn't have the will/time to investigate further) the linker cannot find the libxalanMsg.so.110 library, even though the appropriate path is given with the -L option, define LD_LIBRARY_PATH using the following command prior to compilation:
export LD_LIBRARY_PATH=/opt/oval/lib/
Finally, enter the project/linux subdirectory and initiate the build process:
make
When the build process is over, copy the ovaldi binary (you'll find it in the project/linux/Release subdirectory) to the /opt/oval/bin directory. Also, create the directory /opt/oval/share/ovaldi and move the xml directory there (you'll find it directly beneath the ovaldi-5.10.1.1-src directory).
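In other words, something along these lines (just a sketch of the steps described above, run as root from within the ovaldi-5.10.1.1-src directory):
# create the target directories under /opt/oval
mkdir -p /opt/oval/bin /opt/oval/share/ovaldi
# copy the freshly built binary
cp project/linux/Release/ovaldi /opt/oval/bin/
# move (or copy) the xml directory with the schemas and style sheets
mv xml /opt/oval/share/ovaldi/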

Running ovaldi
Finally, we are ready to run the ovaldi interpreter. Before running ovaldi you should define the LD_LIBRARY_PATH and optionally the PATH variable. In other words, before running ovaldi execute these commands once:
export LD_LIBRARY_PATH=/opt/oval/lib
export PATH=/opt/oval/bin:$PATH
Then try to run ovaldi; you should get a help message.

This concludes this post. In the next one I'm going to try to run ovaldi using RedHat's provided files. Until I do that, note that the patches I provided may turn out to have errors that prevent ovaldi from functioning correctly.

The rise and fall of the great companies...

It is very interesting to read news about Apple's and ARM's successes on the market on the one hand, and about the fall and struggle of some of the undisputed rulers of the computing and consumer markets of the not so distant past on the other. Actually, I'm thinking primarily of the computing market, but since the computing market has penetrated the consumer market and it's hard to distinguish the two, I'll treat them as one.

The main premise of this post is that a company's long term success seems to depend on the ability of its top management to anticipate future trends. Let me try to explain that in some more detail.

I could start with illustrations from many points in time. For example, DEC. DEC rose from the change caused by the invention of minicomputers. But DEC, like IBM later, failed to anticipate the rise of personal computers, and it cost it its existence (probably along with other management failures).

Then, there was IBM. In the '70s it was said that nobody ever got fired for buying IBM equipment. IBM's mistake was also that it didn't foresee personal computers. True, it did make the PC and it also made it possible for others to produce PC clones. But that's all, and when they saw their mistake, they tried with the PS/2 series, which was a failure! Anyway, two giants emerged riding the wave of the PC revolution, Intel and Microsoft, frequently called Wintel. The transition period was the '80s, and world domination came in the '90s. In those times there were many players, among others HP, Compaq, AMD and Dell. All of them, more or less, managed to profit from PC sales. There was also Apple. During the '80s Apple succeeded in positioning itself as a producer of successful workstations with a GUI, but because its machines were expensive it was always a niche player, and because it was a niche player, it declined during the '90s. Possibly it was also a lack of vision, but I think the main reason is that at that time computers were used by people who knew something about computers, who wanted something cheap, and who rarely thought about design.

So, at the end of the '90s Microsoft was at the height of its power, along with Intel. And very few people, including myself, saw anything that would change that any time soon. But then something happened that triggered the change. Actually, we can see two different things that caused two different effects. Basically, two changes happened that damaged existing companies and allowed new ones to appear and/or rise.

The first one was the fusion of mobile phones and computers, and the penetration of computers into consumer markets! Intel and, especially, Microsoft were caught unprepared. They didn't have adequate products for that segment of the market, and what they had wasn't marketed appropriately. In a way they were prisoners of a desktop mentality. Apple, on the other hand, had Steve Jobs, who not only foresaw this coming but in a way initiated the change! This change caused damage to Intel and Microsoft. Intel was producing desktop and server processors and had no product for mobile phones. Here ARM benefited. And not only does ARM dominate the mobile phone market (by mobile phones I also mean tablets and such), but this momentum is allowing them to slowly enter the server market too (e.g. read this)! Now both Intel and Microsoft are trying to catch that wave.

The second wave is the shift from computer production to services. Mass computer production has become less and less profitable, while the notebook market is rising. The exception here is high end Unix servers (and partially Windows servers). That change was foreseen by Samuel Palmisano. He reoriented IBM from a mainly computer production company into a services company. One of the notable steps he took was selling the ThinkPad brand to Lenovo. Of course, IBM still produces high end Unix servers and mainframes. Anyway, that brought IBM up from its knees to become stronger than Microsoft, something unimaginable 10 years ago.

It's very interesting to watch what's happening, because it seems to me that this is in a way comparable to the fall of the great empires of the past, and of the USA lately. :) It's also interesting to figure out how to predict those changes, because whoever manages to predict the changes that will come will have a chance to rule the global market of the future.

Friday, October 28, 2011

Installing minimal CentOS 6.0 distribution

This post starts a three part series in which I'll describe in detail how to install Zimbra Open Source mail server on 64-bit CentOS distribution. The first part deals with CentOS installation itself. The second part talks about setting up split DNS server, and finally, the third part will talk about setting up Zimbra server itself.

Before describing installation, I'm going to define the environment, and some basic parameters, in which this server is going to be deployed. Note that you can implement this network topology using VMWare or some similar product and in that way you can test that everything is working before doing actual installation.

So, the network topology I'm going to assume is given in the following figure:

Network topology for Zimbra Mail server
What you can see in this figure is the future Zimbra server (on the right) with the IP address 10.0.0.2/24. This is the server whose installation I'm going to describe. We'll also assume the domain example-domain.com. For the moment no additional parameters are needed; in later posts I'll introduce all the necessary parameters on an as-needed basis.

Preinstallation considerations

When I perform a CentOS installation I usually do a minimal install because that way I get a more secure system. Then, as the need arises, I add additional packages. Sometimes even the minimal installation (as defined by the CentOS installer) has some packages I don't need, so I remove them. But this changes from release to release. For example, at one time the minimal installation included isdn4k-tools, which I didn't need as I was connecting my servers to an Ethernet LAN. Apart from security concerns, there used to be an additional reason to make a minimal installation: to save disk space. But because of the abundance of available disk space today, that reason is not valid any more, at least not in the majority of cases.
 
Performing the base system installation is in principle very easy. The potential problem is that you need to anticipate some parameters, three of which we are going to discuss in some detail. Those are file systems (and disks), network configuration, and whether to do a 32 or 64-bit installation.

For file systems, the following details have to be considered: partition sizes, the use of logical volume management, and RAID. There is also the question of the exact file system type to use, but I won't discuss that one here; ext4 suffices in the majority of cases.

When we talk about the sizes of different directories, the especially problematic ones in general are /var and /home, but also, for example, /opt, or any other directory with application data and/or logs. Directories like /etc, /usr, /lib, and some others are generally constant in size during the system's deployment. What I would suggest is that you start with the minimum disk space required, and when one of the aforementioned directories needs more space, you just create a new partition, move the content of the directory onto it, and finally mount the partition over that directory. Additionally, the application you intend to install can significantly influence how your partitions are laid out. In any case, I don't let the installer do the partitioning automatically.
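To illustrate that approach, here is a rough sketch of moving, say, /var onto a newly added partition (the device name /dev/sdb1 is only an example; do this with the affected services stopped, ideally in single user mode):
mkfs.ext4 /dev/sdb1        # create a file system on the new partition
mount /dev/sdb1 /mnt       # mount it temporarily
cp -a /var/. /mnt/         # copy the existing content, preserving permissions and ownership
umount /mnt
mv /var /var.old           # keep the old content until you are sure everything works
mkdir /var
mount /dev/sdb1 /var       # from now on /var lives on the new partition
# finally, add a line for /dev/sdb1 and /var to /etc/fstab so it is mounted on boot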

I try to avoid logical volume management if I can, if nothing else just to remove one additional layer of complexity. But in certain scenarios you'll have no choice but to use it, unless of course you want to have some nightmares later. When, for example, you are installing a production system that is going to be used for a long time and will hold a large quantity of data (but you are uncertain how much exactly), I would suggest that you use logical volume management. So, we have two extremes: on one side a static system that won't grow much in size and has a simple file system layout, and on the other side a heavily loaded server with lots of recorded data and/or a very complex file system layout. Note that for small systems, and maybe medium ones too, where you can afford a few hours of downtime, any decision you make can later be changed. For example, you start without LVM and then decide that you need it, so you add an LVM partition under a single directory only, or you convert everything apart from the boot partition. It is relatively easy to do so and I'll describe that process in some future post.

Finally, there is also the question of whether to use RAID or not. There are several different possibilities:
  1. You are installing system on a local disk subsystem, with or without hardware RAID support.
  2. You are using remote disk storage.
  3. Installation is performed within virtualized environment (e.g. VMware, Citrix Xen, KVM)
In case you are using a virtualized environment, you don't have to use RAID; actually, it is overkill. The assumption is that the host itself has RAID to protect all the hosted virtual machines. Still, there is one exception, and that is a production server running within ESXi. If you are using ESXi with local storage and you don't have hardware RAID, then you have to implement RAID in the virtual machine. But I suppose this case will be rare, as it signals that you are using some poor hardware for a production environment. Nevertheless, it is possible to do so, and maybe I'll describe that scenario in some future post too.

Next, if you are installing a test server or something not particularly important, RAID is definitely overkill. And finally, if you are using remote storage, it is also not necessary to use RAID, because the remote storage takes care of that (or at least it should).

This leaves us with the scenario of using local storage for an important server, and the question of whether to use software or hardware RAID (if there is no hardware RAID, there is obviously no dilemma). I personally prefer software RAID, for the simple reason that it lets me access individual disks using the smartctl tool to monitor their health status. It is also a better solution than a number of low cost RAID controllers, because those are, in essence, software RAIDs. Still, when you have some high end hardware with a very good hardware RAID and/or you need high performance, then your route is definitely hardware RAID.

So, the last thing to consider is how to combine software RAID and LVM. I personally prefer using md RAID, and on top of that I install LVM.
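To make that combination a bit more concrete, here is a rough sketch of the commands involved (the device and volume names are only examples; during an installation the installer usually does this for you):
# build a RAID1 array from two partitions
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
# put LVM on top of the md device
pvcreate /dev/md0
vgcreate vg_system /dev/md0
# carve out logical volumes as needed
lvcreate -L 6G -n lv_root vg_system
lvcreate -L 2G -n lv_swap vg_system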

While we are at disks, we also have to consider the swap partition size. I doubt that more than a few gigs of swap is of any use. It used to be a rule to have twice as much swap as you have RAM, but if you have 64G of RAM, having 128G of swap is an exaggeration. I usually put 2G, maybe 4G at most. Simply, this can be considered space for dormant applications. If you have so many dormant applications that they fill that much swap, then you should probably tune your applications. And yes, if swap is used as short term space for applications (i.e. they are swapped out and then shortly afterwards swapped back in), that is also not good, as it severely impacts the performance of a server. Finally, RAM is cheap: buy more RAM, not a larger disk.

The second consideration, after file systems, is the network. Basically, there are only two options: dynamic or static addresses. That choice is relatively easy. If you are installing some sort of a server, a machine that will be accessed by other machines/people, then it's better to assign a static IP address. With a dynamic address it could happen that the DHCP server is unreachable for some reason, so the server loses its IP address and stops functioning. On the other hand, if you are installing a workstation, that is, a machine that will access other machines, then the better option in the majority of cases is dynamic assignment of addresses, i.e. DHCP. It brings some flexibility into the system, at the price of lower security (which can also be adequately addressed).
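For illustration, a static configuration on CentOS ends up as a few lines in /etc/sysconfig/network-scripts/ifcfg-eth0, roughly like this (the addresses match the topology assumed in this series; the DNS entry is a placeholder until our own DNS server is set up):
DEVICE=eth0
ONBOOT=yes            # bring the interface up at boot
BOOTPROTO=none        # static configuration, no DHCP
IPADDR=10.0.0.2
NETMASK=255.255.255.0
GATEWAY=10.0.0.1
DNS1=8.8.8.8          # placeholder; use the public DNS server you actually rely on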

Finally, the third consideration is whether to install a 32 or 64-bit system. I strongly suggest that you install a 64-bit system. Only if you are running some application that requires a 32-bit operating system and is only supported there should you use a 32-bit system. In all other cases, as I said, use 64-bit. Here I implicitly assume that the hardware you use is 64-bit; if it is not, then that's also a case where you'll use a 32-bit operating system. Note that it is possible to run 32-bit applications on a 64-bit operating system! That is, it is not mandatory to install a 32-bit system to use 32-bit applications!

So, that's all about preinstallation considerations. Let us proceed to base system installation.

Installing base system

After all the preinstallation considerations, I'll assume that we are going to install a 64-bit system in a virtualized environment and that we don't expect this system to grow much in terms of installed size and recorded data. So, I won't use RAID, nor am I going to use LVM. Furthermore, it's definitely a server, so we'll use a static IP address. Also, we'll assume that the server has 8G of RAM, and we'll allocate 2G of swap and a single root partition (no separate /var, /home, etc.). Actually, a minimal installation takes about 600MB, but this will grow by about 200M after the first update. So, you have to have at least 1G for the base system install.

Start by inserting the CD and booting the machine (or attaching the ISO image and starting the virtual machine).

After the installation starts, it asks you the following series of questions:
  1. Should the installer check the CD/DVD? In case you are using an ISO image there is certainly no need to do that. If you are using real DVD media, decide for yourself; I usually skip this step. After this question, the graphical installation starts. Note that if you don't have enough RAM, you'll be forced into the text based installation, which has a severely restricted number of options, e.g. you can not manually partition the hard disk! Take a look at this post in case you did the installation in text mode and want to switch to RAID.
  2. After you select Next you are asked for the language to be used during installation as well as the keyboard layout. The two are used only during the installation process. Select the ones that suit you, and select Next.
  3. Storage types used for installation. There are two options: Basic Storage Devices and Specialized Storage Devices. The first one you use when you are performing the installation on local disks, while the second one is for shared storage. Just select Basic Storage Devices.
  4. Then, if this is a new computer or a new disk, you are presented with a warning that the disk(s) need to be reinitialized. Select the 'Re-initialize all' button.
  5. You are asked to provide the computer name. Enter mail.example-domain.com here. Then, click on the Configure Network button. A new dialog will open.
  6. In the newly opened dialog select the Wired tab (if it isn't already selected), select the 'Auto eth0' entry and click on the Edit button. A new dialog will open.
  7. It is not necessary, but I change the name to just eth0. Then, I select the Connect automatically checkbox. This is mandatory because otherwise your server will be unavailable until someone logs into it and connects it to the network, and this isn't something you want. :)
  8. Click on the IPv4 Settings tab. Under Method you'll see the option Automatic (DHCP). Change that to Manual and click on the Add button. Then, add the address 10.0.0.2, change the network mask to 24 (you'll automatically be offered 8) and enter the gateway 10.0.0.1. Also, enter the IP address of the public DNS server you are using, until we configure our own DNS server. Finally, click Apply. Click Close to close the network connections editor.
  9. Select the time zone you are in and click Next.
  10. Next, you have to enter the root password. Note that this is a very important password, so pick a strong one, or be certain of what you are doing! Anyway, after entering the root password (twice) click Next. If you entered a weak password you'll be warned about it. Decide for yourself what you'll do, ignore the warning or change the password to a better one. In any case, eventually you'll proceed to the next step.
  11. Now we come to the partitioning step. Select Create Custom Layout and then Next. You'll be transferred to the disk editor. In the disk editor create a swap partition (2G) and a root (6G) partition. Both are standard partitions, so when asked about the partition type (after clicking the Create button) just confirm the default value (i.e. Standard Partition). When you click Next, you'll be asked if you are certain that the changes should be written to disk. To confirm, press the Write Changes to Disk button.
  12. When asked about the grub loader, just select Next.
  13. Now you are presented with a screen where you select the package set to be installed. Select Minimal and then Next.
The installation now starts, so you should wait. Because it is a minimal install it finishes quite soon. When all the packages are installed, press Reboot. At this moment, on CentOS 6.2, the disk usage is:
# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             7,4G  759M  6,3G  11% /

As a final step of the base system installation you should do an update. But, in order to do so, you'll have to add an additional repository that isn't included by default; see some details here. In short, you should run the following command as root (this is one line, but it could be broken because of formatting in your browser!):
rpm -ivh ftp://ftp.funet.fi/pub/mirrors/centos.org/6/cr/i386/RPMS/centos-release-cr-6-0.el6.centos.i686.rpm
Note that this additional repository isn't used any more, as far as I know, so you may be able to skip the previous step. Either way, after the rpm command successfully finishes, run the following command to pick up all the updates:
yum update
When asked, confirm the update. You'll also be asked to import the CentOS signing key into the RPM database. Check that this is a valid key, and confirm the import process. That's all, the base system is installed! Don't forget to reboot the machine after the upgrade, since many important packages were probably replaced with newer versions, and to activate them in already running processes you should reboot.

After the update finished, my disk usage was:

# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             7,4G  986M  6,1G  14% /
But the exact values heavily depend on the number of updates, so take this only as a rough guideline.

Adding some useful packages

As a final step of the base system installation, I'll list some additional packages you might want to install. I find them very useful for debugging problems and checking the system's correctness. Those packages are:
  • tcpdump - the packet sniffer. If something is wrong with the network you'll use this tool to see what's going on (or not, depending on the problem :)).
  • strace - sometimes processes behave oddly, and in those cases you can use this tool to trace them and see what's going on. It's not exactly dtrace, but in many cases it is very helpful.
  • telnet - when some server is apparently listening on some port and you can not access it for whatever reason, this simple telnet client can help you try to connect and, using tcpdump, see what's going on. It will even allow you to interact with the server, e.g. a mail server, to send a test email message.
  • lsof - a swiss army knife that allows many things to be queried from processes. For example, which ports are opened by a process, or to which process a particular port belongs. Then which files are opened, etc. A very useful tool, indeed.
  • ntpdate - a network time protocol client that allows you to synchronize your machine's clock with some accurate time server (e.g. zg1.ntp.carnet.hr).
  • rsync - for more efficient copying of data from and to the server.
  • openssh-clients - to allow rsync to work and also to allow you to connect to remote machines from this server.
All those packages can be installed using yum followed by the package name (the name in bold).
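In other words, one command installs the whole set:
yum install tcpdump strace telnet lsof ntpdate rsync openssh-clients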

Tuesday, October 25, 2011

Installing and testing ovaldi on Windows 7...

When you are dealing with a single computer with a particular operating system, it is relatively easy to keep it safe. But as the number of machines grows and they become more heterogeneous, keeping them safe becomes a very daunting task. You may have automated updates and such, but they have to be checked from time to time in order to see if they function correctly. Still, if those computers are used (and by definition they are, more or less frequently), then they are like living organisms: they change. No matter whether you are tweaking a particular installation because a user requested some new functionality or requested the removal of something that annoys them, or you are trying to diagnose why something that used to work doesn't work any more, you will change something. After you are finished, you might think that the changes you've made won't influence anything and leave reversing them for some later time, and eventually you'll forget about them. But any unintended change might put the system at risk. So, it is important to perform regular checks in order to spot changes. Since such checks are time consuming and error prone, it is good practice to use some tool that will do them for you. That tool could be OVAL.

But even if you are not a system administrator but, e.g., an auditor, you can also benefit from OVAL, since the checks you have to perform can be prescribed and automated in some way. That way you can check a larger sample of systems and achieve better accuracy and confidence in the obtained results than with manual checks.

OVAL is basically a language that describes the checks to be made; more concretely, it's an application of XML. Those checks can be conditional (i.e. depend on the system under audit, or on whether a particular component is installed or not), and they can be grouped with operators like AND, OR and NOT. There are many existing checks defined; for example, here are the latest additions and updates, while here are complete databases for download. The tests are provided by some vendors (like RedHat) and also by the community. Finally, you can add your own checks customized to your particular environment.

In themselves, those checks are worthless without a proper tool to execute them. And here we have the open source reference implementation, Ovaldi. Some security vendors have their own versions, which of course cost money. Ovaldi, on the other hand, is free, but you are forced to use the command line. Ovaldi interprets (in a way) a given database and produces reports in XML and HTML formats. HTML is great for viewing results, while XML is for parsing and automating scans.

I tested ovaldi on Linux before with mixed success, but now I decided to try it on Windows 7. The reason is that I believe that using it on workstations and servers on a periodic basis will make those computers more secure, and, by extension, the whole system more secure. In the text that follows I'm going to describe the process of installing the tool, running it manually and analyzing the results. Automated testing I'll leave for some future post.

Download and install Ovaldi

The download page for Ovaldi is here. Note that this will take you to the latest version at the time this post was written, i.e. 5.10.1. So, before downloading, check if there is a newer version, and if there is, use that one. Don't forget to change all the references from version 5.10.1 to your version in the text that follows.

Anyway, you'll find there EXE versions for Windows, so select the one that suits your environment. In my case that was the 32-bit version, but if you have a 64-bit version of Windows, download that one instead.

The file you've downloaded isn't a regular installation file, so to install it you have to follow a somewhat different procedure. After the download finishes, right click on the downloaded file and select the Run as administrator option. A WinZip dialog will appear which will ask you where to unzip (i.e. install) the files. Enter C:\Program Files\OVAL, or anything you wish, but don't forget to change the references to that directory in the later text to the one you've entered. Click the Unzip button, and that's it. Ovaldi is installed.

Environment setup

To be able to run ovaldi without typing the whole path to it, add it to the PATH environment variable. To do that, click on the Windows menu (bottom left corner) and then right click on the Computer item. Select the Properties item and in the window that appears select Advanced System Settings (option on the left). A new window appears, and there you'll notice the Environment Variables... button at the bottom right. Click on it and a new window appears. In this window there is a System variables pane. Find the PATH variable there and click on Edit. At the end of the line add the following text:
;C:\Program Files\OVAL\ovaldi-5.10.1\
Be careful not to erase the existing values! Close all the windows by clicking on OK, and close the final window (the one opened with Properties on Computer) by clicking on the X in the upper right corner. Now, open a command prompt and enter ovaldi followed by return. If you get a help message then everything is OK and you can proceed to the next step. Otherwise, review the previous steps.

Download file definitions

Now you have the interpreter and you need definitions for it to run. Go to the following page. There you'll see the section Downloads by Version and Namespace. You need to select a class to download based on the version of the OVAL interpreter you have. The following classes are available:
  • compliance - checks that the installation is compliant with good security practices.
  • inventory - checks that produce results of what is installed.
  • miscellaneous
  • patch
  • vulnerability - tests that verify whether a vulnerability is present on the machine.
When you click on one of those classes you are presented with a new page that gives you a list of available definitions grouped by different criteria. For example, by clicking on the vulnerability class (probably the largest one) you can select download by platform, by family or all. There are pros and cons to each. If you select by family (or all), you don't have to think about which platform you have; you get everything, and the OVAL interpreter will not be confused if there are, e.g., Windows XP specific checks and you are running on Windows 7. But this convenience comes at the expense of execution time.

For the purpose of initial testing, I went to download by platform/vulnerabilities, and there I downloaded the file microsoft.windows.7.xml, which I renamed to microsoft.windows.7.vulnerability.xml. I also downloaded the equivalent files from the compliance and inventory classes, naming them microsoft.windows.compliance.xml and microsoft.windows.inventory.xml, respectively. All those files I placed into a working directory that, from now on, I'll reference by the WORK_DIR identifier. So, whenever you see that string, replace it with the full directory path of your working directory.

Running Ovaldi and viewing results

OK, let's do the first scan to see what we are going to get. To start the scan, open a terminal window, go to your working directory, and run the following command (this is a single line!):
ovaldi -m -a "c:\program files\oval\ovaldi-5.10.1\xml" -o microsoft.windows.7.vulnerability.xml -r 20111025-result.xml -x 20111025-result.html -d 20111025-system-characteristics.xml
This command will check for vulnerabilities present on the system it runs on. Of course, only vulnerabilities defined in the database (microsoft.windows.7.vulnerability.xml) will be checked. If the tool reports that there are no vulnerabilities, it only means there are no known vulnerabilities! The options are:
  • Option -m. Don't check the MD5 sum of the OVAL definitions file (in this case microsoft.windows.7.vulnerability.xml).
  • Option -a specifies where all the auxiliary files necessary for the interpreter are. For example, the default style sheet file is there, and the XML definitions and tests are there as well. The default value of this option assumes that you are running ovaldi in its base directory (i.e. where it is installed), so it has to be specified in order for everything to work.
  • Option -o specifies the OVAL definition file to use.
  • Option -r specifies the XML result file. The default value is results.xml, and in the case of multiple runs the default file would be overwritten. Using this option prevents that from happening.
  • Option -x specifies the HTML result file. This file is generated from the XML result file by applying a style sheet (XSL) file. A default file is used if none is specified on the command line.
  • Option -d specifies the file in which the system characteristics will be saved, i.e. installed options, existing files, etc. collected during the interpreter's run of the OVAL definition file.
After this command finishes you'll have three new files in the directory in which you ran it (provided no errors occurred). All of the files can be viewed in a Web browser (e.g. Mozilla Firefox), but only the file specified as the argument to the -x option is specifically meant to be viewed that way. The XML files are primarily meant for automated processing.

When you open the results file (20111025-result.html if you used the command given above), you'll see four sections named OVAL Results Generator Information, System Information, OVAL System Characteristics Generator Information and OVAL Definition Results.

The largest one will be OVAL Definition Results, which is a table with 5 columns. The first column is the ID of a test performed; the second is the result of the test, either positive (true) or negative (false). Then there is the class of the test, either inventory (i.e. whether something is installed or not) or vulnerability; then the reference ID that links you to the description of that particular item on the Internet; and finally the title, which gives a short description of the item.


Friday, October 21, 2011

rsync files from cd...

I just started rsync to copy files from a CD ROM. This is the usual form I use:
rsync -av source destination
But this time I ended up with some error messages and not a single file or directory was copied. The error messages were like the following ones:
rsync: recv_generator: mkdir "/home/user/CD_final/ns-3-dev/bindings" failed: Permission denied (13)
*** Skipping any contents from this failed directory ***
ns-3-dev/build/
rsync: recv_generator: mkdir "/home/user/CD_final/ns-3-dev/build" failed: Permission denied (13)
*** Skipping any contents from this failed directory ***
After some wondering about what happened, as this was the first time I saw behavior like that, I quickly realized that the problem was with write permissions, or more precisely directory write permissions. Namely, files and directories don't have the write bit set on a CDROM, since there is no point in having it on a read-only medium. But rsync was also clearing this bit on the hard disk _before_ copying files into the directory. So what happened is that it couldn't write the files and thus skipped the directory.

Some quick googling didn't give any results, so I turned to the man page. A quick search for the word permission and there was the solution. :) I needed to add the --chmod option, which is used to change the permissions of copied files and directories. Only, I wanted just the directory permissions to be changed, not those of every file. But there was an example in the manual page that showed me I have to call rsync like this:
rsync -av --chmod=Du+w source destination
The Du+w argument tells rsync to add the user write bit (u+w) to directories (D) only, leaving file permissions untouched. After that, all the files were copied and the problem was quickly solved! :)

Tuesday, October 18, 2011

How to resume download in Firefox...

I started a large download in Firefox and it turned out to be very slow. Since I was downloading from Oracle, I was probably given the "wrong" mirror to download from. So, I was thinking about stopping this particular download and resuming it later, but I didn't want to lose what I already had. The problem was that it seemed that Firefox doesn't support that particular scenario, at least not without some fiddling with files.

Anyway, I found a tip on the Internet on how to resume broken downloads which helped me manage this particular case. The procedure is simple: first, go to the directory where the partially downloaded files are and move them to some temporary location. Note that there are actually two files per download, one with the "right" name and the other with a ".part" extension. Next, start the download as usual, then pause it immediately after it starts. It's important to pause it, not stop it. Now, move the saved files back to the download directory, overwriting the existing files, and resume the download. And that's it!

There is just one thing to note. In case you want to stop downloading and resume later, don't press the 'stop' button, because it will remove everything that has been partially downloaded. Use the pause button instead.

Sunday, October 16, 2011

Pizzerija Orogoro

So, I too have now been to the Orogoro pizzeria, so let me share my impressions with the general public. If you only care about the final verdict, it is: mediocre. If you don't care about my opinion, that's your problem. :)

And now at greater length. When you arrive at the pizzeria, from the road it looks small and unremarkable, but also interesting. Then you realize there is a little road leading to a parking lot behind the pizzeria, and when you get there you see that it is a whole complex! Evidently a once small pizzeria, and perhaps some kind of café for evening outings (I noticed a billiard table in the old part of the interior), was extended onto a terrace and so became an _enormous_ pizzeria - due to popular demand. But it was built, and it looks, in such a way that when it is crowded you feel as if you were in a shunting yard. Especially if your table is near the entrance or some passageway. And on Sunday around lunchtime it really is crowded - I saw that for myself. People constantly come, go, wait, kids fly in and out, and every now and then some Neanderthal shows up who apparently has no doors at home and so doesn't know they should be closed behind them. In any case, if you are lucky you'll get a table right away; if not, you'll wait, up to some 15 minutes - so says the waiter. And yes, that still doesn't mean you'll get a table of the appropriate size. To be fair, you can reserve a table. Several tables were reserved, with a note of the hour from which the reservation applied.

The pizzeria also has a large terrace which I found interesting because the view is towards fields and hills. That is nice, and different from city pizzerias where the view is of the road and the traffic jam - sometimes you are practically on the road, so you don't have to look far. However, no matter how big and nice the terrace is, it is usable only in spring and autumn, and only the rare optimist will use it outside of that. In the end, the ambience is nothing special. But, let's be clear, if the food and the service are good, then the ambience doesn't matter that much. However, ...

The waiters are somewhat confused, at least they were this time. One doesn't know which table is free, so he negotiates with another. Then, when you finally sit down, one takes your order, then another comes and asks whether you have ordered, pen already in hand and some handheld computer at the ready to write the order down. You politely tell him you have already ordered, and fine, off he goes. At the end, you ask one of them for boxes to pack up the leftovers, at the same time as another party at another table. Then a second waiter brings boxes to that other party and comes back with one box. Later a third arrives with two more boxes for that same party, sees they already have boxes, and turns back. Then we had to shout after him that we had asked for boxes too. Ah yes, I forgot that we also swapped tables. There were three of us and we were supposed to get a large table, while a party of five had a table for four. The swap was carried out in a not exactly overly polite manner (you know, asking whether it's OK and so on - it was more of a stampede), but let's not split hairs; then the waiter arrives with the order for the party that had moved and is totally confused, and it takes him a while to understand what is being explained to him. In short, interesting.

As for the pizzas themselves: enormous, definitely! So, for people looking for the maximum ratio of pizza size to money spent (or, in a slightly more down-to-earth translation, the fullest possible stomach for the least possible money), it's a bullseye. If you are looking for something more normal, then the pizza is good, but nothing revolutionary. Of course, I don't make pizzas myself and can only philosophize here, but that's my current opinion. Personally I'm against such exaggerations (like the 70 cm "Zagrebački" and similar), but people go for it... so be it...

The price is OK. A small pizza is around 40 kn, a large one around 50, and a jumbo around 70. Unfortunately, I didn't memorize too many of those details. I can only say that you can count on a price of around 40 kuna per person.

That's it; as I said, a mediocre grade and, unless you are impressed by huge portions, nothing special in the end.

Thursday, October 13, 2011

Dennis Ritchie died...

Well, another great figure of computing has prematurely left us, according to reports on the Internet. This one isn't as well known as Steve Jobs, but his work certainly matches the work done by Jobs and, in my humble opinion, even exceeds it. His "problem", so to speak, is that he did everything in the core area of computer science, not in the consumer part, and he did the majority of his work during the years when most people didn't even know that computers existed.

The guy is Dennis Ritchie, and he invented the C programming language and also played an important part in the development of the Unix operating system. His influence was, and is, great. For example, Android smartphones today all run on top of Linux, which itself started as a Unix derivative. MS-DOS was a very poor copy of Unix, and it was evident that it tried to copy Unix. Windows NT was also influenced in part by Unix. Not to mention Mac OS X, which, at its core, is Unix! And today's biggest businesses run their core services on Unix machines, not Windows.

The C programming language was, and still is, extremely influential. First, the majority of today's operating systems are written in C, and practically all other languages have the ability to link with libraries written in C. There are numerous applications and libraries written in C. C is, in essence, the lowest common denominator. Furthermore, today we have many languages that directly or indirectly borrow features from C. For a start, C++ began as an extension to C and itself influenced many other object-oriented programming languages. C's influence can also be traced in many non-OO languages.

All in all, I'm very sad that he passed away. RIP Dennis Ritchie.

Friday, October 7, 2011

The first use of the term "protocol" in networking...

I'm currently reading the book Where wizards stay up late - The origins of the Internet, and in it I found a statement about the first use of the term protocol to denote the rules to be followed in order for computers to be able to exchange information, i.e. to communicate.

Everything happened in 1965, when Tom Marill, a psychologist by formal education, proposed to ARPA an experiment of connecting two machines: the TX-2 from Lincoln Laboratory at MIT and the SDC Q-32 in Santa Monica. Marill had founded a company within which he started that experiment, but the investor backed out and so Marill turned to ARPA. ARPA agreed to finance the experiment, but since Marill's company (Computer Corporation of America, CCA) was too small, ARPA also suggested that Lincoln Laboratory head the project. This was accepted, and Larry Roberts, another Internet pioneer, was appointed project head. For the connection itself, a rather primitive modem was used that was able to send 2000 b/s over a four-wire full-duplex service leased from Western Union. Marill set up a procedure that composed messages from characters, sent them to the other machine, and checked whether the messages arrived (i.e. waited for an acknowledgement). If there was no acknowledgement, the message was retransmitted. The set of procedures for sending messages was referred to as a "message protocol" by Marill, and that is, as far as I know, the first use of the word in such a context. What's interesting is that a colleague apparently asked Marill why he was using that word, because it reminded him of diplomacy. Today, protocol is the standard word for the mechanisms and rules used by computers in order to be able to exchange data.

Anyway, if you know of some earlier use of this word, or more details about this first protocol, I would be very interested to hear it.

Finally, let me say that the book Where wizards stay up late - The origins of the Internet is a great book about the Internet and how it was created. It is targeted at less technically knowledgeable people, and I strongly recommend it. You can buy a copy on Amazon, but there are also services specializing in used books, e.g. AbeBooks. Maybe I'll talk a bit more about the book in a later post.

Thursday, October 6, 2011

Steve Jobs...

The Internet is full of news about the premature death of Steve Jobs; after all, he was only 56 years old! And no matter what we think about Apple, or perhaps even about Steve Jobs himself, we have to agree that he, and the company he founded, made a significant mark on many lives. Actually, even more than that: in a way, he changed our culture.

I was reading what others have said about Jobs, most notably Bill Gates, and it occurred to me that Jobs, but also Bill Gates, Steve Wozniak, and many others, represent a period of computer industry development in which individuals were the main driving force! This was the period of the invention and popularization of microcomputers, and it was effectively over by 2000, or thereabouts. Of course, there is another, newer and equally important and popular event, and that is the popularization (not invention!) of the Internet. But that's another story, since the people who took part in it are on average younger and will be with us for much longer; actually, some of them will probably outlive us (me!).

Now, it is also true that, even though the computing industry is quite young, many pioneers have already died. Still, Jobs is special for two reasons. The first is that he created Apple and was active during the period when I was growing up and learning about microcomputers. I read and heard so much about Apple and NeXT, about the Apple II and the Macintosh, but also about him, during the majority of my life. All this means that, in a way, he was part of the world I was used to living in. The second reason why Jobs' death is so significant is that he was known by so many people; he made the computer and other "computerized" gadgets a status symbol and a commodity at the same time. So many people are actually aware of him.

I have to mention Google. Google is the only company that paid tribute to Steve Jobs by placing a link beneath the search box on their main page, which takes you to Apple's pages. I looked at what Microsoft and IBM did, and they did nothing. Now, I know they are businesses and as such cannot, and perhaps should not, pay tribute to someone who is neither from the company nor highly positioned within it. Still, because of this I admire Google's gesture even more.

To end, I'll just say R.I.P. Steve Jobs; the computing industry, and even the world, will not be the same without you.

Here are two links I recommend: Apple's logo variation and Steve Jobs' great speech given to Stanford University students.

Tuesday, October 4, 2011

Open source as a tactic for capturing a market...

Quite often I find myself in a discussion in which I try to explain that open source is a good tactic for spreading a product in an already established market, especially when that product was made by a small company that doesn't have appropriate (read: big) distribution channels. Of course, a necessary precondition for all of this is a sufficiently large market and/or an interesting and useful product. In this post I'll try to explain exactly what I mean.

As a very good example of how a product can spread in an already carved-up and highly competitive market thanks to an open-source license, I'll take Asterisk. Asterisk is software that, with the addition of appropriate hardware, becomes a real telephone exchange. How much hardware needs to be added depends on the concrete situation and requirements; in the most basic variant it boils down to an "ordinary" computer needed to run the program. Behind Asterisk stands the company Digium, which develops it, produces specialized hardware, and provides support. Asterisk is a very popular product and is used quite a lot all over the world, by small and large companies alike.

However, the telephony market is an old market dominated by players such as Ericsson, Panasonic, Siemens and Motorola; more recently, thanks to the development of VoIP telephony, the network equipment manufacturer Cisco has joined them, and Microsoft is slowly but surely pushing in as well with the new versions of its Exchange server.

In such a situation, a product like Asterisk doesn't stand much of a chance. Namely, when you come to a company to offer your product, the bigger the company (and hence the more interesting it is to you), the smaller the chance that you will manage to sell it. Therefore, you first need to build a base of small companies and then work your way up toward bigger and bigger ones. That path is easier or harder depending on how developed the market already is, that is, on how many players there already are and what kind. In this particular example, there were already many big players.

Digium approached this in an interesting way: they released Asterisk under an open-source license. At first you might think you lose that way, because others will take your product from you. But that isn't true. It is not so easy to take a complex product and start modifying it; few companies are capable of that. And even if they are, it works in your favor. Digium cannot support every company in the world; it is in their interest to grab only a certain segment, namely the big customers for whom the ratio of effort invested to money earned is most favorable. And it was precisely those small companies around the world that started supporting and selling Asterisk that created the base Digium could then use to approach big companies with the argument that this is a respectable product, suitable even for infrastructure as critical as telephony.

The result is visible. Asterisk is widely used in various companies, and interestingly, I have seen Cisco VoIP telephony offers that explicitly address Asterisk and try to convince the buyer that it is no good. They surely wouldn't do that if they didn't feel a corresponding threat from that side. That doesn't mean Cisco is bad; there are situations in which I wouldn't consider anything else, but more about that in another post...

Yet more fun with SSH tunnels... accessing forbidden Web pages...

This is a very interesting and simple hack, analogous to the SMTP one, or an inverse version of the Web access one. Suppose that you want to access some Web site that is blocked by a firewall in the local network where you reside. If you have a machine outside the local network (with SSH access enabled, of course) that can reach the blocked Web site, you can use it as a relay. I'll assume that the IP address of that outside machine is o.o.o.o. Furthermore, suppose that the Web site in question is www.forbidden-web.com. Here is what you have to do:

Step 1. Find out which IP address the www.forbidden-web.com site has. You can use the nslookup, host or dig commands for that, e.g.
$ nslookup www.forbidden-web.com
Server:        name_or_ip_address
Address:    some_ip_address_and_port

Name:    www.forbidden-web.com
Address: f.f.f.f
In this example, you are interested in the last line, i.e. IP address f.f.f.f.

Step 2. Edit your local /etc/hosts file and add the following line to it.
127.0.0.1      www.forbidden-web.com
Step 3. Create tunnel:
ssh -L 80:f.f.f.f:80 remoteuser@o.o.o.o
You have to be root in order to run that command, since it binds to a privileged port. Furthermore, if the target site is accessed via https instead of http, change both occurrences of 80 into 443.
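For example, the https variant of the tunnel would look roughly like this (same placeholder addresses as above):
ssh -L 443:f.f.f.f:443 remoteuser@o.o.o.o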

Step 4. Open Web browser and try to access forbidden Web site.

And that's it, you are done.

Of course, there are some gotchas. For example, if the site you managed to access references some other forbidden site, then things won't fully work. Also, if it switches between protected (https) and unprotected (http) access, you'll have problems using this simple method. Still, in many cases you can get around all those problems using variations of the procedure given above.
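One such variation, not covered in the steps above but worth sketching, is SSH dynamic port forwarding, which turns the SSH connection into a local SOCKS proxy and avoids editing /etc/hosts for each site (the port 1080 below is an arbitrary choice):
# create a local SOCKS proxy on port 1080 that exits through o.o.o.o
ssh -D 1080 remoteuser@o.o.o.o
After that, point the browser's SOCKS proxy setting to 127.0.0.1:1080 and its HTTP(S) traffic will leave the network through the outside machine.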

More fun with ssh tunnels... accessing Web

Suppose that you have some Web application that you can access only from the local network, either because a firewall on the host itself restricts access or because there is a firewall at the network perimeter. Either way, you are currently somewhere on the Internet and you need to access this application, e.g. some administrative interface.

In my case, I have the Zimbra Web administrative console confined to the local network only, and sometimes it happens that I have to access it from a remote location. Suppose that the remote site is zimbra.domain.com and that the Zimbra Web administration interface is at the default port 7071. I'll use z.z.z.z to denote the IP address of that server. Additionally, you need to have some server within your local network that allows SSH access. This server has to be reachable from the Internet; if it is directly accessible, everything is fine. Otherwise, if you are using NAT, you'll have to punch a hole in your firewall to forward SSH connections from the outside to that machine (see the sketch below). Either way, suppose that this server has the public IP address s.s.s.s.
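A minimal sketch of such a "hole", assuming the NAT gateway is a Linux box running iptables and that the internal SSH server has the hypothetical private address 192.168.1.10:
# on the NAT gateway: redirect incoming SSH arriving at the public address s.s.s.s to the internal server
iptables -t nat -A PREROUTING -d s.s.s.s -p tcp --dport 22 -j DNAT --to-destination 192.168.1.10
# and let the redirected traffic pass through the FORWARD chain
iptables -A FORWARD -p tcp -d 192.168.1.10 --dport 22 -j ACCEPT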

OK, here is what you have to do. From your local machine, i.e. the one you are currently working on and that is outside of your local network, execute the following ssh command:
ssh -L 7071:z.z.z.z:7071 s.s.s.s
All you have to do now is open a Web browser and enter the following URL:
https://127.0.0.1:7071
In case virtual hosts are used, you'll have to add the following line to your /etc/hosts file:
127.0.0.1           web.server.name
and then the URL you'll use is:
https://web.server.name:7071
While this is necessary in the general case, with Zimbra it is not, since Zimbra should be the only service running on a particular IP address.

Saturday, October 1, 2011

Installing Snort 2.9.1 on 64-bit CentOS 6...

I just installed Snort 2.9.1 on CentOS 6, and since that wasn't a straightforward process, I decided to document all the steps I took for later reference. Also, maybe someone will find this useful, so I placed it here.

The process of setting up Snort is divided into three phases: compilation, installation and configuration. The compilation phase is done entirely on an auxiliary host, while the installation and configuration phases are done on the target host, i.e. on the host where you wish to install Snort.
The binary Snort packages from the download pages are all for 32-bit machines. Furthermore, the SPEC file within the provided SRPM has two bugs. The first is that it wrongly links against a libdnet.1 library that doesn't exist; I circumvented that problem as described below. The second is that not all preprocessors are included in the final binary package. If you try to start Snort and it fails with the following message in the log file:
FATAL ERROR: /etc/snort/snort.conf(463) Unknown preprocessor: "sip".
then this is a manifestation of that problem. Apart from sip, the imap, pop and reputation preprocessors are also missing. I have fixed the spec file and made a new Snort SRPM package. If you trust me enough (but don't! :)), you can skip the compilation phase and obtain the binary packages for daq and snort directly from my homepage. In that case, go to the installation phase and continue from there.

Compilation

As I said, the first problem with Snort is that the download page has no precompiled binaries for 64-bit Linux distributions. Still, there are SRPM packages (extension src.rpm) of Snort and its prerequisite Daq, so it isn't so bad. Download those packages and rebuild them: first daq and then, after installing daq, snort itself. For the rebuild process a development environment is mandatory, i.e. a compiler, development libraries, etc. Since you are probably going to run Snort on a firewall, or on some machine close to the firewall, it isn't good security practice to install a development environment on the target machine (i.e. the firewall). So, find another machine with CentOS 6 and all the latest updates (or install one) and perform the build process there. At minimum you'll need the package rpm-build-4.8.0-16.el6.x86_64; afterwards, any missing package will be reported and you can install it using yum. So, install the rpm-build package and try to start the build process (do this as an ordinary user!):
rpmbuild --rebuild daq-0.6.1-1.src.rpm
If missing packages are reported, install them (as superuser) and try to start the build process again. Note that libdnet can be found in the EPEL repository (see the sketch a bit further below). Repeat this until the build process is successful. You'll find the binary package in the directory ~/rpmbuild/RPMS/x86_64/. Go there and install the daq package:
yum localinstall --nogpgcheck daq-0.6.1-1.x86_64.rpm
The nogpgcheck option is necessary since we didn't sign the binary package. Then go back to the directory where you downloaded daq and snort, and start the snort build process:
rpmbuild --rebuild snort-2.9.1-1.src.rpm
This too can stop due to missing packages, so install any required package and restart the build process. Repeat until the build succeeds.
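For instance, the libdnet dependency mentioned earlier would typically be satisfied like this (a sketch, assuming the EPEL repository is already configured on the build host):
# install libdnet and its development headers from EPEL
yum install libdnet libdnet-devel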
Now you have the daq and snort packages ready in the build output directory ~/rpmbuild/RPMS/x86_64/: the files daq-0.6.1-1.x86_64.rpm and snort-2.9.1-1.x86_64.rpm.

Installation

Transfer binary packages of snort and daq to the target machine and install them there:
yum localinstall --nogpgcheck daq-0.6.1-1.x86_64.rpm \
            snort-2.9.1-1.x86_64.rpm
It could also happen that you'll need additional packages, but any dependencies will be automatically retrieved and installed by yum. That's it for the installation phase.

The build process, for whatever reason, picked up a wrong dependency on the libdnet library: it looks for libdnet.1 instead of libdnet.so.1. To check whether this is a problem in your case, just try to start Snort:
# /etc/init.d/snortd start
Starting snort: /usr/sbin/snort: error while loading shared libraries: libdnet.1: cannot open shared object file: No such file or directory
                                                           [FAILED]
If the output looks like that, you have the libdnet.1 problem too. To solve it, go to the directory /usr/lib64 and run the following command there:
# ln -s libdnet.so.1 libdnet.1
This is actually a hack, since the build process has a bug, but as I didn't want to dig into or modify the build process, this was the easier thing to do.
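As a quick sanity check (not part of the original procedure), you can verify that the dynamic linker now resolves the library:
# should show libdnet.1 resolving to the symlink created in /usr/lib64
ldd /usr/sbin/snort | grep dnet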

The error with the libdnet library was caused by a manually installed libdnet in /usr/local/ which, for whatever reason, had the name libdnet.1, and that is what the configure script picked up. In other words, if you compile Snort yourself you won't have this problem; it affects only the old binary I provided (which is now fixed!).
You'll also need to obtain Snort rules, which requires you to register on the Snort Web page. After registering and downloading the rules, unpack the archive in some directory. In the following text I'm using the package snortrules-snapshot-2910.tar.gz from September 1st, 2011 (obtained on October 1st, 2011).

What you'll get is the following structure:
$ ls -1
etc
preproc_rules
rules
so_rules
Move the directories preproc_rules, rules and so_rules into the /etc/snort directory. Also, move the content of the etc directory to /etc/snort, overwriting any files there.
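Assuming you unpacked the archive in the current directory, something along these lines should do it:
# move the rule directories into place
mv preproc_rules rules so_rules /etc/snort/
# overwrite the stock configuration with the one from the rules archive
cp -rf etc/* /etc/snort/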

In case you have SELinux enabled, Snort will be prevented from starting because of wrongly labeled preprocessor plugins. This manifests itself with the following line in the log files:
FATAL ERROR: Failed to load /etc/snort/so_rules/precompiled/RHEL-6-0/x86-64/2.9.1.0//smtp.so: /etc/snort/so_rules/precompiled/RHEL-6-0/x86-64/2.9.1.0//smtp.so: failed to map segment from shared object: Permission denied
Of course, the exact paths will differ depending on your installation. Note that Snort runs as an unconfined process, and until I find a way to confine it, this can be solved by running the following command in the directory /etc/snort/so_rules/precompiled/RHEL-6-0/x86-64/2.9.1.0 (note that this is the directory reported in the log file!):
# chcon system_u:object_r:lib_t:s0 *

Configuration

The final step is Snort configuration prior to running it. The master configuration is stored in the /etc/snort/snort.conf file, so open it with your favorite text editor and modify the following lines:
  1. The line that reads ipvar HOME_NET any (around line 45). Replace any with your network address; in my case that was 192.168.1.0/24.
  2. The line that starts with the words dynamicpreprocessor directory (around line 234). Change its directory parameter to /usr/lib64/snort-2.9.1_dynamicpreprocessor/.
  3. Immediately after the previous line is the line that starts with dynamicengine. Change its parameter to /usr/lib64/snort-2.9.1_dynamicengine/libsf_engine.so.
  4. And, immediately after that, the line that starts with the words dynamicdetection directory, whose parameter should be /etc/snort/so_rules/precompiled/RHEL-6-0/x86-64/2.9.1.0/.
  5. Also, you have to create two empty files, /etc/snort/rules/white_list.rules and /etc/snort/rules/black_list.rules. Alternatively, you can disable the reputation preprocessor (find the line that begins with preprocessor reputation and comment out the whole block). After these edits, the relevant lines should look roughly like the sketch below.
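A sketch of how those edited lines might end up looking (the HOME_NET value is just my example network; adjust it to your own):
ipvar HOME_NET 192.168.1.0/24
dynamicpreprocessor directory /usr/lib64/snort-2.9.1_dynamicpreprocessor/
dynamicengine /usr/lib64/snort-2.9.1_dynamicengine/libsf_engine.so
dynamicdetection directory /etc/snort/so_rules/precompiled/RHEL-6-0/x86-64/2.9.1.0/
The two empty list files can be created with:
touch /etc/snort/rules/white_list.rules /etc/snort/rules/black_list.rules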
Additionally, open the /etc/sysconfig/snort file and check whether there is something you need to change. For example, if there are multiple interfaces on which you would like to run Snort, you'll have to configure them in that file.

Finally, start snort with the following command:
# /etc/init.d/snortd start
and, if snort should be started during the boot process, also run the following command:
# chkconfig snortd on
And, that's it! :)
