Showing posts with label configuration. Show all posts

Monday, November 12, 2012

Do you want to connect to IPv6 Internet in a minute or so?

Well, I just learned a very quick way to connect to the IPv6 Internet that really works! That is, if you have Fedora 17, but it is probably equally easy on other distributions. Here are the two commands that will enable IPv6 network connectivity on your personal computer:
yum -y install gogoc
systemctl start gogoc.service
The first command installs the gogoc package, while the second one starts it. Next time you'll need only the start command. After the start command apparently nothing happens, but in a minute or so you'll have a working IPv6 connection. Check it out:
# ping6 www.google.com
PING www.google.com(muc03s02-in-x13.1e100.net) 56 data bytes
64 bytes from muc03s02-in-x13.1e100.net: icmp_seq=1 ttl=54 time=54.0 ms
64 bytes from muc03s02-in-x13.1e100.net: icmp_seq=2 ttl=54 time=55.0 ms
^C
--- www.google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 54.023/54.551/55.080/0.577 ms
As you can see, Google is reachable on IPv6 addresses. You can also try traceroute6:
# traceroute6 www.google.com
traceroute to www.google.com (2a00:1450:4016:801::1011), 30 hops max, 80 byte packets
1 2001:5c0:1400:a::722 (2001:5c0:1400:a::722) 40.097 ms 42.346 ms 45.937 ms
2 ve8.ipv6.colo-rx4.eweka.nl (2001:4de0:1000:a22::1) 47.548 ms 49.498 ms 51.760 ms
3 9-1.ipv6.r2.am.hwng.net (2001:4de0:a::1) 55.613 ms 56.808 ms 60.062 ms
4 2-1.ipv6.r3.am.hwng.net (2001:4de0:1000:34::1) 62.570 ms 65.224 ms 66.864 ms
5 1-3.ipv6.r5.am.hwng.net (2001:4de0:1000:38::2) 72.339 ms 74.596 ms 77.970 ms
6 amsix-router.google.com (2001:7f8:1::a501:5169:1) 80.598 ms 38.902 ms 39.548 ms
7 2001:4860::1:0:4b3 (2001:4860::1:0:4b3) 41.833 ms 2001:4860::1:0:8 (2001:4860::1:0:8) 46.500 ms 2001:4860::1:0:4b3 (2001:4860::1:0:4b3) 48.142 ms
8 2001:4860::8:0:2db0 (2001:4860::8:0:2db0) 51.250 ms 54.204 ms 57.569 ms
9 2001:4860::8:0:3016 (2001:4860::8:0:3016) 64.727 ms 67.339 ms 69.540 ms
10 2001:4860::1:0:336d (2001:4860::1:0:336d) 80.203 ms 82.302 ms 85.290 ms
11 2001:4860:0:1::537 (2001:4860:0:1::537) 87.769 ms 91.180 ms 92.931 ms
12 2a00:1450:8000:1f::c (2a00:1450:8000:1f::c) 61.213 ms 54.156 ms 55.931 ms
It simply cannot be easier than that. Using the ip command you can check the address you were given:
# ip -6 addr sh
1: lo: mtu 16436
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
3: wlan0: mtu 1500 qlen 1000
    inet6 fe80::f27b:cbff:fe9f:a33b/64 scope link
       valid_lft forever preferred_lft forever
5: tun: mtu 1280 qlen 500
    inet6 2001:5c0:1400:a::723/128 scope global
       valid_lft forever preferred_lft forever
And also routes:
# ip -6 ro sh
2001:5c0:1400:a::722 via 2001:5c0:1400:a::722 dev tun  metric 0
    cache
2001:5c0:1400:a::723 dev tun  proto kernel  metric 256  mtu 1280
2a00:1450:4008:c01::bf via 2a00:1450:4008:c01::bf dev tun  metric 0
    cache
2a00:1450:400d:803::1005 via 2a00:1450:400d:803::1005 dev tun  metric 0
    cache
2a00:1450:4013:c00::78 via 2a00:1450:4013:c00::78 dev tun  metric 0
    cache
2a03:2880:2110:cf01:face:b00c:: via 2a03:2880:2110:cf01:face:b00c:: dev tun  metric 0
    cache
2000::/3 dev tun  metric 1
unreachable fe80::/64 dev lo  proto kernel  metric 256  error -101
fe80::/64 dev vmnet1  proto kernel  metric 256
fe80::/64 dev vmnet8  proto kernel  metric 256
fe80::/64 dev wlan0  proto kernel  metric 256
fe80::/64 dev tun  proto kernel  metric 256
default dev tun  metric 1
Probably I don't have to mention that if you open Google in a Web browser you'll be using IPv6. :) In case you don't believe me, try using tcpdump (or wireshark) on the tun interface.
You can stop IPv6 network by issuing the following command:
systemctl stop gogoc.service
If you try the ping6 and traceroute6 commands after that, you'll receive Network unreachable messages, meaning Google's servers cannot be reached via their IPv6 addresses.

Sunday, August 12, 2012

About active responses in OSSEC

I already wrote about OSSEC's active response feature, and I said that I was going to write a bit more after I studied more thoroughly how it works. After I did an analysis of OSSEC's log collecting feature, I decided to finally look at this too. So, here is an essay about what I found out by analyzing the code of OSSEC 2.6 with respect to active response. This essay also integrates the previous post, so you don't have to read that one if you haven't already. Since active responses are bound to rules - a triggering rule triggers an active response - I'll also touch on rules, but not in a lot of detail; only as much as is necessary to explain active response.
I'll start with the purpose of active response, continue with its configuration, and finish with implementation details and conclusions.

Purpose of active response

The idea behind active response in OSSEC is to actively block potential offenders and in that way allow the system to defend itself instead of passively monitoring what's happening. In some way this also classifies OSSEC as an IPS. As with all other IPSes, the feature is great but can bite you in case you are not careful, or if you are using it without realising how it actually works. In the current implementation of OSSEC, active response is restricted to acting on offending IP addresses and users.

Configuring active response

Active response configuration consists of two parts. In the first part you define all the commands, and in the second part - active response - you define how and when they are called. When writing the configuration you should first list all of your commands, and then the active responses that reference them. The reason is the way the code was written, i.e., you will get an undefined command error in case you don't list commands before they are used.
Note the final, and important, component: the connection between rules and active responses. Namely, rules are the ones that get triggered, and when they are triggered the active responses bound to them will be activated. In this post we'll use the manual method of activation, but I'll also describe how active responses are connected to rules.
It is interesting, and important, that when the configuration files are analyzed, the code writes all enabled active responses into the file shared/ar.conf, which is placed underneath the main OSSEC directory (by default /var/ossec). We'll come to the exact format and purpose of that file later.
The following text is based on the analysis of config/active-response.[ch], analysisd/active-response.[ch] files, util/agent_control.c and some others.

Defining commands

All the commands that will be run as the part of the active response should be defined using <command> element placed directly beneath <ossec_config> top-level element. The structure should look something like this:
<ossec_config>
    ...
    <!-- Definition of command 1 -->
    <command>
    </command>
    ...
    <!-- Definition of command N -->
    <command>
    </command>
    ...
</ossec_config>
Each <command> element can have the following subelements:
  • name - This will be the identifier by which the command is referenced in active response definitions, i.e. the second part. Obviously, this is a mandatory subelement. You shouldn't use dashes here because of the way parsing is done throughout OSSEC; dashes will most likely confuse it! It is also advisable to avoid spaces for the same reason!
  • expect - Mandatory to specify. The value can be either user or srcip. Due to the use of OS_Regex, and the way it calls OSRegex_Compile, those two strings are case insensitive, i.e. you can capitalize them as you wish, the meaning is the same.
  • executable - Mandatory element, defines the exact name of the executable. The path where the executable is placed is predefined at compilation time; the default is active-response/bin.
  • timeout_allowed - Allowed values are yes and no. Basically, this tells OSSEC whether the effect of the command should be reversed after a timeout, i.e., its effect has to be cleared by a new invocation. This is optional and the default value is no. To signal to a script whether it is called for the first time or as part of a reversal, the first argument to the script will be add or delete.
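Putting these subelements together, a complete <command> definition might look something like this (a sketch modeled on the stock firewall-drop response; the script name and values are assumptions, and note the advice above about dashes in names):

```xml
<command>
    <!-- Identifier used by <active-response> definitions -->
    <name>firewall-drop600</name>
    <!-- Script looked up under active-response/bin by default -->
    <executable>firewall-drop.sh</executable>
    <!-- The script will receive the offending source IP address -->
    <expect>srcip</expect>
    <!-- The effect can be reversed: the script is re-run with "delete" -->
    <timeout_allowed>yes</timeout_allowed>
</command>
```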

Defining active responses

The second part of the configuration defines active responses themselves. Active responses are configured (again) in OSSEC's configuration file using <active-response> elements that are placed directly within <ossec_config> top-level element, like this:
<ossec_config>
    ...
    <!-- Definition of active response 1 -->
    <active-response>
    </active-response>
    ...
    <!-- Definition of active response M -->
    <active-response>
    </active-response>
    ...
</ossec_config>
Under each <active-response> element the following subelements can be used to more specifically define active response:
  • command - string value that references one <command> element. This is mandatory to specify.
  • location - one of the following values: AS, analysisd, analysis-server, server, local, defined-agent, all, or any. AS, analysisd, analysis-server, server are synonyms, and so are all and any. You must specify this.
  • agent_id - in case the value of location is defined-agent, this defines which agent, and it is mandatory to specify.
  • rules_id - comma separated list of rule IDs to which this active response should be bound. This is an optional element without a default value.
  • rules_group - name of a rule group to which this active response will be bound.
  • level - Integer between 0 and 20, inclusive. If some rule has a level equal to or greater than this value, then it is bound to the active response.
  • timeout - Numerical value without constraints.
  • disabled - Can have values yes or no.
  • repeated_offenders - This element and its value are ignored.
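As an illustration, an <active-response> that runs the firewall-drop600 command from the agent_control example later in this post, locally on the agent where the triggering rule fired, for rules of level 10 or higher, could look like this (a sketch; the element values are assumptions):

```xml
<active-response>
    <!-- References the <name> of a previously defined <command> -->
    <command>firewall-drop600</command>
    <!-- Run on the agent that generated the triggering event -->
    <location>local</location>
    <!-- Bound to every rule with level >= 10 -->
    <level>10</level>
    <!-- Reverse the command's effect after 600 seconds -->
    <timeout>600</timeout>
</active-response>
```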
Note one interesting thing. Namely, there is no ID bound to a specific active response. This means that the relationship between command and active response is 1:1. It seems somehow meaningless, because it is not possible to bind one command (i.e. one <command> element) to multiple active responses, each with a different set of parameters. Or am I missing something important?
When the OSSEC agent starts it reads and parses the <command> and <active-response> elements and writes the shared/ar.conf file (actually, any component of OSSEC does this when it starts, if the previous paragraph is true). In that file each line defines one active response. Lines have the following printf-like format:
%s - %s - %d
The first string is the name of the active response and of the command (because an active response doesn't have its own name, as I explained). The second string is the executable. Finally, the third field is the timeout value, if one is defined; if not, 0 is written in that field.
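For illustration, a hypothetical ar.conf line (the response name, script, and timeout here are made up) can be pulled apart on the " - " separator like this:

```shell
# Hypothetical ar.conf line: "<name> - <executable> - <timeout>"
line='firewall-drop600 - firewall-drop.sh - 600'

# Split on the " - " separator into the three fields
name=$(printf '%s' "$line" | awk -F' - ' '{print $1}')
executable=$(printf '%s' "$line" | awk -F' - ' '{print $2}')
timeout=$(printf '%s' "$line" | awk -F' - ' '{print $3}')

echo "$name runs $executable with timeout $timeout"
# prints: firewall-drop600 runs firewall-drop.sh with timeout 600
```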

Manually triggering active responses - execution flow

To manually test how active response works you can use the agent_control utility, whose source can be found in the util/ top-level directory of the OSSEC source. Note that only responses that require an IP address can be activated. This tool should be run on a manager node; it actually connects to the local ossec-remoted instance and sends it the appropriate commands that trigger the requested active response.
To trigger an active response using the utility, you have to specify three things:
  1. Offending IP address (option -b).
  2. ID of an agent where active response will be triggered (option -u).
  3. Which active response to run (option -f).
For example, to tell an agent whose ID is 003 that it should run the firewall-drop600 active response and pass it the IP address 192.168.1.1, you would invoke agent_control in the following way:
agent_control -u 003 -f firewall-drop600 -b 192.168.1.1
What happens then is that agent_control connects to the local ossec-remoted instance and sends it the following command:
(msg_to_agent) [] NNS 002 firewall-drop600 - 192.168.1.1 (from_the_server) (no_rule_id)
The interesting thing here is that there is a possibility to run an active response on all nodes, but this functionality is not enabled in the agent_control utility. It boils down to not specifying an agent ID, in which case all agents are assumed, and the previous command would then look like this:
(msg_to_agent) [] ANN (null) firewall-drop600 - 192.168.1.1 (from_the_server) (no_rule_id)
ossec-remoted accepts the message from the agent_control utility and, based on it, sends a message over the network to the agent. This message is encrypted, but in our case (running firewall-drop600 with IP address 192.168.1.1 as an argument) it has the following format:
#!-execd firewall-drop600 - 192.168.1.1 (from_the_server) (no_rule_id)
The three characters at the beginning (#!-) are the message header, and "execd " is the command. More commands can be seen in the headers/rc.h file. Note that there is no agent ID in the message. That is because this information is used only by ossec-remoted, so that it knows to whom it should send the message. So, no matter whether you sent the message to one specific agent or to all of them, the message will look the same.
The message is received and preparsed by ossec-clientd in the function receive_msg (in the file client-agent/receiver.c) and then sent to ossec-execd (in the top-level directory os_execd) via a local message queue, as a message starting with the letter f, i.e. the message header and command are stripped off by ossec-clientd.
It is interesting that ossec-execd, when started, reads the standard configuration file (in XML) and, based on the active response configuration, creates shared/ar.conf (unless changed during compilation). Then it enters the main loop, and when an active response is triggered for the first time, it reads shared/ar.conf instead of the main XML configuration file. It also has one additional interesting behavior. Namely, if an unknown active response is triggered it will first reread shared/ar.conf, and if the active response still isn't found it will signal an error. This, I suppose, can be used to dynamically update the set of active responses.
But, back to our example. ossec-execd receives the message and assumes that everything up to the first space is the command name. Using that token it then searches for the command. There are two interesting things here:
  1. The list of commands is taken from the dynamically created shared/ar.conf file, not from ossec.conf or similar.
  2. If the command isn't found, then shared/ar.conf is read again, and the search is performed again. Only now, if there is still no command, an error is reported and processing of the message stops. Note that no return message is sent to the manager!
When the command is found, standard execve(2) structures are created, but twice: one set for the first execution and the other for the timeout. Only later is it checked whether a timeout is supported/requested by the command, and if it is not, the memory is released. The arguments are copied from the message (spaces are used to separate arguments) with an additional argument inserted first: add is inserted if the command is called for the first time, while delete is inserted in case it is called as a consequence of a timeout.
Finally, before executing the command, one more check is performed. Namely, if the command was already run and its timeout is still pending, waiting to trigger, then the command will not be called again, nor will a new timeout be set. The message from the manager will be ignored!

Automatic triggering of active responses

An active response is triggered when the rule it is bound to is triggered. A single rule can have multiple active responses attached to it. Note that there are three ways in which you can bind a rule and an active response:
  1. Using the level element in the active response. Namely, if the rule's level is greater than or equal to the level of the active response, then the active response is attached to the rule.
  2. If a rule ID is listed in the rules_id element of the active response, then the active response is attached to the rule.
  3. If the group name to which the rule belongs is listed in the rules_group element of the active response, then they are bound together. The check is done very simply: it only checks that the value specified in rules_group occurs within the group name. For example, if the group name is "syslog,pam" then the rules_group value "syslog" will match, as will "pam", but "apache" will not. Note that even the value "syslog,pam" will match!
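The rules_group check can be sketched as a plain substring match; the following snippet mirrors the "syslog,pam" example above (a behavioral sketch, not OSSEC's actual code, which uses its own string routines):

```shell
# Sketch of the rules_group check: a plain substring match against
# the rule's group string.
group_name="syslog,pam"
for value in syslog pam apache "syslog,pam"; do
    case "$group_name" in
        *"$value"*) echo "$value: match" ;;
        *)          echo "$value: no match" ;;
    esac
done
# prints:
#   syslog: match
#   pam: match
#   apache: no match
#   syslog,pam: match
```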
Rules and active responses are bound together (the code is in analysisd/rules.c) when ossec-analysisd is started.

Some implementation details

Configuration reading and parsing is implemented in the config/active-response.c file.
The data contained in each <active-response> element is kept in a structure active_response defined in src/config/active-response.h file. The elements of that structure quite closely mimic the configuration layout, i.e. this structure is defined as follows:
/** Active response data **/
typedef struct _ar{
    int timeout;
    int location;
    int level;
    char *name;
    char *command;
    char *agent_id;
    char *rules_id;
    char *rules_group;
    ar_command *ar_cmd;
}active_response;
That's it, not much. :)

Conclusion

Ok, I think I managed to document active response in OSSEC at least better than the majority of the existing documents. Tell me what you think. :)
Next, let me say that I'm impressed with the XML parser shipped with OSSEC. It seems to be very featureful even though it is written in C, which is usually not used for XML processing.
One of the interesting things about OSSEC is that it reads the same configuration files multiple times. For example, in the ossec-analysisd daemon, the configuration file is first read by the active response initialization code (active-response.c) and then by the daemon itself. The reason this is so interesting is that, during parsing of the <active-response> element, the file shared/ar.conf is written each time. In other words, the operation isn't idempotent. This is also problematic from a performance standpoint, but probably less so.
While reading the code I stumbled on several interesting things not directly related to the topic of this post. But since they are important, I'll mention them here and maybe devote some future post to some of them:
  • The top-level directory addagent is misleading. It actually contains the binary manage-agents, which is much more encompassing than that. So, it should be renamed.
  • In the top-level directory util/ there are some tools that should be moved or integrated with manage-agents utility.

Friday, August 10, 2012

Adding new log types to OSSEC...

I'm using OSSEC for log monitoring, and while it is a great tool there are some drawbacks. One is that (not so) occasionally Java stack traces are dumped into syslog and they are not properly handled/parsed by OSSEC. The other drawback is that I want OSSEC to monitor mod_security logs, which are multi-line logs with a variable number of lines belonging to each HTTP request/response processed by mod_security. So, I wanted to modify OSSEC to allow it to handle those cases, too. To be able to modify OSSEC I started with an analysis of its source (version 2.6), and based on that analysis I wrote this text. The goal of the analysis was to get acquainted with how OSSEC handles log files, to document this knowledge, and to propose a solution that would handle Java stack traces and mod_security logs.

Configuring log files monitoring

First, there are some global parameters that control the global log monitoring behavior of each OSSEC agent or server. More specifically, logcollector.loop_timeout in internal_options.conf defines how much time (in seconds) OSSEC will wait before checking whether there are additions to the log files it monitors. The default value is 2 seconds, with a minimum allowed value of 1 second and a maximum of 120 seconds.
This is actually the most important parameter. There are two additional ones: logcollector.open_attempts, with allowed values between 2 and 998, and logcollector.debug, with allowed values 0, 1 and 2.
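These parameters live in etc/internal_options.conf under the OSSEC directory and use a simple key=value syntax. For example, to poll monitored files every 5 seconds (a sketch; the value is illustrative):

```ini
# internal_options.conf: check monitored log files every 5 seconds
logcollector.loop_timeout=5
```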

Supported log types in OSSEC

If I understand it correctly, OSSEC treats log files as consisting of records, each record being independent of the previous, or any later, ones. Rules then act on records. In the majority of cases a record is identical to a single line of a log file. This is, I suppose, a legacy from syslog, in which each line was (and still is) a separate log entry. In the meantime OSSEC was extended so that it supports several different log file types:
  • multiline log (read_multiline). The problem with this log format is that it expects that each record consists of a fixed number of lines.
  • snort full log (read_snortfull). This is the closest one to what I need with respect to mod_security, i.e. this one reads several lines, combines them, and sends them to log analyzer.
  • some others I won't analyze now.
The length of the record is restricted to 6144 characters (constant OS_MAXSTR defined in headers/defs.h). Everything above that length will be cut off and discarded, with an error message logged.

Configuring log files

So, there are different log types, and to tell OSSEC which type a given file is, you use the <log_format> element associated with each monitored file. Each monitored file is defined in a <localfile> element directly beneath the <ossec_config> element in the OSSEC configuration file, for example:
<ossec_config>
   ...
   <localfile>
      <log_format>syslog</log_format>
      <location>path_to_log_file</location>
   </localfile>
   ...
</ossec_config>
The configuration file, along with all of its elements, is read by configuration loader placed in src/config subdirectory. The localfile element is processed in function Read_Localfile (file localfile-config.c). For each <localfile> one logreader structure (defined in localfile-config.h) is allocated and initialized. This structure has the following content:
typedef struct _logreader
{
    unsigned int size;
    int ign;

    #ifdef WIN32
    HANDLE h;
    int fd;
    #else
    ino_t fd;
    #endif
    /* ffile - format file is only used when
     * the file has format string to retrieve
     * the date,
     */
    char *ffile;
    char *file;
    char *logformat;
    char *djb_program_name;
    char *command;
    char *alias;
    void (*read)(int i, int *rc, int drop_it);
    FILE *fp;
}logreader;
Analyzing localfile-config.c file I came to the following conclusions:
  • Underneath the <localfile> element the following elements are accepted/valid: <location>, <command>, <log_format>, <frequency>, and <alias>.
  • <location> defines the file, with full path, that is being monitored.
  • <log_format> tells OSSEC in which format records are stored. The following values are hardcoded in the Read_Local.c file: syslog, generic, snort-full, snort-fast, apache, iis, squid, nmapq, mysql_log, mssql_log, postgresql_log, djb_multilog, syslog-pipe, command, full_command, multi-line, and eventlog. What's interesting is that some of those (marked in bold) don't appear later in the log collector!
  • the syntax of accepted multi-line log_format is:
    <log_format>multiline[ ]*:[ ]*[0-9]*[ ]*</log_format> 
    note that omitting the number is not an error, i.e. you can avoid writing it, but since the number defines how many lines are concatenated before being sent to the analyzer module, it is important that you don't skip it. The number part is made available in the logformat field of the logreader structure. Furthermore, when a check is made to determine the log format of some logff structure, it is recognized as multiline by checking whether the first character is a digit. If it is, then it is the multiline format. This is actually a hack, because the original design didn't plan for log types taking parameters!
  • <command> is used to run some external process, collect its output, and send it to the central log. Each line output by the command is treated as a separate log record. Empty lines are removed, i.e. ignored. Note that the command is passed to a shell using the popen library call, so you can place shell constructs there too.
  • in the code there is a reference to a full_command log type, but it is not supported in the configuration file. The difference is that this form reads the command's entire output as a single log record.
  • the value of the <frequency> element is stored in the logreader.ign field. But the use of this field is strange, because it is overwritten with the number of failed attempts to open the log file. I would assume that this parameter was meant to, somehow, allow rate limiting.
  • When defining the value of the <location> element certain variables can be used. In the case of Windows operating systems, that means % variables (e.g. %SYSTEM% and similar). In the case of Unix/Linux, glob patterns are allowed (these are not allowed on Windows). Also, strftime time format specifiers can be used, in which case the strftime function is called with the format specifier and the current local time to produce the final log file name.
  • <alias> defines an alias for a log file. It is used only in the read_command.c and read_fullcommand.c files. Probably it is meant to be substituted for the command in the output sent to the log collector (for readability: instead of the whole complex command line you see just its short version).
  • The read function pointer should point to a function returning nothing (void). But all the read functions actually return a void * pointer, and this return value is cast away when a function is assigned to this field. Finally, the return value is never used.
  • The return code (the *rc parameter) of the read functions is also used in a strange way; only once is it set to something other than 0.
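As an example of the command log format described above, a <localfile> entry might look like this (the command and alias here are only illustrative):

```xml
<localfile>
   <log_format>command</log_format>
   <!-- Passed to a shell via popen; each output line becomes a record -->
   <command>df -h</command>
   <!-- Short name substituted for the command line in the output -->
   <alias>disk usage</alias>
</localfile>
```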

Reading and processing log files

Actual monitoring of log files is done by logcollector module (contained in the src/logcollector subdirectory). If you look at process list you'll see it under the name ossec-logcollector.
The C main() function of the logcollector module obtains the array of logreader structures from the config module and calls the main function LogCollectorStart(), which references the logreader structures through the logff array pointer.
The logreader structures are then initialized.
Functions to read different type of files are placed in separate files prefixed with string read_. Each file has one globally visible entry function that has the following prototype:
void *read_something(int pos, int *rc, int drop_it);
The function return value isn't used, and most (but not all!) functions just return NULL. pos is an index into the structure array that defines which particular file should be checked by the function; basically, it indexes the logff array, so logff[pos] is the log file that has to be processed. rc is an output parameter that contains the return code. Finally, drop_it is a flag that tells the reader function to drop the record (drop_it != 0) or to process it as usual (drop_it == 0).
So, the conclusion is that I should create the following log readers:
  • a regex reader that uses a regex to define the start of a block, so that everything from one line that matches the given regex until the next line that matches it is treated as a single log record. A variation on the theme is to have a separate regex for the end of the block.
  • a modsec_audit reader: a separate log reader that combines the multi-line output of mod_security into a single line/record understood by OSSEC. In particular, I'll read only the audit log of mod_security; there is also a debug log, which I'll ignore for the time being.

Plan

So, as I said, the goal was to solve the Java and mod_security log problems. While studying the source I decided to implement two log readers. I'll leave Java for now, since I think it is better solved by modifying syslog (concatenating the next line if it doesn't start at column 1). So: one reader that will process mod_security's audit log files, and another one that will use regexes to search for the start and end of each record. Obviously those two have to be implemented, but the appropriate configuration also has to be defined.
Since I definitely decided to go with attributes instead of hacks (as multiline is), I also decided to convert multiline to use an attribute to define the number of lines that have to be concatenated.
So, new multiline definition in the configuration file will be:
<log_format lines="N">multiline</log_format>
And the attribute lines is mandatory; it defines how many lines in the log file make one log record! On the other hand, mod_security will for now use a very simple configuration style:
<log_format>modsec_audit</log_format>
Finally, the regex-based log type will use the following form:
<log_format start_regex="" end_regex="">regex</log_format>
And if end_regex isn't specified, then the value of start_regex is assumed to also be end_regex. The start_regex attribute is mandatory.
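Under this proposed syntax, a record delimited by timestamped lines could be configured like this (a sketch; the regex and path are only illustrative, and the accepted regex flavor is whatever the reader implementation supports):

```xml
<localfile>
   <!-- Every line starting with a YYYY-MM-DD timestamp begins a new record;
        everything up to the next such line belongs to the same record -->
   <log_format start_regex="^\d\d\d\d-\d\d-\d\d">regex</log_format>
   <location>/var/log/myapp.log</location>
</localfile>
```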

Well, basically, that's it for the plan.

Implementation

I first implemented the code that reads and parses the configuration. In order to do that I had to change the files src/config/localfile-config.[ch] and src/error_messages/error_messages.h. Note that I introduced a new field, void *data, into the logreader structure, whose purpose is to keep the private data of each log reader. In that way I'm not cluttering the structure with lots of attributes used only sporadically. Alternatively, I could use a union and place everything there, but for the time being this will do.

Then I modified src/logcollector/read_multiline.c to take its parameter from a new place in the logreader structure, i.e. from the private *data pointer. The small problem with how I did it is that it depends on the 32/64-bit architecture, as I'm directly manipulating pointers, which is actually one big no-no. :) But for the prototype it will do.

Next, I copied read_multiline.c into read_modsec_audit.c and modified it to work with mod_security audit logs. Note that mod_security, for each request and response (which are treated as a single record), creates a series of lines and blocks. Blocks are separated by blank lines, but everything is kept between the following lines:
-- 62a78a12-A--

...

-- 62a78a12-Z--
Blank lines before and after those that mark the beginning and the end are ignored. The random-looking number (62a78a12) is an identifier that binds multiple blocks together in the log files. In my implementation I assume there is no interleaving of those!

Finally, I implemented read_regex.c. This one is the same as read_modsec_audit.c, but instead of fixed delimiters for the start and the end of a record, the delimiters are provided via the configuration file in the form of regular expressions. Note that it would be possible to make read_modsec_audit a special case of read_regex via appropriate use of regular expressions in the configuration file.

When the modules were finished I just had to integrate them into logcollector.[ch] (and modify how multiline is detected). With that, the implementation was finished.

All the changes are provided in new_log_types-v00.patch.

Some conclusions and ideas for a future work

There are a few shortcomings of this implementation. First, it is implemented on Linux and only slightly tested even on that platform. So, it is very likely buggy and non-portable across platforms. Next, there is certainly a non-portable part with respect to 32-bit vs. 64-bit pointers; I marked that part in the code. Finally, a security review has to be done; after all, this is a security sensitive application!

It seems to me that the multiline module doesn't work quite right, i.e. there are some corner cases where it misbehaves. Namely, there is a check that a single line doesn't exceed the maximum line size, but there is no check whether several lines together exceed that threshold. And probably, if they do, the reading will get out of sync, i.e. the wrong lines will be grouped together.

For the regex module written as a part of this treatise, additional attributes should probably be included, like flags so that you can say, e.g., whether the match should be case sensitive or not. Or support for remembering matches from start_regex (using the () operator) and reusing them in the end_regex regular expression.
Finally, I don't want to criticize too much, but the build system of OSSEC isn't what you would call flexible. This isn't a problem in a production environment, but it is for development: to test a single change, you have to rebuild all the sources. Also, OSSEC assumes you run it from the installation directory, which is owned by root. Again, this is a problem for development, more specifically for testing. I think there is a lot of room for improvement.

Friday, July 20, 2012

A case against wizards...

Well, I mean those configuration wizards that allow you to quickly set something up and get going.

They have their advantages, but also disadvantages. In my opinion, one big disadvantage is that they take away one very important thing from you: making mistakes. We learn by making mistakes, and if everything goes right, we haven't learned much. In the short term you win, but in the long term I think you lose. Things have a huge tendency not to go right, and when they don't, the person who made a lot of mistakes along the way will be more efficient at solving the problem than the one who used a wizard and has no clue what can go wrong.

So, what would be the conclusion? Well, I think you should first try the harder way, and only after you have mastered it, take shortcuts to be as quick as possible.

Integrating FreeIPA and Alfresco...

After describing how to install CentOS, DNS and reverse DNS, FreeIPA and Alfresco, in this post I'm going to describe how to integrate Alfresco with FreeIPA. I want to achieve the following goals with the integration:
  • Users and groups are kept within FreeIPA and authentication is done by FreeIPA.
  • The Alfresco Web interface honors Kerberos tickets. Upon opening the Web interface, users are immediately presented with their pages without the need to authenticate (if, of course, they have valid Kerberos tickets).
  • Authentication when mounting DAV share is also done via Kerberos tickets.
In short, I want to achieve SSO (Single Sign-On) as much as possible. Users sign in once, when they start using their workstations; that's the only time they have to enter a password.

Thursday, June 28, 2012

Snort with MySQL support on 64-bit CentOS 6...

In one of the previous posts I wrote about compiling Snort 2.9.2.1 on 64-bit CentOS. The newest stable version of Snort is now 2.9.2.3, and I'll use that version from now on. But the old post is still valid for compiling the new one, so there is no need for another post.

But there is a problem. If you tried to build the Snort package with MySQL support like this:
rpmbuild --rebuild --with mysql snort-2.9.2.3-1.src.rpm
then you certainly got the following message:
<some unrelated configure script output>
checking for mysql...

**********************************************
  ERROR: unable to find mysqlclient library (libmysqlclient.*)
  checked in the following places
        /usr
        /usr/lib
        /usr/mysql
        /usr/mysql/lib
        /usr/lib/mysql
        /usr/local
        /usr/local/lib
        /usr/local/mysql
        /usr/local/mysql/lib
        /usr/local/lib/mysql
**********************************************

error: Bad exit status from /var/tmp/rpm-tmp.R2KI5J (%build)


RPM build errors:
    Bad exit status from /var/tmp/rpm-tmp.R2KI5J (%build)
Well, the problem is that on 64-bit CentOS (and RHEL derivatives, including Fedora) 64-bit libraries are in the /lib64 and /usr/lib64 directories. The easiest way to circumvent the problem is the following.

First, install the SRPM file so that it is unpacked:
rpm -ivh snort-2.9.2.3-1.src.rpm
Then go to the ~/rpmbuild/SPECS directory and open the file snort.spec in a text editor. Search for the following block:
   if [ "$1" = "mysql" ]; then
        ./configure $SNORT_BASE_CONFIG \
        --with-mysql \
        --without-postgresql \
        --without-oracle \
        --without-odbc \
        %{?EnableFlexresp} %{?EnableFlexresp2} \
        %{?EnableInline}
   fi
It's somewhere around line 231. Modify it to include the line --with-mysql-libraries=/usr/lib64, i.e. it should now look as follows:
    if [ "$1" = "mysql" ]; then
        ./configure $SNORT_BASE_CONFIG \
        --with-mysql \
        --with-mysql-libraries=/usr/lib64 \
        --without-postgresql \
        --without-oracle \
        --without-odbc \
        %{?EnableFlexresp} %{?EnableFlexresp2} \
        %{?EnableInline}
   fi
Save and close the file. Then start the snort build using the following command:
rpmbuild -bb --with mysql snort.spec
And that should be it...

Tuesday, June 26, 2012

Setting up reverse DNS server...

In the post about DNS configuration I skipped reverse DNS configuration. But it is necessary in some cases, like a FreeIPA installation or for mail servers. So, I'm going to explain how to configure a reverse DNS server.

While "normal" DNS resolution works by names, from the root servers down to the one authoritative for the name we are looking for, reverse DNS resolution works within a special top-level domain (in-addr.arpa). Within this domain, sub-domains are composed of the octets of the IP address in reverse order. Now, if your block of IP addresses ends on a byte boundary (e.g. /8, /16, /24), the setup is relatively simple. Otherwise, your upstream provider (the one that holds the larger IP address block) has to point to your domain on a per-address basis.

Let us make this more concrete. Suppose that our public IP address space is 192.0.2.0/24, and that your mail server has the public IP address 192.0.2.2. In that case, the reverse query is sent for the name 2.2.0.192.in-addr.arpa with query type PTR, i.e. we are looking for the name 2 within the 2.0.192.in-addr.arpa zone.
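The octet reversal is purely mechanical, so it can be sketched with a one-liner (plain awk, nothing DNS-specific):

```shell
# Build the in-addr.arpa name for an IPv4 address by reversing its octets
ip=192.0.2.2
echo "$ip" | awk -F. '{ print $4 "." $3 "." $2 "." $1 ".in-addr.arpa" }'
# prints: 2.2.0.192.in-addr.arpa
```

This is exactly the name that a resolver constructs for you when you ask for a PTR record.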

So, it's relatively easy to set up reverse DNS. You need to define appropriate zones that include only the network part of your IP addresses. In our case we have two networks, but the IP addresses used for one of them depend on who's asking (a client from the local network or a client on the Internet). So we have three zones in effect:
  1. DMZ, when queried by local clients, is the network 10.0.0.0/24. This means we have the reverse zone 0.0.10.in-addr.arpa for local clients.
  2. DMZ, when queried by Internet clients, is the network 192.0.2.0/24. This means that for them the reverse zone is 2.0.192.in-addr.arpa.
  3. Finally, clients in the local (non-DMZ) network have IP addresses from the block 172.16.1.0/24, so they are placed within the reverse zone 1.16.172.in-addr.arpa.
So, within the internal view you should add the following two zone statements:
zone "0.0.10.in-addr.arpa" {
    type master;
    file "example-domain.com.local.rev";
};

zone "1.16.172.in-addr.arpa" {
    type master;
    file "example-domain.local.rev";
};
And within the internet view you should add the following zone statement:
zone "2.0.192.in-addr.arpa" {
    type master;
    file "example-domain.com.rev";
};
Then, you should create the three zone files (example-domain.com.local.rev, example-domain.local.rev, and example-domain.com.rev) with the following content:
# cat example-domain.com.local.rev
$TTL 1D
@    IN    SOA    @ root.example-domain.com. (
            2012062601 ; serial
            1D         ; refresh
            1H         ; retry
            1W         ; expire
            3H )       ; minimum
           NS    ns1.example-domain.com.

1          PTR    ns1.example-domain.com.
# cat example-domain.local.rev
$TTL 1D
@    IN    SOA    @ root.example-domain.com. (
            2012062601 ; serial
            1D         ; refresh
            1H         ; retry
            1W         ; expire
            3H )       ; minimum
           NS    ns1.example-domain.com.

1          PTR    test.example-domain.local.
# cat example-domain.com.rev
$TTL 1D
@    IN    SOA    @ root.example-domain.com. (
            2012062601 ; serial
            1D         ; refresh
            1H         ; retry
            1W         ; expire
            3H )    ; minimum
           NS    ns1.example-domain.com.

1          PTR    ns1.example-domain.com.
Don't forget to change the ownership and SELinux context of those files as explained in the previous post. Now restart BIND and test the server:
# nslookup ns1.example-domain.com 127.0.0.1
Server:  127.0.0.1
Address: 127.0.0.1#53

Name:    ns1.example-domain.com
Address: 10.0.0.1
[root@ipa ~]# nslookup 10.0.0.1 127.0.0.1
Server:  127.0.0.1
Address: 127.0.0.1#53

1.0.0.10.in-addr.arpa    name = ns1.example-domain.com.
As can be seen, the DNS server correctly handles the request for IP address 10.0.0.1 and returns ns1.example-domain.com. Let's try with a name from the LAN:
# nslookup test.example-domain.local 127.0.0.1
Server:  127.0.0.1
Address: 127.0.0.1#53

Name:    test.example-domain.local
Address: 172.16.1.1

[root@ipa named]# nslookup 172.16.1.1 127.0.0.1
Server:  127.0.0.1
Address: 127.0.0.1#53

1.1.16.172.in-addr.arpa    name = test.example-domain.local.
That one is correct too. So, that's it, you have reverse DNS correctly configured. Testing from the outside I'm leaving to you as an exercise. ;)

Monday, June 25, 2012

Setting up DNS server...

In the previous post I described how to install a minimal CentOS distribution with some additional useful tools. In this part I'm going to describe how to install a DNS server. This DNS server will behave differently depending on where the client is; in other words, we are going to set up split DNS. Note that since the DNS server is accessible from the Internet, it is placed in the DMZ network, i.e. 10.0.0.0/24.

Environment and configuration parameters


In the following text we assume the network topology from the previous post, but we need some additional assumptions and requirements before going further. First, our public domain is example-domain.com and all hosts within the DMZ will belong to that zone, i.e. the FQDN of the mail server will be mail.example-domain.com. When some client from the Internet asks for a certain host, e.g. mail.example-domain.com, the DNS server will return its public IP address. On the other hand, when a client from the local network, or even the DMZ, asks for the same host, the private IP address will be returned, i.e. 10.0.0.2. The reason for this behavior is more efficient routing: if a client from the local network received the public IP address, all the traffic would have to be NAT-ed. We'll also assume that we were assigned the public block of IP addresses 192.0.2.0/24.

There will also be an additional domain, example-domain.local, in which all the hosts from the local network will be placed. This domain will be visible only from the local network and the DMZ; it will not exist for clients on the Internet.

To simplify things we are not going to install a secondary DNS server, nor are we going to configure reverse zones. For test purposes this is OK, but if you are doing anything serious, you should definitely install slave DNS servers (or use someone else's service) and configure reverse zones. I also completely skipped IPv6 and DNSSEC configuration.

Installation of necessary packages


There are a number of DNS server implementations to choose from. The one shipped with CentOS is BIND. It is the most popular DNS server, with the most features, but in the past it had a lot of vulnerabilities. Nevertheless, we'll use that one. Maybe sometime in the future I'll write something about the alternatives, too.

So, the first step is to install the necessary packages:
yum install bind.x86_64 bind-chroot.x86_64 bind-utils.x86_64
The server itself is in the bind package; bind-chroot is used to confine the server to an alternative root filesystem in case it is compromised. Finally, bind-utils contains the utilities we need to test the server. This is 4.1M of download data, and it takes around 7M after installation. Not much.

Note that it is good security practice to confine the server into an alternative root. As I already said, in the past BIND had many vulnerabilities, and in case a new one is discovered and exploited, we want to minimize the potential damage.

That's all about installation.

Configuration

Next, we need to configure the server before starting it. The main configuration file for BIND is /etc/named.conf. So, open it with your favorite editor as we are going to make some changes in it. Remember that our domain is called example-domain.com and that we want private addresses returned to internal clients and public ones to external clients. Additionally, we have a local domain, example-domain.local. Here is the content of the /etc/named.conf file:
acl lnets {
    10.0.0.0/24;
    172.16.1.0/24;
    127.0.0.0/8;
};

options {
  listen-on port 53 { any; };
  directory     "/var/named";
  dump-file     "/var/named/data/cache_dump.db";
  statistics-file "/var/named/data/named_stats.txt";
  memstatistics-file "/var/named/data/named_mem_stats.txt";
  allow-update     { none; };
  recursion no;

  dnssec-enable no;
};

view "internal" {
    match-clients { lnets; };
    recursion yes;

    zone "." IN {
        type hint;
        file "named.ca";
    };

    zone "example-domain.local" {
        type master;
        file "example-domain.local";
    };

    zone "example-domain.com" {
        type master;
        file "example-domain.com.local";
    };

    include "/etc/named.rfc1912.zones";
};

view "internet" {
    match-clients { any; };
    recursion no;

    zone "example-domain.com" {
        type master;
        file "example-domain.com";
    };
};
Let me now explain what this configuration file actually specifies. At the top level there are four blocks: acl, options, and two view blocks.

The acl block (or blocks, because there may be more than one) assigns a symbolic name to a group of addresses. This is good from a maintenance perspective because there is only one place where something is defined. In our case, we define our local networks, i.e. the DMZ and the local LAN. There is also the loopback address, because when some application on the DNS server itself asks something, the DNS server should treat it as if it were coming from the local network.

Next is the options block. In our case we specify that the DNS server should listen (listen-on) on port 53 on any IP address (any). In case you want to restrict which addresses the DNS server listens on, you can enumerate them here instead of the any keyword; better yet, create an ACL and use its symbolic name. In the options block we also disabled DNSSEC (dnssec-enable no), disabled updates to the server (allow-update { none; }), and disabled recursion (recursion no). Apart from that, we defined directories, e.g. for zone files.

Finally, there are two view statements. They define zones depending on where the asking client is. If it is on the local networks, the first view (internal) is consulted; if the client is anywhere else, the second view (internet) is consulted. The match-clients statement classifies clients, and it is here that we use our ACL lnets. When a query arrives, the source address is taken from the packet and compared against match-clients; the first view that matches is used. The second view obviously matches anything, but since it is the last one, it acts as a catch-all. For our local clients we allow recursive queries (recursion yes), while we disallow them for global clients (recursion no). This is good security practice! Also note that for internal clients we define the example-domain.local zone, which isn't defined for global clients. And for the same example-domain.com zone, two different files (i.e. databases with names) are used. This reflects the fact that we want local clients to get local IP addresses, while global clients should get globally valid IP addresses.

Zone configurations

There are three zones in our case. The first one is the global one, used to answer queries from clients on the Internet. Looking into the main configuration file, the file containing this zone has to be named example-domain.com and it has to be placed in the /var/named directory! The content of this file is:
$TTL 1D
@    IN    SOA    @ root.example-domain.com. (
            2012062501    ; serial
            1D            ; refresh
            1H            ; retry
            1W            ; expire
            3H )          ; minimum
           NS    ns1.example-domain.com.

ns1        A    192.0.2.1
There are three records in the file. The first one defines the zone itself; it is called the Start of Authority, or SOA. The @ sign is shorthand for the zone name, which is taken from the /etc/named.conf file. Then there is a continuation record that defines the name server (NS) for the zone. It is a continuation because the first column (where the @ sign would be) is empty, so it is taken from the previous record. The name server is ns1.example-domain.com. Finally, there is the IP address of the name server (an A record). Keep in mind that any name that doesn't end with a dot gets the domain name appended; a lot of errors are caused by that omission. For example, if you wrote ns1.example-domain.com (without the ending dot) as the NS value, it would translate into ns1.example-domain.com.example-domain.com., which is obviously wrong. But note that we are counting on this behavior in the A record, since we wrote only a name, without a domain!
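To make the trailing-dot pitfall visible, here are two illustrative records side by side (not part of the zone above; the names are made up):

```
bad     CNAME   ns1.example-domain.com     ; origin appended: ns1.example-domain.com.example-domain.com.
good    CNAME   ns1.example-domain.com.    ; trailing dot: stays ns1.example-domain.com.
```

A quick sanity check after editing a zone is to look at the right-hand side of every record and ask: is this meant to be relative to the zone, or absolute?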

The other zone files are similar to the previous one and also have to be in the /var/named directory. For the local view of example-domain.com, the content of the zone file is:
$TTL 1D
@    IN    SOA    @ root.example-domain.com. (
            2012062501 ; serial
            1D         ; refresh
            1H         ; retry
            1W         ; expire
            3H )       ; minimum
        NS    ns1.example-domain.com.

ns1        A    10.0.0.2
Note that it is almost identical, except that the IP address of the name server is now from a private range! Finally, here is the content of the zone that contains names from the local network:
$TTL 1D
@    IN    SOA    @ root.example-domain.local. (
            2012062501 ; serial
            1D         ; refresh
            1H         ; retry
            1W         ; expire
            3H )       ; minimum
        NS    ns1.example-domain.com.

test        A    172.16.1.1
Again, almost the same. Note that I added one test record with a phony IP address, so that I can check whether the DNS server resolves names correctly.

One final thing before starting the DNS server! You should change file ownership and SELinux context. To do so, use the following two commands (run them in the /var/named directory):
chcon system_u:object_r:named_zone_t:s0 example-domain.*
chown root.named example-domain.*
The first command changes the SELinux context, and the second one the owner and group of the files. Now it is time to start the BIND (i.e. DNS) server:
/etc/init.d/named start
If there are error messages like the following:
zone example-domain.com/IN/internal: loading from master file example-domain.com.local failed: permission denied
zone example-domain.com/IN/internal: not loaded due to errors.
then something is wrong. Look carefully at what it says, as the error messages are quite self-explanatory! In this particular case the problem is that the DNS server cannot read the zone file. If you get that particular error message, check that the SELinux context and ownership were changed properly.

Testing


Now, even if you get just OK, it doesn't yet mean that everything is fine. You have to query the server to see if it responds correctly. If there were any errors, they will be logged in /var/log/messages.

To test DNS server, use nslookup like this:
nslookup <name> 127.0.0.1
Here I assume you are testing on the DNS server itself. If you are testing from some other host, replace 127.0.0.1 with the proper IP address of the DNS server (the public one if you are asking from the Internet, the local one if you are asking from a computer in the DMZ or the local network). In any case you should receive correct answers. Here are a few examples:
# nslookup ns1.example-domain.com 127.0.0.1
Server:        127.0.0.1
Address:    127.0.0.1#53

Name:    ns1.example-domain.com
Address: 10.0.0.2
# nslookup test.example-domain.local 127.0.0.1
Server:        127.0.0.1
Address:    127.0.0.1#53

Name:    test.example-domain.local
Address: 172.16.1.1
As expected, when an internal host sends a request, internal IP addresses are returned. You should also check that any valid name is properly resolved (recursive queries). For example, you could ask for www.google.hr:
# nslookup www.google.hr 127.0.0.1
Server:        127.0.0.1
Address:    127.0.0.1#53

Non-authoritative answer:
www.google.hr    canonical name = www-cctld.l.google.com.
Name:    www-cctld.l.google.com
Address: 173.194.70.94
Which, in this case, resolves properly.

Now, the question is how to test whether names are properly resolved for clients on the Internet. The obvious solution is to try from some host on the Internet and see what happens. In case you don't have a host on the Internet, you can cheat using NAT. ;)

Do the following. First, add some random IP address to your loopback interface, i.e.:
ip addr add 3.3.3.3/32 dev lo
Then configure NAT so that any query sent via loopback gets its source IP address changed:
iptables -A POSTROUTING -o lo -t nat -p udp --dport 53 -j SNAT --to-source 3.3.3.3
And now, try to send few queries:
# nslookup www.slashdot.org. 127.0.0.1
Server:        127.0.0.1
Address:    127.0.0.1#53

** server can't find www.slashdot.org: REFUSED
That one is OK. Namely, we don't want to resolve third-party names for outside clients (remember, recursion is off!). Next test:
# nslookup ns1.example-domain.local. 127.0.0.1
Server:        127.0.0.1
Address:    127.0.0.1#53

** server can't find ns1.example-domain.local: REFUSED
This one doesn't work either, which is what we wanted, i.e. our local domain isn't visible to the outside world. Finally, this one:
# nslookup ns1.example-domain.com. 127.0.0.1
Server:        127.0.0.1
Address:    127.0.0.1#53

Name:    ns1.example-domain.com
Address: 192.0.2.1
Which returns the correct outside address.

So, all set and done. Don't forget to remove the temporary records (i.e. test), the iptables rule, and the lo address, and also don't forget to modify /etc/resolv.conf so that your new server is used by default by local applications.

Saturday, October 1, 2011

Installing Snort 2.9.1 on 64-bit CentOS 6...

I just installed Snort 2.9.1 on CentOS 6, and since it wasn't a straightforward process, I decided to document all the steps for later reference. Maybe someone else will find this useful, so I'm placing it here.

The process of setting up Snort is divided into three phases: compilation, installation, and configuration. The compilation phase is done entirely on an auxiliary host, while the installation and configuration phases are done on the target host, i.e. the host where you wish to install snort.
Binary Snort packages from the download pages are all for 32-bit machines. Furthermore, the SPEC file within the provided SRPM has two bugs. The first one is that it wrongly links against a libdnet.1 library that doesn't exist; I circumvented that problem as described below. The second is that not all preprocessors are included in the final binary package. If you try to start snort and it fails with the following message in the log file:
FATAL ERROR: /etc/snort/snort.conf(463) Unknown preprocessor: "sip".
then this is a manifestation of that problem. Apart from sip, the imap, pop, and reputation preprocessors are also missing. I have fixed the spec file and made a new Snort SRPM package. If you trust me enough (but don't! :)), you can skip the compilation phase and obtain the binary packages for daq and snort directly from my homepage. In that case, go to the installation phase and continue from there.

Compilation

As I said, the first problem with Snort is that on the download page there are no precompiled binaries for 64-bit Linux distributions. Still, there are SRPM packages (extension src.rpm) of Snort and its prerequisite Daq, so it isn't so bad. Download those packages and rebuild them: first daq, and then, after installing daq, snort itself. For the rebuild process a development environment is mandatory, i.e. compiler, development libraries, etc. Since you are probably going to run snort on a firewall, or some machine close to the firewall, it isn't good security practice to install a development environment on the target machine (i.e. the firewall). So, find another machine with CentOS 6 and all the latest updates (or install one) and perform the build process there. At minimum you'll need the package rpm-build-4.8.0-16.el6.x86_64; afterwards, any missing package will be reported and you can install it using yum. So, install the rpm-build package and try to start the build process (do this as an ordinary user!):
rpmbuild --rebuild daq-0.6.1-1.src.rpm
If missing packages are reported, install them (as superuser) and start the build process again; note that libdnet can be found in the EPEL repository. Repeat this until the build succeeds. You'll find the binary package in the directory ~/rpmbuild/RPMS/x86_64/. Go there and install the daq package:
yum localinstall --nogpgcheck daq-0.6.1-1.x86_64.rpm
The nogpgcheck option is necessary since the binary package isn't signed. Then go back to the directory where you downloaded daq and snort, and start the snort build process:
rpmbuild --rebuild snort-2.9.1-1.src.rpm
This too can stop due to missing packages, so install any required package and restart the build. Do this until the build succeeds.
Now you have the daq and snort packages ready in the build output directory ~/rpmbuild/RPMS/x86_64/: the files daq-0.6.1-1.x86_64.rpm and snort-2.9.1-1.x86_64.rpm.

Installation

Transfer binary packages of snort and daq to the target machine and install them there:
yum localinstall --nogpgcheck daq-0.6.1-1.x86_64.rpm \
            snort-2.9.1-1.x86_64.rpm
It could also happen that you'll need additional packages, but any dependencies will be automatically retrieved and installed by yum. That's it for the installation phase.

The build process, for whatever reason, got a wrong dependency on the libdnet library: it looks for libdnet.1 instead of libdnet.so.1. To check if this is a problem in your case, just try to start snort:
# /etc/init.d/snortd start
Starting snort: /usr/sbin/snort: error while loading shared libraries: libdnet.1: cannot open shared object file: No such file or directory
                                                           [FAILED]
In case the output looks like that, you have the libdnet.1 problem too. To solve it, go to the directory /usr/lib64 and run the following command there:
# ln -s libdnet.so.1 libdnet.1
This is actually a hack, since the build process has a bug, but as I didn't want to examine or modify the build process, this was easier to do, so I did it that way.

The error with the libdnet library was caused by a manually installed libdnet in /usr/local/ which, for whatever reason, had the name libdnet.1, and that was picked up by the configure script. In other words, if you compile snort manually you won't have that problem; it only affected the old binary that I provided (which is fixed now!).
You'll also need to obtain snort rules, which requires registering on the Snort Web page. After registering and downloading the rules, unpack the archive into some directory. In the following text I'm using the package snortrules-snapshot-2910.tar.gz from September 1st, 2011 (obtained on October 1st, 2011).

What you'll get is the following structure:
$ ls -1
etc
preproc_rules
rules
so_rules
Move the directories preproc_rules, rules and so_rules into the /etc/snort directory. Also, move the content of the etc directory to /etc/snort, overwriting any files there.

In case you have SELinux enabled, snort will be prevented from starting because of wrongly labeled preprocessor plugins. This manifests itself with the following line in the log files:
FATAL ERROR: Failed to load /etc/snort/so_rules/precompiled/RHEL-6-0/x86-64/2.9.1.0//smtp.so: /etc/snort/so_rules/precompiled/RHEL-6-0/x86-64/2.9.1.0//smtp.so: failed to map segment from shared object: Permission denied
Of course, the exact paths will differ depending on your installation. Note that snort runs as an unconfined process; until I find a way to confine it, this can be solved by running the following command in the directory /etc/snort/so_rules/precompiled/RHEL-6-0/x86-64/2.9.1.0 (note that this is the directory reported in the log file!):
# chcon system_u:object_r:lib_t:s0 *
Configuration

The final step is snort configuration prior to running it. The master configuration is stored in the /etc/snort/snort.conf file, so open it with your favorite text editor and modify the following lines:
  1. The line that reads ipvar HOME_NET any (around line 45). Replace any with your network address. In my case that was 192.168.1.0/24.
  2. The line that starts with the words dynamicpreprocessor directory (around line 234). Change its parameter to /usr/lib64/snort-2.9.1_dynamicpreprocessor/.
  3. Immediately following the previous line is the line that starts with dynamicengine. Change its parameter to /usr/lib64/snort-2.9.1_dynamicengine/libsf_engine.so.
  4. And immediately after that is the line that starts with the words dynamicdetection directory, whose parameter should be /etc/snort/so_rules/precompiled/RHEL-6-0/x86-64/2.9.1.0/.
  5. Also, you have to create two empty files, /etc/snort/rules/white_list.rules and /etc/snort/rules/black_list.rules. Alternatively, you can disable the reputation preprocessor (find the line that begins with preprocessor reputation and comment out the whole block).
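Edits 1 through 4 are plain line substitutions, so they could also be expressed as sed expressions (a sketch; the replacement paths are the ones listed above, and the sample input lines only approximate the stock snort.conf, so verify the result before running snort). Shown here against sample input so the substitutions are visible; against the real file you would run sed -i.orig with the same expressions on /etc/snort/snort.conf:

```shell
# Sketch of edits 1-4 as sed substitutions; sample lines stand in
# for the stock snort.conf content (defaults assumed, not verified here).
printf '%s\n' \
  'ipvar HOME_NET any' \
  'dynamicpreprocessor directory /usr/local/lib/snort_dynamicpreprocessor/' \
  'dynamicengine /usr/local/lib/snort_dynamicengine/libsf_engine.so' \
  'dynamicdetection directory /usr/local/lib/snort_dynamicrules' |
sed \
  -e 's|^ipvar HOME_NET any$|ipvar HOME_NET 192.168.1.0/24|' \
  -e 's|^dynamicpreprocessor directory .*|dynamicpreprocessor directory /usr/lib64/snort-2.9.1_dynamicpreprocessor/|' \
  -e 's|^dynamicengine .*|dynamicengine /usr/lib64/snort-2.9.1_dynamicengine/libsf_engine.so|' \
  -e 's|^dynamicdetection directory .*|dynamicdetection directory /etc/snort/so_rules/precompiled/RHEL-6-0/x86-64/2.9.1.0/|'
```

Anchoring the patterns on content rather than line numbers avoids breakage if the "cca." line numbers drift between snort versions.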
Additionally, open the /etc/sysconfig/snort file and check whether there is something you need to change. For example, in case you have multiple interfaces on which you would like to run snort, you'll have to configure them in that file.

Finally, start snort with the following command:
# /etc/init.d/snortd start
and, if snort should be started during the boot process, also run the following command:
# chkconfig snortd on
And, that's it! :)
