Tuesday, March 26, 2013

Periodically scan network with nmap...

I think it is a good idea to periodically scan the network using nmap in order to take a snapshot of its current state and to be able to track changes in the network. For that purpose I wrote the following quick and dirty bash script:
#!/bin/bash

# Interfaces on which the scan should be performed. Multiple interfaces
# should be separated with spaces!
SCAN_INTERFACES="eth1"

# Network that should be scanned. If empty or undefined, the network
# attached to the interface is deduced automatically. Note that if you
# specified multiple interfaces, then this variable should be left
# undefined!
SCAN_NETWORKS=

#######################################################################
# THERE ARE NO MORE CONFIGURABLE PARTS AFTER THIS LINE
#######################################################################

TIMESTAMP=`date +%Y%m%d%H%M`
START=`date +%Y%m%d%H%M%S.%N`

cd /var/log/nmap || exit 1

for iface in $SCAN_INTERFACES
do
    # Find the network to scan if it isn't specified...
    [ -z "$SCAN_NETWORKS" -o "$iface" != "$SCAN_INTERFACES" ] && SCAN_NETWORKS=`/sbin/ip ro sh dev $iface | grep -v via | cut -f1 -d" "`

    # Find addresses on the output interface so that we don't scan them
    EXCLUDE_LIST=`/sbin/ip addr sh dev $iface | awk '/inet / {print "--exclude ", substr($2, 1, index($2, "/")-1)}'`
    [ -z "$SCAN_NETWORKS" ] && continue

    # Start scanning
    nmap -n -Pn -sS -O -sV -T4 -vv ${EXCLUDE_LIST} -oA nmap-$iface-${TIMESTAMP} -e $iface ${SCAN_NETWORKS} >& nmap-scan-$iface-${TIMESTAMP}.log
done

echo "START $START END `date +%Y%m%d%H%M%S.%N`" >> /var/log/nmap-scan.log

exit 0
Note that some lines may be wrapped due to lack of space. This script assumes several things in order to run properly:
  1. You have a directory /var/log/nmap where all the result files will be placed.
  2. nmap is version 6; definitely not version 4, which has some weaknesses.
  3. You want to scan networks assigned to your interfaces.
  4. The script is run under root user.
Now, after each run of this script you'll have four files in /var/log/nmap, with the following extensions:
  1. nmap - this is a standard nmap output file
  2. gnmap - greppable nmap output
  3. xml - XML output file
  4. log - Log file into which stdout and stderr were redirected during nmap's run.
It is also necessary to configure the script to run periodically; cron is ideal for that purpose. To achieve that, you can add the following entry to root's crontab:
0 */2 * * * full_path_and_name_to_your_script
Obviously, you'll have to replace full_path_and_name_to_your_script with the exact path and filename. With this entry, the script will run every two hours.
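Since each run leaves timestamped .gnmap files behind, tracking changes can be as simple as diffing the "Up" hosts of two consecutive snapshots. Here is a small sketch of that idea; the two snapshot files below are hand-made stand-ins, so with real scans you would point OLD and NEW at two files from /var/log/nmap instead:

```shell
#!/bin/bash
# Sketch: list hosts that appeared or disappeared between two greppable
# nmap snapshots. The sample files are tiny stand-ins for real .gnmap
# output from /var/log/nmap.
OLD=$(mktemp); NEW=$(mktemp)
printf 'Host: 10.0.0.5 ()\tStatus: Up\n' > "$OLD"
printf 'Host: 10.0.0.5 ()\tStatus: Up\nHost: 10.0.0.9 ()\tStatus: Up\n' > "$NEW"

# Extract the IP addresses of hosts reported as up, sorted for comm(1)
up_hosts() { awk '/Status: Up/ {print $2}' "$1" | sort; }

comm -13 <(up_hosts "$OLD") <(up_hosts "$NEW") | sed 's/^/appeared: /'
comm -23 <(up_hosts "$OLD") <(up_hosts "$NEW") | sed 's/^/disappeared: /'
# With the sample data this prints: appeared: 10.0.0.9

rm -f "$OLD" "$NEW"
```

With real data you would pick the two newest nmap-eth1-*.gnmap files by timestamp instead of creating temporary ones.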

Thursday, March 21, 2013

Detecting hosts that cross connect two networks...

There are different scenarios in which some of your clients can end up connected at the same time to two networks of different trust levels. This is dangerous because such hosts can effectively be used as a staging point for attackers or malware to "jump" from the less trusted network to the more trusted one.

For example, suppose that you have a protected internal wired LAN and, at the same time, a wireless LAN that is used by guests and has unrestricted access to the Internet, as shown in the figure below. Someone on your internal, protected network might intentionally or accidentally connect to the wireless network too, and in that case he or she will short-circuit the two networks. If you thought you could use MAC addresses to detect such hosts, you can't, for a simple reason: on one network the host is connected using a wired Ethernet card with one MAC address, while on the second network it is connected using a WLAN card with another MAC address.

Hypothetical situation to illustrate how internal host might shortcut two networks of different trust levels
So, how do you detect hosts that short-circuit two networks? Actually, there is an easy and elegant way: you can send ARP requests on one network for IP addresses that belong to the other network. By default, the ARP module doesn't consult the routing table to decide on which interfaces it should answer for which addresses, so it responds for any local address on any interface. Looking at the figure above again, this means that you can run the following command from the proxy host (the host above the firewall):
arping -I eth1 10.0.0.250
In that command I assume that eth1 is the interface connected to the AP. What will happen is that a broadcast ARP request will be sent on the wireless network, and the client will respond with its wireless MAC address even though the requested IP address is not used on the wireless network.

So, I hope the idea is clear now. To detect whether some host is cross-connecting two networks, I send an ARP request from a host (i.e. the proxy host) for each possible IP address used on the other network (i.e. the local protected network in the figure above).

Note that it is possible to disable such behavior on Linux machines using the sysctl variable /proc/sys/net/ipv4/conf/*/arp_filter. You can find more information, for example, here.
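If you want that protection on your own Linux hosts, a sysctl fragment along these lines makes the setting persistent (the file name is just an example, and the fragment assumes a distribution that reads /etc/sysctl.d/):

```
# /etc/sysctl.d/90-arp-filter.conf -- example file name
# Answer ARP requests on an interface only if the kernel would route
# packets for the requesting source address out of that interface.
net.ipv4.conf.all.arp_filter = 1
```

The same value can be applied immediately with sysctl -w, or per interface by replacing "all" with the interface name.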

nmap games

Now, there is another problem: how to scan the whole network without manually trying each possible IP address. The first solution is, obviously, to use nmap. Nmap is a great tool for network scanning, but in this case it has a problem. I tried to run it in the following way, but unsuccessfully:
# nmap -PR -Pn -e eth1 10.0.0.0/24
Starting Nmap 5.51 ( http://nmap.org ) at 2013-03-21 10:07 CET
nexthost: failed to determine route to 10.0.0.0
QUITTING!
The option -PR requests an ARP scan, -Pn disables the ping scan, and -e eth1 tells nmap to send packets via interface eth1. The problem is that I'm trying to scan the network 10.0.0.0/24 on interface eth1, and there is no route in the routing tables telling the kernel/nmap that this network is actually connected to interface eth1. So, nmap refuses to scan those addresses. One solution is to temporarily add that route:
ip route add 10.0.0.0/24 dev eth1
But this isn't an option if a route already exists so that hosts on the protected network can reach the proxy, i.e. if there is a route similar to the following one:
# ip ro sh
...
10.0.0.0/24 via 172.16.1.1 dev eth0
...
Again, I'm making a lot of assumptions here (the network between the proxy and the firewall, IP addresses and interfaces), but I hope you get the point. The given route is in the routing tables, and removing it isn't an option.

The next try was the -sn switch, i.e. disabling the port scan:
# nmap -PR -Pn -sn -e eth1 10.0.0.0/24
Starting Nmap 5.51 ( http://nmap.org ) at 2013-03-21 10:07 CET
Well, now nmap sort of worked: it reported that all the hosts were up. Using tcpdump I found that it didn't send anything to the network. The reason: nmap thinks this is a remote network, pings are disabled, ARP cannot be used, and finally, because of -Pn, it assumes all the hosts are up. So, I was back at square one.

Simple arping solution

Until I figure out how to force nmap to send ARP probes without worrying about routing tables, here is a simple solution using the arping command:
#!/bin/bash

TIMEOUT=4

for i in {1..254}
do
    if arping -q -f -w $TIMEOUT -I eth1 10.0.0.$i
    then
        echo "10.0.0.$i is up"
    fi
done
There are three problems with this solution:
  1. If your network has a netmask other than /24, you'll have to adapt the script; how much work that is depends on the netmask.
  2. It is slow. For example, scanning 254 addresses with a timeout of 4 seconds takes about 17 minutes, assuming no address is alive (which is actually the desired state of the network).
  3. Finally, the timeout value is a bit tricky to determine. The majority of responses come really quickly, i.e. under a second, but some devices respond more slowly, e.g. when they have entered some kind of sleep state.
Still, it is a satisfactory solution until I find a way to use nmap for that purpose. If you know how, please leave a comment.
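Regarding the first problem, the hard-coded /24 can be removed by generating the address list from a CIDR specification. Here is a rough sketch (cidr_hosts is my own hypothetical helper, not a standard tool) whose output could feed the arping loop:

```shell
#!/bin/bash
# Hypothetical helper: expand a CIDR network into its usable host
# addresses, so the probing loop isn't hard-wired to a /24 netmask.
cidr_hosts() {
    local ip=${1%/*} bits=${1#*/}
    local IFS=.
    set -- $ip
    # Pack the dotted quad into a 32-bit integer and apply the netmask
    local net=$(( (($1<<24) | ($2<<16) | ($3<<8) | $4) & (0xFFFFFFFF << (32-bits)) ))
    local size=$(( 1 << (32-bits) ))
    local a
    # Skip the network and broadcast addresses
    for (( a = net+1; a < net+size-1; a++ )); do
        echo "$(( (a>>24)&255 )).$(( (a>>16)&255 )).$(( (a>>8)&255 )).$(( a&255 ))"
    done
}

# Example: a /30 has exactly two usable host addresses
cidr_hosts 10.0.0.0/30    # prints 10.0.0.1 and 10.0.0.2
```

For 10.0.0.0/24 this produces the same 254 addresses as the {1..254} loop, so piping its output into a "while read" loop that calls arping keeps the rest of the script unchanged.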

Tuesday, March 12, 2013

Storing arpwatch output into database

arpwatch is a very useful tool which logs its output via syslog and also sends mail alerts. Unfortunately, this isn't configurable, i.e. out of the box arpwatch doesn't support any other way of logging. One approach is to modify arpwatch so that it can log into some SQL database, but that isn't straightforward: arpwatch is written in C and, besides, it's hard to know whether such a change would be accepted upstream (whoever that might be).

So, I decided to go with a different approach. I configured arpwatch to log its output into log file and wrote a Python script that executes via cron and transfers all the data into the database. Here is how I did it along with all the scripts.

Configuring logging

The first step is to configure arpwatch to log its output into a separate file. This isn't possible in arpwatch itself, but it can be achieved by configuring syslog, or rsyslog to be more precise. CentOS 6 uses rsyslog, which allows just that. All you have to do is place a file named (for example) arpwatch.conf in the directory /etc/rsyslog.d with the following content:
if $programname == 'arpwatch' then /var/log/arpwatch.log
&~
Don't forget to restart rsyslog after that. This will write anything logged by the arpwatch binary into the file /var/log/arpwatch.log. All the different log lines that can appear are documented in arpwatch's manual page, so I won't replicate them here.

Configuring database

In my case I created a single table using the following SQL statement:
CREATE TABLE arpwatch (
  macaddr char(17) NOT NULL,
  ip_addr int(10) unsigned NOT NULL,
  state varchar(8) NOT NULL,
  timestamp datetime NOT NULL,
  oldmac char(17) DEFAULT NULL
)
I think it's pretty obvious what goes where. The only thing that might look strange is that I'm using INT(10) for the IP address, but SNORT also stores IP addresses that way (as 32-bit unsigned integers), so I did the same to be compatible with it. Also missing is a primary key, but for the time being I'm not using one.
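For illustration, converting a dotted-quad address into such an unsigned 32-bit integer is a simple shift-and-or. Here is a small shell sketch (in MySQL itself you could use the built-in INET_ATON()/INET_NTOA() functions instead):

```shell
#!/bin/bash
# Convert a dotted-quad IPv4 address to the unsigned 32-bit integer
# representation used in the ip_addr column above.
ip_to_int() {
    local IFS=.
    set -- $1
    echo $(( ($1<<24) | ($2<<16) | ($3<<8) | $4 ))
}

ip_to_int 10.0.0.1    # prints 167772161
```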

Script

Here is the script that should be started from cron. For example, store it in the /usr/local/sbin directory and, to start it every 20 minutes, add the following line (as the root user) to the crontab using the 'crontab -e' command:
*/20 * * * * /usr/local/sbin/arpwatchlog2sql.py
Note that the script expects a configuration file. Here is a sample configuration file you'll have to modify. The script expects the configuration file to be in its current directory, but you can place it in /usr/local/etc and modify the CONFIGFILE line in the script accordingly.

Log rotation

Finally, you should make sure that the log is properly handled, i.e. rotated along with the other logs. Since arpwatch logs via syslog, that means you have to modify rsyslog's log rotation configuration file, i.e. /etc/logrotate.d/syslog. In there you'll see the log files maintained by rsyslog enumerated, one per line. Just add /var/log/arpwatch.log to that list and that should be it.
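After the change, the relevant part of /etc/logrotate.d/syslog looks roughly like this on CentOS 6 (your list of files may differ; only the /var/log/arpwatch.log line is new):

```
/var/log/messages
/var/log/secure
/var/log/maillog
/var/log/cron
/var/log/arpwatch.log
{
    sharedscripts
    postrotate
        /bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true
    endscript
}
```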

Saturday, March 9, 2013

Integrating LibreOffice 4 and Alfresco using CMIS

Alfresco is an excellent CMS solution that, unfortunately, didn't have good integration with LibreOffice until LibreOffice 4 (LO4) was released. You could use WebDAV to mount Alfresco shares, but there are some quirks related to versioning: first, after each save the version is automatically increased by Alfresco, and second, there is no clear way to check a document in and provide a log of changes. LibreOffice's autosave mechanism can also cause problems. But, as I said, LO4 added the CMIS protocol. CMIS is actually more than simply a protocol; in essence it is a standard way of accessing content management systems, so any CMS that supports CMIS will integrate well with LibreOffice. Alfresco supports CMIS without any additional customization, so if you followed my post about installing Alfresco, you are ready to try LibreOffice with CMIS.

Now, I was trying all this on Fedora 18 as a client workstation, so if you also have Fedora 18 you'll have to install LO4 manually. If you haven't already done so, I wrote a post about that, so take a look at it and install LO4.

So, to use CMIS you first have to enable LibreOffice's native Load/Save dialog boxes. This is done via the Options... dialog (found in the Tools menu). Select the General option in the left pane and, in the main pane, you'll see a check box labeled Use LibreOffice dialogs under the Open/Save dialogs section. Mark that check box and close the dialog. There is no need to restart LibreOffice:


Now you need to add the share from Alfresco where your documents are stored. To do that, go to the Open dialog (in the File menu); you'll see a button with three dots in the upper right corner. Click on it and a new dialog appears:


Under the Type drop-down box select CMIS, and then, under the Server Details heading, choose Alfresco 4 as the Server Type. The dialog will take the following form:


Then the Binding URL, which has the following form:
http://<host>/alfresco/cmisws/RepositoryService?wsdl
has to be completed by filling in the host and port. When you've filled in the URL, click the circular arrow beneath the URL, on the right-hand side of the Repository option. This will query the repository for available URLs. You also have to type in a name for this repository, and that's basically it: click OK to close the dialog. You'll be asked for a username and password several times, so provide them each time. This has to be the username/password combination of the user that will access the documents, not the administrator's. Also, during this process, if you checked the box telling LO to remember the password, a dialog will appear asking you for a master password that will protect your saved password(s).

Now, each time you go to the Open dialog to open a file, this repository will appear in the left pane under whatever name you typed in the Name text box.

There were several gotchas during this process. First, don't expect useful or meaningful error messages. Basically, when you press that circular arrow and something isn't right, you won't receive any message at all. So, here are some things that might trip you up.

First off, while setting up LO4 I suggest that you start it from the command line. The reason is that you'll see error messages on the console, like the following one I got:
http://your.server.name:8443/alfresco/cmisws/RepositoryService?wsdl:1: parser error : Start tag expected, '<' not found
That one was caused by trying to use the http protocol on the https port (8443)! :D

Secondly, try with http first and, once that works, switch to https. The problem is that LibreOffice has to have the CA certificate that issued Alfresco's certificate installed. If it doesn't, it will silently disconnect and, as I said, it won't tell you what happened.

Third, if the hostname is incorrect, you also won't receive any error message. So, I suggest that you copy/paste the URL and try to retrieve the WSDL using wget or curl:
wget --no-check-certificate 'https://your.server.name:8443/alfresco/cmisws/RepositoryService?wsdl'
To wrap up this setup part, I'll mention that CMIS was introduced in LibreOffice 3.6, but there it is marked as an experimental feature, and those have to be explicitly enabled via the Options dialog. Also, the dialog is different in LibreOffice 4 than in LibreOffice 3.6. Take a look at this blog post; after several unsuccessful tries I decided to do it with LO4, because that was what I needed.

Experiences

Functionality seems to be OK, but access to the Alfresco repository is slow. This has a big impact on autosave, i.e. while you type, everything suddenly freezes. Also, now and then LO4 freezes briefly but noticeably when editing a file from Alfresco.

Tuesday, March 5, 2013

LibreOffice 4 on Fedora 18...

It seems that LibreOffice 4 won't be packaged for Fedora 18, only for Fedora 19, judging from the fact that it is already in Rawhide and that previous Fedora versions have only received minor updates. So, if you need some feature from the newer version, you'll have to install it yourself, outside of Fedora's repositories. Fortunately, it isn't hard or dangerous to do, and here is how.

First, go to the LibreOffice download pages and click on the Main installer link. This will start the download of an archive with all the RPMs necessary for the installation. Note that it's about 180 MB of data.

After the download finishes, unpack it and you'll have a new directory, LibreOffice_4.0.0.3_Linux_x86-64_rpm. Enter that directory, then the RPMS/ subdirectory, and finally run the following command (as the root user):
yum localinstall *.rpm
When asked to confirm the installation, just hit y. Additionally, install the package that adds LibreOffice 4 to the various GNOME menus (note: this is a single line):
yum localinstall desktop-integration/libreoffice4.0-freedesktop-menus-4.0.0-103.noarch.rpm
Now you can run LibreOffice 4 the same way you would run the older LibreOffice. Note that both versions are installed in parallel.

Monday, March 4, 2013

Fedora 18 and update to kernel 3.8.1

Today I updated Fedora 18 and, as a consequence, the kernel was updated to version 3.8.1. Up until now, the only thing I had to do after each upgrade was to symlink the version.h file (see this post, section Virtualization). But now the VMCI module didn't compile either. Luckily, some people had the same problem during the RC phase of kernel 3.8 and solved it successfully. :) I tried their fix, and it worked flawlessly.

You need to download the patch and then execute the following commands (as the root user):
cd /usr/lib/vmware/modules/source
cp vmci.tar vmci.tar.SAVED
tar xf vmci.tar
cd vmci-only
patch -p1 < path to downloaded patch file
cd ..
tar cf vmci.tar vmci-only/
rm -rf vmci-only
Be careful with the last rm command. :) Also, the cp command is only a precaution: if something goes wrong, you have a copy of the old vmci.tar archive.

Anyway, just for completeness, here is what you should do to fix the missing version.h file:
cd /usr/src/kernels/3.8.1-201.fc18.x86_64/include/linux
ln -sf /usr/src/kernels/3.8.1-201.fc18.x86_64/include/generated/uapi/linux/version.h .
And that's it. An all-in-one patch that streamlines this whole procedure will probably appear soon.

About Me

scientist, consultant, security specialist, networking guy, system administrator, philosopher ;)
