Showing posts with label security. Show all posts

Wednesday, July 4, 2018

Cracking raw MD5 hashes with John the Ripper

I just spent at least 15 minutes trying to figure out why every post on the Internet tells me to place an MD5 hash in a file and call John like this:
john --format=raw-md5 --wordlist=/usr/share/dict/words md5.txt
and yet, it constantly gives me an error message:
No password hashes loaded (see FAQ)
The content of md5.txt was:
20E11C279CE49BCC51EDC8041B8FAAAA
I even tried prepending a dummy user name before the hash, like this:
dummyuser: 20E11C279CE49BCC51EDC8041B8FAAAA
but without any luck.

And of course, I have the extended version of John the Ripper that supports the raw-md5 format.

It turned out that John doesn't accept capital letters in the hash value! The hash has to be written in lower-case letters, like this:
20e11c279ce49bcc51edc8041b8fbbb6
After that change, everything worked like a charm. What a stupid error!
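If you hit the same problem, a quick way to normalize the hash is to lower-case it with tr before writing it to md5.txt (the hash below is the one from this post):

```shell
# John's raw-md5 loader expects lower-case hex digits, so normalize first
echo '20E11C279CE49BCC51EDC8041B8FAAAA' | tr 'A-Z' 'a-z'
```

Redirect the output to md5.txt and run John exactly as before.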

Sunday, December 13, 2015

Research paper: "Development of a Cyber Warfare Training Prototype for Current Simulations"

One of the research directions I'm taking is simulation of security incidents and cyber security conflicts. So, I'm searching for research papers on that particular topic, and one of them is "Development of a Cyber Warfare Training Prototype for Current Simulations". I found out about this paper via an announcement on the SCADASEC mailing list. Interestingly, the paper couldn't be found on Google Scholar at the time this post was written. Anyway, it was presented at the Fall 2014 Simulation Interoperability Workshop organized by the Simulation Interoperability Standards Organization (SISO). All papers presented at the Workshop are freely available on SISO's Web pages. Judging by the papers presented, the workshop is mainly oriented towards military applications of simulation. Note that cybersecurity simulations have only started to appear, while the use of simulations in the military is an old thing.

Reading the paper Development of a Cyber Warfare Training Prototype for Current Simulations was a valuable experience because I encountered for the first time a number of terms specific to the military domain. There are also references worth taking a look at, which I'm going to do.

In the end, I had the following conclusions about the paper:
  1. The paper talks about integrating the cyber domain into existing combat simulation tools. So, the authors are not interested in having a domain-specific, isolated cybersecurity simulation tool. It might be extrapolated that this is based on US military requirements.
  2. When the authors talk about cyber warfare training, what they are basically describing is a cyber attack on the command and control (C&C) infrastructure used on a battlefield.
  3. The main contribution of the paper is a description of the requirements-gathering phase based on use cases (section 3) and of the proposed components that would allow implementation of the proposed scenarios (section 4).

Thursday, December 10, 2015

SCADA/ICS security conferences in 2016

On the SCADASEC mailing list there was a question about security conferences in 2016 worth attending. I found this question very interesting, so I decided to list here all the responses received in that thread:


List of ICS/SCADA conferences in 2016:
  • S4x16 Week; January 12-14, 2016; Miami South Beach. Probably the best in the US for a heavy research focus; Dale and team do an excellent job of trying a "what's next" approach, usually with a lot of flash/flair (fun time).
  • Kaspersky Security Analyst Summit; February 7-11, 2016; Tenerife, Spain.
  • DistribuTech; February 9-11, 2016; Orange County Convention Center, West Halls A3-4 & B, Orlando, FL. Focused on the power grid; very OT centric and not security focused, which gives a unique look at the broader industry.
  • ICS Security Summit; February 16-23, 2016; Orlando, FL. Great 2-day conference with the opportunity to take training classes if you want, hands-on challenges, live demos, and a ~200 strong mixed IT/OT audience.
  • ICS Cyber Security; April 26-28, 2016; London, United Kingdom.
  • ICSJWG 2016 Spring Meeting; May 3-5, 2016; Scottsdale, AZ (held multiple times a year in different locations). Definitely one to go to for anyone new to the ICS community; it's free, and the ICS-CERT folks are always very kind/professional/awesome.
  • ACM Cyber-Physical System Security Workshop (CPSS 2016); May 30, 2016; Xi’an, China.
  • 4SICS; October 25-27, 2016; Stockholm, Sweden. Great IT/OT mix (50% practitioners this past year) with a vibe very similar to S4 in terms of flair/research.
  • 11th Annual API Cybersecurity Conference & Expo; November 10-11, 2016; Westin Houston Memorial City, Houston, Texas. Focused on oil/gas; a very diverse group of speakers with a lot of vendor interaction.

Note: in some cases I'm extrapolating when and where the next edition of a conference will take place.

The comments are taken from this mail message, so I'm crediting the original author, as I don't have any first-hand experience with the mentioned conferences. Also, you can find additional lists of conferences here and here.

Tuesday, June 16, 2015

How to fix weak DH key in Zimbra 7

I just had to fix a problem with weak DH keys in Zimbra 7. Namely, after an upgrade, Firefox and Chrome refuse to connect to servers that use DH keys shorter than 1024 bits. This means that IMAP won't work either, as it uses SSL/TLS too (or at least it should ;)). Note that the proper solution is to upgrade to the newest Zimbra version, but at the moment there is no way I can upgrade my server, i.e. the upgrade is planned but currently not possible. Googling around gave nothing for Zimbra 7, only for Zimbra 8. It also turned out that in order to change the length of the DH keys it is necessary to have Java 8, while Zimbra 7 uses Java 6.

After a lot of searching, the solution turned out to be easy. The key is to disable cipher suites that use DH key exchange. I managed to do that using the following commands:
zmprov mcf +zimbraSSLExcludeCipherSuites TLS_DHE_RSA_WITH_AES_128_CBC_SHA
zmprov mcf +zimbraSSLExcludeCipherSuites TLS_DHE_RSA_WITH_AES_256_CBC_SHA
zmprov mcf +zimbraSSLExcludeCipherSuites TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA
zmprov mcf +zimbraSSLExcludeCipherSuites TLS_DHE_RSA_WITH_DES_CBC_SHA
zmprov mcf +zimbraSSLExcludeCipherSuites TLS_DHE_RSA_WITH_DES_CBC3_SHA
zmprov mcf +zimbraSSLExcludeCipherSuites TLS_EDH_RSA_WITH_3DES_EDE_CBC_SHA
zmprov mcf +zimbraSSLExcludeCipherSuites SSL_EDH_RSA_WITH_3DES_EDE_CBC_SHA
zmprov mcf +zimbraSSLExcludeCipherSuites TLS_DHE_DSS_WITH_AES_128_CBC_SHA
zmprov mcf +zimbraSSLExcludeCipherSuites TLS_DHE_DSS_WITH_AES_256_CBC_SHA
zmprov mcf +zimbraSSLExcludeCipherSuites SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA
zmmailboxdctl restart
After that, webmail worked again. You can check the supported cipher suites using the sslscan command; in my case, after the given change, I got the following:
$ sslscan webmail:443 | grep Accepted
    Accepted  SSLv3  256 bits  AES256-SHA
    Accepted  SSLv3  168 bits  EDH-RSA-DES-CBC3-SHA
    Accepted  SSLv3  168 bits  DES-CBC3-SHA
    Accepted  SSLv3  128 bits  AES128-SHA
    Accepted  SSLv3  128 bits  RC4-SHA
    Accepted  SSLv3  128 bits  RC4-MD5
    Accepted  TLSv1  256 bits  AES256-SHA
    Accepted  TLSv1  168 bits  EDH-RSA-DES-CBC3-SHA
    Accepted  TLSv1  168 bits  DES-CBC3-SHA
    Accepted  TLSv1  128 bits  AES128-SHA
    Accepted  TLSv1  128 bits  RC4-SHA
    Accepted  TLSv1  128 bits  RC4-MD5
Even though webmail worked, Thunderbird still didn't connect. Using Wireshark I found out that Thunderbird tries to use EDH-RSA-DES-CBC3-SHA for the IMAP connection. I tried to disable that cipher suite on the server side, but no matter what I tried, it didn't work. In the end I disabled the cipher on the client side: I opened Thunderbird's configuration editor and manually set the preference for the given cipher to false.

Tuesday, March 11, 2014

Compiling OVALDI 5.10.1.6 on CentOS 6.5

Some time ago I wrote about compiling Ovaldi on CentOS 6. Now I tried to compile it again and found out that some things have changed. Most importantly, there is no need to compile the old Xalan/Xerces libraries any more. But there are still problems with RPM. To make a long story short, I managed to compile it and create an RPM. Here are the files:
  • patch you need to be able to compile ovaldi
  • SRPM file you can use to recompile ovaldi; it contains patch
  • RPM file if you don't want to compile it yourself (and you trust me ;))
Note that I didn't do any testing at all! So it might happen that the RPM-based functionality doesn't work. If that's the case, leave a comment and I'll take a look when I find time.

Tuesday, March 26, 2013

Periodically scan network with nmap...

I think it is a good idea to periodically scan the network using nmap in order to take a snapshot of its current state and to be able to track changes. For that purpose I wrote the following quick and dirty bash script:
#!/bin/bash

# Interface on which the scan should be performed. Multiple interfaces
# should be separated with spaces!
SCAN_INTERFACES="eth1"

# Network that should be scanned. If empty or undefined, the network
# attached to the interface is deduced automatically. Note that if you
# specified multiple interfaces then this variable should be left undefined!
SCAN_NETWORKS=

#######################################################################
# THERE ARE NO MORE CONFIGURABLE PARTS AFTER THIS LINE
#######################################################################

TIMESTAMP=`date +%Y%m%d%H%M`
START=`date +%Y%m%d%H%M%S.%N`

cd /var/log/nmap || exit 1

for iface in $SCAN_INTERFACES
do
    # Find the network to scan if it isn't specified...
    [ -z "$SCAN_NETWORKS" -o "$iface" != "$SCAN_INTERFACES" ] && SCAN_NETWORKS=`/sbin/ip ro sh dev $iface | grep -v via | cut -f1 -d" "`

    # Find addresses on the output interface so that we don't scan them
    EXCLUDE_LIST=`/sbin/ip addr sh dev $iface | awk '/inet / {print "--exclude ", substr($2, 1, index($2, "/")-1)}'`
    [ -z "$SCAN_NETWORKS" ] && continue

    # Start scanning
    nmap -n -Pn -sS -O -sV -T4 -vv ${EXCLUDE_LIST} -oA nmap-$iface-${TIMESTAMP} -e $iface ${SCAN_NETWORKS} >& nmap-scan-$iface-${TIMESTAMP}.log
done

echo "START $START END `date +%Y%m%d%H%M%S.%N`" >> /var/log/nmap-scan.log

exit 0
Note that some lines are wrapped due to the shortage of space. This script assumes several things in order to run properly:
  1. You have a directory /var/log/nmap where all the result files will be placed.
  2. nmap is version 6, and definitely not version 4, because version 4 has some weaknesses.
  3. You want to scan networks assigned to your interfaces.
  4. The script is run under root user.
Now, after each run of this script you'll have four files left in /var/log/nmap, each with one of the following extensions:
  1. nmap - this is a standard nmap output file
  2. gnmap - greppable nmap output
  3. xml - XML output file
  4. log - Log file into which stdout and stderr were redirected during nmap's run.
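As a quick sketch of how these snapshots can be used later, live hosts can be pulled out of a .gnmap file with a one-liner; the file name and the two sample lines below are made up, but follow the greppable output format:

```shell
# Create a tiny sample .gnmap file (hypothetical content, simplified greppable format)
cat > sample.gnmap <<'EOF'
Host: 192.168.1.1 ()  Status: Up
Host: 192.168.1.10 ()  Status: Up
EOF

# Print the IP address (second field) of every host reported as Up
awk '/Status: Up/ {print $2}' sample.gnmap
```

Comparing the output of two runs, e.g. with diff, then shows hosts that appeared or disappeared between scans.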
It is also necessary to configure the script to run periodically; cron is ideal for that purpose. To achieve that, add the following entry to root's crontab:
0 */2 * * * full_path_and_name_to_your_script
Obviously, you'll have to replace full_path_and_name_to_your_script with the exact path and filename. With this entry, the script will run every two hours.

Thursday, March 21, 2013

Detecting hosts that cross connect two networks...

There are different scenarios in which some of your clients can be connected at the same time to two networks of different trust levels. This is dangerous because such hosts can effectively be used as a staging point for attackers or malware to "jump" from the less trusted network to the more trusted one.

For example, suppose that you have a protected internal wired LAN and, at the same time, a wireless LAN that is used for guests and allows unrestricted access to the Internet, as shown in the figure below. Someone from your internal, protected network might intentionally or accidentally connect to the wireless network too, and in that case he/she will short-circuit the two networks. If you thought you could use MAC addresses to detect such hosts, you can not, for the simple reason that on one network the host is connected using a wired Ethernet card with one MAC address, while on the second network it is connected using a WLAN card with another MAC address.

Hypothetical situation to illustrate how internal host might shortcut two networks of different trust levels
So, how do you detect hosts that short-circuit two networks? Actually, there is an easy and elegant way. Namely, you can send ARP requests on one network for IP addresses belonging to the other network. By default, the ARP module doesn't consult the routing tables to decide on which interface it is allowed to respond for a given address. Looking again at the figure above, this means that you can run the following command from the proxy host (the host above the firewall):
arping -I eth1 10.0.0.250
In that command I assume that eth1 is the interface connected to the AP. What will happen is that a broadcast ARP request is sent on the wireless network, and the client will respond with its wireless MAC address even though the requested IP address is not used on the wireless network.

So, I hope the idea is clear now. To detect whether some host cross-connects two networks, I send an ARP request from one host (i.e. the proxy host) for each possible IP address used on the other network (i.e. the local protected network in the figure above).

Note that it is possible to disable such behavior on Linux machines using the sysctl variable /proc/sys/net/ipv4/conf/*/arp_filter. You can find more information, for example, here.
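For reference, the current value can be read directly from /proc; 0 is the default and is exactly the behavior exploited above. Changing it requires root, so the write is only shown in a comment:

```shell
# 0 = the host replies for its addresses on any interface (default)
# 1 = it only replies on the interface the route points at
cat /proc/sys/net/ipv4/conf/all/arp_filter

# To enable filtering (as root): sysctl -w net.ipv4.conf.all.arp_filter=1
```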

nmap games

Now, there is another problem: how to scan the whole network without manually trying each possible IP address. The first solution is, obviously, to use nmap. Nmap is a great tool for network scanning, but in this case it has a problem. I tried to run it in the following way, but unsuccessfully:
# nmap -PR -Pn -e eth1 10.0.0.0/24
Starting Nmap 5.51 ( http://nmap.org ) at 2013-03-21 10:07 CET
nexthost: failed to determine route to 10.0.0.0
QUITTING!
The option -PR requires an ARP scan, -Pn disables the ping scan, and -e eth1 tells nmap to send packets via interface eth1. The problem is that I'm trying to scan the network 10.0.0.0/24 on interface eth1, and there is no route in the routing tables telling the kernel/nmap that this network is really connected to interface eth1. So, nmap refuses to scan those addresses. One solution is to temporarily add that route:
ip route add 10.0.0.0/24 dev eth1
But this isn't an option if a route already exists so that the hosts on the protected network can reach the proxy, i.e. if there is a route similar to the following one:
# ip ro sh
...
10.0.0.0/24 via 172.16.1.1 dev eth0
...
Again, I'm making a lot of assumptions here (the network between the proxy and the firewall, IP addresses, and interfaces), but I hope you get the point. The given route is in the routing tables and removing it isn't an option.

The next attempt was using the -sn switch, i.e. disabling the port scan:
# nmap -PR -Pn -sn -e eth1 10.0.0.0/24
Starting Nmap 5.51 ( http://nmap.org ) at 2013-03-21 10:07 CET
Well, now nmap sort of worked, because it reported that all the hosts are up. Using tcpdump I found that it didn't send anything to the network. The reason: nmap thinks this is a remote network, pings are disabled, ARP cannot be used, and finally, because of -Pn, it assumes all the hosts are up. So, I was back at the beginning.

Simple arping solution

Until I figure out how to force nmap to send ARP probes without worrying about routing tables, here is a simple solution using the arping command:
#!/bin/bash

TIMEOUT=4

for i in {1..254}
do
    if arping -q -f -w $TIMEOUT -I eth2 10.0.0.$i
    then
        echo "10.0.0.$i is up"
    fi
done
There are three problems with this solution:
  1. In case your network has a netmask other than /24 you'll have to change this script, i.e. it becomes a bit more complicated. How much depends on the network mask.
  2. This solution is slow. For example, to scan 254 addresses with a timeout of 4 seconds, it will take about 17 minutes to cover the whole address range, assuming no address is alive (which is actually the desired state of the network).
  3. Finally, the timeout value is a bit tricky to determine. The majority of responses are really quick, i.e. under a second, but some devices respond more slowly, e.g. when they have entered some kind of sleep state.
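Regarding the first point, the hard-coded {1..254} range can be replaced by a small helper that expands an arbitrary IPv4 prefix into its host addresses; this is a sketch, and the function name is mine:

```shell
#!/bin/bash

# Expand an IPv4 prefix (e.g. 10.0.0.0/29) into its host addresses,
# skipping the network and broadcast addresses.
cidr_hosts () {
    local spec=$1 a b c d i
    local bits=${spec#*/}
    IFS=. read -r a b c d <<< "${spec%/*}"
    local base=$(( (a << 24) + (b << 16) + (c << 8) + d ))
    local count=$(( 1 << (32 - bits) ))
    for (( i = 1; i < count - 1; i++ )); do
        local n=$(( base + i ))
        echo "$(( n >> 24 & 255 )).$(( n >> 16 & 255 )).$(( n >> 8 & 255 )).$(( n & 255 ))"
    done
}

# Example: a /29 yields six host addresses, 10.0.0.1 through 10.0.0.6
cidr_hosts 10.0.0.0/29
```

The loop above then becomes: for ip in $(cidr_hosts 10.0.0.0/24); do arping -q -f -w $TIMEOUT -I eth2 $ip && echo "$ip is up"; done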
Still, it is a satisfactory solution until I find a way to use nmap for this purpose. If you know how, please leave a comment.

Tuesday, March 12, 2013

Storing arpwatch output into database

arpwatch is a very useful tool which logs its output via syslog and also sends mail alerts. Unfortunately, this isn't configurable, i.e. out of the box arpwatch doesn't support any other way of logging. One approach would be to modify arpwatch to log into an SQL database, but this isn't straightforward. Namely, arpwatch is written in C and, besides, it's hard to know whether such a change would be accepted upstream (whoever that might be).

So, I decided on a different approach: I configured arpwatch to log its output into a log file and wrote a Python script, executed via cron, that transfers all the data into the database. Here is how I did it, along with all the scripts.

Configuring logging

The first step is to configure arpwatch to log its output into a separate file. This isn't possible in arpwatch itself, but it can be achieved by configuring syslog, or rsyslog to be more precise. CentOS 6 uses rsyslog, which allows just that. All you have to do is place a file named (for example) arpwatch.conf in the directory /etc/rsyslog.d with the following content:
if $programname == 'arpwatch' then /var/log/arpwatch.log
&~
Don't forget to restart rsyslog after that. This will write anything logged by the arpwatch binary into the file /var/log/arpwatch.log. All the different log lines that can appear are documented in arpwatch's manual page, so I won't replicate them here.

Configuring database

In my case I created a single table using the following SQL statement:
CREATE TABLE arpwatch (
  macaddr char(17) NOT NULL,
  ip_addr int(10) unsigned NOT NULL,
  state varchar(8) NOT NULL,
  timestamp datetime NOT NULL,
  oldmac char(17) DEFAULT NULL
)
I think it's pretty obvious what goes where. The only thing that might look strange is that I'm using INT(10) for the IP address, but that is because Snort also stores IP addresses that way, and I wanted to be compatible with it. Also, a primary key is missing, but for the time being I'm not using one.
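For illustration, the conversion between the dotted-quad form and the 32-bit integer stored in ip_addr can be sketched in shell (the helper name is mine; on the database side MySQL's INET_ATON/INET_NTOA do the same job):

```shell
# Dotted quad -> 32-bit integer, the same value INET_ATON() would store
ip2int () {
    local a b c d
    IFS=. read -r a b c d <<< "$1"
    echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

ip2int 192.168.1.10   # prints 3232235786
```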

Script

Here is the script that should be started from cron. For example, store it in the /usr/local/sbin directory and, to start it every 20 minutes, add the following line to root's crontab using the 'crontab -e' command:
*/20 * * * * /usr/local/sbin/arpwatchlog2sql.py
Note that the script expects a configuration file; here is a sample you'll have to modify. The script expects the configuration file to be in its current directory, but you can place it into /usr/local/etc and modify the CONFIGFILE line in the script accordingly.

Log rotation

Finally, you should make certain that the logs are properly handled, i.e. rotated along with the other logs. Since arpwatch logs via syslog, you have to modify rsyslog's log rotation configuration file, i.e. /etc/logrotate.d/syslog. There you'll see the log files maintained by rsyslog enumerated, one per line. Just add /var/log/arpwatch.log to that list and that should be it.

Thursday, January 3, 2013

Signing XML document using xmlsec1 command line tool

Suppose that you have an XML document you wish to sign. It turns out it's very easy to do, because there is the xmlsec library and, in particular, the xmlsec1 command line tool that's a standard part of the Fedora Linux distribution. The only problem is that it's very picky and not very informative when it comes to error reporting, and there are a lot of small details that can catch you. Since I had to sign a document, I spent some time figuring out how to do it. In the end I managed, and I'm writing it down here for future reference. Before I continue: you'll need a certificate and a key to be used for signing and verification. They are not the topic of this post, so I'll just give them to you: private key, certificate, and CA certificate.

OK, let's assume that you have the following XML document you wish to sign:
<?xml version="1.0" encoding="UTF-8"?>
<document>
  <firstelement attr1="attr1">
    Content of first element.
    <secondelement attr2="attr2">
      Content of the second element.
      <thirdelement attr3="attr3">
        And the content of the third element.
      </thirdelement>
    </secondelement>
  </firstelement>
</document>
Basically, you can take any XML document you wish. I'll suppose that this XML document is stored in the file tosign.xml. If you typed the XML document yourself, or if you just want to be sure, you can check whether the XML is well formed. The xmllint tool serves that purpose; just run it like this:
$ xmllint tosign.xml
If you don't get any error messages or warnings, the XML document is well formed. You can also check whether the document is valid by providing a schema or DTD via the appropriate command line switches.

In order to sign this document you have to add an XML Signature fragment to the XML file. That fragment defines how the document will be signed, what will be signed, and where the signature, along with the certificate, will be placed. The fragment has the following form:
<Signature xmlns="http://www.w3.org/2000/09/xmldsig#">
  <SignedInfo>
    <CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
    <SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/>
    <Reference>
      <Transforms>
        <Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/>
        <Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
      </Transforms>
      <DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
      <DigestValue />
    </Reference>
  </SignedInfo>
  <SignatureValue />
  <KeyInfo>
    <X509Data />
  </KeyInfo>
</Signature>
Note that this (quite verbose) fragment has to be placed somewhere within the root element. Now, let's sign the newly created document. To do so, invoke xmlsec1 like this (this is one line, in case it gets broken into two by the formatting):
xmlsec1 --sign --privkey-pem privkey.pem,cert.pem --output signed.xml tosign.xml
After this, the signed XML document will be in the file named signed.xml. Take a look at it: the placeholders within the signature fragment are filled with the signature data and with the certificate whose private key was used to sign the XML document.

Note that the signature itself is generated using the private key (privkey.pem) which, as its name suggests, has to stay private to the signer. Otherwise, anyone could forge the signature.

Now, to verify the signed XML document you have to specify a trusted CA that will be used to verify the signature. It has to be the certificate of the certificate authority (CA) that issued the signer's certificate. In my case that's cacert.pem, i.e.:
$ xmlsec1 --verify --trusted-pem cacert.pem signed.xml
OK
SignedInfo References (ok/all): 1/1
Manifests References (ok/all): 0/0
As you can see, the signature was verified OK. You can now try to change something in the XML document and see whether the verification still passes.

I'll mention one more thing before concluding this post. In the previous example the whole XML document was signed, but you can also sign only a part of it. To do so, you have to do two things: first, mark the element you wish to sign (its content will also be signed), and second, tell xmlsec1 to sign only that element.

The first step is accomplished by adding an attribute to the element that should be signed. Let's assume that in our case we only want secondelement to be signed. Modify the appropriate opening tag to have the following form:
<secondelement attr2="attr2" id="signonlythis">
Note that I added the attribute id, but basically any name can be used (unless you use some predefined schema or DTD).

The second step is to tell xmlsec1 that only this element should be signed. This is accomplished by modifying Reference element to have the following form:
<Reference URI="#signonlythis">
If you now try to sign this modified XML document using the command I gave above, you'll receive an error message:
$ xmlsec1 --sign --privkey-pem cert.key,cert.pem --output test_signed.xml tosign.xml
func=xmlSecXPathDataExecute:file=xpath.c:line=273:obj=unknown:subj=xmlXPtrEval:error=5:libxml2 library function failed:expr=xpointer(id('signonlythiselement'))
func=xmlSecXPathDataListExecute:file=xpath.c:line=356:obj=unknown:subj=xmlSecXPathDataExecute:error=1:xmlsec library function failed:
func=xmlSecTransformXPathExecute:file=xpath.c:line=466:obj=xpointer:subj=xmlSecXPathDataExecute:error=1:xmlsec library function failed:
func=xmlSecTransformDefaultPushXml:file=transforms.c:line=2405:obj=xpointer:subj=xmlSecTransformExecute:error=1:xmlsec library function failed:
func=xmlSecTransformCtxXmlExecute:file=transforms.c:line=1236:obj=unknown:subj=xmlSecTransformPushXml:error=1:xmlsec library function failed:transform=xpointer
func=xmlSecTransformCtxExecute:file=transforms.c:line=1296:obj=unknown:subj=xmlSecTransformCtxXmlExecute:error=1:xmlsec library function failed:
func=xmlSecDSigReferenceCtxProcessNode:file=xmldsig.c:line=1571:obj=unknown:subj=xmlSecTransformCtxExecute:error=1:xmlsec library function failed:
func=xmlSecDSigCtxProcessSignedInfoNode:file=xmldsig.c:line=804:obj=unknown:subj=xmlSecDSigReferenceCtxProcessNode:error=1:xmlsec library function failed:node=Reference
func=xmlSecDSigCtxProcessSignatureNode:file=xmldsig.c:line=547:obj=unknown:subj=xmlSecDSigCtxProcessSignedInfoNode:error=1:xmlsec library function failed:
func=xmlSecDSigCtxSign:file=xmldsig.c:line=303:obj=unknown:subj=xmlSecDSigCtxSigantureProcessNode:error=1:xmlsec library function failed:
Error: signature failed
Error: failed to sign file "tosign.xml"
The problem is that the URI attribute references the ID attribute of an element. But an ID attribute isn't recognized by name; it has to be declared in a DTD or in a schema, depending on what you have. In our case there is neither a schema nor a DTD, and thus the ID isn't recognized by xmlsec1. So, we have to tell it the name of the ID attribute, and that can be done in two ways. The first one is the command line switch --id-attr, so the command to sign this document is:
xmlsec1 --sign --privkey-pem privkey.pem,cert.pem --id-attr:id secondelement --output signed.xml tosign.xml
The name after the colon is the attribute name that is the ID. The default value is "id", but it can be anything else; if it is "id", it can be omitted. The argument to --id-attr is the element whose attribute should be treated as an ID. You should also be careful about namespaces: if they are used, then the namespace of the element has to be specified too, and not the shorthand but the full namespace name. Finally, note that XML is case sensitive!

The other possibility is to create a DTD file and give it as an argument to xmlsec1. In this case, the DTD should look like this (I'll assume it is the content of a file tosign.dtd):
<!ATTLIST secondelement id ID #IMPLIED>
And you would invoke xmlsec1 like this:
xmlsec1 --sign --privkey-pem privkey.pem,cert.pem --dtd-file tosign.dtd --output signed.xml tosign.xml
Note that you'll receive a lot of warnings (the DTD is incomplete) but the file will be signed. To check the signature, you again have to specify either the --dtd-file or the --id-attr option, e.g.:
xmlsec1 --verify --trusted-pem cacert.pem --id-attr:id secondelement signed.xml
Now, you can experiment to check that really only secondelement was signed and nothing else.

A final note: you have to put the XML signature fragment into the XML file you are signing yourself. What can confuse you (and confused me) is that there is an option, sign-tmpl, that suggests it adds this fragment, but it is very specific and used only for testing purposes.

Thursday, November 29, 2012

Few notes about sslstrip tool...

I decided to test the sslstrip tool. The idea was to use it to demonstrate to users that they should check whether there is https when they access a site where they have to type a password or some other sensitive data. To create the test network I used Windows 7 running within VMware Workstation, and using iptables I redirected traffic from the virtual machine to the local port 80 where I started sslstrip. But, no matter what I did, it didn't work. It seems that when VMware is used, iptables redirection doesn't work as expected; in other words, it seems that netfilter hooks aren't placed within the VMware network stack.

I managed to get around that issue by modifying the hosts file within Windows. Namely, you should open the file C:\Windows\System32\drivers\etc\hosts and add the following line there:
192.168.x.1     www.facebook.com facebook.com
The exact IP address is the one assigned to the vmnet8 interface on the host operating system. Now start Firefox as usual and type into the URL bar:
http://www.facebook.com
Note that I'm explicitly telling Firefox to use http, not https. Anyway, after I did it this way, everything worked as expected.

The next "problem" you migh have is that no matter what you do, the site you access automatically switches to https. The reason is HSTS. It is used by server to inform Web browser that it should be accessed only through SSL connections. For this reason sslstrip doesn't work with sites that use HSTS, like Google  and Twitter. But, it doesn't mean that those sites are completely protected. If the client is accessing those sites for the first time or the client never used https to access them, then HSTS can be prevented. The point is that HSTS information is transferred only via https connection. Anyway, to get around this clear history (i.e. go Tools then Clear Recent History... and select to clear everything).

Finally, I don't think it is necessary to enable forwarding in the Linux kernel for sslstrip to work, i.e. the following command is unnecessary:
echo 1 > /proc/sys/net/ipv4/ip_forward
Namely, the kernel isn't forwarding IP packets here: sslstrip acts as a proxy, so the kernel isn't doing any relaying. But in case you are diverting only a part of the traffic, e.g. only HTTP, while the kernel handles the rest, e.g. DNS, then forwarding in the kernel is necessary.

Tuesday, October 30, 2012

CFP: MIPRO ISS

Starting from this year I'm going to be a vice chair of the Information Systems Security event that is part of the larger MIPRO conference. The reason I took this role is that I believe a relevant security event is missing in this region and that this conference (I'll say conference, not event, from now on, and by that I'll refer to the ISS event) can fill the void. Furthermore, I believe there is a lot of room for improvement, which is of course mandatory if this conference is to become regional, and I have some ideas about what to do and how to do it. But it will take me some time to articulate what I intend to do. In the meantime, the CFP was published [PDF].

I don't find conferences appropriate for publishing finished work; journals are better for that purpose. Conferences are, on the other hand, ideal for presenting work in progress in order to solicit feedback so that, in the end, you improve the quality of your research. I especially invite students, undergraduate, graduate, and postgraduate, to submit the work from their diploma theses or PhDs. Findings of weaknesses (vulnerabilities) are also of great interest, and I invite you to present such findings at the conference. Of course, in that case you should first be careful to notify those in whose products you found the vulnerability, so that they have time to react.

Sunday, October 28, 2012

Research paper: "Before We Knew It..."

The paper I'll comment on in this post was presented at ACM's Conference on Computer and Communications Security held on Oct. 16-18, 2012. The paper tries to answer the following question: how long, on average, does a zero-day attack last before it is publicly disclosed? This is one of those questions which, once you see them, are so obvious, yet for some strange reason they didn't occur to you. And what's more, no one else tried to tackle them! At the same time this is a very important question from the perspective of security defense!

Anyway, having an idea is one thing; realizing it is completely another. And in this paper, the authors did both very well! In short, it is an excellent paper with a lot of information to digest, so I strongly recommend that anyone in the security field study it carefully. I'll put here some notes on what I found novel and/or interesting while reading it. Note that someone else may find other parts of the paper interesting or novel, so this post is definitely not a replacement for reading the paper yourself. Also, if you search a bit on the Internet you'll find that others have covered this paper as well.

Contributions

The contributions of this paper are:
  • Analysis of the dynamics and characteristics of zero-day attacks, i.e. how long it takes before zero-day attacks are discovered, how many hosts are targeted, etc.
  • A method to detect zero-day attacks based on correlating anti-virus signatures of malicious code that exploits certain vulnerabilities with a database of binary file downloads across 11 million hosts on the Internet.
  • Analysis of the impact of vulnerability disclosure on the number of attacks and their variations. In other words, what happens when a new vulnerability is disclosed, and how exactly does that affect the number and variety of attacks.
Findings and implications

The key finding of this research is that zero-day attacks are discovered, on average, 312 days after they first appear. In one case it took 30 months to discover the vulnerability that was exploited. The next finding is that zero-day attacks are, by themselves, quite targeted. There are exceptions, of course, but the majority of them hit only a few hosts. Finally, after a vulnerability is disclosed there is a surge in both new exploit variants and the number of attacks: the number of attacks can be five orders of magnitude higher after disclosure than before.

During their study, the authors found 11 previously unknown zero-day attacks. But be careful: this is not a claim that they found previously unknown vulnerabilities. The vulnerabilities themselves were known, but up to this point (i.e. this research) it wasn't known that those vulnerabilities had been used in zero-day attacks.

So, here is my interpretation of the implications of these findings. They mean that right now there are at least a dozen exploits in the wild that no one is aware of. So, if you are a high-profile company, you are in serious trouble; as usual, whether you are, or will be, attacked depends on many things. Also, when a vulnerability is disclosed and no patch is yet available, you have to be very careful, because at that point there is a surge of attacks.

Friday, July 20, 2012

Querying SNORT SQL database

When SNORT stores its data in an SQL database, an obvious question arises: how do you get the data you would otherwise have in the plain log files generated by SNORT? Here is what I have managed to work out so far (note that the post will be extended as I learn more). If you have a comment, addition or correction, please post it as a comment on this post. That especially applies to the SQL queries, as I'm not an expert in that area and some of them might be suboptimal.

Few introductory words


To try the following examples you need a working instance of a MySQL database and a SNORT that logs into the database (directly or via barnyard2). If you have that, run the mysql command-line client (or some equivalent) and select the SNORT database. You are now ready to go...

This post is written using schema version 107. To find out which version of schema you have, run the following query:
mysql> select * from `schema`;
+------+---------------------+
| vseq | ctime               |
+------+---------------------+
|  107 | 2012-07-10 10:20:52 |
+------+---------------------+
1 row in set (0.00 sec)
Note the backticks! Namely, SCHEMA is a MySQL reserved word and if you don't use backticks, MySQL will report a syntax error! Alternatively, you can use the databasename.tablename syntax to avoid the table name being treated as a reserved word.

Finally, because of screen size constraints, I'm limiting the output more often than not. Here is what that means in practice:
  1. In SELECT statement, I'm using LIMIT N keyword to get only first N rows.
  2. I'll explicitly enumerate fields to be returned in SELECT statement instead of using star (i.e. SELECT column1,column2 instead of SELECT *).
  3. I'll also use LEFT() function to limit number of characters retrieved from VARCHAR and similarly typed columns.

Examples of queries


The first thing you probably want to find out is how many alerts there were on a certain day, e.g. July 10th, 2012. This is easy; just run the following query:
mysql> select count(*) from event where timestamp between '2012-07-10' and '2012-07-11';
+----------+
| count(*) |
+----------+
|    12313 |
+----------+
1 row in set (0.01 sec)
Two things you should note about this query:
  1. All the generated events are stored in the table event. There is a column timestamp which stores timestamp when an event was generated.
  2. To select the date range I'm using the BETWEEN ... AND construct. I'm also shortening typing by providing only a date, with the time assumed to be 00:00:00, so this query catches basically everything on July 10th, 2012, as requested (strictly speaking, it also includes events with a timestamp of exactly 2012-07-11 00:00:00, since BETWEEN is inclusive on both ends).
I could equally well use the following query:
select count(*) from event where date(timestamp)='2012-07-10';
to get the same result, but in case I want a range instead of a single day, syntax using BETWEEN keyword is better.

To get the number of events generated on the current day, use the following query:
mysql> select count(*) from event where date(timestamp)=date(now());
+----------+
| count(*) |
+----------+
|      178 |
+----------+
1 row in set (0.13 sec)
Note that we use the NOW() function to get the current time and then extract the date part with the DATE() function.

While we are at the event table, here is its structure:
mysql> show columns from event;
+-----------+------------------+------+-----+---------+-------+
| Field     | Type             | Null | Key | Default | Extra |
+-----------+------------------+------+-----+---------+-------+
| sid       | int(10) unsigned | NO   | PRI | NULL    |       |
| cid       | int(10) unsigned | NO   | PRI | NULL    |       |
| signature | int(10) unsigned | NO   | MUL | NULL    |       |
| timestamp | datetime         | NO   | MUL | NULL    |       |
+-----------+------------------+------+-----+---------+-------+
4 rows in set (0.00 sec)
Only the timestamp column contains data in this table; the other columns are links to other tables, as follows:
  1. sid and cid are links to the packet data, i.e. IP/TCP/UDP headers and associated data. Those are placed in separate tables which we'll talk about later.
  2. signature is a link (foreign key) to the sig_id column of the signature table.
Ok, what about finding out the number of events per day? Easy again; the following SELECT statement will do it:
mysql> select count(*),date(timestamp) as count from event group by date(timestamp);
+----------+------------+
| count(*) | count      |
+----------+------------+
|    11689 | 2012-06-28 |
|    17904 | 2012-06-29 |
|     4353 | 2012-06-30 |
|     4322 | 2012-07-01 |
|    14198 | 2012-07-02 |
|     2977 | 2012-07-03 |
|    12313 | 2012-07-10 |
|    13014 | 2012-07-11 |
|     9126 | 2012-07-12 |
|     2642 | 2012-07-17 |
|     1527 | 2012-07-19 |
+----------+------------+
11 rows in set (0.07 sec)
I could use an ORDER BY clause to get the day with the largest number of alerts; without it, the rows are sorted by day. In this case I used the DATE() function to chop off the time part of the timestamp. Otherwise, I would get alerts broken down by individual timestamps rather than by day.
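To make the grouping and ordering concrete outside a live SNORT installation, here is a small self-contained sketch using Python's built-in sqlite3 module with an invented miniature event table (the rows are made up; SQLite's date() behaves like MySQL's DATE() for this purpose):

```python
import sqlite3

# Toy stand-in for the SNORT event table; the data is invented
# for illustration, not taken from a real sensor.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE event (sid INT, cid INT, signature INT, timestamp TEXT)")
con.executemany("INSERT INTO event VALUES (?, ?, ?, ?)", [
    (1, 1, 1, "2012-07-10 10:20:52"),
    (1, 2, 2, "2012-07-10 11:00:00"),
    (1, 3, 1, "2012-07-11 09:30:00"),
])

# Same idea as the MySQL query: group alerts by day, busiest day first.
query = """SELECT date(timestamp) AS day, COUNT(*) AS cnt
           FROM event GROUP BY day ORDER BY cnt DESC"""
for day, cnt in con.execute(query):
    print(day, cnt)
# prints:
# 2012-07-10 2
# 2012-07-11 1
```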

Ok, let's move on. What about finding all the types of events that occurred, or in other words, all the signatures? The signatures that SNORT generates are stored in the table signature, and a simple query on this table tells us which signatures have been triggered so far:
mysql> select sig_id,sig_name from signature;
+--------+-----------------------------------------------------------------------+
| sig_id | sig_name                                                              |
+--------+-----------------------------------------------------------------------+
|      1 | SCAN UPnP service discover attempt                                    |
|      2 | stream5: TCP Small Segment Threshold Exceeded                         |
|      3 | http_inspect: NO CONTENT-LENGTH OR TRANSFER-ENCODING IN HTTP RESPONSE |
|      4 | http_inspect: MESSAGE WITH INVALID CONTENT-LENGTH OR CHUNK SIZE       |
|      5 | stream5: Reset outside window                                         |
|      6 | ssh: Protocol mismatch                                                |
+--------+-----------------------------------------------------------------------+
6 rows in set (0.00 sec)
All in all, our SNORT instance generated six different signatures so far. The table signature has the following structure:
mysql> show columns from signature;
+--------------+------------------+------+-----+---------+----------------+
| Field        | Type             | Null | Key | Default | Extra          |
+--------------+------------------+------+-----+---------+----------------+
| sig_id       | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
| sig_name     | varchar(255)     | NO   | MUL | NULL    |                |
| sig_class_id | int(10) unsigned | NO   | MUL | NULL    |                |
| sig_priority | int(10) unsigned | YES  |     | NULL    |                |
| sig_rev      | int(10) unsigned | YES  |     | NULL    |                |
| sig_sid      | int(10) unsigned | YES  |     | NULL    |                |
| sig_gid      | int(10) unsigned | YES  |     | NULL    |                |
+--------------+------------------+------+-----+---------+----------------+
7 rows in set (0.00 sec)
The columns are:
  1. sig_id is primary key of this table.
  2. sig_name is textual representation of signature.
  3. sig_class_id is a foreign key into the sig_class table, which holds the signature's classification.
  4. sig_priority is the priority assigned to the signature.
  5. sig_rev is the revision number of the rule.
  6. sig_sid is SNORT's own rule ID (the sid from the rule).
  7. sig_gid is SNORT's generator ID (the gid from the rule).
Ok, the next thing you might want to know is how many times each alert was generated. To achieve this, use the following SQL query:
mysql> select sig_id,left(sig_name,30),count(*) from signature as s, event as e where s.sig_id=e.signature group by sig_name;
+--------+--------------------------------+----------+
| sig_id | left(sig_name,30)              | count(*) |
+--------+--------------------------------+----------+
|      4 | http_inspect: MESSAGE WITH INV |      109 |
|      3 | http_inspect: NO CONTENT-LENGT |      198 |
|      1 | SCAN UPnP service discover att |    55440 |
|      6 | ssh: Protocol mismatch         |     2360 |
|      5 | stream5: Reset outside window  |    33698 |
|      2 | stream5: TCP Small Segment Thr |      971 |
+--------+--------------------------------+----------+
6 rows in set (0.23 sec)
We had to join two tables, signature and event. As you can see, I got the specific signatures with their counts. Furthermore, I could order them so that the most frequent ones are on top (or bottom). Also, note that I'm using the LEFT() function to make the output short enough to fit this post.
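The same join can be tried without a SNORT database at hand; below is a sketch using Python's sqlite3 module with invented sample rows for the two tables:

```python
import sqlite3

# Minimal stand-ins for the signature and event tables (invented data),
# showing the same signature/event join and per-signature count.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE signature (sig_id INTEGER PRIMARY KEY, sig_name TEXT)")
con.execute("CREATE TABLE event (signature INT, timestamp TEXT)")
con.executemany("INSERT INTO signature VALUES (?, ?)",
                [(1, "SCAN UPnP service discover attempt"),
                 (6, "ssh: Protocol mismatch")])
con.executemany("INSERT INTO event VALUES (?, ?)",
                [(1, "2012-07-20 10:00:00"),
                 (1, "2012-07-20 10:05:00"),
                 (6, "2012-07-20 11:00:00")])

query = """SELECT s.sig_id, s.sig_name, COUNT(*)
           FROM signature AS s JOIN event AS e ON s.sig_id = e.signature
           GROUP BY s.sig_name ORDER BY COUNT(*) DESC"""
for row in con.execute(query):
    print(row)
```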

Ok, what about finding the number of signatures generated on a specific day, say, today? This is the same as the previous query, with one more condition added: rows from the event table are taken into account only if their timestamp is from today:
mysql> select sig_id,left(sig_name,30),count(*) from signature as s, event as e where s.sig_id=e.signature and date(e.timestamp)=date(now()) group by sig_name;
+--------+--------------------------------+----------+
| sig_id | left(sig_name,30)              | count(*) |
+--------+--------------------------------+----------+
|      6 | ssh: Protocol mismatch         |      226 |
|      5 | stream5: Reset outside window  |        2 |
|      2 | stream5: TCP Small Segment Thr |       40 |
+--------+--------------------------------+----------+
3 rows in set (0.14 sec)
Easy; the only difference from the previous query is the added condition date(e.timestamp)=date(now()). Now, let us move on. Suppose we want to know which hosts generated the packets that triggered alerts. To do that we have to include the table iphdr in the query; it contains data from the IP header of each captured packet. So, run the following SELECT statement:
mysql> select signature,count(*) as cnt,inet_ntoa(ip_src) from event,iphdr where event.cid=iphdr.cid and event.sid=iphdr.sid group by ip_src order by cnt;
+-----------+-------+-------------------+
| signature | cnt   | inet_ntoa(ip_src) |
+-----------+-------+-------------------+
|         3 |     1 | 192.168.1.44      |
|         5 |     1 | 192.168.1.89      |
|         5 |     1 | 192.168.1.27      |
|         5 |     1 | 192.168.1.5       |
|         5 |     1 | 192.168.1.120     |
|         5 |     1 | 192.168.0.21      |
+-----------+-------+-------------------+
6 rows in set (0.0 sec)
Ok, I now have the source IP addresses that each triggered a total of cnt alerts. Note that IP addresses are stored in decimal form, so they have to be converted into dotted form using MySQL's INET_NTOA() function.
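The same conversion can be done outside the database as well. Here is a sketch of INET_NTOA()/INET_ATON() equivalents in Python (the example address is arbitrary): the database stores an IPv4 address as an unsigned 32-bit integer in network byte order.

```python
import socket
import struct

def inet_ntoa(n: int) -> str:
    """Decimal form -> dotted form, like MySQL's INET_NTOA()."""
    return socket.inet_ntoa(struct.pack("!I", n))

def inet_aton(addr: str) -> int:
    """Dotted form -> decimal form, like MySQL's INET_ATON()."""
    return struct.unpack("!I", socket.inet_aton(addr))[0]

print(inet_ntoa(3232235876))       # → 192.168.1.100
print(inet_aton("192.168.1.100"))  # → 3232235876
```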

Here is the structure of iphdr table:
mysql> show columns from iphdr;
+----------+----------------------+------+-----+---------+-------+
| Field    | Type                 | Null | Key | Default | Extra |
+----------+----------------------+------+-----+---------+-------+
| sid      | int(10) unsigned     | NO   | PRI | NULL    |       |
| cid      | int(10) unsigned     | NO   | PRI | NULL    |       |
| ip_src   | int(10) unsigned     | NO   | MUL | NULL    |       |
| ip_dst   | int(10) unsigned     | NO   | MUL | NULL    |       |
| ip_ver   | tinyint(3) unsigned  | YES  |     | NULL    |       |
| ip_hlen  | tinyint(3) unsigned  | YES  |     | NULL    |       |
| ip_tos   | tinyint(3) unsigned  | YES  |     | NULL    |       |
| ip_len   | smallint(5) unsigned | YES  |     | NULL    |       |
| ip_id    | smallint(5) unsigned | YES  |     | NULL    |       |
| ip_flags | tinyint(3) unsigned  | YES  |     | NULL    |       |
| ip_off   | smallint(5) unsigned | YES  |     | NULL    |       |
| ip_ttl   | tinyint(3) unsigned  | YES  |     | NULL    |       |
| ip_proto | tinyint(3) unsigned  | NO   |     | NULL    |       |
| ip_csum  | smallint(5) unsigned | YES  |     | NULL    |       |
+----------+----------------------+------+-----+---------+-------+
14 rows in set (0.00 sec)
The sid and cid columns connect this table to the event table, as well as to the tcphdr and udphdr tables. The rest of the columns contain data from the IP header. For example, ip_ver contains the IP version, so you can check how many different IP versions appear in the packets that triggered alerts:
mysql> select ip_ver,count(*) from iphdr group by ip_ver;
+--------+----------+
| ip_ver | count(*) |
+--------+----------+
|      4 |    92445 |
+--------+----------+
1 row in set (0.04 sec)
In my case, it was only IPv4. We can do the same with the other fields, for example to see which transport layer protocols were observed:
mysql> select ip_proto,count(*) from iphdr group by ip_proto;
+----------+----------+
| ip_proto | count(*) |
+----------+----------+
|        6 |    43076 |
|       17 |    49785 |
+----------+----------+
2 rows in set (0.04 sec)
Obviously, only two: UDP (protocol number 17) and TCP (protocol number 6). BTW, you can look those numbers up in the /etc/protocols file on any Linux machine, or you can go to IANA.
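If you prefer not to open /etc/protocols, the standard socket module exposes the same IANA protocol numbers as constants; a tiny sketch:

```python
import socket

# The numbers in ip_proto are the standard IANA protocol assignments,
# which the socket module also provides as IPPROTO_* constants.
names = {
    socket.IPPROTO_ICMP: "icmp",  # 1
    socket.IPPROTO_TCP: "tcp",    # 6
    socket.IPPROTO_UDP: "udp",    # 17
}
print(names[6], names[17])  # → tcp udp
```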

To see all source IP addresses that triggered alerts we can use the following query:
mysql> select inet_ntoa(ip_src),count(*) from iphdr group by ip_src limit 5;
+-------------------+----------+
| inet_ntoa(ip_src) | count(*) |
+-------------------+----------+
| 10.61.34.152      |       20 |
| 85.214.67.247     |        2 |
| 134.108.44.54     |        2 |
| 192.168.5.71      |       10 |
| 192.168.102.150   |     2130 |
+-------------------+----------+
5 rows in set (0.00 sec)
Now, it can turn out that there are some IP addresses we didn't expect, and we want to know when and what happened. Take, for example, the address 10.61.34.152 from the above output; let's see what this address generated:
mysql> select inet_ntoa(ip_src),inet_ntoa(ip_dst),count(*) from iphdr where inet_ntoa(iphdr.ip_src)='10.61.34.152' group by ip_dst;
+-------------------+-------------------+----------+
| inet_ntoa(ip_src) | inet_ntoa(ip_dst) | count(*) |
+-------------------+-------------------+----------+
| 10.61.34.152      | 239.255.255.250   |       20 |
+-------------------+-------------------+----------+
1 row in set (0.03 sec)
Using this query we see that all the packets were destined for the address 239.255.255.250. A bit of grouping by date:
mysql> select date(timestamp),count(*) from event,iphdr where (event.cid,event.sid)=(iphdr.cid,iphdr.sid) and inet_ntoa(ip_src)='10.61.34.152' group by date(timestamp);
+-----------------+----------+
| date(timestamp) | count(*) |
+-----------------+----------+
| 2012-07-02      |       20 |
+-----------------+----------+
1 row in set (0.03 sec)
we see that all the events were generated on the same day. And what was the alert?
mysql> select signature.sig_name,count(*) from signature,event,iphdr where (event.cid,event.sid)=(iphdr.cid,iphdr.sid) and inet_ntoa(ip_src)='10.61.34.152' and event.signature=signature.sig_id group by sig_id;
+------------------------------------+----------+
| sig_name                           | count(*) |
+------------------------------------+----------+
| SCAN UPnP service discover attempt |       20 |
+------------------------------------+----------+
1 row in set (0.84 sec)
Well, they were all UPnP service discovery requests.

One thing that is interesting, at least to me, is who sent ICMP Echo Request messages on the network. This is easy to determine using the following query:
mysql> select inet_ntoa(iphdr.ip_src) as SRC,inet_ntoa(iphdr.ip_dst) as DST,timestamp from event,iphdr,icmphdr where (icmphdr.sid,icmphdr.cid)=(event.sid,event.cid) and (iphdr.sid,iphdr.cid)=(event.sid,event.cid) and icmp_type=8 limit 3;
+-------------+--------------+---------------------+
| SRC         | DST          | timestamp           |
+-------------+--------------+---------------------+
| 192.168.1.8 | 192.168.1.55 | 2012-07-20 11:05:01 |
| 192.168.1.8 | 192.168.1.55 | 2012-07-20 11:05:01 |
| 192.168.1.8 | 192.168.1.55 | 2012-07-20 11:05:02 |
+-------------+--------------+---------------------+
3 rows in set (0.00 sec)

Obviously, the host with address 192.168.1.8 sent probes to the host 192.168.1.55.

So much for now. Detailed info about DB schema used by SNORT can be found on this link.

In the end, my impression is that it is definitely much easier and more efficient to gather statistics using an SQL database than plain files, but that it is best to use a tool that has all these queries predefined and to fall back to raw SQL only when you have some very specific requirement.

Wednesday, July 18, 2012

Research paper: "Lessons from the PSTN for Dependable Computing"

I came across this paper while reading about self-healing systems. The authors of the paper (Enriquez, Brown, Patterson) analyze FCC disruption reports in order to find the causes of faults in the PSTN. The PSTN is a large and complex network, and the experience of maintaining it can certainly help a lot in maintaining Internet infrastructure.

I'll emphasize the following key points from this paper that I find interesting:
  • PSTN operators are required to file a disruption report when 30,000 people are affected and/or the disruption lasts longer than 30 minutes. There is a screenshot of the report form in the paper, even though it can probably be downloaded from the FCC site. It seems, however, that the reports themselves are not publicly available.
  • They analyzed reports from the year 2000. The paper references an older, similar analysis.
  • They used three metrics for comparison: number of outages, customer minutes and blocked calls. Number of outages is a simple count of outages; customer minutes is the product of the outage duration and the total number of customers affected (regardless of whether they tried to make a call during the disruption); blocked calls is the product of the duration and the number of customers who actually tried to make a call during the disruption.
  • The prevailing cause of disruptions is human error, more than 50% in every case. Human errors are further subdivided into those made by persons affiliated in some way with the operator and those made by others; those affiliated with the operator cause the larger share of disruptions.
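To make the two duration-based metrics concrete, here is a toy calculation with invented numbers (not data from the paper):

```python
# Each tuple: (duration in minutes, customers affected,
#              customers who actually tried to call during the outage).
# All figures below are made up for illustration.
outages = [
    (45, 40_000, 1_200),
    (90, 35_000, 3_500),
]

num_outages = len(outages)
customer_minutes = sum(d * affected for d, affected, _ in outages)
blocked_calls = sum(d * tried for d, _, tried in outages)

print(num_outages)       # → 2
print(customer_minutes)  # → 45*40000 + 90*35000 = 4950000
print(blocked_calls)     # → 45*1200 + 90*3500 = 369000
```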

ASLR to extreme

I was reading about Artificial Immune Systems (more about that in another post), and one of the papers stated that biological systems increase resiliency through diversity. As a counterexample from computer networks, it noted that Internet Explorer (at the time the paper was written) had a 90% market share. It's obvious that when something hits IE, it hits almost the whole Internet. This isn't diversity by any standard.

I think we have such fundamental problems with security that we need some new, radical solution. We are probably a long way from that solution, but it occurred to me that this is exactly what is necessary: diversity that prevents attackers from compromising single computers and, through them, large parts of the Internet. Still, it is hard to expect there will ever be N producers of operating systems, then N of browsers, etc. Producing those isn't easy; it takes a long time and huge resources. Now, biological systems are much, much older, and theoretically in some distant future there could be such diversity. IMHO this is questionable, and as I said, it lies in some theoretical distant future, which is beside the point. What we need is something that works now.

If you think about it a bit, what we need is mutation that changes computer systems from the bottom up in unpredictable ways. At the bottom I'm thinking of the parts of a single application; at the top, of complex systems consisting of computers and networks. Furthermore, this mutation has to be specific to each system, so that there are hardly two similar systems in existence. So, for example, the computer you work on wouldn't be similar to any other computer in use and, as you use it, it would evolve and mutate.

Now, why did I mention Address Space Layout Randomization (ASLR) in the title? Because it seems to me to be a step in the direction of totally mutating everything. Namely, ASLR mutates the address space of a process, making it unpredictable for attackers and making each system different. Unfortunately, this mutation is restricted because it is too coarse-grained: whole libraries are moved, but not individual functions, or even the blocks of code from which functions are built.
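As a purely illustrative toy, not a real loader, the difference between coarse- and fine-grained randomization can be sketched like this: coarse ASLR picks one random base for a whole library, while a fine-grained scheme would also shuffle the functions inside it (all names, offsets and the seed below are invented):

```python
import random

functions = ["open", "read", "write", "close"]  # hypothetical library
rng = random.Random(42)                         # stands in for a per-system seed

# Coarse ASLR: one random, page-aligned base for the whole "library".
base = rng.randrange(0x1000, 0x100000, 0x1000)

# Fine-grained scheme: additionally shuffle the order of functions
# within the library, so even relative offsets become unpredictable.
layout = list(functions)
rng.shuffle(layout)

addresses = {name: base + i * 0x100 for i, name in enumerate(layout)}
for name, addr in addresses.items():
    print(f"{name}: {addr:#x}")
```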

Of course there are problems. For a start, similarity is key to the maintenance of systems: companies with a large number of computers try hard to make them identical, just to lower maintenance costs. Not only that, developers count on similarity to be able to reproduce bugs and, consequently, to correct them. So, these requirements would either have to be kept in such a new system (which is partly contradictory) or new ways of achieving the same effect (i.e. maintainability) would have to be found.

Finally, the mutation has to be dynamic. Even if an attacker gets into one system, or part of a system, he needs time to discover the other parts. If mutation is quick enough, the knowledge the attacker obtains will be worthless before he manages to use it. Not only that, but what he has already achieved will potentially evaporate soon as well.

Sunday, June 10, 2012

Stuxnet... the origin... and implications...

Wow! I was reading Jeffrey Carr's post in which he admits being wrong about Stuxnet's origin, and he references this article that made him change his mind. It is a definitely fascinating read about Stuxnet: how it was conceived, developed and used. I recommend that you take the time and read it! Namely, for several years now you could find accusations all over the Internet that the Chinese government is attacking western companies and governments. But this shows that other governments aren't sitting and doing nothing. Moreover, this article shows that malware has been brought to a new level of use, in which it serves as an attack weapon; to cite the article: somebody crossed the Rubicon.

I suppose this will have a huge impact and lot of implications:
  1. Russia is pushing towards some kind of international treaty that would regulate the use of cyberweapons. One of the advocates of this is Kaspersky, but there are also critics. Anyway, this article gives a push to the Russian government's intentions.
  2. What impact will this have on closed-source software? No one can ever be sure what's in there, especially if the company producing the software is under the control of a foreign country. Now, Microsoft has already given access to its source code to, I think, India among others, but doesn't this also mean that the Indian secret services can find bugs and use them against other countries? Sounds like Games Without Frontiers...
  3. Antivirus software, NIDS, HIDS and the usual protections don't help here! They rely on mass infection: someone gets infected, which allows antivirus companies to analyze the threat, create signatures and update the antivirus software so that the huge majority is protected. These, on the other hand, are custom-made attack programs.
  4. With the backing of government agencies, these attacks can be very sophisticated. But note that anyone with enough resources (i.e. rich enough) can do the same.
All in all, very interesting and far reaching developments...

Monday, February 27, 2012

Nortel security breach...

This story is an unbelievable example of doing security totally wrong and of being totally irresponsible toward customers and shareholders, but also toward one's own country!

What happened is that attackers (supposedly, but very probably, from China) obtained the passwords of seven of Nortel's top executives and used them to gain access to the corporate network. Once in, they installed rootkits that allowed them to monitor everything that happened within the company! After some employees detected that there was a breach, the top executives apparently didn't do anything to stop it, assess the damage and introduce controls to prevent a recurrence. Not only that, but (according to some comments) they were the first ones to blame for the breach, as the ones directly responsible because of their careless behavior.

What is even more serious is that Nortel, like any other company, has an obligation to keep its customers safe! Namely, with a breach of this size it is highly likely that Nortel's products were compromised and that the attackers had access to them. By gaining access to those products, the attackers almost certainly gained access to vulnerabilities which allowed them to endanger Nortel's customers too! This is unbelievable, and I have no words to express how I feel about it. It's like being in the Twilight Zone!

Shareholders were victims as well, because top management didn't properly protect the company's assets and thus indirectly incurred damages to the company!

I believe there have to be laws regulating such behavior, as it damages everyone, as I tried to explain. Without such laws, nothing can be done to prosecute those responsible!
