
Wednesday, July 19, 2017

When superstitions are good...

I just read the following paper:
Nunn, Nathan, and Raul Sanchez de la Sierra. Why Being Wrong can be Right: Magical Warfare Technologies and the Persistence of False Beliefs. No. w23207. National Bureau of Economic Research, 2017.
and I find it very interesting. Basically, it is about why superstitions are beneficial in certain cases. The paper analyzes villages in the Democratic Republic of the Congo. Due to the unstable political situation, there is a lot of violence by different armed groups that regularly attack villages. To protect themselves, people in some villages believe they can be made resistant to bullets by strictly following a special magical procedure. The belief is obviously false, but when someone dies, the death is attributed to a failure to follow the magical procedure correctly. This sounds crazy, but the effect is interesting: while the belief hurts individuals, it helps the collective, since more people are willing to engage in defending the village, with the end result of two years of peace in the specific village brought as an example.

The key is that an individual's utility increases when everyone contributes to defense, but decreases with his own investment, since fighting is personally risky. This is a classic free-rider problem: each individual invests less than he could, and the collective suffers. The superstition encourages everyone to give their best, thus helping the collective. This is brilliant!
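To make the free-rider logic concrete, here is a minimal toy model in Python. It is my own illustration, not the authors' formal model, and the parameter values are assumptions: each villager's payoff grows with the total defense effort but shrinks with his own costly effort, so the selfish optimum is to shirk; a belief that lowers the perceived personal cost of fighting flips that calculation.

    # Toy public-goods model of village defense (my own illustration,
    # not the formal model from the Nunn & Sanchez de la Sierra paper).
    B = 1.0   # marginal benefit to each villager per unit of total defense effort
    C = 1.5   # perceived personal cost per unit of own effort (risk of death)

    def payoff(own_effort, others_effort, perceived_cost=C):
        """Utility: benefit from total defense minus perceived personal cost."""
        return B * (own_effort + others_effort) - perceived_cost * own_effort

    # Without the superstition: contributing lowers my payoff, so I shirk.
    print(payoff(0, 19), payoff(1, 19))   # 19.0 vs 18.5 -> shirking wins

    # With the superstition, the perceived cost drops (I believe I'm
    # bulletproof), so contributing wins and everyone fights.
    print(payoff(0, 19, perceived_cost=0.5), payoff(1, 19, perceived_cost=0.5))
    # 19.0 vs 19.5 -> contributing wins, and the collective is better off.

With these numbers, each extra unit of effort adds 1.0 to everyone's payoff but costs its contributor 1.5, so without the belief nobody fights, even though a village of fully committed defenders would make everyone much better off.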

This result provokes some thinking about whether some superstitions I find annoying, religion for example, are actually beneficial.

Sunday, December 13, 2015

Research paper: "Development of a Cyber Warfare Training Prototype for Current Simulations"

One of the research directions I'm taking is the simulation of security incidents and cybersecurity conflicts. So, I'm searching for research papers on that particular topic, and one of them is "Development of a Cyber Warfare Training Prototype for Current Simulations". I found out about this paper via an announcement made on the SCADASEC mailing list. The interesting thing is that the paper couldn't be found on Google Scholar at the time this post was written. Anyway, it was presented at the Fall 2014 Simulation Interoperability Workshop organized by the Simulation Interoperability Standards Organization (SISO). All papers presented at the workshop are freely available on SISO's Web pages. Judging by the papers presented, the workshop is mainly oriented towards military applications of simulation. Note that cybersecurity simulations have only started to appear, while the use of simulation in the military is an old thing.

Reading the paper "Development of a Cyber Warfare Training Prototype for Current Simulations" was a valuable experience because I encountered for the first time a number of terms specific to the military domain. There are also references worth taking a look at, which I'm going to do.

In the end, I had the following conclusions about the paper:
  1. The paper talks about integrating the cyber domain into existing combat simulation tools, so the authors are not interested in a domain-specific, isolated cybersecurity simulation tool. It might be inferred that this follows from US military requirements.
  2. When the authors talk about cyber warfare training, what they are basically describing is a cyber attack on the command and control (C&C) infrastructure used on a battlefield.
  3. The main contribution of the paper is the description of a requirements-gathering phase based on use cases (section 3) and the proposed components that would allow implementation of the proposed scenarios (section 4).

Sunday, October 28, 2012

Research paper: "Before We Knew It..."

The paper I'll comment on in this post was presented at ACM's Conference on Computer and Communications Security, held October 16-18, 2012. The paper tries to answer the following question: how long, on average, does a zero-day attack last before the vulnerability is publicly disclosed? This is one of those questions that seem so obvious once you see them, yet for some strange reason they never occurred to you, and, what's more, no one else tried to tackle them. At the same time, this is a very important question from a security defense perspective!

Anyway, having an idea is one thing; realizing it is completely another. In this paper, the authors did both very well! In short, it is an excellent paper with a lot of information to digest, so I strongly recommend anyone in the security field to study it carefully. I'll put here some notes on what I found novel and/or interesting while reading it. For someone else, other parts of the paper may be interesting or novel, so this post is definitely not a replacement for reading the paper yourself. Also, if you search a bit on the Internet, you'll find that others have covered this paper too.

Contributions

The contributions of this paper are:
  • An analysis of the dynamics and characteristics of zero-day attacks, i.e. how long it takes before zero-day attacks are discovered, how many hosts are targeted, etc.
  • A method to detect zero-day attacks by correlating anti-virus signatures of malicious code that exploits certain vulnerabilities with a database of binary file downloads across 11 million hosts on the Internet (see the sketch after this list).
  • An analysis of the impact of vulnerability disclosure on the number of attacks and their variants. In other words, what happens when a new vulnerability is disclosed, and how exactly does that impact the number and variants of attacks.
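To illustrate the detection idea, here is a minimal Python sketch. All data structures and names are my own assumptions for illustration; the authors' actual pipeline works over large-scale anti-virus telemetry, not anything this simple. The idea: a downloaded file whose hash matches the signature of an exploit for some vulnerability, downloaded before that vulnerability's public disclosure date, is evidence of a zero-day attack.

    from datetime import date

    # Hypothetical toy data for illustration only.
    disclosure = {"CVE-A": date(2011, 6, 1)}    # vulnerability -> disclosure date
    exploit_hashes = {"deadbeef": "CVE-A"}      # malicious file hash -> exploited CVE
    downloads = [                               # (host, file hash, download date)
        ("host1", "deadbeef", date(2010, 9, 3)),
        ("host2", "deadbeef", date(2011, 8, 20)),
    ]

    def zero_day_evidence(downloads, exploit_hashes, disclosure):
        """Flag downloads of known exploit binaries that happened BEFORE disclosure."""
        hits = []
        for host, file_hash, when in downloads:
            cve = exploit_hashes.get(file_hash)
            if cve and when < disclosure[cve]:
                hits.append((host, cve, when))
        return hits

    print(zero_day_evidence(downloads, exploit_hashes, disclosure))
    # [('host1', 'CVE-A', datetime.date(2010, 9, 3))] -> CVE-A was being
    # exploited in the wild months before it was publicly disclosed.
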
Findings and implications

The key finding of this research is that zero-day attacks are discovered, on average, 312 days after they first appear; in one case it took 30 months to discover that the vulnerability was being exploited. The next finding is that zero-day attacks are, by themselves, quite targeted: there are exceptions, of course, but the majority of them hit only a few hosts. Finally, after a vulnerability is disclosed there is a surge of new exploit variants as well as of attacks; the number of attacks can be up to five orders of magnitude higher after disclosure than before.

During their study, the authors found 11 zero-day attacks that were not previously known. But be careful: this is not a statement that they found vulnerabilities that were not previously known. It means the vulnerabilities were known, but up to this point (i.e. this research) it wasn't known that they had been used in zero-day attacks.

So, here is my interpretation of the implications of these findings. It means that right now there are at least a dozen exploits in the wild that no one is aware of. So, if you are a high-profile company, you are in serious trouble; as usual, whether you are, or will be, attacked depends on many things. Also, when a vulnerability is disclosed and there is no patch available, you have to be very careful, because at that point there is a surge of attacks.

Wednesday, July 18, 2012

Research paper: "Lessons from the PSTN for Dependable Computing"

I came across this paper while reading about self-healing systems. The authors (Enriquez, Brown, Patterson) analyze FCC outage reports in order to find the causes of faults in the PSTN. Additionally, the PSTN is a large and complex network, and the experience of maintaining it can certainly help a lot in maintaining the Internet infrastructure.

I'll emphasize the following key points from this paper that I find interesting:
  • PSTN operators are required to file an outage report when 30,000 or more customers are affected and/or the disruption lasts longer than 30 minutes. There is a screenshot of the report form in the paper, even though it can probably be downloaded from the FCC site. But it seems that the reports themselves are not publicly available?
  • They analyzed reports from the year 2000. The paper references an older, similar analysis.
  • They used three metrics for comparison: number of outages, customer minutes, and blocked calls. Number of outages is a simple count of outages; customer minutes is the product of the outage duration and the total number of customers affected (regardless of whether they tried to make a call during the disruption); blocked calls is the product of the duration and the number of customers who actually tried to make a call during the disruption (see the sketch after this list).
  • The prevailing cause of disruption is human error, more than 50% in every case. Human error is further subdivided into errors made by people affiliated in some way with the operator and errors made by people who are not; those affiliated with the operator cause the larger number of disruptions.
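To make the three metrics concrete, here is a small Python sketch. The record format and numbers are my own assumptions; the formulas follow the descriptions above.

    # Computing the paper's three comparison metrics from toy outage records.
    outages = [
        # (duration in minutes, customers affected, customers who tried to call)
        (45, 50_000, 4_000),
        (120, 30_000, 2_500),
    ]

    num_outages = len(outages)
    customer_minutes = sum(dur * affected for dur, affected, _ in outages)
    blocked_calls = sum(dur * callers for dur, _, callers in outages)

    print(num_outages)        # 2
    print(customer_minutes)   # 45*50000 + 120*30000 = 5,850,000
    print(blocked_calls)      # 45*4000 + 120*2500 = 480,000
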
