Thursday, May 18, 2017

Using astrology to protect from APTs

When you saw the title, your reaction was probably: WTF?! Using astrology for APT detection, that's totally crazy! But the sad fact is that it isn't so crazy after all, because a large number of products offered on the market claim to protect you from APTs in the same way astrology claims it can predict your future.

To elaborate on this claim a bit more, the key question is: how do you know the protection actually works? We can rephrase this question into another one: what process did the manufacturers use to prove, beyond reasonable doubt, that their products are capable of detecting APTs? Did they publish anywhere what they did and how? Also, since nothing is perfect, it's obvious that no solution will detect all cases. In how many cases will the products detect APTs and, if they do provide such numbers, how did they arrive at them? What is the precision, and what is the recall? None of this is published, so it is something you have to take on trust, not on numbers and experiments.
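To make the two metrics concrete, here is a minimal sketch of how precision and recall would be computed from a labelled test set; the counts below are purely hypothetical, since no vendor publishes them, and Python is just my choice for illustration.

```python
# Minimal sketch: precision and recall of a hypothetical APT detector.
# The counts are invented for illustration; a real evaluation would need
# a labelled corpus of attack and benign activity.

true_positives = 42    # real APT activity the product flagged
false_positives = 18   # benign activity wrongly flagged
false_negatives = 23   # real APT activity the product missed

precision = true_positives / (true_positives + false_positives)
recall = true_positives / (true_positives + false_negatives)

print(f"precision = {precision:.2f}")  # share of alerts that were real attacks
print(f"recall    = {recall:.2f}")     # share of real attacks that were caught
```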

Even more, in astrology, if things turn out differently, the person doing the prediction changes the story somehow, for example by claiming he or she didn't know some crucial piece of information which made the prediction wrong, or they predict in such a way that no matter what happens, it will be true. In other words, you can never falsify astrology, and that is the main reason it isn't science. But the same reasoning goes for products that protect you from APTs, too. Whether they protect you or not, you have no way of knowing whether it was pure luck or, in the case of detection, whether it was something deliberately designed into the product.

So, to conclude, I think that the majority of products for APT protection are nothing more than an application of astrology to cyber security!

Friday, January 6, 2017

A few thoughts about systemd and human behavior

I was just reading the comments on a Hacker News post about systemd. Systemd, as you might know, is a replacement for the venerable init system. Anyway, reading the comments meant reading the same story over and over again. Namely, there are those strongly for and those strongly against systemd, in some cases based on arguments (valid or not) and in other cases based on feelings. In this post I won't go into technical details about systemd; I'll concentrate on the human behavior, which is what interests me most. And yes, if you think I'm pro systemd, then you're right, I am!

Now, what I think is the key characteristic of people is that they become too religious about something and thus unable to critically evaluate that particular thing. It has happened many times, and in some cases the transition from controversy to dogma was short, while in other cases it took several generations of human lives. Take the Christian religion as an example! It also started as something controversial, but ended as a dogma that isn't allowed to be questioned. Or something more technical, the ISO/OSI 7-layer model. It started as a controversy - how many layers, 5, 6, or 7? We know the result of this controversy, and after a short period of time it turned into a dogma, i.e. that 7 is some magical number of layers that isn't to be questioned. Luckily, I don't think that is the case any more; it is clear by now that 7 layers was too much. Anyway, I could list such cases on and on, almost ad infinitum. Note that I don't claim that every controversial change succeeded in the end; some were abandoned, and that's (probably) OK.

I should also mention one other interesting thing called customs (as in norms). People's lives are interwoven with customs. We have a tendency to do something just because our elders did it, i.e. without knowing why. I don't think that's bad per se; after all, it probably helped us survive. The problem with customs is that they assume slow development and slow change in the environment. Under those conditions they are a very valuable tool for collecting and passing experience from generation to generation. But when the speed of development and change reaches some tipping point, customs become a problem rather than an advantage - they stall adjustment to new circumstances. So, my personal opinion about customs is that we should respect them, but never forget to analyze whether they are applicable and useful in a given situation.

Finally, there is one more characteristic of human beings, and that is inertia. We are used to doing things in a certain way, and that's hard to change. I don't think this is unrelated to religion and customs; on the contrary, I think they are related and there might be something else behind them all. But I won't go into that, at least not in this post.

So, what does all this have to do with systemd? Well, there is a principle, or philosophy, in Unix development which states that whatever you program or create in Unix should do one thing and do it well. For example, a tool that searches for files should do that well, but nothing else. And my opinion is that this philosophy turned into a custom and a religion at the same time. Just go through the comments related to systemd and read them a bit. A substantial number of arguments are based on the premise that there is a principle and it should be obeyed at any cost, under any circumstance. But all those who bring this argument forget to justify why the principle would be applicable in this particular scenario.

And the state of computing has drastically changed between the time when this philosophy was (supposedly) defined (i.e. the 1970s) and today's world. Let me name just a few big differences. Machines in the time when Unix was created were multiuser and stationary, with limited resources and capabilities, and they were used for much narrower application domains than today. Today, machines are powerful and inexpensive, and used primarily by a single user. They do a lot more than they did 40 years ago, and they offer users a lot more. Finally, users' expectations of them are much higher than they used to be.

One advantage of doing one thing and doing it well is that it reduces complexity. In a world where programming was done in C or assembler, this was very important. But it also has a drawback, and that is that you lose the ability to see above the simple things. This, in turn, costs you performance but also functionality, i.e. what you can do. Take for example pipes in Unix. They are great for data stored as text organized into records consisting of lines. But what about JSON, XML and other complex structures? In other words, being simple means you can do only simple things.
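A minimal sketch of the contrast, with invented sample data: line-oriented filtering falls out of almost nothing, while nested JSON has to be parsed and walked with knowledge of its structure before you can do anything useful with it.

```python
# Illustration only, with invented sample data: line-oriented records
# versus a nested JSON structure.
import json

# Line-oriented: each record is a line, fields split on whitespace.
log_lines = "alice 200\nbob 404\ncarol 200"
errors = [line for line in log_lines.splitlines() if line.split()[1] != "200"]
print(errors)  # ['bob 404']

# Structured: the same information nested in JSON; a plain line/field
# split is useless here, the data must be parsed and traversed.
doc = json.loads('{"requests": [{"user": "alice", "status": 200}, '
                 '{"user": "bob", "status": 404}]}')
errors = [r["user"] for r in doc["requests"] if r["status"] != 200]
print(errors)  # ['bob']
```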

This tension between simple and manageable versus complex and more capable is a theme that occurs in other areas, too. For example, in networking you have layers, but because cross-layer communication is restricted, you have problems with modern networks. Or take programming and organizing software into simple modules/objects. Again, the more primitive the base system is, the more problems you have achieving complex behavior - in terms of performance, complexity, and so on.

A few more things to add to the mix about Unix development. First, Unix is definitely a success. But that doesn't mean that everything Unix did is a success. There are things that are good, bad, and ugly. Nothing is perfect, nor will it ever be. So, we have to keep in mind that Unix can be better. The next thing to keep in mind is that each one of us has a particular view of the world, a.k.a. Unix, and our view is not necessarily the right view, or the view of the majority. This fact should influence the way we express ourselves in comments. So, do not overgeneralize from each single personal use case. Still, there are people whose opinion is more relevant, and those are the people who maintain init/systemd and similar systems, as well as those who write scripts and modules for them.

Anyway, I'll stop here. My general opinion is that we are in the 21st century and we have to re-evaluate everything we think should be done a certain way (customs) and, in doing so, not be religious about particular things.

P.S. Systemd is not a single process/binary but a set of them, so it's not monolithic. Yet some argue that it is monolithic! What is the definition of "monolithic"? By that line of reasoning, GNU coreutils is monolithic software and thus not in accordance with the Unix philosophy!

Friday, November 30, 2012

Internet Freedom - Well done, EU!

If you think that the Internet brought a revolution only to individuals (and maybe to the various businesses that market themselves over the Internet), then you are missing one important link: telecommunication companies. Before the Internet they were in charge of everything related to communication and they did whatever they wanted, supposedly in the name of the customers. If they thought that something wasn't good, then no matter what people wanted, they weren't getting it. And we shall not forget pricing, which generated huge revenues. But after the tremendous success of the Internet, things changed drastically. For some of the underlying reasons you can read my other post, but the key is that control was given to users, not to the network (i.e. the telecoms). Now telecoms are what they should be: data carriers only.

All good, but the problem is that there are no huge profits in data transfer, at least not as there used to be, and the telecoms don't just sit and wait. And so, every now and then we hear of some brilliant idea coming from the telecommunication industry by which they either try to bring back the good old days or try to offer something that doesn't make sense. Just in case you didn't know, ATM was one such idea that, fortunately, was a big failure! Even more interesting is a comment on this blog post from a guy (or guys) trying to reimplement some protocols from mobile telephony. They criticize the specifications produced by telecoms (and related industry) for introducing new things not because they are necessary, but because they are patented and thus allow manipulation!

But these days there is another "very interesting" idea. Probably not many people know that the ITU is trying to introduce mechanisms to regulate the Internet. Fortunately, the EU isn't approving that, along with the US. I approve of that wholeheartedly, and I cannot describe how outraged I am when I think about the telecoms and the ITU!

It is probably enough to point out who is proposing the regulation to be clear about what the real motives are. Also interesting are the requirements by some countries that Google and other Internet providers would have to pay them to be allowed to distribute content to their citizens. This is absurd, because who forces users to access Google?

And the ITU is also something I really dislike, a lot! It is a bureaucratic institution that produces standards for telecommunications. It's a dinosaur of the past. If you, as a single person, want to propose something, or just take part in some activity, you first have to be a member of some member state's standardization body, which isn't free. Then you have to be delegated as a representative to the ITU, and only then can you take part in some activity. And now we come to the best part: the specifications that were produced for the common good were quite pricey. Truth be told, they now distribute specifications free of charge, but if it weren't for the Internet, we would still have to pay for them. Contrast that with the IETF, where membership and participation are open to everyone who wants to take part. Also, all specifications produced by the IETF are available to anyone for free. Now, I'm not claiming that the IETF is perfect, but I certainly do claim that the IETF is much better than the ITU.

And while I'm at the ITU/IETF topic: several years ago I called our Ministry to ask for funding to attend an IETF meeting. Apparently, this particular Ministry was willing to provide it, or so it was written on their Web pages. The only caveat was that it didn't cover the IETF, for the simple reason that the IETF isn't as bureaucratic as the ITU. To cut the story short, the bureaucrat I talked with didn't understand what I was talking about, nor was he interested in finding out. And it ended without a grant...

Wednesday, September 19, 2012

The problem with authors who do not attend conferences or pay fees...

Yesterday I had an idea, and before I forget about it, I thought it would be good to share it with the world. :) The reason I got the idea is that, very likely, I'm going to be vice chair of the Information Systems Security event.

As far as I know, there are a lot of authors who submit their papers to conferences with the intention of having them published in the proceedings, but who don't show up at the conference to present their work. This is regarded as rude. But even worse are those who don't pay the conference fees. Namely, the proceedings usually have to be sent to the press before the deadline for payments, and there are also people who pay on the spot, which is acceptable too.

So, the problem is that authors promise they will pay and attend the conference, but in the end they do neither of those two things. The question is: how to revoke the papers of the authors who didn't pay? Obviously, it is not possible to revoke a paper from already printed proceedings. Nor is it an option to print the proceedings later, because that would incur additional costs (shipping). Lately, it is also common for conferences to publish the proceedings on CD only. Those could potentially be duplicated on the spot, but then again you don't know until the last day of the conference who will or will not attend and pay, so you would have to postpone the CD until the end. This is also unacceptable, because some people come only for a day or two and leave before the last day of the conference, so again you have shipping costs. And shipping is not the only problem; the other is that people attending the conference like to have the papers in order to follow the presentations more easily.

The proposed solution is simple. First of all, the proceedings would be published only on CD or USB. That is an environmentally friendly approach. Next, all the accepted papers would be placed on the CD, BUT they would be encrypted and inaccessible without a key that is NOT on the CD/USB itself, each paper with its own unique key. The keys would be published on the Web pages of the conference (or on the IEEE/ACM pages). Obviously, only those who paid (and attended the conference) would have their keys published, and thus their papers would be part of the proceedings. The others would be completely unavailable.

This can be made quite transparent, in the sense that some application obtains the keys and, if necessary, stores them locally so that the content can be read while offline.
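A minimal sketch of the idea, assuming a symmetric cipher such as Fernet from the Python cryptography package (my choice for illustration, not something specified above): each paper gets its own key, the ciphertexts go on the CD/USB, and only the keys of paid-up authors are published.

```python
# Sketch of per-paper encryption for the proceedings; the file names and
# the cryptography/Fernet choice are illustrative assumptions.
from cryptography.fernet import Fernet

papers = {"paper_017.pdf": b"%PDF-... contents of paper 17 ...",
          "paper_042.pdf": b"%PDF-... contents of paper 42 ..."}

keys = {}          # kept by the organizers, never written to the CD/USB
encrypted = {}     # this is what goes on the CD/USB

for name, contents in papers.items():
    key = Fernet.generate_key()          # one unique key per paper
    keys[name] = key
    encrypted[name] = Fernet(key).encrypt(contents)

# After the conference, only the keys of authors who paid and attended
# are published on the conference web page; a reader application fetches
# a published key and decrypts the corresponding paper:
published_key = keys["paper_042.pdf"]
plaintext = Fernet(published_key).decrypt(encrypted["paper_042.pdf"])
```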

I think it's a good idea, even though its realization is questionable. What do you think? :)

Friday, July 20, 2012

A case against wizards...

Well, I mean those configuration wizards that allow you to quickly set something up and get going with it.

They have their advantages, but also disadvantages. In my opinion, one big disadvantage is that they take away one very important thing from you, and that is making mistakes. Yes, because we learn by making mistakes, and if everything goes right, we haven't learned much. In the short term you win, but in the long term, I think you lose. Namely, when something doesn't go right - and things have a huge tendency not to go right - and you show the problem to a person who made a lot of mistakes and to one who used the wizard and has no clue about what can go wrong, I think the one who made the mistakes will be more efficient at solving the problem.

So, what would be the conclusion? Well, I think you should first try the harder way, and only after you have mastered it, take shortcuts to be as quick as possible.

Friday, June 11, 2010

Software patching strategy...

There's much controversy about a security flaw found in Windows XP and published by a Google researcher. You can read about it across the Internet, but here's one story. The short version is this: after the researcher found the flaw, he gave Microsoft five days to react. But Microsoft has something called 'Patch Tuesday', which means it delivers patches in a batch on a fixed Tuesday each month. I agree that this brings predictability to IT departments, but that is only valuable if the number of patches is high. So in the end Microsoft didn't react in the given time frame, and the researcher published the exploit. As I said in one comment, I don't believe that Microsoft ignored the issue on purpose. It is more likely that they didn't know how to react and/or their procedures are not up to the task. Some are criticizing the researcher's approach, while others are not. And it is true that this put some IT managers, CISOs and who knows who else into a very dangerous situation. But then, something like this should be made all but impossible by a responsible company that produces such a critical piece of software as an OS.

Contrast this researcher's behavior with that of some cracker who happened to find the same flaw. He could either sell it or devise an exploit himself. In either case the exploit would be used at some point, and the software producer wouldn't be notified about the flaw. By the time it became clear that there is a flaw, five days to react would be a huge time frame!

What is my point? The point is that in present times, and in the future even more so, it is a luxury to take more than a day to react, maybe even more than a few hours. Patches and workarounds have to be available immediately. Of course, someone could comment that in complex software it is hard to devise a patch in such a short time frame. But I actually think it is possible, with a few changes in how software is developed.

First, software has to be clearly modularized. Each module has to have the capability of being disabled or enabled. Obviously, when a flaw is found in a module, the first reaction would be to disable the module. Of course, there are two problems with this. The first one is that there are modules that are absolutely critical for a functional system. For example, in the Linux kernel it is not possible to disable kernel locks, because the kernel cannot function without them. But such critical software should be made small and controllable so that it can be fixed in a matter of hours. The second problem is that some modules are not critical for the system itself, but are critical for the environment in which it works, for example a driver for a network card. It can be disabled and the system itself will remain functional, but it will also be useless. This case should be handled in such a way that each user can take predefined functionality profiles and alter them or create new ones. To return to the example with the network card driver: it is common wisdom today that no system can work without a network card, so some profile will define the network card as critical, and as such it will not be disabled without explicit confirmation from the administrator. But such modules have to be developed with principles somewhere in between those for the absolutely necessary modules and those for modules that aren't necessary at all (like the Help system that caused the Windows XP problem which started this blog entry).
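A minimal sketch of what such a mechanism could look like, purely as an illustration of the idea (the module names and criticality levels are invented): each module carries a criticality level taken from a profile, and disabling an environment-critical module requires explicit administrator confirmation, while essential modules cannot be disabled at all.

```python
# Illustrative sketch of per-module disable/enable with criticality
# profiles; module names and levels are hypothetical.
from enum import Enum

class Criticality(Enum):
    OPTIONAL = 0          # e.g. a help system: disable freely
    ENVIRONMENT = 1       # e.g. a NIC driver: system boots, but is useless
    ESSENTIAL = 2         # e.g. kernel locking: cannot be disabled at all

# A "profile" maps modules to criticality; administrators could edit it.
profile = {"help_system": Criticality.OPTIONAL,
           "nic_driver": Criticality.ENVIRONMENT,
           "kernel_locks": Criticality.ESSENTIAL}

enabled = {name: True for name in profile}

def disable(module: str, admin_confirmed: bool = False) -> bool:
    """Disable a flawed module if the profile allows it."""
    level = profile[module]
    if level is Criticality.ESSENTIAL:
        return False                       # must be patched, not disabled
    if level is Criticality.ENVIRONMENT and not admin_confirmed:
        return False                       # needs explicit admin confirmation
    enabled[module] = False
    return True

# First reaction to a flaw in the help system: just switch it off.
print(disable("help_system"))                        # True
print(disable("nic_driver"))                         # False, asks for admin
print(disable("nic_driver", admin_confirmed=True))   # True
```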

Thursday, July 31, 2008

Security through obscurity - is it useless?

For a few weeks now I've been thinking about security through obscurity (STO). It is common wisdom that it's a bad way to build the security of anything. But this doesn't necessarily have to be true, as I'll explain in a moment. What made me write this post is that a similar comment about the usefulness of STO was given in Matt Bishop's article in the IEEE Security & Privacy journal (About Penetration Testing, November/December 2007, pp. 84-87). He notes that:

Contrary to widespread opinion, this defense [STO] is valid, providing that it’s used with several defensive mechanisms (“defense in depth”). In this way, the attacker must still overcome other defenses after discovering the information. That said, the conventional wisdom is correct in that hiding information should never be the only defensive mechanism.

His note goes right to the point. So, to explain it, first I'll describe what STO is and why it is problematic. Then I'll explain what security actually is, and finally, how in this context STO can actually be useful.

STO is the principle that you are secure if the attacker doesn't know how you protect yourself. For example, if you invent a new crypto algorithm and don't tell anyone how it works, then you, as the inventor, believe it's more secure. Instead of a crypto algorithm, you can take almost anything you want; a good example would be a communication protocol. Now, the problem with this approach is that such secret crypto algorithms or protocols were usually very poorly designed! So, the moment someone reverse engineered them, he was able to break in! Now, think for a moment: what if the secret algorithm is actually AES? Would the discovery of the algorithm mean that STO is bad? I suppose not, and so should you, but let us first see what security is.

Security is a complex topic, and I believe we could discuss it for days without reaching its true definition. But one key point about security is that there is no such thing as perfect security. You are always vulnerable, at least in any real-world situation. So, to be secure actually means to be too hard for the attacker to break into. When an attacker breaks in, he doesn't attack out of some void; he has to have some information. So, the more information the attacker has about his target, the more likely he is to succeed.

Now, how does this go along with STO? Imagine two implementations, completely identical, apart from the first one being secret. In the first case the attacker first has to find information about the implementation and only then can he try some attack, while in the second case the attacker can start the attack immediately.

So, STO can make security better, but with precautions. First, it must not be the only means of protection, i.e. it must not hide a bad algorithm/protocol/implementation. Second, you have to assume that sooner or later someone will reverse engineer your secret, and how soon depends on how popular your implementation is.

To conclude, STO could help make security better, but only if used with caution. What you can be almost certain of is that if you go and invent a new crypto algorithm, a new protocol, or something similar, you'll certainly make an error that will make the design, as well as the implementation, very weak! Thus, this way of using STO might be useful only for the biggest players with plenty of resources and skills, like e.g. the NSA. :)

Sunday, July 13, 2008

A critique of dshield, honeypots, network telescopes and such...

To start, it's not that those mechanisms are bad, but what they present is only a part of the whole picture. Namely, they give a picture of opportunistic attacks. In other words, they monitor the behavior of automated tools, script kiddies and similar attackers, those that do not target specific victims. The result is that not much sophistication is necessary in such cases. If you cannot compromise one target, you do not waste your time but move on to the next potential victim.

Why do they analyse only opportunistic attacks? Simple: targeted attacks go against victims with some value, while honeypots/honeynets and network telescopes work by using unallocated IP address space, and there is no value in those addresses. What would be interesting is to see attacks on high-profile targets, and on the surrounding addresses!

As for dshield, which might collect logs from some high-profile sites, the data collected is too simple to make any judgements about the attackers' sophistication. What's more, because of the anonymization of the data, this information is lost! Honeypots, on the other hand, do allow such analysis, but their data is not collected from high-profile sites.

In conclusion, it would be useful to analyse data on attacks against popular sites, or from honeypots placed in the same address range as those interesting sites. Maybe even a combination of those two approaches would be interesting to analyse.

That's it. Here are some links:

dshield
honeynets

Tuesday, February 5, 2008

DDoS attacks, Internet, new Internet and POTS...

I was just thinking about the many initiatives (e.g. GENI) to design the Internet from scratch! This certainly requires us to break out of the current way of thinking, which has been with us for about 40 years now, and to find and propose something new. A good example of such a breakthrough was the Internet itself, i.e. the concept of a packet-switched network. As a side note, Van Jacobson has an idea of how this new thing might look, and I recommend the reader to find the lecture he held at Google, available on Google Videos.

While thinking about what this "new" thing is, I took DDoS attacks as an example. There are no DDoS attacks in POTS, and they are a big problem for the Internet. So, how should this new mechanism work in order to prevent DDoS attacks? The key point of a DDoS attack (or, more generally, a DoS attack) is that there are finite resources that are consumed by the attacker, and thus regular users cannot access those resources; they are denied service.

And while I was thinking about it, I actually realised that a DDoS attack is possible in POTS too, as there are also finite resources there. OK, OK, I know, I managed to reinvent the wheel, but hey, I'm happy with it. :) So, if it is possible, why are there no DoS attacks in telephony? The key point is that end devices in POTS are dumb and thus not remotely controllable. If they were remotely controllable, the attacker would be able to gain access to them and use a huge number of those devices to mount an attack on a selected victim. Maybe this attack would be even more effective than the one on the Internet, since the resources taken by end devices are not shared, even when the end devices don't use them.

It turns out that the DDoS attack is actually a consequence of giving more power to the user via more capable end devices. Furthermore, because those end devices are complex systems, it's inevitable that there will be many ways of breaking into and controlling them.

Of course, someone might argue that the problem is the ease with which IP packets can be spoofed. But this is actually easily solvable, at least in theory, if each ISP would check its access network for spoofed addresses. The more serious problem is actually a DoS attack made with legitimate IP packets. It is traceable if it comes from a single source, or from a small number of sources, but the real problem is a network of compromised hosts (a botnet). There is no defence against those networks, as they look like legitimate users.
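As an illustration of the first point, here is a minimal sketch of the ingress-filtering check an ISP could perform at the edge of its access network (the prefixes and addresses are made up): packets whose source address does not belong to the customer prefixes would simply be dropped.

```python
# Sketch of ingress filtering against spoofed source addresses; the
# customer prefixes and sample packets are invented for illustration.
from ipaddress import ip_address, ip_network

# Prefixes allocated to this ISP's access network (hypothetical).
customer_prefixes = [ip_network("198.51.100.0/24"), ip_network("203.0.113.0/24")]

def source_is_valid(src: str) -> bool:
    """True if the packet's source address belongs to the access network."""
    addr = ip_address(src)
    return any(addr in prefix for prefix in customer_prefixes)

print(source_is_valid("198.51.100.42"))  # True  - legitimate customer address
print(source_is_valid("192.0.2.7"))      # False - spoofed, should be dropped
```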

So, because we are limited by the real world and will always have only finite resources at our disposal, it turns out that the only way of getting rid of DDoS is to restrict end devices, which by itself is impossible. Now, this is thinking within the current framework. But what if we could make a finite resource appear infinite, or somehow restrict end devices... This is something for further thinking...

Sunday, January 28, 2007

OSS support for Croatian language

I was just looking at the Asterisk open source PBX, and one of the features of that software is the possibility of integration with Festival. Festival is text-to-speech synthesis software, freely available on the Internet! And it's quite a good piece of software. Using Festival alone, or Festival in combination with some other application like Asterisk, interesting services could emerge.

Now we come to the point! I searched for the possibility of using the Croatian language in that application. And guess what, there is no application that supports it. There are quite a few applications for speech synthesis and none of them, you guessed it, has support for Croatian! Actually, there is the possibility of adding Croatian to that software using the generic support, but it's far from useful.

So, this made me think a bit! What the hell are the Croatian Ministry of Science and whoever else doing!? Shouldn't at least they care about this aspect of development? Shouldn't they try to invest some money in the development of such software? Shouldn't they put out a tender looking for interested parties who would develop it? Also, the license of that software should be such that it could afterwards be used in both open source and commercial applications, e.g. some BSD-style license. And the problem is not only with software for speech synthesis. There's no OCR-capable software, syntax and grammar checking are also not well supported, if supported at all, and voice recognition is not even worth mentioning!

Speaking of syntax checking, thanks to enthusiasts there is some support in open source office applications, but much remains to be done, and I believe that investment in that respect would help the Croatian language - and I believe that's important to the Government and also to the aforementioned Ministry.

Sunday, January 14, 2007

“You will work on the newest technologies”

The title of this entry is actually taken from an ad looking for prospective students to work in a Croatian telecom after graduating. Actually, the ad itself is very cleverly thought out and I have to give credit to whoever thought of it. But there is always the suspicion that it was actually "taken" from someone else...

What's important about this sentence is how little it actually says and how misleading it is. To work on the newest technologies sounds very attractive, but the secretaries working in Word 2007, or whatever the latest version is, are also working on the newest technologies! So, in order to find out what this sentence really means, I'm going to dissect it a bit. But before I continue, let me stress one thing: I'm talking about the average case, and correspondingly, it might be true for this particular telecom, but it also might be false!

The heart of the problem is that the reality of Croatia is that there is almost no development, and everything boils down to providing services and selling something. So, the phrase "working on the newest technologies" actually means configuring devices or application software products and, if you are particularly unlucky, selling them! And what is so attractive about being a user and/or seller instead of being an engineer?! I suppose that students enrolled in electrical engineering and/or computer science courses precisely because they don't see themselves as users.

Now, you might say that by configuring these devices, or application software, one is actually using them as a tool and doing something new! But let's try again: when we are talking about a telecom - and the others are more or less the same - the marketing department is the one that says the company needs another service/product/whatever. The engineering department then reads the manuals of the available equipment, learns its capabilities, and configures it so that the requested service is implemented! Now, where is the development in that process?!

And one related thing: there are plenty of different ads seeking employees and offering work on the newest technologies, while the truth is that when you start working, you are only allowed to look at this equipment (if there is any) and, because it is used in a production environment, you are not allowed to play with it!

So, to conclude, working on the newest technology in Croatia isn't as exciting for an engineer as it might sound at first.

About Me

scientist, consultant, security specialist, networking guy, system administrator, philosopher ;)
