2006-10-30

Software vs. hardware virtualization

This article [vmware.com] compares the performance of software and hardware virtualization techniques. What is most surprising is that the results are mixed: hardware-assisted virtualization can actually be slower than pure software virtualization under certain workloads. Wow!

The benchmark was done on a VT-enabled Pentium 4. It would be interesting to see how AMD's Pacifica hardware virtualization compares.

2006-10-29

A small laugh

Being a Croatian citizen, I found this news article funny: according to the "Worldwide Press Freedom Index", the USA ended up in 53rd place, together with Croatia, Botswana and Tonga.

What's even more surprising is that some countries which only recently abandoned communism rank extremely high on the list - e.g. the Czech Republic is in 5th place.

2006-10-28

Anti-virus, virtualization and security paradigm

This is a very interesting interview with Joanna Rutkowska, the author of the "Blue Pill" rootkit. She just confirmed an opinion I have held for a long time: that AV programs are mostly useless (heck, she doesn't even run one on her WinXP 64-bit machine).

Virus detection is an inherently undecidable problem; therefore it will always be possible to create an undetectable virus - without even needing a rootkit that puts the OS into a VM.
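
The classic diagonalization argument (due to Fred Cohen) fits in a few lines. Everything below is an illustrative stub rather than working code for any real scanner: assume a perfect detector is_virus() exists, then build a program that does the opposite of whatever the detector predicts.

    #include <stdio.h>

    /* Sketch of the diagonalization argument: is_virus() is a hypothetical
     * "perfect" detector, infect() a harmless stand-in for malicious code. */

    static int is_virus(const char *program)
    {
        (void)program;
        return 0;   /* whatever fixed verdict it returns, it ends up wrong */
    }

    static void infect(void)
    {
        puts("spreading (hypothetically)");
    }

    int main(void)
    {
        if (is_virus("this_program"))
            puts("detector flagged us, so we stay harmless");  /* false positive */
        else
            infect();                                           /* false negative */
        return 0;
    }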

Her wish (quote):

"The solution that I would love to have would be based on integrity checking of all the system components, starting from filesystem (digitally signed files), through verifying that all code sections in memory haven't been modified (something I partly implemented in my SVV scanner) and finally checking all the possible "dynamic hooking places" in kernel data sections."

is not realistic (unless the scanner itself runs in the hypervisor), because of one simple question: how does the scanner ensure its own integrity?
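
To make the objection concrete, here is a minimal sketch (mine, not from the interview) of what such an integrity check boils down to; it uses OpenSSL's SHA-256 routines and a placeholder baseline digest. The comment near the end is the whole point: nothing prevents an attacker who already controls the kernel from patching the checker, or its list of known-good hashes, first.

    #include <stdio.h>
    #include <string.h>
    #include <openssl/sha.h>

    /* Naive file integrity check: hash a file and compare it against a
     * known-good digest. The digest below is a placeholder, not real data. */
    static const unsigned char known_good[SHA256_DIGEST_LENGTH] = { 0 };

    static int file_is_intact(const char *path)
    {
        unsigned char buf[4096], digest[SHA256_DIGEST_LENGTH];
        SHA256_CTX ctx;
        size_t n;
        FILE *f = fopen(path, "rb");

        if (!f)
            return 0;
        SHA256_Init(&ctx);
        while ((n = fread(buf, 1, sizeof buf, f)) > 0)
            SHA256_Update(&ctx, buf, n);
        fclose(f);
        SHA256_Final(digest, &ctx);

        /* But who checks the checker? An attacker who controls the kernel
         * can silently modify both this code and known_good[]. */
        return memcmp(digest, known_good, sizeof digest) == 0;
    }

    int main(void)
    {
        printf("/bin/ls intact: %d\n", file_is_intact("/bin/ls"));
        return 0;
    }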

What I would like to see is a paradigm shift in the security industry: it should put more weight on prevention and damage containment than on source-code auditing and scanning of programs/memory. The latter two techniques have been in use for a very long time and they don't work very well.

My view is that the OS should use virtualization technology to create extremely lightweight, isolated environments - in the extreme case, one VM per running application (this requires some heavy engineering to be doable efficiently, e.g. sharing the core OS code between VM instances). Each VM would expose only those parts of the OS functionality that are absolutely necessary for the application to work. Information flow between VMs would be strictly under user control (thus making the user, once more, the weakest link in the chain).
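
As a rough sketch of what "strictly under user control" could mean in practice, here is a toy per-application VM policy record. The structure and its field names are entirely made up for illustration; no existing hypervisor exposes such an interface.

    #include <stdbool.h>

    /* Hypothetical policy attached to one per-application VM. */
    struct vm_policy {
        const char *app_name;            /* application running in this VM      */
        bool allow_network;              /* expose the network stack?           */
        bool allow_filesystem;           /* expose persistent storage?          */
        const char **allowed_peers;      /* VMs this one may exchange data with */
        bool require_user_prompt;        /* ask the user before any transfer    */
    };

    static const char *browser_peers[] = { "pdf-viewer", NULL };

    static const struct vm_policy browser_policy = {
        .app_name            = "web-browser",
        .allow_network       = true,
        .allow_filesystem    = false,    /* downloads go through a brokered path */
        .allowed_peers       = browser_peers,
        .require_user_prompt = true,     /* the user stays in control of flows   */
    };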

My proposal raises some heavy research questions:
  1. Efficient memory utilization (it would be infeasible to completely copy all of the underlying OS into each VM). The hypervisor would have to be intimately tied to the "guest" OS.
  2. Policies for information flow between VMs.
  3. Efficient history saving (so that the user can roll back to some previous VM state).
  4. Interoperability with other VM products like Xen or VMware.
Regarding the last point, there is an interesting comment in the AMD64 Pacifica manual for the VMRUN instruction under "Instruction intercepts":
"Note: The current implementation requires that the VMRUN intercept always be set in the VMCB."
Is this a hint that, in the future, we might get HW support for recursive virtual machines?

2006-10-18

Object-orientation

I have kind of despised object-oriented programming for a long time now (the result of bad experiences on large projects, where it made things worse rather than better) - until I found this section in Xavier Amatriain's PhD thesis. It presents a nice view on the matter from Kristen Nygaard, one of the "fathers" of object-oriented programming (the other being Ole-Johan Dahl). His view I actually like.

After reading that section in the thesis and browsing through Nygaard's and Dahl's homepages, I felt a bit sad for never getting to meet them in person. Now I respect OO as it was envisioned and find it a worthy idea - one that is misused in practice most of the time.


2006-10-17

Interfaces and stability

These days there is much fuss around (not so) newly discovered bugs in nVidia's drivers for Linux. Instead of being happy that a large software vendor has gone to the trouble of providing drivers for an insignificant portion of its market, users are whining about the "evil" nature of closed-source binary blobs being loaded into the kernel.


  1. The importance of the bug is exaggerated. I consider it bad practice to install any kind of advanced graphics capabilities on servers. As for desktops... well, a plethora of bugs in other "desktop programs" already exists, so this one doesn't pose any threat beyond the existing ones. And it's simple to fix - don't use the driver.

  2. I've found many complaints that nVidia's drivers are low-quality, unstable or just don't work. (Even today a friend complained to me.)


What is most fascinating is that users of these drivers are barking up the wrong tree (nVidia in this case): the real fault lies in the lack of an official kernel API - which is also ever-changing, to make the situation more difficult. And Linus is even proud of it, replying along the lines of "read the source".

IMHO, users are in this case a direct victim of that attitude. As I said, Linux is only a secondary platform for nVidia. They have no real financial incentive to keep up with Linux kernel development, and there is no point in constantly chasing a myriad of Linux kernels with different patches and trying to make the drivers work with every single one of them. Why? Because there's no stable kernel API.
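
For anyone who hasn't maintained an out-of-tree driver, here is roughly what the lack of a stable API looks like in practice. Only the version macros from <linux/version.h> are real; struct widget and the register_widget() calls are placeholders standing in for whichever in-kernel interface happened to change between releases.

    /* Sketch of an out-of-tree module forced to track kernel API churn. */
    #include <linux/module.h>
    #include <linux/version.h>

    struct widget { int id; };

    /* hypothetical kernel interfaces, old and new */
    extern int register_widget(struct widget *w);
    extern int register_widget_v2(struct widget *w);

    static struct widget my_widget = { .id = 1 };

    static int __init example_init(void)
    {
    #if LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 18)
        return register_widget_v2(&my_widget);   /* build against the "new" API */
    #else
        return register_widget(&my_widget);      /* build against the "old" API */
    #endif
    }

    static void __exit example_exit(void) { }

    module_init(example_init);
    module_exit(example_exit);
    MODULE_LICENSE("GPL");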

Binary-only drivers (if written well, but that's beside the point here) work very well on Solaris, AIX and Win32. I don't know about AIX, but I know that Solaris and Win32 publish official driver development kits (DDKs). Every third-party manufacturer can write a driver without relying on the "current state of flux" of the kernel and be reasonably certain that its investment in the platform is long-term - something which is not the case with Linux.

I encourage users to stop buying the "binary blobs are evil" nonsense and start asking Linux developers the following question: "Why doesn't Linux have a DDK?" If some DDK appears, Linux will maybe (just maybe) become a more attractive platform for hardware manufacturers. I believe it would be easier to convince the "big players" to write Linux drivers conforming to a DDK than to convince them to publish HW specs. Until such time, users are "doomed" to reverse-engineered drivers, "black magic" (like ndiswrapper), buggy drivers (like nVidia's), or simply no drivers at all.

And I fully understand the reluctance of ATI and nVidia to open up specs. Opening up the HW spec can reveal much about the internal implementation, and internals are what they live off; keeping them secret encourages competition. And in the end, it's the users who benefit from it. (Just imagine ATI copying every feature of nVidia with the same performance and a comparable price, and vice versa. They would simply lose any incentive to develop their chips further - at least until a newcomer to the market appears.)


2006-10-07

Another critique of "free" software zealots

This article announces a completely "free" browser named IceWeasel, derived from the Firefox code. A second article points out some problems that distributions like Ubuntu and Debian have when distributing Firefox, and why Firefox might not actually be "free".

In my opinion, they present a skewed view of the matter, unfairly picturing the Mozilla Corporation as the "bad guy". Quote from the second article: "Though Debian and Debian-derived distributions such as the popular Ubuntu Linux currently include Mozilla Firefox, they do not typically include the actual Mozilla Firefox logo."

The question to ask is: WHY do they strip Firefox of its logo? What do they put in its place? Logos are extremely important in today's world and can actually be said to sell products (look at Nike, for example). What kind of ethics drives these "free" software developers? It seems that not only do they want to own the code, they also want to own the corporate identity - and to be able to remove it from programs at will, possibly replacing it with their own. That's using the hard work of another company to promote themselves. If this is the kind of ethics that RMS and the FSF stand for, their effort is better renamed "slave" software.


2006-10-01

The terrorists have won

Here you can see a small video showing the violent reactions of very small quantities (as small as 2 grams) of alkali metals with water. Imagine sneaking 2 grams of lithium onto a plane, buying a bottle of water on the plane and dropping the lithium into it. KABOOM! Or better yet, drop it into the toilet. Even if it doesn't crash the plane, it will make for an unforgettable experience for the passengers. Are metal detectors sensitive enough to detect 2 grams of any metal? Can the security officer examining your hand baggage through the x-ray scanner notice an object weighing 2 grams?

And the new EU airline security regulations, which take effect on November 1st, forbid carrying more than one deciliter of your own liquids onto the plane. Who are they trying to protect, and from whom? Just to make it clear, I have no intention of blowing up planes or killing people. This post is a form of protest, and a way to point out the worthlessness of most of these security measures - especially the ban on liquids. If terrorists want to mass-kill people, they can do it almost undisturbed in the check-in waiting lines.

As a side note, people die. Nobody lives forever. More people die of cancer than have died in terrorist attacks since 9/11, yet much more money is spent on the "war on terror" (better renamed the "war generating terror") and on fear-propaganda about the dangers of terrorism than on people's health. I wonder how many people would stop smoking, stop eating junk food and begin living healthier in general if that much money were invested in anti-smoking and other health campaigns.

IMO, the way to fight terrorism is not to take freedom away from people and give it to the government (exactly what is happening now in e.g. the US) and to corporations (e.g. airline security bodies). The word "terror" comes from Latin, where it originally meant fear or fright. Given this meaning, and considering how many people are afraid and frightened, I think it's fair to say that the terrorists have won. Not only are people afraid, certain governments seem to be pushing their citizens into dictatorship - slowly, but surely. (Just look at the new "torture law" in the US.) Exactly the thing they claim to be fighting against.

So how should we fight terrorism? First, stop being afraid. (That might not be in the interest of certain presidents, as the fear they themselves have generated with their propaganda is the only thing keeping them in power.) Second, we as a society should adapt. As the human immune system adapts to bacteria and viruses, society should adapt to terrorism. As with diseases, there will always be random casualties. But random casualties are already all around us (home accidents, car accidents, drug overdoses, medical mistreatment...); why do we have to single out terror accidents (I purposely use the word accident here!) and make a fuss about them?

I don't have a recipe for the "adapt" part. People do not want to be killed by terrorists. People do not want to live in fear. People do not want war. I believe that people will cooperate with the police on their own to prevent bad things from happening, if only given the chance. But as long as they are afraid, they won't dare take that chance even when it is given.

Good examples of the latter reasoning are the arrests for attempted attacks in London and Denmark. That's commendable. But stricter security regulations are not justified: it's like fighting diseases by forbidding bacteria. It doesn't work.
