2005-09-28
It's night and I'm sleepy, so this will be short. I've replaced Ubuntu with Gentoo on my laptop. I have to say that Gentoo is a pleasant surprise. No behind-the-scenes "magic" - you have to do everything yourself. All important aspects of the system are well documented. Compiling stuff is not that much of a chore with emerge, their packaging system. I also like the non-SysV init, which is close to BSD-style init.
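For the curious, the day-to-day workflow with emerge is just this (using bochs as the example package; the category path is from memory):

emerge --sync                 # refresh the portage tree
emerge app-emulation/bochs    # build and install a package from source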
I also have to say that I've had a very good experience with their support. The only package that failed to compile was bochs. I decided to report the bug and received an answer within 15 minutes. Impressive.
So far - no frustrations and a feeling of a well-designed system that gives great freedom to its users.
2005-09-25
apt-get remove ubuntu
I am seriously considering this step. The replacement: Slackware, or back to FreeBSD.
Some packages in the distribution are too old (e.g. tetex-2 when there is tetex-3 available), and some programs are configured at compile-time in a way that does not fit my needs (e.g. bochs without the internal debugger).
The worst thing is that the default installation does not install most of the needed development packages (e.g. X11 headers), so you have to download them from the net. They are not even present on the distribution CD! On top of it all, the Ubuntu repositories have been down since early morning, my local time, so I can't get the packages I need.
Most of the other software I can compile myself, but I'm missing the critical part, which I'd really rather not install on my own - the X11 headers. The other software that I need to compile myself (or rather must, because of inadequate configuration) depends on them.
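For reference, this is roughly all it should have taken, had the mirrors been up (xlibs-dev is, if I recall the package name correctly, the metapackage that pulls in the X11 development headers):

apt-get update
apt-get install xlibs-dev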
I don't want to keep ranting on... I can just conclude that I've never experienced so much frustration with any system on my desktop (I've tried FreeBSD and several flavors of Linux) as with Ubuntu. And the things I've mentioned here you can't really blame on the beta.
2005-09-23
Java exceptions
Recently there has been a hot discussion about Java vs. C on sci.crypt (well, these discussions always seem hot). I have also contributed to the discussion.
The discussion concentrates on exceptions vs. error codes for error handling. I think that, in most cases, error codes are the better solution. Read the article for the explanation.
However, I floated an idea there that I'd like to elaborate on in more detail here. There are many problems with naive use of exceptions, and this article gives an introduction to exception safety and exception guarantees.
To summarize the article briefly: one of the bigger problems is that throwing an exception can leave your object in an inconsistent state. This is not only a C++ problem; it is a problem in any language that provides exceptions, including Java and Python. So why not borrow transactions from RDBMSs and introduce them into the programming language?
Consider the following artificial example (in some C-like pseudo-language):
x = 2;
begin {
    // some mutating code
    ++x;
    throw some_exception;
}
The transaction consists of everything within begin{}. When execution "cleanly" falls over the closing brace, the transaction is committed. If an exception is thrown within the begin{} block, then all changes to any member variables of the object made within the transaction are rolled back. In the above example, x will be reverted to the value 2. (A sketch of how this could be emulated in today's C++ follows the list below.)
Some open questions:
- what to do with static variables within transaction blocks,
- should nested transactions be allowed,
- implementation issues,
- performance considerations.
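Out of curiosity, here is a minimal sketch of how the rollback semantics could be emulated in today's C++ with the classic copy-and-commit idiom; all the names (Obj, transaction) are made up for illustration:

#include <stdexcept>
#include <utility>

// Hypothetical object with one member, mirroring the pseudo-code above.
struct Obj {
    int x = 2;
};

// Run body against a copy of obj; commit the copy back only if no
// exception escapes - otherwise obj keeps its old state ("rollback").
template <typename T, typename F>
void transaction(T& obj, F body) {
    T copy = obj;           // snapshot all member variables
    body(copy);             // mutations happen on the copy
    obj = std::move(copy);  // clean fall-through: commit
}

int main() {
    Obj o;
    try {
        transaction(o, [](Obj& t) {
            ++t.x;                                    // some mutating code
            throw std::runtime_error("some_exception");
        });
    } catch (const std::runtime_error&) {
        // o.x is still 2 here: the "transaction" was rolled back
    }
    return o.x == 2 ? 0 : 1;
}

Of course, this copies the whole object up front, which is exactly where the performance question above comes from.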
2005-09-18
Consumer clubs and privacy
I remember recently shopping for something at "Turbo Sport", a sporting-goods shop in Croatia. A young girl (maybe 14-15 years old) in the queue before me was offered membership in the shop's "customer club". With each purchase you collect points, and when you collect enough points you get some kind of discount or a gift. In fact, every customer is offered the membership, which I declined.
The point is that the membership application asks you for a bunch of personal data - date of birth, full name, home address and so on (it took her a few minutes to fill out the complete membership form). Although I'm not keen on giving away my personal data, another thing bothered me more at that moment.
It is the fact that each time someone uses their membership card, the shop has an opportunity to tie together all their past purchases and build a profile of them. That kind of information is much more valuable to the shop than the few euros of discount it gives away. Somebody may object that the shop can build a profile every time someone uses a credit card, but note that Croatian citizens most often (in more than 90% of cases) pay in cash.
Maybe I'm paranoid, but... if they don't build your profile, why not just use a "stamp" scheme like the one the Subway fast-food chain has? There is not a single piece of personal information on the small piece of paper on which the stamps are collected. When you collect 10 stamps, you get a discount and a new piece of paper.
So why does "Turbo Sport" make you fill out a form that takes a few minutes to complete (i.e. they ask you for a lot of data)? What's worse, you have no idea what they are using your data for or how safely it is stored. I didn't bother to look at the time, but I don't think there was any kind of privacy policy written on the membership form.
Croatia does have some regulations about securing personal data (I'm not acquainted with them at all), but I don't think any law can forbid organizations from collecting personal data. In the end, everyone gives it out voluntarily when asked.
And oh, BTW, I really like Subway's sandwiches. Much better (in terms of both taste and ingredients) than McDonald's products :)
2005-09-10
Ubuntu criticism
Yesterday I installed Ubuntu 5.10 on a friend's recommendation. Some things I want to work on simply cannot be compiled at all under FreeBSD. My friend really praised Ubuntu 5.10 (currently in preview), so I said, what the heck, I might as well try it. Note that I was warned upfront that this is a preview version, but he said that everything was working perfectly on his laptop. I have a Dell Latitude D800 with a Broadcom wireless card.
During the installation, I set my location to Oslo. This resulted in the Norwegian Ubuntu mirrors being used in /etc/apt/sources.list. These repositories were not usable for some reason I didn't want to investigate. On my friend's suggestion, I pointed the APT sources file at the US mirrors and things started working OK.
I'm disappointed that, although they boast about fresh software packages, they only have tetex 2 in their .deb packages. I had to compile tetex 3 on my own.
Another serious criticism goes to the default settings of applications. I installed postfix in the "smart-host" mode. I was really surprised to see that it was listening on all network interfaces by default! In FreeBSD, sendmail is installed and enabled by default (to allow local mail delivery), but it listens only on 127.0.0.1. It took me 5 minutes to install the postfix docs, find the option and fix it, but still - I don't think this is a reasonable, secure default.
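For anyone bitten by the same default: the option in question is inet_interfaces in main.cf (run "postfix reload" after changing it):

# /etc/postfix/main.cf - listen on the loopback interface only,
# like FreeBSD's default sendmail setup
inet_interfaces = 127.0.0.1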
The most serious criticism concerns stability. It froze on me twice in two days. Even with Windows XP I haven't had such an experience in quite a while. I don't remember the first freeze, but the second went like this: I'm working in GNOME (trying it out), I lock the screen, return, and I can't do anything. The processor starts heating up (I hear the fan spinning up to its max); after a long wait (~30 sec) the password entry box appears, but I can't type anything into it. I can see that the thing isn't really frozen (i.e. there is interrupt processing - e.g. the mouse pointer moves), but I can't even switch to the console to log in and kill the X session. Ctrl-Alt-Backspace doesn't kill it either. So I press Ctrl-Alt-Del and succeed in cleanly rebooting the machine.
OK, it is a beta, a preview, call it whatever you like. I would expect some applications to crash and some packages to not be set up quite the way they should be. But falling apart in a way that forces me to reboot the machine is, sadly, comparable to Windows. Which, in contrast, has become remarkably more stable as of XP (at my former job I used to run it, without any trouble, for over two weeks without a reboot).
My last, application-related problem was getting Thunderbird to work properly after restoring the backup of my home directory. To add to the trouble, my accounts were enigmail-enabled with an older version of enigmail. It took me 2 hours of trial and error to figure out how to properly move my Thunderbird profile over to the new OS: start Thunderbird on an "empty" profile, don't create any accounts or profiles, and copy over from the old profile just the prefs.js file and the Mail directory. Just in case, I deleted all references to enigmail in the copied prefs.js, and everything worked fine. OK, this is hardly Ubuntu's fault.
Not having something like "Archive accounts" and "Restore from archive" in the GUI menus to ease account and mail folder migration, I blame on the Thunderbird team. Because unpacking the existing profile and using it with the new Thunderbird and new plugin versions simply does not work.
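In shell terms, the whole dance is roughly the following (the backup location and the random profile directory names are placeholders - substitute your own):

# start Thunderbird once on a fresh profile, quit, then:
cd ~/.mozilla-thunderbird/xxxxxxxx.default
cp ~/backup/.mozilla-thunderbird/yyyyyyyy.default/prefs.js .
cp -r ~/backup/.mozilla-thunderbird/yyyyyyyy.default/Mail .
# drop the stale enigmail settings, just in case
grep -v enigmail prefs.js > prefs.tmp && mv prefs.tmp prefs.js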
On the positive side, almost all the hardware that I've had a chance to test at home is working perfectly. I didn't bother with 3D acceleration because I don't need it. Sound, USB stick hotplugging, even the wireless card (using ndiswrapper) - everything is working OK. I haven't had the chance to try out the Ethernet card yet, but it is readily recognized on boot, so there should be no problems. I guess there should be no problems with other hardware, either.
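For the record, the ndiswrapper setup for the Broadcom card boils down to this (bcmwl5.inf is the Windows driver Dell ships for this card; yours may differ):

ndiswrapper -i bcmwl5.inf   # install the Windows driver
ndiswrapper -l              # verify: should show the driver as installed
modprobe ndiswrapper        # load the kernel module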
The only exception is the "small rubber thing between the G, H and B keys" that serves as a mouse replacement. It simply stops working in X after a soft reboot. It works after a cold start, i.e. power-on.
Another nice surprise is Acrobat Reader 7 in the packages. I was really amazed by that! Software packages seem really fresh overall, save for the unfortunate tetex: Firefox 1.0.6 and Thunderbird 1.0.6, for example. I hadn't moved from 1.0.2 on FreeBSD because I was too lazy to compile them, and fixes were appearing too frequently. I simply did not have the nerves to keep pace.
To conclude: I migrated from FreeBSD 5.3, which is far from an OS targeted at desktop users, yet I had no problems setting it up. And absolutely no problems (stability or otherwise) running it. Although, to be fair, I was using only fvwm2 on FreeBSD, and I'm trying out the new GNOME on Ubuntu. As can be expected - more features, more bugs.
Overall, it's a nice system with a familiar Debian flavor. However, I would recommend that you wait for the release. This preview version is too much trouble in my experience.
2005-09-06
GPL no more - part 2
Part 1, continued... with a slight change in the title :)
Create a version of the (BSD-licensed!) ld.so that will execute the program in a separate address space from its libraries. A "procedure call" will then call a stub in the BSD-licensed ld.so, which will just "pass a message" to the real shared library and return a result code to the application. (A toy sketch of such a stub follows the list below.)
There are at least two interesting problems to be addressed:
- passing pointers, and
- implementing functions that manipulate the whole process state (e.g. setuid, execve, fork, etc.)
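To make the message-passing half concrete, here is a toy sketch (all names are invented; a real implementation would live inside ld.so and marshal arbitrary call signatures). The "library" runs in a child process, and the stub just writes a request and reads back the result:

#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>

// Stands in for a function inside the real shared library.
static int lib_add(int a, int b) { return a + b; }

struct Request { int a, b; };

int main() {
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) != 0) return 1;

    if (fork() == 0) {                    // "library" address space
        close(sv[0]);
        Request req;
        while (read(sv[1], &req, sizeof req) == (ssize_t)sizeof req) {
            int result = lib_add(req.a, req.b);
            write(sv[1], &result, sizeof result);
        }
        _exit(0);
    }

    close(sv[1]);                         // application address space
    Request req = {2, 3};
    int result = -1;
    write(sv[0], &req, sizeof req);       // the stub "passes a message"
    read(sv[0], &result, sizeof result);  // ...and returns the result code
    printf("2 + 3 = %d\n", result);
    close(sv[0]);                         // child sees EOF and exits
    wait(nullptr);
    return 0;
}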
The more interesting problem is passing code pointers. Excluding self-modifying code, the dynamic linker does not make the code segments of the dynamic library available to the executable, or vice versa. The dynamic library and the executable each share code only with the dynamic linker, not with each other. The first time a callback is called in the "other" module, a SIGSEGV is received. A special handler can then "patch" the pointer to point into some trampoline code within the dynamic linker, and that trampoline code forwards the call to its real destination.
The second class of problems could be solved partly within the kernel and partly within the dynamic loader, e.g. by making the dynamic linker, the executable and the dynamic libraries share the same process structure except for memory maps.
Beware! This is just a wild idea. It may or may not be feasible in practice, and it may or may not circumvent the GPL. Nevertheless, it is an interesting technical problem.
This scheme may or may not have found a loophole in the GPL. Quoting Werner Koch:
Just for the record: linking is only one indication that the whole is a derived work. There is no one-to-one relationship, and in particular even two separate processes might make up a derived work.
IMHO, this statement just illustrates the problem with the GPL: there is no clear definition of derived work.
GPL once more - part 1
This text is divided into two parts. This (the first) part discusses the issue, and in the second part I describe a method that could (possibly) get rid of the GPL problems related to dynamic linking - once and for all :)
On the GnuPG -devel and -users mailing lists there is again a vigorous discussion on Werner's decision not to support PKCS#11 in GnuPG. One of the major arguments is again the licensing issue.
It seems that the problem is linking the (proprietary) PKCS#11 shared library into the same address space as the GnuPG binary. For example, executing a GPL'd binary that uses the Win32 API does not violate the GPL when calling into the kernel because:
- They are not linked together in the same address space but communicate by well-defined messages.
- There is also an exception in the GPL for the software delivered as a "part of the operating system".
I maintain that, in many cases, these two things are the same. Consider two possible implementations of PKCS#11:
- As a shared library, and
- as a daemon listening on a local ("UNIX") socket.
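Schematically, the first variant looks like this (the module path is made up; C_GetFunctionList is the standard PKCS#11 entry point; link with -ldl):

#include <dlfcn.h>
#include <cstdio>

int main() {
    // Pull the proprietary module straight into our own address space...
    void* mod = dlopen("/usr/lib/pkcs11/vendor-pkcs11.so", RTLD_NOW);
    if (!mod) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }
    // ...and resolve the standard PKCS#11 entry point.
    void* entry = dlsym(mod, "C_GetFunctionList");
    printf("C_GetFunctionList at %p - vendor code now lives in our process\n", entry);
    dlclose(mod);
    return 0;
}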
But for some strange reason, using the first implementation of PKCS#11 would make the whole program, under the GPL interpretation, a "derivative work". The user is (legally) not even allowed to dynamically link a proprietary PKCS#11 DLL with a GPL-incompatible license.
However, the second case is a perfectly acceptable use case: a GPL application communicates with a proprietary daemon using a pre-established "protocol", and everything is fine. Legally, what happens if UNIX sockets are implemented via shared memory? What does the GPL say in that case?
I maintain that the amount of sharing and coupling between the application and the PKCS#11 module is the same in both cases. The only difference is the mechanism used to accomplish this sharing. In both cases, the application shares only data with the service provider (PKCS#11, in either form).
In my opinion, data sharing does not and cannot (in any common-sense interpretation) constitute a "derivative work". Furthermore, the GPL is only about code sharing. Then again, I'm not a lawyer - comments are welcome.
Then again, you can register callback functions with the PKCS#11 driver, creating an even more interesting situation. These functions can be translated into asynchronous messages from the daemon back to the application. So in my view, even in this case there is no code sharing - there is as much code sharing as with registering a ubiquitous WindowProc callback with the Win32 API.
Paradoxically, it seems that GnuPG would be allowed to use the closed-source MS CAPI because it is delivered as "part of the operating system". The way CAPI works is:
your application -> CAPI -> back-end driver
So your application interacts with CAPI (delivered as a part of the operating system - an exception permitted by the GPL), and CAPI interacts with the back-end driver for the particular hardware device.
Interested readers can look into the gnupg-users and gnupg-devel archives for the thread named "OpenPGP card". (For the future record: around 6 Sep 2005.)
Based on these things (also read my previous posts on GPG and Bitlbee), I personally find the GPL flawed in many respects. A good open-source license should not allow for "paradoxes" (like the above example), and it also should not prohibit mixing the code (in any way - source form, static or dynamic linking) with other, essentially "free" or standards-conforming code, like OpenSSL [1] or PKCS#11 [2] drivers.
[1] free as in both open-source and no charge
[2] free as in no charge
During these discussions on the list I had a most interesting idea. Read about it soon in the next blog entry.
2005-09-04
Bitlbee encrypted storage patch
This patch fixes a security problem with the Bitlbee server. Namely, users' accounts, passwords and contacts are written to disk in plaintext or in a weakly obfuscated form that can't withstand serious cryptanalysis. If a public Bitlbee server gets compromised, much of your personal data can be stolen.
The patch employs the CAST5-CBC encryption algorithm and PKCS#5 password-based key derivation with 2^16 iterations to slow down password-guessing attacks. It depends on OpenSSL. It also adds more extensive logging, to make it possible to track, for example, repeated login attempts.
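For illustration, the core of such a scheme looks roughly like this with OpenSSL's EVP interface (a sketch, assuming PBKDF2 as the PKCS#5 derivation; the fixed salt and IV are for brevity only - a real implementation must generate them randomly and store them alongside the ciphertext; with OpenSSL 3.x, CAST5 additionally requires the legacy provider):

#include <openssl/evp.h>
#include <cstring>
#include <cstdio>

int main() {
    const char* password = "secret";
    unsigned char salt[8] = {1, 2, 3, 4, 5, 6, 7, 8};  // placeholder salt
    unsigned char key[16];                             // 128-bit CAST5 key
    unsigned char iv[8] = {0};                         // CAST5 block size is 8

    // PKCS#5 key derivation with 2^16 iterations to slow down guessing.
    PKCS5_PBKDF2_HMAC_SHA1(password, (int)strlen(password),
                           salt, sizeof salt, 1 << 16, sizeof key, key);

    const unsigned char plaintext[] = "account=jabber password=hunter2";
    unsigned char ciphertext[sizeof plaintext + 8];    // room for padding
    int len = 0, total = 0;

    EVP_CIPHER_CTX* ctx = EVP_CIPHER_CTX_new();
    EVP_EncryptInit_ex(ctx, EVP_cast5_cbc(), nullptr, key, iv);
    EVP_EncryptUpdate(ctx, ciphertext, &len, plaintext, sizeof plaintext);
    total = len;
    EVP_EncryptFinal_ex(ctx, ciphertext + len, &len);
    total += len;
    EVP_CIPHER_CTX_free(ctx);

    printf("%d bytes of CAST5-CBC ciphertext\n", total);
    return 0;
}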
It also fixes a file descriptor leak, which could be used for a remote DoS attack, and some memory leaks. Namely, in the original Bitlbee, one file descriptor was leaked for each invalid login attempt. If Bitlbee is spawned from inetd (the recommended way of running it), it is trivial to overflow the system's file descriptor table. And then everything halts.
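The leak, reduced to a schematic, self-contained reproduction (this is the bug class, not Bitlbee's actual code):

#include <fcntl.h>
#include <unistd.h>
#include <cstdio>

// One descriptor is opened per "login attempt" and never closed on the
// failure path, so repeated failures exhaust the descriptor table.
static bool handle_login_attempt(bool valid) {
    int fd = open("/dev/null", O_RDONLY);  // stands in for the client socket
    if (fd < 0) return false;              // table full: "everything halts"
    if (valid) close(fd);                  // the bug: no close when invalid
    return true;
}

int main() {
    int leaked = 0;
    while (handle_login_attempt(false))    // every attempt is "invalid"
        ++leaked;
    printf("process wedged after leaking %d descriptors\n", leaked);
    return 0;
}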
The file descriptor leak fix made it into the mainstream (latest patches). The memory leaks I didn't report, so no official patch is available for them.
And I uncovered these bugs after looking into only two functions in the whole source! And I'm not intimately familiar with the code.
Will the "encrypted storage" patch get merged into mainstream Bitlbee? I've contacted the authors, and they aren't keen on incorporating it. Guess why - the OpenSSL license isn't "free enough", i.e. not compatible with the GPL. They would prefer one of:
- GNU TLS, which, judging from the manual, is inferior to OpenSSL in many ways and lacks some features that are necessary for the functionality of this patch (among other things, PKCS#5 key derivation).
- Mozilla NSS, which is both overkill and even less documented than OpenSSL.
My motivation for writing the patch? I'm a user of a certain public server, and I'm concerned about my data. I have convinced the administrator of that server to apply the encrypted storage patch; he likes the idea.
Bitlbee is also a really great program, although, IMHO, not yet quite ready for public servers. But it's perfectly OK for personal use.
I'm an IRC user, and I can finally chat with friends on IM networks who can't use IRC, without messing around with several different client programs. Thanks to the Bitlbee team.