Blog Infrastructure: Control versus Capitulation

I like to be in control of my destiny where my public website (and my blog) is concerned. That way, my content isn’t at the mercy of a third party that may start charging to host it, remove it, or stop hosting it altogether. I call this control *self-reliance*.

Being in control of my blog has its costs. I am the person responsible for making sure the blog software (WordPress) stays up to date, which takes time — valuable time that I’d rather spend doing something else (and usually do).

Most people I know who blog have already outsourced their blogging platform, whether they realize it or not. Should I capitulate (i.e., surrender control) and do the same thing?

In some sense, my ability to function in this high-tech world requires that I rely on others. I rely on one third party to provide the blogging software (WordPress), another to host my web server (digitalspace.net), another to provide bandwidth, and yet another to provide a domain name (joker.com). On and on the list goes. I am not an island unto myself. My ability to succeed depends on being part of civilized society.

I’d capitulate control of my blog, except that I still want a canonical location for my blog to live — one that is a little bit less subject to the whims of a single corporate entity. The best place is at jaredrobinson.com. If I need to switch to a new hosting provider or switch to a different domain name registrar, the canonical URL doesn’t have to change.

I’m not ready to capitulate yet. I like my canonical blog URL.

Test-driven development in Perl

There’s an impressively in-depth presentation from [OSCON 2008](http://en.oreilly.com/oscon2008/public/schedule/proceedings) about [Practical Test Driven Development in Perl](http://assets.en.oreilly.com/1/event/12/Practical%20Test-driven%20Development%20Presentation.pdf). It covers Test::More, Test::Class, Test::Differences, Test::Deep and Test::MockObject.
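
The slides are full of real examples; for a taste of the basic style, a minimal Test::More test might look something like this (My::Counter and its new/increment/value methods are made up purely for illustration):

```perl
use strict;
use warnings;
use Test::More tests => 3;

# Hypothetical module, used only to illustrate the Test::More idiom.
use_ok('My::Counter');

my $counter = My::Counter->new;
is( $counter->value, 0, 'a new counter starts at zero' );

$counter->increment;
is( $counter->value, 1, 'increment bumps the value by one' );
```

Run it with `prove` (or plain `perl`) and each assertion is reported as its own test.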

I also found the following to be interesting: [Even Faster Web Sites](http://assets.en.oreilly.com/1/event/12/Even%20Faster%20Web%20Sites%20Presentation%202.ppt) and [Pro PostgreSQL](http://assets.en.oreilly.com/1/event/12/Pro%20PostgreSQL%20Presentation.odp). Reading these helps me to know a little bit about what I don’t know.

Visualize your hard drive using a TreeMap viewer

Every once in a while, I get low on disk space, and manually hunting for large directories or files to delete can be tedious. [Treemap visualization](http://en.wikipedia.org/wiki/Treemap) tools make the job easier. There’s [WinDirStat](http://windirstat.info/) for Windows, [KDirStat](http://kdirstat.sourceforge.net) for KDE, and [Disk Usage Analyzer](http://live.gnome.org/GnomeUtils/Baobab) (baobab) for Gnome.

![TreeMap Image](http://library.gnome.org/users/baobab/stable/figures/baobab_fullscan.png.en)
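
When there’s no GUI handy, a quick script can do a coarse version of the same hunt. Here’s a rough sketch (mine, not part of any of the tools above) that uses Perl’s core File::Find module to total the file sizes under each top-level subdirectory of a given path and print the biggest offenders first:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Find;

my $root = shift || '.';    # directory to scan; defaults to the current one
my %bytes;

opendir( my $dh, $root ) or die "Can't open $root: $!";
for my $entry ( grep { !/^\.\.?$/ } readdir $dh ) {
    my $path = "$root/$entry";
    next unless -d $path;
    # Add up the size of every plain file under this subdirectory.
    $bytes{$entry} = 0;
    find( sub { $bytes{$entry} += -s $_ if -f $_ }, $path );
}
closedir $dh;

# Report in megabytes, largest first.
for my $dir ( sort { $bytes{$b} <=> $bytes{$a} } keys %bytes ) {
    printf "%10.1f MB  %s\n", $bytes{$dir} / ( 1024 * 1024 ), $dir;
}
```

It’s no treemap, but it narrows down where the space went before firing up one of the visual tools.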

Article: A Patent Lie, and other patent happenings

Timothy B. Lee of the Cato Institute wrote [A Patent Lie](http://www.nytimes.com/2007/06/09/opinion/09lee.html?_r=3&oref=slogin&oref=slogin&oref=slogin), in which he explains why copyright is better for the software industry than patents:

> Don’t software companies need patent protection? In fact, companies, especially those that are focused on innovation, don’t: software is already protected by copyright law, and there’s no reason any industry needs both types of protection. The rules of copyright are simpler and protection is available to everyone at very low cost. In contrast, the patent system is cumbersome and expensive. Applying for patents and conducting patent searches can cost tens of thousands of dollars. That is not a huge burden for large companies like Microsoft, but it can be a serious burden for the small start-up firms that produce some of the most important software innovations.

The good news about software patents is that [they’ve been weakened](http://en.wikipedia.org/wiki/KSR_v._Teleflex) so that patent troll companies can’t wreak quite as much havoc as they have in the past. Now there’s not as much money in it. Apparently, [patent troll companies are getting smarter](http://www.linuxworld.com/community/?q=node/16789) about working with open source — most recently with RedHat:

> Trolls need to collect money to survive, and open source vendors can’t give it to them. The good news from this settlement [with RedHat], and [Blackboard’s](http://www.linuxworld.com/news/2007/020107-blackboard-no-action-against-open-source.html), is that trolls are realizing that hitting an open source company is like robbing a store where the safe is on a time lock. They can do damage and hurt people, but the money isn’t available to them.

The settlement was also [documented by Groklaw](http://www.groklaw.net/article.php?story=20080611191302741).

Products to avoid

The nice thing about mass-market commercial software is that I can purchase it for a small fraction of what it would cost to develop it myself, which I would never do anyway because I don’t have the time. Unfortunately, home-user mass-market software seems to lack quality. Here are a few products I recommend against.

* [Greeting Card Factory](http://www.google.com/search?q=greeting+card+factory). When I opened the package, I discovered that the software shipped on about six separate CDs! I purchased it in 2007, an enlightened age in which most people have DVD drives. I’m impatient, and I disliked having to play disk jockey just to install the software. Once installed, it turned out to be cumbersome to use: too much mouse clicking is required to get anything done. There’s no good preview of the greeting messages in the template browser, so I have to load each card, click through the buttons to see its message, and then start all over again to find an appropriate one. It sure is a waste of time. The best greeting card software I’ve used was American Greetings, but that version was designed years ago and required inserting CDs to load some of the cards. Hallmark’s software was the most polished, robust, and least annoying, but I liked the quality of the American Greetings cards better.

UPDATE: There is a good way to preview greeting card messages in the template browser — you have to increase the zoom level to the maximum, and additional preview controls become visible.

* Symantec and McAfee antivirus. They slow down a computer too much (by 20% or more!), and anything whose activation scheme annoys my grandmother is too much of a hassle. Switch to [AVG Free](http://www.google.com/search?q=AVG+free). I run Vista with an unprivileged account, and so far, I haven’t needed antivirus software at all. I ran AVG Free on Windows XP for several years and never got a virus, because I didn’t download and install random software, and because my user account didn’t have administrative privileges.

There’s hardware to avoid as well:

* [Kodak printers](http://printers.kodak.com/). I decided to give a Kodak printer a try because of the promise of cheaper ink. The printer has been a constant hassle ever since we purchased it. Just tonight, even after selecting the best print quality, it still printed every other line as faded and smudgy. My wife seems to know the ritual to make it print better, but she’s not here at the moment. Avoid Kodak printers at all costs. Go with an Epson or an HP — they provide quality results. If a laser printer fits your needs, they’re usually more reliable than an inkjet printer.

Fedora 9, NVidia, VMWare Server

I’ve upgraded four systems to Fedora 9 in the past couple of weeks. For those that have NVidia cards, it was a bumpy ride until NVidia released a [new driver](http://www.nvidia.com/object/linux_display_ia32_173.08.html). To install it as a pre-built RPM package, see [this blog post](http://nareshv.blogspot.com/2008/04/fedora-9-rawhide-and-latest-nvidia-179.html).

For the system that runs VMWare Server, it was necessary to [upgrade to version 1.0.6](http://www.howtoforge.com/vmware-server-installation-on-a-fedora9-desktop), which supports the 2.6.25 kernel shipped with Fedora 9.

NoMachine NX, Fedora 9 and SELinux

I upgraded from Fedora 7 to Fedora 9 using [preupgrade](http://fedoraproject.org/wiki/PreUpgrade), and then I couldn’t connect to the [NoMachine NX Server](http://www.nomachine.com/). It was due to SELinux, again (I [wrote about this earlier](http://jaredrobinson.com/blog/?p=89)). The approach to solving it is still the same, although the policy is different.

Here’s what my audit.log messages looked like:

```
May 30 07:48:03 localhost kernel: type=1400 audit(1212155283.470:7): avc: denied { getattr } for pid=876 \
  comm="sshd" path="/usr/NX/home/nx/.ssh/authorized_keys2" dev=sda2 ino=70976 \
  scontext=system_u:system_r:sshd_t:s0-s0:c0.c1023 tcontext=system_u:object_r:usr_t:s0 tclass=file
May 30 08:22:35 localhost kernel: type=1400 audit(1212157355.873:9): avc: denied { read } for pid=872 \
  comm="sshd" name="authorized_keys2" dev=sda2 ino=70976 \
  scontext=system_u:system_r:sshd_t:s0-s0:c0.c1023 tcontext=system_u:object_r:usr_t:s0 tclass=file
```

Here’s how I created and inserted the policy:

```
cd /etc/selinux
cat /var/log/audit/audit.log | audit2allow -M nx
semodule -i nx.pp
```

And here’s the nx.te file:

```
module nx 1.0;

require {
    type sshd_t;
    type usr_t;
    class file { read getattr };
}

#============= sshd_t ==============
allow sshd_t usr_t:file { read getattr };
```

Open Source Security report from Coverity

[Coverity](http://www.coverity.com) has published its [Open Source Scan Report 2008](http://coverity.com/library/pdf/Coverity-Scan_Open_Source_Report_2008.pdf), which details the security status of several open source projects. Here’s my summary:

* The overall security of open source projects is improving.
* There’s a linear relationship between the amount of code and the number of defects.
* Surprisingly, there’s no relation between function length and defect density.

Projects with exceptionally low defect density include Amanda, NTP, OpenPAM, OpenVPN, Perl, PHP, Python, TCL, Postfix, Samba, curl, libvorbis and vim.

The top two security defects are:

1. NULL pointer dereference
2. Resource leak

I got to preview [Coverity Prevent](http://www.coverity.com/html/prod_prevent.html) at a previous job, and it rocks at finding real bugs in code, with a very low rate of false positives.

Attempt to patent a mental process

David A. Wheeler says, “The US Court of Appeals for the Federal Circuit in Washington, DC just heard arguments in the Bilski case, where the appellant (Bilski) is arguing that a completely mental process should get a patent. The fact that this was even entertained demonstrates why the patent system has truly descended into new levels of madness. At least the PTO rejected the application.”

Wheeler goes on to explain why [patents on information are lunacy](http://www.dwheeler.com/blog/2008/05/09#bilski-information-is-physical).