Laptop hard drive lifetime (load cycles)

Run the following:

    $ sudo smartctl -A /dev/sda | grep -P 'Load_Cycle|ATTRIB'

And see something like this:

    ID# ATTRIBUTE_NAME        FLAG    VALUE WORST THRESH TYPE     UPDATED  WHEN_FAILED RAW_VALUE
    193 Load_Cycle_Count      0x0032  100   100   000    Old_age  Always   -           6038

The raw value in the last column is the number of load cycles. A laptop hard drive is typically rated for about 600,000 load cycles. If the count is increasing by several thousand per day (or even several hundred), it may be cause for concern.
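
A rough way to gauge the rate, if you want numbers rather than a feeling (my own quick check, not from the LWN article linked below): take two readings an hour apart and multiply the difference by 24.

    $ sudo smartctl -A /dev/sda | awk '/Load_Cycle/ {print $NF}'   # first reading
    $ sleep 3600                                                   # wait an hour
    $ sudo smartctl -A /dev/sda | awk '/Load_Cycle/ {print $NF}'   # second reading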

For background and how to fix it, see [http://lwn.net/Articles/256769/](http://lwn.net/Articles/256769/)
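
A commonly cited workaround (I believe it's what the LWN article describes) is to relax the drive's Advanced Power Management level so the heads park less aggressively; a sketch:

    $ sudo hdparm -B 254 /dev/sda   # 254 = least aggressive APM short of off; the right value
                                    # varies by drive, and it doesn't persist across reboots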

The journey to Fedora 8

I’ve upgraded our family laptop to [Fedora 8](http://fedoraproject.org/wiki/Releases/8/ReleaseSummary) (yes, we still dual boot into Windows Vista). The upgrade would have been a rather bumpy ride, except that I knew that Fedora 8 upgrades are [problematic](http://fedoraproject.org/wiki/Bugs/F8Common#head-7b9bf2dab0e2bdd97d98334c7198cd9cd3eaf9be) (installs are OK), and that there’s a [workaround](https://bugzilla.redhat.com/show_bug.cgi?id=372011).

While Fedora 7 supported our laptop fairly well, Fedora 8 is even better. The power savings features are better. The Fedora community has tracked down and fixed several programs that were power hogs. The screen dims automatically after about 30 seconds on battery power. File systems are reportedly mounted with the new [‘relatime’ option](http://www.lesswatts.org/tips/disks.php), which reduces hard drive power drain; if you upgraded rather than installed fresh, though, you have to add it manually. Improved wireless drivers in combination with an improved NetworkManager connect more reliably, and more quickly, to our WPA2 access point.
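
For reference, adding it manually after an upgrade is a matter of tacking `relatime` onto the options column of the root filesystem's line in `/etc/fstab` (the device and filesystem here are placeholders; match them to your own line), or remounting on the fly:

    # /etc/fstab: append relatime to the root filesystem's mount options
    /dev/VolGroup00/LogVol00  /  ext3  defaults,relatime  1 1

    # or apply it immediately without rebooting:
    $ sudo mount -o remount,relatime /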

*FreeNX broken, and manually fixed*

I use FreeNX regularly to connect to a remote Linux box. When I upgraded one machine to Fedora 8, I couldn’t connect using an NX client. I found [a suggestion](http://www.nabble.com/Fedora-8-working-for-anyone–t4806795.html) that helped me fix it: Edit `/usr/libexec/nx/nxnode` and replace `DISPLAY="unix:$display"` with `DISPLAY=":$display"` everywhere. Hopefully, someone will re-roll the FreeNX packages to fix this for Fedora 8.
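
The edit is mechanical enough to script. Something like this should do the replacement (the single quotes keep the shell from expanding `$display`, and `-i.bak` leaves a backup copy):

    $ sudo sed -i.bak 's/DISPLAY="unix:$display"/DISPLAY=":$display"/g' /usr/libexec/nx/nxnode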

*Ubuntu sidenote*

I’ve heard the claim that Ubuntu is more ready for the desktop than Fedora, and up to this point, I didn’t know how that could be. Last weekend, I plugged a Logitech QuickCam into my brother’s Ubuntu system. I was trying to figure out how to load the webcam driver when we discovered that Ubuntu had already recognized it, and it was ready to use.

Linux performance tuning

When attempting to find and fix performance bottlenecks on a Linux system, it’s helpful to know where to start. Here are a few resources I’ve found:

IBM’s [Linux Performance and Tuning Guidelines](http://www.redbooks.ibm.com/abstracts/redp4285.html), published July 2007

> This IBM Redpaper describes the methods you can use to tune Linux,
> tools that you can use to monitor and analyze server performance, and
> key tuning parameters for specific server applications. The purpose of
> this redpaper is to understand, analyze, and tune the Linux operating
> system to yield superior performance for any type of application you
> plan to run on these systems. ([Read more…](http://www.redbooks.ibm.com/abstracts/redp4285.html))

This website has useful tips:
[http://www.performancewiki.com/](http://www.performancewiki.com/)

Google has some tools that people recommend:
[http://code.google.com/p/google-perftools/wiki/GooglePerformanceTools](http://code.google.com/p/google-perftools/wiki/GooglePerformanceTools)

This book seems to be recommended:
[Optimizing Linux Performance](http://www.amazon.com/Optimizing-Linux-Performance-Hands-Professional/dp/0131486829)

In my experience, strace and ltrace with the `-c` and `-T` options are extremely useful, even for Perl scripts. The `-T` option shows the time spent in each call and can help isolate which calls are slowest.

* `strace -o program.trace -T -p <pid>`
* `ltrace -o program.trace -T -p <pid>`

The `-c` option gives a summary of the calls that used the most time:

* `strace -c -p <pid>`
* `ltrace -c -p <pid>`

I haven’t found a good way to isolate memory leaks in Perl programs (not that I’m an expert). What has worked for me is to [divide and conquer](http://en.wikipedia.org/wiki/Divide_and_conquer_algorithm) until the problem is cornered.

Internet Explorer more secure than Firefox?

In the past, I’ve recommended to friends and family that they run Firefox instead of Internet Explorer to gain better security and usability on Windows systems. I’m re-evaluating that stance now that I’ve learned about a new feature of Windows Vista that restricts Internet Explorer and runs it inside a jail. It’s called [Protected Mode](http://blogs.msdn.com/ie/archive/2006/02/09/528963.aspx), or [Mandatory Integrity Control](http://www.securityfocus.com/infocus/1887/2), and it means that spyware and adware are less likely to infect a Vista computer.

As far as I know, Firefox doesn’t (yet) run inside the “jail”, so Internet Explorer is probably the more secure choice — yet another reason to admire the technical engineering [effort that went into Windows Vista](http://en.wikipedia.org/wiki/Windows_Vista#New_or_improved_features).

Despite the improved security of IE 7 on Vista, I enjoy the usability of Firefox, including the ability to disable JavaScript from running by default using the [NoScript extension](https://addons.mozilla.org/en-US/firefox/addon/722). Does anyone know whether there’s a NoScript equivalent available for Internet Explorer? If not, I’m sticking with Firefox.

Laptop lamentations and blissful benefits

At our household, we’ve finally made the leap from a desktop computer to a shiny new laptop, an [HP dv6426us](http://www.amazon.com/exec/obidos/ASIN/B000RGG5EC/). A new computer, in theory, should save time because it runs faster, right? Wrong. It takes time to become familiar with Windows Vista and to find where they’ve hidden various configuration options (displaying file extensions in Explorer, for example). HP doesn’t make it obvious how to stop their annoying add-ons from popping up in my face. I didn’t buy this thing to run the HP Health Center. I bought it so the OS would stay out of my way and let me focus on work (err, tinkering).

We’re still attached to our desktop computer until we have migrated our data and applications over to the laptop. Migration requires time, time, and more time. [Firefox](http://www.getfirefox.net/), [Thunderbird](http://www.mozilla.com/thunderbird/), Quicken, [Vim](http://www.vim.org), [Password Safe](http://passwordsafe.sourceforge.net/), [PuTTY](http://www.chiark.greenend.org.uk/~sgtatham/putty/), [Cygwin](http://www.cygwin.com/), PrintMaster, Hallmark Create-a-Card, Palm Desktop, [OpenOffice](http://www.openoffice.org/), [IrfanView](http://www.irfanview.com/), [NoMachine NX](http://www.nomachine.com/), an instant messaging client, and the list goes on. I’ve tried Vista’s new [Windows Mail](http://en.wikipedia.org/wiki/Windows_Mail), and it’s much better than Outlook Express, but my wife and I have our email in Thunderbird, and it was easy to migrate that across once I figured out where to drop the folder on Vista: in `C:\Users\MyUserName\AppData\Roaming\Thunderbird`. PrintMaster 12 didn’t run for non-admin users until I figured out that I needed to grant Full Control access for `C:\Program Files\Broderbund\PrintMaster\Ereg`. Cygwin and NoMachine NX conflict with each other.

I bought this particular laptop because the hardware was likely to work with Linux — it has an Intel graphics card, which has open-source Linux drivers, and Intel WiFi. Open source drivers mean that suspend and resume are far more likely to function correctly than when using proprietary drivers (as from Nvidia or ATI).

I wanted to install Fedora 7, which meant resizing the existing Windows partition; Vista’s disk manager made this a piece of cake. Installing Fedora 7 was easy. At first, Fedora didn’t resume after suspending to RAM. After applying all Fedora updates, suspend/resume worked, although WiFi doesn’t come back after the resume. Hibernate always works, and so does WiFi after resuming from hibernation. NetworkManager wouldn’t connect to my WPA2-encrypted access point until I disabled SELinux.
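
For anyone hitting the same wall, the usual way to rule SELinux in or out is to flip it to permissive mode first, which logs denials without enforcing them; disabling it outright is a last resort:

    $ sudo setenforce 0            # permissive until the next reboot
    $ sudo vi /etc/selinux/config  # set SELINUX=permissive (or disabled) to make it persistent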

Linux has other problems running on the hardware, including:

– Secondary screen output hasn’t worked yet. This is easy and painless in Windows. It sounds like the latest xorg releases may help, via the Resize and Rotate extension ([RandR](http://en.wikipedia.org/wiki/XRandR)); see the sketch after this list.
– The microphone doesn’t seem to work (although it does through a VMware Windows guest). I haven’t figured this out yet.
– Spotty webcam support in applications. Ekiga crashes. But yes, I found a [driver for my webcam](http://linux-uvc.berlios.de/). Too bad it didn’t come with Fedora 7; I had to download, compile, and install it myself.
– Slow hibernate/resume. The [TuxOnIce](http://www.tuxonice.net/) project supposedly remedies this, but I don’t want to spend all of my time tweaking my Linux box.
– Battery life. Even the Linux kernel hackers acknowledge that Windows gives better battery life than Linux. This situation is being remedied, gradually.
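
Here’s the sort of thing the new RandR 1.2 support is supposed to allow, once the drivers catch up (output names like `LVDS` and `VGA` are whatever `xrandr -q` reports on your hardware):

    $ xrandr -q                                   # list detected outputs and modes
    $ xrandr --output VGA --auto --right-of LVDS  # extend the desktop onto an external monitor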

It takes more time than I want to spend to get Linux running optimally on this hardware, and some Windows applications, like PrintMaster, just don’t have Linux equivalents. My plan is to run Linux under [VirtualBox](http://www.virtualbox.org/) or [VMware Player](http://www.vmware.com/products/player/).

I now realize that there’s huge value in an OEM preinstall of an operating system for end users. I had considered buying an [Ubuntu Dell laptop](http://www.dell.com/content/topics/segtopic.aspx/linux_3x), but let’s face it, Dell Inspirons are ugly. HP systems are sleek, beautiful, and cost less while coming with more features (like a webcam).

Having a laptop is changing the way we work. Mobility is a huge win. We took the laptop with us when we went to vote in the primary election, because we had candidate information we could access using a web browser. There are downsides, of course.

We need a new printer and a new scanner with Vista drivers. We get to put up with frequent security dialogs interrupting our workflow. Vista itself consumes more RAM (nearly 500 MB), so we may need to upgrade to 2 GB. Vista runs slower than XP and Linux. Vista’s hibernate/resume is quite slow and gives no visual feedback during hibernation, just a blank screen. Vista’s boot and login experience is slow. Oh well. Life goes on.

Git underwhelms

I work on source code from two separate SVN repositories. One of them is geographically remote, and working with that server is slow for `log`, `diff -r`, `blame`, etc. Due to my interest in distributed version control, and my desire for faster repository access, I decided to try git and git-svn. Doing `log`, `diff`, etc. against a local git repo is much faster, but on the whole, working in a git repo created with git-svn has been difficult and unrewarding. Perhaps it would be easier if others at my company were using git-svn and we could share ideas. Working with git and git-svn requires learning a new workflow, and I haven’t yet reached enlightenment.
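
For reference, the basic git-svn round trip I’ve been attempting looks roughly like this (the repository URL is a placeholder; `-s` assumes the standard trunk/branches/tags layout):

    $ git-svn clone -s http://svn.example.com/repo project   # one-time import of the SVN history
    $ cd project
    $ git commit -a -m "my change"   # commit locally, as often as you like
    $ git-svn rebase                 # replay local commits on top of new upstream revisions
    $ git-svn dcommit                # push each local commit back to Subversion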

Challenges with Git:

  • The Git Wiki is often out-of-date and/or incomplete (submodule support, for example).
  • No Nautilus, Konqueror, or Windows Explorer integration.
  • No KDevelop integration.
  • git-gui should:
      • let me double-click on files listed in either “Staged Changes” or “Unstaged Changes” to edit the file, or let me right-click and choose an “edit” option.
      • let me use an external diff program such as meld or kdiff3, and let me set it up from within git-gui. qgit has an external diff option (it defaults to kompare), but it doesn’t use the working copy on the right-hand side, so the diff tool can’t be used to change the working-copy file.

Challenges with git-svn (more complicated to use than Subversion):

  • Two-stage commit instead of single stage: `git commit`, then `git-svn dcommit`.
  • Error messages are cryptic, so I don’t know how to resolve the errors.
  • git-svn rebase doesn’t merge changes from the upstream Subversion server into my working copy, and git-svn doesn’t tell me what workflow I should be using. So I ran git-svn fetch to pull upstream Subversion changes, then ran git-gui and chose Merge->Local. It told me something helpful: “You are in the middle of a change. File X is modified. You should complete the current commit before starting the merge. Doing so will help you abort a failed merge, should the need arise.” “git-svn rebase” should have told me the same thing. (See the stash sketch after this list.)
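
The pattern that seems to address that last complaint, for what it’s worth (`git stash` arrived in git 1.5.3), is to shelve the uncommitted work before rebasing:

    $ git stash        # shelve uncommitted changes
    $ git-svn rebase   # pull in upstream Subversion revisions
    $ git stash apply  # restore the shelved changes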

Reasons to continue with Subversion:

  • Workflow is easier, less complex — perhaps because I’m used to it.
  • Windows Explorer integration via TortoiseSVN.
  • IDE integration. Nearly every IDE supports or has a plugin for Subversion.
  • svnmerge.py gives me cherry-picking support between branches within the same repository (sketched after this list).
  • svnmerge.py remembers merges, so I don’t have to resolve the same conflicts twice.
  • I don’t need disconnected operation in my workplace.
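
The svnmerge.py workflow mentioned above goes roughly like this (the revision number is a placeholder):

    $ svnmerge.py init           # run once in the branch working copy; records the merge source
    $ svnmerge.py avail          # list trunk revisions not yet merged
    $ svnmerge.py merge -r 1234  # cherry-pick revision 1234 and record it as merged
    $ svn commit -F svnmerge-commit-message.txt   # svnmerge.py drafts the log message for you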

I hope that in a year, Git, git-svn, and developer tool integration will be more mature and thus more rewarding to use. With the rapid development I see happening, that wouldn’t be surprising.

I will continue to use git-svn. It gives me the speed I need for working with log history, annotate and diff.

Update: I’ve come across Git for Computer Scientists, and seeing the pretty graphs leads me to believe that working with git requires an understanding of how git works.

Junk Science: New Science Challenges Climate Alarmists?

Fox News reports on [Junk Science: New Science Challenges Climate Alarmists?](http://www.foxnews.com/story/0,2933,292810,00.html)
Thursday, August 09, 2007

> … The new model predicts that, during the coming decade, average global
temperature will be 0.3 degrees Centigrade (plus/minus 0.21 degrees
Centigrade) higher than the 2004 average temperature.

> But can mathematical models really estimate global temperature change
within 0.3 degrees Centigrade when we don’t even know what the average
global temperature is to within 0.7 degrees Centigrade?

> As NASA’s alarmist-in-chief James Hansen admits, we have no definition
of what we are trying to measure in the context of average global
temperature. “For the global mean, the most trusted models produce a
value of roughly 57.2 degrees Fahrenheit, but it may easily be anywhere
between 56 and 58 degrees Fahrenheit and regionally, let alone locally,
the situation is even worse,” says Hansen.

> For a dimmer view of the concept of average global temperature, consider
the thoughts of renowned theoretical physicist Freeman Dyson who says
that average land temperature is “impossible to measure… is a fiction…
nobody knows what it is… there’s no way you can measure it.”

> The UK researchers (and most other climate alarmists) are even wrong on
the matter of 1998 being the warmest year on record – at least for the
U.S. According to a new analysis which discovered an error in a NASA
dataset, 1934 is the new warmest year on record for the U.S. In fact,
four of the warmest 10 years in the U.S. date from the 1930s while only
three date from the last 10 years. This is an embarrassing setback for
alarmists, especially since about 80 percent of manmade carbon dioxide
(CO2) emissions occurred after 1940.

[Read more…](http://www.foxnews.com/story/0,2933,292810,00.html)

Global warming? Look at the numbers

Canada’s National Post reports [Global warming? Look at the numbers](http://www.canada.com/nationalpost/columnists/story.html?id=61b0590f-c5e6-4772-8cd1-2fefe0905363)

>Last week, NASA’s Goddard Institute for Space Studies — whose temperature records are a key component of the global-warming claim (and whose director, James Hansen, is a sort of godfather of global-warming alarmism) — quietly corrected an error in its data set that had made recent temperatures seem warmer than they really were…. The hottest year since 1880 becomes 1934 instead of 1998, which is now just second; 1921 is third…. Perhaps we will have uncontrollable warming in the future, but it likely hasn’t started yet.

Article: The Pillars of Concurrency

[The Pillars of Concurrency](http://www.ddj.com/dept/64bit/200001985), July 02, 2007 by Herb Sutter

“In his inaugural column, Herb makes the case that we must build a
consistent mental model before talking about concurrency.
Herb is a software architect at Microsoft and chair of the ISO C++
Standards committee.”

* Pillar 1: Responsiveness and Isolation Via Asynchronous Agents
* Pillar 2: Throughput and Scalability Via Concurrent Collections
* Pillar 3: Consistency Via Safely Shared Resources
* Composability: More Than The Sum of the Parts

Ubiquitous Version Control and the future of Subversion

Mantra: “[Version control must become ubiquitous](http://subversion.tigris.org/servlets/ReadMsg?list=dev&&msgNo=128193)” — Branko Čibej

Subversion developers are gradually leaning toward distributed version control techniques [[1]](http://subversion.tigris.org/servlets/ReadMsg?list=dev&msgNo=128301), [[2]](http://subversion.tigris.org/servlets/ReadMsg?list=dev&msgNo=128301). However, they don’t want users to have to know it’s distributed; they don’t even want users to know they’re using version control. Lawyers, architects, etc. all need version control, and often they don’t even know that it’s possible. It also needs to be easy to do text searches on commit history.

> We do need to recognize that users are not interested in becoming version control experts, and we need to pay close attention to what they actually want, as opposed to what experts might want them to want.

> The reason Subversion is taking over the world is because it is tremendously
user-focused, and because it provides well-documented APIs that enable other
developers to write software on top of Subversion. We should copy what
we need from the decentralized systems, but remember that most users don’t know
or care whether a system is centralized or decentralized — their ideal system
is one they don’t notice. — [Karl Fogel](http://subversion.tigris.org/servlets/ReadMsg?list=dev&msgNo=128111)

Apparently, users really like the ability to “lock” files in the repository. How does that work with distributed version control? I don’t know.
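
For context, this is roughly what locking looks like on the Subversion side (the file name is made up):

    $ svn lock budget.xls -m "editing, back by 3pm"   # take the lock, with a note for others
    $ svn unlock budget.xls                           # release it
    $ svn propset svn:needs-lock yes budget.xls       # make unlocked working copies read-only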

[Eric Raymond praised the Subversion developers](http://subversion.tigris.org/servlets/ReadMsg?list=dev&msgNo=128106), although he believes the future lies in distributed systems like Mercurial.

Karl Fogel [replies to this and to Linus Torvalds’s criticisms of Subversion](http://subversion.tigris.org/servlets/ReadMsg?list=dev&msgNo=128111). He does a great job of summarizing Linus’s talk on Git, and the things people want to do with version control:

> For many organizations, including open source projects, centralization is a
feature: you want changes (and branches) to end up in the master repository
sooner rather than later, so they’ll be visible to everyone, so they’ll be
backed up, so they’ll go through the central hook system, etc. It
focuses the community on a shared object (Ben Collins-Sussman makes this argument in
more detail at [http://blog.red-bean.com/sussman/?p=20](http://blog.red-bean.com/sussman/?p=20)).

> A general tool configured to behave in a specific way is never quite
as natural to use as a tool designed for that specific use in the
first place. In other words, Subversion can — will have to — take
on some of the features of decentralized VC systems, but it will never
be as good a decentralized system as they are. By the same token, a
decentralized system can be configured to work like a centralized one,
but will never be as good at it as Subversion is.

Individual [.svn dirs may go away](http://subversion.tigris.org/servlets/ReadMsg?list=dev&msgNo=128148), along with the ability to move a subdirectory of a checkout somewhere else and have it still work. The reason? Better performance.