mtnwestruby: Ruby Queues

Mountain West Ruby Conference
Ruby Queues (RQ) by Ara Howard
16 March 2007

http://www.linuxjournal.com/article/7922
Ara works for NOAA — primarily with satellite data sets. 50 KLOC, all paid for by taxpayer dollars. He builds medium-sized 10-20 node distributed systems.

RQ helps build instant distributed Linux clusters. When presenting RQ to scientists, he rarely mentions Ruby. Today, he will talk about the technical side of RQ. RQ isn’t one of the most interesting pieces of software he’s written, but he learned more than average while writing it. One of the reasons he teaches and presents is that he learns while doing it.

RQ has been used to help generate power outage maps after hurricanes hit. Why did he develop it? The lab purchased a bunch of Linux machines instead of a Cray, because it was cheaper. His job was to make them work together. He tends to believe that the first link on Google will yield the information he needs, so he went looking for a simple distributed computing framework. The solutions he found were either the wrong fit or overly heavyweight. In their environment, the programs that act on the data follow the data, because it’s more expensive to move data than to move programs. So he decided to write RQ.

He tried using MySQL for the server queue controller. However, it added complications: setting up usernames and passwords, and getting the security of that arrangement approved. He decided to leverage what was already approved: traditional UNIX file permissions and NFS for shared access to data. He also couldn’t run a process as root, or have it listen on a TCP port.

Needed NFS-safe lock files.

gem install lockfile # he wrote this package

NFS lockd wasn’t very good at throughput or fairness. One node would get the lock 500 times in a row, then the next node 500 times, etc. He wrote lock-polling code with a back-off algorithm. It took a while to get it right.
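
A minimal sketch of what that looks like with his lockfile gem (the option names follow the gem’s documentation; update_queue is a hypothetical stand-in for the critical section):

  require 'lockfile'   # gem install lockfile

  # Poll for an NFS-safe lock, backing off between attempts.
  # :retries => nil means retry forever.
  Lockfile.new('/nfs/queue/q.lock',
               :retries   => nil,
               :min_sleep => 1,    # initial pause between polls (seconds)
               :sleep_inc => 2,    # increase the pause on each failed attempt
               :max_sleep => 32) do
    update_queue   # critical section: safe to touch the shared queue here
  end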

Ended up using SQLite for the shared data store. “Beats the pants off pstore, fsdb, Madeleine, etc.” Most of those less-than-ideal alternatives didn’t cope well with NFS’s heavy caching: they would run for two weeks, then hit corruption. In contrast, SQLite is very robust over NFS — it detects and recovers from corruption.
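
As a rough illustration (not RQ’s actual schema or paths), a shared SQLite job store in Ruby looks something like this:

  require 'sqlite3'   # gem install sqlite3

  db = SQLite3::Database.new('/nfs/queue/jobs.db')  # hypothetical NFS path
  db.busy_timeout = 10_000  # wait up to 10s if another node holds the write lock

  # Illustrative schema only.
  db.execute <<-SQL
    CREATE TABLE IF NOT EXISTS jobs (
      id      INTEGER PRIMARY KEY AUTOINCREMENT,
      command TEXT NOT NULL,
      state   TEXT NOT NULL DEFAULT 'pending'
    )
  SQL

  db.execute("INSERT INTO jobs (command) VALUES (?)",
             'ruby process_satellite_data.rb')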

gem install slave
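
From memory of the slave gem’s README (treat the details as approximate), it forks a child process and serves an object over DRb through a UNIX domain socket:

  require 'slave'   # gem install slave

  class Server
    def add_two(n) n + 2 end   # toy payload for illustration
  end

  # Fork a child, run a DRb server there, and get back a proxy object.
  slave = Slave.new(:object => Server.new)
  p slave.object.add_two(40)   #=> 42, evaluated in the child via DRb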

How does a normal user install daemon processes? RQ cron.

nrtq query: input and output are in YAML. He didn’t tell the scientists it was YAML; they didn’t need to know. Using YAML meant he didn’t have to write his own parser, and it’s human-readable.
RQ is being used on a single host to queue jobs. There’s a Rails plugin.
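
A quick illustration of why YAML was a comfortable choice: plain Ruby structures serialize to readable text and round-trip cleanly (the job fields here are invented).

  require 'yaml'

  job = { 'command' => 'make_outage_map.rb', 'priority' => 5 }
  puts job.to_yaml   # readable by scientists; no custom parser needed
  # ---
  # command: make_outage_map.rb
  # priority: 5

  YAML.load(job.to_yaml) == job   #=> true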

Lessons learned:

  • NFS is quirky, but it’s the de facto standard. We get to live with it and work around the quirks.
  • LVM kills performance.
  • Roll your own NFS locking. The standard one is insufficient.
  • Use NFS hard mounts. They put nodes to sleep until the NFS server comes back online.
  • RQ does not move data around. They use vsftpd to allow data to be moved.
  • Constraints are good. It turns out that many people and organizations operate under the same constraints.

Linux C++ IDE; NX

Lately, I’ve been developing on Linux. When developing remotely, I can get
along with a shell and vim, with VNC, or with remote-X. However, none of these
options are as fast or as nice as using NX. Here are the instructions to install and use
the NX server and client on Fedora Core 5 and 6:
http://fedoranews.org/contributors/rick_stout/freenx/

What’s the best C++ IDE on Linux? Of the three IDEs I have evaluated, I’d recommend either SlickEdit or NetBeans C++. I haven’t tried Emacs. I’ve installed KDevelop, but haven’t tried it much yet.

Eclipse CDT

  Overall: Immature and overcomplicated. I prefer vim with a ctags file, jedit, nedit, or gedit.
  Code Completion: Broken -- rarely works
  Search by Symbol or Reference: Broken
  Debugger support: Yes. Ugly user interface
  Custom build (bjam): Yes
  Project support: Yes. Automatically adds new files, removes old files from workspace
  Refactoring support: No
  Subversion support: Yes, with plugin

SlickEdit

  Overall: Excellent IDE
  Code Completion: The best of the bunch, but not as good as Visual Studio
  Search by Symbol or Reference: Excellent
  Debugger support: Yes. Difficult to set up
  Custom build (bjam): Yes
  Project support: Yes
  Refactoring support: Good
  Subversion support: Yes
  Notes: Has fairly good key emulation support for Visual Studio, Vim, Brief, Emacs, etc.
  Language Support: Tagging and syntax highlighting for C++, Java, Perl, Python and Ruby (to name just a few).

NetBeans C++

  Overall: Better than Eclipse CDT
  Code Completion: Yes
  Search by Symbol or Reference: Yes
  Debugger support: Yes, but haven't yet figured out how to set breakpoints.
  Custom build (bjam): Yes
  Project support: Not yet evaluated
  Refactoring support: No
  Subversion support: Yes, with a plugin, or with the NetBeans 6.0 beta.

KDevelop

  Overall: Not yet evaluated
  Code Completion: Yes
  Search by Symbol or Reference: Symbol - Yes (using ctags); Reference - Unknown.
  Debugger support: Yes
  Custom build (bjam): Most likely
  Project support: Yes
  Refactoring support: Unknown
  Subversion support: Yes

None of these tools is as good at code completion as Microsoft Visual Studio 2005.

What’s happening with Version Control Systems?

I’ve long had an interest in version control systems (VCS), also known as source code management (SCM) systems — beginning with RCS, SCCS and CVS. CVS was already showing its age when I started using it in 1998. When the company I worked for, Axent, was acquired by Symantec in 2000, we switched to Perforce. At first, I thought Perforce was a step backwards from CVS. After using it heavily for a few months, it was clear that CVS and WinCVS didn’t come close to the ease-of-use and features of Perforce and p4win. CVS was dreadfully slow compared to Perforce, which was lightning fast (and still is).

Perforce encourages third-party developers to build add-ons for its software, which is almost as good as what you get with an active open-source project. Although Perforce is proprietary, it’s about as open as I’ve ever seen a commercial product. It runs on many platforms, has conversion scripts to migrate CVS repositories to Perforce, and so on. It’s not cheap, unless you’re working on an open-source project, in which case you can get free licenses.

At some point, I heard about the Subversion
project, which aimed to correct many of the deficiencies of CVS. Those
were the pre-1.0 days, and it was interesting to watch the development
of Subversion.

About the same time, BitKeeper was in the news. It was different from CVS, Subversion and Perforce because it was a distributed version control system. The idea appealed to me: a developer could have version control for his or her private changes without having to check in to the main repository until they were ready. At that time, there weren’t any mature open-source distributed version control systems to investigate.

I switched jobs late in 2004, and my new company was using Subversion. Overall, I have been very pleased with Subversion in day-to-day use. It’s much better than CVS. We had some reliability problems with the Subversion server when it ran on Windows with the BDB database storage back-end; when it was switched to a Linux server with the FSFS back-end, it became much more reliable. My team uses TortoiseSVN — an excellent user interface that integrates with Windows Explorer.

I’ve periodically kept tabs on version control systems. Many open-source variants have sprung up over the last few years: Mercurial, Bazaar-NG, Git/Cogito, Darcs, SVK, Arch and Monotone. Lately, though, I haven’t seen any good reviews of which ones are the most mature, or of the pros and cons of each. So I’ve done some Google research to figure it out, focusing primarily on the distributed variants.

The conclusion I’ve come to is that the developers of each version
control system are learning from the developers of the other version
control systems, and each project is improving. The Subversion developers are
learning from the distributed version control developers. Recently, there was
an SVN developer summit and they tried out Mercurial, which tells me that there’s merit to the distributed approach.

If you’re already using a modern version control system, the cost to switch may outweigh the benefit. Organizations seem to be able to cope with legacy tools like Visual SourceSafe and CVS, although better tools can make developers’ lives easier.

Here’s my own highly subjective comparison table. I focused my efforts on the competitors that seem to have garnered the most community adoption, and I’ve included one commercial system, Perforce. Each item is rated on a scale of 1 to 10, 10 being the best. (Update: there’s a better table than mine at http://bazaar-vcs.org/RcsComparisons, and various comparisons at Wikipedia.)

Comparison of Source Code Management systems (January 31, 2007)

                                      Subversion  SVK  Git/Cogito  Mercurial  Bazaar-NG  Darcs  Perforce
Command-line name                     svn         svk  git / cg    hg         bzr        darcs  p4
Cross-platform [1]                    10          9    6           10         10         9      10
Maturity [2]                          9           6    8           7          5          8      10
Maturity: GUI                         9           0    5           4          3          1      10
Disconnected/offline operation [3]    2           10   10          10         10         10     0
Community adoption                    10          2    8           7          5          2      1
Documentation quality                 10          7    7           8          6          8      10
Storage format: robustness [4]        5           5    10          8          7          5      5
Storage format: not in flux           1           1    10          8          1          1      ?
(Re)merging support [5]               0           9    9           9          9          10     4
Repository size                       1           9    10          9          ?          ?      ?
Speed                                 2           7    10          8          6          ?      10
Scalability                           9           9    10          9          5          5      9
Commercial backing                    10          5    10          10         10         5      10
Subversion integration [6]            10          8    6           5          4          4      ?
Totals                                88          87   119         112        81         68     79

[1] Windows, Linux, Mac, Solaris, etc.
[2] Based on project lifetime and flux in the code.
[3] Disconnected editing of files, branching, merging, history, etc. Especially handy when there’s no network connectivity, such as on an airplane.
[4] Storage format least susceptible to corruption.
[5] Remembers prior merges, cherry-picking, etc.
[6] Tailor can be used to migrate changes between all systems.

If I were to pick a VCS system today, it would probably be Git, followed by Mercurial. What follows are my unpolished notes and ideas.

Git/Cogito

Git is very scalable, and it is the fastest open-source version control system available. Git has a wide community of professional engineers supporting it, and it has a bright future. There are graphical user interfaces available for Git, such as gitk and qgit, although none of them is as mature as the user interfaces available for Subversion. Cogito is the easy-to-use command-line wrapper around Git; see also the Cogito Wiki. According to Keith Packard of xorg fame, Git has the most robust/reliable repository storage format. Advantages of Git and all distributed VCSes include 1. offline repository access, 2. private branches, and 3. distributed backups including change history.

For those wishing to use Git/Cogito on Windows, use Cygwin and select the git and/or cogito packages, and read the information at http://git.or.cz/gitwiki/WindowsInstall. For organizations wanting excellent Windows Explorer integration, use git-cvsserver in combination with TortoiseCVS.

To install git and cogito on Fedora, run the following as root:
  yum install git cogito qgit

I’ve reluctantly decided that Git isn’t as mature as Subversion, which shouldn’t be surprising, because Subversion has been around longer. Git isn’t the right fit for all projects. It was designed for monolithic code bases, not modular ones, although work is in progress to allow it to support subprojects (similar to svn:externals): “Such flexibility is an implicit feature of centralized SCMs, but is much more difficult to implement in a distributed system like git. As a result, git currently lacks built-in subproject support, although gitweb does have a notion of subprojects.”

There’s a document that describes Common Mistakes made when using Git. Unfortunately, most of it isn’t written yet — there’s only a loose outline.

Tools — See http://git.or.cz/gitwiki/InterfacesFrontendsAndTools

Mercurial

The OpenSolaris project decided among Bazaar-NG, Git and Mercurial. Mercurial was chosen primarily because 1. it was fast (although Git is faster), 2. the Mercurial developers were very responsive to the OpenSolaris developers, 3. OpenSolaris developers felt they could hack Python code, and 4. the repository format works well with ZFS and NetApp filesystem snapshotting. Their evaluation of Git is here, and it looks like the listed downsides are now out-of-date or superficial. The Mozilla project had a “version control shootout”, and although they haven’t yet made a decision, Mercurial and Bazaar-NG sounded the best to them.

The following has diagrams to illustrate distributed merging:
http://www.selenic.com/mercurial/wiki/index.cgi/UnderstandingMercurial

Mercurial is more mature than Bazaar-NG, and Mercurial is faster:
http://sayspy.blogspot.com/2006/11/bazaar-vs-mercurial-unscientific.html

“Technologically, centralized systems are a single point of failure — any problems with the central server are problems for all people using it.” — http://bazaar-vcs.org/WhyUseBzr

Mercurial supports access control, email notify, line-ending conversion,
etc.:
http://www.selenic.com/mercurial/wiki/index.cgi/UsingExtensions

SVK

SVK is built on top of Subversion, so it should, in theory, integrate
well with an existing Subversion repository, allowing developers to use
a distributed tool even if the master server remains a Subversion
server. Community adoption is high enough to have some confidence
in the future of the project, although adoption isn’t nearly as high as with Git, Mercurial or Bazaar-NG.

It used to be difficult to install, but you can now get a prebuilt installer for Windows, and probably for Linux as well. Working copies (sandboxes) have no extra metadata (no .svn directories to interfere with find, etc.). The repository format is significantly smaller than Subversion’s. I’ve found that SVK is much faster than Subversion, although I haven’t used it much. There is not yet a graphical user interface — a must for many organizations and communities.

The good, the bad and the ugly about SVK (Sept 2006):
http://kitenet.net/~joey/blog/entry/svk.html

Darcs

Users of darcs, including myself, appreciate its simplicity and ease-of-use (note: Cogito, Mercurial and Bazaar-NG are also easy to use). Downsides of darcs are that 1. it is implemented in Haskell, which limits the contributing developer community (perhaps it will inspire people to learn Haskell), 2. it depends on having Haskell libraries installed, and 3. there’s no graphical user interface, unless you count darcsweb. Still, I like darcs, and I use it on my home Linux box. Like Perforce and SVK, darcs doesn’t clutter up every directory with metadata. It used to be that darcs wasn’t very scalable, but I hear it became much more scalable as of mid-2006. I’ve read that Mercurial and darcs feel somewhat similar in their command-line user interfaces.

Mirroring Subversion with Darcs and Tailor (Sept 2006):
http://fiatdev.com/articles/2006/09/10/mirroring-subversion-with-darcs-and-tailor

Subversion

Subversion has a bright future, I think, and we may yet see some of the
advantages of distributed systems appear. For those who need merge
history tracking, which makes future merges from the same branch
easier, there’s svnmerge.py. In a future release, Subversion will have this feature built-in.

The Subversion 1.4 release brought impressive speedups for working copy operations.

Control/Power

Changing information flow by switching from a centralized system to a
distributed system will empower or disempower different sets of people.
I wouldn’t be surprised if one encounters resistance in switching.

In the centralized model, developers are empowered to make any change they want, which may affect everyone, without consulting others. Of course, if they abuse that power, they may lose commit access. With a distributed system, an integrator pulls in people’s changes based on what and whom they trust. If you’re aiming for quality code that doesn’t destabilize a system, that sounds like a good approach, and it works well for Linux kernel development. Most distributed systems can also be used much like a centralized system, so that no integrator is required — individuals push their changes to the master repository.

No-hassle online backup software

No-hassle online backup software for Windows XP: http://mozy.com and http://carbonite.com. Five dollars per month. Not bad.

I heard about these while listening to this podcast on software usability:

Why Software Sucks by David Platt
http://cdn.itconversations.com/ITC.TM-DavidPlatt-2007.01.02.mp3

What is the most important thing to the average computer user? They want their machine to “just work”. Why does Google know how to correctly translate a United Parcel Service tracking number, while the actual UPS website requires multiple entries just to get to the point where the tracking number can be entered? Programmer David Platt is the author of “Why Software Sucks…and What You Can Do About It”.

While average users are expected to use the computer as an everyday tool, programmers too often produce software that has poor functionality, especially compared to other devices used to perform other routine tasks.

One of the other major problems is that software is too often marketed to enterprises rather than individuals, and that constant updates are meant to convince companies to regularly upgrade, with little or no thought given to the end user.

The discussion is both enlightening and entertaining. While Platt believes the problem can be solved, he thinks it won’t happen unless software designers change their point of view to better consider the needs of the end user.

Stages of Security in the life of computer software

Is your client-server software secure? You may be surprised to find that even mature software, sporting the use of standard encryption, could be putting your mission-critical data at risk. Why is that? It has to do with economics and the lifecycle of software. Here are the stages.

Prototype. No one cares about the security of the client-server communication.

First release. Although the system (likely little more than an improved prototype) has shipped, it may not be usable, and if it is, it lacks mission-critical features. Security is low on the priority list. If encryption and authentication were implemented at all, they are most likely minimal, brittle and insufficient. If the security is robust, the product is not functional and likely never will be, because the ISV will go out of business.

Second or third release. If the ISV has survived long enough to release a second or third version, customers have likely demanded the use of standard encryption. In the old days (the 1990s), this meant the ISV switched from XOR “encryption” to DES or 3DES. These days, standard encryption probably means use of SSL/TLS without certificate checks. Note that customers probably won’t ask about the security of the authentication mechanisms.

Unfortunately, use of a standard encryption algorithm doesn’t mean communication is secure. Software is likely to be vulnerable to man-in-the-middle attacks and have authentication bugs. The customer isn’t likely to know this, and neither is the ISV. If the ISV does know, they won’t fix the problems. This shouldn’t be surprising — the risk tolerance is different for the ISV versus their customers. The software vendor isn’t the one that is going to suffer losses due to information disclosure or breach of integrity.
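
To make that concrete, here is a minimal Ruby sketch of the difference; the host name is hypothetical, and the CA bundle path varies by system:

  require 'net/https'

  http = Net::HTTP.new('api.example.com', 443)   # hypothetical server
  http.use_ssl = true

  # The vulnerable pattern: encrypted but unauthenticated, so a
  # man-in-the-middle can present his own certificate unchallenged.
  #   http.verify_mode = OpenSSL::SSL::VERIFY_NONE

  # Verify the server certificate chain against trusted CAs instead.
  http.verify_mode = OpenSSL::SSL::VERIFY_PEER
  http.ca_file     = '/etc/pki/tls/certs/ca-bundle.crt'   # path varies

  response = http.get('/')
  puts response.code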

At this stage, customers usually apply more pressure to implement new features than to focus on security. Of course, this is based on ignorance of the actual situation. The vendor isn’t likely to want to pay the price to improve security… unless the customer knows and applies pressure to get it fixed.

Eventually, a security-conscious customer (e.g. a financial institution or a government) hires someone to evaluate the software, and they start asking the ISV hard questions. Most of the ISV’s software engineers won’t know the answers, because the security mechanism is transparent to their daily work — it stays out of sight and out of mind. Eventually, people figure out that the encryption is vulnerable to man-in-the-middle attacks or authentication bugs. At first, customers may be reluctant to believe that the security holes are serious, and then they will panic. They will apply pressure to the ISV to get the problems fixed.

Take-away points:

  • SSL without certificate checks is vulnerable to Man in the Middle attacks.
  • Almost no ISV gets encryption right the first time, and they won’t fix problems unless their feet are held to the fire.

Tim Bray tells us what’s awesome about Ruby

Tim Bray explains what there is to like about the Ruby programming language:

I’ll jump to the conclusion first. For people like me, who are proficient in Perl and Java, Ruby is remarkably, perhaps irresistibly, attractive. Over the last week I’ve got an unreasonable amount of work done in a ridiculously short period of time, with lots of interruptions, in a language I previously didn’t know. It’s intuitive enough that I’ve often found myself guessing at a syntax or a method or a usage and getting it right first time.

Maybe the single biggest advantage is readability. Once you’ve got over the hump of the block/yield idiom, I find that a chunk of Ruby code shouts its meaning out louder and clearer than any other language. Anything that increases maintainability is a pearl beyond price.

I’ve been programming in C and Java for a quarter-century and I find Ruby easier to read, only a week in. Of course, a language’s culture is often more important than all that technical crap. I’ve found the ruby-talk mailing list to be a fount of wisdom and friendly to ignorant newbies too.

http://www.tbray.org/ongoing/When/200x/2006/07/24/Ruby

Follow the link to find out “What’s Lame” about Ruby.

Best of Breed, or Best of Mediocrity?

Having worked for some time as a software engineer in the enterprise security
software world, I know that customers (enterprises) look for “best of breed”
software. For a large company customer, this usually means that a software
solution distinguishes itself in some way that makes it work well in their
environment. Often, this translates to reliability, cross-platform support,
person-to-person support and the ability to function beyond what is advertised.

As many are aware, there is “consolidation” going on in the security market.
Big fish are swallowing smaller fish, and it’s lucrative, in the short term,
for everyone except customers. Supposedly, the consolidation means that two
separate products can be “integrated”, or unified. Never mind the previous
competitive relationship that may have existed between the product teams and
their management. For some reason, people seem to think that competition
evaporates and that the two product teams will happily work together to build
the next generation “Best of Breed” software solution.

Not so.

In any big corporation or software company, there are constant power plays
being made. You could call this “decision making”, and if you have uncommonly
good leaders, you might even say good decisions are being made.
Unfortunately, it is human nature for most people to misuse and abuse positions
of power. Instead of making product decisions that are best for their merged
customer base, they make decisions that keep themselves in a position of power.

So, we have two best of breed products: Overdog and Underdog. Underdog is
easier to manage, but isn’t as complete in its offerings. Overdog is more
complete, but is more expensive to deploy and manage. Overdog has the advantage
of being used in Fortune 500 companies. Underdog, on the other hand, is trying
to break into that market space.

Enter Big Fish — a.k.a. Consolidator. Consolidator buys Overdog, and a few
years later, buys Underdog. We take two products, both “Best of Breed” in
different ways, and expect to see them merged together to make something “next
generation” — better, faster, stronger, and easier to use.

Whenever there is a consolidation, talented people get fired, and their
creative ideas and abilities are lost. Product integration never happens as
easily as anyone would like to believe (if it happens at all). And in the end, customers end up with a
product that we can best label as “Best of Mediocrity”. Consolidation means
that customers lose their “Best of Breed” solutions.

What can you expect from Software Consolidators? Mediocre solutions. Look
elsewhere for excellence.

Article: Crash-only software

Crash-only software: More than meets the eye
by Valerie Henson July 12, 2006:
http://lwn.net/Articles/191059/

Properly implemented, crash-only software produces higher quality, more reliable code; poorly understood, it results in lazy programming. Probably the most common misconception is the idea that writing crash-only software allows you to take shortcuts when writing and designing your code. Wake up, Sleeping Beauty, there ain’t no such thing as a free lunch. But you can get a more reliable, easier to debug system if you rigorously apply the principles of crash-only design.