Data security can only be achieved by those empowered

Users of online services don’t have the ability (i.e. aren’t empowered) to secure the data stored by those services. Only the engineers and the companies that build the services can do that. So I agree with Cindy Cohn, who says:

…we need to ensure that companies to whom we entrust our data have clear, enforceable obligations to keep it safe from bad guys. This includes those who handle it directly and those who build the tools we use to store or otherwise handle it ourselves.

In my view, business leaders and software engineers have an ethical responsibility to secure their systems and services so that customers’ data and sensitive information doesn’t get misused or abused.

I’d like customers to have a reliable, consistent way to evaluate how much quality and diligence goes into keeping their data safe, something along the lines of CharityWatch or Consumer Reports.

Continuous Delivery

Have you been working on a software project where momentum seems to be slowing down? It tends to happen as features are added, because maintaining and verifying the existing features takes time and effort with each and every release. Without automated tests, momentum slows, or worse, you end up shipping broken software from time to time.

A book titled Continuous Delivery enumerates changes and improvements that organizations can adopt to increase momentum. The business case seems compelling, with the promise of:

  1. Faster reaction times (for the business, for new features, for bugfixes)
  2. Reduced risk via earlier feedback
  3. Flexibility in releasing (dormant features, enabled with a feature switch)
  4. Reduced development costs

The authors have an informative website, and others have written on the topic as well: Atlassian published a series of blog posts called “A skeptic’s guide to continuous delivery”, split into parts one, two, three, four and five. In the first installment, they say “CD fundamentally requires some amount of cultural rewiring because the core structure of CD is a delivery pipeline through which changes flow”.

I like the vision of Continuous Delivery, and if large and small companies have used it to increase their agility in achieving business objectives, then it’s possible for others to do the same.

pre-commit

At work, we use git, and git supports hooks, including pre-commit hooks. Rather than write my own, and do it poorly, I’m using a tool called pre-commit, created by engineers at Yelp.com. To them, I offer my thanks.
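
For a sense of how it works: you drop a .pre-commit-config.yaml file into the root of the repository, listing the hook repositories and hook ids you want to run. A minimal sketch might look something like this (the repository URL, pinned revision, and hook ids are illustrative; check the pre-commit documentation for the exact format your version expects):

repos:
-   repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.4.0
    hooks:
    -   id: trailing-whitespace
    -   id: end-of-file-fixer

Running “pre-commit install” then wires the tool into .git/hooks/pre-commit, and the configured checks run automatically on every commit.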

ip and ss: better than ifconfig and netstat

I’ve been using Linux for a while now, so typing certain commands is fairly ingrained, like ‘ifconfig’ and ‘netstat’. I know about “ip addr”, which is more modern than ifconfig, and I use it sometimes.

This week, I learned about ‘ss’, which is faster than ‘netstat’ and does more. My favorite invocation is “ss -tlp”, which shows the programs listening on TCP sockets.
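
For my own reference, here are the rough modern equivalents of the commands my fingers keep typing (the mappings are approximate; see the man pages):

ip addr show    # interfaces and addresses, roughly what ‘ifconfig -a’ shows
ip route        # the routing table, roughly ‘netstat -rn’
ss -tlp         # listening TCP sockets and their owning programs, roughly ‘netstat -tlp’
ss -s           # summary counts of sockets by type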

OpenWest notes

This past weekend, I attended the excellent #OpenWest conference, and I presented Scaling RabbitMQ.

The volunteers that organized the conference deserve a huge amount of thanks. I can’t imagine how much work it was. I should also thank the conference sponsors.

A local group of hardware engineers designed an amazing conference badge, built from a circuit board. They deserve a big “high-five”. There was a soldering lab where I soldered surface-mount components for the first time in my life, holding them in place with tweezers. I bought the $35 add-on kit, which included a color LCD screen and a Parallax Propeller chip. The base kit took me 45 minutes, the add-on kit another two hours, and I breathed a sigh of relief when I turned on the power and it all worked.

The speakers did a great job, and I appreciate the hours they spent preparing. I wish I could have attended more of the sessions.

Among others, I attended sessions on C++11, Rust, Go, Erlang, MongoDB schema design, .NET Core, wireless security, SaltStack, and digital privacy.

I’m going to keep an eye on Rust, learn and use Go, and try the new beacon feature of SaltStack. Sometime in the future, I’d also like to use the new features of C++11.

The conference was an excellent place to have useful side-conversations with vendors, speakers, and past colleagues. It was a great experience.

Linux, time and the year 2038

Software tends to live longer than we expect, as do embedded devices running Linux. Systems that need to handle time accurately through the year 2038 and beyond will have to be updated.

Fifteen years after Y2K, Linux kernel developers continue to refine support for time values that will get us past 2038. Jonathan Corbet, editor of LWN.net, explains the recent work in his typical lucid style: https://lwn.net/Articles/643234

It sounds like ext3 and NFSv3 filesystems will need to go the way of the dodo due to their lack of support for 64-bit time values, while XFS developers are working on adding support to get us past 2038. By then, many of us will have moved on to newer filesystems.

One comment linked to this useful bit of information on time programming on Linux systems: http://www.catb.org/esr/time-programming/. The summary is:

To stay out of trouble, convert dates to Unix UTC on input, do all your calculations in that, and convert back to localtime as rarely as possible. This reduces your odds of introducing a misconversion and spurious timezone skew.

It’s also excellent advice for any back-end system that deals with data stored from devices spanning a continent or the world, although it doesn’t necessarily eliminate daylight saving time bugs.
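
As a quick illustration of the pattern using nothing but GNU date (the timestamp happens to be the 32-bit rollover moment itself):

$ date -u -d '2038-01-19 03:14:07' +%s   # normalize input to UTC and keep epoch seconds for all arithmetic
2147483647
$ date -d @2147483647                    # convert back to local time only at the display step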

Containerization – the beginning of a long journey

I read this today and thought it was worth sharing:

The impact of containerization in redefining the enterprise OS is still vastly underestimated by most; it is a departure from the traditional model of a single-instance, monolithic, UNIX user space in favor of a multi-instance, multi-version environment using containers and aggregate packaging. We are talking about nothing less than changing some of the core paradigms on which the software industry has been working for the last 20 – if not 40 – years.

And yet it is tempered with reality:

…we are really only at the beginning of a long journey…

http://rhelblog.redhat.com/2015/05/05/rkt-appc-and-docker-a-take-on-the-linux-container-upstream/

Ubuntu and .local hostnames in a corporate network

In the past, I’ve had trouble getting my Ubuntu machine to resolve the .local hostnames at work. I didn’t know why Ubuntu had this problem while other machines did not.

When I did a DNS lookup, it failed, and ping of host.something.local failed. Yet ping of the hostname without the .something.local extension worked. Odd. I googled various terms, but nothing useful came up. I tried watching the DNS lookup with tcpdump, but it didn’t capture anything.

Eventually, I thought of running ‘strace ping host.something.local’ to see what was happening, and it turned out that DNS was never being queried: the lookup was going to something called Avahi.
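
If you try this yourself, limiting strace to network-related system calls makes the pattern easier to spot (these flags are one way to do it, not necessarily what I used at the time):

$ strace -e trace=network ping -c 1 host.something.local

The output makes it obvious when the lookup talks to the local Avahi socket instead of sending anything to a DNS server on port 53.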

I googled “avahi” and was reminded that hostname resolution order is configured in /etc/nsswitch.conf. On Ubuntu, the hosts line sends *.local requests to Avahi (mdns4_minimal), and the [NOTFOUND=return] action stops the lookup there: if Avahi doesn’t resolve the name, DNS is never consulted.

In my case, I want corporate DNS to resolve .local addresses. So I changed my /etc/nsswitch.conf from this:

hosts: files mdns4_minimal [NOTFOUND=return] wins dns mdns4

to this:

hosts: files wins dns mdns4_minimal mdns4

And now my Ubuntu development machine can communicate with our internal .local machines without resorting to IP addresses, short names, or mappings in /etc/hosts.
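
A quick way to confirm a change like this is getent, which resolves names through the same nsswitch configuration (the hostname is a stand-in for a real internal one):

$ getent hosts host.something.local

If the new hosts line is doing its job, that prints the address handed back by corporate DNS.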