Yubikey 4 GPG key generation (Ubuntu)

Install supporting software

sudo apt-add-repository ppa:yubico/stable
sudo apt-get update
sudo apt-get install scdaemon -y
sudo apt-get install python-setuptools python-crypto python-pyscard python-pyside pyside-tools libykpers-1-1 pcscd -y
sudo apt-get install yubioath-desktop yubikey-personalization yubikey-personalization-gui yubikey-manager  -y

Insert the Yubikey and generate keys
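
With the Yubikey inserted, it's worth confirming that GnuPG can see the card before generating anything (if this errors out, check that pcscd is running and scdaemon is installed):

gpg --card-status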

gpg --card-edit
gpg/card> admin
gpg/card> generate
gpg/card> quit

Export and back up the public key, because the Yubikey only stores the private portion of the key:

gpg --armor --export $KEYID > mykey.pub
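
If $KEYID isn't handy, it can be read from the key listing; the ID below is just a placeholder, so substitute the long ID printed for the new key:

gpg --list-secret-keys --keyid-format LONG
export KEYID=0xABCDEF1234567890   # placeholder: use the long ID from the listing above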

Require touching the Yubikey button to authenticate, sign, or encrypt:

ykman openpgp touch aut on 
ykman openpgp touch sig on 
ykman openpgp touch enc on 
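
To double-check that the touch policies took effect (the output format varies a bit between ykman versions):

ykman openpgp info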

Change the PIN

gpg --card-edit
gpg/card> admin
gpg/card> passwd
gpg/card> quit
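
The factory defaults on the OpenPGP applet are PIN 123456 and Admin PIN 12345678, so both are worth changing. If I recall correctly, gpg also offers a shortcut that presents the same menu without entering card edit mode:

gpg --change-pin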

Change Yubikey cardholder information

gpg --card-edit
gpg/card> name
gpg/card> lang
gpg/card> quit

LEDE awesomeness

I’ve had what I thought was a great WiFi router for the past 3 years. The vendor continues to provide firmware updates, which is admirable.

Having heard of the awesome improvements that are being made by folks in the LEDE fork of OpenWRT (in the area of eliminating bufferbloat), I thought it was time for an upgrade. So I purchased an Archer C7 version 2 router, and today, I installed LEDE. Installation was a breeze. Configuring LEDE isn’t as easy as most consumer WiFi routers, but the payoff has been good.

My downstream 2.4 GHz WiFi cameras and networking gear seem to be staying online better, and streaming live video works better as well. I’m not sure if my family notices much of a difference, but I do. I appreciate the folks who have brought me better networking.

Runtime debugging tools for Linux

Here’s a useful presentation on Linux debugging tools — tools that don’t require source code, additional prints or logging.

http://jvns.ca/blog/2016/09/17/strange-loop-talk/

  • strace has a new flag that I didn’t know about: -y, which prints the paths that are associated with file descriptors (there’s a short example after this list).

  • opensnoop lets you see the details of open() calls across the entire system, or for an individual process, or for paths containing certain characters, or it can print the file paths that couldn’t be opened.

  • pstack shows the stack trace of a running process, which can be useful to get an idea of what a program spends most of its time doing.

  • dstat shows system resource stats. It is a replacement for vmstat, iostat and ifstat.

  • htop — a more beautiful ‘top’, and easier to use. I still mostly use ‘top’ because it is installed by default. Other great tools I use include ‘powertop’ and ‘iotop’.

  • ngrep — an alternative to tcpdump that allows the use of regexes to match plain-text data in packets.

  • tcpdump — useful when troubleshooting network connections between servers.

  • wireshark — a more UI-friendly tool than tcpdump, with dissectors for most protocols.
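
A quick illustration of a few of the tools above; the PID is a placeholder, and opensnoop here means the bcc/perf-tools script, so adjust for your install:

strace -f -y -e trace=open,openat -p 1234   # trace open calls of PID 1234, showing the path behind each fd
sudo opensnoop -p 1234                      # show open() calls, including failures, for PID 1234
dstat 5                                     # CPU, disk, network, and paging stats every 5 seconds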

RabbitMQ, memcache, and too many socket connections

What happens when you have hundreds of services connected to RabbitMQ and memcache, and those services have a bug that causes them to keep their previous socket connections open, and repeatedly reconnect to RabbitMQ and memcache?

They crash.

It occurred to me that one can prevent too many connections using iptables on the RabbitMQ and memcache machines. Here’s how:

http://www.cyberciti.biz/faq/iptables-connection-limits-howto/
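
For example, something along these lines uses the connlimit match to cap concurrent connections per source IP; the ports are the RabbitMQ and memcache defaults, and 100 is an arbitrary threshold for illustration:

# reject a source IP once it holds more than 100 connections to RabbitMQ (5672)
iptables -A INPUT -p tcp --syn --dport 5672 -m connlimit --connlimit-above 100 -j REJECT --reject-with tcp-reset
# same idea for memcache (11211)
iptables -A INPUT -p tcp --syn --dport 11211 -m connlimit --connlimit-above 100 -j REJECT --reject-with tcp-reset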

The corollary is that setting the per-IP connection limit too low can also cause problems.

I’d guess that servers which are more often public-facing, like NGINX and Apache, don’t have this crashing problem. Hopefully, they degrade gracefully, refusing additional connections while continuing to service the ones they already have open.

ip and ss: better than ifconfig and netstat

I’ve been using Linux for a while now, so certain commands like ‘ifconfig’ and ‘netstat’ are fairly ingrained. I know about “ip addr”, which is more modern than ifconfig, and I use it sometimes.

This week, I learned about ‘ss’, which is faster than ‘netstat’, and does more. My favorite invocation is “ss -tlp” to show programs listening on tcp sockets.
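
For anyone with similarly ingrained habits, here’s a rough mapping (ip and ss come from the iproute2 package, which is installed by default on most modern distributions):

ip addr     # roughly what 'ifconfig -a' showed
ip route    # roughly what 'route -n' showed
ss -tlp     # roughly 'netstat -tlnp': listening TCP sockets and the programs that own them
ss -s       # quick summary of socket counts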

OpenWest notes

This past weekend, I attended the excellent #OpenWest conference, where I presented “Scaling RabbitMQ”.

The volunteers that organized the conference deserve a huge amount of thanks. I can’t imagine how much work it was. I should also thank the conference sponsors.

A local group of hardware engineers designed an amazing conference badge, built from a circuit board. They deserve a big “high-five”. There was a soldering lab where I soldered surface-mount components for the first time in my life, holding the components in place with tweezers. I bought the $35 add-on kit, which included a color LCD screen and a Parallax Propeller chip. It took me 45 minutes to do the base kit, and two hours to do the add-on kit. I breathed a sigh of relief when I turned on the power, and it all worked.

The speakers did a great job, and I appreciate the hours they spent preparing. I wish I could have attended more of the sessions.

Among others, I attended sessions on C++11, Rust, Go, Erlang, MongoDB schema design, .NET Core, wireless security, SaltStack, and digital privacy.

I’m going to keep my eye on Rust, I want to learn and use Go, and I plan to use the new beacon feature of SaltStack. Sometime in the future, I’d like to use the new features of C++11.

The conference was an excellent place to have useful side-conversations with vendors, speakers, and past colleagues. It was a great experience.

Linux, time and the year 2038

Software tends to live longer than we expect, as do embedded devices running Linux. Those that want to accurately handle time through the year 2038 and beyond will need to be updated.

Fifteen years after Y2K, Linux kernel developers continue to refine support for time values that will get us past 2038. Jonathan Corbet, editor of LWN.net, explains the recent work in his typical lucid style: https://lwn.net/Articles/643234

It sounds like the ext3 and NFSv3 filesystems will need to go the way of the dodo, due to their lack of support for 64-bit time values, while XFS developers are working on adding support to get us past 2038. By that time, many of us will have moved on to newer filesystems.
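
The rollover moment itself is easy to see with GNU date (this works fine on a 64-bit system; it’s a signed 32-bit time_t that can’t represent the second after it):

date -u -d @2147483647    # Tue Jan 19 03:14:07 UTC 2038, the last second a signed 32-bit time_t can hold
date -u -d @2147483648    # one second later, where a 32-bit time_t wraps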

One comment linked to this useful bit of information on time programming on Linux systems: http://www.catb.org/esr/time-programming/. The summary is:

To stay out of trouble, convert dates to Unix UTC on input, do all your calculations in that, and convert back to localtime as rare as possible. This reduces your odds of introducing a misconversion and spurious timezone skew.

It’s also excellent advice for any back-end system that deals with data stored from devices that span a continent or the world, although it doesn’t necessarily eliminate daylight saving time bugs.
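
In shell terms, that advice amounts to something like this sketch; America/Denver is just an example zone:

ts=$(date -u +%s)                  # record the event as epoch seconds, which are always UTC
date -u -d "@$ts"                  # do logging and calculations in UTC
TZ=America/Denver date -d "@$ts"   # convert to local time only at the point of display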