I’ve had what I thought was a great WiFi router for the past 3 years. The vendor continues to provide firmware updates, which is admirable.
Having heard of the awesome improvements being made by folks in the LEDE fork of OpenWrt (in the area of eliminating bufferbloat), I thought it was time for an upgrade. So I purchased an Archer C7 version 2 router, and today I installed LEDE. Installation was a breeze. Configuring LEDE isn’t as easy as most consumer WiFi routers, but the payoff has been good.
My downstream 2.4 GHz WiFi cameras and networking gear seem to stay online better, and streaming live video works better as well. I’m not sure if my family notices much of a difference, but I do. I appreciate the folks who have brought me better networking.
Thanks to the work of Dave Täht, WiFi will be getting faster in future versions of Linux by reducing bufferbloat. Read more about it at LWN.net.
This matters, because Linux runs in nearly everything these days, from Android, to TVs, to smart home devices.
Here’s a useful presentation on Linux debugging tools — tools that don’t require source code, extra print statements, or logging.
strace has a new flag that I didn’t know about: -y, which prints the paths that are associated with file descriptors.
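For instance, a quick sketch (the traced command is just an illustration; the exact syscall names vary by libc version, since modern glibc routes open() through openat()):

```shell
# Trace file-related syscalls of `cat`; -y annotates each file
# descriptor with the path it refers to, so a read shows up as
# something like: read(3</etc/hostname>, ...)
strace -y -e trace=openat,read,close cat /etc/hostname
```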
opensnoop lets you see the details of open() calls across the entire system, or for an individual process, or for paths containing certain characters, or it can print the file paths that couldn’t be opened.
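Assuming the bcc-tools version of opensnoop (flag names differ slightly between the bcc and perf-tools ports), typical invocations look like:

```shell
sudo opensnoop            # every open() across the whole system
sudo opensnoop -p 1234    # only the process with PID 1234 (hypothetical PID)
sudo opensnoop -x         # only the open() calls that failed
```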
pstack shows the stack trace of a running process, which can be useful for getting an idea of what a program spends most of its time doing.
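A sketch, using sshd as a hypothetical target (pstack generally needs root or ownership of the process, and on distributions that don’t package it, gdb can stand in):

```shell
# Print the user-space stack of the oldest sshd process
pstack "$(pgrep -o -x sshd)"

# Roughly equivalent gdb invocation: attach, dump all thread
# backtraces, and detach
gdb -p "$(pgrep -o -x sshd)" -batch -ex 'thread apply all bt'
```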
dstat shows system resource stats. It is a replacement for vmstat, iostat and ifstat.
htop — a more beautiful ‘top’, and easier to use. I still mostly use ‘top’ because it is installed by default. Other great tools I use include ‘powertop’ and ‘iotop’.
ngrep — an alternative to tcpdump, but allows the use of regexes to match plain-text data in packets.
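A hypothetical invocation, matching a regex against packet payloads on port 80:

```shell
# -q keeps the output quiet (no '#' per packet); the regex matches
# HTTP GET or POST requests in the plain-text payload
sudo ngrep -q 'GET|POST' port 80
```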
tcpdump — useful when troubleshooting network connections between servers.
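For example, with hypothetical host and interface values, capturing RabbitMQ traffic between this box and another server:

```shell
# -n skips DNS lookups, -i selects the interface; the filter
# expression limits capture to one host and RabbitMQ's default port
sudo tcpdump -n -i eth0 'host 10.0.0.5 and port 5672'
```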
wireshark — a more UI-friendly tool than tcpdump, with dissectors for most protocols.
I came across this recently, and I think it’s worth sharing. It outlines gotchas of commonly used command-line tools and arguments, such as when ‘rm -rf’ doesn’t remove a directory and how to get around it, or when ‘wc -l’ fails to count the last line in a file.
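The ‘wc -l’ case is easy to demonstrate: it counts newline characters, so a final line without a trailing newline isn’t counted.

```shell
# `wc -l` counts newline characters, so the final "three" (which has
# no trailing newline) is not counted:
printf 'one\ntwo\nthree' | wc -l       # prints 2
# `grep -c ''` counts every line, including a trailing partial one:
printf 'one\ntwo\nthree' | grep -c ''  # prints 3
```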
What happens when you have hundreds of services connected to RabbitMQ and memcache, and those services have a bug that causes them to keep their previous socket connections open, and repeatedly reconnect to RabbitMQ and memcache?
It occurred to me that one can cap the number of connections per client IP using iptables on the RabbitMQ and memcache machines. Here’s how:
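A sketch using the iptables connlimit match (the ports are RabbitMQ’s and memcached’s defaults; the cap of 100 concurrent connections per source IP is an assumption to tune for your environment):

```shell
# Reject new TCP connections from any single source IP once it has
# more than 100 concurrent connections to RabbitMQ (5672)...
iptables -A INPUT -p tcp --syn --dport 5672 \
  -m connlimit --connlimit-above 100 -j REJECT --reject-with tcp-reset
# ...and likewise for memcached (11211)
iptables -A INPUT -p tcp --syn --dport 11211 \
  -m connlimit --connlimit-above 100 -j REJECT --reject-with tcp-reset
```

Rejecting with a TCP reset makes the misbehaving client fail fast, rather than leaving it waiting on a silently dropped SYN.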
The corollary is that setting the per-IP connection limit too low can also cause problems.
I’d guess that commonly public-facing servers like NGINX and Apache don’t have this crashing problem. Hopefully, they degrade gracefully, refusing additional connections while continuing to service the connections they already have open.
I’ve been using Linux for a while now, so typing certain commands is fairly ingrained, like ‘ifconfig’ and ‘netstat’. I know about “ip addr”, which is more modern than ifconfig, and I use it sometimes.
This week, I learned about ‘ss’, which is faster than ‘netstat’, and does more. My favorite invocation is “ss -tlp” to show programs listening on tcp sockets.
I changed my password on my Ubuntu system this week, and then found that I couldn’t log in, except on a virtual terminal.
My home directory is encrypted, and apparently it’s better to change a password using the graphical utilities rather than the command-line utilities. The following article was quite helpful in recovering:
This past weekend, I attended the excellent #OpenWest conference, and I presented Scaling RabbitMQ.
The volunteers that organized the conference deserve a huge amount of thanks. I can’t imagine how much work it was. I should also thank the conference sponsors.
A local group of hardware engineers designed an amazing conference badge, built from a circuit board. They deserve a big “high-five”. There was a soldering lab where I soldered surface mount components for the first time in my life – holding the components in place with tweezers. I bought the add-on kit for $35 that included a color LCD screen and Parallax Propeller chip. It took me 45 minutes to do the base kit, and two hours to do the add-on kit. I breathed a sigh of relief when I turned on the power, and it all worked.
The speakers did a great job, and I appreciate the hours they spent preparing. I wish I could have attended more of the sessions.
Among others, I attended sessions on C++11, Rust, Go, Erlang, MongoDB schema design, .NET Core, wireless security, SaltStack, and digital privacy.
I’m going to keep my eye on Rust, I want to learn and use Go, and I plan to use the new beacon feature of SaltStack. Sometime in the future, I’d like to use the new features of C++11.
The conference was an excellent place to have useful side-conversations with vendors, speakers, and past colleagues. It was a great experience.
Software tends to live longer than we expect, as do embedded devices running Linux. Those that want to accurately handle time through the year 2038 and beyond will need to be updated.
Fifteen years after Y2K, Linux kernel developers continue to refine support for time values that will get us past 2038. Jonathan Corbet, editor of LWN.net, explains the recent work in his typical lucid style: https://lwn.net/Articles/643234
It sounds like the ext3 filesystem and the NFSv3 protocol will need to go the way of the dodo, due to their lack of support for 64-bit time values, while XFS developers are working on adding support to get us past 2038. By then, many of us will have moved on to newer filesystems.
One comment linked to this useful bit of information on time programming on Linux systems: http://www.catb.org/esr/time-programming/. The summary:
To stay out of trouble, convert dates to Unix UTC on input, do all your calculations in that, and convert back to local time as rarely as possible. This reduces your odds of introducing a misconversion and spurious timezone skew.
It’s also excellent advice for any back-end system that deals with data stored from devices spanning a continent or the world, although it doesn’t necessarily eliminate daylight saving time bugs.
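The rule of thumb can be illustrated with GNU date (the timestamp chosen here happens to be the 32-bit rollover moment):

```shell
# Convert to Unix UTC seconds on input...
ts=$(date -u -d '2038-01-19 03:14:07 UTC' +%s)
echo "$ts"             # 2147483647, the largest signed 32-bit value
# ...do all arithmetic in plain seconds...
later=$((ts + 3600))
# ...and convert back to local time only when displaying.
date -d "@$later"
```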
I read this today, and thought it’s worth sharing:
“The impact of containerization in redefining the enterprise OS is still vastly underestimated by most; it is a departure from the traditional model of a single-instance, monolithic, UNIX user space in favor of a multi-instance, multi-version environment using containers and aggregate packaging. We are talking about nothing less than changing some of the core paradigms on which the software industry has been working for the last 20 – if not 40 – years.”
And yet it is tempered with reality:
“…we are really only at the beginning of a long journey…”