Coming changes in Internet Protocols

Here’s what I think is a fascinating read. I’m excited about QUIC, and less excited that well-intentioned (sometimes draconian) protocol enforcement encourages software engineers to move nearly all protocols to run on top of HTTP or HTTPS — as a way to bypass the enforcement.

https://blog.apnic.net/2017/12/12/internet-protocols-changing/

When a protocol can’t evolve because deployments ‘freeze’ its extensibility points, we say it has ossified. TCP itself is a severe example of ossification; so many middleboxes do so many things to TCP — whether it’s blocking packets with TCP options that aren’t recognized, or ‘optimizing’ congestion control.

It’s necessary to prevent ossification, to ensure that protocols can evolve to meet the needs of the Internet in the future; otherwise, it would be a ‘tragedy of the commons’ where the actions of some individual networks — although well-intended — would affect the health of the Internet overall.

Yubikey 4 GPG key generation (Ubuntu)

Install supporting software

sudo apt-add-repository ppa:yubico/stable
sudo apt-get update
sudo apt-get install scdaemon -y
sudo apt-get install python-setuptools python-crypto python-pyscard python-pyside pyside-tools libykpers-1-1 pcscd -y
sudo apt-get install yubioath-desktop yubikey-personalization yubikey-personalization-gui yubikey-manager  -y

Insert the Yubikey and generate keys

gpg --card-edit
gpg/card> admin
gpg/card> generate
gpg/card> quit

Export and back up the public key, because the Yubikey stores only the private key material:

gpg --armor --export $KEYID > mykey.pub
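
To sanity-check that the keys ended up on the card, you can run:

gpg --card-status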

Require touching the Yubikey button to authenticate, sign, or encrypt:

ykman openpgp touch aut on 
ykman openpgp touch sig on 
ykman openpgp touch enc on 
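
These commands prompt for the admin PIN. Depending on your ykman version, you may also be able to confirm the touch policies afterward with:

ykman openpgp info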

Change the PIN

gpg --card-edit
gpg/card> admin
gpg/card> passwd
gpg/card> quit

Change Yubikey information

gpg --card-edit
gpg/card> name
gpg/card> lang
gpg/card> quit


Blind adherence to process

Blind adherence to process also drives out creative people and rewards nonproductive bean counters.

From The Responsive Enterprise: Embracing the Hacker Way

To paraphrase something else the article said: organizational memory needs to be periodically “reset” to keep up with a changing world, or else it becomes an impediment to growth.

Another comment about process:

Being agile is about communication. The process needs to change with the situation. — Erik Meijer

Expect-CT Extension for HTTP

I recently learned of Chrome’s intent to remove public key pinning and replace it with the new draft Expect-CT HTTP header. Ultimately, it should give us a safer web.

Chris Palmer explains:

To defend against certificate misissuance, web developers should use the Expect-CT header, including its reporting function.

Expect-CT is safer than HPKP due to the flexibility it gives site operators to recover from any configuration errors, and due to the built-in support offered by a number of CAs. Site operators can generally deploy Expect-CT on a domain without needing to take any additional steps when obtaining certificates for the domain. Even if the CT log ecosystem substantially changes during the validity period of the certificate, site operators can provide updated SCTs in the form of OCSP responses (if their CA supports it) or via a TLS extension (if they wish for greater control). The combination of these mitigations substantially reduces the risk of DoS (either accidental or hostile) via Expect-CT deployment. By combining Expect-CT with active monitoring for relevant domains, which a growing number of CAs and third-parties now provide, site operators can proactively detect misissuance in a way that HPKP does not achieve, while also reducing the risk of misconfiguration and avoiding the risk of hostile pinning.
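
For reference, here’s a minimal sketch of the response header itself, per the draft (the report endpoint is a placeholder; omit enforce to run in report-only mode while you evaluate reports):

Expect-CT: max-age=86400, enforce, report-uri="https://example.com/ct-report"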

Iterating the python3 exception chain

I use the python requests library to make HTTP requests. Handling exceptions and giving the non-technical end user a friendly message can be a challenge when the original exception is wrapped up in an exception chain. For example:

import requests

url = "http://one.two.threeFourFiveSixSevenEight"
try:
    resp = requests.get(url)
except requests.RequestException as e:
    print("Couldn't contact", url, ":", e)

Prints:

Couldn't contact http://one.two.threeFourFiveSixSevenEight : HTTPConnectionPool(host='one.two.threeFourFiveSixSevenEight', port=80): Max retries exceeded with url: / (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f527329c978>: Failed to establish a new connection: [Errno -2] Name or service not known',))

And that’s a mouthful.

I want to tell the end user that DNS isn’t working, rather than showing the ugly stringified error message. How do I do that, in python3? Are python3 exceptions iterable? No. So I searched the internet, and found inspiration from the raven project. I adapted their code in two different ways to give me the result I wanted.

Update Aug 10: See the end of this blog post for a more elegant solution.

import socket
import requests
import sys

def chained_exceptions(exc_info=None):
    """
    Adapted from: https://github.com/getsentry/raven-python/pull/811/files?diff=unified

    Return a generator iterator over an exception's chain.

    The exceptions are yielded from outermost to innermost (i.e. last to
    first when viewing a stack trace).
    """
    if not exc_info or exc_info is True:
        exc_info = sys.exc_info()

    # sys.exc_info() returns (None, None, None) when no exception is being
    # handled, so check the exception type rather than the tuple itself.
    if not exc_info or exc_info[0] is None:
        raise ValueError("No exception found")

    yield exc_info
    exc_type, exc, exc_traceback = exc_info

    while True:
        if exc.__suppress_context__:
            # Then __cause__ should be used instead.
            exc = exc.__cause__
        else:
            exc = exc.__context__
        if exc is None:
            break
        yield type(exc), exc, exc.__traceback__

def chained_exception_types(e=None):
    """
    Return a generator iterator of exception types in the exception chain

    The exceptions are yielded from outermost to innermost (i.e. last to
    first when viewing a stack trace).

    Adapted from: https://github.com/getsentry/raven-python/pull/811/files?diff=unified
    """
    if not e or e is True:
        e = sys.exc_info()[1]

    if not e:
        raise ValueError("No exception found")

    yield type(e)

    while True:
        if e.__suppress_context__:
            # Then __cause__ should be used instead.
            e = e.__cause__
        else:
            e = e.__context__
        if e is None:
            break
        yield type(e)

saved_exception = None
try:
    resp = requests.get("http://one.two.threeFourFiveSixSevenEight")
except Exception as e:
    saved_exception = e
    if socket.gaierror in chained_exception_types(e):
        print("Found socket.gaierror in exception block via e")
    if socket.gaierror in chained_exception_types():
        print("Found socket.gaierror in exception block via traceback")
    if socket.gaierror in chained_exception_types(True):
        print("Found socket.gaierror in exception block via traceback")

if saved_exception:
    print("\nIterating exception chain for a saved exception...")
    for t, ex, tb in chained_exceptions((type(saved_exception), saved_exception, saved_exception.__traceback__)):
        print("\ttype:", t, "Exception:", ex)
        if t == socket.gaierror:
            print("\t*** Found socket.gaierror:", ex)
    if socket.gaierror in chained_exception_types(saved_exception):
        print("\t*** Found socket.gaierror via chained_exception_types")

Here’s the output:

Found socket.gaierror in exception block via e
Found socket.gaierror in exception block via traceback
Found socket.gaierror in exception block via traceback

Iterating exception chain for a saved exception...
    type: <class 'requests.exceptions.ConnectionError'> Exception: HTTPConnectionPool(host='one.two.threeFourFiveSixSevenEight', port=80): Max retries exceeded with url: / (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7fae7d0bfa20>: Failed to establish a new connection: [Errno -2] Name or service not known',))
    type: <class 'requests.packages.urllib3.exceptions.MaxRetryError'> Exception: HTTPConnectionPool(host='one.two.threeFourFiveSixSevenEight', port=80): Max retries exceeded with url: / (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7fae7d0bfa20>: Failed to establish a new connection: [Errno -2] Name or service not known',))
    type: <class 'requests.packages.urllib3.exceptions.NewConnectionError'> Exception: <requests.packages.urllib3.connection.HTTPConnection object at 0x7fae7d0bfa20>: Failed to establish a new connection: [Errno -2] Name or service not known
    type: <class 'socket.gaierror'> Exception: [Errno -2] Name or service not known
    *** Found socket.gaierror: [Errno -2] Name or service not known
    *** Found socket.gaierror via chained_exception_types

Now I can write the following code:

url = "http://one.two.threeFourFiveSixSevenEight"
try:
    resp = requests.get(url)
except requests.RequestException as e:
    if socket.gaierror in chained_exception_types(e):
        print("Couldn't get IP address for hostname in URL", url, " -- connect device to Internet")
    else:
        raise

Very nice — just what I wanted.

Note that Python 2 does not support exception chaining, so this only works in Python 3.
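
For background, here is a minimal sketch (the function names are mine) of the two chaining mechanisms that the __suppress_context__ check above distinguishes:

def explicit_chain():
    try:
        int("nope")
    except ValueError as e:
        # "raise ... from e" sets __cause__ and flips __suppress_context__
        # to True (__context__ is still filled in automatically).
        raise RuntimeError("parse failed") from e

def implicit_chain():
    try:
        int("nope")
    except ValueError:
        # A plain raise inside the handler sets only __context__;
        # __cause__ stays None and __suppress_context__ stays False.
        raise RuntimeError("parse failed")

for fn in (explicit_chain, implicit_chain):
    try:
        fn()
    except RuntimeError as e:
        print(fn.__name__, "cause:", repr(e.__cause__),
              "context:", repr(e.__context__),
              "suppress_context:", e.__suppress_context__)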

Aug 10: A colleague of mine, Lance Anderson, came up with a far more elegant solution:

import requests
import socket

class IterableException(object):

    def __init__(self, ex):
        self.ex = ex

    def __iter__(self):
        self.next = self.ex
        return self

    def __next__(self):
        if self.next.__suppress_context__:
            self.next = self.next.__cause__
        else:
            self.next = self.next.__context__
        if self.next:
            return self.next
        else:
            raise StopIteration

url = "http://one.two.threeFourFiveSixSevenEight"

try:
    resp = requests.get(url)
except requests.RequestException as e:
    ie = IterableException(e)
    if socket.gaierror in [type(x) for x in ie]:
        print("Couldn't get IP address for hostname in URL", url, " -- connect device to Internet.")

Standards body recommends removing periodic password change requirements

CSO Online reports that the National Institute of Standards and Technology’s (NIST’s) draft guidelines change some long-established best practices — practices that have been ineffective for many years.

Changes include:

  • Remove periodic password change requirements
  • Drop the algorithmic complexity song and dance
  • Require screening of new passwords against lists of commonly used or compromised passwords

Elaborating further:

“The reality is that passwords are weak no matter how often they are changed or how difficult they are, and people usually have only a variant of one or two passwords. Man in the middle or man in the browser hacks can take your password even if it is extremely lengthy and complicated – IT administrators can see your passwords, your bank can see your passwords,” [Eric Avigdor] said.

He said the guidelines recognize that the way to solve the password problem is to accept that passwords are weak and to add complementary factors of authentication, whether mobile or hardware OTP tokens, PKI-based USB tokens, or smart cards.
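
The third recommendation above is straightforward to implement. Here’s a minimal sketch in Python (the file name is a placeholder for whatever compromised-password corpus you use):

def password_is_compromised(candidate, denylist_path="compromised-passwords.txt"):
    # Reject any proposed password that appears in a local list of
    # commonly used or previously breached passwords, one per line.
    with open(denylist_path, encoding="utf-8") as f:
        return any(candidate == line.rstrip("\n") for line in f)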

How to solve the annoying Windows 10 pop-up “These files might be harmful”

This morning, I was helping my children copy pictures for a school project to a USB flash drive. The pictures are located on my Linux server, and shared using samba. Each time we copied one to the USB flash drive, Windows 10 helpfully interrogated us with the question, “These files might be harmful to your computer”.

A google search found a solution on superuser.com. Thank you, Google, StackExchange, Mr. Atwood, and Wouter.
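
In case that link ever disappears: the warning typically appears because Windows treats the file server (especially when reached by IP address) as an Internet-zone location, and the usual fix is to add its hostname or IP to the Local intranet zone via Internet Options > Security > Local intranet > Sites > Advanced.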

Farewell to Comcast

I’ve been a happy Comcast ISP customer for more than ten years. I was impressed that they continued to invest in their infrastructure and that they supported IPv6. Even today, many ISPs don’t support IPv6.

Today, I switched to a wireless ISP that offers better pricing and faster upload speeds. They don’t support IPv6, and although I wish they did, I don’t strictly need it.

My family, for the most part, didn’t notice, although my teenage son will likely protest when he finds out that he can’t stream ESPN to his phone.

Farewell, Comcast. You served me well.

PGP trust model doesn’t work

It’s been several years since I used GPG to PGP-sign and encrypt messages, and to verify the authenticity of PGP-signed email.

So it was interesting to read why the PGP trust model doesn’t improve security:

http://arstechnica.com/security/2016/12/op-ed-im-giving-up-on-pgp/

I believe that confidentiality isn’t a binary thing: if one desires it, one must continually stay up to date on which approaches work, which are economically feasible, and which are no longer effective.

The article recommends Signal or WhatsApp for instant messaging, Magic Wormhole or OnionShare for file sharing, and so on. It also recommends the use of a Yubikey 4 for authentication.

Article: Why it’s likely impossible to restore online privacy

An article from the Deseret News is worth sharing. Here are a few snippets:

Most people don’t understand how much information is being collected. Dryer said personal information is gathered into massive databases regularly “and to a far more pervasive extent than most people realize, either voluntarily or involuntarily.”

… Asay notes, “…Everyone says they’re concerned about privacy, but if you give them 20 cents, they tell you whatever you want. … All the information allows us to do some amazing things.”

Europe also codified a “right to be forgotten.” America has not.