Data security can only be achieved by those empowered

Users of online services don’t have the ability (i.e. aren’t empowered) to secure the data stored by those services. Only the engineers and the companies that build the services can do that. So I agree with Cindy Cohn, who says:

…we need to ensure that companies to whom we entrust our data have clear, enforceable obligations to keep it safe from bad guys. This includes those who handle it directly and those who build the tools we use to store or otherwise handle it ourselves.

In my view, business leadership and software engineers have an ethical responsibility to secure their systems and services so that customers’ data and sensitive information doesn’t get misused or abused.

I’d like it if customers had a reliable, consistent way to evaluate the quality and diligence a company applies to keeping their data safe — something like CharityWatch or Consumer Reports.

When your USB devices can be used against you

Interesting: “about half of your devices, including chargers, storage, cameras, CD-ROM drives, SD card adapters, keyboards, mice, phones, and so on, are all likely to be proven easily reprogrammable and trivially used to… attack software. Unfortunately, the only current solution on the horizon is to not share any USB devices between computers.” — Dragos Ruiu

Smartphones and Privacy

A cautionary note from Ars Technica:

Given how much of what is on smartphones is now automatically backed up to the cloud, anyone should take pause before disrobing before their smartphone camera—regardless of the phone operating system or how that image will be delivered to its intended audience. The security of all of these services is only as secure as the obscurity of the mother’s maiden name of the person you sent that picture to—or of the next zero-day flaw.

I don’t think smartphones belong in bedrooms or bathrooms, but since most people want the convenience of having them there, it may be a good idea to keep the phone in the drawer while changing, or covered while showering, etc.

I think it’s a good idea to assume that anything one’s smartphone can hear or see, and any data it contains, could be made public someday — perhaps sooner than we think. The same is true for any data we store “in the cloud”.

Fidelity App: Not responsible for accuracy of financial information

Do you ever read the fine print when you install an application and it presents you with an end-user license agreement?

I do.

Recently, I installed the Fidelity iPhone app, and here are a few surprising parts of their service agreement:

By using the Services, I consent to the transmission by electronic means…. I acknowledge that Fidelity cannot assure the security or privacy of electronic transmission of such information. Any transmission may also be subject to other agreements that you have with your mobile service or access device provider. Accordingly, I must assess whether my use of the Services is adequately secure to meet my particular needs.

While all information accessible through the Services has been obtained from sources believed to be reliable, I understand that Fidelity will not be responsible whatsoever for the accuracy, timeliness, completeness, or use of any information received by it or received by me from Fidelity or any Provider through the Services and that Fidelity does not make any warranty concerning such information.

I don’t think most of us are capable of assessing whether our use of a third-party service is adequately secure — even security professionals find such assessments difficult.

Linux tty auditing

Since RHEL 5.4, and in recent Fedora releases, it’s possible to audit what users type at their tty (command prompt), thanks to the work of Steve Grubb, a Red Hat employee.

Edit /etc/pam.d/system-auth and append one of the following lines (the first audits only root; the second audits all users), but not both:

session required pam_tty_audit.so disable=* enable=root
session required pam_tty_audit.so enable=*

Wait for users to log in and type into a terminal. Later, to see audited tty input, run:

aureport --tty

When a user logs in, the pam module tells the kernel to enable tty auditing for the process and its children. All tty input is logged, but it isn’t easy to read: it includes backspaces, control characters, and so on. I’m not sure when or how often the kernel flushes accumulated tty input to the audit log. The records are identified with a type of TTY in /var/log/audit/audit.log.
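To pull just the raw TTY records out of the log, ausearch can filter on the record type (the time filter is optional):

ausearch -m TTY -ts today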

In addition to tty auditing, Red Hat patched their bash shell so that it neatly audits every command line it executes, with a record type of USER_TTY. It’s prettier to read than raw tty auditing, and it’s easy for a user to bypass by using a shell that doesn’t send its commands to the Linux audit system, like zsh, or a custom-built unpatched bash. Maybe that’s why “aureport --tty” doesn’t show USER_TTY records.

---

The Linux auditing system is powerful. It’s possible to write rules that watch for modifications to certain files, or that log the use of certain system calls. See the “audit.rules” manpage for more information.
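As a minimal sketch (the watched file and the key names are placeholders I chose), the first rule below watches /etc/passwd for writes and attribute changes, and the second logs every settimeofday system call:

auditctl -w /etc/passwd -p wa -k identity
auditctl -a always,exit -F arch=b64 -S settimeofday -k time-change

Matching events can then be found with “ausearch -k identity”. To make rules persist across reboots, add the same lines (without the auditctl prefix) to /etc/audit/audit.rules.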

Fedora 14, SSH ports and SELinux

SELinux in Fedora 14 is configured to constrain the ports on which SSH can listen (see the bug report). The solution:

setsebool -P sshd_forward_ports 1

This allows SSH to listen on ports besides 22, and to forward ports. Reading the bug report is interesting. In my opinion, OpenSSH has an outstanding security track record, and we probably don’t need additional SELinux policy to constrain it. It’s probably wise to practice defense in depth (to have more than one line of defense), but this policy creates a large road bump for SSH power users. From what I read, it sounds like most people still disable SELinux.
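If you’d rather not flip the boolean, a narrower alternative (assuming the policycoreutils Python tools that provide semanage are installed, and using 2222 as a stand-in for your chosen port) is to label just that one port for sshd:

semanage port -a -t ssh_port_t -p tcp 2222

After that, sshd can bind to 2222 without loosening the rest of the policy.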

Trust, but verify

In [a comment](http://lwn.net/Articles/375051/) over at LWN.net, a reader pointed out that it’s a good idea to verify not just SSL certificates, but also doctors, mechanics, etc. He says, “it’s simply a requirement of a healthy society that its citizens have a healthy skepticism and be willing to put the effort into understanding what is going on around them. It’s not that you don’t trust them. It’s that you do what you can, in your limited way, to make sure that you can trust them.”

Users, Security and Scams

I read Bruce Schneier’s [Crypto-Gram](http://www.schneier.com/crypto-gram.html) monthly, and it’s where I found most of these links, with the exception of the ones on social engineering. I found the first paper on scam victims especially thought-provoking (although it’s long). The video clip demonstrating social proof was amusing.

*[Understanding scam victims: seven principles for systems security](http://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-754.pdf)*

*Summary*: Scammers manipulate people with distraction, deception, herd mentality, greed, time pressure, and impersonation of authority. If something sounds too good to be true, it probably is.

---

*[Social Engineering](http://www.infosectoday.com/Norwich/GI532/Social_Engineering.htm)* [\[2\]](http://www.chips.navy.mil/archives/09_Jan/web_pages/social_engineering.html) [\[3\]](http://packetstormsecurity.nl/docs/social-engineering/aaatalk.html)

*Summary*: Social engineers exploit people’s tendency to trust and to be helpful. They do this with ingratiation, impersonation, diffusion of responsibility, urgency, appeal to conformity (aka “social proof” or herd mentality), intimidation, deception, and authoritative orders.

There’s [an entertaining Candid Camera clip demonstrating social proof](http://www.social-engineer.org/framework/Influence_Tactics:_Consensus_or_Social_Proof).

---

*[The Rational Rejection of Security Advice by Users](http://research.microsoft.com/en-us/um/people/cormac/papers/2009/SoLongAndNoThanks.pdf)*

*Summary*: Security practitioners often dole out advice that users perceive as too time-consuming, so users ignore or reject it. However, “Advice that has compelling cost-benefit tradeoff has real chance of user adoption…. the costs and benefits have to be those the user cares about”. *Time* is one thing users care about.