Show which git branches have been merged and can be deleted

At work, we generate quite a few feature branches, which get tested and then merged into “develop”. The feature branches don’t get cleaned up very often. Here’s a series of shell commands I cobbled together to show which branches have been merged into develop, along with the most recent person to commit to each branch.

git checkout develop
git pull -r
(for branch in $(git branch -r --merged | grep -vP "release|develop|master") ; do git log -1 --pretty=format:'%an' $branch | cat ; echo " $branch" ; done) | sort | sed -e 's#origin/##'

The output looks something like this:

Jane Doe feature/something
Jane Doe feature/another-thing
Jane Doe feature/yet-another-something
Zane Ears feature/howdy

And they can be deleted as follows:

git push origin --delete feature/something
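
If there are many stale branches, the same filter can drive the deletion. This is an untested sketch along the lines of the listing command above; it deletes remote branches, so review the list it would act on before running it:

for branch in $(git branch -r --merged | grep -vP "release|develop|master" | sed -e 's#origin/##') ; do
  git push origin --delete "$branch"
done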

Notes about OKRs, goals and pitfalls

At work, I’ve been asked to know our team’s OKRs and to set some of my own. I’m new to this, so I decided to google for information about them. OKR stands for Objectives and Key Results, and the idea is to:

  • make aspirational, easy-to-remember goals (objectives) that stretch the company, the team, and optionally, the individual, then write them down.
    • I.e. we’re trying to answer the question, “what strategic (big) things should we do next?”
  • determine key results — notice the plural — a set of actions and measurements that will indicate how close we came to meeting the big goal
    • indicated in numeric form. This is said to be the “secret sauce” that makes OKRs better than other forms of strategic goal setting. We aren’t aiming for a perfect score. In fact, a perfect score is indicative of problems.
  • share the goals and key results widely within a company and team because it helps get people aligned (unified) and makes them accountable.

OKRs are a tool meant to help us, and as with any process, we aren’t meant to become a slave of the tool. Adapt it to make it work, or find a better tool when it doesn’t work.

Setting objectives and defining key results takes time and thought. Otherwise, it may not yield value.

OKRs remind me of S.M.A.R.T. goal setting. So why do we need OKRs? Again, I googled for an answer, and it’s approximately this: With SMART goal setting, organizations and teams tend to forget to…

  • stretch — make aspirational, strategic goals
  • act and pursue their goal — accountability is important
  • align teams and individuals with the aspirational goals

Among the many helpful things I read, I found this from perdoo.com:

> Why should I split my goals into Objectives and Key Results?
>
> …it helps to increase company-wide transparency as everyone should be able to understand the Objective. Key Results are often more technical and don’t appeal to, or aren’t understood by, everyone.
>
> Objectives also represent key focus points for an organization or team. They should, therefore, be inspiring and easy to remember.

The same article linked to a Harvard Business School article titled “Goals Gone Wild”, which warns of the dangers of goal setting. OKRs are supposed to have safeguards against these pitfalls. Standard pitfalls of goals include:

  • focusing too narrowly or specifically — losing sight of other valuable things such as emergent opportunities and ethical behavior
  • not enough time given to achieve the goal, or a reporting period that is too long
    • yearly measuring is too infrequent, which is why the key results in OKRs are measured quarterly or more often.
  • overly challenging goals may encourage
    • lying about performance
    • cheating to attain the goal
    • taking unacceptable risks
  • creating a culture of competition rather than cooperation
  • the goals themselves killing motivation
    • I.e. a goal (a key result) for a CEO doesn’t necessarily make sense for an engineer

Ten years ago, my wife and I bought a Hyundai Sonata. Upon completing the purchase, the salesman asked us to give him a perfect score on Hyundai’s evaluation of the sales experience. He said anything besides a perfect score was unacceptable. My wife and I raised our eyebrows, knowing that he was gaming the system. I went along with it, knowing that Hyundai wasn’t getting an accurate measurement. I regret my decision, and I hope that Hyundai realized that perfect scores were indicative of problems in their measuring.

References:

  • https://medium.com/startup-tools/okrs-5afdc298bc28
  • https://www.wrike.com/blog/okrs-quarterly-planning/
  • https://www.betterworks.com/articles/the-value-of-shifting-from-s-m-a-r-t-goals-to-okrs/
  • https://www.linkedin.com/pulse/goal-setting-grow-smart-okr-diana-horn
  • https://www.atiim.com/blog/2-reasons-why-okr-goal-setting-is-better-than-any-other-approach/
  • https://www.perdoo.com/blog/goals-vs-okrs/

Treating work like a race

Chad Fowler, in his book, My Job Went to India, made the following remarks about working effectively:

> If you treat your projects like a race, you’ll get to the end a lot faster than if you treat them like a prison cell.
>
> A sense of urgency, even if manufactured, is enough to easily double or triple your productivity.

I’d add that it needs to be an enjoyable race, and that urgency, sustained for too long, can wear a person out. Races are more enjoyable when run with a group of friends.


Bypassing the I.T. security fortress

In the back of my mind for the past few years, I’ve been thinking about how I.T. security becomes less meaningful as time goes on. The use of digital cameras isn’t usually allowed, yet a company isn’t (usually) going to boot out an employee for having a cell phone with a digital camera — or even for using it to take a snapshot of a diagram that will be placed on a corporate wiki. The use of USB thumb drives for transferring and storing corporate data is perceived as a risk, but often it’s a practical way of getting one’s job done. Remember network firewalls? They’re still in place, but they’re increasingly meaningless. They certainly don’t keep out viruses and trojan horses. And with the increasing prevalence of wireless networking, there’s even less incentive for people to play by the I.T. security rules. Dan Kaminsky [expresses these thoughts better than I have](http://www.doxpara.com/?p=1245):

> … every restriction, every alteration [I.T. makes] in people’s day to day business, carries with it a risk that users will abandon the corporate network entirely, going “off-grid” in search of a more open and more useful operating environment. You might scoff, and think people would get fired for this stuff, but you know what people really get fired for? Missing their numbers.

> Its never been easier to get away with going off-grid. Widespread availability of WiMax and 3G networks mean there’s an alternate, unmonitored high speed network available at every desk.

Kaminsky [goes on](http://www.doxpara.com/?p=1245) to discuss some of the ramifications of these ongoing changes, including “the Cloud” (e.g. Google docs) and the security of corporate data.

jvisualvm: A free Java memory and CPU profiler

I needed to profile a Java application, and since we had a JProfiler floating license, I used it. JProfiler works well, although it’s pricey. I was googling for other Java profiling tools, and [stackoverflow.com](http://stackoverflow.com/search?q=visualvm) made mention of [jvisualvm](https://visualvm.dev.java.net/), which comes bundled with JDK 6 update 7. I noticed that on my Fedora 10 box, the java-1.6.0-openjdk package includes jvisualvm. None of my coworkers had heard of it.

JProfiler introduces a significant performance penalty into the code it profiles, whereas other tools including jvisualvm and YourKit have a much lower impact. I’m going to give jvisualvm a try, once I get the target environment set up properly with the new JDK.

UPDATE: jvisualvm won’t profile remote applications like JProfiler can. jvisualvm is not quite as easy to use, and I haven’t figured out how to get stack traces on the CPU and memory hot spots. Overall, I like the tool.

UPDATE 2: jvisualvm can be configured to give a stack trace of memory hot spots. I’ve learned that performance between the Java 1.5 and 1.6 JVMs can be very different. I’ve also learned that I can run ‘kill -3 <pid>’ to print a stack trace of my running Java processes. That has helped me narrow down bottlenecks in an application when the profiler wasn’t granular enough.
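
For reference, a minimal sketch of that routine for a local process (jps, jvisualvm, and the thread-dump-on-SIGQUIT behavior all ship with the JDK; <pid> stands in for a real process id):

jps -l            # list the process ids of local JVMs
kill -3 <pid>     # SIGQUIT: the JVM prints a thread dump to its own stdout/stderr
jvisualvm         # launch the GUI; local JVMs appear automatically for monitoring and profiling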

REST versus RPC

Have you considered the merits and applicability of RESTful web apps? Here are a few notes I’ve made.

There was quite a [discussion about RPC, REST, and message queuing](http://steve.vinoski.net/blog/2008/07/13/protocol-buffers-leaky-rpc) — they are not the same thing. Each one is needed in a different scenario. All are used in building distributed systems.

Wikipedia’s [explanation of REST](http://en.wikipedia.org/wiki/Representational_State_Transfer) is quite informative, especially their [examples](http://en.wikipedia.org/wiki/Representational_State_Transfer#Example) of RPC versus REST.

The poster “soabloke” says RPC “Promotes tightly coupled systems which are difficult to scale and maintain. Other abstractions have been more successful in building distributed systems. One such abstraction is message queueing where systems communicate with each other by passing messages through a distributed queue. REST is another completely different abstraction based around the concept of a ‘Resource’. Message queuing can be used to simulate RPC-type calls (request/reply) and REST might commonly use a request/reply protocol (HTTP) but they are fundamentally different from RPC as most people conceive it.”

The [REST FAQ](http://rest.blueoxen.net/cgi-bin/wiki.pl?RestFaq) says, “Most applications that self-identify as using ‘RPC’ do not conform to the REST. In particular, most use a single URL to represent the end-point (dispatch point) instead of using a multitude of URLs representing every interesting data object. Then they hide their data objects behind method calls and parameters, making them unavailable to applications built of the Web. REST-based services give addresses to every useful data object and use the resources themselves as the targets for method calls (typically using HTTP methods)… REST is incompatible with ‘end-point’ RPC. Either you address data objects (REST) or you don’t.”

RPC: Remote Procedure Call assumes that people agree on what kinds of procedures they would like to do. RPC is about algorithms, code, etc. that operate on data, rather than about the data itself. Usually fast. Usually binary encoded. Okay for software designed and consumed by a single vendor.

REST: All data is addressed using URLs, and is encoded using a standard MIME type. Data that is made up of other data would simply have URLs pointing to the other data. REST assumes that people won’t agree on what they want to do with data, so it lets people get the data and act on it independently, without agreeing on procedures.
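
To make the contrast concrete, here’s a hypothetical pair of services; the URLs, method name, and payloads are made up for illustration and don’t come from any real API:

# RPC style: one end-point; the data object is hidden behind a method name and parameters
curl -X POST http://example.com/api -d '{"method": "getUser", "params": {"id": 42}}'

# REST style: the user is a resource with its own URL, and standard HTTP methods act on it
curl http://example.com/users/42
curl -X PUT http://example.com/users/42 -d '{"name": "Jane Doe"}'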

Test-driven development in Perl

There’s an impressively in-depth presentation from [OSCON 2008](http://en.oreilly.com/oscon2008/public/schedule/proceedings) about [Practical Test Driven Development in Perl](http://assets.en.oreilly.com/1/event/12/Practical%20Test-driven%20Development%20Presentation.pdf). It covers Test::More, Test::Class, Test::Differences, Test::Deep and Test::MockObject.

I also found the following to be interesting: [Even Faster Web Sites](http://assets.en.oreilly.com/1/event/12/Even%20Faster%20Web%20Sites%20Presentation%202.ppt) and [Pro PostgreSQL](http://assets.en.oreilly.com/1/event/12/Pro%20PostgreSQL%20Presentation.odp). Reading these helps me to know a little bit about what I don’t know.

Linux performance tuning

When attempting to find and fix performance bottlenecks on a Linux system, it’s helpful to know where to start. Here are a few resources I’ve found:

IBM’s [Linux Performance and Tuning Guidelines](http://www.redbooks.ibm.com/abstracts/redp4285.html), published July 2007

> This IBM Redpaper describes the methods you can use to tune Linux,
tools that you can use to monitor and analyze server performance, and
key tuning parameters for specific server applications. The purpose of
this redpaper is to understand, analyze, and tune the Linux operating
system to yield superior performance for any type of application you
plan to run on these systems. ( [Read more…](http://www.redbooks.ibm.com/abstracts/redp4285.html) )

This website has useful tips:
[http://www.performancewiki.com/](http://www.performancewiki.com/)

Google has some tools that people recommend:
[http://code.google.com/p/google-perftools/wiki/GooglePerformanceTools](http://code.google.com/p/google-perftools/wiki/GooglePerformanceTools)

This book seems to be recommended:
[Optimizing Linux Performance](http://www.amazon.com/Optimizing-Linux-Performance-Hands-Professional/dp/0131486829)

In my experience, strace and ltrace, along with the “-c” and “-T” options, are extremely useful — even for Perl scripts. The “-T” option shows the timing of each call and can help isolate which calls are the slowest.

* `strace -o program.trace -T -p <pid>`
* `ltrace -o program.trace -T -p <pid>`

The “-c” option gives a summary of the calls that used the most time:

* `strace -c -p <pid>`
* `ltrace -c -p <pid>`
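
Attaching to a running process isn’t required; the same summaries work when the program is launched under the tracer. A sketch, with a hypothetical script name:

strace -c -f perl myscript.pl                 # -f follows forked children; prints a system-call summary on exit
ltrace -c perl myscript.pl                    # the same idea for library calls
strace -T -o program.trace perl myscript.pl   # per-call timings written to program.trace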

I haven’t found a good way to isolate memory leaks in Perl programs — not that I’m an expert. What has worked for me is to [divide and conquer](http://en.wikipedia.org/wiki/Divide_and_conquer_algorithm) in order to isolate the problem.

Git underwhelms

I work on source code from two separate SVN repositories. One of them is geographically remote. Working with the remote server is slow for ‘log’, ‘diff -r’, ‘blame’, etc. Due to my interest in distributed version control, and my desire for faster repository access, I decided to try git and git-svn. Doing ‘log’, ‘diff’, etc. with a local git repo is much faster, but on the whole, working in a git repo created with git-svn has been difficult and unrewarding. Perhaps it would be easier if others at my company were using git-svn and we could share ideas. Working with git and git-svn requires learning a new workflow, and I haven’t yet reached enlightenment.

Challenges with Git:

  • The Git Wiki is often out-of-date and/or incomplete (submodule support, for example).
  • No Nautilus, Konqueror, or Windows Explorer integration.
  • No KDevelop integration.
  • git-gui should:
    • let me double-click on files listed in either “Staged Changes” or “Unstaged Changes” to edit the file, or let me right-click and choose an “edit” option.
    • let me set up and use an external diff program such as meld or kdiff3. qgit has an external diff option (defaults to kompare), but it doesn’t use the working copy on the right-hand side, so it’s not possible to use the diff tool to change the working copy file.

Challenges with Git-SVN: (More complicated to use than Subversion)

  • Two-stage commit instead of one: ‘git commit’, then ‘git-svn dcommit’.
  • Error messages are cryptic, so I don’t know how to resolve the errors.
  • ‘git-svn rebase’ doesn’t merge changes from the upstream Subversion server into my working copy, and git-svn doesn’t tell me what workflow I should be using. So I ran ‘git-svn fetch’ to pull upstream Subversion changes. Then I ran git-gui and chose Merge->Local. It told me something helpful: “You are in the middle of a change. File X is modified. You should complete the current commit before starting the merge. Doing so will help you abort a failed merge, should the need arise.” ‘git-svn rebase’ should have told me the same thing. (A minimal workflow sketch follows this list.)
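
For what it’s worth, here’s the day-to-day sequence I’ve been converging on. It’s only a sketch, assuming the repository was originally created with ‘git svn clone’ (clone URL omitted), and not something prescribed by the git-svn documentation:

git-svn rebase                          # fetch new Subversion revisions and replay local commits on top of them
# ... edit files ...
git commit -a -m 'describe the change'  # commit modified, tracked files locally
git-svn dcommit                         # send each local commit to the Subversion server as its own revision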

Reasons to continue with Subversion:

  • Workflow is easier, less complex — perhaps because I’m used to it.
  • Windows Explorer integration via TortoiseSVN.
  • IDE integration. Nearly every IDE supports or has a plugin for Subversion.
  • svnmerge.py gives me cherry-picking support (between branches within the same repository).
  • svnmerge.py remembers merges, so I don’t have to resolve the same conflicts twice.
  • I don’t need disconnected operation in my workplace.

I hope that in a year, Git, git-svn, and developer tool integration will be more mature and thus more rewarding to use. With the rapid development I see happening, that wouldn’t be surprising.

I will continue to use git-svn. It gives me the speed I need for working with log history, annotate and diff.

Update: I’ve come across Git for Computer Scientists, and seeing the pretty graphs leads me to believe that working with git requires an understanding of how git works.

Ethics are about business survival

[Business ethics about survival, leaders told](http://www.deseretnews.com/dn/view2/1,4382,660225718,00.html)

> Ethics aren’t important because they help businesses feel good about themselves… [it] is about staying in business.

> “We don’t ask you to do ethics so you can feel warm and soft and squishy,” said Marianne Jennings [a professor and columnist]. “We ask you to do ethics because it is an integral part of long-term business survival. This is the thing you have to stay focused on when the pressure hits. This is the antidote.”

[Read more](http://www.deseretnews.com/dn/view2/1,4382,660225718,00.html)