• git merge --squash

    I’m a fan of rebasing in git. I like being able to make lots of small, messy commits without worrying about a readable history, and then cleaning it all up later. But on occasion, rebasing and squashing my commits has resulted in a pile of merge conflicts. I’m not really sure why this sometimes happens, but it is a pain: fixing merge conflicts can be error-prone and time-consuming.

    Thankfully, I’ve recently come across git merge --squash, which can help in these situations. To use it, do the following:

    # Checkout master or whatever you branched off
    git checkout master
    
    # Create a new, clean branch
    git checkout -b fancy-new-clean-branch
    
    # Now merge the old, messy branch into the new
    # branch with git merge --squash
    git merge --squash old-messy-branch
    
    # This will copy all the changes to the current
    # branch but will NOT create a new commit. Add all
    # the changes and then commit. 
    git add .
    git commit -m 'New easy to understand commit'
    

    Now your git history on the new branch will have a new single commit that you can push up and create a pull request for.
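
    From there, pushing the new branch up is the usual flow. A quick sketch, assuming your remote is named origin:

    # Push the clean branch and open a pull request from it
    git push -u origin fancy-new-clean-branch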

  • Mistakes You Apparently Just Have to Make Yourself

    Mistakes You Apparently Just Have to Make Yourself:

    This code is so bad, we have to rewrite it from scratch.

    Functional tests!!

    Bureaucracy solves everything

    I’ve probably done all of these at some point but the above were particularly painful. The problem is that most of these sound completely reasonable at first. It’s only after you do them that you see the underlying problems.

  • 10 Amazing Dev Tweets

  • Create a Vim app for Mac OS X

    On occasion it is nice to have a way to open a file in Vim without having to jump over to the terminal. I researched a few GUI options like MacVim, VimR, and others, but all had problems: MacVim didn’t support FZF and wasn’t built on Neovim, and VimR and the others didn’t quite feel polished or stable enough.

    After a bit more research and trial and error, I found that I could create an Automator app with an AppleScript that would accomplish what I wanted.

    Here is the script if you want to try it out.

    on run {input, parameters}
      -- Grab the POSIX path of the file passed in, if there is one
      try
        set filePath to POSIX path of input
      on error errMsg
        set filePath to ""
      end try
    
      -- Open a new iTerm window and launch vim, exiting the shell when done
      tell application "iTerm"
        create window with default profile
        tell front window
          tell current session
            if filePath is "" then
              write text ("vim; exit")
            else
              write text ("vim " & quote & filePath & quote & "; exit")
            end if
          end tell
        end tell
      end tell
    end run
    

    To use this script, open Automator and create a new Application, then choose “Run AppleScript” and drag it into the workflow. Paste the above script in, then save the application and you are ready to go. This script opens iTerm, but it could be modified to use Terminal too.
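
    If you’d rather use Terminal, here is a rough, untested sketch of what the tell block might look like instead (it assumes Terminal’s standard do script command and reuses the filePath variable from above):

    -- A sketch of a Terminal variant: "do script" runs the command
    -- in a new Terminal window
    tell application "Terminal"
      activate
      if filePath is "" then
        do script "vim; exit"
      else
        do script "vim " & quoted form of filePath & "; exit"
      end if
    end tell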

  • Query Array of Objects in Postgres

    Storing data in a JSON column in Postgres can be very handy but a bit more difficult to query than normal tables. In particular, querying arrays of objects had me stumped for a while.

    The trick is to use jsonb_array_elements to expand the array into a row for every object. Then each object can be queried individually by using the ->> operator to extract a key and use it in a where clause.

    Conceptually this isn’t much different from a one-to-many inner join where, because of the join, a row can show up mostly duplicated in the query output.

    For example:

    Full Name (User)   Phone (User)    Zip (Address)
    John Doe           555-555-5555    78701
    John Doe           555-555-5555    78613

    So let’s say you have a user table with an addresses column, which is a JSONB column containing multiple address objects.

    The following query expands the addresses into multiple rows, then uses the ->> operator to extract the zip field and then finds any that are equal to 78701.

    select *
    from "user" u, jsonb_array_elements(u.addresses) as obj
    where  obj->>'zip' = '78701';
    

    Just like with an inner join, there might be a need to do a group by or distinct to get the result you want (see the sketch below), but overall it is pretty straightforward!
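
    For example, here is a minimal sketch that collapses the duplicated user rows back down to one row per matching user (assuming you only need the user columns):

    -- Collapse duplicates with distinct; a group by on the user's
    -- primary key would work just as well
    select distinct u.*
    from "user" u, jsonb_array_elements(u.addresses) as obj
    where obj->>'zip' = '78701';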

    Read more about the available JSON operators and functions here.

  • Interactive Git Stash

    Often I need to stash some changes in git but don’t want to stash everything.

    The solution is to use git stash -p. The -p flag tells git to stash interactively: it will iterate over each changed hunk one at a time and let you tell git what to do with it (there’s a short example after the option list below).

    These are the options:

    y - stash this hunk
    n - do not stash this hunk
    q - quit; do not stash this hunk or any of the remaining ones
    a - stash this hunk and all later hunks in the file
    d - do not stash this hunk or any of the later hunks in the file
    g - select a hunk to go to
    / - search for a hunk matching the given regex
    j - leave this hunk undecided, see next undecided hunk
    J - leave this hunk undecided, see next hunk
    k - leave this hunk undecided, see previous undecided hunk
    K - leave this hunk undecided, see previous hunk
    s - split the current hunk into smaller hunks
    e - manually edit the current hunk
    ? - print help
    
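    For example, a typical session might look something like this:

    # Interactively pick which hunks to stash; everything else stays
    # in the working tree
    git stash -p

    # ...later, bring the stashed hunks back
    git stash pop
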
  • Purge Redis Keys with Lua Script

    Redis has some powerful Lua scripting capabilities. One of the uses I’ve found for this feature is purging cache keys. On occasion I need to purge a set of keys that all have the same prefix.

    The following command will do just that.

    EVAL "return redis.call('del', unpack(redis.call('keys', ARGV[1])))" 0 prefix:*
    

    Replace prefix with whatever prefix you are looking for and the matching keys will be deleted from Redis.
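
    For reference, here is the same command run through redis-cli. KEYS walks the whole keyspace and DEL errors if nothing matches, so this is best kept to small, known sets of keys; the pattern is quoted so the shell doesn’t expand it:

    redis-cli EVAL "return redis.call('del', unpack(redis.call('keys', ARGV[1])))" 0 'prefix:*'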

  • First Round Interview with Kimber Lockhart on Technical Debt

    It’s easy for teams to go to extremes with technical debt – either trying to eliminate any and all of it at the expense of delivering on the needs of the business, or ignoring tech debt and letting it pile up without any thought or purpose, eventually causing great pain later on in the product’s life.

    This interview with Kimber Lockhart has some great advice for engineering teams on how to handle technical debt responsibly. Here are a few highlights:

    Begin code review before anyone codes. The best technical teams envision — not just review — code together. “It goes without saying that some form of code review is essential in any type of engineering organization,” Lockhart says. “Early on, pair development or sitting next to each other reading through code will work. As the team grows, it might be important to get code reviews more formally from different individuals or teams.”

    “For many companies, process evolves so that this code review is the only time engineers get feedback on their code and iterate to make it better,” says Lockhart. “Unfortunately, finding a problem after the fact forces the tough decision between taking the time to rewrite and living with bad code.”

    Technical debt should be a controlled decision to take a shortcut.

    “Technical debt is not the scarlet letter. It happens to the best of teams. I’d argue it’s actually irresponsible for a startup not to have any technical debt.”

    Scrap the shortcuts that don’t save time. Lockhart has found that shortcuts can be an illusion — often it takes the same amount of time to write clean code as it would to produce code that introduces technical debt. “The problem is, bad code often feels faster in the same way hurrying feels faster,” says Lockhart, “little time is wasted planning and the code itself is written more frantically.”

    Create a rating system for bad code. “Many engineering teams lament the failure of their organization to adequately address bad code resulting from technical debt, but they can’t get their footing when asked how to resolve it,” Lockhart observes. “Engineering teams owe their organization careful prioritization, just like anyone else making requests.”

    “Hire seasoned engineers who have some tolerance for technical debt and an earned intuition when it comes to trade-offs. Seek developers who think in pros and cons, not absolutes.”

  • Better Colors in Vim

    I’m a sucker for pretty editors. WebStorm, Atom, Sublime, and others have always been better looking than Vim. Vim and the terminal simply weren’t able to support the 24-bit colors that the other editors could. MacVim did a lot to improve the situation, but I really like to use Tmux, iTerm2, and Neovim together.

    Luckily, while doing some research on the topic the other day, I finally got 24-bit color support via iTerm (nightly), Tmux, and Neovim.

    Here is how to accomplish it.

    1.) Install Neovim. Here are the install instructions for Homebrew on Mac. Once complete, add the following to your Vim config file.

    let $NVIM_TUI_ENABLE_TRUE_COLOR=1
    

    2.) Download the nightly version of iTerm2. It supports 24-bit color.

    3.) Install a patched version of Tmux that supports 24-bit color.

    brew tap choppsv1/term24
    brew install choppsv1/term24/tmux
    
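    You will also likely need to tell Tmux that the outer terminal can handle true color. Here is a rough sketch of the ~/.tmux.conf lines that are commonly used for this; the exact terminal name is an assumption and may differ on your system:

    # Advertise 256 colors inside tmux and flag the outer terminal as
    # true-color capable (the Tc terminfo extension)
    set -g default-terminal "screen-256color"
    set-option -ga terminal-overrides ",xterm-256color:Tc"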

    Finally, here is what my setup looks like in all its colorful beauty.

    If you’d like to check out my dotfiles and Vim config, they are on GitHub here.

  • How to Create New Postgres User

    I was doing some work today in Knex, the fantastic SQL query builder for Node, and needed to create a new user. It took some research, but here is what I found:

    This assumes you’ve already logged into Postgres with another user.

    -- Quote the role name so Postgres keeps the mixed case; unquoted
    -- identifiers are folded to lowercase, which would break the
    -- psql login below
    CREATE ROLE "myUser" WITH LOGIN PASSWORD '';
    GRANT ALL PRIVILEGES ON DATABASE "knex_test" TO "myUser";
    

    To test it out just log back in.

    psql -h localhost -U myUser -d knex_test
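
    Or, from the session you are already logged into, psql’s \du meta-command is a quick way to confirm the role exists:

    -- Inside psql: list all roles and their attributes
    \du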