2016-12 Cool Stuff
Some of the interesting things I’ve been playing with lately
Created by Jim Moore
2016-12 Cool Stuff by Jim J. Moore is licensed under a Creative Commons Attribution 4.0 International License.
Presentation source at https://github.com/jdigger/201612-cool-stuff-preso
Agenda
- Git Tips
- Version Controlled OS
- Spring Boot
- JHipster
- Modern Continuous Integration
Git Tips
Don’t forget about Git Made Simple
![Git Made Simple](images/Git_Made_Simple.png)
git config --global push.default current
$ git checkout mybranch
$ git push # git push origin HEAD:mybranch
This differs from "matching" (the pre-2.0 default), which pushes all branches whose names match a branch on the remote, and from "simple" (the post-2.0 default), which refuses to push until an upstream is configured: "current" simply pushes the current branch to a same-named branch on the remote.
git config --global alias.pf …
[alias]
pf = "!git config --get branch.$(git symbolic-ref
--short HEAD).merge > /dev/null && git push
--force-with-lease || git push --force-with-lease -u"
(If you copy & paste into your ~/.gitconfig
you need to remove the newlines used here for formatting)
Runs "git push --force-with-lease", adding the "-u" flag to set the tracking branch if it hasn’t already been set.
--force-with-lease
The docs for "--force-with-lease"
Essentially it does “optimistic locking” on the remote branch: it checks that the rev hasn’t changed since the last time you did a "git fetch". If the revs match, it acts as "git push --force", because it assumes you’ve already done any needed reconciliation. If they don’t match exactly, it fails.
git config --global branch.autoSetupRebase always
When the upstream branch is set, the "branch.<name>.rebase true" config is also set.
What that effectively means is that the default behavior for "git pull" is to do a rebase instead of a merge.
What That Looks Like
$ git checkout -b mybranch
Switched to a new branch 'mybranch'
$ cat .git/config
[remote "origin"]
url = https://github.com/jdigger/201612-cool-stuff-preso.git
fetch = +refs/heads/*:refs/remotes/origin/*
[branch "master"]
remote = origin
merge = refs/heads/master
rebase = true
$ git pf
To https://github.com/jdigger/201612-cool-stuff-preso.git
* [new branch] mybranch -> mybranch
Branch mybranch set up to track remote branch mybranch from
origin by rebasing.
$ cat .git/config
[branch "master"]
remote = origin
merge = refs/heads/master
rebase = true
[branch "mybranch"]
remote = origin
merge = refs/heads/mybranch
rebase = true
Git Workflow Implications
- Using "git pull" and "git pf" you can safely have your nice linear history even when doing squashing and other advanced features.
- The caveats about rewriting the history of published integration branches still apply, but it’s no worse than when done with a merge.
- Of course you want to do "git pull && git rebase origin/master" on a regular basis.
Making “Green to Green” Development Easier
If you’re the kind of person who does TDD and likes to “checkpoint” your work every time you see The Green Bar…
![6gh9vOu0qbQmk](http://i.giphy.com/6gh9vOu0qbQmk.gif)
git config --global alias.fixup …
[alias]
fixup = "!sh -c 'git add -A &&
git commit -m \"fixup! $(git log
-1 --format='\\''%s'\\'' $@)\"' -"
(If you copy & paste into your ~/.gitconfig
you need to remove the newlines used here for formatting)
Adds any changes made to the index and commits them with the same commit summary message as the prior commit, prepended with "fixup! ".
That becomes much more useful when used in conjunction with…
git config --global rebase.autosquash true
Adds "--autosquash" to any "git rebase -i" command.
What That Looks Like
$ git commit -am "Build out Git Workflow section"
[mybranch 8602f88] Build out Git Workflow section
1 file changed, 38 insertions(+)
$ # changes made
$ git fixup
[mybranch 09931f2] fixup! Build out Git Workflow section
1 file changed, 10 insertions(+)
$ # changes made
$ git fixup
[mybranch 9640c0f] fixup! fixup! Build out Git Workflow section
1 file changed, 7 insertions(+)
$ git rebase -i
pick 8602f88 Build out Git Workflow section
fixup 09931f2 fixup! Build out Git Workflow section
fixup 9640c0f fixup! fixup! Build out Git Workflow section
# Rebase d88acf0..9640c0f onto d88acf0 (4 commands)
#
# Commands:
# p, pick = use commit
# s, squash = use commit, but meld into previous commit
# f, fixup = like "squash", but discard this commit's log message
If you use the "git fixup" and squashing process together, it’s an extremely powerful and convenient tool that lets you stay “in the flow.”
Like all powerful tools, it implies an extra level of diligence and responsibility.
Please make sure you proof-read the end result ("git diff origin/master HEAD", inspect in SourceTree, etc.) before pushing!
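As a side note, if you trust the generated todo list you can apply the autosquash without ever opening an editor; the GIT_SEQUENCE_EDITOR trick below is standard git, while the repo contents are invented for the sketch.

```shell
#!/bin/sh
# Sketch: apply "fixup!" commits non-interactively by letting the
# sequence editor accept the autosquash todo list unchanged.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email jim@example.com && git config user.name jim
echo a > notes.txt && git add notes.txt
git commit -q -m "Build out Git Workflow section"
echo b >> notes.txt && git add -A
git commit -q -m "fixup! Build out Git Workflow section"
# GIT_SEQUENCE_EDITOR=: is a no-op editor, so the todo list is taken as-is
GIT_SEQUENCE_EDITOR=: git rebase -i --autosquash --root
git log --oneline    # a single "Build out Git Workflow section" commit remains
```

Skipping the editor also skips your chance to review the todo list, so this is best reserved for branches you’re about to proof-read anyway.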
Of course this is one of the major ways git is so much better than prior RCSs. The Index lets you easily do all kinds of magic.
Proofing On The Front
If you prefer to do the proof-reading before ever adding to the Index you can:
- "git diff"
- view the uncommitted changes in a tool like gitk, IDEA, SourceTree, etc.
- "git add -p" and explicitly approve every chunk
Personally, I use all four (these 3 + “fixup”) depending on what I’m doing, the nature of the changes, etc. As always, it’s good to be aware of the tools available and to learn when to choose the right one.
End Product
- Git makes it useful and easy to have terrible (but progressively better) “rough drafts” of your work as you think through a problem
- At the end of the process you can squash your rabbit trails, consolidating those drafts into a beautiful end product of one or more commits crafted to make it clear to the poor souls that come after of what happened and why
- Of course that only works if you also make sure that the commit message(s) accurately describes what changed and why
Git-Process
![Git Process](images/Git-Process.png)
Version Controlled OS
I’ll be talking particularly about macOS/OSX, but the techniques apply for any reasonable OS.
Simple Example With Homebrew
On macOS one of the principal ways of installing software is with Homebrew. (Along with Homebrew Cask for binary-only installations.)
One of the many benefits is that it’s trivial to get a listing of what’s on your machine:
$ brew list
ack freetype httpie mongodb
asciidoc imagemagick vim docker
...
$ brew cask list
firefox sourcetree atom spotify
virtualbox dropbox intellij-idea sublime
evernote java skype hipchat
...
Let’s remember that information…
$ brew list > brews.txt
$ brew cask list > casks.txt
$ git add brews.txt casks.txt && git commit -m "Latest brews"
$ git push
Now if your machine gets toasted, you get a new one, etc.
$ git clone https://myremoterepo.git brewsrepo ; cd brewsrepo
$ cat brews.txt | while read l ; do brew install $l ; done
$ cat casks.txt | while read l ; do brew cask install $l ; done
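The read-loops above can also be written with xargs, which batches the package names into a single install command; a sketch with invented sample data (BREW=echo keeps it a dry run — drop that default to really call brew, and the same pattern works for "brew cask install" with casks.txt):

```shell
#!/bin/sh
# Sketch: restore recorded packages with xargs instead of a read-loop.
printf 'ack\nvim\nhttpie\n' > brews.txt   # stand-in for the real captured list
BREW="${BREW:-echo}"                      # default to a dry run for safety
xargs "$BREW" install < brews.txt         # → install ack vim httpie
```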
See Josh Long’s blog post for a much better (and robust) write-up
“Dotfiles”
“Dotfiles” is a shorthand way of referring to all the various configurations that reflect how, over the years, you’ve optimized your environment.
It refers to the fact that on *nix systems a great deal of customization is done via files starting with a “.” (which hides them), such as "~/.profile".
Includes things like:
- what applications are installed
- how they’re set up
- plugins, etc.
- scripts
- aliases
- key-mappings
- etc.
In other words, it’s all those various pieces that, after getting a new machine, often take a couple of days to a week to restore before everything “feels” right and you can be truly productive again.
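Before getting to history and tooling: the core mechanic is simple enough to hand-roll. A minimal sketch, assuming your dotfiles live un-dotted in a checkout (the link_dotfiles name and layout are my invention, not a standard):

```shell
#!/bin/sh
# Sketch: symlink every file in a dotfiles checkout into a target
# directory, adding the leading dot (profile -> .profile, etc.).
link_dotfiles() {
  src=$1 dst=$2
  for f in "$src"/*; do
    ln -sfn "$f" "$dst/.$(basename "$f")"   # -f: replace stale links in place
  done
}
# Usage (illustrative): link_dotfiles "$HOME/dotfiles" "$HOME"
```

Symlinking (rather than copying) means edits in $HOME land directly in the checkout, ready to commit.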
History of Dotfiles
In many ways, it’s an extension of the whole “infrastructure as code” move of DevOps (after all, your workstation is part of development and delivery), but it’s much older than that.
The tools and techniques have gotten much better recently, though.
“Meant” To Be Forked?
There’s an on-going debate as to whether dotfiles are meant to be forked or if they’re too personal for that.
Regardless, the industry consensus is that they should be version-controlled and shared freely.
Private Information
What about things like SSH keys, passwords, OAuth tokens, etc.?
The most basic approaches:
- Recreate everything manually (regenerate tokens, revoke old ones, etc.)
- Messy, error-prone, and time-consuming. But it’s also what most everyone does.
- Store with everything else, but encrypted
- Depending on how this is done, can be subject to brute-force attacks if the attacker cares enough
- Store “specially” (thumb-drive, paper in your wallet, stripe across data-stores, etc.)
- Prevents brute-force attacks, but susceptible to getting lost. Because of its inconvenience people tend to subvert this by copying in multiple places, etc.
- Store specially and encrypt
- While the most secure (as-in “state secrets”), the hardest to deal with by far
Encryption and Git
While encrypted data is just data to git, what we’d like for this use-case is for it to be encrypted on its way into the repository and unencrypted on its way out.
(But only if you have the keys, of course!)
For this example we’ll use a combination of git-crypt and Keybase.io using Gnu Privacy Guard (GPG).
$ brew install gnupg2 keybase git-crypt
$ keybase login <KEYBASE_USERNAME> # authorize device in blockchain
$ keybase pgp export -s | gpg --import
$ cd myrepo
$ git crypt init
$ git crypt add-gpg-user <KEY_ID or FINGERPRINT>
$ echo "secrets/* filter=git-crypt diff=git-crypt" >> .gitattributes
$ git add .gitattributes secrets/*
$ git commit -m "Added some secret files"
When the repo is cloned, everything looks good as long as you don’t try to look at the encrypted file(s), which show as binary garbage.
You need to do a "git crypt unlock" with the appropriate credentials (the private key in this case), at which point the local workdir will have the unencrypted files.
For a much deeper treatment of security, GPG, Git, etc., Tyler Duzan wrote an excellent article. He specifically talks about signing commits, but the tools and practices all apply.
In particular, he talks about how to secure your keys much better than I’m doing here for the sake of simplicity.
Hosting Dotfiles
GitHub’s Dotfile Guide is a great central source of information, as is “Awesome dotfiles”.
Just perusing the major repos of published dotfiles is sure to provide useful gems.
A classic for macOS/OSX is Mathias Bynens' script to customize virtually everything about your Mac.
If you want to make your files public, GitHub.com is a great place.
If you are not so sure, BitBucket.org and GitLab.com offer free private repos.
Dotfile Managers
“Classically” managing these kinds of things has been a matter of copying some files around and running custom shell scripts.
Thankfully, the tooling has gotten much better…
If you’re comfortable with general-purpose system management tools like Puppet or Ansible, there’s lots of resources on how to use those to manage workstations as well.
(Such as GitHub’s Boxen to manage Macs with Puppet.)
More specialized for dotfiles, here’s a sampling (in no particular order):
- Pearl - focuses on modularity (packages), making it easy to share with your friends, your team, or the world
- Fresh - “Bundler for your dotfiles”, it’s as powerful (and complex) as you would expect from a package manager
- Homemaker - Statically compiled (Go) for system types, supports configuration variants for different hosts, and doesn’t assume git for everything.
- MacOS Bot - Simple and highly opinionated around how developers typically like things
- YADM - Doesn’t rely on symlinks, supports encryption, and allows multiple configurations based on OS/hostname.
- Cider - Lets Homebrew and Casks do most of the heavy lifting, providing some additional config for dotfiles and centralizing for your machine(s).
- DotBot - Requires no installation; it tries to make typical things easy while staying out of your way
Spring Boot
![Spring Boot tweet](images/Spring_Boot_tweet.png)
Amazingly, it’s gotten a lot simpler since then.
Brings “convention over configuration” as far as you can imagine taking it, making it trivial to have major development and operational concerns taken care of (configuration from environment or service registry, security, caching, load-balancing, monitoring & management, etc.)
Naturally, being Spring, you can use as much or as little as you want, you can tune the heck out of it, and it integrates with EVERYTHING.
![Spring Initializr](images/Spring_Initializr.png)
JHipster
![JHipster](images/JHipster.png)
![JHipster Client](images/JHipster_Client.png)
![JHipster Server](images/JHipster_Server.png)
![JDL Studio](images/JDL-Studio.png)
![jhipster generator](images/jhipster_generator.png)
![JHipster Project](images/JHipster_Project.png)
While it’s principally meant for starting a project, there’s on-going work to make it handle updates to an existing JHipster project as well.
Even if, for whatever reason, you don’t want to use JHipster directly, it’s extremely useful to see how to integrate a host of technologies.
Tons of comments in the code, illustrations of best-practices, etc.
![The JHipster Mini Book 2 0](images/The_JHipster_Mini-Book_2_0.png)
Modern Continuous Integration
In the Beginning (2001)
![CruiseControl Home](images/CruiseControl_Home.png)
- Essentially a fancy cron on Tomcat that would poll your repository and kick off a build.
- While it provided some basic capabilities (test reports, notifications, etc.), it was very rudimentary.
- Was a HUGE step forward and helped kick off The Rise of Agile®™©.
- You configure it on web pages, and it tracks configuration, builds and their results in its database.
![Hudson Jenkins](images/Hudson_Jenkins.png)
- Provided a nice plugin architecture, just as both Agile and Open Source were coming into their own.
- A huge ecosystem of plugins.
- Became the de-facto standard starting about 2008.
- Acts as a shared resource for teams/companies.
- It is the responsibility of the Jenkins admins to make sure that the environment is set up correctly, manage plugins, etc.
- Continuous Delivery support (pipelines, etc.) is starting to get serious attention.
- Very quick to become a “pet”/“snowflake” server, with all the associated problems.
- While it’s possible to support multiple branches, it’s extremely fragile if those branches have configuration/environment needs different than the mainline.
![Travis CI](images/Travis_CI.png)
- The first popular CI platform for the Cloud Era.
- There’s virtually no setup on the server: just point it at your project repository.
- All configuration for jobs is done via a configuration file (".travis.yml") in the project’s repository.
- Provides a handful of standard VMs (later Containers), giving a completely sandboxed environment for each build.
- There’s no plugins, and the responsibility for environment configuration (installing needed software, etc.) has to be handled in each project’s configuration file.
- No pipeline support, and only fairly rudimentary builds are practical (made a bit better by competitors like CircleCI and GitLab)
- No centralized IT support needed, and scales extremely well
- Since all configuration lives with the code, it’s trivial to support different configurations for different branches, moving backwards in time, etc.
![Concourse CI](images/Concourse_CI.png)
Primitives:
- Tasks: execution of a script in an isolated environment with dependent resources made available to it
- Resources: data, inputs/outputs (e.g., files, git, s3, etc.)
- Jobs: functions composed of behavior (tasks) and inputs/outputs (resources/other jobs)
- All configuration is done in the project’s repository. (Though you can have shared configuration as a resource.)
- Uses the efficiency of Containers and surrounding infrastructure (e.g., Docker registries) to give the capabilities of plugins without the need to actually plug them into anything.
- Pipelines are “auto-discovered” by building a DAG of a job’s inputs and outputs.
- Every task (and even every “resource”, which is roughly analogous to “plugins”) is run in its own container.
- Because everything is sandboxed, with very tightly defined data transfer between the parts (principally STDIN/STDOUT), it makes reasoning about the pieces much easier and more reliable than in shared-state systems.
Because the “servers” are immutable and container tooling is ubiquitous, you can set everything up in a VirtualBox instance running locally, check the configuration into the project’s repository, and be guaranteed that it will run exactly the same in a data-center/cloud environment.
![Concourse CI 5min](images/Concourse_CI_5min.png)
Not only can the project’s code be written in anything, but so can any of the “resources”.
- If you look at the source code, the standard ones vary from Bash to Ruby to GoLang, but there’s no reason Java or C# couldn’t be used.
- On a Docker container it will simply invoke "/opt/resource/check", "/opt/resource/in", or "/opt/resource/out", with communication via STDIN/STDOUT/STDERR (and some minimal environment variables)
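To make that contract concrete, here’s a toy "check" in shell. Everything in it is invented for illustration; a real resource would query git, S3, etc. and ship inside its own container image rather than a temp dir.

```shell
#!/bin/sh
# Sketch of a minimal Concourse resource "check" script. Concourse pipes
# the resource's source configuration in as JSON on STDIN and expects a
# JSON list of versions on STDOUT.
dir=$(mktemp -d)
cat > "$dir/check" <<'EOF'
#!/bin/sh
payload=$(cat)   # the source config; this stub ignores it
# A real resource would compare $payload against its upstream here;
# this one always reports a single static version.
echo '[{"ref":"static-version-1"}]'
EOF
chmod +x "$dir/check"
echo '{"source":{}}' | "$dir/check"    # → [{"ref":"static-version-1"}]
```

Pairing this with equally small "in" and "out" scripts is all it takes to have a fully functional (if useless) resource.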
There’s an EXCELLENT tutorial at https://github.com/starkandwayne/concourse-tutorial
Review
- Git Tips
- Version Controlled OS
- Spring Boot
- JHipster
- Modern Continuous Integration