RFC7217 describes a privacy extension that aims to improve upon RFC4941. In contrast to RFC4941, which provides random, temporary addresses, RFC7217 generates a stable address that offers many of the same benefits as the temporary ones.

Known as ‘stable-privacy’ in NetworkManager, RFC7217 is not desirable in some situations, for example in a hosted environment where the IPv6 address is expected to be deterministically derived from the MAC address, as is the case on Linode.

Linode uses SLAAC to provide IPv6 addresses to its nodes. With SLAAC, Linode’s routers advertise the prefix for the IPv6 network and the hosts, if configured properly, will use that information combined with the hardware address to generate an IPv6 address for use.
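To make that concrete, here is a sketch of the EUI-64 derivation SLAAC performs (the MAC address and prefix below are made-up example values): the universal/local bit of the first octet is flipped and ff:fe is inserted in the middle to form the interface identifier, which is then appended to the advertised prefix.

MAC address:        f2:3c:91:12:34:56
flip the U/L bit:   f0:3c:91:12:34:56
insert ff:fe:       f03c:91ff:fe12:3456  (interface identifier)
advertised prefix:  2600:3c01::/64
SLAAC address:      2600:3c01::f03c:91ff:fe12:3456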

On Red Hat based systems like RHEL and CentOS, NetworkManager interferes with this process by defaulting to RFC7217 stable addresses. To restore EUI-64 addressing, we need to reconfigure NetworkManager not to use stable addresses.

To check and see if your interface is running in ‘stable-privacy’ mode, run:

$ nmcli conn show "Wired connection 1" | grep ipv6.addr-gen-mode
ipv6.addr-gen-mode:                     stable-privacy

Above, we are running in stable-privacy mode. We want to disable privacy extensions altogether and run in eui64 mode. Update the connection configuration by running:

$ sudo nmcli conn modify "Wired connection 1" ipv6.addr-gen-mode eui64
# reload our connection
$ for act in down up; do sudo nmcli conn $act "Wired connection 1"; done

Our IPv6 address should now properly reflect the combination of our prefix and hardware address.
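A quick way to double-check is to list the global addresses on the interface (the interface name eth0 is an assumption; substitute your own) and look for the ff:fe marker in the middle of the interface identifier:

$ ip -6 addr show dev eth0 scope global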


Over the last day or so I’ve been slowly moving my ruby projects over to using rbenv instead of RVM. There’s nothing inherently wrong with RVM, but I do lots of interesting things with my shell that, when combined with my tmux setup, always seem to give me flak.

So at the recommendation of a friend, I sat down with rbenv for a couple of hours, and these are my notes from that experience.


Typically, I would not make so drastic a change, but I found the process of converting to be fairly painless and simple enough to reverse if need be without much fanfare. The process for me was to …

  1. Remove any source references to RVM scripts in my .bashrc and .bash_profile files
  2. Remove any path modifications that include the RVM bin dir
  3. Use homebrew to install rbenv (alternatively, you could clone the repository1 into ~/.rbenv)
  4. Add the following to my .bashrc, then reload the shell (see below)
if which rbenv > /dev/null; then eval "$(rbenv init -)"; fi
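To pick up the change from step 4, reload your shell and confirm rbenv is wired in; a quick sketch (bash assumed):

$ exec $SHELL -l
$ type rbenv | head -1
rbenv is a function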

Moar Rubies!

If all went well above, we should have a working rbenv installation. Now let’s take a look at the rubies currently available –

$ rbenv versions
* system (set by /Users/arusso/.rbenv/version)

I only have a single ruby version installed initially, but with the help of the ruby-build2 plugin (available via homebrew), I get access to the rbenv install command, which lets me install new versions of ruby.

In this case, my system ruby is version 2.0.0-p481 (per ruby -v). This is too new for some of my work, since I do a good deal with Puppet on RHEL6, which ships with 1.8.7-p374. Let’s start by installing that version –

$ rbenv install 1.8.7-p374
Downloading ruby-1.8.7-p374.tar.gz...
-> http://dqw8nmjcqpjn7.cloudfront.net/876eeeaaeeab10cbf4767833547d66d86d6717ef48fd3d89e27db8926a65276c
Installing ruby-1.8.7-p374...
Installed ruby-1.8.7-p374 to /Users/arusso/.rbenv/versions/1.8.7-p374

Downloading rubygems-1.6.2.tgz...
-> http://dqw8nmjcqpjn7.cloudfront.net/cb5261818b931b5ea2cb54bc1d583c47823543fcf9682f0d6298849091c1cea7
Installing rubygems-1.6.2...
Installed rubygems-1.6.2 to /Users/arusso/.rbenv/versions/1.8.7-p374

Now looking at the versions available to us, we see –

$ rbenv versions
* system (set by /Users/arusso/.rbenv/version)
  1.8.7-p374

Activating a Ruby

I typically activate rubies in two ways. First and foremost, when I’m switching between rubies for testing I used to use rvm use $version to get the ruby I want. With rbenv, this becomes rbenv shell $version.

The second way I choose rubies is by setting my ruby version in the .ruby-version file in my project directory. Fortunately, this does not really change and I can mostly leave it alone3.
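A quick sketch of both approaches (the version and project path here are only examples):

# one-off: switch the current shell session to a specific ruby
$ rbenv shell 1.8.7-p374

# per-project: rbenv local writes the .ruby-version file for you
$ cd ~/src/my-project
$ rbenv local 1.8.7-p374
$ cat .ruby-version
1.8.7-p374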

For more information on how rbenv chooses a ruby version, see the project’s README section4 on the subject.

Next Time

The next post will dive into the differences in gemset management between RVM and rbenv, as well as some useful plugins that make rbenv a better tool all around.

  1. https://github.com/sstephenson/rbenv 

  2. https://github.com/sstephenson/ruby-build 

  3. rvm conveniently allows you to select which gemset you want to use within the .ruby-version file. rbenv, on the other hand, does not even support gemsets without the help of the rbenv-gemset5 plugin. With it, you need only move the gemset information into the .ruby-gemset file. Part 2 will go into more detail about gemsets. 

  4. https://github.com/sstephenson/rbenv#choosing-the-ruby-version 

  5. https://github.com/jf/rbenv-gemset 

…because “Poor Man’s Puppet Testing” just sounded lame…


There are better, more effective, and automated ways of testing your puppet manifests. However, if you are in a position where setting up a CI server is not in the time budget, this article is for you.

This article also assumes you do not have some sort of orchestration tool at your disposal, and uses SSH1 to fake it until we make it. If this is not your situation fret not! You should be able to tailor it to use your orchestration tool of choice with minimal effort.


This article discusses a fairly simple idea – we will update some puppet code, commit our changes, and have a simple process kick off noop runs on the hosts we specify. The output of these runs will be displayed for us to inspect.


First off, we will need two bash functions. The first is a helper function that I came up with as a way to abstract away running arbitrary commands on a host, called runon. Fancy, right? Fortunately, you should be able to refactor it to use your tool of choice fairly easily.

runon () {
    SSH_OPTS="-o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -a -T"

    if [[ "$1" == "" || "$2" == "" ]]; then
        echo "USAGE: runon <client> \"<command>\"";
    else
        CLIENT="$1";
        ssh $SSH_OPTS ${CLIENT} "$2";
    fi
}
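For example, to fire a one-off command at a host (the hostname here is hypothetical):

$ runon web01.example.com "uptime"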

Next up, we are going to set up a bash function that constantly polls a file we specify for hosts to connect to and test our manifests against. I use environments to test all code changes2, so the function needs to know about the environment as well as the filename to poll for changes.

test_puppet_env () {
    if [[ -f $1 && "$2" != "" ]]; then
        while [ 1 -eq 1 ]; do
            while read line; do
                for hst in $(eval "echo $line"); do
                    echo "#### HOST: ${hst} ####";
                    runon ${hst} "sudo puppet agent -t --noop --environment ${2}";
                    echo "#######################";
                done;
                eval "sed -i '' -e '/^$line\$/d'" $1;
            done < $1;
            sleep 1;
        done;
    else
        echo "USAGE: test_puppet_env <filename> <environment>"
    fi
}

With that, we have all the functions we need. Now, let’s put it to use. We will assume I am doing some refactoring in a puppet environment ‘foo’, and want to test changes that would apply to hosts specified in ~/test-hosts:

test_puppet_env ~/test-hosts foo

Now open up a new window (or hopefully a pane3) and run the following command:

echo arusso-dev-0{1..9}.example.com >> ~/test-hosts

Suddenly, you will see output in the original pane where we ran test_puppet_env. For a couple of hosts, this works reasonably well. But when I am testing a larger number of hosts (>5), I typically do the following so I can grep through the output later rather than read it outright:

test_puppet_env ~/test-hosts foo | tee session-$(date +%Y%m%dT%H%M)

Now, in addition to having the output sent to stdout, I have a file I can grep/awk/sed through to find the interesting bits of information I’m looking for.
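For instance, assuming a session file from an earlier run and Puppet’s usual noop output format, something like this pulls out just the host markers and the resources that would have changed (the filename is hypothetical):

$ grep -E 'HOST:|\(noop\)' session-20140701T1200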

So that’s it. Not too bad, right? Let me know what you think below.

  1. Technically this is SSH in a loop. Sleep well knowing I carry the burden of this terrible deed. Still, let’s keep it between us and not say anything to Luke. 

  2. Obviously some code cannot be tested in environments; the most obvious example being types. I assume that you know this, and use something like Vagrant to develop and modify such code in a sandbox. If not, I highly suggest this approach. 

  3. tmux (or screen) is your friend here. 



After Heartbleed, I found myself in need of replacing a large number of SSL keypairs, most of which included SAN certificates. Of course, the first thing I did was try to script the process which resulted in some bashing of my head against my desk as I stumbled through the OpenSSL Ruby library.

But fret not, I’ll try to explain it as best I can and if you think I’ve made a mistake, I’m sure you will let me know in the comments below!

Assumptions and Prerequisites

I assume you are using a modern Ruby, version 2.1 or greater in this case. Though older versions may work, I have not tested them. Let me know in the comments if you find that another version works or doesn’t.

As for any gems we may need, the only one we pull in is the openssl-extensions gem.


Creating our Certificate Request

Including our Requirements

I may be in the minority, but I hate it when a post does not give me the require statements I need. Since this is my article, I will do future me a favor and provide them here. You’re welcome, future me.

require 'openssl'
require 'openssl-extensions/all'

Generating the Key Pair

Now we will generate our key pair. As you probably know, we need to provide the public key as part of our request, and then use the private key to sign the request.

key = OpenSSL::PKey::RSA.new 2048

# write the private key out with restrictive permissions
keyfile = '/tmp/mycert.key'
file = File.new(keyfile,'w',0400)
file.write(key.to_pem)
file.close

Generate the Request

Next up we will generate our request object. To do that, we first need to create our certificate subject as an OpenSSL::X509::Name object:


subj_arr = [ ['CN', 'myhost.example.com'], [ 'DC','example'], ['DC','com']]
subj      = OpenSSL::X509::Name.new(subj_arr)

Now, we create our request:

request = OpenSSL::X509::Request.new
request.version = 0
request.subject = subj
request.public_key = key.public_key

Now that we have our request, we need to setup our extensions and add them to it. This is the critical piece of this post since our SAN values are one of the extensions we need to add.

To begin, I found the following extensions to be sufficient for basic SSL certificates. You may need something different.

exts = [
  [ 'basicConstraints', 'CA:FALSE', false ],
  [ 'keyUsage', 'Digital Signature, Non Repudiation, Key Encipherment', false ],
]

Next we add our SAN extension to the request. First we need to format each SAN entry, then we’ll add them to our extension array:

sans = [ 'example.com', 'www.example.com' ]
sans.map! do |san|
  san = "DNS:#{san}"
end
exts << [ 'subjectAltName', sans.join(','), false ]

Now we need to convert our array into OpenSSL attributes, and add them to our request.

ef = OpenSSL::X509::ExtensionFactory.new
exts.map! do |ext|
  ef.create_extension(*ext)
end
# wrap the extensions in an extension request attribute and add it to the request
attrval = OpenSSL::ASN1::Set([OpenSSL::ASN1::Sequence(exts)])
attrs = [ OpenSSL::X509::Attribute.new('extReq', attrval) ]
attrs.each do |attr|
  request.add_attribute(attr)
end

Sign our Request

The very last thing we do is sign our request, after we are done modifying it. If you do any other work on the request object in your own code, you need to make sure you do it before you get here.

request.sign(key, OpenSSL::Digest::SHA1.new)

# save our request to a file
csrfile = '/tmp/csrfile'
file = File.new(csrfile,'w',0400)
file.write(request.to_pem)
file.close

# print out our request to screen for good measure
puts request.to_text
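If you want a second opinion on the request we just wrote out, the openssl command line tool will parse it and verify the signature:

$ openssl req -in /tmp/csrfile -noout -text -verify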

Code for this Example

Real World Example

Remember how I needed to write a tool in the face of the Heartbleed scramble? Well, you can check out how I used the above code to write a tool that grabs an existing certificate and extracts the information I need to generate a new key and certificate request based on it.

link: regenerate-cert

I was working on a server where the customer was using ClamAV and stream ports to check for viruses. They had a problem where the server would not accept their connection. The errors showed up in the log files like this:

Sun Jan  1 08:00:03 2012 -> ERROR: ScanStream 1088: accept() failed.

At first I thought it was a firewall rule, but after looking things over the firewall seemed fine. In my googling, I noticed a lot of people had problems with SELinux, and since this system did in fact run SELinux, I started looking at those log files. I found the following errors (formatted for readability):

type=AVC msg=audit(1326835719.949:35310855): avc:  denied  { name_bind } for
    pid=4901 comm="clamd" src=1505 scontext=user_u:system_r:clamd_t:s0
    tcontext=system_u:object_r:port_t:s0 tclass=tcp_socket
type=SYSCALL msg=audit(1326835719.949:35310855): arch=c000003e syscall=49
    success=no exit=-13 a0=c a1=415d0e50 a2=10 a3=41a705af1fe3fb79 items=0
    ppid=1 pid=4901 auid=4294967295 uid=3218 gid=3218 euid=3218 suid=3218
    fsuid=3218 egid=3218 sgid=3218 fsgid=3218 tty=(none) ses=4294967295
    comm="clamd" exe="/usr/sbin/clamd" subj=user_u:system_r:clamd_t:s0

This was one of my first rodeos with SELinux, so further research on the name_bind permission was necessary. I found that this denial occurs when an application tries to open a port it isn’t allowed to. I checked the SELinux configuration to see what ports it would allow ClamAV to open:

$ sudo semanage port -l | grep clamd
clamd_port_t tcp 3310

Bingo! The customer’s conf file defaulted to stream ports in the range 1024-2048, so I added that range as an exception:

$ sudo semanage port -a -t clamd_port_t -p tcp 1024-2048 
$ sudo semanage port -l | grep 3310 
clamd_port_t tcp 1024-2048, 3310 

Once done, I tested…

$ telnet localhost 3310
Trying ...
Connected to localhost.localdomain (...).
Escape character is '^]'.
PORT 1246

And confirmed it worked!