
Disconnected Ruby demos on OpenShift 3

I’m headed to China soon, and the Great Firewall can present issues. S2I builds on OpenShift 3 generally require internet access (for example, pulling from GitHub or installing Ruby gems), so I wanted to see what it would take to go fully disconnected. It’s actually surprisingly easy. For reference, my environment is the same as the one in the OpenShift Training repository: I am using KVM and libvirt networking, and all three hosts are running on my laptop, which has an effective IP address reachable from the KVM VMs.

Also, I have pre-pulled all of the required Docker images into my environment, as the training documentation suggests. This means that OpenShift won’t have to pull any builder or other images from the internet, so we can truly operate disconnected.

First, an HTTP-accessible git repository is currently required to use S2I with OpenShift 3. A Google search for a simple git HTTP server revealed a post entitled, unsurprisingly, Simple Git HTTP server. In it, the instructions suggest using Ruby’s built-in HTTP server, WEBrick. Here’s what Elia says:

git update-server-info # this will prepare your repo to be served
ruby -run -ehttpd -- . -p 5000

One thing to note: you must run the update-server-info command after every commit in order for WEBrick to actually serve the latest commit. I figured this out the hard way. On Fedora, as a regular user, you generally want an unprivileged (high) port, so I chose a really high one: 32768. I also had to open the firewall. Fedora uses firewalld by default; your mileage may vary:

firewall-cmd --direct --add-rule ipv4 filter INPUT 0 -p tcp -m tcp --dport 32768 -m conntrack --ctstate NEW -j ACCEPT

With the firewall open, the git repo is now served over HTTP. Not too shabby! Next, we need to make the Ruby gems accessible locally via HTTP as well. Some Google-fu again brings us to something useful: in this case, Run Your Own Gem Server. While the article indicates that you can just run gem server, I found that this produced strange results and I filed bug #1303. I was using RVM in my environment due to some other project work, so, in the end, my gem server invocation looked like:

gem server --port 8808 --dir /home/thoraxe/.rvm/gems/ruby-2.1.2 --no-daemon --debug

Of course, this is going to serve gems from your computer, which means the gems have to actually be installed there in the first place. In the case of the Sinatra example, you would have to run gem install sinatra --version 1.4.6, which would also bring in the gem’s dependencies. Of course, this requires that you have Ruby and RubyGems installed, but you already have those, right?

Running the gem server also requires opening a firewall port:

firewall-cmd --direct --add-rule ipv4 filter INPUT 0 -p tcp -m tcp --dport 8808 -m conntrack --ctstate NEW -j ACCEPT

Note again that these firewall changes are not permanent; you would need the --permanent option to persist them. You now have gems accessible over HTTP as well.

At this point you have:

  • A git http server running on port 32768
  • A gem server running on port 8808
  • Open firewall ports

In your OpenShift 3 environment you can now create a new application whose repository is the git HTTP server you set up with WEBrick. Again, that’s the same WEBrick URL as before. But if you just do that, your build will fail when you don’t have internet access. A standard-looking Gemfile probably defines https://rubygems.org as its source. For example, the Sinatra example that OpenShift provides:

source 'https://rubygems.org'
gem 'sinatra', '1.4.6'

Without internet access, we’ll never get to https://rubygems.org. So we can change the Gemfile’s source line to point at our new gem server instead. Feel free to clone the example repository and try it yourself. Remember, once you change the Gemfile you will need to run git update-server-info and then (re)start your WEBrick server. Also, be sure you are doing this on the master branch, or you’ll need to point OpenShift at whatever branch you decided to use. This totally tripped me up a few times.
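For reference, the modified Gemfile only changes the source line; the host below is a placeholder for wherever your gem server is actually listening:

```ruby
# 'your-gem-server' is a placeholder; point it at your gem server's address
source 'http://your-gem-server:8808'
gem 'sinatra', '1.4.6'
```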

At this point, you should be able to do your build in OpenShift. In your build log you will see something like the following (ellipses indicate truncated lines):

I0703 19:44:33.264627       1 sti.go:123] Performing source build from
I0703 19:44:34.010878       1 sti.go:388] ---> Running 'bundle install '
I0703 19:44:34.339680       1 sti.go:388] Fetching source index from
I0703 19:44:35.019941       1 sti.go:388] Resolving dependencies...
I0703 19:44:35.281696       1 sti.go:388] Installing rack (1.6.4) 
I0703 19:44:35.437759       1 sti.go:388] Installing rack-protection (1.5.3) 
I0703 19:44:35.617280       1 sti.go:388] Installing tilt (2.0.1) 
I0703 19:44:35.841344       1 sti.go:388] Installing sinatra (1.4.6) 
I0703 19:44:35.841381       1 sti.go:388] Using bundler (1.3.5) 
I0703 19:44:35.841390       1 sti.go:388] Your bundle is complete!
I0703 19:44:35.841395       1 sti.go:388] It was installed into ./bundle
I0703 19:44:35.862289       1 sti.go:388] ---> Cleaning up unused ruby gems

And your application should work! Well, assuming all the rest of your OpenShift environment is set up correctly…

Transferring Windows 7 to a new computer

I purchased a new motherboard and CPU in an effort to upgrade both my system processing capability and my hard disk space. My original plan was just to clone an existing 1TB drive onto part of a 2x2TB RAID array, but I ran into many issues, even with disk cloning. I went through a lot of trouble trying to find a method that worked. So, after much pain, here’s what I found:

1) The current stable Redobackup is too old to detect the RAID device that my new motherboard’s BIOS was creating. It refused to select a target device.

2) The current stable Clonezilla also has issues. It detects an md device, but then has issues determining its size and refuses to actually write data to it.

3) The GParted LiveCD seemed to work best. I used GParted to copy partitions from the original drive to the new drive, then used dd to copy the boot sector, just in case.

What I found is that Windows 7 gets *REALLY ANGRY* when you just pop an existing installation into a new motherboard/CPU. It’s basically unbootable. I found an article that suggests running Sysprep with “generalize” and “out-of-box” options as part of transferring to a new machine:


Following these instructions and running Sysprep, I then hit an issue with the Windows Media Player Network Sharing service: it needed to be stopped in order for Sysprep to work right.

(that link may not work without a login).

So, what I ended up doing thus far:

  1. Clone existing 1TB drive onto new, temporary 1TB drive.
  2. Boot old mobo system with cloned 1TB drive, run sysprep per instructions.
  3. Put sysprepped temporary 1TB drive into new mobo system, boot, let Windows do its first startup, finally install (most) drivers.

I found some issues with some of Asus’ drivers, so I had to do these steps *AGAIN* in order to get to a working system.

My next step is to clone this now-updated 1TB drive onto a 2TB BIOS-based RAID array and hope for the best. I hope someone finds this information useful!

SQLBuddy RPM for RHEL, CentOS, Fedora, etc.

SQL Buddy is a tool I’ve used a lot lately for simple MySQL administration of servers. It’s a much lighter alternative to phpMyAdmin and can be installed very quickly from a zip. But I wanted an RPM. RPM just makes things a lot easier installation-wise; I don’t have to wget/unzip/etc. every single time I want to deploy it. So I built a quickie RPM.

Here’s a link to download the SQL Buddy RPM I’ve created. The source RPM is there, also, if you feel like looking at it and making suggestions. Eventually I’ll get around to submitting it to Fedora for a real package review, and perhaps get it into EPEL. But this was the critical first step for me.

Sharing a Linux printer to Windows with Samba and Cups

I recently set up a new Fedora 14 Linux machine at home on hardware that used to run Windows as my primary desktop. I figured that I would keep the printer physically connected to this machine, even though it would no longer be the primary desktop. That meant I had to figure out how to get printing working with Linux first, and then printer sharing.

Getting printing working in Linux was fairly easy. In fact, the printer had already appeared in the list of printers without my really doing any work. I recalled from a previous attempt a while back that there are some neat tools specific to HP printing in Linux, and I found them again at the HPLIP project. A quick install of that software on Fedora and I at least had local printing up and running.

Sharing the printer via Samba and CUPS is where it got a little tricky. I ended up fighting quite a bit with the specific configuration of Samba, finding lots of conflicting tutorials whose information didn’t make sense. I tried a few things and kept getting permissions errors.

I finally realized that, at least for printing, smbd runs as the user “nobody”. I also noticed that there happened to be a Samba-specific folder in /var/spool. I put two and two together and figured that SELinux would be happiest with Samba talking to that folder. So here’s, ultimately, the setup I ended up with for smb.conf:

  workgroup = YOURWORKGROUP
  server string = Samba Server Version %v
  security = share
  printing = cups
  printcap name = cups
  browseable = yes
  printable = yes
  public = yes
  create mode = 0700
  use client driver = yes
  path = /var/spool/samba

Adding the printer from Windows proved to be a snap:

  1. Browse to the computer name (\\yourlinuxmachinename)
  2. Double click the printer to connect to it
  3. Find the driver it needs
  4. Done!

Hopefully this will help some of you if you find yourselves fumbling around trying to make this sort of thing work.

Creating a Windows 7 bootable USB device from Linux

This really should not have been as hard as it was. I tried in vain to take the Windows 7 Ultimate 64-bit ISO that I had downloaded from MSDN and put it on a USB HDD that I had lying around. I had just built a new computer and did not bother to buy an optical drive. Unfortunately, my existing Windows machine was 32-bit Windows XP, which meant that running any files from the Windows 7 CD (like the boot sector program) was not a possibility.

I tried various tools like UNetbootin, WinToFlash, MultiBootISOs and others. I also tried some tricks with xcopy that did not seem to work. Since I work for Red Hat and am a Linux person, I happened to have a Linux machine at my disposal. Here’s what I found that worked:

  • I created a bootable (IMPORTANT!) 4GB primary NTFS partition on my 40GB external USB HDD
  • I formatted this partition with NTFS
  • I mounted the Windows 7 ISO and the NTFS partition, and copied the files from the ISO to the USB HDD
  • I used ms-sys to write a Windows 7 MBR to the USB HDD

There was at least one caveat here. I saw, in a place or two, suggestions to use ms-sys against the partition itself. When running ms-sys against a partition, it complained, so I ran it against the base device (in my case, /dev/sdb).

Hopefully this will help someone out there!

    How to set the text with formtastic, select and collection

    I’ve been on a tear again working on Riding Resource. We’re trying to do something interesting and slightly social, but I can’t give it all away just yet. There are some forms involved, and I decided that I was going to try and save some time by using Justin French’s formtastic plugin. Well, it surely saved some time, but, as with anything new, there’s a learning curve.

    Since one of the big things that Riding Resource does is help stables see who is searching for them (by storing lots of demographic information), I wanted to make sure that any data these forms captured would be easily reportable. In the case of select lists, that means having models for them with integers and text associated. But when poking around with formtastic, I couldn’t figure out how to make a specific field of the model display in the dropdown for the select. Here’s an example:

    f.input :preferred_discipline, :as => :select, :collection => DemographicPreferredDiscipline.all

    melc in #rubyonrails on Freenode suggested that I try using a map. I’d seen these before, so I figured I’d give it a whirl:

    f.input :preferred_discipline, :as => :select, :collection => DemographicPreferredDiscipline.all.map { |dp| [dp.text, dp.id] }

    Here, text is the name of the field I wanted to display in the select. What do you know? It worked! I figured I would share this here for posterity and Google indexing.
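    Under the hood, that map just turns model objects into [display text, value] pairs, which is the shape formtastic’s :collection accepts. Here’s a standalone sketch, with a Struct and made-up data standing in for the DemographicPreferredDiscipline model:

```ruby
# A Struct stands in for the ActiveRecord model; the data is made up
Discipline = Struct.new(:id, :text)
all = [Discipline.new(1, "Dressage"), Discipline.new(2, "Eventing")]

# same transformation as in the formtastic call above
choices = all.map { |dp| [dp.text, dp.id] }
puts choices.inspect  # → [["Dressage", 1], ["Eventing", 2]]
```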

    Random thoughts on net neutrality and free markets

    This is basically a copy of a comment I made on Fred Wilson’s blog, but I wanted to put it here so that other people (who might possibly pay attention to me) might see it, too.  So here are some random thoughts:

    – Wireless technologies (WiFi) have evolved extremely quickly because they are largely “unregulated”. No one really owns the spectrum and every company can make a device that can access that spectrum, so they all compete to offer better performance/features/etc. in that space.

    – The only organization that can create a monopoly is a government. Even if one company were to buy up everything and become the sole provider of a service, it still is not a monopoly. Either people will substitute something else in place of that service (walking instead of taking the train, even though it takes a long time), or someone will determine that the barrier to entry, no matter how significant, will ultimately provide a competitive alternative to the existing monopoly.

    – Cable and telephone companies have “near” monopoly over internet access, but it is only because they have already eaten the tremendous costs of infrastructure over time, and happened to be able to retrofit this infrastructure for use as a data transport infrastructure.

    – Verizon seems to think that, despite the start-up cost, there is a competitive benefit to setting up a new higher-speed data transport infrastructure, as one example. Companies like Clear have decided that, despite the lack of comparable performance to other options today, there is a competitive benefit to investing in the infrastructure for their wireless data service.

    – “Net Neutrality” and spectrum auctions will likely serve to neuter the inevitable explosion of over-the-air service as an alternative to existing wired data infrastructures. Instead of making the internet and data services better, net neutrality will ultimately further reinforce the near monopoly that the cable and phone companies already have, by eliminating the competitive advantage that wireless providers could exert over cable companies by being neutral. If Comcast were allowed to really manipulate its network traffic, customers who did not like this would move to services like Clear, trading performance for a neutral experience. Forcing the net neutrality hand means that this inevitable movement will be stifled.

    Updating Air on Fedora 12 breaks it… hell ensues

    After getting messages about updating Adobe Air for a while, I finally decided to bite the bullet and do it.

    Big mistake.

    Crazy hell ensued, in that nothing from Air would work any more after that, and all I got was cryptic core dumps. I tried to uninstall Air and Tweetdeck, and failed at that for a while, too, until I figured out the following:

    1. Air and Air applications like Tweetdeck actually end up as RPMs.  You can (should) remove them using rpm -e as the root user or with sudo.  (found via Adobe’s page, sort of)
    2. I found the rpms by grepping: rpm -qa | grep ado — or — rpm -qa | grep weet
    3. You may have to remove or move your certificates folder in /etc/opt

    So, if you decide to update Adobe Air on your Fedora 12 box and suddenly everything seems borked, you might just want to uninstall everything and install from scratch.  I just did this and it worked well, and I’m up and running with the latest Tweetdeck for Linux.


    Manipulating links with HTML select and jQuery

    As with most web projects, there’s always some little new glitch that pops up. We’ve been building and massaging our own analytics back end for Riding Resource for some time now, and the change of year from 2009 to 2010 brought some new quirks that had to be dealt with.

    While there were some minor issues related to year/day calculations creating invalid dates, the bigger issue that (I think) was solved rather elegantly was choosing, via a select tag, which year’s analytics report to generate. jQuery came to the rescue, with what turned out to be a far simpler solution than I had originally envisioned.

    Select tags are not exactly the most complicated things in the world. But when you don’t have a form to go with them, it’s hard sometimes to figure out how to make them be useful. Instead of having a link for every year’s report, I figured a nice little drop-down would be an elegant way to choose. But this is where the difficulty was. I wanted a single text link to the report, but I wanted to change what that link actually… linked to… based on the selection in the box.

    Here’s some of the original HTML that was generated by the app:

      <a id="union-county-veterinary-clinic-target" href="/analytics/show2/union-county-veterinary-clinic.pdf">PDF</a>
    <select id="union-county-veterinary-clinic-select" name="date[year]">
        <option value="2009">2009</option>
        <option selected="selected" value="2010">2010</option>
    </select>

    Here is where jQuery came to the rescue, and some generous fellow with the handle of “kit10” on the #jquery channel on IRC on Freenode. kit10 suggested that I add a “rel” attribute to the select element, and give that “rel” attribute the value of the id of the PDF link. In this way, jQuery could look at the select element, and, when it changed, update the link. After a few machinations, here’s what popped out:

          // courtesy of kit10 on #jquery on freenode
          $("select[id$='-select']").change(function() {
            var target = $('#'+$(this).attr('rel')); // set the target to be the value of the rel of the selector
            var regex = /year=\d\d\d\d/;
            // swap the old year in the target link's href for the newly selected year
            target.attr('href', target.attr('href').replace(regex, 'year=' + $(this).val()));
          });

    To read this code in plain English:

      • whenever a select element whose ID ends in ‘-select’ changes,
      • create a variable called target and point it at the element whose ID matches the select’s rel attribute (the PDF link), and
      • replace the year in target’s href with the newly selected year, using the regex /year=\d\d\d\d/ (to match year=2009, year=2010, etc.).
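    The href rewrite above boils down to a single regular-expression substitution. Here’s the same idea expressed in Ruby (this blog’s usual language), using a sample href matching the markup from this post:

```ruby
# swap the old year in an analytics href for the newly selected one;
# the href below is sample data in the shape this post's markup uses
href = "/analytics/show2/union-county-veterinary-clinic.pdf?year=2009"
new_href = href.sub(/year=\d\d\d\d/, "year=2010")
puts new_href  # → "/analytics/show2/union-county-veterinary-clinic.pdf?year=2010"
```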

    That was really all there was to it. The new HTML ended up looking as follows:

      <a id="1000-acres-ranch-target" href="/analytics/show2/1000-acres-ranch.pdf?year=2009">PDF</a>
    <select id="1000-acres-ranch-select" name="date[year]" rel="1000-acres-ranch-target">
        <option value="2009">2009</option>
        <option selected="selected" value="2010">2010</option>
    </select>

    Twitter integration and parsing links with Ruby on Rails

    As per usual, it’s been a while since I’ve written anything about Ruby on Rails or RidingResource. A while back, we had an issue with Twitter integration that messed up the homepage. I got some time today to fix things, so I figured I would write a little bit about how RidingResource has integrated Twitter into our homepage and about how parsing links works in Ruby on Rails.

    When we were first building RidingResource, we decided it might be cool to have the last few tweets from our RidingResource Twitter account displayed on the home page.  It took me a minute, but, like with most things you want to do with Ruby on Rails, there’s a gem/plugin for that.  The one we chose happens to be Dancroak’s Twitter Search.  The neat thing about this gem is that it allows you to grab things off Twitter very easily, and then use them however you like.

    def home
      ## set up twitter client
      @client = TwitterSearch::Client.new 'equine'
      ## pull in last 3 tweets
      @tweets = @client.query :q => 'from:ridingresource', :rpp => 3
    end

    Well, this is all well and good, but one thing I quickly realized when we initially did this is that a link posted in a tweet is parsed fine if you look at it on Twitter’s website or through other clients, but we were basically just regurgitating the text of the tweet with no markup. When I went to reinstate the Twitter feed on the homepage today, I started looking into ways to parse URLs already embedded in strings with Ruby (on Rails) and display them with the proper hypertext markup. What I found were some neat snippets on DZone that did just that.

    After some careful Googling and search-term hackery, I stumbled upon this DZone snippet that discusses how to convert URLs into hyperlinks. This snippet written by James Robertson makes use of a gem, alexrabarts-tld, which does some checking to see if items are actually a real domain TLD. As James found, like with many things Ruby, you can pass the items that come out of a gsub regexp into a block, which enables us to replace the URL in the string with the hypertext for the URL.

    Because we were going to use this substitution on every single tweet to check if there were any URLs,  I created a nifty little helper function to do just that.

    def hyperlink_parser(string)
      return string.gsub(/((\w+\.){1,3}\w+\/\w+[^\s]+)/) {|x| is_tld?(x) ? "<a href='#{x}'>#{x}</a>" : x}
    end

    One thing I noticed was that if the text contained URLs with http:// in them, we would match the rest of the URL, hyperlinking the FQDN and other parts of the link but ignoring the http://. It looked really funny to have a link like http://www.erikjacobs.com where only part of it was clickable. I realized that this was just a “problem” with the original regexp that James had created, so I started to do some sleuthing.

    The first thing I did was try to find a RegExp tester. While there are many out there, the one I ended up using was Rubular (conveniently uses Ruby – look at that!), which displays the results of our RegExp search against the text in real time. Some careful googling of selected RegExp and URL terms resulted in yet another snippet from DZone, by Rafael Trindade. Getting closer!

    Lastly, since this function might just possibly be used elsewhere, and since I wanted to apply a style at least to the links generated by its use here, I decided to add another argument to the helper method for the link class. The result is the following helper:

    def hyperlink_parser(string, link_class="")
      return string.gsub(/(ftp|http|https):\/\/(\w+:{0,1}\w*@)?(\S+)(:[0-9]+)?(\/|\/([\w#!:.?+=&%@!\-\/]))?/) {|x| is_tld?(x) ? "<a href='#{x}' class='#{link_class}'>#{x}</a>" : x}
    end
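    To see the helper in action end to end, here’s a self-contained sketch. Note that is_tld? normally comes from the alexrabarts-tld gem; it’s stubbed out below so the example runs on its own:

```ruby
# stub for the alexrabarts-tld gem's check; the real one verifies the TLD
def is_tld?(_url)
  true
end

def hyperlink_parser(string, link_class="")
  string.gsub(/(ftp|http|https):\/\/(\w+:{0,1}\w*@)?(\S+)(:[0-9]+)?(\/|\/([\w#!:.?+=&%@!\-\/]))?/) do |x|
    is_tld?(x) ? "<a href='#{x}' class='#{link_class}'>#{x}</a>" : x
  end
end

puts hyperlink_parser("see http://www.erikjacobs.com for more", "tweet")
# → see <a href='http://www.erikjacobs.com' class='tweet'>http://www.erikjacobs.com</a> for more
```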

    The view essentially just iterates over the tweets that came back from the search, rendering a partial:

    <% for tweet in @tweets -%>
      <%= render :partial => "tweet", :object => tweet %>
    <% end -%>

    And inside the partial we do a few things with the tweet, including linking to the original tweet on Twitter’s site, showing the date, and parsing the text for the URL and returning it:

    <li><p><a href="http://www.twitter.com/RidingResource/status/<%= tweet.id %>" target="_blank"><%= tweet.created_at[5,11] %></a> <%= hyperlink_parser(tweet.text, "tweet") %></p></li>

    Hopefully some of you will find this useful in your quest to either integrate Twitter into your Ruby on Rails projects, or to parse things that live inside strings into markup. There is almost surely something that already does this, so I probably re-invented the wheel, but it didn’t take long and it seems to work.