Transferring Windows 7 to a new computer

January 6, 2015 – 2:22 pm

I purchased a new motherboard and CPU in an effort to upgrade both my processing power and my hard disk space. My original plan was just to clone an existing 1TB drive onto part of a 2x2TB RAID array, but I ran into many issues, even with disk cloning, and went through a lot of trouble trying to find a method that worked. So after much pain, here’s what I found:

1) The current stable Redo Backup is too old to detect the RAID device that my new motherboard’s BIOS was presenting. It refused to select a target device.

2) The current stable Clonezilla also has issues. It detects an md device, but then fails to determine its size and refuses to actually write data to it.

3) The GParted live CD seemed to work best. I used GParted to copy partitions from the original drive to the new drive. I then used dd to copy the boot sector, just in case.
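
The dd step can be sketched as follows. Device names on real hardware would be things like /dev/sda and /dev/sdb (check yours with lsblk before writing anything); this sketch uses image files so it is safe to run as-is:

```shell
# Copy only the 446-byte MBR boot code, leaving the partition table
# (bytes 446-511) untouched. On real hardware the two .img files
# would be device paths like /dev/sda (source) and /dev/sdb (target).
src=old-disk.img
dst=new-disk.img
dd if=/dev/urandom of="$src" bs=512 count=4 2>/dev/null   # stand-in "old disk"
dd if=/dev/zero    of="$dst" bs=512 count=4 2>/dev/null   # stand-in "new disk"
dd if="$src" of="$dst" bs=446 count=1 conv=notrunc 2>/dev/null
cmp -n 446 "$src" "$dst" && echo "boot code copied"
```

conv=notrunc matters: without it, dd would truncate the target after the first 446 bytes instead of leaving the rest of the disk image alone.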

What I found is that Windows 7 gets *REALLY ANGRY* when you just pop an existing installation into a new mobo/CPU. It basically refuses to start. I found an article that suggests running Sysprep with the “generalize” and “out-of-box experience” options as part of transferring to a new machine:

Following these instructions and running Sysprep, I then hit an issue with the Windows Media Player Network Sharing service: it needed to be stopped in order for Sysprep to work right (that link may not work without a login).

So, what I ended up doing thus far:

  1. Clone existing 1TB drive onto new, temporary 1TB drive.
  2. Boot old mobo system with cloned 1TB drive, run sysprep per instructions.
  3. Put sysprepped temporary 1TB drive into new mobo system, boot, let Windows do its first startup, finally install (most) drivers.
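
For reference, the Sysprep run in step 2 boils down to a single command from an elevated prompt (option names as I understand them for Windows 7):

```
C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown
```

/generalize strips the hardware-specific state, /oobe forces the out-of-box first-boot experience, and /shutdown powers the machine off so the drive can be moved to the new hardware.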

I found some issues with some of Asus' drivers, so I had to do these steps *AGAIN* in order to get to a working system.

My next step is to clone this now-updated 1TB drive onto a 2TB BIOS-based RAID array and hope for the best. I hope someone finds this information useful!

SQLBuddy RPM for RHEL, CentOS, Fedora, etc.

November 2, 2011 – 2:25 pm

SQL Buddy has been a tool I’ve used a lot lately for simple MySQL administration of servers. It’s a much lighter alternative to phpMyAdmin and can be installed very quickly via a zip. But I wanted an RPM. RPM just makes things a lot easier installation-wise. I don’t have to wget/unzip/etc every single time I want to deploy it. So I built a quickie RPM.

Here’s a link to download the SQL Buddy RPM I’ve created. The source RPM is there, also, if you feel like looking at it and making suggestions. Eventually I’ll get around to submitting it to Fedora for a real package review, and perhaps get it into EPEL. But this was the critical first step for me.

Sharing a Linux printer to Windows with Samba and Cups

March 18, 2011 – 3:22 pm

So I’ve recently been setting up a new Fedora 14 Linux machine at home, on a box that used to run Windows as my primary desktop. I figured I would keep the printer physically connected to this machine, even though it would no longer be the primary desktop. That meant I had to figure out how to get printing working with Linux first, and then printer sharing.

Getting printing working in Linux was fairly easy. In fact, the printer had already appeared in the list of printers without my really doing any work. I recalled from a previous attempt a while back that there are some neat HP-specific printing tools for Linux, and I found them again at the HPLIP project. A quick install of that software on Fedora and I at least had local printing up and running.

Sharing the printer via Samba and CUPS is where it got a little tricky. I ended up fighting quite a bit with the specific configuration of Samba, finding lots of conflicting tutorials whose differing advice didn’t make sense. I tried a few things and kept getting permissions errors.

I finally realized that, at least for printing, smbd runs as the user “nobody”. I also noticed that there happened to be a Samba-specific folder in /var/spool. I put two and two together and figured that SELinux would be happiest with Samba talking to that folder. So here’s the setup I ultimately ended up with in smb.conf:

  [global]
    workgroup = YOURWORKGROUP
    server string = Samba Server Version %v
    security = share
    printing = cups
    printcap name = cups

  [printers]
    path = /var/spool/samba
    browseable = yes
    printable = yes
    public = yes
    create mode = 0700
    use client driver = yes
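
Two things worth making explicit alongside that config: the /var/spool/samba path has to exist and be world-writable with the sticky bit (like /tmp) so the “nobody” user can spool jobs, and the services need a restart afterward. A sketch, with SysV-style service names assumed for this Fedora vintage:

```
mkdir -p /var/spool/samba
chmod 1777 /var/spool/samba
service smb restart
service cups restart
```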

Adding the printer from Windows proved to be a snap:

1. Browse to the computer name (\\yourlinuxmachinename)
  2. Double click the printer to connect to it
  3. Find the driver it needs
  4. Done!

Hopefully this will help some of you if you find yourselves fumbling around trying to make this sort of thing work.

Creating a Windows 7 bootable USB device from Linux

February 18, 2011 – 11:00 pm

This really should not have been as hard as it was. I tried in vain to take the Windows 7 Ultimate 64-bit ISO that I had downloaded from MSDN and put it on a USB HDD that I had lying around. I had just built a new computer and did not bother to buy an optical drive. Unfortunately, my existing Windows machine was running 32-bit Windows XP, which meant running any of the programs from the Windows 7 disc (like the boot sector utility) was not a possibility.

I tried various tools like UNetbootin, WinToFlash, MultiBootISOs and others. I also tried some tricks with xcopy that did not seem to work. Since I work for Red Hat and am a Linux person, I happened to have a Linux machine at my disposal. Here’s what I found that worked:

  • I created a bootable (IMPORTANT!) 4GB primary NTFS partition on my 40GB external USB HDD
  • I formatted this partition with NTFS
  • I mounted the Windows 7 ISO and the NTFS partition, and copied the files from the ISO to the USB HDD
  • I used ms-sys to write a Windows 7 MBR to the USB HDD
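
The bullets above translate roughly to the commands below. The device name /dev/sdb, the mount points, and the ISO filename are assumptions for illustration; double-check the device with lsblk (or fdisk -l) before writing anything:

```
# after creating one bootable primary NTFS-type partition on /dev/sdb with fdisk:
mkfs.ntfs -f -L WIN7 /dev/sdb1
mkdir -p /mnt/iso /mnt/usb
mount -o loop windows7-ultimate-x64.iso /mnt/iso
mount /dev/sdb1 /mnt/usb
cp -r /mnt/iso/* /mnt/usb/
umount /mnt/iso /mnt/usb
ms-sys -7 /dev/sdb    # write a Windows 7 MBR to the whole device
```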

There was at least one caveat here. I saw, in a place or two, suggestions to run ms-sys against the partition itself. When run against a partition, it complained, so I ran it against the base device (in my case, /dev/sdb).

Hopefully this will help someone out there!

    How to set the text with formtastic, select and collection

    November 7, 2010 – 10:09 pm

    I’ve been on a tear again working on Riding Resource. We’re trying to do something interesting and slightly social, but I can’t give it all away just yet. There are some forms involved, and I decided that I was going to try and save some time by using Justin French’s formtastic plugin. Well, it surely saved some time, but, as with anything new, there’s a learning curve.

    Since one of the big things that Riding Resource does is help stables see who is searching for them (by storing lots of demographic information), I wanted to make sure that any data these forms captured would be easily reportable. In the case of select lists, that means having models for them with integers and text associated. But when poking around with formtastic, I couldn’t figure out how to make a specific field of the model display in the dropdown for the select. Here’s an example:

    f.input :preferred_discipline, :as => :select, :collection => DemographicPreferredDiscipline.all

    melc in #rubyonrails on Freenode suggested that I try using a map. I’d seen these before, so I figured I’d give it a whirl:

    f.input :preferred_discipline, :as => :select, :collection => { |dp| [dp.text, dp.id] }

    text is the name of the field I wanted to display in the select. What do you know? It worked! I figured I would share this here for posterity and Google indexing.

    Random thoughts on net neutrality and free markets

    May 5, 2010 – 11:17 pm

    This is basically a copy of a comment I made on Fred Wilson’s blog, but I wanted to put it here so that other people (who might possibly pay attention to me) might see it, too.  So here are some random thoughts:

    – Wireless technologies (WiFi) have evolved extremely quickly because they are largely “unregulated”. No one really owns the spectrum and every company can make a device that can access that spectrum, so they all compete to offer better performance/features/etc. in that space.

    – The only organization that can create a monopoly is a government. Even if one company were to buy up everything and become the sole provider of a service, it still is not a monopoly. Either people will substitute something else in place of that service (walking instead of taking the train, even though it takes a long time), or someone will determine that the barrier to entry, no matter how significant, will ultimately provide a competitive alternative to the existing monopoly.

    – Cable and telephone companies have “near” monopoly over internet access, but it is only because they have already eaten the tremendous costs of infrastructure over time, and happened to be able to retrofit this infrastructure for use as a data transport infrastructure.

    – Verizon seems to think that, despite the start-up cost, there is a competitive benefit to setting up a new higher-speed data transport infrastructure, as one example. Companies like Clear have decided that, despite the lack of comparable performance to other options today, there is a competitive benefit to investing in the infrastructure for their wireless data service.

    – “Net Neutrality” and spectrum auctions will likely serve to neuter the inevitable explosion in over-the-air data service as an alternative to existing wired data service infrastructures. Instead of net neutrality making the internet and data services better, it will ultimately reinforce the near monopoly that the cable and phone companies already have, by eliminating the competitive benefit that wireless providers could exert over the cable companies by being net neutral. If Comcast were allowed to really, truly manipulate its network traffic, customers who did not like this would move to services like Clear, trading performance for a neutral experience. Forcing the net neutrality hand means that this inevitable movement is going to be stifled.

    Updating Air on Fedora 12 breaks it… hell ensues

    January 12, 2010 – 10:43 am

    After getting messages about updating Adobe Air for a while, I finally decided to bite the bullet and do it.

    Big mistake.

    Crazy hell ensued, in that nothing from Air would work any more after that, and all I got was cryptic core dumps. I tried to uninstall Air and Tweetdeck, and failed at that for a while, too, until I figured out the following:

    1. Air and Air applications like Tweetdeck actually end up as RPMs. You can (and should) remove them using rpm -e as the root user or with sudo. (found via Adobe’s page, sort of)
    2. I found the rpms by grepping: rpm -qa | grep ado — or — rpm -qa | grep weet
    3. You may have to remove or move your certificates folder in /etc/opt

    So, if you decide to update Adobe Air on your Fedora 12 box and suddenly everything seems borked, you might just want to uninstall everything and install from scratch.  I just did this and it worked well, and I’m up and running with the latest Tweetdeck for Linux.

    Manipulating links with HTML select and jQuery

    January 4, 2010 – 8:27 pm

    As with most web projects, there’s always some little new glitch that pops up. We’ve been building and massaging our own analytics back end for Riding Resource for some time now, and the change of year from 2009 to 2010 brought some new quirks that had to be dealt with.

    While there were some minor issues related to year/day calculations creating invalid dates, the bigger issue that (I think) was solved rather elegantly was choosing, via a select tag, which year’s analytics report to generate. jQuery came to the rescue, with what turned out to be a far simpler solution than I had originally envisioned.

    Select tags are not exactly the most complicated things in the world. But when you don’t have a form to go with them, it’s sometimes hard to figure out how to make them useful. Instead of having a link for every year’s report, I figured a nice little drop-down would be an elegant way to choose. But this is where the difficulty was: I wanted a single text link to the report, but I wanted to change what that link actually… linked to… based on the selection in the box.

    Here’s some of the original HTML that was generated by the app:

      <a id="union-county-veterinary-clinic-target" href="/analytics/show2/union-county-veterinary-clinic.pdf">PDF</a>
    <select id="union-county-veterinary-clinic-select" name="date[year]">
        <option value="2009">2009</option>
        <option selected="selected" value="2010">2010</option>
    </select>
    Here is where jQuery came to the rescue, along with a generous fellow with the handle “kit10” on the #jquery IRC channel on Freenode. kit10 suggested that I add a “rel” attribute to the select element, and give that “rel” attribute the value of the id of the PDF link. In this way, jQuery could look at the select element and, when it changed, update the link. After a few machinations, here’s what popped out:

          // courtesy of kit10 on #jquery on freenode
          $("select[id$='-select']").change(function() {
            var target = $('#'+$(this).attr('rel')); // the link whose id is stored in this select's rel attribute
            var regex = /year=\d\d\d\d/;
            // swap the year in the link's href for the newly selected year
            target.attr('href', target.attr('href').replace(regex, 'year=' + $(this).val()));
          });

    To read this code in plain English:
    – Whenever a select element whose ID ends in ‘-select’ changes…
    – Create a variable called target, assigned from the value of the rel attribute of the select element
    – Replace the year in the target’s href attribute using the regex /year=\d\d\d\d/ (to match year=2009, year=2010, etc.)

    That was really all there was to it. The new HTML ended up looking as follows:

      <a id="1000-acres-ranch-target" href="/analytics/show2/1000-acres-ranch.pdf?year=2009">PDF</a>
    <select id="1000-acres-ranch-select" name="date[year]">
        <option value="2009">2009</option>
        <option selected="selected" value="2010">2010</option>
    </select>

    Twitter integration and parsing links with Ruby on Rails

    October 29, 2009 – 6:28 pm

    As per usual, it’s been a while since I’ve written anything about Ruby on Rails or RidingResource. A while back, we had an issue with Twitter integration that messed up the homepage. I got some time today to fix things, so I figured I would write a little bit about how RidingResource has integrated Twitter into our homepage and about how parsing links works in Ruby on Rails.

    When we were first building RidingResource, we decided it might be cool to have the last few tweets from our RidingResource Twitter account displayed on the home page.  It took me a minute, but, like with most things you want to do with Ruby on Rails, there’s a gem/plugin for that.  The one we chose happens to be Dancroak’s Twitter Search.  The neat thing about this gem is that it allows you to grab things off Twitter very easily, and then use them however you like.

    def home
      ## set up the twitter search client
      @client = 'equine'
      ## pull in the last 3 tweets from our account
      @tweets = @client.query :q => 'from:ridingresource', :rpp => 3
    end

    Well, this is all well and good, but one thing I quickly realized when we initially did this is that when we posted a link in a tweet, it was parsed fine if you looked at it on Twitter’s website or through other clients, but we were basically just regurgitating the text of the tweet with no markup. When I went to reinstate the Twitter feed on the homepage today, I started looking into ways to parse URLs already sitting in strings with Ruby (on Rails) and display them with the proper hypertext markup. What I found were some neat snippets on DZone that did just that.

    After some careful Googling and search-term hackery, I stumbled upon this DZone snippet that discusses how to convert URLs into hyperlinks. This snippet written by James Robertson makes use of a gem, alexrabarts-tld, which does some checking to see if items are actually a real domain TLD. As James found, like with many things Ruby, you can pass the items that come out of a gsub regexp into a block, which enables us to replace the URL in the string with the hypertext for the URL.

    Because we were going to use this substitution on every single tweet to check if there were any URLs,  I created a nifty little helper function to do just that.

    def hyperlink_parser(string)
      return string.gsub(/((\w+\.){1,3}\w+\/\w+[^\s]+)/) {|x| is_tld?(x) ? "<a href='#{x}'>#{x}</a>" : x}
    end

    One thing I noticed was that if URLs in the text had http:// in them, we would match the rest of the URL, hyperlinking the FQDN and the other parts of the link but ignoring the http:// itself, which was left dangling outside the hyperlink. It looked really funny. I realized that this was just a “problem” with the original RegExp that James had created, so I started to do some sleuthing.

    The first thing I did was try to find a RegExp tester. While there are many out there, the one I ended up using was Rubular (which conveniently uses Ruby – look at that!), and it displays the results of your RegExp against sample text in real time. Some careful Googling of selected RegExp and URL terms turned up yet another snippet from DZone, by Rafael Trindade. Getting closer!

    Lastly, since this function might just possibly be used elsewhere, and since I wanted to apply a style at least to the links generated by its use here, I decided to add another argument to the helper method for the link class. The result is the following helper:

    def hyperlink_parser(string, link_class="")
      return string.gsub(/(ftp|http|https):\/\/(\w+:{0,1}\w*@)?(\S+)(:[0-9]+)?(\/|\/([\w#!:.?+=&%@!\-\/]))?/) {|x| is_tld?(x) ? "<a href='#{x}' class='#{link_class}'>#{x}</a>" : x}
    end

    The view essentially just iterates over the tweets that came back from the search, rendering a partial:

    <% for tweet in @tweets -%>
      <%= render :partial => "tweet", :object => tweet %>
    <% end -%>

    And inside the partial we do a few things with the tweet, including linking to the original tweet on Twitter’s site, showing the date, and parsing the text for the URL and returning it:

    <li><p><a href="<%= %>" target="_blank"><%= tweet.created_at[5,11] %></a> <%= hyperlink_parser(tweet.text, "tweet") %></p></li>

    Hopefully some of you will find this useful in your quest either to integrate Twitter into your Ruby on Rails projects or to parse things that live inside strings into markup. There is almost surely something out there that already does this, so I probably re-invented the wheel, but it didn’t take long and it seems to work.

    Git socket timeout issues with CentOS

    September 13, 2009 – 3:50 pm

    So Riding Resource was developed in Ruby on Rails, as many of you may know. At some point this year I made the switch from a local Subversion repository to git with Github, which has been pretty good. The one pain I was having, which I thought was a CentOS 4 / libcurl issue, actually turned out to be the APF firewall that Wiredtree uses on all of their VPSes.

    APF is a pretty neat firewall that does a lot of useful things, and it’s installed by default on the VPS we use from Wiredtree. When I would pull or clone our private repository for Riding Resource, I had no issues with git. However, when trying to clone public repositories, I would always be greeted with something like:

    fatal: unable to connect a socket (Connection timed out)

    It took me a little while to figure out what was going on, but I tracked it down to a firewall issue, and not any kind of issue with git or Github. I discovered that cloning from Github over git’s native protocol uses port 9418. This link discusses using tunnels and mentions the port.

    After some inspection, I realized that inbound and outbound traffic was blocked by APF on port 9418. A quickie modification to the EG_TCP_CPORTS and IG_TCP_CPORTS values by adding 9418 and restarting the APF service managed to do the trick.
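
    Concretely, that meant appending 9418 to the two port lists in APF’s config (the path and variable names are from my install; the ellipses stand for whatever ports are already listed) and restarting the service:

```
# /etc/apf/conf.apf
IG_TCP_CPORTS="...,9418"    # inbound
EG_TCP_CPORTS="...,9418"    # outbound

service apf restart
```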

    This is definitely not limited to CentOS or to systems running APF. Any Linux system could be subject to these timeout issues against Github if its firewall is configured to block port 9418. So if you are seeing socket connection issues, or fatal errors with fetch-pack, you might just want to check your firewall.