
Manipulating links with HTML select and jQuery

As with most web projects, there’s always some little new glitch that pops up. We’ve been building and massaging our own analytics back end for Riding Resource for some time now, and the change of year from 2009 to 2010 brought some new quirks that had to be dealt with.

While there were some minor issues related to year/day calculations creating invalid dates, the bigger issue that (I think) was solved rather elegantly was choosing, via a select tag, which year’s analytics report to generate. jQuery came to the rescue with what turned out to be a far simpler solution than I had originally envisioned.

Select tags are not exactly the most complicated things in the world. But when you don’t have a form to go with them, it’s sometimes hard to figure out how to make them useful. Instead of having a link for every year’s report, I figured a nice little drop-down would be an elegant way to choose. But this is where the difficulty was: I wanted a single text link to the report, but I wanted to change what that link actually… linked to… based on the selection in the box.

Here’s some of the original HTML that was generated by the app:

  <a id="union-county-veterinary-clinic-target" href="/analytics/show2/union-county-veterinary-clinic.pdf">PDF</a>
  <select id="union-county-veterinary-clinic-select" name="date[year]">
    <option value="2009">2009</option>
    <option selected="selected" value="2010">2010</option>
  </select>

Here is where jQuery came to the rescue, along with a generous fellow with the handle “kit10” on the #jquery channel on Freenode IRC. kit10 suggested that I add a “rel” attribute to the select element and give that “rel” attribute the value of the id of the PDF link. That way, jQuery could watch the select element and, when it changed, update the link. After a few iterations, here’s what popped out:

  $(document).ready(function() {
    // courtesy of kit10 on #jquery on freenode
    $("select[id$='-select']").change(function() {
      // the select's rel attribute holds the id of the link we want to update
      var target = $('#' + $(this).attr('rel'));
      var regex = /year=\d\d\d\d/;
      // swap the year in the link's href for the newly selected year
      target.attr('href', target.attr('href').replace(regex, "year=" + $(this).val()));
    });
  });

To read this code in plain English:

  1. Whenever a select element whose ID ends in ‘-select’ changes,
  2. create a variable called target, pointing at the element whose ID is stored in the select element’s rel attribute, then
  3. replace the year in target’s href attribute with the newly selected year, using the regex /year=\d\d\d\d/ (to match year=2009, year=2010, etc.).

That was really all there was to it. The new HTML ended up looking like this:

  <a id="1000-acres-ranch-target" href="/analytics/show2/1000-acres-ranch.pdf?year=2009">PDF</a>
  <select id="1000-acres-ranch-select" name="date[year]" rel="1000-acres-ranch-target">
    <option value="2009">2009</option>
    <option selected="selected" value="2010">2010</option>
  </select>

Twitter integration and parsing links with Ruby on Rails

As per usual, it’s been a while since I’ve written anything about Ruby on Rails or RidingResource. A while back, we had an issue with Twitter integration that messed up the homepage. I got some time today to fix things, so I figured I would write a little bit about how RidingResource has integrated Twitter into our homepage and about how parsing links works in Ruby on Rails.

When we were first building RidingResource, we decided it might be cool to have the last few tweets from our RidingResource Twitter account displayed on the home page.  It took me a minute, but, like with most things you want to do with Ruby on Rails, there’s a gem/plugin for that.  The one we chose happens to be Dancroak’s Twitter Search.  The neat thing about this gem is that it allows you to grab things off Twitter very easily, and then use them however you like.

def home
  ## set up twitter client
  @client = TwitterSearch::Client.new 'equine'
 
  ## pull in last 3 tweets
  @tweets = @client.query :q => 'from:ridingresource', :rpp => 3
end

Well, this is all well and good, but one thing I quickly realized when we initially did this is that when we posted a link in a tweet, it was parsed fine if you looked at it on Twitter’s website or through other clients, but on our homepage we were basically just regurgitating the text of the tweet with no markup. When I went to reinstate the Twitter feed on the homepage today, I started looking into ways to parse URLs with Ruby (on Rails) that were already inside strings and to display them with the proper hypertext markup. What I found were some neat snippets on DZone that did just that.

After some careful Googling and search-term hackery, I stumbled upon this DZone snippet that discusses how to convert URLs into hyperlinks. The snippet, written by James Robertson, makes use of a gem, alexrabarts-tld, which checks whether a candidate string actually ends in a real top-level domain. As James found, like with many things Ruby, you can pass the matches that come out of a gsub regexp into a block, which lets us replace the URL in the string with the hypertext for the URL.

Because we were going to use this substitution on every single tweet to check if there were any URLs,  I created a nifty little helper function to do just that.

def hyperlink_parser(string)
  return string.gsub(/((\w+\.){1,3}\w+\/\w+[^\s]+)/) {|x| is_tld?(x) ? "<a href='#{x}'>#{x}</a>" : x}
end

One thing I noticed was that if URLs in the text had http:// in them, we would match the rest of the URL, hyperlink the FQDN and other parts of the link, but ignore the http://. It looked really funny to have a bare http:// sitting as plain text in front of a link like www.erikjacobs.com. I realized that this was just a “problem” with the original RegExp that James had created, so I started to do some sleuthing.

The first thing I did was try to find a RegExp tester. While there are many out there, the one I ended up using was Rubular (conveniently, it uses Ruby – look at that!), which displays the results of our RegExp search against the text in real time. Some careful Googling of selected RegExp and URL terms resulted in yet another snippet from DZone, by Rafael Trindade. Getting closer!

Lastly, since this function might just possibly be used elsewhere, and since I wanted to apply a style at least to the links generated by its use here, I decided to add another argument to the helper method for the link class. The result is the following helper:

def hyperlink_parser(string, link_class="")
  return string.gsub(/(ftp|http|https):\/\/(\w+:{0,1}\w*@)?(\S+)(:[0-9]+)?(\/|\/([\w#!:.?+=&%@!\-\/]))?/) {|x| is_tld?(x) ? "<a href='#{x}' class='#{link_class}'>#{x}</a>" : x}
end
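To sketch how this behaves outside of Rails, here is a standalone version with a stand-in `is_tld?` that always says yes (the real check comes from the alexrabarts-tld gem), using a made-up example string:

```ruby
# Stand-in for the gem-backed TLD check; the real one inspects the domain.
def is_tld?(url)
  true
end

def hyperlink_parser(string, link_class = "")
  string.gsub(/(ftp|http|https):\/\/(\w+:{0,1}\w*@)?(\S+)(:[0-9]+)?(\/|\/([\w#!:.?+=&%@!\-\/]))?/) do |x|
    is_tld?(x) ? "<a href='#{x}' class='#{link_class}'>#{x}</a>" : x
  end
end

puts hyperlink_parser("Check out http://www.erikjacobs.com today", "tweet")
# => Check out <a href='http://www.erikjacobs.com' class='tweet'>http://www.erikjacobs.com</a> today
```

Non-URL text passes through untouched, and anything `is_tld?` rejects is left alone as well.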

The view essentially just iterates over the tweets that came back from the search, rendering a partial:

<% for tweet in @tweets -%>
  <%= render :partial => "tweet", :object => tweet %>
<% end -%>

And inside the partial we do a few things with the tweet, including linking to the original tweet on Twitter’s site, showing the date, and parsing the text for the URL and returning it:

<li><p><a href="http://www.twitter.com/RidingResource/status/<%= tweet.id %>" target="_blank"><%= tweet.created_at[5,11] %></a> <%= hyperlink_parser(tweet.text, "tweet") %></p></li>

Hopefully some of you will find this useful in your quest to either integrate Twitter into your Ruby on Rails projects, or perhaps to parse some things that live inside strings into markup. There is almost surely something that already does this, so I probably re-invented the wheel, but it didn’t take long and it seems to work.


Git socket timeout issues with CentOS

So Riding Resource was developed in Ruby on Rails, as many of you may know. At some point this year I made the switch from using a local Subversion source control system to using git with Github, which has been pretty good. The one pain I was having, which I thought was a CentOS 4 / libcurl issue, actually turned out to be the APF firewall that Wiredtree uses on all of their VPSes.

APF is a pretty neat firewall that does a lot of useful things, and it’s installed by default on the VPSes we use from Wiredtree. When I would pull or clone our private repository for Riding Resource, I had no issues with git. However, when trying to clone public repositories, I would always be greeted with something like:

fatal: unable to connect a socket (Connection timed out)

It took me a little while to figure out what was going on here, but I tracked it down to a firewall issue, not any kind of issue with git or Github. After some digging, I discovered that cloning from Github over the git protocol uses TCP port 9418. This link discusses using tunnels and mentions the port.

After some inspection, I realized that inbound and outbound traffic was blocked by APF on port 9418. A quickie modification to the EG_TCP_CPORTS and IG_TCP_CPORTS values by adding 9418 and restarting the APF service managed to do the trick.
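For reference, the change amounts to appending 9418 to the two port lists in APF’s config and restarting the service. The path and the existing port lists below are illustrative; keep whatever ports your config already allows:

```shell
# /etc/apf/conf.apf (illustrative values -- keep your existing ports)
IG_TCP_CPORTS="22,80,443,9418"    # inbound TCP ports
EG_TCP_CPORTS="21,25,80,443,9418" # outbound TCP ports

# then reload the rules
service apf restart
```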

This is definitely not limited to CentOS or to systems running APF. Any Linux system could be subject to these timeout issues against Github if your firewall is configured to block port 9418. So if you are seeing socket connection issues, or fatal errors with fetch-pack, you might just want to check your firewall.


Custom (dynamic) error pages with Ruby on Rails

Ruby on Rails is pretty neat stuff. Whenever I try to find out how to do something, it seems that I’m not the first to look. And, fortunately, many have usually solved that problem before. One thing that bugged me with Riding Resource was error pages. Sure, Rails allows you to create static 404 and 500 and other pages for those situations when things go awry. But the fact that those pages were static caused me some heartburn.

For one, if the layout of the website changed, it meant I needed to update the error pages. This is certainly not DRY. And, using static pages, I could not use any Ruby code in my error page, or do anything dynamically at all.

After some quick searching, I came across this post by Rob Hurring, which led me to this post on has_many :bugs, :through => :rails regarding how to create customized error pages with Ruby on Rails. Granted, neither of these solutions was exactly what I was looking for, so a little customization was required. However, the basic requirements were met:

  1. We can rescue_from many of the standard ActionController errors.
  2. Using :with, we can specify a method to invoke to process the rescue activity.
  3. The method called to rescue us can render a page with a layout.

This took care of everything we needed. For almost all of the error types that would arise, we could redirect to a custom 404 template that allowed for Ruby to be embedded that would use any number of existing layouts, which keeps things tight and DRY.

One caveat that we had particularly related to Riding Resource was that meta tags are currently generated on the fly by a view helper, but we are actually looking at the current params to determine what controller/action we are in, and, therefore, what meta tags to spit out. Unfortunately, I could not find a way to detect, within the template, what status the template had been rendered with.

I ended up creating an instance variable before rendering the error template:

unless ActionController::Base.consider_all_requests_local
  # yeah, it's a long line
  rescue_from ActiveRecord::RecordNotFound, ActionController::RoutingError, ActionController::UnknownController, ActionController::UnknownAction, :with => :render_404
  rescue_from RuntimeError, :with => :render_500
end
 
protected
 
def render_404
  @status = "404"
  render :template => "shared/404", :status => :not_found
end
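The matching 500 handler is the same idea. This is just a sketch; it assumes a shared/500 template exists alongside shared/404:

```ruby
# Sketch: mirrors render_404, but with a 500 status and a shared/500 template.
def render_500
  @status = "500"
  render :template => "shared/500", :status => :internal_server_error
end
```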

By creating this instance variable, I could then check for the value of @status in my meta tag generator and properly handle things.

One caveat to this solution, in general, is that errors that occur while rendering the custom page will result in no errors being shown at all, except in the log. Note Rob’s trick to be able to generate the custom error pages even while in development mode.

In the long term I will probably clean up the meta tag generator to create instance variables and then use partials to get the information into the view, but that’s for another day. We also recently added Thoughtbot’s Paperclip plugin to be able to easily attach pictures of facilities, but that post will be made once we get watermarking working.


Setting non-native resolutions in F11

I know, I know — I haven’t blogged about RidingResource in a while, but we’ve been focusing on other non-blogworthy stuff like starting to promote and fixing little goofy bugs here and there. I have, however, been poking about with Fedora 11 (Leonidas) and have been finding little tricks and things here and there to make life easier. One thing I found was that the new F11 has the nifty KMS stuff that gives you the slick graphical boot up and seamless login into Gnome (X). However, one thing I noticed that was missing was the ability to set non-native resolutions in the display settings.

For some, this ability is important. For example, I frequently conduct presentations online and not everyone that I present to has a widescreen monitor. Trying to share a desktop/application at 1680×1050 when the viewer only can see 1024×768 makes things difficult for the viewer.  They end up having to scroll and do all kinds of other goofy stuff that annoys them.

For the life of me, I couldn’t figure out where these non-native resolutions “went.” I remember being able to set them in F10 without doing anything fancy, but in F11 these “extra” modes were curiously absent. After some prodding around, a kind fellow in #fedora on freenode suggested trying to disable KMS when booting in grub by adding the “nomodeset” option. This actually did the trick. While I lose the cute bootup sequence, I can always create another grub boot option that still has the KMS enabled. I can boot normally, or boot with “nomodeset” when I know I’ve got to do a presentation.

Hopefully this information helps!


Fedora 11 (Leonidas) and Adobe AIR

As is to be expected with installing or upgrading any operating system, there might be a few speed bumps along the road. I recently updated one of my laptops to run the latest Fedora 11, Leonidas, and have been spending time re-installing software that I want to use. One thing that I ended up using quite a bit was Adobe AIR with Tweetdeck, a Twitter client. Adobe is kind enough to provide Adobe AIR for Linux.

Installing Adobe AIR should be relatively trivial, but I ran into some roadblocks that you might be experiencing, and had some recollection of my experience with F10, so I thought I’d post them here.

  1. Run the installer as the root user or with sudo.
  2. I found in several sources that creating a ~/.airinstall.log file will cause the (inevitable) error messages to be written out in (somewhat) greater detail.
  3. If you get such errors and you see something about rpmbuild, you may need to install the rpm-build package.
  4. If you get more errors, you might find something that whines about librpmbuild.so and librpmbuild-4.7.so. I noticed that there was already a librpmbuild.so.0.0.0 in /usr/lib, so I took a gamble, created a symlink named librpmbuild.so, and attempted to reinstall.

Doing these 4 things managed to get Adobe AIR to install in Fedora 11, so hopefully it will work for you, too.
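For the record, the symlink gamble in step 4 was just the following (the .0.0.0 version is what happened to be on my system and may differ on yours):

```shell
# Point the name AIR's installer looks for at the library that's actually present.
ln -s /usr/lib/librpmbuild.so.0.0.0 /usr/lib/librpmbuild.so
```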


Internal Analytics with Open Flash Chart

We wanted to be able to get some analytics for the various facilities on RidingResource, and that required some thinking. While Google Analytics is certainly great, and we use it heavily, there are some things that it can’t capture that are valuable data to both us and our customers.

Since RidingResource is essentially a search engine, we realized that there was value in knowing how often a facility’s listing “came up,” either by being directly viewed on its detail page, or by being seen in the search results page. Since we also built an API for a partner, which we’ll announce publicly once it goes live, we thought it would be valuable to track API “hits” as well.

Creating the table to store the analytics data was relatively simple. We just created an Analytic model and connected it to the Contact model – Contact is the model that stores the basic information about facilities listed on RidingResource.  We use Single Table Inheritance (STI) for the different types of facilities listed, but that’s for another posting.

We realized that there were not a lot of fields necessary for the analytics table. Since the analytics were connected to a contact, we needed to store the contact ID. Since we identified three different types of analytics, we store an integer for the type field, which we may make into an actual model later.

Lastly, we decided it would be valuable to store the parameters that were used at the time to cause this listing to be displayed. It’s entirely possible that there may some more valuable data that we could search on later, so knowing the params of the “hit” could be valuable.

class Analytic < ActiveRecord::Base
  belongs_to :contact
end
 
class CreateAnalytics < ActiveRecord::Migration
  def self.up
    create_table :analytics do |t|
      t.integer :contact_id
      t.string :parameters
      t.integer :analytic_type
      t.timestamps
    end
  end
 
  def self.down
    drop_table :analytics
  end
end

Ruby on Rails is kind enough to automatically store the params for us as serialized YAML. This way, when we want to actually process and dissect them later, we can simply do the following to get the params hash back the way we need it:

@the_analytics = Analytic.find(:all, :conditions => :some_conditions)
@the_params = YAML::load(@the_analytics[some_specific_one].parameters)
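The round trip is plain YAML under the hood. With an ordinary hash standing in for a real params hash, it looks like this:

```ruby
require 'yaml'

# A made-up hash standing in for the request params Rails serialized for us.
params = { "controller" => "search", "zip" => "12345", "dist" => "25" }

stored   = params.to_yaml     # what ends up in the string column
restored = YAML::load(stored) # what we get back when dissecting later

restored == params # => true
```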

One thing that needed to be carefully considered was storing “hits” on the search results pages. Because the database is currently a little bit nasty, we’re ending up finding all entries in the DB that match a subset of criteria, and then filtering out the rest that don’t match the remaining criteria. This is actually faster than all the weird joins that end up occurring. After that, there is still the matter of pagination. It’s entirely possible that a facility could be pulled from the DB several times without actually being displayed, so we couldn’t just assume that pulling from the DB in the results page was a hit.

What I realized was that Mislav’s will_paginate does something nice for us – it ends up lopping off all the records outside the pagination range, and leaves us with the few for the current page. This enabled us to simply iterate over the paginated records and store the hits.

@contacts = filter_results(@contact)
@contacts = @contacts.paginate(:page => params[:page], :per_page => 8)
@contacts.each do |contact|
  contact.analytics << Analytic.new( {:contact_id => contact.id, :parameters => params, :analytic_type => 1} )
end

Storing the hits for the detail page and for the API was trivial. There’s only one record to grab on the detail page, so obviously someone is looking at it – hit. Since we don’t know what the people at the other end of the API are actually doing with the data, all we can do is record that a record was provided to the API.

So now that we’re storing the analytics, how the heck do we display them? That’s where Open Flash Chart comes into play. Unfortunately, this turned out to initially be a nightmare for many reasons.

When we first started building RidingResource, I was certainly a rails noob. Not that I am by any means not a noob at this point, but at least I am a little more polished since those early days of not knowing how to do anything. Because I was busy fighting everything at that point, I decided to save myself some headache for the administration area and use Active Scaffold.

Active Scaffold certainly is a nice plugin. It does have a tendency to throw wrenches into the works on occasion because it does some strange things using the Prototype javascript libraries. My first crack at graphing data was to take a look at Flot because it seemed simple and could do the basic things we wanted. The Flot plugin I found (Flotilla) wanted to use jQuery via JRails (don’t think JRuby) which interfered with the Prototype implementation of Active Scaffold. Since the initial graphing was for our admin area, this was out.

After some pondering and question asking in the #rubyonrails channel on Freenode (you can find me there as thoraxe), a few other suggestions came up. Scruffy and Gruff were suggested, but these both used Scalable Vector Graphics. While Firefox supports these today, I was informed that IE does not without a plugin. Our customer base is mostly going to be IE people, and probably not the most tech-savvy. In case these analytics became customer-facing, I did not want to have to worry about teaching non-tech-savvy people how to install browser plugins for IE. Scruffy and Gruff = out.

Next came the flash implementations, of which there were two notable ones. The first I will mention is Ziya, although we did not ultimately choose it. Ziya charts certainly are sexy, but for some reason I decided implementation looked difficult and that the charts were a little bit of overkill for what we needed.

Enter Open Flash Chart, our savior. Well, in the end the savior. It was a hell of a headache getting it to work.

There are several implementations of Open Flash Chart in Ruby on Rails. I have to say that, to a certain extent, all of them are a little sucky. Don’t get me wrong – it’s unfair for me to complain about free code that could make me money! But there is something to say about the cleanliness and simplicity of Technoweenie’s code when compared to some of these plugins.

Open Flash Chart is smart. It is a flash file that you basically feed JSON data to generate charts. That makes it simple. Unless you are using the JSON gem already. Which we are because it is used by dancroak’s twitter_search plugin. Which makes things insane. Remember how Active Scaffold was interfering with Flot? Well, here we were again. Something I was already using interfering with something I wanted to do.

To make a long story short, after much headache surrounding various Open Flash Chart plugins that used Rails’ built-in JSONification, one of the plugins that is mentioned on the OFC webpage happened to use the JSON gem itself. Perfect!

Korin’s Open Flash Chart 2 plugin did the trick. I won’t go into the implementation of everything in its entirety, but I will share the following bits which you may or may not find useful.

Korin’s examples use two controller actions to generate the graph. The first action creates the @graph object which basically just stores the string which represents some code that the swfobject javascript library uses to create the proper html to display the openflashchart.swf.  The second action actually generates the JSON that gets fed into the SWF.

One of the other plugins I had found that did not work for me (because of the previously discussed JSON gem issues) was Pullmonkey’s Open Flash Chart plugin. Pullmonkey did something neat using the respond_to method of MimeResponds in ActionController.

def show
  # find the contact requested
  @contact = Contact.find_by_url(params[:id])
  respond_to do |wants|
    @all_results  = Analytic.find(:all, :conditions => { :contact_id => @contact.id })
    @data_results = Analytic.find(:all, :conditions => { :analytic_type => 1, :contact_id => @contact.id })
    @data_details = Analytic.find(:all, :conditions => { :analytic_type => 2, :contact_id => @contact.id })
    @data_api     = Analytic.find(:all, :conditions => { :analytic_type => 3, :contact_id => @contact.id })
 
    wants.html do
      # set up the graphs on the request
      @graph_results = ofc2(650, 300, url_for(:action => :show, :format => :json, :graphtype => :results), "")
      @graph_details = ofc2(650, 300, url_for(:action => :show, :format => :json, :graphtype => :details), "")
      @graph_api     = ofc2(650, 300, url_for(:action => :show, :format => :json, :graphtype => :api), "")
    end
    wants.json do
      # provide the JSON back to the flash
      # call the function to generate the graph based on the graph type that is supplied via params
      render :text => results_graph.render
    end
  end
end

This implementation is a little more elegant, but I’m actually finding that, because both the HTML and JSON generation are happening inside the same controller action, some of my finds are being performed multiple times. This is because the records for these finds are needed by both parts of the action, but the action gets called each time any part of the respond_to gets called. I may end up de-elegantizing this and splitting it back into multiple actions if it becomes a performance issue.

In determining what to do with the analytics data, we decided that initially it made sense to simply graph hits by day. Since we cheated in our creation of the analytics table and used the built in timestamps, we already had a created_at field which contained the DateTime of the hit. If you look at that last sentence carefully, you realize that DateTime is not Date. So how do you lop off the time part? This also still leaves the trouble of calculating how many hits occurred on each day, too. This is where the elegance of Ruby really shines.

line_values = []
x_labels_text = []
 
instance_variable_get("@data_#{params[:graphtype]}").group_by{ |a| Date.ordinal(a.created_at.year, a.created_at.yday) }.each do |day, results|
  # this will group all of the analytic hits together by day instead of date/time. it then iterates over
  # these results in a block, where day holds the value of the day of the results, and results is an array
  # containing the individual results from that day.
 
  x_labels_text << day.to_s # put the value of the day into the x axis label
  line_values  << results.length # how many hits occurred on this day
 
end

group_by, ordinal, and iterating over blocks totally saved our butts here. What the above code enabled us to do was to group the entire array of analytics data by the day (after munching the time off using ordinal), and then iterate over the resulting groups. The blocks allow us to both store the day into the array of labels for the x-axis, as well as determine how many hits occurred on that day by using the length of the array of data in the group. Brilliant!
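Stripped of the Rails context, the same trick works on any array of timestamps. The sample hits below are made up, standing in for Analytic#created_at values:

```ruby
require 'date'

# Made-up hit timestamps standing in for Analytic#created_at values.
hits = [
  DateTime.new(2009, 12, 30, 9, 15, 0),
  DateTime.new(2009, 12, 30, 17, 2, 0),
  DateTime.new(2009, 12, 31, 11, 45, 0),
]

x_labels_text = []
line_values   = []

# Date.ordinal lops the time off, so hits bucket together by calendar day.
hits.group_by { |t| Date.ordinal(t.year, t.yday) }.each do |day, results|
  x_labels_text << day.to_s
  line_values  << results.length
end

x_labels_text # => ["2009-12-30", "2009-12-31"]
line_values   # => [2, 1]
```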

As you can see, what started out as a relatively simple idea (“Let’s graph some analytics about the facilities on RidingResource!”) ended up being a relatively non-trivial coding exercise that took us almost a solid day of man-hours. But, in the end, we were left with something simple that did the job, but which leaves us a lot of room for growth and power.

The only real pain point right now has to do with the actual analytics data. If a day goes by without any hits, nothing gets stored in the database. Since we are grouping by the records that we actually pull out of the database, any dates in the middle with no hits will not be represented. So we are left with the problem of how to determine what dates are “in the middle” that have no hits. It’s not an issue right now, but it may become one in the future. I’m sure we’ll be able to figure it out.


CAPTCHA in Rails – an experiment in anti-spam

One of the things that we decided would be a good idea for RidingResource was to let users of the site contact the various facilities that we have listed. We also wanted to make sure those facilities knew we helped facilitate that contact by injecting some extra information into the email. The email functionality is left for another post.

We quickly ran into an issue with spam. As administrators of the site, we get copies of all of the email that is sent to facilities, and we noticed that some weird spam-like email came through. It appeared to be someone trying to exploit our site as an open mailer; fortunately it didn’t seem to work, and no email got to our customer. We will have to investigate how to help prevent that (if it’s even possible to exploit) later. But we knew that we needed to do something to prevent spammers from being able to send automated junk mail to our customers, and CAPTCHA seemed like a good idea.

Until I tried it.

After some quick Googling for “rails captcha” and other terms, I discovered the simple_captcha plugin. This is a handy plugin that can be used to generate a CAPTCHA image with some convenient options. It also offers a friendly validation of said CAPTCHA in your controllers, amongst other things.

One “issue” that I had with simple_captcha is that it requires both the Imagemagick image manipulation program to be installed as well as the RMagick gem. The first part was already present, as I was using the mini_magick gem for another Rails application. Unfortunately, I didn’t have RMagick, and installing it proved less than trivial.

First, trying to install the RMagick gem resulted in an error:

Can't install RMagick 2.9.1. Can't find Magick-config in /sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin

This got me poking around. Most of the tutorials I had found referenced Debian as the Linux distribution on which people were using simple_captcha. I happen to be running Fedora as a development server and CentOS in production, which meant things were a little different. After some creative Googling, I discovered what was required to install RMagick on CentOS. The current version of RMagick is actually 1.15.17 – slightly newer than the one referenced in the tutorial.
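On CentOS/Fedora, the missing Magick-config generally means the ImageMagick development package isn’t installed. The gist was something like the following (package names may vary by release):

```shell
# ImageMagick-devel provides Magick-config, which RMagick's build step needs.
yum install ImageMagick ImageMagick-devel
gem install rmagick
```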

In the end, the tutorials on the simple_captcha website were sufficient to get what we needed going. Since we already had a contact form, it really was less than 15 new lines of code to get things up and running. I’m not as pleased as I could be with the CSS and formatting of where the CAPTCHA is on the contact form, but it looks good enough for now.

If you’ve got some contact forms and you’re concerned about spam, or you’ve got some registration forms that you want to anti-robot, give simple_captcha a try.


Building an API in Ruby on Rails

I know there are quite a few tutorials and links out there on building an API for your Ruby on Rails application, but I figured that I would document a little bit about how it was done for Riding Resource, but at a high level.  A partner had requested access to some of our data via an API, and wanted the results to be spit out as XML, so here’s a little bit about how that was done.

First things first, it took me a little while to figure out exactly how to handle the whole XML thing, but it was actually far easier than I originally had thought.  Rails is smart enough to look for lots of different file types that match the action, so in the case of our API, all I had to do was make sure I had a .rxml (as opposed to .rhtml) file that matched the action.

Second, I wanted to restrict use of the API to authorized parties.  Just like how Google has API keys, I decided I wanted to do something similar.  Since I didn’t want to take the time to create some crazy key system that reverse-lookups the requestor and does stuff like that, I thought it might be a little easier to simply use the key to specify a legal set of requestor IP ranges/addresses.

Lastly, if the API request wasn’t from a legal requestor, I wanted to return nothing.

Here’s what I did (pseudocody) for the search controller

class ApiController < ApplicationController
  # api ip address ranges allowed per key
  API_KEYS = { "5b6ba960531c458021e8be98f3842c182c773b2f" => ['192.168.2.0/24', 'aaa.bbb.ccc.ddd'] }
 
  def search
    for ip in API_KEYS[params[:key]] do
      if IPAddr.new(ip).include?(IPAddr.new(request.remote_ip))
        @barns = Barn.find(:stuff)
        @count = @barns.length
        return
      end
    end
    render :action => :blank
  end
end

Some interesting things to note here. IPAddr is a nice ruby class that allows us to perform manipulations on IP addresses, one of which is checking if an address is included in a range. You have to require the ipaddr class in your environment in order to use it — it is not loaded by default.

for ip in (API_KEYS[params[:key]] || []) do
      if IPAddr.new(ip).include?(IPAddr.new(request.remote_ip))

Here we are iterating over all the ip ranges and IPs that are in the hash associated with the key provided by the incoming params. For each IP range/address, we check to see if the requesting IP is included in the range. Fortunately, IPAddr is smart enough to know that a single IP is included within itself, so using IP addresses as opposed to ranges is perfectly acceptable.
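For example, in plain Ruby (the addresses below are just placeholders from private and documentation ranges):

```ruby
require 'ipaddr'

range = IPAddr.new('192.168.2.0/24')

# A host inside the /24 is in the range...
puts range.include?(IPAddr.new('192.168.2.17'))  # => true
# ...and a host outside it is not.
puts range.include?(IPAddr.new('10.0.0.1'))      # => false

# A single address "includes" itself, so a plain IP works
# as a one-address range in the allow list.
single = IPAddr.new('203.0.113.5')
puts single.include?(IPAddr.new('203.0.113.5'))  # => true
```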

If the IP is within one of the ranges, we perform our find. If not, we render the blank action. The blank action has no code, and blank.rxml is empty as well. This way, if an illegal request comes in, we just do nothing — I don’t care to cater to people trying to access the API that shouldn’t be.

The XML part was tricky at first, but it actually turned out to be far simpler than I thought. Once you are in an XML view, Rails is kind enough to provide an xml object for you already, without needing to instantiate anything. This seemed to be contrary to a few tutorials I had found. Here’s some pseudocode to represent what I did:

xml.instruct! :xml, :version => "1.0"
xml.barns do
  xml.count @count
  xml.requestor request.remote_ip
  xml.api_key params[:key]
  xml.requested_location params[:zip]
  xml.requested_distance params[:dist]
  for b in @barns do
    xml.barn do
      xml.name b.name
      xml.address b.address
      xml.distance b.distance
      xml.phone b.phone
      xml.website b.website
      xml.url "http://www.ridingresource.com/contact/show/#{b.url}"
    end
  end
end

Because XML is a markup language that requires properly opened and closed “containers,” similar to HTML, you can see there are a lot of do/end blocks. The main do/end block is the barns block, which contains all of our results. I also was kind enough to let the API user know some of the things they asked of us, as well as the number of results we found.

xml.barns do
  for b in @barns do
    xml.barn do
      xml.something value
    end
  end
end

As we iterate over each result in @barns, we want to create an xml container for each one. The ease of Rails/Ruby here is awesome — you simply specify the container name, and then the value: xml.something value
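That convenience comes from Ruby’s method_missing. As a toy illustration (this is a sketch of the idea, not the real Builder library), a builder-style object can turn any undefined method call into a tag of the same name:

```ruby
# Toy illustration of a Builder-style object: method_missing maps an
# undefined method call to an XML tag, nesting via do/end blocks.
class TinyXml
  def initialize
    @out = +""
  end

  def method_missing(tag, value = nil, &block)
    if block
      @out << "<#{tag}>"
      block.call            # nested calls append inside this tag
      @out << "</#{tag}>"
    else
      @out << "<#{tag}>#{value}</#{tag}>"
    end
    self
  end

  def to_s
    @out
  end
end

xml = TinyXml.new
xml.barn do
  xml.name "Camp Creek Stables"
  xml.phone "7709252402"
end
puts xml.to_s
# => <barn><name>Camp Creek Stables</name><phone>7709252402</phone></barn>
```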

The resultant XML output looks something like this:

<?xml version="1.0" encoding="UTF-8"?>
<barns>
  <count>10</count>
  <requestor>aaa.bbb.ccc.ddd</requestor>
  <api_key>key</api_key>
  <requested_location>30093</requested_location>
  <requested_distance>10</requested_distance>
  <barn>
    <name>Camp Creek Stables</name>
    <address>4150 Arcadia Industrial Circle SW  Lilburn GA, 30047</address>
    <distance>4.0317320843575</distance>
    <phone>7709252402</phone>
    <website></website>
    <url>http://www.ridingresource.com/contact/show/camp-creek-stables</url>
  </barn>
  <!-- ...remaining barn elements... -->
</barns>

As you can see, it can be pretty easy to build an XML-returning API using Ruby on Rails. While this certainly is by no means a tutorial, it can provide some insight if you are a little stuck.

If you are interested in getting access to the Riding Resource API, please be sure to contact me at erik@ridingresource.com


An open letter on student loan interest

Recently it came to my attention that student loan interest is not completely deductible. This rather infuriated me, so I wrote to my elected officials using information I found on the USA.gov website. If you are interested in contacting your officials, or interested in why I felt this way, you can read my letter below.

These are trying economic times. I certainly understand the desire to get the housing market moving again, in an effort to stem sliding home values, which are sending more individuals under water.

However, what I fail to understand is how, on the one hand, we can try to help people take on more debt that will inevitably reduce their purchasing power, while doing nothing to aid individuals already in debt whose purchasing power could easily be increased.

Essentially 100% of home mortgage interest has been made tax deductible over time. Currently, only up to $2500 worth of student loan interest is tax deductible. Considering that your average doctor, lawyer or engineer is leaving college saddled with over one hundred thousand dollars (yes, $100,000) in student loan debt, even at a relatively low interest rate, the yearly interest payment is significant.

Now, of what value is it to someone who is already hundreds of thousands of dollars in debt and paying interest to provide a tax break for an additional multi-hundred thousand dollar purchase of a home?  In fact, even if mortgage interest wasn’t deductible, for these indebted former students, the tax deduction of student loan interest could provide significant returns towards the purchase of a home.

It pains me to know that some of the most productive and creative members of our society are buried in debt that they have trouble repaying, which pulls countless dollars out of their pockets that they could be putting back into the economy.  At the same time, thousands of low- and middle-income blue-collar workers are trying to buy bigger homes simply because they can afford it due to the mortgage interest tax deduction.  Additionally, while the current administration explores the possibility of instituting new legislation allowing for cram-downs of failing mortgages, student loan interest remains one form of debt that is all but untouchable by any means save death.

For a country in such financial crisis, it seems almost criminal to not aid those who could most easily jump start the economy – our most intellectual resources.  It is refreshing to see that the administration is looking at aiding the organizations that provide student loans to keep people headed towards further education.  However, the simple economics behind expanding the student loan interest deduction are sound.  And the benefit to the economy and the financial system will likely far outweigh the small penalty to the budget.

