
Disconnected Ruby demos on OpenShift 3

I’m headed to China soon, and the Great Firewall can present issues. S2I builds on OpenShift 3 generally require internet access (for example, pulling from Github or installing Ruby Gems), so I wanted to see what it would take to go fully disconnected. It’s actually surprisingly easy. For reference, my environment is the same as the one in the OpenShift Training repository. I am using KVM and libvirt networking, and all three hosts are running on my laptop. My laptop’s effective IP address, as far as my KVM VMs are concerned, is 192.168.133.1.

Also, I have pre-pulled all of the required Docker images into my environment, as the training documentation suggests. This means that OpenShift won’t have to pull any builder or other images from the internet, so we can truly operate disconnected.

First, an http-accessible git repository is required for using S2I with OpenShift 3 right now. Doing a Google search for a simple git HTTP server revealed a post entitled, unsurprisingly, Simple Git HTTP server. In it, the instructions suggest using Ruby’s built-in HTTP server, WEBrick. Here’s what Elia says:

git update-server-info # this will prepare your repo to be served
ruby -run -ehttpd -- . -p 5000

One thing to note – you must run the update-server-info command after every commit in order for webrick to actually serve the latest commit. I figured this out the hard way. On Fedora and as a regular user, you usually want to use a high port for stuff, so I chose a really high port — 32768. I also had to open the firewall. Fedora, by default, uses firewalld. Your mileage may vary:

firewall-cmd --direct --add-rule ipv4 filter INPUT 0 -p tcp -m tcp --dport 32768 -m conntrack --ctstate NEW -j ACCEPT
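As an aside, if you keep forgetting git update-server-info like I did, you can automate it with a post-commit hook (a sketch; run it from the top of the repository you’re serving):

```shell
# install a post-commit hook that refreshes the dumb-HTTP metadata
# after every local commit
cat > .git/hooks/post-commit <<'EOF'
#!/bin/sh
exec git update-server-info
EOF
chmod +x .git/hooks/post-commit
```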

With the firewall open, the git repo lives at http://192.168.133.1:32768/.git — not too shabby! Next, we need to make the Ruby Gems accessible via HTTP locally as well. Some Google-fu again brings us to something useful. In this case, Run Your Own Gem Server. While the article indicates that you can just run gem server, I found that this produced strange results and I filed bug #1303. I was using RVM in my environment due to some other project work, so, in the end, my gem server syntax looked like:

gem server --port 8808 --dir /home/thoraxe/.rvm/gems/ruby-2.1.2 --no-daemon --debug

Of course, this is going to serve gems from your computer, which means the gems have to actually be installed there in the first place. In the case of the Sinatra example, you would have to gem install sinatra --version 1.4.6, which would bring in the gem dependencies. Of course, this requires that you have ruby and rubygems, but you already have that, right?

Running the gem server also requires opening a firewall port:

firewall-cmd --direct --add-rule ipv4 filter INPUT 0 -p tcp -m tcp --dport 8808 -m conntrack --ctstate NEW -j ACCEPT

Note again that these firewall changes will not be permanent. You would need the --permanent option to persist these changes. You now have gems accessible at http://192.168.133.1:8808.
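If you do want the rules to survive a reboot, the same direct rule can be written to the permanent configuration and then reloaded; for example, for the gem server port:

```shell
# --permanent writes the rule to disk instead of the running firewall;
# --reload then makes the permanent configuration active
firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 0 \
  -p tcp -m tcp --dport 8808 -m conntrack --ctstate NEW -j ACCEPT
firewall-cmd --reload
```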

At this point you have:

  • A git http server running on port 32768
  • A gem server running on port 8808
  • Open firewall ports

In your OpenShift 3 environment you can now create a new application whose repository is the git HTTP server you set up with WEBrick. Again, that’s http://192.168.133.1:32768/.git. But if you just do that, your build will fail without internet access, because a standard-looking Gemfile probably defines https://rubygems.org as its source. For example, the Sinatra example that OpenShift provides:

source 'https://rubygems.org'
 
gem 'sinatra', '1.4.6'

Without internet access, we’ll never reach https://rubygems.org. So we can change the Gemfile’s source line to point at our new gem server, which lives at http://192.168.133.1:8808. Feel free to clone the example repository and try it yourself. Remember, once you change the Gemfile you will need to run git update-server-info and then (re)start your WEBrick server. Also, be sure you are doing this on the master branch, or you’ll need to point OpenShift at whatever branch you decided to use. This totally tripped me up a few times.
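For reference, the modified Gemfile ends up looking like this (the IP and port are from my environment; yours will differ):

```ruby
# Gemfile -- point bundler at the local gem server instead of rubygems.org
source 'http://192.168.133.1:8808'

gem 'sinatra', '1.4.6'
```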

At this point, you should be able to do your build in OpenShift. In your build log you will see something like the following (ellipses indicate truncated lines):

...
I0703 19:44:33.264627       1 sti.go:123] Performing source build from http://192.168.133.1:32768/.git
...
I0703 19:44:34.010878       1 sti.go:388] ---> Running 'bundle install '
I0703 19:44:34.339680       1 sti.go:388] Fetching source index from http://192.168.133.1:8808/
I0703 19:44:35.019941       1 sti.go:388] Resolving dependencies...
I0703 19:44:35.281696       1 sti.go:388] Installing rack (1.6.4) 
I0703 19:44:35.437759       1 sti.go:388] Installing rack-protection (1.5.3) 
I0703 19:44:35.617280       1 sti.go:388] Installing tilt (2.0.1) 
I0703 19:44:35.841344       1 sti.go:388] Installing sinatra (1.4.6) 
I0703 19:44:35.841381       1 sti.go:388] Using bundler (1.3.5) 
I0703 19:44:35.841390       1 sti.go:388] Your bundle is complete!
I0703 19:44:35.841395       1 sti.go:388] It was installed into ./bundle
I0703 19:44:35.862289       1 sti.go:388] ---> Cleaning up unused ruby gems

And your application should work! Well, assuming all the rest of your OpenShift environment is set up correctly…


How to set the text with formtastic, select and collection

I’ve been on a tear again working on Riding Resource. We’re trying to do something interesting and slightly social, but I can’t give it all away just yet. There are some forms involved, and I decided that I was going to try and save some time by using Justin French’s formtastic plugin. Well, it surely saved some time, but, as with anything new, there’s a learning curve.

Since one of the big things that Riding Resource does is help stables see who is searching for them (by storing lots of demographic information), I wanted to make sure that any data these forms captured would be easily reportable. In the case of select lists, that means having models for them with integers and text associated. But when poking around with formtastic, I couldn’t figure out how to make a specific field of the model display in the dropdown for the select. Here’s an example:

f.input :preferred_discipline, :as => :select, :collection => DemographicPreferredDiscipline.all

melc in #rubyonrails on Freenode suggested that I try using a map. I’d seen these before, so I figured I’d give it a whirl:

f.input :preferred_discipline, :as => :select, :collection => DemographicPreferredDiscipline.all.map { |dp| [dp.text, dp.id] }

Text is the name of the field I wanted to display in the select. What do you know? It worked! I figured I would share this here for posterity and Google indexing.
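If the mapping isn’t obvious, here’s a tiny stand-alone sketch of what that map produces (a Struct stands in for the ActiveRecord model here):

```ruby
# Stand-ins for DemographicPreferredDiscipline records, purely to show
# the shape the map produces (the real class is an ActiveRecord model)
Discipline = Struct.new(:id, :text)

records = [Discipline.new(1, 'Dressage'), Discipline.new(2, 'Eventing')]

# formtastic's :collection accepts [display_text, value] pairs
choices = records.map { |dp| [dp.text, dp.id] }
# choices => [["Dressage", 1], ["Eventing", 2]]
```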


Manipulating links with HTML select and jQuery

As with most web projects, there’s always some little new glitch that pops up. We’ve been building and massaging our own analytics back end for Riding Resource for some time now, and the change of year from 2009 to 2010 brought some new quirks that had to be dealt with.

While there were some minor issues related to year/day calculations creating invalid dates, the bigger issue that (I think) was solved rather elegantly was choosing, via a select tag, which year’s analytics report to generate. jQuery came to the rescue with what turned out to be a far simpler solution than I had originally envisioned.

Select tags are not exactly the most complicated things in the world. But when you don’t have a form to go with them, it’s hard sometimes to figure out how to make them be useful. Instead of having a link for every year’s report, I figured a nice little drop-down would be an elegant way to choose. But this is where the difficulty was. I wanted a single text link to the report, but I wanted to change what that link actually… linked to… based on the selection in the box.

Here’s some of the original HTML that was generated by the app:

  <a id="union-county-veterinary-clinic-target" href="/analytics/show2/union-county-veterinary-clinic.pdf">PDF</a>
<select id="union-county-veterinary-clinic-select" name="date[year]">
    <option value="2009">2009</option>
    <option selected="selected" value="2010">2010</option>
  </select>

Here is where jQuery came to the rescue, and some generous fellow with the handle of “kit10” on the #jquery channel on IRC on Freenode. kit10 suggested that I add a “rel” attribute to the select element, and give that “rel” attribute the value of the id of the PDF link. In this way, jQuery could look at the select element, and, when it changed, update the link. After a few machinations, here’s what popped out:

  $(document).ready(function(){
      // courtesy of kit10 on #jquery on freenode
      $("select[id$='-select']").change(function() {
        var target = $('#'+$(this).attr('rel')); // set the target to be the value of the rel of the selector
        var regex = /year=\d\d\d\d/
        target.attr('href',target.attr('href').replace(regex,"year="+$(this).val()));
        });
      });

To read this code in plain English:

  • Whenever a select element whose ID ends in ‘-select’ changes…
  • …create a variable called target, assigned the element whose ID matches the select’s rel attribute…
  • …and replace the year in the target’s href with the new year, using the regex /year=\d\d\d\d/ (to match year=2009, year=2010, etc.)

That was really all there is to it. The new HTML ended up looking as follows:

  <a id="1000-acres-ranch-target" href="/analytics/show2/1000-acres-ranch.pdf?year=2009">PDF</a>
<select id="1000-acres-ranch-select" name="date[year]">
    <option value="2009">2009</option>
    <option selected="selected" value="2010">2010</option>
  </select>

Git socket timeout issues with CentOS

So Riding Resource was developed in Ruby on Rails, as many of you may know. At some point this year I made the switch from a local Subversion source control system to git with Github, which has been pretty good. The one pain I was having, which I thought was a CentOS 4 / libcurl issue, actually turned out to be the APF firewall that Wiredtree uses on all of their VPSes.

APF is a pretty neat firewall. It does a lot of neat things. It’s also installed by default on the VPS we use from Wiredtree. When I would pull or clone our private repository for Riding Resource, I had no issues with git. However, trying to clone public repositories I would always be greeted with something like:

fatal: unable to connect a socket (Connection timed out)

It took me a little while to figure out what was going on here, but what I tracked it down to was actually a firewall issue, and not any kind of issue with git or Github. I don’t know how I figured it out, but I discovered that cloning with Github uses port 9418. This link discusses using tunnels and mentions the port.

After some inspection, I realized that inbound and outbound traffic was blocked by APF on port 9418. A quickie modification to the EG_TCP_CPORTS and IG_TCP_CPORTS values by adding 9418 and restarting the APF service managed to do the trick.
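Concretely, that meant editing APF’s config (the file lives at /etc/apf/conf.apf on a standard install; your existing port lists will differ):

```shell
# append the git protocol port (9418) to the inbound and outbound
# TCP port lists in /etc/apf/conf.apf, for example:
#   IG_TCP_CPORTS="22,80,443,9418"
#   EG_TCP_CPORTS="21,25,80,443,9418"
# then restart APF to pick up the change:
service apf restart
```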

This is definitely not relegated only to CentOS or to systems running APF when trying to clone from Github. Any Linux system could be subject to these timeout issues against Github if your firewall is configured to block 9418. So if you are seeing socket connection issues, or fatal errors with fetch-pack, you might just want to check your firewall.


Custom (dynamic) error pages with Ruby on Rails

Ruby on Rails is pretty neat stuff. Whenever I try to find out how to do something, it seems that I’m not the first to look. And, fortunately, many have usually solved that problem before. One thing that bugged me with Riding Resource was error pages. Sure, Rails allows you to create static 404 and 500 and other pages for those situations when things go awry. But the fact that those pages were static caused me some heartburn.

For one, if the layout of the website changed, it meant I needed to update the error pages. This is certainly not DRY. And, using static pages, I could not use any Ruby code in my error page, or do anything dynamically at all.

After some quick searching, I came across this post by Rob Hurring, which led me to this post on has_many :bugs, :through => :rails regarding how to create customized error pages with Ruby on Rails. Granted, neither of these solutions was exactly what I was looking for, so a little customization was required. However, the basic requirements were met.

  1. We can rescue_from many of the standard ActionController errors.
  2. Using :with, we can specify a method to invoke to process the rescue activity.
  3. The method called to rescue us can render a page with a layout.

This took care of everything we needed. For almost all of the error types that would arise, we could redirect to a custom 404 template that allowed for Ruby to be embedded that would use any number of existing layouts, which keeps things tight and DRY.

One caveat particular to Riding Resource: our meta tags are currently generated on the fly by a view helper that inspects the current params to determine which controller/action we are in and, therefore, which meta tags to spit out. Unfortunately, I could not find a way to detect, within the template, what status the template had been rendered with.

I ended up creating an instance variable before rendering the error template:

unless ActionController::Base.consider_all_requests_local
  # yeah, its a long line
  rescue_from ActiveRecord::RecordNotFound, ActionController::RoutingError, ActionController::UnknownController, ActionController::UnknownAction, :with => :render_404
  rescue_from RuntimeError, :with => :render_500
end
 
protected
 
def render_404
  @status = "404"
  render :template => "shared/404", :status => :not_found
end

By creating this instance variable, I could then check for the value of @status in my meta tag generator and properly handle things.
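As a rough sketch of the generator side (the helper name and tags here are illustrative, not our actual code), the check looks something like:

```ruby
# Hypothetical stand-in for the view helper: pick meta tags based on the
# @status set in render_404, falling back to the params-based lookup
def meta_tag_for(status, controller, action)
  if status == "404"
    %(<meta name="robots" content="noindex" />)
  else
    # the normal case: derive tags from the current controller/action
    %(<meta name="description" content="#{controller}/#{action}" />)
  end
end
```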

One caveat to this solution, in general, is that errors that occur within your process of rendering the custom page will result in no errors being shown at all, except in the log. Note Rob’s trick for generating the custom error pages even while in development mode.

In the long term I will probably clean up the meta tag generator to create instance variables and then use partials to get the information into the view, but that’s for another day. We also recently added Thoughtbot’s Paperclip plugin to be able to easily attach pictures of facilities, but that post will be made once we get watermarking working.


Internal Analytics with Open Flash Chart

We wanted to be able to get some analytics for the various facilities on RidingResource, and that required some thinking. While Google Analytics is certainly great, and we use it heavily, there are some things that it can’t capture that are valuable data to both us and our customers.

Since RidingResource is essentially a search engine, we realized that there was value in knowing how often a facility’s listing “came up,” either by being directly viewed on its detail page, or by being seen in the search results page. Since we also built an API for a partner, which we’ll announce publicly once it goes live, we thought it would be valuable to track API “hits” as well.

Creating the table to store the analytics data was relatively simple. We just created an Analytic model and connected it to the Contact model – Contact is the model that stores the basic information about facilities listed on RidingResource.  We use Single Table Inheritance (STI) for the different types of facilities listed, but that’s for another posting.

We realized that there were not a lot of fields necessary for the analytics table. Since the analytics were connected to a contact, we needed to store the contact ID. Since we identified three different types of analytics, we store an integer for the type field, which we may make into an actual model later.

Lastly, we decided it would be valuable to store the parameters that were used at the time to cause this listing to be displayed. It’s entirely possible that there may some more valuable data that we could search on later, so knowing the params of the “hit” could be valuable.

class Analytic < ActiveRecord::Base
  belongs_to :contact
end
 
class CreateAnalytics < ActiveRecord::Migration
  def self.up
    create_table :analytics do |t|
      t.integer :contact_id
      t.string :parameters
      t.integer :analytic_type
      t.timestamps
    end
  end
 
  def self.down
    drop_table :analytics
  end
end

Ruby on Rails is kind enough to automatically store the params for us as serialized YAML. This way, when we want to actually process and dissect them later, we can simply do the following to get the params hash back the way we need it:

@the_analytics = Analytic.find(:all, :conditions => :some_conditions)
@the_params = YAML::load(@the_analytics[some_specific_one].parameters)
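A quick self-contained illustration of that round trip (plain Ruby, no Rails required; the hash contents are made up):

```ruby
require 'yaml'

# what Rails stores in the parameters column: the params hash as YAML
params = { "zip" => "30093", "dist" => "25", "page" => "2" }
stored = params.to_yaml          # a plain String, fine for a :string column

# ...later, when dissecting the analytics...
restored = YAML.load(stored)
# restored["zip"] => "30093"
```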

One thing that needed to be carefully considered was storing “hits” on the search results pages. Because the database is currently a little bit nasty, we’re ending up finding all entries in the DB that match a subset of criteria, and then filtering out the rest that don’t match the remaining criteria. This is actually faster than all the weird joins that end up occurring. After that, there is still the matter of pagination. It’s entirely possible that a facility could be pulled from the DB several times without actually being displayed, so we couldn’t just assume that pulling from the DB in the results page was a hit.

What I realized was that Mislav’s will_paginate does something nice for us – it ends up lopping off all the records outside the pagination range, and leaves us with the few for the current page. This enabled us to simply iterate over the paginated records and store the hits.

@contacts = filter_results(@contact)
@contacts = @contacts.paginate(:page => params[:page], :per_page => 8)
@contacts.each do |contact|
  contact.analytics << Analytic.new( {:contact_id => contact.id, :parameters => params, :analytic_type => 1} )
end

Storing the hits for the detail page and for the API was trivial. There’s only one record to grab on the detail page, so obviously someone is looking at it – hit. Since we don’t know what the people at the other end of the API are actually doing with the data, all we can do is record that a record was provided to the API.

So now that we’re storing the analytics, how the heck do we display them? That’s where Open Flash Chart comes into play. Unfortunately, this turned out to initially be a nightmare for many reasons.

When we first started building RidingResource, I was certainly a rails noob. Not that I am by any means not a noob at this point, but at least I am a little more polished since those early days of not knowing how to do anything. Because I was busy fighting everything at that point, I decided to save myself some headache for the administration area and use Active Scaffold.

Active Scaffold certainly is a nice plugin. It does have a tendency to throw wrenches into the works on occasion because it does some strange things using the Prototype javascript libraries. My first crack at graphing data was to take a look at Flot because it seemed simple and could do the basic things we wanted. The Flot plugin I found (Flotilla) wanted to use jQuery via JRails (don’t think JRuby) which interfered with the Prototype implementation of Active Scaffold. Since the initial graphing was for our admin area, this was out.

After some pondering and question asking in the #rubyonrails channel on Freenode (you can find me there as thoraxe), a few other suggestions came up. Scruffy and Gruff were suggested, but these both used Scalable Vector Graphics. While Firefox supports these today, I was informed that IE does not without a plugin. Our customer base is mostly going to be IE people, and probably not the most tech-savvy. In case these analytics became customer-facing, I did not want to have to worry about teaching non-tech-savvy people how to install browser plugins for IE. Scruffy and Gruff = out.

Next came the flash implementations, of which there were two notable ones. The first I will mention is Ziya, although we did not ultimately choose it. Ziya charts certainly are sexy, but for some reason I decided implementation looked difficult and that the charts were a little bit of overkill for what we needed.

Enter Open Flash Chart, our savior. Well, in the end the savior. It was a hell of a headache getting it to work.

There are several implementations of Open Flash Chart in Ruby on Rails. I have to say that, to a certain extent, all of them are a little sucky. Don’t get me wrong – it’s unfair for me to complain about free code that could make me money! But there is something to say about the cleanliness and simplicity of Technoweenie’s code when compared to some of these plugins.

Open Flash Chart is smart. It is a flash file that you basically feed JSON data to generate charts. That makes it simple. Unless you are using the JSON gem already. Which we are because it is used by dancroak’s twitter_search plugin. Which makes things insane. Remember how Active Scaffold was interfering with Flot? Well, here we were again. Something I was already using interfering with something I wanted to do.

To make a long story short, after much headache surrounding various Open Flash Chart plugins that used Rails’ built-in JSONification, one of the plugins that is mentioned on the OFC webpage happened to use the JSON gem itself. Perfect!

Korin’s Open Flash Chart 2 plugin did the trick. I won’t go into the implementation of everything in its entirety, but I will share the following bits which you may or may not find useful.

Korin’s examples use two controller actions to generate the graph. The first action creates the @graph object which basically just stores the string which represents some code that the swfobject javascript library uses to create the proper html to display the openflashchart.swf.  The second action actually generates the JSON that gets fed into the SWF.

One of the other plugins I had found that did not work for me (because of the previously discussed JSON gem issues) was Pullmonkey’s Open Flash Chart plugin. Pullmonkey did something neat using the respond_to method of MimeResponds in ActionController.

def show
  # find the contact requested
  @contact = Contact.find_by_url(params[:id])
  respond_to do |wants|
  @all_results = Analytic.find(:all, :conditions => { :contact_id => @contact.id })
  @data_results = Analytic.find(:all, :conditions => { :analytic_type => 1, :contact_id => @contact.id})
  @data_details = Analytic.find(:all, :conditions => { :analytic_type => 2, :contact_id => @contact.id})
  @data_api = Analytic.find(:all, :conditions => { :analytic_type => 3, :contact_id => @contact.id})
 
    wants.html do
 
      # set up the graph on the request
      @graph_results = ofc2(650,300,url_for(:action => :show, :format => :json, :graphtype => :results),"")
      @graph_details = ofc2(650,300,url_for(:action => :show, :format => :json, :graphtype => :details),"")
      @graph_api = ofc2(650,300,url_for(:action => :show, :format => :json, :graphtype => :api),"")
    end
    wants.json do
      # provide the JSON back to the flash
      # call the function to generate the graph based on the graph type that is supplied via params
      render :text => results_graph.render
    end
  end
end

This implementation is a little more elegant, but I’m actually finding that, because both the HTML and JSON generation are happening inside the same controller action, some of my finds are being performed multiple times. This is because the records for these finds are needed by both parts of the action, but the action gets called each time any part of the respond_to gets called. I may end up de-elegantizing this and splitting it back into multiple actions if it becomes a performance issue.

In determining what to do with the analytics data, we decided that initially it made sense to simply graph hits by day. Since we cheated in our creation of the analytics table and used the built-in timestamps, we already had a created_at field which contained the DateTime of the hit. If you look at that last sentence carefully, you realize that DateTime is not Date. So how do you lop off the time part? This also still leaves the trouble of calculating how many hits occurred on each day. This is where the elegance of Ruby really shines.

line_values = []
x_labels_text = []
 
instance_variable_get("@data_#{params[:graphtype]}").group_by{ |a| Date.ordinal(a.created_at.year, a.created_at.yday) }.each do |day, results|
  # this will group all of the analytic hits together by day instead of date/time. it then iterates over
  # these results in a block, where day holds the value of the day of the results, and results is an array
  # containing the individual results from that day.
 
  x_labels_text << day.to_s # put the value of the day into the x axis label
  line_values  << results.length # how many hits occurred on this day
 
end

group_by, ordinal, and iterating over blocks totally saved our butts here. What the above code enabled us to do was to group the entire array of analytics data by day (after munching the time off using ordinal), and then iterate over the resulting groups. The blocks allow us both to store the day into the array of x-axis labels and to determine how many hits occurred on that day by using the length of the array of data in the group. Brilliant!

As you can see, what started out as a relatively simple idea (“Let’s graph some analytics about the facilities on RidingResource!”) ended up being a relatively non-trivial coding exercise that took us almost a solid day of man-hours. But, in the end, we were left with something simple that did the job, but which leaves us a lot of room for growth and power.

The only real pain point right now has to do with the actual analytics data. If a day goes by without any hits, nothing gets stored in the database. Since we are grouping by the records that we actually pull out of the database, any dates in the middle with no hits will not be represented. So we are left with the problem of how to determine what dates are “in the middle” that have no hits. It’s not an issue right now, but it may become one in the future. I’m sure we’ll be able to figure it out.
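When we do tackle it, one approach (a sketch, not what’s in production) is to walk a Date range between the first and last hit and default the missing days to zero:

```ruby
require 'date'

# hits grouped by day, as group_by gives us -- note the gap on Jan 2
hits_by_day = {
  Date.new(2010, 1, 1) => 4,
  Date.new(2010, 1, 3) => 2
}

first, last = hits_by_day.keys.min, hits_by_day.keys.max

line_values   = []
x_labels_text = []

# walk every calendar day in the range, defaulting absent days to 0
(first..last).each do |day|
  x_labels_text << day.to_s
  line_values   << (hits_by_day[day] || 0)
end
# line_values => [4, 0, 2]
```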


CAPTCHA in Rails – an experiment in anti-spam

One of the things that we decided would be a good idea for RidingResource was to let users of the site contact the various facilities that we have listed. We also wanted to make sure those facilities knew we helped facilitate that contact by injecting some extra information into the email. The email functionality is left for another post.

We quickly ran into an issue with spam. As administrators of the site, we get copies of all of the email that is sent to facilities. We noticed that some weird spam-like email came through. Fortunately it was someone trying to exploit our site as an open mailer, but it didn’t seem to work, and no email got to our customer. We will have to investigate how to help prevent that (if it’s even possible to exploit) later. But we knew that we needed to do something to prevent spammers from being able to send automated junk mail to our customers, and CAPTCHA seemed like a good idea.

Until I tried it.

After some quick Googling for “rails captcha” and other terms, I discovered the simple_captcha plugin. This is a handy plugin that can be used to generate a CAPTCHA image with some convenient options. It also offers a friendly validation of said CAPTCHA in your controllers, amongst other things.

One “issue” that I had with simple_captcha is that it requires both the ImageMagick image manipulation program and the RMagick gem to be installed. The first part was already present, as I was using the mini_magick gem for another Rails application. Unfortunately, I didn’t have RMagick, and installing it proved less than trivial.

First, trying to install the RMagick gem resulted in an error:

Can’t install RMagick 2.9.1. Can’t find Magick-config in /sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin

This got me to poking around. Now, most of the tutorials I had found referenced Debian as the Linux distribution on which people were running simple_captcha. I happen to be running Fedora as a development server and CentOS in production, which meant things were a little different. After some creative Googling, I discovered what was required to install RMagick on CentOS. The current version of RMagick is actually 1.15.17 – slightly newer than the one referenced in the tutorial.

In the end, the tutorials on the simple_captcha website were sufficient to get what we needed going. Since we already had a contact form, it really was less than 15 new lines of code to get things up and running. I’m not as pleased as I could be with the CSS and formatting of where the CAPTCHA is on the contact form, but it looks good enough for now.

If you’ve got some contact forms and you’re concerned about spam, or you’ve got some registration forms that you want to anti-robot, give simple_captcha a try.


Building an API in Ruby on Rails

I know there are quite a few tutorials and links out there on building an API for your Ruby on Rails application, but I figured that I would document a little bit about how it was done for Riding Resource, but at a high level.  A partner had requested access to some of our data via an API, and wanted the results to be spit out as XML, so here’s a little bit about how that was done.

First things first, it took me a little while to figure out exactly how to handle the whole XML thing, but it was actually far easier than I originally had thought.  Rails is smart enough to look for lots of different file types that match the action, so in the case of our API, all I had to do was make sure I had a .rxml (as opposed to .rhtml) file that matched the action.

Second, I wanted to restrict use of the API to authorized parties.  Just like how Google has API keys, I decided I wanted to do something similar.  Since I didn’t want to take the time to create some crazy key system that reverse-lookups the requestor and does stuff like that, I thought it might be a little easier to simply use the key to specify a legal set of requestor IP ranges/addresses.

Lastly, if the API request wasn’t from a legal requestor, I wanted to return nothing.

Here’s what I did (pseudocody) for the search controller

class ApiController < ApplicationController
  # api ip address ranges allowed
  API_KEYS = {"5b6ba960531c458021e8be98f3842c182c773b2f" => ['192.168.2.0/24', 'aaa.bbb.ccc.ddd'] }
 
  def search
    for ip in API_KEYS[params[:key]] do
      if IPAddr.new(ip).include?(IPAddr.new(request.remote_ip))
        @barns = Barn.find(:stuff)
        @count = @barns.length
        return
      end
    end
    render :action => :blank
  end
end

Some interesting things to note here. IPAddr is a nice ruby class that allows us to perform manipulations on IP addresses, one of which is checking if an address is included in a range. You have to require the ipaddr class in your environment in order to use it — it is not loaded by default.

for ip in API_KEYS[params[:key]] do
      if IPAddr.new(ip).include?(IPAddr.new(request.remote_ip))

Here we are iterating over all the ip ranges and IPs that are in the hash associated with the key provided by the incoming params. For each IP range/address, we check to see if the requesting IP is included in the range. Fortunately, IPAddr is smart enough to know that a single IP is included within itself, so using IP addresses as opposed to ranges is perfectly acceptable.

If the IP is within one of the ranges, we perform our find. If not, we render the blank action. The blank action has no code, and blank.rxml is empty as well. This way, if an illegal request comes in, we just do nothing — I don’t care to cater to people trying to access the API that shouldn’t be.

The XML part was tricky at first, but it actually turned out to be far simpler than I thought. Once you are in an XML view, Rails already provides an xml object for you, without your needing to instantiate anything. This seemed contrary to a few tutorials I had found. Here’s some pseudocode to represent what I did:

xml.instruct! :xml, :version => "1.0"
xml.barns do
  xml.count @count
  xml.requestor request.remote_ip
  xml.api_key params[:key]
  xml.requested_location params[:zip]
  xml.requested_distance params[:dist]
  for b in @barns do
    xml.barn do
      xml.name b.name
      xml.address b.address
      xml.distance b.distance
      xml.phone b.phone
      xml.website b.website
      xml.url "http://www.ridingresource.com/contact/show/#{b.url}"
    end
  end
end

Because XML is a markup language that requires properly opened and closed “containers,” similar to HTML, you can see there are a lot of do/end blocks. The main do/end block is the barns block, which contains all of our results. I also was kind enough to echo back some of the things the API user asked of us, as well as the number of results we found.

xml.barns do
  for b in @barns do
    xml.barn do
      xml.something value
    end
  end
end

As we iterate over each result in @barns, we want to create an xml container for each one. The ease of Rails/Ruby here is awesome — you simply specify the container name, and then the value: xml.something value
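Under the hood, this container-name trick is just dynamic method dispatch. Here’s a toy sketch of the idea in plain Ruby: this is not the real Builder library that Rails uses, just an illustration of how any method call can become an XML container:

```ruby
# Minimal Builder-style XML object: any method call becomes a tag.
# A block nests children; a plain value becomes the element's text.
class TinyXml
  attr_reader :out

  def initialize
    @out = ""
  end

  def method_missing(tag, value = nil)
    if block_given?
      @out << "<#{tag}>"
      yield
      @out << "</#{tag}>"
    else
      @out << "<#{tag}>#{value}</#{tag}>"
    end
    self
  end

  def respond_to_missing?(*_args)
    true
  end
end

xml = TinyXml.new
xml.barn do
  xml.name "Camp Creek Stables"
  xml.phone "7709252402"
end
puts xml.out
# => <barn><name>Camp Creek Stables</name><phone>7709252402</phone></barn>
```

The real Builder gem does a lot more (escaping, indentation, attributes), but the core mechanism is the same method_missing pattern.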

The resultant XML output looks something like this:

<?xml version="1.0" encoding="UTF-8"?>
<barns>
  <count>10</count>
  <requestor>aaa.bbb.ccc.ddd</requestor>
  <api_key>key</api_key>
  <requested_location>30093</requested_location>
  <requested_distance>10</requested_distance>
  <barn>
    <name>Camp Creek Stables</name>
    <address>4150 Arcadia Industrial Circle SW  Lilburn GA, 30047</address>
    <distance>4.0317320843575</distance>
    <phone>7709252402</phone>
    <website></website>
    <url>http://www.ridingresource.com/contact/show/camp-creek-stables</url>
  </barn>
  <!-- nine more <barn> entries trimmed -->
</barns>

As you can see, it can be pretty easy to build an XML-returning API using Ruby on Rails. While this certainly is by no means a tutorial, it can provide some insight if you are a little stuck.

If you are interested in getting access to the Riding Resource API, please be sure to contact me at [email protected]


Fun with jqModal, flash, sending email and submission forms in Rails

As many of you may already know, Riding Resource has been up and running since January 1st, and we’re chugging right along. One thing we realized we wanted was to have people be able to send us their contact information to sign up for a mailing list.

Because we haven’t implemented anything like Constant Contact or Mailchimp, we just wanted a submission form to collect the information and receive it via email. We didn’t want to require people to compose an email themselves. Because we are anal retentive and want everything to look pretty, we also decided that it would be really nice to put the form in a modal.

First step – using modals. Since Riding Resource already makes use of a good bit of jQuery and other jQuery plugins, I thought it would be rather easy to use a modal plugin for jQuery. One such plugin happens to exist in jqModal. Its usage is rather simple but, in the case of Riding Resource, required a couple of tricks.

First, because of the way that Ruby on Rails works, I wanted to keep the modal for the submission form on the view for the action that would actually spawn the modal. Unfortunately, this caused a problem with Firefox 2 and the way the layers are handled. This was solved by using the “toTop” directive in the declaration of the modal. I also only wanted the modal to trigger when the links to sign up were clicked. This resulted in the following:

$('#mailing-modal').jqm({
  trigger: '.mailing',
  toTop: true
})

This brought up the modal just fine. But what to put in it? I had never built a contact form before, although I’ve obviously built some very complex forms. A quick Google search pointed me to this thread about just such a thing. A little bit of tweaking and I had built just the form I needed.

Unfortunately, I realized quickly that after the form was submitted, the user didn’t get any notification. Riding Resource had not had any need for Rails flash messages until this point, so what were we going to do with them? Since we were already using jqModal, why not continue in that trend? Popping up a friendly modal letting the user know their submission was received would surely do the trick. But this brought its own pitfall – how to make the modal only display when needed? The following snippet helped solve that problem:

$('#flashes').jqm({
  trigger: '#flashes'
});
<% if flash[:notice] || flash[:warning] || flash[:error] %>
  $('#flashes').jqmShow();
<% end %>

In this case, we are declaring that the #flashes div is a jqModal, and that it should only trigger on itself. This prevents it from popping up when other jqModal triggers are clicked (all modals will pop up by default). The second thing we did was force this modal to show only if the different flashes exist. The div for this looks like so:

<div id="flashes" class="jqmWindow">
  <%= display_standard_flashes %>
  <p><a href="#" class="jqmClose"><strong>Close</strong> X</a></p>
</div>

In case you’re wondering what display_standard_flashes is, it’s a handy helper that I happened to pick up from dZone here.

That’s pretty much all there was to it. While it took several hours to get all of this figured out, now that it’s done, it should be much easier to do submission forms using jqModal in the future.


Cleaning the sessions database store with a rake task in Rails

So, for one reason or another, I am using the built-in db store for sessions on the RidingResource site. I was about to embark on a path to utilizing sessions to help maintain the search selections across the site, and I noticed that the sessions database was full of… junk.

Rails is nice enough to provide us with a number of great built-in ways to handle sessions. Unfortunately, it is not nice enough to clean up after itself. I noticed that there is a built-in rake task (rake db:sessions:clear), but this actually just nukes the whole sessions table. Now, granted, I could run this as a cron at 4:00AM and it would probably not affect any users. But, I figured I should design a way to simply not affect any users regardless.

I found someone else who was trying to clean up session data, and they figured out a MySQL query that works: DELETE FROM sessions WHERE updated_at < DATE_SUB(CURDATE(), INTERVAL 30 DAY); This deletes any sessions that haven’t been updated in 30 days. For our app, I decided that a 2-day interval is probably sufficient. The other cute benefit is that someone who comes back within two days will probably see their last search retained; that may or may not be desirable in the long run.
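The retention rule the SQL expresses can be sketched in plain Ruby, just to illustrate which rows fall on which side of the cutoff (this is an illustration of the comparison, not code from the app):

```ruby
# Anything not updated within the interval falls before the cutoff
# and would be removed by the DELETE; anything newer survives.
SECONDS_PER_DAY = 24 * 60 * 60
cutoff = Time.now - 2 * SECONDS_PER_DAY

three_days_old = Time.now - 3 * SECONDS_PER_DAY
one_day_old    = Time.now - 1 * SECONDS_PER_DAY

puts three_days_old < cutoff  # => true  (stale; would be deleted)
puts one_day_old < cutoff     # => false (fresh; kept)
```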

Since I had just written a rake task the other day to handle some stuff relating to paid advertisers and what not, I realized that I already had a tool that could (potentially) be used to process this MySQL query — rake. However, that required figuring out a way to run a MySQL query directly with rake.

Again, Rails to the rescue — it provides us with ActiveRecord::Base.connection.execute(sql). So, I created a rake task to do what was needed.

# clearing out sessions older than 2 days because we are using sessions to store
# search information to maintain the search preferences in the show action
desc "This task will delete sessions that have not been updated in 2 days."
task :clean_sessions => :environment do
  ActiveRecord::Base.connection.execute("DELETE FROM sessions WHERE updated_at < DATE_SUB(CURDATE(), INTERVAL 2 DAY);")
end

Combined with a cron job, this should work splendidly.
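For the cron side, an entry along these lines would run the task nightly at 4:00 AM. The application path, environment name, and log destination are placeholders; adjust them for your deployment:

```shell
# m h dom mon dow  command
0 4 * * * cd /path/to/app && rake clean_sessions RAILS_ENV=production >> log/clean_sessions.log 2>&1
```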

