
Rocket.chat on OpenShift 3

I’ve been on a tear recently getting various applications to run on the next-generation OpenShift Online preview. Yesterday I did some work with Node.JS and Gulp, and today I decided to give Rocket.chat a try, since we’re possibly going to use it as part of some cool demos.

Rocket.chat describes itself as “The Ultimate Open Source Web Chat Platform”. It’s 100% open source, and is built using Node.JS and Meteor, among other technologies. While the folks at Rocket.chat make a Docker image available, I generally don’t like to use third-party images: they’re not usually built using best practices, they require a lot of futzing to make work, and they often use operating systems that aren’t Red Hat-friendly. Rocket.chat also provides a downloadable tarball release, which, to the best of my understanding, is the “output” of the Meteor build system. Looking at the installation instructions for Rocket.Chat on CentOS, it appeared that you could just run something like the following to get the requirements installed:

cd Rocket.Chat/programs/server
npm install

Then you simply export your environment variables and execute Node.JS with the application:

export PORT=3000
export ROOT_URL=http://your-host-name.com-as-accessed-from-internet:3000/
export MONGO_URL=mongodb://localhost:27017/rocketchat
node main.js

I’ve seen all of this stuff before. The requirements installation looks a lot like the normal assemble process with a slight change, so I figured I would give our good friend source-to-image another try. Yesterday’s article on Node.JS and Gulp talked about customized assemble scripts; please visit that article for a quick refresher, and check out the source-to-image documentation, too.

Here’s how I walked through getting this app to run on OpenShift.

Make it Build

Since I was going to use source-to-image for this application, I needed a Git repository to build against. The tarball from Rocket.chat contains the release, so I simply put that into a GitHub repository: https://github.com/thoraxe/rocket-built

Since the Node.JS package installation required being in a different folder, I knew I had to customize the assemble script. You can find the whole assemble script here, but the relevant changes are just:

cd programs/server
npm install
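
For context, here’s a minimal sketch of how that change fits into a complete assemble script. This assumes the script otherwise mirrors the stock Node.JS assemble; the linked script is the authoritative version:

#!/bin/bash
set -e

# copy the application source into place, as the stock assemble does
cp -Rf /tmp/src/. ./

# Rocket.chat's server dependencies live under programs/server, so install
# from there instead of the repository root
cd programs/server
npm install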

I was able to fire up a build using the included OpenShift Node.JS 0.10 builder, and everything worked so far.

Make it Run

Not so fast. As I indicated in the introduction, Rocket.chat wants to be instantiated by executing Node.JS against the application file. However, the default run script for Node.JS uses this:

# Runs the nodejs application server. If the container is run in development mode,
# hot deploy and debugging are enabled.
run_node() {
  echo -e "Environment: \n\tDEV_MODE=${DEV_MODE}\n\tNODE_ENV=${NODE_ENV}\n\tDEBUG_PORT=${DEBUG_PORT}"
  if [ "$DEV_MODE" == true ]; then
    echo "Launching via nodemon..."
    exec nodemon --debug="$DEBUG_PORT"
  else
    echo "Launching via npm..."
    exec npm run -d $NPM_RUN
  fi
}

Just as we overrode the assemble script by placing one in our repo, we can do the same with the run script. Here’s the entire script, but this is the relevant change to the function:

run_node() {
  echo -e "Environment: \n\tDEV_MODE=${DEV_MODE}\n\tNODE_ENV=${NODE_ENV}\n\tDEBUG_PORT=${DEBUG_PORT}"
  if [ "$DEV_MODE" == true ]; then
    echo "Launching via nodemon..."
    exec nodemon --debug="$DEBUG_PORT"
  else
    echo "Launching..."
    exec node main.js
  fi
}

This probably won’t work in the debug case, but I wasn’t trying to do that right now. We can fix that later!

Still Not Quite…

With the change to the run script, we could now get the application to run… sort of. If you look back at the original instructions, you’ll see that Rocket.chat expects certain environment variables to be set; if they’re not, it will fail to start. Fortunately, OpenShift makes it easy to manage environment variables that get automatically injected into a container. Most of the variables are actually related to the database. I launched a MongoDB instance using the OpenShift UI, and then looked at the user, password and other variables that were auto-generated for me.
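
If you prefer the CLI to the UI, something like the following would stand up the database. This is a sketch: mongodb-ephemeral is the stock OpenShift template, and the credential values here are purely illustrative.

# instantiate the stock MongoDB template with known credentials instead of
# letting the template auto-generate them
oc new-app mongodb-ephemeral \
    -p MONGODB_USER=rocket \
    -p MONGODB_PASSWORD=secret \
    -p MONGODB_DATABASE=rocketchat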

Then, in the OpenShift UI, I was able to edit the Rocket.chat deployment and add the environment variables I needed. Yeah, I had to use a little YAML-fu to get things right. The other option would have been to delete all of the Rocket.chat stuff and then re-create the build, specifying the desired environment variables from the beginning. The OpenShift UI team is constantly improving the user experience, and I fully expect to have better control over environment variables from the UI in an upcoming release.
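
For the record, the oc CLI can also inject the variables without any YAML editing. Here’s a sketch; the deployment config name and the values are assumptions based on my setup, and note that the user and password ride along inside the connection string:

# add Rocket.chat's required variables to the deployment config; with the
# default config-change trigger, a new deployment rolls out automatically
oc env dc/rocket-built \
    PORT=8080 \
    ROOT_URL=http://rocket-built.example.com/ \
    MONGO_URL=mongodb://rocket:secret@mongodb:27017/rocketchat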

Ready, Set, Chat!

Remember that you will need to provide the user and password in the environment variable that contains the database connection string (MONGO_URL, as shown above). Once you’ve got all that set, your Rocket.chat instance should be up and running and usable!


Node.JS, Gulp and OpenShift 3 – Custom assemble script FTW

I’m heading to India for workshops with some of Red Hat‘s big SI partners, and one of them requested some use case information around Node.JS and Gulp on OpenShift 3. Since I have never worked with any of these technologies, I had to do some research.

Gulp is kinda-sorta a build… uh… system… for Node.JS. It supports a number of plugins and other things that can be used during the build phase to produce your Node application. Seems simple enough. However, OpenShift’s source-to-image process for Node doesn’t know about Gulp out of the box. So, a little bit of customization is required. And by “a little bit” I mean two lines. First, a refresher.

OpenShift 3 introduces the concept of source-to-image. Source-to-image is the process OpenShift uses to combine your code with an existing Docker image that already has a runtime installed. Red Hat calls this runtime image a “builder”. I’m using one of the Node.JS images from Red Hat’s registry:

rhscl/nodejs-4-rhel7

The build process involves a script called assemble. Here’s the Node.JS assemble script that comes with the Node.JS builder image:

#!/bin/bash
 
# Prevent running assemble in builders different than official STI image.
# The official nodejs:4.4-onbuild already run npm install and use different
# application folder.
[ -d "/usr/src/app" ] && exit 0
 
set -e
 
# FIXME: Linking of global modules is disabled for now as it causes npm failures
#        under RHEL7
# Global modules good to have
# npmgl=$(grep "^\s*[^#\s]" ../etc/npm_global_module_list | sort -u)
# Available global modules; only match top-level npm packages
#global_modules=$(npm ls -g 2> /dev/null | perl -ne 'print "$1\n" if /^\S+\s(\S+)\@[\d\.-]+/' | sort -u)
# List all modules in common
#module_list=$(/usr/bin/comm -12 <(echo "${global_modules}") <(echo "${npmgl}") | tr '\n' ' ')

# Link the modules
#npm link $module_list

echo "---> Installing application source"
cp -Rf /tmp/src/. ./
 
if [ ! -z $HTTP_PROXY ]; then
        echo "---> Setting npm http proxy to $HTTP_PROXY"
        npm config set proxy $HTTP_PROXY
fi
 
if [ ! -z $http_proxy ]; then
        echo "---> Setting npm http proxy to $http_proxy"
        npm config set proxy $http_proxy
fi
 
if [ ! -z $HTTPS_PROXY ]; then
        echo "---> Setting npm https proxy to $HTTPS_PROXY"
        npm config set https-proxy $HTTPS_PROXY
fi
 
if [ ! -z $https_proxy ]; then
        echo "---> Setting npm https proxy to $https_proxy"
        npm config set https-proxy $https_proxy
fi
 
echo "---> Building your Node application from source"
npm install -d
 
# Fix source directory permissions
fix-permissions ./

The above script is pretty simple. It basically just sets some config options and then runs:

npm install -d

In your source code repository, you can create a folder, .sti/bin, and insert your own assemble script in it. When the source-to-image process is executed, it will run your assemble script instead of the built-in one. As you can see, the assemble script is simply a Bash script in this case. It could be a script written in any locally executable language. Probably even in Node!
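
The mechanics amount to just this (the source path in the copy is illustrative; what matters is that the script is named assemble and is executable):

mkdir -p .sti/bin
cp /path/to/your/assemble .sti/bin/assemble
chmod +x .sti/bin/assemble
git add .sti/bin/assemble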

I am using a forked version of a Node+Gulp application written by Grant Shipley located here. Grant didn’t design the app to run on OpenShift, so I simply took it and added an assemble script. You can find my repository here: https://github.com/thoraxe/nodebooks

Since the assemble script is just a Bash script, we can actually run scripts from scripts. The built-in assemble script is located in the folder:

/usr/libexec/s2i/

Since Gulp itself is written in Node, we can launch our Gulp task with Node. Here’s the entirety of my customized assemble script:

#!/bin/bash
# vim: set ft=sh:
 
# original assemble
/usr/libexec/s2i/assemble
 
# gulp tasks
node node_modules/gulp/bin/gulp.js inject

That’s all there is to it! The script above calls the original assemble that’s built into the image, which installs the Node.JS dependencies, and that’s what gives us Gulp. Then we use Node.JS to execute the locally-installed Gulp and run the inject task. Since the Gulp tasks are very specific to my application, this arrangement makes sense: not only does Gulp allow us to treat configuration as code, but as we create additional Gulp tasks, we can change which ones run simply by updating the assemble script.
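
If you want to try this exact setup, one way to kick off the build from the CLI is a sketch like the following (it assumes the rhscl/nodejs-4-rhel7 builder image is available to your cluster):

# build the forked repo with the Node.JS builder image; the custom
# .sti/bin/assemble in the repository is picked up automatically
oc new-app rhscl/nodejs-4-rhel7~https://github.com/thoraxe/nodebooks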

Neat, huh? If you want to try OpenShift, head on over to www.OpenShift.com.


Disconnected Ruby demos on OpenShift 3

I’m headed to China soon, and the Great Firewall can present issues. S2I builds on OpenShift 3 generally require internet access (for example, pulling from GitHub or installing Ruby gems), so I wanted to see what it would take to go fully disconnected. It’s actually surprisingly easy. For reference, my environment is the same as the one described in the OpenShift Training repository. I am using KVM and libvirt networking, and all three hosts are running on my laptop. My laptop’s effective IP address, as far as my KVM VMs are concerned, is 192.168.133.1.

Also, I have pre-pulled all of the required Docker images into my environment, as the training documentation suggests. This means that OpenShift won’t have to pull any builder or other images from the internet, so we can truly operate disconnected.

First, an HTTP-accessible git repository is currently required for using S2I with OpenShift 3. A Google search for a simple git HTTP server revealed a post entitled, unsurprisingly, Simple Git HTTP server. In it, the author, Elia, suggests using Ruby’s built-in HTTP server, WEBrick:

git update-server-info # this will prepare your repo to be served
ruby -run -ehttpd -- . -p 5000

One thing to note: you must run the update-server-info command after every commit in order for WEBrick to actually serve the latest commit. I figured this out the hard way. On Fedora, as a regular user, you usually want to use a high port, so I chose a really high one: 32768. I also had to open the firewall. Fedora uses firewalld by default; your mileage may vary:

firewall-cmd --direct --add-rule ipv4 filter INPUT 0 -p tcp -m tcp --dport 32768 -m conntrack --ctstate NEW -j ACCEPT

With the firewall open, the git repo lives at http://192.168.133.1:32768/.git. Not too shabby! Next, we need to make the Ruby gems accessible via HTTP locally as well. Some Google-fu again brings us to something useful, in this case Run Your Own Gem Server. While the article indicates that you can just run gem server, I found that this produced strange results, and I filed bug #1303. I was using RVM in my environment due to some other project work, so, in the end, my gem server invocation looked like:

gem server --port 8808 --dir /home/thoraxe/.rvm/gems/ruby-2.1.2 --no-daemon --debug

Of course, this is going to serve gems from your computer, which means the gems have to actually be installed there in the first place. In the case of the Sinatra example, you would have to install the sinatra gem, which brings its dependencies along. Naturally, this requires that you have Ruby and RubyGems installed, but you already have that, right?
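
For the record, that install is just:

# install the pinned gem (and its dependencies) into the gem home that
# gem server --dir points at
gem install sinatra --version 1.4.6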

Running the gem server also requires opening a firewall port:

firewall-cmd --direct --add-rule ipv4 filter INPUT 0 -p tcp -m tcp --dport 8808 -m conntrack --ctstate NEW -j ACCEPT

You now have gems accessible at http://192.168.133.1:8808. Note again that these firewall changes will not be permanent; you would need the --permanent option to persist them.
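
For reference, the persistent form of the same two rules is just the addition of that flag:

firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 0 -p tcp -m tcp --dport 32768 -m conntrack --ctstate NEW -j ACCEPT
firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 0 -p tcp -m tcp --dport 8808 -m conntrack --ctstate NEW -j ACCEPT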

At this point you have:

  • A git http server running on port 32768
  • A gem server running on port 8808
  • Open firewall ports
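
As a quick sanity check from one of the VMs, both endpoints should answer over plain HTTP. This is a sketch; the info/refs file is what the “dumb” git HTTP protocol serves after git update-server-info:

# the git repo's advertised refs
curl http://192.168.133.1:32768/.git/info/refs

# the gem server's index page
curl http://192.168.133.1:8808/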

In your OpenShift 3 environment you can now create a new application whose repository is the git HTTP server you set up with WEBrick. Again, that’s http://192.168.133.1:32768/.git. But if you just do that, your build will fail without internet access, because a standard-looking Gemfile defines https://rubygems.org as its source. For example, the Sinatra example that OpenShift provides:

source 'https://rubygems.org'
 
gem 'sinatra', '1.4.6'

Without internet access, we’ll never reach https://rubygems.org, so we change the Gemfile’s source line to point at our new gem server at http://192.168.133.1:8808. Feel free to clone the example repository and try it yourself. Remember, once you change the Gemfile, you will need to run git update-server-info and then (re)start your WEBrick server. Also, be sure you are doing this on the master branch, or you’ll need to point OpenShift at whatever branch you decided to use. This totally tripped me up a few times.
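
Concretely, the edit-and-republish loop looks something like this, run inside the clone being served (the sed pattern assumes the source line shown above):

# point the Gemfile at the local gem server
sed -i 's|https://rubygems.org|http://192.168.133.1:8808|' Gemfile

# commit on master, then regenerate the metadata WEBrick serves
git add Gemfile
git commit -m "use local gem server"
git update-server-info
# (re)start the WEBrick server from earlier if it is not already running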

At this point, you should be able to do your build in OpenShift. In your build log you will see something like the following (ellipses indicate truncated lines):

...
I0703 19:44:33.264627       1 sti.go:123] Performing source build from http://192.168.133.1:32768/.git
...
I0703 19:44:34.010878       1 sti.go:388] ---> Running 'bundle install '
I0703 19:44:34.339680       1 sti.go:388] Fetching source index from http://192.168.133.1:8808/
I0703 19:44:35.019941       1 sti.go:388] Resolving dependencies...
I0703 19:44:35.281696       1 sti.go:388] Installing rack (1.6.4) 
I0703 19:44:35.437759       1 sti.go:388] Installing rack-protection (1.5.3) 
I0703 19:44:35.617280       1 sti.go:388] Installing tilt (2.0.1) 
I0703 19:44:35.841344       1 sti.go:388] Installing sinatra (1.4.6) 
I0703 19:44:35.841381       1 sti.go:388] Using bundler (1.3.5) 
I0703 19:44:35.841390       1 sti.go:388] Your bundle is complete!
I0703 19:44:35.841395       1 sti.go:388] It was installed into ./bundle
I0703 19:44:35.862289       1 sti.go:388] ---> Cleaning up unused ruby gems

And your application should work! Well, assuming all the rest of your OpenShift environment is set up correctly…