Kubernetes cluster made easy with Vagrant and CoreOS

Lately, I’ve been playing with Kubernetes. But it hasn’t been an easy ride! The people running the project are really awesome and supportive, but they simply can’t keep documentation up to date for the myriad of providers that can be used to provision a cluster.

Given this, I’ve tried to make it easy for people like me, the day-to-day developer who just wants to try out this amazing technology or develop apps on top of it. And so I did. Here you’ll find a quick and simple way to bootstrap a Kubernetes cluster on top of Vagrant (VirtualBox) and CoreOS.
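
For reference, bringing the cluster up should look roughly like this. The repository name below is an assumption for illustration; use whatever the project README actually points you to:

```shell
# Clone the Vagrant + CoreOS cluster recipe (repository name assumed
# for illustration purposes) and bring the machines up.
git clone https://github.com/pires/kubernetes-vagrant-coreos-cluster.git
cd kubernetes-vagrant-coreos-cluster

# Provisions the master and minion VMs on VirtualBox.
vagrant up
```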

Give it a spin and let me know what you think.

Clustering Hazelcast on Kubernetes

I’ve had experience clustering Hazelcast on Google Compute Engine before, but now, with the advent of containers, Kubernetes is becoming a standard for app/container orchestration, and my previous effort has therefore become somewhat obsolete – unless you’re limited to VMs and must use TCP-based clustering for Hazelcast, since multicast traffic may be blocked (it is on most cloud providers).

Given this, I’ve assembled a Dockerfile that runs a small app (hazelcast-kubernetes-bootstrapper) on boot. This app queries the Kubernetes API to discover all nodes in the Kubernetes cluster that are acting as Hazelcast nodes, retrieves their IP addresses, and configures and instantiates Hazelcast with a TCP/IP join configuration accordingly.
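
In case it helps to picture the discovery step, here’s a rough sketch in shell form. The JSON below is a made-up sample of the kind of response an endpoints query might return; the real bootstrapper talks to the live Kubernetes API over HTTP:

```shell
# Hypothetical sample of an endpoints response; the real app queries
# the live Kubernetes API instead of using a canned string.
response='{"endpoints":["10.244.1.3:5701","10.244.2.4:5701"]}'

# Extract the member IP:port pairs so they can be fed into
# Hazelcast's TCP/IP join configuration.
members=$(echo "$response" | grep -Eo '[0-9.]+:[0-9]+')

for m in $members; do
  echo "member: $m"
done
```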

Please test it and give me feedback. All you need is Docker and a Kubernetes cluster (I’ve tested it locally and in Google Container Engine).

More information, including step-by-step instructions, can be found at https://github.com/pires/hazelcast-kubernetes.

Cheers,
Paulo Pires

RESTful web services with Jersey 1.12+ (JSON) and Glassfish 3.1.1+

Lately, I’ve been playing with Jersey for easy and fast development of RESTful services. My focus so far has been implementing CRUD operations on top of a Glassfish container, with JSON support.
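
For context, JSON support in Jersey 1.x is typically switched on through POJO mapping in the deployment descriptor. A minimal sketch (the servlet name and resource package are placeholders):

```xml
<!-- web.xml fragment: Jersey 1.x servlet with JSON/POJO mapping enabled -->
<servlet>
  <servlet-name>jersey</servlet-name>
  <servlet-class>com.sun.jersey.spi.container.servlet.ServletContainer</servlet-class>
  <init-param>
    <!-- Package to scan for resource classes (placeholder) -->
    <param-name>com.sun.jersey.config.property.packages</param-name>
    <param-value>com.example.rest</param-value>
  </init-param>
  <init-param>
    <!-- Map JSON to/from POJOs -->
    <param-name>com.sun.jersey.api.json.POJOMappingFeature</param-name>
    <param-value>true</param-value>
  </init-param>
</servlet>
```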

I’m sharing these experiments at my Github page. Feel free to take a look, copy and eventually create pull requests for your own additions/optimizations.

Apache Shiro 1.2.0+ (JDBC Realm) and Glassfish 3.1.1+

Lately, I’ve been experimenting with Apache Shiro for securing my Java EE applications. My focus so far has been to implement an authentication mechanism backed by a MySQL database and to be run on top of a Glassfish container.

I’m sharing these experiments at my Github page. Feel free to take a look, copy and eventually create pull requests for your own additions/optimizations.
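
For reference, a JDBC realm along these lines is usually wired in shiro.ini. A minimal sketch, with placeholder hostname, credentials and database name:

```ini
[main]
# MySQL-backed DataSource (server, credentials and database are placeholders)
ds = com.mysql.jdbc.jdbc2.optional.MysqlDataSource
ds.serverName = localhost
ds.user = shiro
ds.password = secret
ds.databaseName = myapp

# Shiro's built-in JDBC realm, pointed at the DataSource above
jdbcRealm = org.apache.shiro.realm.jdbc.JdbcRealm
jdbcRealm.dataSource = $ds
```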

Glassfish cluster installation and administration on top of SSH + public key

Managing Glassfish instances in a non-centralized way can prove to be a real pain in the ass, even with as few as four (4) nodes. One solution, much simplified in version 3.1 (it existed in version 2.x, but you had to deploy node-agents), is using SSH as a channel to perform administration operations on remote instances. This post is a short but working version of the steps described in this blog post.

Pre-requisites

  • Linux system (actually, Glassfish runs on every platform with a suitable JRE)
  • SSH server up and running
  • Have a user configured in all desired nodes (in our case, dummy)
  • Glassfish 3.1.1, as version 3.1.2 forces secure-admin mode, which brought several issues to our current testbed. We actually had to reinstall and reconfigure the entire scenario. So if you’re willing to install/upgrade to a newer version, do it at your own risk. You’ve been warned!
  • Be sure that you have a clean Glassfish install on what is going to be your Domain Admin Server (DAS).

Setup authentication

You can authenticate your DAS against your nodes with one of two methods:

  1. Password authentication
  2. Public key authentication

We’ll choose the latter, since it’s easier to maintain. Let’s start by assuming a node named node1.

$GLASSFISHROOT/bin/asadmin setup-ssh node1

If you don’t have a key-pair, the command above will give you the option to create one. If this is your case, be ready to provide your user password. Repeat the same command when the process is finished.

Also, be sure that the key is present in the file /home/dummy/.ssh/authorized_keys2 of the node node1, since this script will wrongly put it into authorized_keys, which is used by SSH protocol version 1 and not version 2, the most widespread.

Execute:

ssh dummy@node1
cd ~/.ssh
cat authorized_keys >> authorized_keys2
rm authorized_keys

Install Glassfish remotely

asadmin install-node --installdir /home/dummy/ node1

Create SSH nodes

asadmin create-node-ssh --nodehost node1 --installdir /home/dummy node1

Configure cluster and deploy applications

Right now, you’re ready to create one or more clusters. That is, if you have executed the steps above on a handful of nodes, of course! Just point your browser to http://das_address:4848 and head to Clusters in the left-most pane (tree).

Be sure to set up a new configuration for each cluster and include some nodes in it. The rest of the process is quite trivial. Just remember that when creating resources such as JDBC, JMS and the like, you should always define the target, which may be a node, a cluster or several clusters.
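
If you prefer the command line over the admin console, the same steps can be sketched with asadmin. Cluster and instance names below are placeholders:

```shell
# Create a cluster, which gets its own configuration
asadmin create-cluster cluster1

# Create a clustered instance on a previously registered SSH node
asadmin create-instance --cluster cluster1 --node node1 instance1

# Start every instance that belongs to the cluster
asadmin start-cluster cluster1
```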

Troubleshooting

  1. asadmin commands over SSH fail – a workaround is to repeat the Setup authentication step.

JMS in a container-managed context – learned the hard way edition

Hi fellow geeks,

Here I am living a new professional experience, this time in a company whose core business is audio/video streaming and real-time audio recognition. I’m starting on a couple of very interesting projects. One deals with real-time speaker recognition, while the other aims to be an automatic music-matching service powered by a revolutionary (or so they say) algorithm that calculates similarities and determines distances within a large universe of tracks.

Both projects share some algorithms and, most of all, workflow. Both have to work in a distributed way, and by that I mean having multiple different jobs running in parallel on several machines, exhausting each machine’s processing units (CPUs/cores) while persisting the resulting data to a distributed filesystem.
The choices here were pretty obvious to us: JMS and Hadoop FS.

JMS is an API that is best described by the publish-subscribe pattern. What we’ll basically have is a bunch of Message-driven Beans (MDBs) per machine – let’s call it a minion – that receive messages with jobs to process. These jobs are sent from another application – let’s call it the master – that’s responsible for load-balancing the queueing of the aforementioned messages, maintaining a state machine, etc.

Now, if you’ve worked with JMS before, or at least with sockets, connections and sessions, you’ll know for sure that reusing such facilities is mandatory: provisioning a new physical connection/session or acceptor on every client request will unleash hell on you soon enough, as not only is it heavy on resources but, worse, it will cripple your application’s throughput. You won’t be getting too many pats on the back, that’s for sure 😉

And I faced such issues very recently, only because I failed to understand how things work in a container-managed environment. Let me explain…

JMS objects were designed to be reused, right? Right. Now, what about an application running on an application server? Imagine, for instance, that you have an EJB acting as a service to send messages.

I thought “hell, yeah! EJBs allow me to do some start-up and tear-down operations (@PostConstruct and @PreDestroy), so that’s where I’m going to manage the JMS objects that I wish to reuse”. Did you think this too? Well, you’re wrong, in a way. You don’t need this! The container does it for you… but unfortunately it’s not transparent to a developer at first. Actually, it may bring issues with session transaction management as well. And that’s why I had to look further for an explanation! Here’s what I got.

Hope this will help others.

Cheers!!

Debug android-maven-plugin apps in Eclipse with DDMS

Lately, I’ve been developing Android applications with Maven support and, while it’s rather easy to mount a mature development environment with Eclipse, the usage of android-maven-plugin has brought some integration issues when debugging.

Usually, I do most of my Maven and debugging stuff in a terminal console. But others will prefer to use Eclipse! And I can understand why, since its DDMS perspective is so powerful and easy to use. Here’s how you can do it:

  1. mvn clean package
  2. Deploy the target/xxx.apk to the device
  3. Open DDMS perspective in Eclipse
  4. Select the process you want to debug/trace
  5. There you go!
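
Steps 1 and 2 from a terminal, for the record (the APK name is whatever your build actually produces):

```shell
# Build the APK with the android-maven-plugin
mvn clean package

# Install it on the connected device (-r reinstalls, keeping app data)
adb install -r target/xxx.apk
```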

From profiling to thread debugging or simply adb logcat, you’ll have it all!

Don’t forget to configure permissions for your user to access the device. Ever had those “?????” entries when issuing adb devices, stating you have no permissions? Here’s how you can fix it (in Ubuntu-based distros, at least):

  1. Run lsusb
  2. Check the line for your device, such as Bus 001 Device 008: ID 0bb4:0cab High Tech Computer Corp.
  3. Create a new udev rule by executing sudo nano /etc/udev/rules.d/51-android.rules, with the contents seen below the bullet list
  4. Save file and issue sudo service udev restart
SUBSYSTEM=="usb", SYSFS{idVendor}=="0bb4", MODE="0666"
Reconnect your device and you’re done!

Debugging running APK in Eclipse DDMS

Xubuntu brought back the happy Linux user in me

I’ve been an Ubuntu user for some years now and I’ve always enjoyed the “easy-mode” Debian flavor it’s got. But after delaying my workstation upgrade as much as I could, I knew the day would come when I’d be forced to move to Gnome 3 or Unity. The day came, and it really sucked!
I’m not going to argue about why these new ways of seeing the desktop bred so much disdain in me, but I can assure you I felt quite attracted to the dark side (aka MacOS X) for my daily use and development. But then I heard about Xubuntu…

Xubuntu is Ubuntu-based and features XFCE, an old friend of mine “who” I used to have fun with back in the days when my hardware sucked – which is totally not the case now, since I’m relying on an i7 3.4GHz + 16GB DDR3 + an OCZ Vertex 3 SSD.

And so it was: I downloaded the 64-bit alternate ISO, burned it, and after 20 minutes my system was up and running. Boy, did I miss this… you’ve come a long way, XFCE, and you still rock!
Despite lacking some of the integration I was used to in my now-gone Gnome days, the simplicity of this window manager is making me really happy. In case you’re feeling the same way about Gnome 3 and/or Unity, do yourself a favor and give it a try!

Here are some hints you may find useful. I’ll be updating this list!

  • Two DVI monitors side by side with xrandr

# Dual monitors configuration on Xubuntu

# Monitor order
xrandr --output DVI-I-1 --left-of DVI-I-2

# Tip: to see each monitor's available options, run 'xrandr' in a shell

# Resolutions
xrandr --output DVI-I-1 --mode 1680x1050 --rate 60.0
xrandr --output DVI-I-2 --mode 1680x1050 --rate 60.0

# Primary monitor
xrandr --output DVI-I-1 --primary

  • Don’t quit Pidgin when you press the close button

Don’t disable the libnotify integration plug-in (if it’s not enabled by default, enable it). You can, however, turn off all of this plug-in’s options.

  • Dropbox support for Thunar file-manager

execute sudo apt-get install libthunarx-2-dev
and then follow the instructions available here.

Gerrit + Jenkins in LDAP environment

Today, I got Gerrit integrated with Jenkins. Even though there’s good info on the web on how to get this beautiful couple working together, I couldn’t find an explanation of how to configure Gerrit SSH access for Jenkins when Gerrit authenticates its users against an LDAP service.

First of all, the Gerrit instance I’m working on authenticates against the company LDAP directory. Nothing new here, as LDAP users can log in successfully. Now, the thing is, the Gerrit process is not running as an LDAP user but rather as a local Unix one, and we need a Gerrit user (not in LDAP) with a public SSH key so that Jenkins can access the code review tool.

The confusion was set! How would I authenticate Jenkins without an LDAP user created for this sole purpose?! gerrit create-account is the way to go!

For this command to work, you must have an authenticated user in Gerrit with administrative privileges and public SSH key set.

First, let’s create a key for the user that Jenkins is going to use:

ssh-keygen -t rsa -b 2048

You should now have two new files: a private key and a public key. Never, ever give away the private key!! Assuming your recently created public key file is named id_rsa.pub and that you’ve got an xpto user configured in Gerrit as part of the Administrators group, let’s add the virtual user:

cat id_rsa.pub | ssh -p 29418 xpto@gerrit.example.com gerrit create-account --ssh-key - jenkins

It should be OK now! Just install the Gerrit Trigger Jenkins plug-in and configure it as described in the documentation. It won’t take more than two minutes before you’ve got Gerrit shaking hands with Jenkins 🙂