Clustering Hazelcast on Kubernetes

I’ve clustered Hazelcast on Google Compute Engine before, but with the advent of containers, Kubernetes is becoming the standard for app/container orchestration, and so my previous effort has become somewhat obsolete, unless you’re limited to VMs and have to use TCP-based clustering for Hazelcast, since multicast traffic may be blocked (it is on most cloud providers).

Given this, I’ve assembled a Dockerfile which runs a small app (hazelcast-kubernetes-bootstrapper) on boot. This app queries the Kubernetes API to discover all nodes in the [Kubernetes] cluster that are acting as Hazelcast nodes, retrieves their IP addresses, and configures and instantiates Hazelcast with TCP-based clustering accordingly.
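To give you an idea of the discovery step, here’s roughly the equivalent query done by hand with kubectl (the name=hazelcast label selector and the output template are illustrative assumptions, not necessarily what the bootstrapper uses internally):

# List the IP addresses of all pods labelled as Hazelcast members.
# The label selector and jsonpath template are illustrative only;
# the bootstrapper performs the equivalent lookup against the API server.
kubectl get pods -l name=hazelcast -o jsonpath='{.items[*].status.podIP}'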

Please test it and give me feedback. All you need is Docker and a Kubernetes cluster (I’ve tested it both locally and on Google Container Engine).

More information, including step-by-step instructions, can be found at https://github.com/pires/hazelcast-kubernetes.

Cheers,
Paulo Pires

Glassfish cluster installation and administration on top of SSH + public key

Managing Glassfish instances in a non-centralized way may prove to be a real pain in the ass, even with as few as four (4) nodes. One solution, greatly simplified in version 3.1 (it existed in version 2.x, but you had to deploy node-agents), is to use SSH as a channel for performing administration operations on remote instances. This post is a short but working version of the steps described in this blog post.

Prerequisites

  • Linux system (actually, Glassfish runs on every platform with a suitable JRE)
  • SSH server up and running
  • Have a user configured on all desired nodes (in our case, dummy); see the sketch after this list
  • Glassfish 3.1.1, as version 3.1.2 forces secure-admin mode, which brought several issues to our current testbed. In fact, we had to reinstall and reconfigure the entire scenario, so if you’re willing to install or upgrade to a newer version, do it at your own risk. You’ve been warned!
  • Be sure that you have a clean Glassfish install on what is going to be your Domain Administration Server (DAS).
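For reference, creating that user on a node and checking SSH access from the soon-to-be DAS could look like this (a minimal sketch, assuming the dummy user and a node named node1):

# On each node: create the administration user
sudo useradd -m dummy
sudo passwd dummy

# From the DAS: confirm the node is reachable over SSH
ssh dummy@node1 'echo ok'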

Setup authentication

You can authenticate your DAS against your nodes with one of two methods:

  1. Password authentication
  2. Public key authentication

We’ll choose the latter, since it’s easier to maintain. Let’s start by assuming a node named node1.

$GLASSFISHROOT/bin/asadmin setup-ssh node1

If you don’t have a key pair, the command above will offer to create one. If that’s your case, be ready to provide your user’s password, and repeat the same command once the key has been generated.
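Alternatively, you can ask setup-ssh to generate the key pair in a single pass via its --generatekey option (available in Glassfish 3.1, as far as I can tell):

# Generate a key pair (if none exists) and install the public key on node1
$GLASSFISHROOT/bin/asadmin setup-ssh --generatekey=true node1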

Also, be sure that the key ends up in the file /home/dummy/.ssh/authorized_keys2 on node node1, since this script will wrongly put it into authorized_keys, which is used for SSH protocol version 1 and not version 2, the most widely deployed.

Execute:

# Log into the node...
ssh dummy@node1
# ...and, on node1, move the key to the protocol version 2 file
cd ~/.ssh
cat authorized_keys >> authorized_keys2
rm authorized_keys

Install Glassfish remotely

asadmin install-node --installdir /home/dummy/ node1

Create SSH nodes

asadmin create-node-ssh --nodehost node1 --installdir /home/dummy node1

Configure cluster and deploy applications

Right now, you’re ready to create one or several clusters of nodes, provided, of course, that you have executed the steps above on a handful of nodes! Just point your browser to http://das_address:4848 and head to Clusters in the left-most pane (tree).

Be sure to set up a new configuration for each cluster and include some nodes in it. The rest of the process is quite trivial. Just remember that when you’re creating resources such as JDBC, JMS and the like, you should always define the target, which may be a node, a cluster or several clusters.
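If you’d rather stay on the command line, the same can be achieved with asadmin. Here’s a minimal sketch (cluster, instance, pool and resource names are just examples):

# Create a cluster with its own configuration
asadmin create-cluster cluster1

# Add an instance running on node1 to the cluster
asadmin create-instance --node node1 --cluster cluster1 instance1

# Start every instance in the cluster
asadmin start-cluster cluster1

# Resources should state their target explicitly, e.g. a JDBC
# resource (backed by a hypothetical pool) targeted at the cluster
asadmin create-jdbc-resource --connectionpoolid my-pool --target cluster1 jdbc/my-ds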

Troubleshooting

  1. asadmin commands over SSH fail: a workaround is to repeat the Setup authentication step, as shown below.
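In other words, re-running the following for the affected node usually brings the SSH channel back:

$GLASSFISHROOT/bin/asadmin setup-ssh node1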