Kubernetes cluster made easy with Vagrant and CoreOS

Lately, I’ve been playing with Kubernetes. But it hasn’t been an easy ride! The guys running the project are really awesome and supportive, but they simply can’t keep documentation up to date for the myriad of providers that can be used to provision a cluster.

Given this, I’ve tried to make it easy for people like me: the day-to-day developer who just wants to try out this amazing technology or develop apps on top of it. And so I did. Here you’ll find a quick and simple way to bootstrap a Kubernetes cluster on top of Vagrant (VirtualBox) and CoreOS.

Give it a spin and let me know what you think.

Clustering Hazelcast on Kubernetes

I’ve had experience with clustering Hazelcast on Google Compute Engine before, but right now, with the advent of containers, Kubernetes is becoming a standard for app/container orchestration, and therefore my previous effort has become somewhat obsolete – unless you’re limited to VMs and have to use TCP-based clustering for Hazelcast, since multicast traffic may be blocked (it is on most cloud providers).

Given this, I’ve assembled a Dockerfile which will run a small app (hazelcast-kubernetes-bootstrapper) on boot. This app queries the Kubernetes API to discover all nodes in the [Kubernetes] cluster that are acting as Hazelcast nodes, retrieves their IP addresses, and then configures and instantiates Hazelcast with a TCP-based configuration accordingly.
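
To give an idea of what the bootstrapper ends up doing, here is a minimal Java sketch of a TCP-based Hazelcast configuration built from a list of member IPs. The helper name and the idea that the IP list comes from a Kubernetes API query are assumptions for illustration; this is not the actual bootstrapper code.

import com.hazelcast.config.Config;
import com.hazelcast.config.JoinConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

import java.util.List;

public class HazelcastTcpBootstrap {

    // 'memberIps' is assumed to come from a query to the Kubernetes API
    // that lists the pods labelled as Hazelcast nodes.
    public static HazelcastInstance start(List<String> memberIps) {
        Config config = new Config();
        JoinConfig join = config.getNetworkConfig().getJoin();

        // Multicast is blocked on most cloud providers, so disable it...
        join.getMulticastConfig().setEnabled(false);

        // ...and use TCP/IP discovery with the addresses we just collected.
        join.getTcpIpConfig().setEnabled(true);
        for (String ip : memberIps) {
            join.getTcpIpConfig().addMember(ip);
        }

        return Hazelcast.newHazelcastInstance(config);
    }
}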

Please test it and give me feedback. All you need is Docker and a Kubernetes cluster (I’ve tested it locally and in Google Container Engine).

More information, including step-by-step instructions, can be found at https://github.com/pires/hazelcast-kubernetes.

Cheers,
Paulo Pires

RESTful web services with Jersey 1.12+ (JSON) and Glassfish 3.1.1+

Lately, I’ve been playing with Jersey for easy and fast development of RESTful services. My focus so far has been to implement CRUD operations on top of a Glassfish container, with JSON support.
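
As a rough illustration (not the exact code in the repository), a JAX-RS resource for Jersey 1.x with JSON support boils down to something like the sketch below. The User entity, paths and field names are made up; Jersey’s JSON support marshals the JAXB-annotated class for us.

import javax.ws.rs.*;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import javax.xml.bind.annotation.XmlRootElement;

@Path("/users")
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
public class UserResource {

    // Hypothetical JAXB-annotated entity, marshalled to/from JSON by Jersey.
    @XmlRootElement
    public static class User {
        public Long id;
        public String name;
    }

    @GET
    @Path("/{id}")
    public User read(@PathParam("id") Long id) {
        User user = new User();
        user.id = id;
        user.name = "dummy"; // would normally be fetched from a persistence layer
        return user;
    }

    @POST
    public Response create(User user) {
        // would normally persist the user here
        return Response.status(Response.Status.CREATED).entity(user).build();
    }
}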

I’m sharing these experiments at my Github page. Feel free to take a look, copy it, and perhaps create pull requests with your own additions/optimizations.

Apache Shiro 1.2.0+ (JDBC Realm) and Glassfish 3.1.1+

Lately, I’ve been experimenting with Apache Shiro for securing my Java EE applications. My focus so far has been to implement an authentication mechanism backed by a MySQL database, to be run on top of a Glassfish container.
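
For reference, here is a minimal sketch of wiring Shiro’s JdbcRealm to a container-managed MySQL DataSource. The JNDI name, table and column names are placeholders for illustration; this is not the code from the repository.

import javax.naming.InitialContext;
import javax.sql.DataSource;

import org.apache.shiro.SecurityUtils;
import org.apache.shiro.authc.UsernamePasswordToken;
import org.apache.shiro.mgt.DefaultSecurityManager;
import org.apache.shiro.realm.jdbc.JdbcRealm;
import org.apache.shiro.subject.Subject;

public class ShiroJdbcAuthentication {

    public static void authenticate(String username, String password) throws Exception {
        // "jdbc/mydb" is a placeholder for a MySQL connection pool registered in Glassfish.
        DataSource dataSource = (DataSource) new InitialContext().lookup("jdbc/mydb");

        JdbcRealm realm = new JdbcRealm();
        realm.setDataSource(dataSource);
        // The query must return the (hashed) password for the given username.
        realm.setAuthenticationQuery("SELECT password FROM users WHERE username = ?");

        SecurityUtils.setSecurityManager(new DefaultSecurityManager(realm));

        Subject currentUser = SecurityUtils.getSubject();
        currentUser.login(new UsernamePasswordToken(username, password));
    }
}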

I’m sharing these experiments at my Github page. Feel free to take a look, copy it, and perhaps create pull requests with your own additions/optimizations.

Glassfish cluster installation and administration on top of SSH + public key

Managing Glassfish instances in a non-centralized way may prove to be a real pain in the ass, even with as few as four (4) nodes. One solution, which has been considerably simplified in version 3.1 (it existed in version 2.x, but you had to deploy node-agents), is the use of SSH as a channel to perform administration operations on remote instances. This post comprises a short but working version of the steps described in this blog post.

Pre-requisites

  • Linux system (actually, Glassfish runs on every platform with a suitable JRE)
  • SSH server up and running
  • Have a user configured in all desired nodes (in our case, dummy)
  • Glassfish 3.1.1, since version 3.1.2 forces secure-admin mode, which brought several issues to our current testbed. We actually had to reinstall and reconfigure the entire scenario, so if you’re willing to install/upgrade to a newer version, do it at your own risk. You’ve been warned!
  • Be sure that you have a clean Glassfish install on what is going to be your Domain Admin Server (DAS).

Setup authentication

You can authenticate your DAS against your nodes with one of two methods:

  1. Password authentication
  2. Public key authentication

We’ll choose the latter, since it’s easier to maintain. Let’s start by assuming a node named node1.

$GLASSFISHROOT/bin/asadmin setup-ssh node1

If you don’t have a key pair, the command above will give you the option to create one. If that’s the case, be ready to provide your user password. Repeat the same command once the process is finished.

Also, be sure that the key is present in the file /home/dummy/.ssh/authorized_keys2 on node node1, since this script will wrongly put it into authorized_keys, which is used for SSH protocol version 1 and not version 2, the most widespread one.

Execute:

ssh dummy@node1
cd ~/.ssh
cat authorized_keys >> authorized_keys2
rm authorized_keys

Install Glassfish remotely

asadmin install-node --installdir /home/dummy node1

Create SSH nodes

asadmin create-node-ssh --nodehost node1 --installdir /home/dummy node1

Configure cluster and deploy applications

Right now, you’re ready to create one or several clusters of nodes (provided, of course, that you have executed the steps above on a handful of nodes). Just point your browser to http://das_address:4848 and head to the Clusters entry in the left-most pane (tree).

Be sure to set up a new configuration for each cluster and include some nodes in it. The rest of the process is quite trivial. Just remember that when you’re creating resources such as JDBC or JMS ones, you should always define the target, which may be a node, a cluster or several clusters.

Troubleshooting

  1. asadmin commands over SSH fail – a workaround is to repeat the Setup authentication step.

JMS in a container-managed context – learned the hard way edition

Hi fellow geeks,

Here I am living a new professional experience, this time in a company whose core business is audio/video streaming and real-time audio recognition. I’m starting on a couple of very interesting projects. One deals with real-time speaker recognition, while the other aims to be an automatic music-matching service powered by a revolutionary (or so they say) algorithm that calculates similarities and determines distances within a large universe of tracks.

Both projects share some algorithms and, most of all, workflow. Both have to work in a distributed way, and by that I mean having multiple different jobs running in parallel on several machines, exhausting each machine’s processing units (CPUs/cores) while persisting the resulting data in a distributed filesystem.
The choices here were pretty obvious to us: JMS and Hadoop FS.

JMS is an API that is best described by the publish-subscribe pattern. What we’ll basically have is a bunch of Message-Driven Beans (MDBs) per machine – let’s call it a minion – that will receive messages with jobs to process. These jobs are sent from another application – let’s call it the master – that’s responsible for load-balancing the queueing of the aforementioned messages, maintaining a state machine, etc.
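
On the minion side, the skeleton of such an MDB is roughly the following. The queue name and class names are placeholders, not the real project’s code.

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

@MessageDriven(mappedName = "jms/JobQueue", activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType",
                              propertyValue = "javax.jms.Queue")
})
public class JobProcessorBean implements MessageListener {

    @Override
    public void onMessage(Message message) {
        try {
            if (message instanceof TextMessage) {
                String job = ((TextMessage) message).getText();
                // process the job with the machine's available cores...
            }
        } catch (JMSException e) {
            // rethrowing lets the container roll back and redeliver the message
            throw new RuntimeException(e);
        }
    }
}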

Now, if you’ve worked with JMS before, or at least with sockets, connections and sessions, you’ll know for sure that reusing such facilities is mandatory: provisioning a new physical connection/session or acceptor on every client request will unleash hell on you soon enough, as not only is it heavy on resources but, worse, it will cripple your application’s throughput. You won’t be getting too much patting on your back, that’s for sure ;-)

And I faced such issues very recently, only because I failed to understand how things work in a container-managed environment. Let me explain.

JMS objects were designed to be reused, right? Right. Now, what about an application running on an application server? Imagine, for instance, that you have an EJB acting as a service to send messages.

I thought “hell, yeah! EJBs allow me to do some start-up and tear-down operations (@PostConstruct and @PreDestroy), so that’s where I’m going to manage the JMS objects that I wish to reuse”. Did you think this too? Well, you’re wrong, in a way. You don’t need this! The container does it for you… but unfortunately that’s not transparent to a developer at first. Actually, it may also bring issues with session transaction management. And that’s why I had to look further for an explanation! Here’s what I got.
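
In practice, on the master side it’s enough to inject the connection factory and destination and create/close the JMS objects per call: with a container-managed (pooled) connection factory these operations are cheap, because the container hands out handles from its pool instead of opening physical connections. A hedged sketch follows; the JNDI names are placeholders.

import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

@Stateless
public class JobDispatcher {

    @Resource(mappedName = "jms/JobConnectionFactory")
    private ConnectionFactory connectionFactory;

    @Resource(mappedName = "jms/JobQueue")
    private Queue jobQueue;

    public void dispatch(String payload) throws JMSException {
        Connection connection = null;
        try {
            // Looks expensive, but with a pooled connection factory this just
            // borrows a handle from the container's pool.
            connection = connectionFactory.createConnection();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(jobQueue);
            producer.send(session.createTextMessage(payload));
        } finally {
            if (connection != null) {
                connection.close(); // returns the handle to the pool, not a physical close
            }
        }
    }
}

With something like this in place, there’s no need to cache connections or sessions in @PostConstruct/@PreDestroy at all.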

Hope this will help others.

Cheers!!