IBM Bluemix and IBM DevOps Services with GitHub Integration

In this post I will give you some background on PaaS. Then I will show how you can create an IBM DevOps Services project linked to GitHub, with automatic deployment to IBM Bluemix whenever code is pushed to the GitHub repository.

The old way

One of the huge time-saving benefits of cloud computing will be appreciated by anyone who understands the process of running and deploying websites.  There have always been ways to automate code deployments to some extent, but they require certain things to be in place.  These include:

  • A web hosting or VPS provider
  • Certain infrastructure components ready to go, such as an installed operating system (e.g. Red Hat or Ubuntu), load balancing, SSL certificate management, and system administration
  • Sizing of the VM to deal with the expected loads and other capacity planning activities
  • Processes for dealing with patching and security vulnerabilities, and actually patching your website VMs
  • Upgrading VMs to later OS revisions, and dealing with the dependency issues this can cause
  • Website code staging and deployment processes
  • Database administration and management

The list is certainly not exhaustive, but you get the idea.

The new way (PaaS)

Conceptually, Platform as a Service (PaaS) is usually wedged between Infrastructure as a Service (IaaS) and Software as a Service (SaaS). In effect, PaaS provides everything in the list above and more! You do not need to care about the underlying infrastructure, scaling up/down, storage, or compute capacity.  All the IaaS components are abstracted away, and all the developer needs to do is write and deploy code. It really is that simple.

Keep in mind that developing for PaaS (and cloud in general, really) requires slightly different thinking for “cloud first” applications. For example, your filesystem needs to be treated as ephemeral, you need to keep client state server-side somehow, and you will likely need some type of message-passing technology to share information between application instances.
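To make the server-side state point concrete, here is a minimal sketch, assuming a Redis service is bound to the application and the redis Python package is installed (the environment variable names here are illustrative, not anything Bluemix-specific):

import os
import redis

# Connection details come from the environment, never hard-coded;
# REDIS_HOST/REDIS_PORT are placeholder names for this sketch.
r = redis.StrictRedis(
    host=os.environ.get('REDIS_HOST', 'localhost'),
    port=int(os.environ.get('REDIS_PORT', '6379')))

def save_session(session_id, data):
    # Stored centrally, so any instance of the app can read it back,
    # and it survives an instance being restarted or rescheduled
    r.hmset('session:' + session_id, data)

def load_session(session_id):
    return r.hgetall('session:' + session_id)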

This leads me on to the main topic for this post.  IBM’s PaaS strategy revolves around IBM Bluemix. In 2013, IBM teamed up with Pivotal to announce a collaboration in PaaS using the Cloud Foundry framework. Please watch this video for a great overview of Cloud Foundry.

Essentially, IBM Bluemix gives you a dashboard where you can deploy applications that consume services. An application might be written in Python and require a MySQL service for storing structured data.  You bind the two together, and you have a scalable application without any infrastructure configuration at all! That is the magic and the key benefit: it lets you work on your business problem instead of dealing with infrastructure.
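For a rough idea of what binding looks like from the application side: Cloud Foundry based platforms such as Bluemix expose bound service credentials to the app via the VCAP_SERVICES environment variable. The sketch below assumes the bound MySQL service appears under a 'mysql' key; the real key depends on the actual service you bind:

import json
import os

# VCAP_SERVICES holds a JSON description of every service bound to this app
vcap = json.loads(os.environ.get('VCAP_SERVICES', '{}'))

# 'mysql' is a placeholder key; check your dashboard for the real service name
if 'mysql' in vcap:
    creds = vcap['mysql'][0]['credentials']
    db_host = creds['host']
    db_user = creds['user']
    db_password = creds['password']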

IBM DevOps Services (previously named JazzHub) provides a Git-compatible environment for version control, an in-browser IDE, “DevOps”-style work tracking and team planning functionality, and deployment services to IBM Bluemix.

Things you need if you are following along at home:

  1. A free IBM.com id (just go to http://www.ibm.com/ and click Register on the top right)
  2. A free GitHub account (https://github.com/)
  3. A free IBM Bluemix account (https://bluemix.net/)
  4. A free IBM DevOps Services account linked to your Bluemix account

When logged into IBM DevOps Services, click the big blue “Create Project” button on the right.

[Screenshot: Create Project]

Give your project a name, and click the “Connect to an external GitHub repository” button. Then enter the GitHub URL for your project (it should already exist).

[Figure 1: GitHub to Bluemix connection]

On the next screen, choose the options you want, but if you want automatic “DevOps”-style deployments to Bluemix, make sure the Deploy to Bluemix checkbox is selected.

[Figure 2: Deploy to Bluemix option]

Details of your project should pop up, and you can select which branch you want to track.  Leave the defaults if you are not sure.

[Figure 3: GitHub branch tracking]

On your project screen, click the orange “Build & Deploy” button at the top right and you should see an empty pipeline. We need to add a builder, which detects code changes and starts the build process, and we also need to add a stage, which for us will deploy to Bluemix.

[Figure 4: Build & Deploy pipeline]

Before we can add a builder we need a GitHub personal access token.  Go to GitHub, click the settings icon, then click Applications in the menu bar on the left.  Click “Generate new token”, enter a description, and leave the remaining defaults.

[Figure 5: GitHub personal access token]

Now, back in your deployment configuration, click the plus sign to add a builder, then enter the configuration, including your GitHub token, as shown.

[Figure 6: Adding a builder]

Now click the other plus sign to add a deployer stage. For Organisation, enter your email address (although it doesn’t really matter). Enter your application name and click Save.

[Figure 7: Adding a deployer]

Now everything should look ready to go!

[Figure 8: Pipeline ready]

Clone your GitHub repo so you have a working copy to push from:

$ git clone https://github.com/mattgillard/mysql_flaskr.git
$ cd mysql_flaskr

Make a minor change to the source code, then commit and push it:

$ git commit -a
[master ab88237] Updated <h1> in layout
1 file changed, 1 insertion(+), 1 deletion(-)
$ git push
Counting objects: 7, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (4/4), done.
Writing objects: 100% (4/4), 425 bytes | 0 bytes/s, done.
Total 4 (delta 2), reused 0 (delta 0)
To https://github.com/mattgillard/mysql_flaskr.git
4261d41..ab88237  master -> master

Soon after you push it, your pipeline will start moving and a minute or so later you should see a green traffic light and a deployment success:

[Figure 9: Bluemix deployment success]

You will see your app appear in the IBM Bluemix dashboard with the deployment log on the right:

[Figure 10: App in the Bluemix dashboard]

Back in DevOps Services, click on the deployment success URL and it will switch you into your detailed log, where you can see all the steps that were taken to deploy the application.

[Figure 11: Deployment summary]

[Figure 12: Deployment log]

When your app is deployed, you can go to its URL to see it live on the Internet (in this case: http://flaskr-bluemix-dev.mybluemix.net).

[Figure 13: The deployed app]

There you have it – a relatively straightforward process to connect an existing GitHub repository to IBM DevOps Services and IBM Bluemix!

The next post will talk about the metadata files you need in your project so Bluemix can deploy it.
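As a preview, the key metadata file for a Cloud Foundry style deployment is manifest.yml in the project root. A minimal sketch might look like the following – note the memory size, instance count, and start command here are illustrative placeholders, not taken from the project above:

applications:
- name: flaskr-bluemix-dev
  memory: 128M
  instances: 1
  command: python flaskr.py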

IBM, Microsoft partnering in Hybrid Cloud

[Image: Microsoft and IBM]

Yesterday, IBM and Microsoft jointly announced a partnership deal in cloud. The deal is mainly about having key software components integrated into each other’s cloud offerings. For those paying attention, though, this was not necessarily a surprise. IBM has partnered with Microsoft for a long time now, most recently around Microsoft Dynamics.  IBM also has partnerships with Salesforce and, more recently, Apple, just to name a few.

For a few years now, IBM has allowed customers to BYO their Passport Advantage software licenses to AWS. Even though IBM is not currently on the AWS Marketplace, through the purchase of Cloudant earlier in 2014 IBM does have an AWS presence.  My view is that as the cloud market matures, much more cross-pollination of products between cloud providers will occur, which will allow far more hybrid cloud architectures to evolve than we have seen until now. Most of what I have seen to date is hybrid cloud between on-premises systems and an external cloud provider.  Over the next few years, I believe hybrid cloud links between external cloud providers will become more common.  A hybrid model allows consumers of cloud to pick the right provider for the particular business problem they want to solve, plus an additional layer of high availability to mitigate the downtime risk of a single cloud provider.  An example would be to utilise IBM’s Watson Developer Cloud and integrate it with a hosted Azure SQL database, with your front end on Azure’s website PaaS offering.

Over the next year or so, I am sure the progressive cloud vendors will announce more of these partnerships.  If you are one of those vendors that claims to do ‘cloud’ but is not announcing partnerships and is living in its own silo, I would be concerned – it may even be too late to start now. Each cloud vendor needs to play to its strengths, and when they do that, they will find their market.

IBM SoftLayer Melbourne PoD open!

Image courtesy of http://www.computerworld.com.au/slideshow/556788/pictures-ibm-softlayer-melbourne-data-centre/

As per the press release, IBM SoftLayer Melbourne is open for business as of Tuesday 7th October 2014!

Using the SoftLayer API, we can create a very short Python script to build a new virtual Ubuntu image in the new Melbourne SoftLayer datacenter. Before you begin you need a SoftLayer account and your API key, which you can find under Account | Users (click View to see it).

To create a virtual image you simply call the SoftLayer_Virtual_Guest::createObject API.

The parameters for this call are fairly self-explanatory, with the exception of datacenter.name. How do you know what the datacenter name for Melbourne is (without going to the customer portal, that is)?

[Screenshot: SoftLayer_Location API]

The answer: simply call the SoftLayer_Location::getDatacenters API. For simplicity I do this via the HTTP REST API using curl, with some Python json.tool post-processing to make it easy to read:

$ curl -s https://SLxxxx:API_KEY@api.softlayer.com/rest/v3/SoftLayer_Location/getDatacenters.json | python -m json.tool

An array of locations will scroll up the screen – but the important one is Melbourne:

{
    "id": 449596,
    "longName": "Melbourne 1",
    "name": "mel01"
},

There is the magic short name – mel01.

With that information we can now create the short script to provision the guest:

import SoftLayer

# Authenticate to the SoftLayer API (substitute your own username and API key)
client = SoftLayer.Client(username='SLxxxx', api_key='API_KEY')

# Order a small hourly-billed Ubuntu virtual guest in the Melbourne datacenter
client_object = client['Virtual_Guest'].createObject({
    'hostname': 'test',
    'domain': 'myhost.com',
    'startCpus': 1,
    'maxMemory': 1024,
    'hourlyBillingFlag': 'true',
    'operatingSystemReferenceCode': 'UBUNTU_LATEST',
    'datacenter': {'name': 'mel01'},
    'localDiskFlag': 'false'
})

# Print the attributes SoftLayer returns for the newly requested guest
for key, value in client_object.iteritems():
    print key, " -> ", value

Save the file as test_build_melbourne.py.

Before you can execute it, ensure you have the Python SoftLayer library installed. See https://pypi.python.org/pypi/SoftLayer for instructions.

When you are ready – give it a whirl:

$ python test_build_melbourne.py

It will sit there for a moment, then return some values about the new virtual guest, and provisioning will continue in the background.
Using the SoftLayer sl command-line interface (CLI) to the API, you can watch the progress of your virtual guest build and find out when it is ready. Ensure you set up your sl CLI following these instructions.
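If you would rather check from the terminal than the portal, something along these lines should work (from memory of the CLI of the time, so treat the exact subcommands as a sketch):

$ sl vs list         # list your virtual guests and their state
$ sl vs detail test  # more detail on the guest we named 'test'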

[Screenshot: sl CLI showing the guest provisioning]

Then after a few minutes:

[Screenshot: sl CLI showing the guest ready]

After that, you can grab your root password from the password repository in the portal. You can find your passwords under Devices | Manage | Passwords:

[Screenshot: portal password list]

Your host should be listed, and just click on the password field:

[Screenshot: root password in the portal]

Side note: you can also use the API to get at your passwords:

$ curl 'https://SLxxxx:API_KEY@api.softlayer.com/rest/v3/SoftLayer_Account/getVirtualGuests.json?objectMask=mask\[softwareComponents\[passwords\]\]' | python -m json.tool

Armed with your root password, you can ssh in:

$ ssh root@168.1.xxx.yyy
Password:
Welcome to Ubuntu 14.04.1 LTS (GNU/Linux 3.13.0-35-generic x86_64)

* Documentation: https://help.ubuntu.com/
Last login: Tue Oct 7 06:48:35 2014 from
root@test:~#

If your ssh session is slow to respond while you are trying to log in, add the following line to your /etc/ssh/sshd_config file and restart the ssh service:

UseDNS no

From my Melbourne location, the pings are nice and quick as they should be:

$ ping 168.1.xxx.yyy
PING 168.1.xxx.yyy (168.1.xxx.yyy): 56 data bytes
64 bytes from 168.1.xxx.yyy: icmp_seq=0 ttl=55 time=9.301 ms
64 bytes from 168.1.xxx.yyy: icmp_seq=1 ttl=55 time=8.006 ms
64 bytes from 168.1.xxx.yyy: icmp_seq=2 ttl=55 time=7.800 ms

If you are finished for now, you can cancel your virtual guest – you can create a new one whenever you need it using your Python script.

To cancel:

sl vs cancel 6461446

For a couple of hours’ work, how much will this cost me?

$ curl -s https://SLxxxxxx:API_KEY@api.softlayer.com/rest/v3/SoftLayer_Account/getBalance.xml

.04

A grand total of US$0.04 – four cents.

There you have it. A very quick “getting started” tutorial for creating a virtual image using the API in three different ways (REST, CLI and Python).

The elephant in the room of cloud providers

[Image: SaaS / PaaS / IaaS]

Of all the tech-press cloud articles I read on a daily basis, the comparisons are almost always between AWS, Azure, and Google, where the battle for IaaS, PaaS and SaaS market share is happening.  There is the odd exception. But as far as I am concerned, IBM are well in this race, and definitely in the top four of cloud providers. This is confirmed by the respected Gartner IaaS Magic Quadrant released around two weeks ago, which places IBM fourth behind AWS, Microsoft, and Google.  Even Google only made their debut this year with their recently released IaaS product.

I will be the first to admit that IBM were a slow starter in the IaaS market, and I know this frustrated many within the company. SmartCloud Enterprise (SCE) made some progress, but it was a small player compared to the competition. It was really aimed at IBM’s existing enterprise customers, and was an attempt to provide an unmanaged cloud option to those customers who were already dipping their toes into AWS and, more recently, Azure.  The problem with SCE was that it was not designed to be easy for consumers or small and mid-size businesses to use, lacking a fast and easy credit-card signup option.  When the only option for setting up infrastructure is via an enterprise account manager plus a purchase order, your market will be limited, and many would not call that ‘cloud’ in the first place.

The real strategy shift at IBM began with the acquisition of SoftLayer, which provides a scale, ability to execute, and agility competitive with the other big providers. Prior to SoftLayer, IBM were not seen as a real player in cloud; post SoftLayer, the game changed. SoftLayer were a proven cloud provider with many years of experience (they were established around 2005, the year before AWS). As an IBMer, I have seen the changes internally and externally over the 12 months since the acquisition.  The closing of the SoftLayer acquisition in mid-2013 was the catalyst for an internal push to educate the whole company about IBM’s clear strategy of CAMS (Cloud, Analytics, Mobile, and Social).  All employees, no matter what their role, are given training in all these areas, with a special emphasis on the first Friday of the month, called ThinkFriday, run by IBM ThinkAcademy.  This NYT article is a good read with an explanation of the strategy.

With the public beta of Bluemix, based on Cloud Foundry and built on SoftLayer, IBM started to take the challenge up to the competition.  Subsequently launched was the IBM Cloud marketplace, which ties all the IBM cloud offerings together, whether PaaS offerings for devs, IaaS offerings for infrastructure needs, or pre-configured SaaS offerings for business solutions. Having a clear one-stop shop for business needs is essential, and it provides a good entry point for customers to see what IBM has to offer.

IBM has made its strategy very clear to the market and will leave no stone unturned to catch up to its competitors: $1.2 billion is being spent on expanding SoftLayer data centres around the world, including two in Australia opening in the next couple of months; more strategic investments are anticipated, along with partnerships (e.g. the latest with the excellent Docker OS-container-based virtualisation product); and the IBM Cloud marketplace is being continually improved and extended.  Out of the “big four” cloud providers, only IBM has the experience and a complete set of offerings that can cover the whole spectrum of customers, from startups and small business to high-end enterprise customers with specific needs such as SAP or Oracle, which they can run on SmartCloud Enterprise+ (SCE+), the high-end managed enterprise cloud offering.

 

Checked luggage for Infants and Children on Australian domestic airlines

In this day and age, airlines like to grab as much cash from you as possible, with outrageous credit-card booking fees, charges for checked luggage, and so on. While booking flights recently for my children, it was not clear what each airline’s policy was for the extra items you would normally take on holiday with children, like strollers, car seats, and high chairs.  So I went looking for each airline’s policy on this.

Qantas make it easy and clear:

Children

  • Children receive the same baggage allowance as adults.
  • Children up to 12 years of age are permitted one car seat and one collapsible pram, stroller or push chair in addition to the checked baggage allowance.

Virgin’s policy was also easy to find:

Children

Children are entitled to the same baggage allowance as adults. Adults accompanying children are entitled to carry one car seat or booster seat per child as checked baggage free of charge, irrespective of the weight. If the adult does not have a baggage allowance, the car seat or booster seat can still be checked in free of charge.

The second hardest to locate was Jetstar’s – they hide it in the travelling-with-kids section, not in the baggage allowance section, which would be the first place to look!

From the Jetstar website:

Here’s a list of the bulkier things families can check on Jetstar flights without attracting excess baggage charges:

  • Strollers or pushers
  • Porta-cots and bedding
  • Infant car seats
  • Portable high-chairs

And finally Tiger, who are the strictest:

If you are travelling with an Infant, Tiger Airways will allow, at no additional charge, 1 piece of Infant equipment, such as a pram or portable cot to be checked, in addition to any Luggage Upsize™ allowance purchased. Infants do not qualify for baggage allowance as they are not fare-paying passengers.

Tiger are the only airline that restricts the number of extra items per infant to one. They do not mention their policy for children and booster seats. I would assume it is the same as for infants, but it does not seem to be anywhere obvious on their website.

I tweeted @tigerairwaysaus for confirmation on this and this is their response:

@mattgillard Hi Matt- if they are child fares you will have to buy checked luggage which you can do online here http://t.co/bz1omVKCNb

So they allow a car seat as a free checked-luggage item for infants (aged under 2) but not for children (aged 2 and over), which requires the purchase of an extra luggage allowance – they call it Luggage Upsize™.

Requirements, constraints, and assumptions

[Image: project management]

In any project planning exercise, whether via a pre-sales effort, an internal project, or some other architecture definition for a system, a requirements gathering process takes place very early in the planning stages. These requirements can classically be segmented into two categories: functional and non-functional. Some other items that, in my experience, are often not considered until the last minute, if at all, are constraints. This is a quick blog post detailing the difference between requirements, constraints and assumptions in an IT context, though it probably maps to other industries as well.

Firstly, to requirements. Requirements are usually gathered via workshops with the stakeholders, or via RFP or tender documents.  All requirements can be validated, and should be validated at regular intervals throughout delivery of the project.  If a particular requirement is not being met at any point during the project lifecycle, knowing about it sooner rather than later is preferred. When solutioning a system, requirements come down to two types: functional and non-functional.

Functional requirements detail what the stakeholders expect from the solution, and what the end result should look like.

Non-functional requirements, on the other hand, detail performance and availability metrics for the solution, taking into consideration all the stakeholders’ views; these could also be seen as constraints around requirements.

Which brings me to constraints.  Usually a constraint is a technical limitation around the solution, which can result in additional functional requirements that need to be captured up front. But a constraint can also be a range of values for a component of the system, which then becomes a non-functional requirement. An example of the former could be that the solution needs to use a particular product, operating system or other technology due to an IT standard or other ruling, or perhaps dictated by the overarching Enterprise Architecture. A constraint that leads to a non-functional requirement could be that the CPU and memory utilisation of the upgraded software must be less than or equal to that of the existing system it will replace. Constraints are often overlooked until later in the solution lifecycle, which is too late. The same effort put into the requirements gathering process should also be put into surfacing constraints as early as possible.

I will end with a brief word on assumptions. The definition of assumption:

noun
a thing that is accepted as true or as certain to happen, without proof

Essentially, an assumption is a requirement that has been ‘made up’ and not validated by your project stakeholders – an educated guess, if you like.  When pricing a solution, if you have too many assumptions, it is possible that you do not really know what needs to be delivered in the first place, and those assumptions are best re-worded as questions to the appropriate stakeholders. Your aim should be to turn assumptions into validated requirements, or remove them altogether.

Understanding the difference between functional requirements, non-functional requirements, constraints and assumptions when developing a solution or opportunity is absolutely essential. Especially when real money is on the line, it can mean the difference between successful delivery and the ultimate failure of a project. It also shows you understand what needs to be delivered, demonstrates that you have thought about all facets of the problem, and shows that you know many of the internal and external factors influencing the delivery of the project and can work around them.