IBM, Microsoft partnering in Hybrid Cloud


Yesterday, IBM and Microsoft jointly announced a cloud partnership. The deal is mainly about integrating key software components into each other's cloud offerings. For those paying attention, though, this was not necessarily a surprise. IBM has partnered with Microsoft for a long time now, most recently around Microsoft Dynamics. IBM also has partnerships with Salesforce and, most recently, Apple, just to name a few.

For a few years now, IBM has allowed customers to bring their own (BYO) Passport Advantage software licenses to AWS. Even though IBM is not currently on the AWS marketplace, through the purchase of Cloudant earlier in 2014 IBM does have an AWS presence. My view is that as the cloud market matures, much more cross-pollination of products between cloud providers will occur, allowing far more hybrid cloud architectures to evolve than we have seen so far. Most of what I have seen to date is hybrid cloud between on-premise infrastructure and a single external cloud provider. Over the next few years, I believe hybrid cloud links between external cloud providers will become more common. A hybrid model lets consumers of cloud pick the right provider for the particular business problem they want to solve, plus it adds a layer of high availability to mitigate the downtime risk of relying on a single provider. An example would be to utilise IBM’s Watson Developer Cloud, integrate it with a hosted Azure SQL database, and put your front-end on Azure’s website PaaS offering.

Over the next year or so, I am sure the progressive cloud vendors will announce more of these partnerships. If you are one of those vendors that claims to do ‘cloud’ but is not announcing partnerships and is living in your own silo, I would be concerned; it may even be too late to start now. Each cloud vendor needs to play to their strengths, and when they do that, they will find their market.

IBM SoftLayer Melbourne PoD open!


As per the press release, IBM SoftLayer Melbourne is open for business as of Tuesday 7th October 2014!

Using the SoftLayer API, we can create a very short Python script to build a new virtual Ubuntu image in the new Melbourne SoftLayer datacenter. Before you begin you need a SoftLayer account and your API key, which you can find under Account | Users by clicking View next to your user.

To create a virtual image you simply call the SoftLayer_Virtual_Guest::createObject API.

The parameters for this call are fairly self-explanatory, with one exception: how do you know what the datacenter name for Melbourne is (without going to the customer portal, that is)?


The answer: simply call the SoftLayer_Location::getDatacenters API. For simplicity I do this via the HTTP REST API using curl, with some Python JSON post-processing to make it easy to read (substitute your own username and API key):

$ curl -s "https://$SL_USER:$SL_APIKEY@api.softlayer.com/rest/v3/SoftLayer_Location/getDatacenters.json" | python -m json.tool

An array of locations will scroll up the screen – but the important one is Melbourne:

"id": 449596,
"longName": "Melbourne 1",
"name": "mel01"

There is the magic short name – mel01.
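If you prefer to stay in Python, the same lookup works with the SoftLayer client library. Here is a minimal sketch: in practice the list would come from `client['Location'].getDatacenters()`, but to keep it self-contained this uses a sample list mirroring the output above (the Dallas entry is purely illustrative), and the `find_datacenter` helper is my own addition.

```python
# Find a datacenter's short name from a getDatacenters-style response.
# With the SoftLayer library the list would come from something like:
#   client = SoftLayer.Client(username='SLxxxx', api_key='API_KEY')
#   datacenters = client['Location'].getDatacenters()
# Here we use a sample list mirroring the output shown above
# (the Dallas entry is purely illustrative).

def find_datacenter(datacenters, fragment):
    """Return short names of datacenters whose longName contains fragment."""
    fragment = fragment.lower()
    return [dc['name'] for dc in datacenters
            if fragment in dc['longName'].lower()]

datacenters = [
    {'id': 138124, 'longName': 'Dallas 5', 'name': 'dal05'},
    {'id': 449596, 'longName': 'Melbourne 1', 'name': 'mel01'},
]

print(find_datacenter(datacenters, 'melbourne'))  # ['mel01']
```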

With that information we can now create the short script to provision the guest:

import SoftLayer

client = SoftLayer.Client(username='SLxxxx', api_key='API_KEY')
client_object = client['Virtual_Guest'].createObject({
    'hostname': 'test',
    'domain': '',
    'startCpus': 1,
    'maxMemory': 1024,
    'hourlyBillingFlag': 'true',
    'operatingSystemReferenceCode': 'UBUNTU_LATEST',
    'datacenter': {'name': 'mel01'},
    'localDiskFlag': 'false',
})

for key, value in client_object.iteritems():
	print key, " -> ", value

Save the file as a Python script.

Before you can execute it, ensure you have the Python SoftLayer library installed (a pip install SoftLayer should do it).

When you are ready – give it a whirl:

$ python

It will sit there for a moment, then return some values about the new virtual guest, and provisioning will continue in the background.
By using the SoftLayer sl command line interface (CLI) to the API, you can see the progress of your virtual build and find out when it is ready. Ensure you have set up your sl CLI first.


Then after a few minutes:


After that, you can grab your root password from the password repository in the portal. You can find your passwords under Devices | Manage | Passwords:


Your host should be listed, and just click on the password field:


Side note: you can also use the API to get at your passwords:

curl '\[softwareComponents\[passwords\]\]' | python -m json.tool

Armed with your root password, you can ssh in:

$ ssh
Welcome to Ubuntu 14.04.1 LTS (GNU/Linux 3.13.0-35-generic x86_64)

* Documentation:
Last login: Tue Oct 7 06:48:35 2014 from

If your ssh session is slow to respond while you are trying to log in, add the following line to your /etc/ssh/sshd_config file and restart sshd (or reboot):

UseDNS no

From my Melbourne location, the pings are nice and quick as they should be:

$ ping
PING ( 56 data bytes
64 bytes from icmp_seq=0 ttl=55 time=9.301 ms
64 bytes from icmp_seq=1 ttl=55 time=8.006 ms
64 bytes from icmp_seq=2 ttl=55 time=7.800 ms

If you are finished for now, you can cancel your virtual guest – you can create a new one whenever you need it using your Python script.

To cancel:

sl vs cancel 6461446

For a couple of hours work, how much will this cost me?

$ curl -s


A grand total of US$0.04.

There you have it – a very quick “getting started” tutorial for creating a virtual image using the API in three different ways (REST, CLI and Python).

The elephant in the room of cloud providers


Of all the tech press cloud articles I read on a daily basis, the comparisons are almost always between AWS, Azure, and Google, where the battle for IaaS, PaaS and SaaS market share is happening. There is the odd exception. But as far as I am concerned, IBM is well in this race, and definitely in the top four of cloud providers. This is confirmed by the respected Gartner IaaS magic quadrant released around two weeks ago, which places IBM 4th behind AWS, Microsoft, and Google. Even Google only made their debut this year with their recently released IaaS product.

I will be the first to admit that IBM was a slow starter in the IaaS market, and I know this frustrated many within the company. SmartCloud Enterprise (SCE) made some progress, but it was a small player compared to the competition. It was really aimed at IBM’s existing enterprise customers – an attempt at providing an unmanaged cloud option to customers who were already dipping their toes into AWS and, more recently, Azure. The problem with SCE was that it was not designed to be easy for consumers or small/mid-size businesses to use: there was no fast and easy credit card signup option. When the only way to set up infrastructure is via an enterprise account manager plus a purchase order, your market will be limited – and many would not call that ‘cloud’ in the first place.

The real strategy shift at IBM began with the acquisition of SoftLayer, which provides an ability to execute at a scale and with an agility competitive with the other big providers. Prior to SoftLayer, IBM was not seen as a real player in cloud; post SoftLayer, the game changed. SoftLayer was a proven cloud provider with many years of experience (established around 2005, the year before AWS). As an IBMer, I have seen the changes internally and externally over the 12 months since the acquisition closed in mid 2013. It was the catalyst for an internal push to educate the whole company about IBM’s clear strategy of CAMS (Cloud, Analytics, Mobile, and Social). All employees, no matter what their role, are given training in all these areas, with a special emphasis on the first Friday of each month, called ThinkFriday, run by IBM ThinkAcademy. This NYT article is a good read with an explanation of the strategy.

With the public beta of BlueMix, based on Cloud Foundry and built on SoftLayer, IBM started to take the challenge up to the competition. Subsequently launched was the IBM Cloud marketplace, which ties all the IBM cloud offerings together – PaaS offerings for devs, IaaS offerings for infrastructure needs, and pre-configured SaaS offerings for business solutions. Having a clear one-stop shop for business needs is essential and provides a good entry point for customers to see what IBM has to offer.

IBM has made its strategy very clear to the market and will leave no stone unturned to catch up to its competitors: $1.2 billion being spent on expanding SoftLayer data centres around the world (including two in Australia opening in the next couple of months), more strategic investments (anticipated) and partnerships (e.g. the latest with the excellent Docker OS-container-based virtualisation product), and continual improvement of and additions to the IBM Cloud marketplace. Of the “big four” cloud providers, only IBM has the experience and a complete set of offerings covering the whole spectrum of customers, from startups and small business to high-end enterprise customers with specific needs such as SAP or Oracle, which they can run on SmartCloud Enterprise+ (SCE+), the high-end enterprise managed cloud offering.


Checked luggage for Infants and Children on Australian domestic airlines

In this day and age, airlines like to grab as much cash from you as possible – outrageous credit card booking fees, paying for checked luggage, and so on. While booking flights recently for my children, it was not clear what each airline’s policy was for the extra items you would normally take on holiday with children: strollers, car seats, high chairs, etc. So I went looking for each airline’s policy on this.

Qantas make it easy and clear:


  • Children receive the same baggage allowance as adults.
  • Children up to 12 years of age are permitted one car seat and one collapsible pram, stroller or push chair in addition to the checked baggage allowance.

Virgin was also easy to find:


Children are entitled to the same baggage allowance as adults. Adults accompanying children are entitled to carry one car seat or booster seat per child as checked baggage free of charge, irrespective of the weight. If the adult does not have a baggage allowance, the car seat or booster seat can still be checked in free of charge.

The second hardest to locate was Jetstar’s – they hide it in the travelling-with-kids section, not in the baggage allowance section, which would be the first place to look!

From the Jetstar website:

Here’s a list of the bulkier things families can check on Jetstar flights without attracting excess baggage charges:

  • Strollers or pushers
  • Porta-cots and bedding
  • Infant car seats
  • Portable high-chairs

And finally Tiger, they are the strictest:

If you are travelling with an Infant, Tiger Airways will allow, at no additional charge, 1 piece of Infant equipment, such as a pram or portable cot to be checked, in addition to any Luggage Upsize™ allowance purchased. Infants do not qualify for baggage allowance as they are not fare-paying passengers.

Tiger is the only airline that restricts the number of extra items per infant to one. They do not mention their policy for children and booster seats; I would assume it is the same as for infants, but it does not seem to be on their website in an obvious location.

I tweeted @tigerairwaysaus for confirmation, and this is their response:

@mattgillard Hi Matt- if they are child fares you will have to buy checked luggage which you can do online here

So they allow a car seat as a free checked luggage item for infants (aged under 2) but not for children (aged 2 and over), which requires purchasing extra luggage allowance – they call it Luggage Upsize™.

Requirements, constraints, and assumptions


In any project planning exercise – whether via a pre-sales effort, an internal project, or some other architecture definition for a system – a requirements gathering process takes place very early in the planning stages. These requirements can classically be segmented into two categories: functional and non-functional. Another type of requirement that in my experience is often overlooked until the last minute, if at all, is the constraint. This is a quick blog post detailing the difference between requirements, constraints and assumptions in an IT context, but it probably maps to other industries as well.

First, to requirements. Requirements are usually gathered via workshops with the stakeholders, or via RFP or tender documents. All requirements can be easily validated, and should be validated at regular intervals throughout delivery of the project. If a particular requirement is not met at any point during the project lifecycle, knowing about it sooner rather than later is preferred. When solutioning a system, requirements come down to two different types: functional and non-functional.

Functional requirements detail what the stakeholders expect from the solution, and what the end result should look like.

Non-functional requirements, on the other hand, detail performance and availability metrics for the solution, taking all stakeholders’ views into consideration – these could also be seen as constraints around requirements.

Which brings me to constraints. Usually a constraint is a technical limitation on the solution, which can result in additional functional requirements that need to be captured up front. But a constraint can also be a range of values for a component of the system, which then becomes a non-functional requirement. An example of the former could be that the solution must use a particular product, operating system or other technology due to an IT standard or other ruling, or perhaps dictated by the overarching enterprise architecture. A constraint that leads to a non-functional requirement could be that the CPU and memory utilisation of the upgraded software must be less than or equal to that of the existing system it will replace. Constraints are often overlooked until later in the solution lifecycle, which is too late. The same effort put into the requirements gathering process should also go into surfacing constraints as early as possible.

I will end with a brief word on assumptions. The definition of assumption:

a thing that is accepted as true or as certain to happen, without proof

Essentially, an assumption is a requirement that has been ‘made up’ and not validated by your project stakeholders – an educated guess, if you like. When pricing a solution, if you have too many assumptions, it is possible that you do not really know what needs to be delivered in the first place, and those assumptions are best re-worded as questions to the appropriate stakeholders. Your aim should be to move assumptions into validated requirements or remove them altogether.

Understanding the difference between functional requirements, non-functional requirements, constraints and assumptions when developing a solution or opportunity is absolutely essential. Especially when real money is on the line, it can mean the difference between successful delivery and ultimate failure of a project. It also shows you understand what needs to be delivered, demonstrates that you have thought about all facets of the problem, and shows that you know many of the internal and external factors influencing delivery and can work around them.

Oracle don’t include this in the marketing material – SPARC T series servers have dismal single thread performance

I don’t do new year’s resolutions. Having said that, I am going to blog more this year. In my day job I do a lot of research, and much of the time, months later, the information is forgotten, not recorded, or in a black hole – or, if I am lucky, I have written it down or stored a bookmark somewhere. So without further ado, today’s post is about making you aware that you need to be careful migrating legacy Solaris workloads to the T-series family of servers. This is especially relevant today, maybe more than ever, in the age of virtualisation and clouds. At least once or twice a year I come across a situation that could have been avoided: a poorly written or poorly tuned application placed on an Oracle SPARC T-series server.

A Brief History

In 2002 Sun Microsystems invested in a company called Afara Websystems, which was developing an innovative processor tuned for highly multi-threaded applications. The processor was called Niagara, and the company included some ex-Sun employees. The idea was to build a platform that would run Java and the web screaming FAST, able to process large numbers of execution threads simultaneously. In conventional processors, most latency during execution occurs when data has to be fetched from main memory. With a large number of hardware threads, the thread waiting on main memory can be idled while the request is in progress, and the core can execute another ready-to-run hardware thread. Building the T-series platform was about re-thinking the way processors had evolved up to that point and stripping the layers back to basics to reduce latency during instruction execution. Web traffic generally matches the multi-threaded profile that suits T-series hardware. Java can too – or not, depending on which developer is writing your code. Java application servers can match this profile as well, as long as you don’t have lock contention; otherwise all bets are off and your application slows to a crawl – but more on that later. Just remember: more often than not, a legacy Java app server has not been written with optimised multi-threading in mind. That’s been my experience, anyway.

Without going into too much detail, the T1 processor, as it was later called:

  • was optimised for speed by keeping each core’s instruction pipeline as busy as possible.
  • did not perform out-of-order execution of instructions.
  • consisted of a single processor with up to 8 cores.
  • ran at speeds up to 1.4GHz.
  • had 4 threads per core, giving a maximum concurrency of 32 threads.
  • provided a Hyper-Privileged execution mode (Sun’s entry into virtualisation with LDOMs).
  • had a single Floating Point Unit (FPU) shared by all cores.
  • had a single cryptographic unit shared by all cores.
  • supported a maximum of 32GB memory.
  • had a shared 3MB L2 cache.
  • had an 8kB primary data cache (per core).
  • had a 16kB primary instruction cache (per core).
  • was low power (72W).

The design of the T1 processor was open sourced in 2006, and called the OpenSPARC project.

In October 2007 came the T2:

  • 8 threads per core, doubling maximum concurrency to 64 threads
  • runs at speeds up to 1.6GHz (slightly higher)
  • an FPU per core, rather than per processor (a significant improvement for FP operations)
  • dual 10Gb Ethernet integrated onto the chip
  • 1 crypto unit per core
  • shared L2 cache increased from 3MB to 4MB
  • power consumption up to ~95W due to the extra integration on the chip
  • improved instruction pre-fetching to improve single thread performance
  • 2 integer Arithmetic Logic Units (ALUs) per group of 4 threads – up from one on the T1 – increasing the throughput of integer operations

In 2008 the T2 Plus was released, providing up to 4 sockets of T2s (total concurrency 64×4 = 256 threads).

In September 2010 the T3 launched:

  • 16 cores, still with 8 threads per core
  • shared L2 cache increased from 4MB to 6MB
  • primary instruction cache still 16kB (per core)
  • primary data cache still 8kB (per core)
  • variants with 1, 2 or 4 physical processors running at 1.65GHz (almost no change from the T2)
  • up to 512GB memory depending on the T3 server model

Across the first three generations, the chip did not change much: mostly incremental changes – a few more cores, a few more chips, some extra buses – but nothing spectacular.
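The incremental scaling is easy to see if you tabulate the maximum hardware concurrency of each generation. A quick sketch, using the figures from the spec lists above (per-chip totals are simply cores × threads per core, and system totals multiply by socket count):

```python
# Maximum hardware-thread concurrency by T-series generation,
# computed from the specs above: cores * threads_per_core (* sockets).
generations = [
    # (name, cores per chip, threads per core, max sockets)
    ('T1',      8, 4, 1),
    ('T2',      8, 8, 1),
    ('T2 Plus', 8, 8, 4),
    ('T3',     16, 8, 4),
]

for name, cores, threads, sockets in generations:
    per_chip = cores * threads
    total = per_chip * sockets
    print('%-8s %3d threads/chip, up to %3d threads/system'
          % (name, per_chip, total))
```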

One thing that is constant across them all is extremely poor single thread performance. I cannot stress this enough! Oracle don’t include this in the marketing material. A combination of a slow clock speed, no L3 cache, no out-of-order execution of instructions, and poorly written or poorly tuned applications makes for pretty dismal performance. These poorly tuned applications run perfectly fine on the circa 2004–2006 SPARC IV/IV+ line of processors that power the multiprocessor V480, V490, V880, V890 and E25k platforms. When your single threaded app that runs perfectly fine on a SunFire V890 is migrated to a new and shiny T3-4 and runs like a dog – sometimes up to 10x slower – you probably won’t be impressed. Even more frustrating is that someone in the business has just made a significant investment in newer hardware with an expectation of better performance. On a T5220 (T2 based) the litmus test was to check CPU utilisation while your workload was running. If it was running at a constant 3.125% utilisation and not going any higher, chances are you had a single thread problem: a single hardware thread running at 100% on a 32-thread configuration (1/32 × 100 = 3.125%).
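The litmus-test arithmetic generalises to any of these chips: one saturated hardware thread out of N shows up as a total CPU utilisation plateau of 100/N percent. A quick sketch (the thread counts are taken from the specs earlier in this post):

```python
# One saturated hardware thread out of N shows up as a plateau of
# 100/N percent total CPU utilisation -- the litmus test described above.
def single_thread_ceiling(total_hw_threads):
    return 100.0 / total_hw_threads

for label, n in [('32 threads (T1, or a 4-core T2)', 32),
                 ('64 threads (8-core T2)', 64),
                 ('128 threads (single-socket T3)', 128)]:
    print('%s: utilisation stuck at %.4f%%' % (label, single_thread_ceiling(n)))
```

If top or mpstat shows total utilisation pinned at one of these figures and never climbing, suspect a single-thread bottleneck before blaming the hardware.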

Single Thread Applications Live!

The T4 was released in late 2011:

  • Processor speed jumped to 2.85 or 3GHz
  • Integer and floating point pipelines more efficient
  • Primary data cache doubled to 16kB (per core)
  • L2 cache now 128kB and is now per core
  • new 4MB L3 cache shared by all 8 cores
  • first T-series chip that performs out of order execution
  • 8 cores per chip (down from 16)
  • still 8 threads per core
  • critical thread mode

This significantly improved single thread execution (Oracle quote a 5x single thread performance increase). The specs above really make it clear why it is faster for single threads:

  • much faster clock speed
  • extra on core cache + the new L3 shared cache
  • improved instruction pipelines
  • and the killer – critical thread mode.

Effectively this means that if a core detects it is executing a single thread, then all the resources that would be used to handle other threads are directed to helping that single thread execute as fast as possible.

The T4 is the first T-series platform on which you can do serious application consolidation from legacy hardware. Using Oracle VM (OVM) for SPARC (previously known as LDOMs) gives you the flexibility to carve up a T4-4 (4 processors) to place legacy workloads. As all T-series chips only support Solaris 10 (and now Solaris 11), if you want to consolidate Solaris 8 or 9 workloads you need branded zones within an OVM.


The underlying architecture of the T-series has not changed much in the roughly seven years since inception. I have seen customers consolidate existing workloads onto these platforms in the last few years and hit many unexpected performance problems. Mostly these have been resolved by optimising database queries or re-architecting applications to work more efficiently under the T-series multi-thread model. With the T4, and the upcoming T5, things should improve, and sites with large legacy Sun/Solaris footprints will finally be able to consolidate without having to spend a bomb on M-series servers. Most of the time, when you want to consolidate and get power saving benefits, the T4 processor or higher fits the bill. The T5 is expected to bring back 16 cores per chip, as per the T3 architecture, but with the T4 single thread speed improvements.