Using ACLs in Object Storage on SoftLayer


When you are using Object Storage in SoftLayer, there may come a time when you need to share files with other SoftLayer accounts.  Unfortunately, manipulating container ACLs does not appear to be supported via the SoftLayer portal.  But since the SoftLayer Object Storage API is compatible with OpenStack Swift, we can solve this problem using the API.  This also means the examples in this post should work with any OpenStack Swift compatible Object Storage implementation.

I found how-tos on this topic hard to come by, so I have written up this quick guide.

First of all, ensure you have a working Python installation, and you have pip installed.

Next, install the python-swiftclient module:

$ pip install python-swiftclient

You should now have a working swift command line, like so:

$ swift
Usage: swift [--version] [--help] [--os-help] [--snet] [--verbose]
[--debug] [--info] [--quiet] [--auth <auth_url>]
[--auth-version <auth_version> |

For this example, I am using two swift configurations, set up via environment variables.  In SoftLayer, you can get your credentials from the Object Storage screen by clicking View Credentials:


For User A:

UserA$ cat
export ST_KEY=1871e8b4595079a…
export ST_AUTH=

For User B:

UserB$ cat
export ST_KEY=9fe12cc1927a5877…
export ST_AUTH=

Source each shell file:

UserA$ .
UserB$ .

Now, we want to share the MyNewContainer container in UserA's SoftLayer account with UserB.

In the SoftLayer GUI under Object Storage the container looks like this:


Let's look at the default ACLs on MyNewContainer:

UserA$ swift stat MyNewContainer
 Account: AUTH_150fef84-e459-4df7-a050-9f9f9f9f9f9c
 Container: MyNewContainer
 Objects: 2
 Bytes: 5
 Read ACL:
 Write ACL:
 Sync To:
 Sync Key:
 Accept-Ranges: bytes
X-Storage-Policy: standard
 X-Timestamp: 1462838226.47452
 X-Trans-Id: tx51d3b7ac89f64502ad3ba-0057314450
 Content-Type: text/plain; charset=utf-8

They look empty.  Now, let's get UserB to try to list the contents of the container.  Note that we need to specify the storage URL, which you can find either in the SoftLayer Object Storage GUI, or by extracting the important AUTH_ information from the swift stat output above.  Pass --os-storage-url to swift and you can attempt to access the container:

UserB$ swift --os-storage-url list MyNewContainer
Container GET failed: 403 Forbidden [first 60 chars of response] <html><h1>Forbidden</h1><p>Access was denied to this resourc

As expected, it does not work.
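The same check can be scripted with python-swiftclient instead of the CLI. This is a minimal sketch under assumptions: the zero-argument callable would wrap something like Connection.get_container in a real script, and we rely on the http_status attribute that swiftclient's ClientException carries on HTTP errors.

```python
def is_readable(list_container):
    """Return True if listing the container succeeds, False on a 403.

    list_container is any zero-argument callable that performs the listing
    and raises an exception carrying an http_status attribute on failure
    (swiftclient.client.ClientException behaves this way).
    """
    try:
        list_container()
        return True
    except Exception as exc:
        if getattr(exc, "http_status", None) == 403:
            # Access denied: the container exists but this account is
            # not in its read ACL yet.
            return False
        raise  # anything else (401, 404, network errors) is a real problem
```

In a real script, list_container might be something like `lambda: conn.get_container('MyNewContainer')`, where conn is a swiftclient Connection built with UserB's credentials and UserA's storage URL passed as preauthurl.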

Now update the ACL for MyNewContainer by adding UserB into the ACL:

UserA$ swift post MyNewContainer --read-acl ""
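The same ACL update can also be done programmatically with python-swiftclient. This is a sketch under assumptions: the auth URL, user, key, and account identifiers are placeholders for your own credentials, and the comma-joined entries follow Swift's read ACL syntax.

```python
def read_acl_header(account_ids):
    """Build the X-Container-Read header granting read access.

    Swift read ACLs are a comma-separated list of entries, such as an
    account's AUTH_ identifier or account:user pairs.
    """
    return {"X-Container-Read": ",".join(account_ids)}


def grant_read(auth_url, user, key, container, account_ids):
    """POST updated metadata to the container, setting its read ACL."""
    from swiftclient.client import Connection  # pip install python-swiftclient
    conn = Connection(authurl=auth_url, user=user, key=key)
    conn.post_container(container, headers=read_acl_header(account_ids))
```

UserA would call grant_read with their own ST_AUTH/ST_KEY values and UserB's AUTH_ account identifier in the account_ids list.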

Check that the ACL was applied:

UserA$ swift stat MyNewContainer
 Account: AUTH_150fef84-e459-4df7-a050-9f9f9f9f9f9c
 Container: MyNewContainer
 Objects: 2
 Bytes: 5
 Read ACL:
 Write ACL:
 Sync To:
 Sync Key:
 Accept-Ranges: bytes
 X-Trans-Id: txfb18c6b3823c444b8e56b-005731449b
X-Storage-Policy: standard
 X-Timestamp: 1462838226.47452
 Content-Type: text/plain; charset=utf-8

Now when UserB tries to list the contents of MyNewContainer, it succeeds:

UserB$ swift --os-storage-url list MyNewContainer

That's it!


IBM Bluemix and IBM DevOps Services with Github Integration

In this post I will give you some background on PaaS.  Then I will show how you can create an IBM DevOps Services project that is linked to GitHub, with automatic deployment to IBM Bluemix after code has been committed to the GitHub project.

The old way

One of the huge time-saving benefits of cloud computing will be appreciated by anyone who understands the process of running and deploying websites.  There have always been ways to automate code deployments to some extent, but they require certain things to be in place.  These include:

  • A web hosting or VPS provider
  • Certain infrastructure components ready to go, such as an installed operating system (e.g. Red Hat, Ubuntu), load balancing, SSL certificate management, and system administration
  • Sizing of the VM to deal with the expected loads and other capacity planning activities
  • Processes for dealing with patching and security vulnerabilities, and actually patching your website VMs
  • Upgrading VMs with later OS revisions, and dealing with the dependency issues this can cause
  • Website code staging and deployment processes
  • Database administration and management

The list is certainly not exhaustive, but you get the idea.

The new way (PaaS)

Platform as a Service (PaaS) is conceptually wedged between Infrastructure as a Service (IaaS) and Software as a Service (SaaS).  In effect, PaaS provides everything in the list above and more!  You do not need to care about the underlying infrastructure, scaling up/down, storage, or compute capacity.  All the IaaS components are abstracted away, and all the developer needs to do is write and deploy code.  It really is that simple.

Keep in mind, developing for PaaS (and cloud in general) requires slightly different thinking for “cloud first” applications.  For example, your filesystem needs to be treated as ephemeral, you need to keep client state server-side somehow, and you will likely need some type of message-passing technology to share information between application instances.

This leads me to the main topic of this post.  IBM's PaaS strategy revolves around IBM Bluemix.  In 2013, IBM teamed up with Pivotal to announce a collaboration in PaaS using the Cloud Foundry framework.  Please watch this video for a great overview of Cloud Foundry.

Essentially, IBM Bluemix gives you a dashboard where you can deploy applications that consume services.  An application might be written in Python and require a MySQL service for storing structured data.  You bind the two together, and you have a scalable application without any infrastructure configuration at all!  That is the magic and the key benefit: it allows you to work on your business problem rather than deal with infrastructure.
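To make the binding concrete: Cloud Foundry platforms like Bluemix hand a bound service's credentials to the application through the VCAP_SERVICES environment variable as JSON. Here is a minimal sketch of reading those credentials from Python; the "mysql" service label is an assumption, so check your own dashboard for the exact service name.

```python
import json
import os


def find_service_credentials(vcap_json, label):
    """Return the credentials dict of the first bound service whose
    service name contains the given label, or None if no match."""
    services = json.loads(vcap_json)
    for service_name, instances in services.items():
        if label in service_name:
            return instances[0]["credentials"]
    return None


if __name__ == "__main__":
    # Cloud Foundry sets VCAP_SERVICES for every running application instance.
    creds = find_service_credentials(os.environ.get("VCAP_SERVICES", "{}"), "mysql")
    print(creds)
```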

IBM provides DevOps Services (previously named JazzHub), which offers a Git-compatible environment for version control, an in-browser IDE, “DevOps” style work tracking and team planning functionality, and deployment services to IBM Bluemix.

Things you need if you are following along at home:

  1. A free id (just go to and click Register on the top right)
  2. A free GitHub account (
  3. A free IBM Bluemix account (
  4. A free IBM DevOps Services account linked to your Bluemix account

When logged into IBM DevOps Services, click the big blue “Create Project” button on the right.

Create Project

Give your project a name, and click the “Connect to an external GitHub repository” button. Then enter the GitHub URL for your project (it should already exist).


On the next screen, enter the options you want, but if you want auto “DevOps” style deployments to Bluemix make sure the Deploy to Bluemix checkbox is selected.


Details of your project should pop up, and you can select which branch you want to track.  Leave the defaults if you are not sure.


In your project screen, click the orange “Build & Deploy” button at the top right and you should see an empty pipeline.  We need to add a builder, which detects code changes and starts the build process, and we also need to add a stage, which for us will deploy to Bluemix.


Before we can add a builder, we need a GitHub personal access token.  Go to GitHub, click the settings icon, then click Applications in the menu bar on the left.  Click “Generate new token”, enter a description, and leave the remaining defaults.


Now, back in your deployment configuration, click the plus sign in “add a builder” and enter the config, including your GitHub token, as shown.


Now click the other plus sign to add a deployer stage.  For organisation, enter your email address (the exact value doesn't really matter).  Enter your application name and click Save.


Now everything should look ready to go!


Clone your GitHub repo:

$ git clone
$ cd mysql_flaskr

Make a minor change to the source code, and push it:

$ git commit -a
[master ab88237] Updated <h1> in layout
1 file changed, 1 insertion(+), 1 deletion(-)
$ git push
Counting objects: 7, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (4/4), done.
Writing objects: 100% (4/4), 425 bytes | 0 bytes/s, done.
Total 4 (delta 2), reused 0 (delta 0)
4261d41..ab88237  master -> master

Soon after you push it, your pipeline will start moving and a minute or so later you should see a green traffic light and a deployment success:


You will see your app appear in the IBM Bluemix dashboard with the deployment log on the right:


Back in DevOps Services, click on the deployment success URL and it will switch you into your detailed log, where you can see all the steps that were taken to deploy the application.



When your app is deployed, you can then go to the Internet URL to see it live on the Internet (in this case it is:


There you have it: a relatively straightforward process to connect an existing GitHub repository with IBM DevOps Services and IBM Bluemix!

The next post will talk about the metadata files you need in your project so Bluemix can deploy it.

IBM, Microsoft partnering in Hybrid Cloud


Yesterday, IBM and Microsoft jointly announced a partnership deal in cloud.  This deal is mainly about having key software components integrated into each other's cloud offerings.  However, for those paying attention, this was not necessarily a surprise.  IBM has partnered with Microsoft for a long time now, most recently around Microsoft Dynamics.  IBM also has partnerships with Salesforce and, most recently, Apple, just to name a few.

For a few years now, IBM has allowed customers to bring their own Passport Advantage software licenses to AWS.  Even though IBM is not currently on the AWS marketplace, through the purchase of Cloudant earlier in 2014, IBM does have an AWS presence.  My view is that as the cloud market matures, much more cross-pollination of products between cloud providers will occur, which will allow many more hybrid cloud architectures to evolve than we have seen until now.  Most of what I have seen to date is hybrid cloud between on-premise and an external cloud provider.  Over the next few years, I believe hybrid cloud links between external cloud providers will become more common.  A hybrid model allows consumers of cloud to pick the right cloud provider for the particular business problem they want to solve, plus it adds a layer of high availability to mitigate the downtime risk of a single cloud provider.  An example would be to utilise IBM's Watson Developer Cloud and integrate that with a hosted Azure SQL database, with your front-end on Azure's website PaaS offering.

Over the next year or so, I am sure the progressive cloud vendors will announce more of these partnerships.  If you are one of those vendors that claims to do ‘cloud’ but is not announcing partnerships and is living in its own silo, I would be concerned.  It may even be too late to start now.  Each cloud vendor needs to play to its strengths, and when they do that, they will find their market.

IBM SoftLayer Melbourne PoD open!

Image courtesy of

As per the press release, IBM SoftLayer Melbourne is open for business as of Tuesday 7th October 2014!

Using the SoftLayer API, we can create a very short Python script to build a new virtual Ubuntu image in the new Melbourne SoftLayer datacenter.  Before you begin, you need a SoftLayer account and your API key, which you can find under Account | Users, then click View to see your API key.

To create a virtual image you simply call the SoftLayer_Virtual_Guest::createObject API.

The parameters for this call are fairly self-explanatory, with one exception: how do you know what the datacenter name for Melbourne is?  (Without going to the customer portal, that is!)


The answer: simply call the SoftLayer_Location::getDatacenters API.  For simplicity, I do this via the HTTP REST API using curl, with some Python JSON post-processing to make it easy to read:

$ curl -s | python -m json.tool

An array of locations will scroll up the screen – but the important one is Melbourne:

"id": 449596,
"longName": "Melbourne 1",
"name": "mel01"

There is the magic short name – mel01.
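The same lookup can be done with the SoftLayer Python library instead of curl. Here is a sketch; the 'Location' service name mapping to SoftLayer_Location follows the library's naming convention, but is an assumption worth verifying against the SoftLayer API docs.

```python
def find_datacenter(datacenters, long_name_prefix):
    """Return the first datacenter dict whose longName starts with the
    given prefix, or None if there is no match."""
    for dc in datacenters:
        if dc.get("longName", "").startswith(long_name_prefix):
            return dc
    return None


def melbourne_datacenter(username, api_key):
    """Fetch all datacenters via the API and pick out Melbourne."""
    import SoftLayer  # pip install SoftLayer
    client = SoftLayer.Client(username=username, api_key=api_key)
    return find_datacenter(client["Location"].getDatacenters(), "Melbourne")
```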

With that information we can now create the short script to provision the guest:

import SoftLayer

client = SoftLayer.Client(username='SLxxxx', api_key='API_KEY')
client_object = client['Virtual_Guest'].createObject({
    'hostname': 'test',
    'domain': '',
    'startCpus': 1,
    'maxMemory': 1024,
    'hourlyBillingFlag': 'true',
    'operatingSystemReferenceCode': 'UBUNTU_LATEST',
    'datacenter': {'name': 'mel01'},
    'localDiskFlag': 'false',
})

for key, value in client_object.items():
    print(key, '->', value)

Save the file as

Before you can execute it, ensure you have the Python SoftLayer library installed. See for instructions.

When you are ready – give it a whirl:

$ python

It will sit there for a moment, then return some values about the new virtual guest, while provisioning continues in the background.
By using the SoftLayer sl command-line interface (CLI) to the API, you can watch the progress of your virtual build and find out when it is ready. Ensure you set up your sl CLI following these instructions.


Then after a few minutes:


After that, you can grab your root password from the password repository in the portal. You can find your passwords under Devices | Manage | Passwords:


Your host should be listed, and just click on the password field:


Side note: you can also use the API to get at your passwords:

curl '\[softwareComponents\[passwords\]\]' | python -m json.tool
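If you would rather script that than eyeball raw JSON, here is a sketch of pulling the passwords out of the getObject response. The response shape assumed here (softwareComponents, each carrying a passwords list of username/password entries) follows from the object mask in the curl call, but verify it against the SoftLayer API docs.

```python
import json


def root_passwords(guest_json):
    """Yield (username, password) pairs from a Virtual_Guest getObject
    response fetched with the softwareComponents[passwords] object mask."""
    guest = json.loads(guest_json)
    for component in guest.get("softwareComponents", []):
        for entry in component.get("passwords", []):
            yield entry["username"], entry["password"]
```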

Armed with your root password, you can ssh in:

$ ssh
Welcome to Ubuntu 14.04.1 LTS (GNU/Linux 3.13.0-35-generic x86_64)

* Documentation:
Last login: Tue Oct 7 06:48:35 2014 from

If your ssh session is slow to respond while you are trying to log in, add the following line to your /etc/ssh/sshd_config file and restart the SSH service:

UseDNS no

From my Melbourne location, the pings are nice and quick as they should be:

$ ping
PING ( 56 data bytes
64 bytes from icmp_seq=0 ttl=55 time=9.301 ms
64 bytes from icmp_seq=1 ttl=55 time=8.006 ms
64 bytes from icmp_seq=2 ttl=55 time=7.800 ms

If you are finished for now, you can cancel your virtual guest; you can create a new one whenever you need it using your Python script.

To cancel:

sl vs cancel 6461446

For a couple of hours' work, how much will this cost me?

$ curl -s


A grand total of US$0.04.

There you have it. A very quick “getting started” tutorial for creating a virtual image via three different ways of using the API (REST, CLI and Python).

The elephant in the room of cloud providers


Of all the tech press cloud articles I read on a daily basis, almost all the comparisons are between AWS, Azure, and Google, where the battle for IaaS, PaaS, and SaaS market share is happening.  There is the odd exception.  But as far as I am concerned, IBM are well in this race, and definitely in the top four of cloud providers.  This is confirmed by the respected Gartner IaaS Magic Quadrant released around two weeks ago, which places IBM 4th behind AWS, Microsoft, and Google.  Even Google only made their debut this year with their recently released IaaS product.

I will be first to admit that IBM were a slow starter in the IaaS market. And I know this frustrated many within the company. SmartCloud Enterprise (SCE) made some progress, but it was a small player compared to the competition. It was really aimed at IBM’s existing enterprise customers, and was an attempt at providing an unmanaged cloud option to those customers who were already dipping their toes into AWS, and more recently Azure.  The problem with SCE was that it was not designed as a solution for consumers or small/mid size businesses to use easily, with the lack of a fast and easy credit card signup option.  When you only have the option for setting up infrastructure via an enterprise account manager plus purchase order, your market will be limited, and many would not call that ‘cloud’ in the first place.

The real strategy shift at IBM began with the acquisition of SoftLayer.  It provides an ability to execute, and an agility, that is competitive with the other big providers.  Prior to SoftLayer, IBM were not seen as a real player in cloud.  Post-SoftLayer, the game changed.  SoftLayer were a proven cloud provider with many years of experience (they were established around 2005, the year before AWS).  As an IBMer, I have seen the changes internally and externally over the 12 months since the acquisition.  The SoftLayer acquisition, closed in mid-2013, was the catalyst for an internal push to educate the whole company about IBM's clear strategy of CAMS (Cloud, Analytics, Mobile, and Social).  All employees, no matter what their role, are given training in all these areas, with a special emphasis on the first Friday of the month, called ThinkFriday, by IBM ThinkAcademy.  This NYT article is a good read with an explanation of the strategy.

With the public beta of Bluemix, based on Cloud Foundry and built on SoftLayer, IBM started to take the challenge to the competition.  Subsequently, IBM launched the IBM Cloud marketplace, which ties all the IBM cloud offerings together, whether PaaS offerings for devs, IaaS offerings for infrastructure needs, or pre-configured SaaS offerings for business solutions.  Having a clear one-stop shop for business needs is essential, and it provides a good entry point for customers to see what IBM has to offer.

IBM has made its strategy very clear to the market and will leave no stone unturned to catch up to its competitors, with $1.2 billion being spent on the expansion of SoftLayer data centres around the world (including two in Australia opening in the next couple of months), more strategic investments (anticipated) and partnerships (e.g. the latest with Docker's excellent OS-container-based virtualisation product), and continual improvement and additions to the IBM Cloud marketplace.  Out of the “big four” cloud providers, only IBM has the experience and a complete set of offerings that can cover the whole spectrum of customers, from startups and small businesses to high-end enterprise customers with specific needs such as SAP or Oracle, which they can run on SmartCloud Enterprise+ (SCE+), the high-end managed enterprise cloud offering.


Checked luggage for Infants and Children on Australian domestic airlines

In this day and age, airlines like to grab as much cash off you as possible, with outrageous credit card booking fees, charges for checked luggage, and so on.  While booking flights recently for my children, it was not clear what each airline's policy was for the extra items you would normally take on holiday with children, like strollers, car seats, and high chairs.  So I went looking for each airline's policy on this.

Qantas make it easy and clear:


  • Children receive the same baggage allowance as adults.
  • Children up to 12 years of age are permitted one car seat and one collapsible pram, stroller or push chair in addition to the checked baggage allowance.

Virgin was also easy to find:


Children are entitled to the same baggage allowance as adults. Adults accompanying children are entitled to carry one car seat or booster seat per child as checked baggage free of charge, irrespective of the weight. If the adult does not have a baggage allowance, the car seat or booster seat can still be checked in free of charge.

The second hardest to locate was Jetstar's, which they hide in the travelling-with-kids section rather than in the baggage allowance section, which would be the first place to look!

From the Jetstar website:

Here’s a list of the bulkier things families can check on Jetstar flights without attracting excess baggage charges:

  • Strollers or pushers
  • Porta-cots and bedding
  • Infant car seats
  • Portable high-chairs

And finally Tiger; they are the strictest:

If you are travelling with an Infant, Tiger Airways will allow, at no additional charge, 1 piece of Infant equipment, such as a pram or portable cot to be checked, in addition to any Luggage Upsize™ allowance purchased. Infants do not qualify for baggage allowance as they are not fare-paying passengers.

Tiger are the only airline that restricts the number of extra items per infant to one. They do not mention their policy for children and booster seats. I would assume it is the same as for infants, but it does not seem to be in an obvious location on their website.

I tweeted @tigerairwaysaus for confirmation on this and this is their response:

@mattgillard Hi Matt- if they are child fares you will have to buy checked luggage which you can do online here

So they allow a car seat as a free checked luggage item for infants (aged under 2) but not for children (aged 2 and over), for whom you need to purchase an extra luggage allowance, which they call Luggage Upsize™.