In this day and age, airlines like to grab as much cash from you as possible, with outrageous credit card booking fees, charges for checked luggage, and so on. While booking flights recently for my children, it was not clear what each airline’s policy was on the extra items you would normally take on holiday with children, like strollers, car seats, high chairs, etc. So I went looking for each airline’s policy on this.

Qantas make it easy and clear:

Children

  • Children receive the same baggage allowance as adults.
  • Children up to 12 years of age are permitted one car seat and one collapsible pram, stroller or push chair in addition to the checked baggage allowance.

Virgin’s policy was also easy to find:

Children

Children are entitled to the same baggage allowance as adults. Adults accompanying children are entitled to carry one car seat or booster seat per child as checked baggage free of charge, irrespective of the weight. If the adult does not have a baggage allowance, the car seat or booster seat can still be checked in free of charge.

The second hardest to locate was Jetstar’s – they hide it in the travelling with kids section rather than the baggage allowance section, which would be the first place to look!

From the Jetstar website:

Here’s a list of the bulkier things families can check on Jetstar flights without attracting excess baggage charges:

  • Strollers or pushers
  • Porta-cots and bedding
  • Infant car seats
  • Portable high-chairs

And finally, Tiger – they are the strictest:

If you are travelling with an Infant, Tiger Airways will allow, at no additional charge, 1 piece of Infant equipment, such as a pram or portable cot to be checked, in addition to any Luggage Upsize™ allowance purchased. Infants do not qualify for baggage allowance as they are not fare-paying passengers.

Tiger are the only airline to restrict the number of extra items per infant to one. They do not mention their policy for children and booster seats. I would assume it is the same as for infants, but it does not seem to be on their website in an obvious location.

I tweeted @tigerairwaysaus for confirmation, and this is their response:

@mattgillard Hi Matt- if they are child fares you will have to buy checked luggage which you can do online here http://t.co/bz1omVKCNb

So they allow a car seat as a free checked luggage item for infants (aged under 2) but not for children (aged 2 and over). For children you need to purchase extra luggage allowance – they call it Luggage Upsize™.


In any project planning exercise – whether it is a pre-sales effort, an internal project, or some other architecture definition for a system – a requirements gathering process takes place very early in the planning stages. Classically, these requirements can be segmented into two categories: functional and non-functional. Another type of requirement that in my experience is often overlooked until the last minute, if not entirely, is the constraint. This is a quick blog post detailing the difference between requirements, constraints and assumptions in an IT context, but it probably maps to other industries as well.

Firstly, requirements. Requirements are usually gathered via workshops with the stakeholders, or via RFP or tender documents. All requirements can be validated, and should be validated at regular intervals throughout delivery of the project. If a particular requirement is not met at any point during the project lifecycle, knowing about it sooner rather than later is preferred. When solutioning a system, requirements come down to two types: functional and non-functional.

Functional requirements detail what the stakeholders expect from the solution, and what the end result should look like.

Non-functional requirements, on the other hand, detail performance and availability metrics around the solution, taking into consideration all the stakeholders’ views – these could also be thought of as constraints on the requirements.

Which brings me to constraints. Usually a constraint is a technical limitation around the solution, which can result in additional functional requirements that need to be captured up front. But a constraint can also be a range of acceptable values for a component of the system, which then becomes a non-functional requirement. An example of the former could be that the solution needs to use a particular product, operating system or other technology due to an IT standard or other ruling, or perhaps one dictated by the overarching enterprise architecture. A constraint that leads to a non-functional requirement could be that the CPU and memory utilisation of the upgraded software must be less than or equal to that of the existing system it will replace. Constraints are often overlooked until later in the solution lifecycle, which is too late. The same effort that goes into the requirements gathering process should also go into surfacing constraints as early as possible.

I will end with a brief word on assumptions. The definition of assumption:

noun
a thing that is accepted as true or as certain to happen, without proof

Essentially an assumption is a requirement that has been ‘made up’ and not validated by your project stakeholders – an educated guess, if you like. When pricing a solution, if you have too many assumptions it is possible that you do not really know what needs to be delivered in the first place, and those assumptions are best re-worded as questions to the appropriate stakeholders. Your aim should be to turn assumptions into validated requirements or remove them altogether.

Understanding the difference between functional requirements, non-functional requirements, constraints and assumptions when developing a solution or opportunity is absolutely essential. Especially when real money is on the line, it could mean the difference between successful delivery and ultimate failure of a project. It also shows that you understand what needs to be delivered, demonstrates that you have thought about all facets of the problem, and shows that you know many of the internal and external factors influencing delivery and can work around them.

I don’t do new year’s resolutions. Having said that, I am going to blog more this year. In my day job I do a lot of research, and much of the time, months later, the information is forgotten, not recorded, or in a black hole. Or, if I am lucky, I have written it down or stored a bookmark somewhere. So without further ado, today’s post is about making you aware that you need to be careful migrating legacy Solaris workloads to the T-series family of servers. This is especially relevant today, maybe more than ever, in the age of virtualisation and clouds. At least once or twice a year in my day job I come across a situation that could have been avoided – a poorly written or poorly tuned application placed on an Oracle SPARC T-series server.

A Brief History

In 2002 Sun Microsystems invested in a company called Afara Websystems, which was developing an innovative processor tuned for highly multi-threaded applications. The processor was codenamed Niagara, and the company included some ex-Sun employees. The idea was to build a platform that would run Java and the web screaming FAST, processing large numbers of execution threads simultaneously. In conventional processors, the biggest source of latency during execution is fetching data from main memory. With a large number of hardware threads, the thread waiting on main memory can be idled while the request is in progress, and the core can execute another ready-to-run hardware thread. Building the T-series platform was about re-thinking the way processors had evolved up until that point and stripping the layers back to basics to reduce latency during instruction execution. Web traffic generally matches this multi-threaded profile that suits T-series hardware. Java can too – or not, depending on which developer you have writing your code. Java application servers can match this profile as well – as long as you don’t have lock contention; once you do, all bets are off and your application slows to a crawl (more on that below). Just remember: more often than not, a legacy Java app server has not been written with optimised multi-threading in mind. That’s been my experience anyway.
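
To make the lock contention point concrete, here is a minimal, hypothetical sketch (the class and variable names are mine, not from any real app server) of the pattern that hurts on a T-series box: many slow hardware threads all serialising on a single monitor, so at any instant only one thread makes progress.

    // HotLockDemo.java - a contrived illustration of coarse-grained locking.
    // On a machine with many slow hardware threads (e.g. 64 on a T2), the
    // synchronized block below serialises every worker onto one lock, so the
    // extra threads buy you nothing and single-thread speed dominates.
    public class HotLockDemo {
        private static final Object LOCK = new Object();
        private static long counter = 0;

        public static void main(String[] args) throws InterruptedException {
            int n = Runtime.getRuntime().availableProcessors();
            Thread[] workers = new Thread[n];
            long start = System.nanoTime();
            for (int i = 0; i < n; i++) {
                workers[i] = new Thread(() -> {
                    for (int j = 0; j < 1_000_000; j++) {
                        synchronized (LOCK) {   // every worker queues here
                            counter++;
                        }
                    }
                });
                workers[i].start();
            }
            for (Thread t : workers) {
                t.join();
            }
            System.out.printf("%d threads, counter=%d, took %.1f ms%n",
                    n, counter, (System.nanoTime() - start) / 1e6);
        }
    }

Swap the shared counter for per-thread counters combined at the end (or finer-grained locks) and the same box scales nicely; that sort of re-work is exactly the tuning these migrations tend to need.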

Without going into too much detail, the T1 processor, as it was later called:

  • was optimised for throughput by keeping each core’s instruction pipeline as busy as possible
  • did not perform out-of-order execution of instructions
  • was a single processor with up to 8 cores
  • ran at speeds up to 1.4GHz
  • had 4 threads per core, giving a maximum concurrency of 32 threads
  • provided a hyper-privileged execution mode (Sun’s entry into virtualisation, with LDoms)
  • had a single floating point unit (FPU) shared by all cores
  • had a single cryptographic unit shared by all cores
  • supported a maximum of 32GB memory
  • had a 3MB shared L2 cache
  • had an 8kB primary data cache per core
  • had a 16kB primary instruction cache per core
  • had low power consumption (72W)

The design of the T1 processor was open-sourced in 2006 as the OpenSPARC project.

In October 2007 came the T2:

  • 8 threads per core, doubling maximum concurrency to 64 threads
  • runs at speeds up to 1.6GHz (slightly higher)
  • an FPU per core, rather than per processor (a significant improvement for floating point operations)
  • dual 10Gb Ethernet integrated onto the chip
  • 1 crypto unit per core
  • shared L2 cache increased from 3MB to 4MB
  • power consumption up to ~95W due to the extra integration on the chip
  • improved instruction pre-fetching to improve single-thread performance
  • 2 integer Arithmetic Logic Units (ALUs) per group of 4 threads – up from one on the T1 – which increased the throughput of integer operations

In 2008 the T2 Plus was released, which provided up to 4 sockets of T2s (total concurrency 4 × 64 = 256 threads).

In September 2010 the T3 launched:

  • 16 cores, still with 8 threads per core
  • Shared L2 cache increased from 4MB to 6MB
  • Primary instruction cache still 16kB (per core)
  • Primary data cache still 8kB (per core)
  • Came in variants with 1, 2 or 4 physical processors, running at 1.65GHz (almost no change from the T2)
  • Up to 512GB memory depending on the T3 server model

Across the first three generations the chip did not change much – mostly incremental changes, a few more cores, a few more chips, some extra buses, but nothing spectacular.

One thing that is constant across them all is extremely poor single-thread performance. I cannot stress this enough! Oracle don’t include this in the marketing material. A combination of a slow clock speed, no L3 cache, no out-of-order execution of instructions, and poorly written or poorly tuned applications makes for pretty dismal performance. These same applications run perfectly fine on the circa 2004–2006 UltraSPARC III/IV/IV+ line of processors that power the multiprocessor V480, V490, V880, V890 and E25K platforms! Your single-threaded app runs perfectly fine on a Sun Fire V890, but when migrated to a new and shiny T3-4 it runs like a dog – sometimes up to 10x slower. You probably won’t be impressed, but even more frustrating is that someone in the business has just made a significant investment in newer hardware with an expectation of better performance. On a T5220 (T2 based) the litmus test was to check CPU utilisation while your workload was running. If it sat at a constant 3.125% utilisation and never went any higher, chances are you had a single-thread problem: one hardware thread running flat out on a box with 32 hardware threads (1/32 × 100 = 3.125%).
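
If you want to see what that signature looks like on your own hardware, here is a trivial, hypothetical sketch (not from any real workload) that pegs exactly one thread; while it runs, overall CPU utilisation should hover around 100 divided by the number of hardware threads – roughly 3.125% on a 32-thread box.

    // SingleThreadHog.java - burns exactly one hardware thread so you can
    // watch the flat-line utilisation signature of a single-threaded workload.
    public class SingleThreadHog {
        public static void main(String[] args) {
            int hwThreads = Runtime.getRuntime().availableProcessors();
            System.out.printf("Expect roughly %.3f%% total CPU while this runs%n",
                    100.0 / hwThreads);

            long x = 0;
            while (true) {
                // busy work the JIT cannot optimise away entirely
                x += System.nanoTime() & 1;
                if (x == Long.MIN_VALUE) {
                    break; // practically unreachable; keeps the loop well-formed
                }
            }
            System.out.println(x);
        }
    }

Watch the machine with your favourite CPU monitor while it runs; if overall utilisation flat-lines at that figure and the job is still slow, you have found your single-thread problem.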

Single Thread Applications Live!

The T4 was released in late 2011:

  • Processor speed jumped to 2.85GHz or 3GHz
  • Integer and floating point pipelines are more efficient
  • Primary data cache doubled to 16kB (per core)
  • L2 cache is now 128kB per core
  • New 4MB L3 cache shared by all 8 cores
  • First T-series chip that performs out-of-order execution
  • 8 cores per chip (down from 16)
  • Still 8 threads per core
  • Critical thread mode

This significantly improved single-thread execution (Oracle quote a 5x single-thread performance increase). The specs above make it clear why it is faster for single threads:

  • much faster clock speed
  • extra on core cache + the new L3 shared cache
  • improved instruction pipelines
  • and the killer – critical thread mode.

Effectively, this means that if a core detects it is executing only a single thread, then all the resources that would otherwise be used to handle other threads are directed to helping that single thread execute as fast as possible.
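
As far as I know, the way to hint this from software on Solaris 11 is to place a hot thread or process into the fixed-priority (FX) scheduling class at priority 60, which the scheduler treats as critical. The sketch below shells out to priocntl to do that for the current JVM process; treat the exact flags (and whether your release supports this at all) as assumptions to verify against the priocntl man page, and note it needs root or the proc_priocntl privilege.

    // MarkCritical.java - hedged sketch: ask Solaris 11 to treat this JVM's
    // threads as critical by moving the process into the FX class at priority 60.
    // Assumes Solaris 11 on a T4 or later and sufficient privilege to run
    // priocntl against this process; check the flag details locally.
    import java.lang.management.ManagementFactory;

    public class MarkCritical {
        public static void main(String[] args) throws Exception {
            // The runtime MX bean name is conventionally "pid@hostname".
            String pid = ManagementFactory.getRuntimeMXBean().getName().split("@")[0];

            // Equivalent of: priocntl -s -c FX -m 60 -p 60 -i pid <pid>
            int status = new ProcessBuilder(
                    "priocntl", "-s", "-c", "FX", "-m", "60", "-p", "60",
                    "-i", "pid", pid)
                    .inheritIO()
                    .start()
                    .waitFor();
            System.out.println("priocntl exit status: " + status);

            // ...the hot, mostly single-threaded workload would follow here.
        }
    }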

The T4 is the first T-series platform on which you can do serious application consolidation from legacy hardware. Oracle VM Server for SPARC (OVM, previously known as LDoms) gives you the flexibility to carve up a T4-4 (4 processors) to place legacy workloads. As all T-series chips only support Solaris 10 (and now Solaris 11), if you want to consolidate Solaris 8 or 9 workloads you need branded zones within an OVM guest.

Conclusion

The underlying architecture of the T-series has not changed much since its inception about seven years ago. In the last few years I have seen customers consolidate existing workloads onto these platforms and run into many unexpected performance problems. Mostly these have been resolved by optimising database queries, or by re-architecting applications to work more efficiently under the T-series multi-thread model. With the T4 and the upcoming T5, things should improve, and sites with large legacy Sun/Solaris footprints will finally be able to consolidate without having to spend a bomb on M-series servers. Most of the time, when you want to consolidate and get power-saving benefits, the T4 processor or later fits the bill. The T5 is expected to bring back 16 cores per chip, as per the T3 architecture, but with the T4’s single-thread speed improvements.

Just bought a Netgear WN2500RPT Dual Band wireless range extender from JB Hi-Fi for $87. A bargain, really! Even the sale price of $99 (down from $129) is good value. It also has 4 wired ports, which is handy, although I do not really need them at the moment. The speed of the ports is not specified – my assumption is that they are 100Mb/s. The only other dual band extender on the market (that’s not an Apple AirPort Express) appears to be a Belkin, but it is $149.

This extender covers the last dark area around my house where wireless drops out more often than not. I set it up as a wireless client to the 2.4GHz and 5GHz WLANs provided by my Apple Time Capsule/AirPort Extreme. I also have an AirPort Express to extend my WLAN in the other direction.

A quick tip when extending your WLAN: make sure you keep the same SSID. This way your devices will always connect to the strongest signal with no further configuration required.

This evening I discovered an issue with the Facebook iPhone application. If you create a new album with the iPhone app, the privacy setting for that album will be set to Everyone!

Steps to reproduce:

  1. Go to Photos in the Facebook iPhone application
  2. Press + to add a new album
  3. Enter an album name
  4. Click Create (note the lack of Privacy setting)
  5. Now open a web browser. Go to your profile and click on Photos.
  6. Click Album Privacy
  7. You will see that the album’s privacy is set to Everyone
  8. However, if you create a new album through the web browser interface, the default privacy setting is correct

I have reported this to Facebook as a bug – they have not commented as yet.

My advice is to check your privacy settings if you have been creating photo albums on your iPhone!

Update 15/04/2010: Facebook have responded acknowledging this as a bug:

—– Original message —–
From: “Facebook Support” <xxxx@support.facebook.com>
To: matt@xxx
Date: Wed, 14 Apr 2010 17:58:08 -0700
Subject: Re: My updated privacy settings are not functioning correctly: iphone facebook app default settings…

Hi Matt,

We are aware of the problem that you described and apologize for the inconvenience.  Unfortunately, we do not have a specific date for when this issue will be resolved but hope to fix it as soon as possible. We appreciate your patience.

Thanks for contacting Facebook.

Something has been bugging me since I upgraded to Firefox 3.6: my mouse wheel broke. Now, you don’t realise you miss something until it is gone… and man, did I miss it.

After trying many things (upgrading drivers, etc…) I just found a simple solution to my problem this morning.

Make sure “Microsoft Office 97 Scrolling Emulation” is turned on. I had “Universal Scrolling” set. There appear to be two different mouse wheel protocols, and Firefox has started to support only one. Thanks guys. ;)

Solution is here:

https://support.mozilla.com/en-US/forum/1/572726?s=scroll%20wheel%203.6

Now why Mozilla did not publish this in the release notes I have no idea!
