Current C++ Compile-time Performance

I’ve started regularly building a number of large C and C++ packages again (Chromium, the Linux kernel, GCC and others), so I thought it would be useful to get a sense of how fast the current compilers are relative to each other.

For my test case I chose a library from the LLVM 2.8 toolchain.  I went with 2.8 as it’s the newest version I found that would compile with GCC (g++) 3.4.  I originally hoped to find a test that would build with all versions of g++ going back to 2.7, but unfortunately the C++ language specification and compiler strictness have changed a lot in 20 years, so that wasn’t possible.  I compared build times using debug options (-O0 -g), standard build options (-O2 -g) and optimized build options (-O3 alone).  Time was measured in seconds.
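
Roughly speaking, each measurement is just a timed clean rebuild of the library with the flag set swapped out; something along these lines (the directory and make variable below are illustrative rather than the exact ones used):

    # Time a clean rebuild of one LLVM 2.8 library under each flag set.
    cd llvm-2.8/lib/CodeGen
    for flags in "-O0 -g" "-O2 -g" "-O3"; do
        make clean > /dev/null
        echo "flags: $flags"
        time make CXXFLAGS="$flags"
    done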

Continue reading

Is G++ getting slower over time?

A rather common complaint about GCC, the GNU C/C++/Java/etc. compiler suite, is that each new release is slower than the last.  To test this claim, I ran some timings building LLVM 2.8’s code-generation support library, which seemed a fair benchmark given that it’s a sizable collection of C++ code.

As the results below indicate, looking at all versions of g++ from 3.4 to the current development source (‘trunk’), there has been an overall slowdown between g++ 3.4 (released in 2004) and current trunk (to be released in early 2013) at both the standard (-O2 -g) and optimized (-O3) build levels, but the versions in between have fluctuated.  g++ 4.0 improved on g++ 3.4 in almost all respects, and versions 4.3 and 4.4 offered some speedups over their predecessors.  More recent g++ versions, starting with 4.5, have been steadily getting slower, by an amount varying between 5% and 20% with each release.

Continue reading

Faster builds with multithreaded make

Multi-core processors are the norm at this point, with four and now six cores pretty much standard on desktop machines.  While the vast majority of software packages, including pretty much all compilers, are single-threaded, the build tool GNU Make has long been able to run many compile jobs in parallel via its -j flag.  This requires makefiles to be written carefully, with all dependencies spelled out explicitly, but the speedup can be quite significant.

As an example, here are the build times for the clang C/C++ compiler and the LLVM support libraries using from 1 to 10 parallel jobs.  The build machine has a 4-core Intel Core i7 920 processor (2.6 GHz, with Hyper-Threading), which was state of the art about 3 years ago.
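
For anyone who hasn’t tried it, the invocation is just a flag on the make command line; the job count below is only a suggestion (the number of hardware threads is a common starting point):

    # Run up to 8 compile jobs at once (4 cores with Hyper-Threading).
    make -j8

    # Or let the machine decide: nproc reports the hardware thread count.
    make -j"$(nproc)"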

Continue reading

One way to reduce compile time

GCC has been the de facto standard compiler for building open-source software (e.g. Apache, Firefox, MySQL, etc.) on Linux and most other platforms for quite a while.  A standard complaint, though, is that the compiler has been getting steadily slower over time.  I’ve found this anecdotally true, and while it doesn’t really matter for small projects, when compiling Firefox takes upwards of an hour it starts to be a more significant concern.

Continue reading

Building obsolete software, with debootstrap and chroot

One of my ongoing projects has been to build pretty much all major versions of GCC, mainly so that I can test the performance of the compilers over time.  For the last 7 or 8 versions, that’s not so hard – they build more or less okay on a current Ubuntu system.  Unfortunately, the further back you go, the more breakage there is.  For example, the C++ runtime in gcc 3.3 and 3.4 appears not to work with the current version of GNU libc.  And I can’t very well downgrade glibc without breaking, well, everything.

Happily, the combination of the chroot and debootstrap tools offers a rather painless alternative.
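
The basic recipe is short.  Something like the following is enough to get a self-contained older system to build in (the release name, target directory and mirror here are just examples; substitute whichever old release you actually need):

    # Unpack a minimal copy of an older release into a directory.
    sudo debootstrap hardy /srv/chroot/hardy http://old-releases.ubuntu.com/ubuntu/

    # Drop into it; anything built inside sees that release's glibc, gcc, etc.
    sudo chroot /srv/chroot/hardy /bin/bash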

Continue reading

Building clang on Ubuntu Linux 12.04

Background

For a long time, the LLVM project used the various GCC front ends with its compiler.  However, at the prompting of Apple, which was looking for a faster compiler as well as one unencumbered by the GPLv3 license, the clang project was begun in 2007.  Since then, clang has become a full-featured C and C++ compiler.  While installing it on current versions of Linux isn’t quite the chore that installing GCC is, there are still a number of complications.  Below are my notes on installing the compiler on the Ubuntu 12.04 (Precise) Linux distribution on a 64-bit (x86_64) machine.  Note that the compiler can still generate 32-bit code (using the -m32 switch).
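
In outline, a from-source build of that era went roughly as follows.  The checkout URLs and configure flag below reflect the LLVM documentation of the time and should be treated as a sketch rather than the exact recipe; the notes in the full post cover the prerequisite packages and the gotchas:

    # Check out LLVM with clang nested inside its tools/ directory.
    svn co http://llvm.org/svn/llvm-project/llvm/trunk llvm
    svn co http://llvm.org/svn/llvm-project/cfe/trunk llvm/tools/clang

    # Configure and build out of tree, then install.
    mkdir build && cd build
    ../llvm/configure --enable-optimized
    make -j4
    sudo make install

Once installed, generating 32-bit output on the 64-bit machine is just a matter of the usual switch, e.g. clang -m32 hello.c -o hello32.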

Continue reading

Software Dependency Hell

Back when I was first starting to use Linux, I made the mistake of attempting to upgrade my installation, package by package, to a newer release (I didn’t have a CD burner at the time).  Unbeknownst to me, the newer distribution included a newer, incompatible version of the GNU C library (glibc), so when I upgraded the glibc package, every application that was dynamically linked against glibc, including the package manager itself, broke.  The system was hosed.

Continue reading

SSDs and Lightroom: Bitten by Amdahl’s Law

Spinning pizza of death

I’ve been using Adobe’s Lightroom image processing software since pretty much the very first beta release to organize and edit my photos.  On the whole, it’s a well laid-out application with a number of very useful features and it’s capable of producing excellent quality output.  That said, using Lightroom has always been an exercise in patience.  It’s simply not a very fast program.  For bulk tasks like exporting JPEGs from RAW images, that’s not a problem – you get it started and go off and do something else.  But when editing individual images starts to bog down, it’s a lot more frustrating.

The sluggishness has been particularly noticeable since I got my (16MP) Olympus E-M5 this summer.  The files aren’t that much bigger than those from my older 12MP cameras, but for whatever reason, editing them has been a lot more painful.  So in a fit of frustration, I finally broke down and ordered an SSD (solid state drive) for my main computer.
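
The title gives away the lesson, of course.  Amdahl’s law says that if only a fraction p of a task is sped up by a factor s, the overall speedup is capped at

    S = \frac{1}{(1 - p) + p/s}

so if, purely for illustration, disk I/O were 20% of an editing operation and the SSD made it ten times faster, the operation as a whole would only speed up by about 1/(0.8 + 0.02), or roughly 1.2x.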

Continue reading

iPhone 5 – Hooray for refinement

So according to the pundits, Apple’s iPhone 5 is a massive disappointment because it doesn’t offer a completely new body design. And uses a different dock connector.  And doesn’t make toast.  Or something.

Look – I understand that the press can’t just go around saying nice things about Apple constantly.  There is a rather vocal legion of people who have a visceral dislike for the company and are liable to make lives unpleasant if the press is insufficiently ‘critical’ of new Apple products.  That said, it all seems a bit thin.

What Apple did is more or less what they have a history of doing since the beginning of the Steve Jobs era – taking a successful product and improving it (or trying to).  They’ve never followed the approach of competitors who change their designs 180 degrees every two years.  Does this mean that they’re getting lazy and uncompetitive?  I guess that’s one way to read it.  But as a consumer, I find it refreshing not having to relearn everything each time I update. Familiarity is (generally) good.

Continue reading

Apple’s missing xMac

Mac mini

One of the big complaints about Apple’s computers is that they’re expensive.  In fact, this is often not true – comparing like products with like components, one tends to find that Apple prices things reasonably close to where its competitors do.  What Apple does do, however, is choose not to compete in certain segments of the market.

In most cases, this makes sense.  They don’t really have an entry in the cheap desktop and cheap laptop areas, and it’s hard to see how they would do so without offering a product with serious drawbacks.

There is one gap that I find fairly annoying, however – a mid-priced, expandable desktop machine.

Continue reading