Windows 8.1 – change for the sake of change

I’ve now installed Windows 8.1 three separate times and used it on and off for a couple of months, and I can’t say I’m really warming to it.

The problem, as has been pointed out many times now, is that Microsoft seems to have decided that tablets are the future, and rather than build an OS specifically for tablets, they transformed their desktop OS to make it more tablet-friendly, and in the process made it very desktop-unfriendly.

Continue reading

2014 desktop computer upgrade

After four years, it finally looks like it’s time to upgrade my desktop machine.  For most things, the machine is fine, but when it comes to processing images in Lightroom, the delay in rendering each image (4-5 seconds per 16MP raw file) has started to become annoying.  The hope is that by jumping forward three processor generations, and doing some modest overclocking, I can get that down by 40-50%.

Continue reading

The case for the (still) missing Apple xMac


Ever since the introduction of the first Power Mac G5 towers, a number of Mac users have been holding out hope for a mid-range machine that would offer more expansion and upgradability than an iMac, at less cost than a Power Mac G5 (or later Mac Pro).  Ars Technica’s John Siracusa gave this elusive product a name: the xMac.

The basic premise of the xMac, of course, was that we needed a Mac that was both semi-affordable (not being huge was nice too) and at the same time somewhat future-proofed.  The iMac, for all its virtues, has never been a particularly friendly machine to upgrade, and it has only gotten worse with time.  These days, even changing the hard drive is a pain, and of course if the display goes, you’re hosed.  The Mac Pro, meanwhile, has always been pretty good on the expansion front, but it’s hard to justify paying $2500+ when a much less expensive machine would work equally well for my tasks.  Plus, recent rumors notwithstanding, Apple seems to have more or less abandoned the Mac Pro at this point, leaving it woefully outdated (the CPUs are two generations out of date, the machines lack Thunderbolt, etc.).

Continue reading

The 2013 Apple Mac Pro wastebasket edition


Clearly the most striking thing about Apple’s newest Mac Pro is how little it looks like its predecessor.  Eschewing the large silver tower design that Apple has used more or less unchanged since releasing the original Power Mac G5 in 2003, the new Mac Pro has the appearance of a sleek dark-gray cylinder, or, as some have unkindly suggested, a trashcan.

Continue reading

make scales with # processors

I recently had access to an 18(!) core machine, so I naturally ran my favorite benchmark – building the clang 3.2 C/C++ compiler – using between 1 and 36 threads.  The build scaled quite well.  Going from 1 to 6 threads gave a 5.5x speedup, while going from 1 to 12 gave a 9.3x speedup.  At 18 threads, the speedup was 11.8x.  Above 18 threads, there was no further speedup.  Given that the makefiles don’t seem to have been tailored specifically for highly parallel builds, that’s pretty good overall.
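
A benchmark like this is easy to script.  Below is a minimal sketch of the kind of timing loop involved; the source directory name, configure flags, and the particular job counts are illustrative, not necessarily the exact ones used for the numbers above.

    # Time a clean build of LLVM/clang 3.2 at several -j values.
    # Paths and configure options are illustrative.
    for j in 1 2 4 6 8 12 18 24 36; do
        rm -rf build && mkdir build && cd build
        ../llvm-3.2/configure --enable-optimized > /dev/null
        /usr/bin/time -f "$j jobs: %e seconds" make -j"$j" > /dev/null
        cd ..
    done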

Below are the charts, first of time vs. number of threads used by make, then of speedup vs. number of threads used by make.

Continue reading

Another case where SSDs don’t help

The project I’m working on involves compiling a huge number of source files, which means we need a sizable amount of scratch space on which to run our experiments.  Our main compute server was running a little short on disk space, so it seemed like the appropriate time to add a new disk.  We considered both solid state and traditional spinning-platter drives.  The SSD seemed like the better-performing option, but we eventually settled on an HDD, as the price of SSDs (due to restricted suppliers) was still prohibitive.

I was curious, though, how much performance improvement an SSD might have yielded, so I ran a small experiment on my desktop, which does have a (small) SSD.  I built the clang C/C++ compiler version 3.2, first off the HDD (a typical 7200RPM 750GB affair), then off my SSD (a Crucial SandForce MLC device), and finally off a ramdisk (Linux tmpfs).
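
For anyone wanting to reproduce a comparison like this, the ramdisk case is the only one that needs any setup; a rough sketch is below.  The mount point, size, and paths are assumptions, and the HDD and SSD runs are just the same build done from directories on those drives.

    # Create a tmpfs ramdisk (mount point and size are assumptions).
    sudo mkdir -p /mnt/ramdisk
    sudo mount -t tmpfs -o size=4G tmpfs /mnt/ramdisk

    # Run the same build from each location and compare the times.
    cp -r ~/src/llvm-3.2 /mnt/ramdisk/
    cd /mnt/ramdisk/llvm-3.2
    ./configure --enable-optimized
    time make -j8    # repeat with the source tree on the HDD and on the SSD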

The result?

Less than a 1% difference in compile time between the 3 options.

Similar to the case of Lightroom, it looks like compiling, at least for a mid-sized project (500MB of source), doesn’t benefit from an SSD over a hard disk.  Considering that ramdisk and HDD performance were virtually identical, it seems quite likely that the data never even left the operating system’s disk cache.

File compression on UNIX

I’ve been moving around a lot of data lately, particularly over the network, so it seemed like a good idea to settle on a compression regimen.  Networks are fast and all, especially at school, but moving multiple gigabytes of data still doesn’t happen instantly.  So I did a comparison of the current mainstream compression programs on Linux.  The test system had a fast SSD, so the operations were mainly CPU-bound.

The contenders

  • bzip2 – a fairly popular replacement for gzip that compresses better, though it is generally slower at both compressing and decompressing.
  • compress – interesting for historical purposes and accessing old archives, but no longer really used otherwise.
  • gzip – intended as a free replacement for compress, it’s still the most commonly used UNIX compression tool.
  • lzip – an FSF-endorsed LZMA-based encoder claiming higher efficiency than more common tools.
  • lzop – uses a similar algorithm to gzip, but claims to be much faster and so particularly useful for large data files.
  • xz – an LZMA-based encoder claiming high efficiency and speed.
  • zip – still the de facto standard on Windows, but not particularly popular on Linux.

The Test

To compare, I compressed and decompressed a 220MB tar archive containing a distribution of the clang C/C++ compiler.  For all programs other than compress, which only has one setting, I tried the minimum compression setting (-1), the maximum compression setting (-9), and the default setting (no option).
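
A benchmark like this is straightforward to script.  The sketch below shows one way to time the tools that share gzip’s in-place convention; the archive name, the tool list in the loop, and the output format are illustrative, and compress, zip, and lzop would need slightly different invocations.

    # One possible timing loop (archive name and output format are illustrative).
    ARCHIVE=clang-dist.tar
    for prog in gzip bzip2 lzip xz; do
        for level in -1 "" -9; do
            cp "$ARCHIVE" test.tar
            /usr/bin/time -f "$prog $level compress: %e s"   $prog $level test.tar
            stat -c "$prog $level size: %s bytes" test.tar.*
            /usr/bin/time -f "$prog $level decompress: %e s" $prog -d test.tar.*
            rm -f test.tar*
        done
    done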

Continue reading

Speeding up builds with distcc on Ubuntu

distcc is a very clever utility aimed at speeding up builds for large projects.  It distributes compilations across a range of machines, piggybacking on make’s ability to run jobs in parallel.  If you have multiple machines of fairly comparable processing power, distcc can vastly decrease the amount of time a large build takes.  Here, we look at how to use distcc on an open network with machines running the same operating system.  Note that you can also use distcc in a cross-compile environment, as described here.

For the purposes of this tutorial, we’ll call the machine that runs the build locally the build host and the other machines compile hosts.
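
To give a feel for where this ends up, here is a minimal sketch of a manual setup; the host names, subnet, and job count are placeholders, and on Ubuntu the daemon can also be configured through /etc/default/distcc.

    # On each compile host: install distcc and allow connections from
    # the build host's network (the subnet here is a placeholder).
    sudo apt-get install distcc
    distccd --daemon --allow 192.168.1.0/24

    # On the build host: list the compile hosts (placeholder names) and
    # hand compilation off to distcc with a job count larger than one
    # machine's core count.
    sudo apt-get install distcc
    export DISTCC_HOSTS="localhost box1 box2 box3"
    make -j12 CC=distcc CXX=distcc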

Continue reading

Building compcert

CompCert is a formally verified C compiler developed at INRIA, the French national institute for research in computer science and automation.  It uses machine-verified proofs to guarantee the correctness of the code it generates.  As such, it’s an interesting tool for exploring compiler correctness.

Below are instructions for building it (from scratch), applicable to the current 1.12 release.
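
Before diving into the details, here is a compressed sketch of the overall shape of the build; the tarball name, configure target, and the use of distribution packages for OCaml and Coq are assumptions (CompCert is picky about Coq versions, so a matching Coq may need to be built by hand).

    # Rough outline only; exact prerequisite versions matter.
    sudo apt-get install ocaml coq                 # OCaml and Coq are required
    tar xzf compcert-1.12.tgz && cd compcert-1.12  # tarball name assumed
    ./configure ia32-linux                         # pick your target platform
    make all                                       # checks the proofs, builds ccomp
    sudo make install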

Continue reading

Building GCC from trunk on Ubuntu 12.04

The current development version of GCC lives in the trunk of a Subversion repository.  It is fairly easy to build on Ubuntu; however, there are some dependencies that have to be met.  As it’s often handy to have the latest ‘bleeding edge’ version of GCC around (for verifying compiler bugs), I’ve written out the procedure below.
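
For the impatient, the short version looks something like the sketch below; the install prefix, language list, and job count are just illustrative choices.

    # Build dependencies (download_prerequisites fetches GMP, MPFR and MPC).
    sudo apt-get install build-essential flex bison subversion
    svn checkout svn://gcc.gnu.org/svn/gcc/trunk gcc-trunk
    cd gcc-trunk && ./contrib/download_prerequisites && cd ..

    # GCC strongly prefers an out-of-tree build.
    mkdir gcc-build && cd gcc-build
    ../gcc-trunk/configure --prefix=$HOME/gcc-trunk-install \
        --enable-languages=c,c++ --disable-multilib
    make -j4 && make install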

Continue reading