Installing Snow Leopard in VMware Fusion

When it comes to running older software on current systems, Apple has a pretty poor track record.  Last year’s macOS 10.15 Catalina release removed support for all 32-bit applications, including, for example, Lightroom 6 and Photoshop CS6.  I am as a result sticking with the prior release (macOS 10.14).  Eventually I will probably have to upgrade and resort to running my old software in a virtual machine.  As it happens, I did more or less the same dance nine years ago when Apple released Mac OS X 10.7 Lion and removed the ability to run older PowerPC applications via Rosetta.  Fortunately, at the time VMware’s Fusion product did allow running the prior 10.6 release, and I have relied on this more than a few times in the years since.  I’m a big fan of Fusion, as it allows one to test and experiment with all manner of different operating systems, without of course having to buy any hardware or risk breaking your main machine.

Continue reading

Windows 8.1 – change for the sake of change

I’ve now installed Windows 8.1 three separate times and used it on and off for a couple of months, and I can’t say I’m really warming to it.

The problem, as has been pointed out many times now, is that Microsoft seems to have decided that tablets are the future, and rather than build an OS specifically for tablets, they transformed their desktop OS to make it more tablet friendly, and in the process made it very desktop unfriendly.

Continue reading

2014 desktop computer upgrade

After four years, it finally looks like it’s time to upgrade my desktop machine.  For most things, the machine is fine, but when it comes to processing images in Lightroom, the delay involved in rendering each image (4-5 seconds per 16MP raw file) has started to be annoying.  The hope is that by jumping forward 3 processor generations, and doing some modest overclocking, I can get that down by 40-50%.

Continue reading

The case for the (still) missing Apple xMac

Ever since the introduction of the first Power Mac G5 towers, a number of Mac users have been holding out hope for a mid-range machine that would offer more expansion and upgradability than an iMac, at less cost than a Power Mac G5 (or later Mac Pro).  Ars Technica’s John Siracusa gave this elusive product a name: the xMac.

The basic premise of the xMac, of course, was that we needed a Mac that was both semi-affordable (not being huge was nice too) and at the same time somewhat future-proofed.  The iMac for all its virtues has never been a particularly friendly machine for upgrading, and it has only gotten worse with time.  These days, even changing the hard drive is a pain, and of course if the display goes, you’re hosed.  The Mac Pro meanwhile has always been pretty good on the expansion front, but it’s hard to justify paying $2500+ when a much less expensive machine would work equally well for my tasks.  Plus, recent rumors notwithstanding, Apple seems to have more or less abandoned the Mac Pro at this point, leaving it woefully outdated (the CPUs are two generations out of date, the machines lack Thunderbolt, etc.).

Continue reading

The 2013 Apple Mac Pro wastebasket edition

Clearly the most striking thing about Apple’s newest Mac Pro is how little it looks like its predecessor.  Eschewing the large silver tower design that Apple has used more or less unchanged since releasing the original Power Mac G5 in 2003, the new Mac Pro has the appearance of a sleek dark-gray cylinder, or as some have unkindly suggested, a trashcan.

Continue reading

make scales with # processors

I recently had access to an 18(!) core machine, so I naturally ran my favorite benchmark – building the clang 3.2 C/C++ compiler – using between 1 and 36 threads.  The build scaled quite well.  Going from 1 to 6 threads gave a 5.5x speedup, while going from 1 to 12 gave a 9.3x speedup.  At 18 threads, the speedup was 11.8x.  Above 18 threads, there was no further speedup.  Given that the makefiles don’t seem to have been tailored specifically for heavily parallel builds, that’s overall pretty good.

Below are the charts, first of time vs. number of threads used by make, then of speedup vs. number of threads used by make.
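
For anyone who wants to reproduce this sort of measurement, something along the lines of the sketch below will do.  The build directory path and the thread counts are placeholders rather than my actual setup; the script just runs a clean parallel build at each -j level and records the wall-clock time.

```python
#!/usr/bin/env python3
"""Time a parallel build at several -j levels (rough sketch).

Assumes a configured build tree already exists in BUILD_DIR; the path
and thread counts below are placeholders, not the actual setup used
for the numbers in this post.
"""
import subprocess
import time

BUILD_DIR = "/scratch/clang-build"          # hypothetical build directory
THREAD_COUNTS = [1, 2, 4, 6, 12, 18, 24, 36]

results = {}
for jobs in THREAD_COUNTS:
    # Start from a clean tree so each run does the same amount of work.
    subprocess.run(["make", "-C", BUILD_DIR, "clean"],
                   check=True, stdout=subprocess.DEVNULL)

    start = time.monotonic()
    subprocess.run(["make", "-C", BUILD_DIR, f"-j{jobs}"],
                   check=True, stdout=subprocess.DEVNULL)
    elapsed = time.monotonic() - start

    results[jobs] = elapsed
    speedup = results[THREAD_COUNTS[0]] / elapsed
    print(f"-j{jobs:2d}: {elapsed:7.1f} s  (speedup {speedup:.1f}x)")
```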

Continue reading

Another case where SSDs don’t help

The project I’m working on involves compiling a huge number of source files, which means we need a sizable amount of scratch space on which to run our experiments.  Our main compute server was running a little short on disk space, so it seemed like the appropriate time to add a new disk.  We considered both solid state and traditional spinning-platter drives.  The SSD seemed like the better-performing option, but we eventually settled on an HDD, as the price of SSDs (due to restricted supply) was still prohibitive.

I was curious, though, how much performance improvement an SSD might have yielded, so I ran a small experiment on my desktop, which does have a (small) SSD.  I built the clang C/C++ compiler version 3.2, first off the HDD (a typical 7200RPM 750GB affair), then off my SSD (a Crucial SandForce MLC device), and finally off of a ramdisk (Linux tmpfs).

The result?

Less than a 1% difference in compile time between the 3 options.

Similar to the case of Lightroom, it looks like compiling, at least for a mid-sized project (500MB of source), doesn’t benefit from an SSD vs. a hard disk.  Considering that ramdisk and HDD performance were virtually identical, it seems quite likely that the whole build never even left the operating system’s disk cache.
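
If you want to run a similar check yourself, a sketch along these lines is enough.  The source and mount-point paths are placeholders for wherever your HDD, SSD, and tmpfs happen to live, and the configure/make step will of course vary by project.

```python
#!/usr/bin/env python3
"""Compare build times on different filesystems (rough sketch).

The mount points, source path, and build commands are placeholders;
adjust for your own HDD/SSD/tmpfs locations and project.
"""
import shutil
import subprocess
import time
from pathlib import Path

SRC = Path("/home/me/llvm-3.2.src")          # hypothetical unpacked source tree
TARGETS = {
    "hdd":     Path("/data/build-test"),     # spinning-disk mount
    "ssd":     Path("/ssd/build-test"),      # SSD mount
    "ramdisk": Path("/dev/shm/build-test"),  # tmpfs on most Linux systems
}

for name, dest in TARGETS.items():
    # Put a fresh copy of the sources on the filesystem under test.
    if dest.exists():
        shutil.rmtree(dest)
    shutil.copytree(SRC, dest)

    start = time.monotonic()
    subprocess.run(["./configure"], cwd=dest, check=True,
                   stdout=subprocess.DEVNULL)
    subprocess.run(["make", "-j4"], cwd=dest, check=True,
                   stdout=subprocess.DEVNULL)
    print(f"{name}: {time.monotonic() - start:.0f} s")
```

Note that to measure true cold-cache behavior you would also want to drop the page cache (or reboot) between runs.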

File compression on UNIX

I’ve been moving around a lot of data lately, particularly over the network, so it seemed like a good idea to settle on a compression regimen.  Networks are fast and all, especially at school, but moving multiple gigabytes of data still doesn’t happen instantly.  So I did a comparison of the current mainstream compression programs on Linux.  The system had a fast SSD drive, so operations were mainly CPU bound.

The contenders

  • bzip2 – a fairly popular replacement for gzip, though generally believed to be slower at both compressing and decompressing.
  • compress – interesting for historical purposes and accessing old archives, but no longer really used otherwise.
  • gzip – intended as a free replacement for compress, it’s still the most commonly used UNIX compression tool.
  • lzip – an FSF-endorsed LZMA-based encoder claiming higher efficiency than more common tools.
  • lzop – uses a similar algorithm to gzip, but claims to be much faster and so particularly useful for large data files.
  • xz – an LZMA-based encoder claiming high efficiency and speed.
  • zip – still the de-facto standard on Windows, but not particularly popular on Linux.

The Test

To compare, I compressed and decompressed a 220MB tar archive containing a distribution of the clang C/C++ compiler.  For all programs other than compress, which has only one setting, I tried the minimum compression setting (-1), the maximum compression setting (-9), and the default setting (no option).
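
For the curious, a harness along these lines is enough to reproduce the test.  The archive name is a placeholder, and compress and zip are omitted here since their command lines differ from the rest; each remaining tool is simply timed at its minimum, default, and maximum settings.

```python
#!/usr/bin/env python3
"""Time compression and decompression of one archive with several tools (sketch).

The archive path is a placeholder; each tool is run at its minimum (-1),
default, and maximum (-9) settings. compress and zip are left out because
their invocations differ from the gzip-style tools below.
"""
import os
import shutil
import subprocess
import time

ARCHIVE = "clang.tar"                       # hypothetical 220MB test archive
TOOLS = {
    "gzip": ".gz", "bzip2": ".bz2", "xz": ".xz",
    "lzip": ".lz", "lzop": ".lzo",
}
LEVELS = ["-1", None, "-9"]                 # minimum, default, maximum

for tool, ext in TOOLS.items():
    for level in LEVELS:
        # Work on a fresh copy each time; most of these tools replace
        # their input file (lzop keeps it, hence -f below).
        shutil.copy(ARCHIVE, "test.tar")

        args = [tool, "-f"] + ([level] if level else []) + ["test.tar"]
        start = time.monotonic()
        subprocess.run(args, check=True)
        c_time = time.monotonic() - start
        size = os.path.getsize("test.tar" + ext)

        start = time.monotonic()
        subprocess.run([tool, "-d", "-f", "test.tar" + ext], check=True)
        d_time = time.monotonic() - start

        print(f"{tool} {level or 'default'}: {size/1e6:.1f} MB, "
              f"compress {c_time:.1f}s, decompress {d_time:.1f}s")
```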

Continue reading

Lightroom Performance – part 3

As part of my attempt to puzzle out why Adobe Lightroom has been (comparatively) slow with my E-M5, I’ve taken a number of timings of different attributes.  Among other things I’ve concluded that:

  1. There’s no significant difference in speed between versions 1, 2, 3 and 4.
  2. Import speed scales roughly linearly with image resolution – a file with 2x the megapixels will take approx. 2x the amount of time to import.
  3. Different RAW file formats generally don’t impact processing speed, with the exception of Fuji’s RAF, Olympus’s ORF and Samsung’s SRW.  The Fuji and Olympus files are slower to process (roughly 150% and 60% slower, respectively), and the Samsung files slightly faster.
  4. Correcting for lens flaws – particularly in the case of chromatic aberration – does cause some slowdown.

Continue reading

Lightroom performance – part 2

So I’ve been complaining for some time about the speed of Adobe’s Lightroom photo processing software.

I finally got around to doing some comparisons of import times, using files from different cameras.  My initial thought was that the automatic corrections applied to micro 4/3 lenses, along with the larger files, were slowing things down on my recently acquired E-M5.  To test that theory, I took 100 RAW files from a number of different cameras and lenses, and measured how long it took to import them and generate 1:1 previews.

Continue reading