HPC Consortium: University of Ulm’s Solaris Geek

Yesterday afternoon, Thomas Nau, Head of the Infrastructure Department at the University of Ulm and a self-described Solaris geek, gave a talk titled “Storage the Solaris Way” at the Sun HPC Consortium meeting here in Dresden. The main points of his talk were an overview of the ZFS value proposition and a quick tour of the cool things one can do with Solaris out of the box, for example iSCSI and the various network attached storage solutions that ship as part of Solaris.

Thomas first reminded the audience of what he and most other people in the HPC community want in a storage solution: safety and reliability, fast error detection and correction, performance, expandability, and interoperability via open standards. ZFS, he argued, offers all of these.

With respect to safety and reliability, Thomas mentioned the following ZFS attributes:

  • 256-bit checksums for everything, not just metadata
  • ditto blocks to create copies of mission-critical metadata
  • transactional I/O semantics via copy-on-write (COW)
  • instant snapshots and clones
  • exploitation of on-disk and on-array caches for performance
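As a rough sketch of how some of these features surface at the command line (the pool, dataset, and device names below are invented for illustration):

```sh
# Create a mirrored pool; device names are placeholders.
zpool create tank mirror c0t0d0 c0t1d0

# End-to-end checksums cover data and metadata alike; the
# 256-bit SHA-256 checksum can be selected per dataset.
zfs set checksum=sha256 tank

# Ditto blocks: ZFS always keeps extra copies of critical
# metadata; the "copies" property extends that idea to user data.
zfs set copies=2 tank/important

# Instant, copy-on-write snapshots and writable clones.
zfs snapshot tank/important@before-upgrade
zfs clone tank/important@before-upgrade tank/scratch
```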

With respect to built-in Solaris storage options, Thomas took the audience through a whirlwind tour of Solaris network attached storage (NAS) capabilities as well as block-level access using iSCSI. He also managed to demo all of this using his laptop, which was running two virtual machines called Angelina and Brad.

As shipped, Solaris has built-in support for NFSv4 and Samba, and OpenSolaris adds a native CIFS server. As Thomas pointed out, the Samba implementation has been modified to use ZFS as a virtual file system back end, and the CIFS server runs in the kernel for maximum performance.
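On the NAS side, sharing a ZFS dataset is a one-line property change. A minimal sketch, assuming a hypothetical dataset `tank/export/home` and that the relevant sharing services are enabled:

```sh
# Export a dataset over NFS (NFSv4 by default on Solaris).
zfs set sharenfs=on tank/export/home

# On OpenSolaris, the in-kernel CIFS server is driven the same way.
zfs set sharesmb=on tank/export/home
```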

To demonstrate iSCSI, Thomas set up several storage pools and then exported them via iSCSI as normal disks from Angelina to Brad where he mounted the disks in a mirrored configuration, which was all quite easy to do. Your correspondent, however, was not fast enough to capture the details of the demo. I expect that slides will be made available on the HPC Consortium website at some point.
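Although the exact demo commands went by too quickly to capture, the general shape of such a setup on the Solaris of that era might look roughly like this (hostnames, sizes, and device names are all invented; the target corresponds to Angelina, the initiator to Brad):

```sh
# --- On the target (Angelina): carve out ZFS volumes and
# export them as iSCSI LUNs via the shareiscsi property.
zfs create -V 10g tank/vol0
zfs create -V 10g tank/vol1
zfs set shareiscsi=on tank/vol0
zfs set shareiscsi=on tank/vol1

# --- On the initiator (Brad): discover the targets ...
iscsiadm add discovery-address 192.168.0.10
iscsiadm modify discovery --sendtargets enable
devfsadm -i iscsi

# ... then mirror the two imported "disks" in a new pool.
zpool create mpool mirror c2t1d0 c3t1d0
```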

In terms of performance, a test done at Ulm showed that moving from several small storage arrays to an iSCSI setup backed by 2×2 redundant x4500 servers delivered performance comparable to the previous FC-AL solution built on those arrays.
