Archive for September, 2008

Buddha’s Message to Washington Politicians

September 30, 2008

Look not to the faults of others,
nor to their omissions and commissions.
But rather look to your own acts,
to what you have done and left undone.


The Joy of Christmas

September 26, 2008

I realized this evening, while pondering the desirability of having Christmas cookies available year-round, that the graph below captures a truth for many adults. Think how fondly you might view Christmas while munching on a Santa cookie sometime in June, versus how you will feel in the weeks approaching December 25th.

The Future of Cloud Computing

September 24, 2008

The cloud computing discussion at this week’s High Performance on Wall Street conference stimulated some questions in my mind about the future of cloud computing.

Cloud computing is currently at a very early stage: clouds are just starting to appear, each with its own approach, and people are just beginning to explore how to use cloud infrastructures like Amazon's EC2, AT&T's Synaptic Hosting, and others to advantage. This includes both academic experimentation and startups leveraging these rented resources as a fundamental part of their business infrastructure.

Most important, however, is the fact that cloud supply currently far exceeds cloud demand, as one would expect in the early adoption stage of a highly hyped concept. Because of this over-capacity, one does not currently need to worry about whether cloud resources will be available when needed. But what happens when that changes?

Speakers at the High Performance on Wall Street conference were sure that as clouds became more commodity-like and resources became more constrained due to increased demand, free-market mechanisms like futures markets would evolve to mediate access to these resources.

If that is true, and it does sound reasonable, how will that change who uses the cloud and how they use it? Will startups be able to build their businesses with cloudy back ends if they must bid for access and utilization on an open market? It isn't clear. Right now, utilization costs for a startup using cloud resources fluctuate as its business needs fluctuate: more customers mean more business, generally more processing, and higher rental costs on the cloud. Adding the complexity and unpredictability of fluctuating cloud infrastructure prices on top of the fluctuations due to changing business demand may reduce the attractiveness of the cloud approach for these businesses.
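To make the compounding of those two fluctuations concrete, here is a toy back-of-the-envelope sketch (all numbers are invented for illustration; no real cloud pricing is implied) comparing a startup's monthly bill under a fixed unit price versus a spot price that varies every hour on top of varying usage:

```python
import random

random.seed(42)  # reproducible illustration

def monthly_bill(hours, fixed_price=None):
    """Sum hourly cost over a billing period. If fixed_price is None,
    draw a fluctuating spot price for each hour instead."""
    total = 0.0
    for usage in hours:
        price = fixed_price if fixed_price is not None else random.uniform(0.05, 0.25)
        total += usage * price
    return total

# Invented workload: instance-hours consumed each hour over 30 days,
# fluctuating with business demand.
hours = [random.uniform(5, 20) for _ in range(30 * 24)]

fixed = monthly_bill(hours, fixed_price=0.15)  # predictable unit price
spot = monthly_bill(hours)                     # unit price varies each hour too

print(f"fixed-price bill: ${fixed:,.2f}")
print(f"spot-price bill:  ${spot:,.2f}")
```

Run it a few times with different seeds and the spot-price bill wanders considerably even for the same workload; that second source of variance is exactly the forecasting headache described above.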

Another question: as these shared resources become scarcer, might there be an increased risk that firms could use denial of cloud resources as a strategic weapon, pre-purchasing significant cloud capacity in advance of a competitor's planned use of that resource? Perhaps somewhat far-fetched… or perhaps not.

It will be interesting to see how cloud computing evolves as it matures and to see whether these kinds of problems will arise in practice or not. It does seem apparent, however, that the current nascent movement towards cloud computing is bound to get much more complicated in the relatively near future.

HPC on Wall Street

September 23, 2008

I attended the High Performance on Wall Street Conference yesterday in New York City and found the experience odd in several respects.

First, there were my personal logistics. Due to a personal commitment this past weekend in Upstate New York, I had to kidnap my wife and drive into NYC rather than either fly or take the train from Boston as I normally would. As it turned out, the conference was held near the UN, the UN was in general session, and Ahmadinejad was speaking. The protests and the blockades and the police details at every intersection in the area caused incredible traffic jams for hours during both our arrival and departure.

Second, the conference attendees. I had wondered how the recent and continuing turmoil on Wall Street might affect attendance. There were several last-minute speaker changes: so-and-so from Lehman Brothers unable to attend, so-and-so no longer with Merrill Lynch, etc., but the main conference room was reasonably full, so I assumed there wasn't much impact. Until someone took a poll during one of the sessions. By a rough hand count, he estimated that 75% of the people in the room were vendors of some sort, while only 25% were associated with customers. I think he was very generous in that estimate; I would have guessed only 10-15% were customers based on the hands I saw. Very disappointing for an attendee like me who was primarily interested in understanding the customer perspective on HPC in this vertical market.

The third odd aspect of the conference was the content, and here there were two specific weirdnesses. First, there was little or no sharing of real information by customers, which I suppose is not surprising since much of what is done with HPC on Wall Street is used to establish a strategic advantage over competing companies. More surprising, however, was the strong sense I got that the problems discussed (primarily by vendor representatives) mostly sounded familiar to me as a long-time HPC person. The domain was different, but the problems were the same. And yet the problems were presented as industry-specific and novel, with seemingly no acknowledgement of the earlier work done in these areas within other, more traditional HPC segments. Here are two examples.

  • Market Data Distribution. As near as I can tell (with the caveat that much of my HPC background is in developer tools, not customer application architectures), the challenge with MDD is to distribute and deliver a real-time stream of market data, with low latency and jitter and with some amount of in-band data filtering, to some number of consumer endpoints, which might be human traders at workstations or electronic trading programs. To me, that problem sounds a lot like what the intelligence community has been dealing with for years, though I would imagine the intel community deals with much larger data volumes, whether from satellite feeds or other sources. I'd be very surprised if there is not a large body of knowledge held at numerous government contractor organizations on how to architect and deploy systems capable of dealing with these kinds of infeed and distribution issues.
  • Low overhead messaging. One speaker at the conference, who appeared on several panels, seemed to be evangelizing the idea of getting software out of the communication pathway to increase messaging performance. It’s a good idea. In HPC circles, we’ve called that “OS bypass” or something similar for over a decade. It’s a well-known and widely used concept in current HPC systems.

I’m very excited to see HPC techniques being adopted and exploited by a new and important vertical like Financial Services. There is much in HPC that can be of value to customers in markets like these. But a knowledge of the work done earlier in the field is essential for the rapid and effective adoption of HPC as a differentiating advantage. Time spent analyzing old problems and rediscovering proven solutions is time and money lost. Perhaps it is time to apply the considerable experience and expertise of some of those crusty old HPCers out there to new application areas.

New England OpenSolaris User Group Meeting: Wednesday, September 10th!

September 3, 2008

The fifth meeting of NEOSUG (New England OpenSolaris User Group) will be held next Wednesday, September 10th at Sun’s Burlington, Massachusetts site. The featured speaker will be Jim Mauro, who will talk about Solaris 10 and OpenSolaris Performance, Observability, and Debugging. Full details below.

The New England OpenSolaris User Group (NEOSUG) Meeting

Topic for this meeting:

Solaris 10 and OpenSolaris Performance, Observability and Debugging (The Abridged Version)

Who should attend?: UNIX developers, Solaris users, system managers, and system administrators.


New England OpenSolaris User Group Meeting (NEOSUG)
Sept 10, 2008 6:30-9:30 pm (registration opens @5:30)
Sun Microsystems
One Network Drive
Burlington, MA
5:30-6:30: Registration, Refreshments
6:30-6:40: Introductions, Peter Galvin
6:40-8:30: Solaris 10 and OpenSolaris Performance, Jim Mauro, Sun Microsystems
8:30-9:00: Questions and Discussion

Please RSVP at:


Solaris 10 and OpenSolaris Performance, Observability and Debugging (The Abridged Version)

The observability toolbox in Solaris 10 and OpenSolaris is loaded with powerful tools and utilities for analyzing applications and the underlying system. Solaris Dynamic Tracing (DTrace) allows you to connect the dots between the process- and thread-centric tools and the system utilization tools, and to get a complete picture of what your applications are doing, how they are interacting with the kernel, and to what extent they are consuming hardware resources (CPU, memory, etc.).

This two-hour talk walks through the tools, utilities, and methods for analyzing workloads on your Solaris systems.


Peter Galvin : Chief Technologist, Corporate Technologies Inc.
Peter Baer Galvin is the Chief Technologist for Corporate Technologies, Inc., a systems integrator and VAR, and was the Systems Manager for Brown University's Computer Science Department. He has written articles for Byte and other magazines. He wrote the Pete's Wicked World and Pete's Super Systems columns at SunWorld Magazine. He is currently a contributing editor for SysAdmin Magazine, where he managed the Solaris Corner. Peter is co-author of the Operating System Concepts and Applied Operating System Concepts textbooks. Blog:

Jim Mauro: Principal Engineer in the Systems Group, Sun Microsystems, Inc.
Jim Mauro works on improving delivered application performance on Sun hardware and Solaris. Jim’s recent project work includes Solaris performance as a guest operating system on Xen and VMware virtual machines, Solaris large memory page performance, and Solaris performance on large SPARC systems. Jim co-authored Solaris Internals (1st Ed, Oct 2000), Solaris Internals (2nd Ed, June 2006) and Solaris Performance and Tools (1st Ed, June 2006).

ug-neosug mailing list:

Another Worm in My Apple: iPhone 3G Woes

September 2, 2008

[generic iphoto shot]

I thought I was smart to wait for the second version of Apple's iPhone, after having suffered through a host of early-adopter issues with my first-generation MacBook Pro. Apparently not.

Up until last Saturday, I had been mostly satisfied with the iPhone 3G, having resigned myself to the poor battery life, the intermittent switching between EDGE and 3G networks, and the occasional Call Failed error. Even with these problems, the iPhone experience had been a compelling one for me.

On Saturday, I went away for the holiday weekend. That afternoon, all of my 3rd-party applications, both free and those I had paid for, stopped working. Every such application would immediately exit after I launched it. Power cycling had no effect. I could not try re-syncing until Monday evening when I got home, though in retrospect I could have tried deleting the apps and downloading them again from the iTunes store (though with the EDGE/3G flipping I'm not sure I would have wanted to try that). In any case, syncing to my MacBook Pro did not help. So I deleted the applications and tried to sync again, hoping this would clear the problem. No joy. This time, iTunes complained my computer was not authorized to use any of the applications I had previously downloaded and refused to reinstall them on the phone.

I called Apple support, and we fixed the problem by re-authorizing my computer and then completing a sync that reloaded the apps, which now seem to be working again. The rep told me this is a known problem that sometimes needs to be fixed by deleting the apps from both the iPhone and the computer and reloading them from the iTunes store (which keeps track of purchased apps, so you do not need to pay again).

Just before calling Apple, I had two Call Failed incidents during one conversation and had to switch to a landline to complete the call. Not a great advertisement for the phone or for either Apple or AT&T, I'm afraid.