IBM has demanded Microsoft kill a site which claims IBM’s WebSphere runs better on Windows.
IBM lawyers have contacted Microsoft about the “Who Knew?” site, which claims that customers will save money and get better performance by running WebSphere on Windows Server 2008, instead of on IBM operating systems.
IBM has asked Microsoft to cease and desist from advertising its claims of superior performance and value when running IBM’s WebSphere on Windows Server 2008, Steven Martin, senior director of development platform products at Microsoft, told eWEEK.
In late April, Microsoft established a Website with the theme of “Who Knew” that celebrated the use of Windows for running IBM middleware technology. IBM later took exception to the claims Microsoft made on the site, to which Microsoft’s Martin responded in a blog post challenging IBM to a bake-off.
In that post, Martin said:
“Yesterday I blogged about some recent findings regarding both system cost and performance when comparing Windows Server 2008 on an HP Blade Server against AIX on a POWER 570/POWER6 based server. As I stated, the tests showed that WebSphere loved running on Windows… to the tune of 66 percent cost savings and with better performance.”
However, rather than take up the challenge to prove which solution offered better performance and value, IBM instead responded through its lawyers.
“I’m disappointed that we heard from their attorneys rather than their performance team,” Martin said. “They didn’t respond to our request. But they asked us to take our site down.” Martin said Microsoft is still deciding what to do about IBM’s request.
Comments
Here is a simple recipe.
Keep in mind this is a hypothetical exercise.
We have two pieces of hardware: A (ours) and B (theirs).
We have the corresponding platform software: for A (our platform sw) and for B (their platform sw).
We have the product being tested on each (in this case, it's their server software).
The first step is to find an A that outperforms their B hardware. This is easy to do unless B is the fastest supercomputer on record. It isn't, obviously, so we can definitely find an A that beats whatever B is. [E.g., a 4 GHz x86 beats a 1 GHz x86 from the same vendor.]
Each platform's software performs about the same as the other under ordinary circumstances (or maybe ours is a bit worse), so we optimize extra for the occasion. This is easy to do by removing security and other checks. We can keep special task- and process-related memory objects around, preinitialized in anticipation. We can simplify and speed up our scheduling. We can give the special process high priority for the CPU and the filesystem (bypassing security checks, etc.). We put everything else, including the GUI, into slow, low-priority mode. We turn kernel dynamic lists into static lists. And so on. It really is possible to optimize well for the occasion if we know the system will only be used for a specific purpose (to win some benchmark). Also, the platform software we choose for their side is their generic platform software if possible (e.g., their regular platform software, not optimized for this benchmark).
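To make the priority-skewing part of this concrete, here is a minimal sketch of the idea, assuming a Windows box and the third-party psutil package; the process name is invented for illustration and nothing here comes from the article:

```python
# Hypothetical sketch: skew the OS toward one benchmarked process.
import psutil

FAVOURED = "benchmarked_server.exe"  # made-up name of the app we want to look fast

for p in psutil.process_iter(["name"]):
    try:
        if (p.info["name"] or "").lower() == FAVOURED:
            # Give the benchmarked process top scheduling priority and its own cores.
            p.nice(psutil.HIGH_PRIORITY_CLASS)
            p.cpu_affinity([0, 1, 2, 3])
        else:
            # Push everything else (GUI, services, ...) into the background.
            p.nice(psutil.BELOW_NORMAL_PRIORITY_CLASS)
    except (psutil.AccessDenied, psutil.NoSuchProcess):
        pass  # system processes we are not allowed to touch
```

That is just the user-visible end of it; the kernel-level tricks described above (static lists, skipped checks, prebuilt objects) are not observable from outside at all.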
So that is how we easily got the improved performance.
However, we need to control further context in order to pull off the coup. What about the price, right? After all, a supercomputer outperforms a pocket calculator, but people don't buy supercomputers to compute the tax at a restaurant. The context in this case is that the supercomputer is a LOT MORE expensive. We need to get the price of our "supercomputer" down to a competitive level.
Here is how we carry out this step. We work with the hardware partner. They develop an exclusive model that they will price near cost. We also give away our platform software at near cost (it's a "special configuration", remember). Voila! We got our costs down because we and our partner have no intention of actually selling many of these models to real customers.
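As a back-of-the-envelope illustration of why the near-cost pricing matters, with every figure an invented placeholder rather than either vendor's real number:

```python
# Hypothetical price/performance comparison, all numbers made up.
their_price, their_tps = 120_000, 1_000      # list price ($) and transactions/sec
our_list_price, our_tps = 150_000, 1_300     # the faster box, but dearer at list price

# The "special configuration": hardware and platform software priced near cost,
# offered only for the duration of the benchmark.
our_promo_price = 60_000

print(their_price / their_tps)       # 120.0  -> $120 per transaction/sec
print(our_list_price / our_tps)      # ~115.4 -> barely better at honest prices
print(our_promo_price / our_tps)     # ~46.2  -> suddenly a dramatic "savings" story
```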
So we kick their buttocks, and customers flock to our product.
Then...
The hardware model sells out quickly, and a very slightly differently named/numbered hardware model is put in its place at a higher price.
Also, our platform software is changed back to normal, except that now it doesn't actually run their server software all that well compared to our own competing server software (which was not tested in the benchmark). It's extremely easy to shuffle platform software bits around so that an app that was favored is no longer favored and is in fact handicapped, and it's very difficult for third parties to catch this without the source code. For subtlety, the change can even be rolled out later through one or more automatic online updates/patches.
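A toy sketch of how small such a handicap can be; this assumes nothing about any real platform, and the names and delay are made up:

```python
# Hypothetical "post-update" platform entry point that quietly penalises one app.
import time

PENALISED = {"their_server.exe"}     # made-up competitor binary name

def dispatch_io(request: bytes, caller_image: str) -> bytes:
    """Pretend platform I/O entry point, after the 'update'."""
    if caller_image.lower() in PENALISED:
        time.sleep(0.002)            # 2 ms of invisible extra latency per call
    return request                   # ...normal handling continues

# Under load, a couple of milliseconds per call is all it takes to turn
# "runs great on our platform" into "runs better on our own server product".
print(dispatch_io(b"SELECT 1", "their_server.exe"))
```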
Of course, the price of the platform software also goes up eventually, if not initially. Maybe its price rises at the one-year renewal, or when the customer exceeds an artificially low user count. Or perhaps the price is raised quietly through the bundled software/service package "deal" the customer actually ended up buying. There are many ways to guide them into these higher-priced options.
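Again with invented placeholder figures, here is how the "special" price can drift well past the advertised tag once the deal is signed:

```python
# Hypothetical year-two cost, all figures and thresholds made up.
year1_bundle = 60_000          # promotional platform + hardware price
included_users = 25            # artificially low user cap written into the deal
per_extra_user = 900           # list price for each additional user licence
renewal_uplift = 1.35          # year-two renewal at "standard" pricing

actual_users = 120
year2_cost = year1_bundle * renewal_uplift \
             + (actual_users - included_users) * per_extra_user
print(year2_cost)              # 81,000 + 85,500 = 166,500 -- well past the promo price
```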
Profit.
Recap: we found better hardware, tweaked only our platform software to game the benchmark, and artificially lowered the price on this model in order to win the price comparison. Then we swapped this system for a regular one, threw in some more items, and modified the platform software (over time) to disfavor the application of theirs that we had favored for the benchmark. Through this bait and switch we won the contract, and later, by controlling the platform software, we disgraced their product in order to upsell our own in its place. We had perhaps the slightly worse software, yet we won and pulled in much more money than what they were advertising as their price tag. A full sleight of hand.
This is dirty, absolutely. It's deceptive. It's anti-consumer and anti-competitive. It likely leverages monopolies later on in the upsell. It is perfectly within Microsoft's capabilities to pull off. It would be consistent with Microsoft's past behavior.
Keep in mind, however, that this was only a hypothetical exercise.
What you say is very interesting - and of course hypothetical.
To carry on the hypothetical discussion, how should vendor B respond?
A detailed debunking of the flawed benchmark carried out by vendor A (along the lines you presented here)?
An appeal to a third party test house (though difficult to finance this and still keep it independent)?
A lawyer's letter?
In the hypothetical situation, going legal doesn't necessarily help.
I can think of some things, but who knows what is the deal in this particular case.
That might be the easiest or most conservative way to deal with this or maybe step one of a longer process.
Microsoft is probably the company on the planet that can least afford a complicated battle in which closed source, and vendor trust over what is hidden from the customer, come into serious question in the eyes of consumers.
Who knows, maybe Microsoft is being its usual rash self, raising the stakes in its protracted battle with IBM. Maybe IBM is waiting for such a move.
The other big issue is over the gaming of benchmarks. Not sure what to expect there or how IBM stacks up, except that I do expect them to be at least a little cleaner than Microsoft (judging by the characters of the companies and recent past behavior).