Wednesday, January 16, 2008

Hey.....this makes sense!

The VMware decision to buy Thinstall makes sense. VMware has been semi-public about their recognition that hypervisor technology is not going to be a long-term solution for desktop applications and their VDI. They see the Trigence technology as clouding the virtualization landscape, because of our ability to support server applications. Even more problematic for VMware is the ability to define a network identity for server applications (IP address, hostname, MAC, system ID) in what Trigence calls a capsule. A highly self-contained server application in a virtual environment, particularly one with a discrete network identity, looks too much like what VMware calls an application payload. Two approaches offering such similar value propositions is too close for comfort. I've had this conversation with one of the VMware CTOs.

That said, the Thinstall product does seem to be a solid solution for the Windows desktop; however, the Trigence offering differs in several critical ways:

1) Support of server applications on not only Windows but also Linux & Unix

2) Shared virtual environments: the ability for applications in one virtualized environment to correctly access the persistent state in another virtual environment (shared capsules solving what SoftGrid calls the context problem)

3) Far more comprehensive encapsulation: mechanisms for placing an application in a virtual environment let you encapsulate applications already running on existing servers, new applications, and learned applications. Thinstall offers a single mechanism based on multiple snapshots of a system, starting with a "clean" system.
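The snapshot-based capture mechanism described above can be sketched in a few lines. This is a toy illustration only, assuming filesystem state is modeled as a dict of path-to-content-hash; the function name and data are made up for the example and are not any vendor's real API.

```python
# Toy sketch of multi-snapshot capture: diff a "clean" system snapshot
# against a post-install snapshot, and package only what the installer
# added or changed. Filesystem state is modeled as {path: content_hash}.

def snapshot_diff(clean, installed):
    """Return the files added or modified by the application install."""
    changed = {}
    for path, digest in installed.items():
        if clean.get(path) != digest:  # new file, or modified content
            changed[path] = digest
    return changed

clean_system = {"/windows/system32/kernel32.dll": "a1",
                "/windows/win.ini": "b2"}
after_install = {"/windows/system32/kernel32.dll": "a1",      # untouched
                 "/windows/win.ini": "b9",                    # modified
                 "/program files/app/app.exe": "c3"}          # added

package = snapshot_diff(clean_system, after_install)
```

The limitation the post hints at is visible even in the sketch: the diff is only meaningful relative to a "clean" starting state, which is why this is just one capture mechanism among the several Trigence supports.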

Now, call me naive, but I'm surprised at the level of apparent intimidation caused by an extension of virtualization technology. VMware and other vendors of hypervisor technology, but VMware in particular, have the opportunity to extend their capability by allowing encapsulated applications to execute directly on their hypervisor, ESX Server. This provides significant flexibility and puts VMware in a very different position in the market: they would have the opportunity to provide a platform that hosts applications with or without an existing commercial OS.

Interesting dynamics.

Wednesday, December 19, 2007

Application Virtualization is Confusing – But It Shouldn’t Be

Tennis elbow is one of those medical garbage terms that could mean any of ten or more diagnoses – just knowing that someone has tennis elbow is not enough to really know what is wrong with their arm. Virtualization is quickly becoming the same way, as vendors rush to jump on the trendy ‘virtualization’ bandwagon and call anything they can some form of the solution. For example, some with application streaming solutions call their products Application Virtualization, which, at first glance, may seem reasonable – users are able to stream applications over the network to client computers, without the need to actually install the application on the client. However, merely avoiding an install is not a good definition of Application Virtualization – it misses much of the revolutionary change that Application Virtualization brings to how we manage applications. Under this definition, classic client/server technology would count as Application Virtualization. The thing is, the application is really not running on the resources of the client computer – the product is more aptly named server-based computing, and while server-based computing brings some of the same advantages that Application Virtualization does, it misses most of the important pieces. You still have to install the application on the server, and you still have to manage the application’s dependencies, conflicts, and versioning on that server the same as you would if the application were installed on the desktop. Server-based computing does not change the way the application works with the underlying OS or platform.

Bottom line - Streaming does not make Application Virtualization.

So how is true Application Virtualization different?

Application Virtualization is a paradigm shift. Applications are freed from dependencies on the infrastructure, and the infrastructure is freed from dependencies on the applications. One standard platform can be a reality in the datacenter (a significant reduction in management complexity), and one golden application configuration is possible for the application group (no more need to reconfigure the application for different platforms, or as patches and new OSes arrive).

Let’s all agree on what our technical terms mean – it will simplify everything. In an effort to end the confusion, Trigence has started a glossary of industry terms. Check it out at http://trigence.com/glossary/.

Monday, December 10, 2007

A (Semi) Complete Idiot's Guide to...The Green Datacenter

Green is definitely in fashion these days, at least in the world of IT infrastructure, where it seems as if there is a new “green initiative” announcement daily. But when it comes to virtualization, the ‘green’ theme usually comes up in reference to consolidation -- being able to reduce the total size of the infrastructure. To be fair, virtualization is not what shrinks the infrastructure footprint; the fact that servers and other hardware today can do exponentially more in a significantly smaller package is really the first step in going green. Virtualization is the almighty enabler, making it possible to move applications from old or under-utilized servers onto new servers that are more powerful yet physically smaller. The result: what would have needed a full row of infrastructure just a few years ago can often now come from a single rack (in fact, IBM has even made a commercial about that).

So how does virtualization make things greener? Well, with less infrastructure you take up less space, and oftentimes going green = more green in the ol’ wallet. In general terms, with less infrastructure you are cooling and powering less, which directly translates into significant cost savings, as well as fewer resources indirectly drawn from the environment. Virtualization is the technology that makes this consolidation practically possible – you can easily migrate what you have now into a more compact (and greener) datacenter infrastructure.

So what kind of virtualization are we talking about? Great question. Many people would automatically assume Virtual Machines and the like from VMware. While certainly the most popular, don’t be fooled into thinking that Virtual Machines are the whole answer. Sure, they have their place, but there are other factors to consider, namely – the people factor (i.e., the management equation). When I virtualize 500 servers so I can migrate the applications and services down to 50 servers, I have certainly reduced my space, power, and cooling requirements, but I have increased my management equation: I am now managing 500 virtual machines (the same OS maintenance and management overhead as 500 physical servers) plus 50 new servers.

500 + 50 = 550 servers that now need management.

There is a better way. Using another flavor of virtualization – Application Virtualization – you can accomplish largely the same kind of server consolidation without spawning Virtual Machines that require management. No more proliferation of OS patches, and no requirement to maintain hundreds of OS personalities just for application dependencies. In fact, with Application Virtualization you have the ability to actually standardize your infrastructure platforms. Using the example above, that means 500 servers consolidated and migrated down to 50 new servers will reduce your infrastructure management effort by a factor of ten – and you can take that to the bank (literally).
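The management equation above is simple arithmetic, and it is worth making explicit. A minimal sketch, using the post's own numbers (500 physical servers consolidated onto 50 hosts); the function name is illustrative:

```python
# Back-of-the-envelope "management equation": with Virtual Machines, every
# former physical server survives as a guest OS image that still needs
# patching and maintenance, on top of the new hosts. With Application
# Virtualization, only the standardized hosts remain to be managed.

def managed_os_instances(physical_servers, new_hosts, use_vms):
    """Count the OS instances left to manage after consolidation."""
    if use_vms:
        return physical_servers + new_hosts  # 500 guests + 50 hosts
    return new_hosts                         # 50 standardized hosts

vm_count = managed_os_instances(500, 50, use_vms=True)        # 550
appvirt_count = managed_os_instances(500, 50, use_vms=False)  # 50
```

That tenfold gap (550 versus 50) is exactly the "factor of ten" reduction in management effort claimed above.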

Now get out there and save the world!

Wednesday, November 14, 2007

Change my infrastructure? Thanks, but no thanks….

Thinstall’s recent announcement with Macrovision (see here) reflects an important trend in the market – that you don’t have to get locked into a proprietary infrastructure to support a virtualization solution. In the case of both Microsoft and Citrix, application virtualization is only available once you have their large infrastructures and client applications installed and working. If this is not your current infrastructure, you are not only buying an application virtualization solution, you are migrating to a new infrastructure -- just to make it work.

Thinstall’s partnership with Macrovision points to the voice of the market starting to speak – we do not want to change out our infrastructure just to take advantage of a virtual environment. Application virtualization solutions should not require an infrastructure change-out. Thinstall is not unique in its ability to deliver an open application virtualization solution that works with an existing infrastructure – Trigence has been doing this for years, even as some of the larger vendors in the space proclaim loudly how their complete solution will encapsulate, distribute, and manage applications. In fact, at Trigence, we take this ‘any way you like it’ approach even farther. First, why stop at the Windows desktop? Isn’t application virtualization just as valuable in the datacenter and on other platforms, such as Linux and UNIX? Trigence offers hardened solutions for the datacenter as well as the desktop. Second, applications are only becoming more complex and dependent on other services and even other applications. Only Trigence offers a truly open application virtualization solution that allows you to take control of defining which services and applications are available and shared between application capsules. Sometimes referred to as the ‘context issue’, Trigence’s “capsule sharing” technology brings deployment and management flexibility that simply cannot be found anywhere else – and that, at the end of the day, is precisely the ‘any way you like it’ response the market is crying for.

Tuesday, October 16, 2007

I want it to work, and I want it to work now!

I just read this article from ChannelWeb, and it really got me thinking….Virtualization of the desktop is nothing new!

Remember when everyone, including Oracle, was on the thin client bandwagon, even having their own appliances?

Although thin client appliances still exist today, they never replaced the desktop. Appliances, VDI, Citrix and application virtualization all have their place, but there is no “one size fits all” solution. In fact, they complement each other. Some organizations already use appliances to access Citrix which runs on a virtual machine, on virtual storage and use virtual applications.

From a managed services perspective, SMBs may buy into VDI; however, adoption may be slow and somewhat limited. Cost savings for any environment would be minimal, since you are actually shifting costs from the desktop to the server and backend infrastructure. Issues are now shifted from your desktop to your server, or “virtual desktop”. And VDI has the same problems as the desktop with application conflicts, deployment, and manageability. The biggest benefit of thin client technology, including VDI, is quicker time to market.

Throw in application virtualization and you now have on-demand applications that actually work!

Friday, October 5, 2007

A Brave New (virtualized) World -- Cost Savings

What is the appeal of virtualization in IT today? Many enterprise business cases have been built around cost savings, and the analysts have followed suit as well (one of the latest reports on virtualization cost savings recently came out from Butler, noted in this article). Certainly the analysis looks compelling, but is the bigger opportunity being missed?

To date, most business case analyses in virtualization have centered on virtual machines – but that is only one aspect of virtualization, and cost savings with virtual machines are really focused on hardware savings. The general idea that I can take three under-utilized hardware servers and replace them with one larger hardware server running three or four virtual machine images does have merit – the cost of one larger server is usually less than that of three smaller servers, plus – and this was one of the main features that Butler pointed out – companies can save on space, energy, and cooling (which is especially critical where these resources are very scarce).

Is that the end of the story? Hardly.

In fact, you may have shifted some of your costs over to other areas. How is that possible? To start, you have not reduced software costs at all – in fact, you have probably increased OS license cost to boot. It is all too common to hear the tales of virtual machine sprawl – “I used to manage 200 hardware servers, and now I manage 500 virtual servers”. Ask yourself this question: how much easier is it to manage a virtual server compared to a hardware server? Each has an OS that needs patching and maintenance, and even application testing and regression. The thing is, in heavily virtualized environments there are typically more servers to manage than there were before – this is an example of shifting hardware costs to software and human costs.

Machine virtualization has its place, to be sure, but there is another solution that prevents cost shifting and can even significantly reduce software and human costs – application virtualization. The genius in application virtualization is its capability to liberate the application from the infrastructure (or to liberate the infrastructure from the application, whichever way you prefer). This paradigm shift allows applications to be encapsulated with all of their dependencies, insulated from changes made by the underlying OS or even other applications. In the case of three smaller servers being consolidated into one larger server, application virtualization gives the control needed so that all of the applications from the smaller servers can run on the one larger server, all sharing one OS. All of the environmental concerns and savings (space, energy, and cooling) have been addressed exactly the same as with virtual machines. However, server management and licenses have been significantly reduced and simplified – managing three servers has changed to managing one server. Plus, with all of the application dependencies encapsulated with the application, the application lifecycle and the infrastructure lifecycle have been simplified (little to no regression testing, much less maintenance) and are now completely separate. This revolution is the real brave new world we are looking for.
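The idea of encapsulating an application with its dependencies while still sharing one OS can be sketched as a simple lookup rule: prefer the capsule's own copy of a dependency, and fall back to the shared platform for everything else. This is an illustrative toy, assuming set-based library inventories and made-up paths; it is not Trigence's actual resolution mechanism.

```python
# Toy sketch of dependency resolution under application virtualization:
# each application carries its pinned dependencies inside its capsule,
# and the shared host OS supplies whatever the capsule does not carry.

def resolve(name, capsule, host_os):
    """Return the path that serves `name`, preferring the capsule's copy."""
    if name in capsule:
        return "/capsule/" + name   # the app sees its own pinned version
    if name in host_os:
        return "/usr/lib/" + name   # shared platform provides the rest
    raise FileNotFoundError(name)

capsule = {"libdb.so.3"}                  # app pins an older DB library
host_os = {"libc.so.6", "libdb.so.4"}     # standardized shared platform

pinned = resolve("libdb.so.3", capsule, host_os)  # capsule copy wins
shared = resolve("libc.so.6", capsule, host_os)   # falls through to the OS
```

This is why two applications with conflicting library versions can co-exist on one OS: each capsule's lookup is answered from its own contents first, so neither sees the other's pinned version.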

Tuesday, September 25, 2007

JeOS – How much OS is needed?

Earlier this month, VMware announced JeOS, or just enough OS, at VMworld. Targeted at managing applications, this solution is poised to be an improvement on the virtual appliance approach. It’s interesting to see that VMware recognizes the need to limit OS content in order to manage and deploy applications.

It’s starting to become clear in many enterprise data centers that bundling an entire OS into a virtual image in order to deploy and use an application is not an optimal solution. Organizations have been including a kernel in a virtual image because there has been no option. However, when given a choice these same organizations are starting to realize that managing the application itself as a distinct object is preferable in most use case scenarios. Not only are VMware, Microsoft, Sun and the rest of the “big guys” looking at what this means, there are a number of start-ups targeting the definition and delivery of an application.

It’s funny to see how these things evolve. Who wants to boot another OS in order to access a specific application? You already have an OS running….it’s your desktop. Why would you boot another OS if there is an alternative? The VMware strategy in particular is starting to unfold, as they are ready, willing, and able to admit that something different is needed for desktop applications. That said, the universe of server applications continues to be intertwined with an OS and a hypervisor.

But why?

If separating desktop applications from their dependencies on an OS is a good approach, why is it not also a good approach for server applications? Application independence is certainly more effective than hypervisors and multiple kernels: there is less overhead, applications are more deterministic, and they are easier to manage as discrete objects, independent of an OS. Ask any financial services company how you deploy a trading application in a virtual machine. The answer? You don’t; not in production.

Why is it difficult for the system and virtualization vendors to accept that if OS independence for a desktop application is a good thing, it’s also a good thing for a server app? Why is Sun focused solely on migration of existing applications into zones on Solaris 10, without any application specifics? Why has Microsoft stated publicly that it will not virtualize server applications? Because it’s difficult? It’s different: a virtualized server application, particularly one that includes an IP address, hostname, and/or MAC address, blurs the lines of what we understand as virtualization categories. It forces us to understand in detail how applications interact with the underlying OS.

It’s not just about the technology; there is also a real vested interest in protecting what you have. None of the aforementioned “big guys” wants to see the OS defocused while the focus is placed on the application itself, highly independent of an OS. It’s simply more complex to virtualize an N-tier application, an Oracle database, or Sybase with a dependency on Veritas Volume Manager than it is to virtualize MS Word or PowerPoint.

I know this first-hand, from within the trenches. It’s taken us a long time to get it right. Server applications are dependent on the same things a desktop application is dependent upon. But a server application may also be dependent upon a network identity; it is likely to be licensed to a specific platform and number of CPUs; and it is most certainly very sensitive to network throughput and performance. Creating the environment where multiple Oracle databases can effectively co-exist on the same OS, the same kernel, is not for the faint of heart. It’s not the same as Word, PowerPoint, Excel, and Acrobat Reader.
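The extra dependencies that distinguish a server capsule from a desktop one can be pictured as a small descriptor. A hypothetical sketch only: the field names and the CPU-license check are made up for illustration and do not reflect Trigence's actual capsule format.

```python
# Hypothetical server-capsule descriptor, capturing the dependencies the
# post calls out for server applications: a discrete network identity
# (hostname, IP, MAC) and licensing tied to a platform and CPU count.

from dataclasses import dataclass

@dataclass
class CapsuleIdentity:
    hostname: str
    ip_address: str
    mac_address: str
    licensed_cpus: int  # the app's license caps how many CPUs it may use

    def fits(self, host_cpus):
        """Can this capsule land on a host without exceeding its license?

        Illustrative rule: the host must not expose more CPUs than the
        application is licensed for.
        """
        return host_cpus <= self.licensed_cpus

oracle_db = CapsuleIdentity(hostname="db1",
                            ip_address="10.0.0.5",
                            mac_address="00:1a:2b:3c:4d:5e",
                            licensed_cpus=4)
```

A desktop app's capsule needs none of these fields, which is one way to see why virtualizing Word is a fundamentally easier problem than virtualizing an Oracle database.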

We all know that the simplest design is most often the most effective design. An application that is abstracted from the OS, one that has its dependencies self-contained within an application object, is far less complex than deploying multiple kernels on the same platform in order to deploy applications. But here’s the problem: in order to deliver such a solution, you have to understand what the application is doing. It can’t be about the OS where such a solution is concerned. And that is simply scary to some organizations that have a vested interest in focusing on the OS.