How games will show who is the remote protocol winner

Posted by Jim Moyle on June 18th, 2009

If remote protocols are used almost exclusively for business applications, why are games important?  The reason is that if I try to think of the hardest thing to do over a remote protocol, it would be playing games with the same quality as you would see on your desktop.

Of course I’m not talking about web-based Flash games, I mean full-on, high-frame-rate games with lots of 3D and explosions, all in DirectX with HD sound.  Actually, let’s add some kind of TeamSpeak in there too.

There are two goals in respect to remoting protocols:

  • Get desktop behaviour over the LAN, no matter the application
  • Scale the fidelity of the connection according to the bandwidth and endpoint device

The first case is the one I want to talk about.  VDI and TS vendors need to be able to prove that their remote protocols can cope with any type of application, or companies are not going to be convinced that the old bugbears of bad sound and choppy, poorly synced video are over.

If people are out there touting the ‘better than desktop experience’ line, I want to see it, and as yet the performance just isn’t quite there.

When Microsoft bought Calista back at the beginning of 2008, I had hopes that the features they were working on would have made it into RDP by now, but they have just announced that their remote DirectX technology isn’t going to make it into the final release.

VMware have their software implementation of the Teradici protocol in the works, and I have no doubt something from Citrix is out there too.

The wild card as far as remote protocols go is a company called OnLive, who plan to provide games over the cloud, remoted to your PC.  I’ve no clue how it works, but I’m anxious to see.

Wouldn’t it be interesting to see someone get up on stage and demo a game over a remote protocol?  I wonder who’s going to be first?  I would say that in the court of public opinion, even if not quite in the technical detail (Silverlight etc.), they would have ‘won’.

I’ve always had customers ask me, “Why can’t I just use VoIP over Citrix, when it works to talk to my niece in Oz?”  Once we have good-quality bi-directional audio, the second device on the user’s desktop can disappear.  Once we have rich multimedia, users will no longer have to manage without seeing that great presentation from their CEO :).

People are talking about Avistar at the moment in this regard, but from the brief time I’ve had to look at it, I think it requires some kind of broker server in the middle.  So if anyone can enlighten me a bit more about exactly what they do and how they do it, please leave me a comment.

Edit:  It seems I’m not the only one thinking about protocols

Virtualization Display Protocol Wars

Brian Madden on Calista

Where’s my MSI?

Posted by Jim Moyle on June 18th, 2009

When implementing a new VDI or terminal server project, the biggest stumbling block is not usually the solution framework, be that VMware, Microsoft or Citrix.  It’s the applications.

It’s those odd one or two apps that have either been created in-house, are cheap bespoke applications, or are so old that they’ve ceased being developed and are now out of support.

If the application is old and out of support, I can’t blame the vendors; it’s the customer who should never have gotten themselves into that situation.  It’s the other two situations that need to be looked at.

Small application vendors need to raise their game; it’s no longer good enough to code an application, check it works on your local copy of XP or Vista, and sell it to the customer.  Terminal Services has been around for fifteen years, and application virtualisation for five; these are no longer new technologies.  If I phone up a vendor and ask them what the correct way is to install their application on Terminal Services or App-V, I don’t want to hear ‘sorry, that isn’t supported’.

In the past, I’ve had an application vendor hand me a ten-page document with installation instructions for their app on TS.  It went like this:

  1. Create user X
  2. Assign Y and Z rights to user X
  3. Install weird application service
  4. Add user X to application service
  5. Find reg key HKLM\Software\Vendor\xxxxxxxxxxxxxxxxxx-xxx-xxxxxxxxxxxxx and create DWORD value zzz (IMPORTANT! see note)
  6. Once all these steps are finished, run the application and click buttons m through p
  7. Once done, install the plug-in as normal

Note: if you cannot find the reg key, DO NOT install the weird application service; create the ODBC connection as shown on page 9.

And so on.

In my opinion, the customer should have refused to accept this and asked the vendor to finish the application.

The reason I want vendors to provide MSIs is that they have several advantages over other installation methods:

  • Database driven instead of script driven
  • The application is installed in an administrative context
  • MSI provides a standard package format
  • Transactional install and rollback
  • Customisation via MST files
  • Many tools available
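To illustrate a couple of those advantages in one place, here’s a minimal sketch of how an administrative, silent, logged MSI install with an MST customisation might be invoked.  The file names are hypothetical; only the msiexec switches are real.

```python
# Sketch: composing an unattended msiexec install command.
# File names below are hypothetical examples, not from any real vendor.
app_msi = "VendorApp.msi"
transform = "SiteCustom.mst"          # customisation via MST
log_file = "VendorApp_install.log"

cmd = " ".join([
    "msiexec", "/i", app_msi,         # install the package
    f"TRANSFORMS={transform}",        # apply the site-specific transform
    "/qn",                            # silent, no UI
    f"/l*v {log_file}",               # full verbose log for troubleshooting
])
print(cmd)
```

Run on a Windows box, that one line gives you a repeatable, logged, customised install — exactly what the scripted ten-page procedure above can’t.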

The tools part is starting to get really interesting.  App-DNA have released their AppTitude product, which will test whether your app is suitable for Citrix, App-V, Windows 7, x64 and more.  If you have an MSI, it only needs to look at the MSI tables; you don’t even have to install the application to get the report.

Acresso, the folks who make AdminStudio, have developed a new feature which allows direct conversion from an MSI to an App-V, Citrix Streaming or VMware ThinApp package.

Both of the above technologies can drastically reduce the time taken to implement new application delivery methods.  To take best advantage of both tools, you need applications provided in MSI format.

The main reason I have found applications not being delivered in the correct format is that organisations have not realised it is vital for the IT department to be involved in the decision-making process when purchasing new applications.  At the very least, IT needs to set the minimum standards required:

  • The application should be provided in an MSI format
  • The vendor must support multi-user OS deployment
  • The vendor must support application virtualisation/streaming

If you are an application vendor and it’s ‘too much effort’ to support the above minimum standards, I would suggest you are cutting yourself off from a large and growing sector of the market.

If you develop applications in-house or are purchasing a bespoke product, there is no reason why standards should slip; apply the same set of rules to these as you would to an off-the-shelf product.  A bit more development time is going to save you a whole lot of heartache in the future.

Citrix Streaming vs Application Installation best practice

Posted by Jim Moyle on June 3rd, 2009

Citrix say that best practice is always to stream your applications and only to install them as a last resort.  I think there are at least a few occasions where streaming first isn’t the right thing to do, and I thought I’d have a go at looking at those situations.

Last week I watched a webinar by Daniel Feller, the “Server, desktop and application virtualization senior architect” from Citrix, and he went into why it might be a good idea to stream all your apps.  He did say that, since it normally isn’t possible to stream all apps, mixed-mode environments were what you would expect, with some streamed and some installed applications.

If you want to watch the webinar, it’s recorded here: TechTalk: Fact vs. Fiction: The Truth about App Compatibility & Citrix.

I like to keep things as simple as possible, and planning to use two technologies to put apps onto a Citrix server seems the wrong way round to me; I’d much rather just use one.

As streaming can’t cope with drivers, services, or apps whose licensing is based on MAC address, you are likely to have at least a few that are not suitable for streaming.
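As a rough sketch of that triage, the check boils down to: does the app hit any known streaming blocker?  The criteria and example app names below are illustrative, not an exhaustive rule set.

```python
# Sketch: flagging apps that hit a known streaming blocker.
# Criteria and example apps are illustrative assumptions.
def delivery_method(app):
    """Return 'install' if the app hits a streaming blocker, else 'stream'."""
    blockers = (
        app.get("installs_driver", False),
        app.get("installs_service", False),
        app.get("mac_address_licensed", False),
    )
    return "install" if any(blockers) else "stream"

apps = [
    {"name": "OfficeSuite"},
    {"name": "LegacyCAD", "installs_driver": True},
    {"name": "LicenseMgr", "mac_address_licensed": True},
]
for app in apps:
    print(app["name"], "->", delivery_method(app))
```

Even a list this short usually turns up a few ‘install’ results, which is why a streaming-only environment is rare in practice.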

Installing has a much higher hit rate than streaming; most of the issues around multi-user apps have now been solved, including DLL Hell.

If you can get all your apps installed and working, why would you employ streaming?  This doesn’t mean I’m in favour of application silos; I mean if you can get all of your apps installed on the same server.  The apps will tend to work as they should, as they are installed ‘naturally’.  Of course you should then package them for distribution, but that’s a whole other post.

Daniel put up the following slide near the end of his presentation:

I’d say your first impulse should be to install your apps if you think you can get them all working without any silos, so columns one and two should be reversed.

If you need help getting your apps to work, have a look at this application validation toolkit.

When installing applications, you need a reliable way to build and re-build servers, and to re-install and update apps.  While it is possible to script this, a more reliable method should be used: you could use generic server build tools, or there are more specialist, advanced building tools available from Citrix partners.

On a separate note, if you are interested in VDI or Terminal Server solutions, you could do a lot worse than look at the Terminal Server and XenApp tuning tips recently published by Pierre Marmignon.

Why is VDI changing into Terminal Server?

Posted by Jim Moyle on May 21st, 2009

It is, and I’m about to try to prove it to you.  Not only is VDI changing into Terminal Server, it’s been doing so through a series of entirely logical and yet very stupid choices.

To work this out we need to start from first principles, back in 2005 or so.  We had many expensively maintained fat desktops, spare CPU cycles in the data centre, and a virtualisation layer.  This meant that we could take the fat desktops not already covered by Terminal Server (which only accounted for around 20%) and move them into the data centre.  These new desktops would allow our users to install apps and personalise their OS, while IT could keep the environment stable.  People were saying things like ‘I can give my users local admin privileges!’.

That was the dream, and it all sounded pretty good.  Then people realised that they would have to swap cheap storage on the endpoint for expensive storage in the data centre.  It also just seemed, well, silly to have 5,000 copies of explorer.exe sitting on the SAN.  The advantages of data de-duplication were talked about, but the model that everyone settled on was a golden OS image: Citrix had Provisioning Server and VMware had linked clones.  Not only did this solve the high SAN demands, it enabled us to update/patch only one golden image, and it worked for everyone!  Double win!
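To put some rough numbers on that SAN saving — the image size and delta figure below are illustrative assumptions, not measured data:

```python
# Sketch: why 5,000 full OS copies on the SAN looked silly.
# A 20 GB OS image and a 1 GB initial per-desktop delta are assumed figures.
desktops = 5000
os_image_gb = 20

full_clones_gb = desktops * os_image_gb       # every desktop carries its own OS
per_desktop_delta_gb = 1                      # assumed day-one delta per user
golden_model_gb = os_image_gb + desktops * per_desktop_delta_gb

print(f"Full clones:  {full_clones_gb:,} GB")
print(f"Golden image: {golden_model_gb:,} GB")
```

A twenty-fold saving on day one — which is exactly why the deltas growing back to full-image size (below) hurt so much.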

So now we have thousands of users on one golden image; the trouble is, we need different application sets.  “No problem!” said the industry, “we have application virtualisation” — and it’s even a fairly mature technology: ThinApp, Citrix Streaming, App-V and all the rest.  Except not all applications are suitable for streaming: some have licence requirements that rely on MAC addresses, some install drivers or services, etc. etc.

In any large organisation, maybe 2% of applications are like this — generally more than ten years old, but impossible to dump.  Out of, say, six hundred apps that’s only twelve that need to be in the golden image, so we increase the number of golden images to twelve, and the rest of the applications are streamed.

So far so good, although with this golden-image model we have hit a snag: to allow users to install applications, we need to use block-level deltas to save their personal information.  Over time these block-level deltas can grow to the size of the original installation, ruining our nice SAN space-saving ideas!  Not only that, when you update the base image you can’t reconcile the deltas; you have to throw them away.  That’s no good: you can’t give users a facility and then randomly remove their changes.  OK, let’s lock down the OS; we can use a profile solution to save user personalisation via the file system (although obviously with no user-installed apps).  For a great explanation of block vs. file, see Brian Madden’s post “Atlantis Computing hopes to solve the ‘file-based’ versus ‘block-based’ VDI disk image challenge”.

Lots of vendors already in the Terminal Server space immediately said ‘We have a profile solution!’, and AppSense, RES, RTO, Tricerat etc. put out VDI profile solutions.

All of this worked great in the POCs and pilots.  The trouble is, when it scaled up to thousands of users, we found that the power users — moving gigs of VMDKs around, working with large media files and so on — meant we had to have REALLY expensive Tier 1 storage on the SAN.  It became uneconomical to move those users to VDI, so we left them on their fat desktops.

So where does that leave us on our big VDI project?

  • Multiple users on an OS image
  • Application silos
  • Locked down desktops
  • Profile solutions from Appsense, RTO, RES etc.
  • Users limited to task and knowledge workers
  • Oh yeah, print solutions from Citrix and ThinPrint.
  • Desktops accessed via RDP or ICA

I mean, what does that sound like to you?  To me it sounds EXACTLY like Terminal Server.  What we have done is take the VDI dream and apply Terminal Server thinking to it; unsurprisingly, it now looks just like Terminal Server, but with extra licensing costs.

We need to apply some brand-new thinking.  There are vendors out there trying to do this, like the aforementioned Atlantis, but before VDI really takes off we need to rethink a lot of things, or Gartner’s prediction of VDI becoming a $65 billion business covering 40% of the world’s professional desktops seems a long way off.


Copyright © 2007 JimMoyle.com. All rights reserved.