VDI Session reliability

No matter how utopian the idea of VDI, the harsh reality of remote users, and the networks they work on, can cause issues for VDI delivery and use. Taking the real world into consideration should be part of your VDI delivery plan when implementing a solution.

VDI clients working on overloaded networks, or remotely over WiFi, will be prone to dropouts. So how does VDI handle a dropout? It's worth knowing: one of the biggest issues I've come across is multiple sessions left in use, usually caused by packet loss and high latency on slow connections to the server, a recipe for bottlenecked processes. All of which leads to server overload, as the ghost/dead sessions consume excessive server memory and CPU.
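
By way of illustration, this sort of housekeeping can be scripted on an RDS deployment. Below is a minimal PowerShell sketch, assuming Server 2012's RemoteDesktop module and a hypothetical broker name, that finds disconnected sessions and logs them off to reclaim their memory and CPU:

```powershell
Import-Module RemoteDesktop

# Ask the broker for every session it knows about, and keep the disconnected ones.
$dead = Get-RDUserSession -ConnectionBroker 'broker.company.local' |
    Where-Object { $_.SessionState -eq 'STATE_DISCONNECTED' }

# Log each dead session off to release the memory and CPU it is holding.
foreach ($session in $dead) {
    Invoke-RDUserLogoff -HostServer $session.HostServer `
                        -UnifiedSessionID $session.UnifiedSessionId `
                        -Force
}
```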

For that reason I'm more in favour of the RD Web RDS service, as it restricts what remote users can actually do to harm your VDI operation in such events.

RDS
The advantage of the RDS 2012 system is that it uses Microsoft's standard Remote Desktop technology to connect the VDI client to the server(s) and services.

This means that when a disconnection occurs, the system makes up to a maximum of 20 attempts to re-establish the connection; the user cannot start another session until those attempts succeed or fail.

There's a visual warning and a display of the reconnection attempts while the "dropped" session awaits reconnection. Should all 20 attempts fail, the session is flagged as disconnected and the server housekeeping removes the dead session in the background.
At that point the user can pick up another session from the broker server.
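
On the client side, this reconnection behaviour is governed by standard settings in the .rdp file itself; the values shown here are illustrative:

```
autoreconnection enabled:i:1
autoreconnect max retries:i:20
```

The first setting enables automatic reconnection; the second caps the number of retries, which is where the limit of 20 described above comes from.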

Citrix

Citrix deals with dropouts differently. XenApp/XenDesktop sessions are controlled by the Receiver, which will basically keep trying to reconnect the session no matter how long it has been disconnected. But that is where the issue lies with using Citrix 'without a leash'.

A user may experience a dropout while working remotely and be tempted to fire up another session to continue working, whilst 'in the background' the Receiver is wrestling to re-establish the lost connection; when that connection is established, it is presented back to the originating user.

The issue is that if the VDI is being used to access other software on other servers, and the user has kicked off multiple sessions in that software, the loss of the client on their side may not terminate the background sessions on those servers, leaving dead sessions that can consume a great deal of the other server's resources. So again the latency build-up becomes an issue.

Depending on user stubbornness, this could easily lead to a crippling overload of a server if session dropouts are frequent. Although Citrix utilises its Thinwire technology to improve matters on high-latency networks, it can cause "nasty side effects" over the network if not configured appropriately.

A correct Citrix setup/configuration should, by policy/rules, have disconnect timeouts set for situations like these; idle timeouts are also recommended. Also raise users' awareness of dropouts and what to do, so that they don't end up causing your IT team more issues.
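
Citrix sets these limits through its own policy engine in Studio; for comparison, on an RDS session host the equivalent disconnect and idle limits can be set through Group Policy, or scripted against the policy registry values, as in this sketch (timeout values are illustrative and in milliseconds):

```powershell
# Registry values behind the "Session Time Limits" Group Policy settings.
$key = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services'
New-Item -Path $key -Force | Out-Null

# End disconnected sessions after 30 minutes.
Set-ItemProperty -Path $key -Name MaxDisconnectionTime -Value 1800000 -Type DWord

# Disconnect sessions that have been idle for 2 hours.
Set-ItemProperty -Path $key -Name MaxIdleTime -Value 7200000 -Type DWord
```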

Conclusion
RDS makes for an easier solution, without as much configuration needed for dropout handling; waiting for the green light from the server before proceeding helps to eliminate problems with dropped connections. Citrix, on the other hand, requires some consideration in how such events are handled.

Not that Citrix has inherent issues. I've seen customer setups where Citrix is as solid as a rock within their own network, perfect for delivering a standard painted-desktop solution; the issue lies with not planning for remote workers on external networks.

Windows 10 Phone

The leap of faith

Although I used the Technical Preview for PC, I stayed away from the phone version of Windows 10 due to the incompatibility issues with the older Nokia phones; after all, I didn't want to spend a fortune buying a new phone to trial an "in test" version of the OS, as it was at that time.

I'm a big fan of the Technical Preview concept; it is an ideal way to approach a software release, putting ideas into pre-release testing and allowing users their say. This all makes for a solution tailored to its users, but as Windows 10 was supposedly the same across platforms, I didn't see the point of double-testing both phone and PC.

But, as no doubt others have found, the apps in the store for Windows Phone 8 are now becoming dated, in that newer features are not being brought into them. Obviously Microsoft is concentrating its focus on the Windows 10 platform, one way or another.

So the time was right to look at upgrading the phone, and since I've been using the Windows 10 platform on PC for over a year, it seemed right to take the leap and go for a Windows 10 phone: to ensure that the apps I use are up to date and, after all that, to be using a system that will hopefully be around for a while for mobile/smartphone requirements.

Makes and models

Thankfully there's a wide variety of Microsoft phones available to choose from, not all of them being £599.00 mega-phones in both features and price.

Tempted as I was to get the 950 XL, being a phablet fan, pure economics kept me away from that model. The 640 was within reach, and the other half uses and adores that model, but that phone, released in 2015, does not come with Windows 10 installed (you can upgrade), and I wanted the "out of the box" experience with 10, avoiding any OS update wrestling that might ensue.

So the Lumia 650 came into view: not a hole-in-the-pocket burner like the top-of-the-range phone, but essentially a Windows 10 version of the 640, with a smaller screen than I'm used to. I thought this a better option than the 550 entry model, as after all I have to use the phone for both work and home.

Additionally, I have some NFC work coming up (I've still been using Android for that purpose). So when the opportunity arose to obtain the Lumia 650 at a reasonable price, I grabbed hold of the handset, and off we go into Windows 10 Mobile.

The road ahead

The following future Windows 10 phone articles will look at the changes: not just the layouts of settings screens, but the features added to (and taken away from) the platform. This is where the usual "bring back the old version" protests begin, but so far the changes I've encountered seem valid and make sense. Change is inevitable, and for a platform to succeed you can't always build on the ashes of what remained. The biggest hurdle to taking advantage of Windows 10 has been users' staunch refusal to accept that their old peripherals and old drivers may not be compatible with a newer and more efficient OS, keeping them on old versions. Embracing the change can open up a better OS experience and allow you to work with a truly innovative environment.

VDI Considerations


After doing the initial proofing of VDI, it's worth pointing out some important stages to consider when making the system live for use; ignoring these stages can lead to such an implementation project failing in its application to users.


The user experience and efficiency.

The design and delivery of the VMs is important: try to deliver what the user expects from their desktop, not only the software but the appearance too. This can save time on training, and dramatically reduce the onslaught of questions such as "where do I find this?".

There will be differences in "the way the client works"; anticipating what those are will again make for a smooth and less stressful transition to working with VDI.


BYOD and the end-point. VDI allows a lot of diversity in the devices a desktop experience can be delivered to. Stand your ground and ensure that you clearly state what type of device and specification is required (publish a list of supported devices).

For example, yes, RD Web and VDI will work on an Android platform, but attempting to run RDP on an 8 GB phone with a 5-inch screen just isn't practical, although it may be suitable for vApps.

Also ensure you know who is accessing the device, and that they are following corporate security policies and procedures. The link with domain control is essential for validating users
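
A simple way to honour that link is to gate access on a dedicated security group. A sketch, assuming the ActiveDirectory PowerShell module and a hypothetical 'VDI-Users' group:

```powershell
Import-Module ActiveDirectory

$user  = 'fred.blogs'   # hypothetical user
$group = 'VDI-Users'    # hypothetical security group gating VDI access

# Expand the group (including nested groups) and check the user is in it.
$members = Get-ADGroupMember -Identity $group -Recursive |
    Select-Object -ExpandProperty SamAccountName

if ($members -contains $user) {
    Write-Output "$user is authorised for VDI access."
} else {
    Write-Output "$user is not in $group and should be refused."
}
```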


Capacity requirements.

Ensuring your infrastructure is adequate and can cope with a VDI deployment is essential; keeping graphics, memory and storage requirements to a minimum when creating the VM helps to keep the delivery and use of VDI in line with user expectations.

I've read the term "boot storm" used for the situation where a large part of the workload is demanded at once (such as all VDI users logging in at the same time): a spike in usage so large that it can cripple a storage area network (SAN) that is not prepared for such an event. So ensure your network speed and storage are up to the task.

Servers with solid-state disks (SSDs) or onboard flash cache may help prevent spikes in I/O. By monitoring the I/O and controller metrics at the broker, administrators can make decisions that determine the performance specifics of their storage system.
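
As a starting point, the standard PhysicalDisk counters expose most of what matters here; a sketch using Get-Counter (a commonly quoted rule of thumb is that sustained Avg. Disk sec/Read above roughly 20-25 ms signals storage latency trouble):

```powershell
# Sample the storage counters that reveal an I/O bottleneck or boot storm:
# twelve samples at five-second intervals, i.e. a one-minute window.
$counters = @(
    '\PhysicalDisk(_Total)\Avg. Disk Queue Length',
    '\PhysicalDisk(_Total)\Avg. Disk sec/Read',
    '\PhysicalDisk(_Total)\Avg. Disk sec/Write'
)

Get-Counter -Counter $counters -SampleInterval 5 -MaxSamples 12 |
    ForEach-Object {
        $_.CounterSamples |
            Select-Object Path, @{ n = 'Value'; e = { [math]::Round($_.CookedValue, 4) } }
    }
```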


Virtualization is the key

A solid management plan can make all the difference in the success of your VDI deployment. For example, golden images and server provisioning help administrators quickly provision resources to VDI workloads. Similarly, hypervisor VM redundancy capabilities (how VMs fail over if there is an event) allow you to create powerful high-availability and resiliency policies. Remember, today's VDI ecosystem is a lot smarter: you can now set specific alerts and monitoring thresholds to find issues before they become major problems.
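
As one concrete example on the Microsoft stack, Hyper-V Replica and checkpoints can be driven from PowerShell; a sketch with hypothetical VM and host names:

```powershell
# Replicate a pooled VM to a second Hyper-V host so a failover policy can
# switch to the replica if the primary host is lost (hypothetical names).
Enable-VMReplication -VMName 'Win10-Pool-01' `
                     -ReplicaServerName 'drhost.company.local' `
                     -ReplicaServerPort 80 `
                     -AuthenticationType Kerberos

# Take a checkpoint of the golden image before any risky change to it.
Checkpoint-VM -Name 'Win10Template' -SnapshotName 'pre-update'
```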

Some VDI workloads require greater levels of insight and visibility, so for those you'd want to deploy agent-based AV technologies. However, if you've got a kiosk running a locked-down VM with little user interaction, a simple agentless solution can help. Data loss prevention (DLP), intrusion prevention systems (IPS) and intrusion detection systems (IDS) each provide granular security controls around VDI. For example, if Social Security numbers are being transferred within a network against protocol, these types of systems can catch it and stop it from happening.


RDS Pet project

Usually the most work I do with Windows Server is the creation and maintenance of file and IIS servers on private cloud/hosted platforms, delivering the SaaS software that the company I work for provides to its clients. But an opportunity arose to perform a proving exercise for disaster recovery, which I took on as a pet project (no funding, no real project time allocated) that involved Remote Desktop delivery.

Previously, we had used Terminal Services and found it cumbersome and restrictive for the users' purposes, so some caution was expressed over trying a similar project that posed some restrictions; but as they say, where there's a will, there's always a way.

The task

Firstly, getting management to understand the benefits of RD working; secondly, and more importantly, keeping costs to a minimum in proving the concept, even though we use cloud heavily; and finally, presenting the advantages of RD as a solution.

There are several clients with a thin client/painted desktop solution, and the one thing I've noticed is the slowness of its operation. The objective in my mind was not to present another 'looks nice but ruddy awful to work with' platform, as we had with Terminal Services, but something more adaptable that suited the needs of not one part of the company but several: operations, implementation, IT support and management.

1. Advantages of RD Working

Providing equipment to users who can work from home was becoming a logistical problem: firstly procuring the equipment, followed by configuring the software for the user, then locking down access both at user level and with BitLocker, so as to protect against theft and ensure data security.

So, when you inform management that users can safely work from home on their own machines without risking a security breach, management becomes interested. Follow that up with reduced equipment costs, and the fact that implementing RD Web would allow BYOD safely, and they were hooked. The big clincher was the Mac users, with the usual cached-credential problems and having to run Office 365 as a solution for them, with all of its drive-mapping problems; offering a workable solution and reducing IT time spent on that platform would be a big advantage.

2. Costs of proving

We have a cloud hosting provider, but commissioning a machine, operating system(s) and storage is an unrecoverable R&D cost, which I didn't want to waste without knowing how successful the RD solution would be.

Thankfully Microsoft TechNet came to the rescue, with a fantastic 180-day evaluation of its 2012 R2 Server. So the cost of the operating system was covered for the evaluation of the project; even better, once all is proven it's a simple matter of buying the licence and, presto, a fully working copy.

Machinery-wise, being on cloud means that there's a minimal amount of physical hardware available for testing, and not wanting to run up the cost of a machine that might be scrapped (again wasting money and resources), a physical server to test on was decided on. Believe it or not, the acting server for brokering, virtualised machine storage and running was an i3 laptop (of all things!) with 8 GB of memory that was scheduled for junking; not a heavyweight server by anyone's standards. But as the story has a happy ending, you'll be glad to know that it held its own.

Licensing of the virtual machines: again, the cost of licences would only be a burden on the RD proving at this early point, so the 'risk' of running with Microsoft's Windows 10 Technical Preview seemed an ideal solution.

The ISO is available for free download, there'd be no need to upgrade a base OS in a proving exercise, and it's a solid platform for running all of the company's software requirements on; another perfect fit for evaluation.

3. Time and effort

The next big hurdle was the amount of time spent on this project; yes, there are quiet moments at work, but in IT/application support they are few and far between. The whole proving exercise was completed in lunch breaks and the last part of a Friday afternoon, so my commitment to the project came at minimal cost in company time; after all, I was keen to prove the project worked. Passing the finish line without any excessive cost or lost time was just an added boon.

The Solution

The idea, to simplify the testing exercise, was to put the broker server, virtualisation host and link to Active Directory all on the one machine. This machine would be placed on the user-level domain of the company (we have several, both direct and non-direct internet-facing). This had the advantage of reducing the setup configuration of the virtual machines, as they would all be generated on the user domain and, through the association with Active Directory, would automatically pick up the policy that the company directs.

Server 2012 R2 has to be one of the most clearly laid out operating systems I've had the pleasure of using. Future articles will explain why, but the configuration of Internet Information Services (IIS) and the RD brokering was simple. There's even a quick option (a wizard) to do all of this for you, but following the usual guide gives you an appreciation of how you could incorporate other separate servers to deliver a full service without overloading a single server.
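
For anyone who prefers scripting to the wizard, the same all-on-one-server deployment can be stood up with the RemoteDesktop module; a sketch with a hypothetical host name:

```powershell
Import-Module RemoteDesktop

# Broker, web access and virtualisation host all on the one machine,
# exactly as in this proving exercise (hypothetical server name).
New-RDVirtualDesktopDeployment -ConnectionBroker   'rdslab.company.local' `
                               -WebAccessServer    'rdslab.company.local' `
                               -VirtualizationHost 'rdslab.company.local'
```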

The next part was the stop-and-think stage: access and security for users. Users of RD services don't have to be domain users, but if you're opening doors to your systems, you want to know who's coming in. A decision had to be made for the proofing: do we show off how flexible this method of access can be, or do we stay with a limited delivery and let management decide to take it further once the appetite has been whetted?

Originally there had been one concept for the RD Web service, but now other possibilities were coming into view that had not been considered:

  1. Disaster recovery – total failure of the office systems
  2. BYOD/flexible/home workers – the ability to access the company network with no worry about security
  3. External support to clients – the ability to join clients' networks for support purposes, without any fear of passing on malware/viruses, or of catching one from the external company
  4. Machine client app hosting – in addition to the company SaaS solution, there are additional products; RD offers the opportunity to host desktop machine access and applications on their behalf

The little idea was beginning to grow; luckily, taking a breath put both feet back on the ground, and a basic core delivery system would be the plot of the test from now on. Once we had a working model, others could decide which direction(s) it should take.

Environment Delivery

Provisioning the virtual template for the proving exercise would be simple. Firstly, all of the company applications are web based, so that was the easy part, but there was the matter of one specific application requiring Java! An additional requirement for Office applications was necessary, not only for office working but as an integral part of the interface outputs of the company software (mail merge, payroll reporting extracts, etc.); again, this was made much easier by the existing use of Office 365 and the "office" accounts we have.

Administrator account

The virtual machine would be constructed on the server under an admin account (no Microsoft sign-in would be necessary); when the virtual machine is requested by the user, it's the user's access level and rights from the domain that are built into the delivered desktop.

Java would be as happy as the proverbial pig in muck at that level, and the company application would run fine from user-level requests to the Java Development Kit.

Office, again installed as local admin, would inherit the domain user account fred.blogs@company.local.

System preparation

Sysprep under Windows 10 has its pitfalls: firstly, there are issues with the modern apps that cause the process to fail; secondly, partially updated apps can be a problem; and finally, there are additional flags that benefit the build of the VHD file for use.

The best method discovered was to grab the latest available version (and run updates), then use PowerShell to strip the OS to the necessary bones, and finally install Java and Office.
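
The stripping step itself is worth a sketch. The snippet below, a commonly used approach that assumes an elevated PowerShell on the template machine, removes the provisioned modern apps that are the usual cause of sysprep failures; which apps you keep is your own call:

```powershell
# Remove the provisioned modern apps (the usual sysprep blockers) from the
# image, keeping the Store itself; new user profiles then start clean.
Get-AppxProvisionedPackage -Online |
    Where-Object { $_.DisplayName -notlike '*WindowsStore*' } |
    Remove-AppxProvisionedPackage -Online

# Remove the copies already installed for existing profiles as well, so no
# half-installed app is left behind to trip sysprep up.
Get-AppxPackage -AllUsers |
    Where-Object { $_.Name -notlike '*WindowsStore*' } |
    Remove-AppxPackage
```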

Sysprep ran through without issues to create a solid basic operating system to work with, and more importantly without having to go through the error logs in the Panther directory working out what to delete next. The update is important; otherwise, trying to work out how to eliminate Candy Crush Soda Saga once it's part-updated could drive you to a stay in hospital for recovery purposes!

The flags for the build were /oobe for the out-of-the-box experience, so the virtual machine would build itself from scratch. The /mode:vm flag is essential for easy construction of the VM, and forget about /unattend files: on the broker server you can specify that no unattend file is needed, so the VM will build without prompting the user or checking the platform, provided you're satisfied that users will use appropriate equipment.
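
Put together, the generalisation step looked along these lines (a sketch; note that /mode:vm is only valid when the image will be deployed as a virtual machine on the same hypervisor):

```powershell
# Generalise the template: out-of-box experience on first boot, VM-optimised
# generalisation, then shut down ready for the VHD to be captured.
& "$env:WINDIR\System32\Sysprep\sysprep.exe" /generalize /oobe /mode:vm /shutdown
```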


End results from the test

To be honest, even with this "two bob" cheap solution, there's a lot to admire in the workings of RD Web. Over a basic computer there's very little additional setup on the client side.

Just visiting the https:// secure URL, with access restricted to valid domain users, got the approving nod from those worried about security, and the delivery and creation of the virtual machine was complete in under a minute, including the various login prompts (remember we're using a single laptop to represent a three-server delivery).

A colleague tried the system with a Samsung Galaxy phone; the big drum roll of delivering a Windows desktop environment to another platform was under way. After the download came the snag of what to do with the file, as there wasn't a native app available on that phone.

Simply downloading the Microsoft Remote Desktop app from Google Play (it's free) was the solution, and presto: one Windows 10 machine running on an Android phone! Again a test, but a valid one, proving that RD Web is cross-platform, and that with a more realistic device such as a tablet, users can access systems easily. We also downloaded the same app on the iPad to test, and had RDS capability on the other popular platforms, no longer tying users down to having a PC available.

Using 'shortcuts' made the proofing task easier; for example, the server was put on the domain along with the other user machines to reduce the work involved in IP configuration, cross-domain verification and port opening. Thanks to that, the IP address of a generated virtual machine sits on the same section of the network as the other physical users.

All in all, a positive result from a few hours' work and at very little cost.

What next?

Seeing is believing, and with the system demonstrated to management and nods of approval given, the next stage is to advance the delivery of the virtual machines and designate specific ones to users (easily done through the domain controller).

Create alternative pools of machines for user groups such as the service team, management and so on.
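
Both of those steps can be scripted with the same RemoteDesktop module used for the deployment. A sketch, with hypothetical collection, template and server names throughout:

```powershell
Import-Module RemoteDesktop

# A pooled collection for a user group, built from the sysprepped template.
New-RDVirtualDesktopCollection -CollectionName 'ServiceTeam' `
    -PooledManaged `
    -VirtualDesktopTemplateName 'Win10Template' `
    -VirtualDesktopTemplateHostServer 'rdslab.company.local' `
    -VirtualDesktopAllocation @{ 'rdslab.company.local' = 5 } `
    -StorageType LocalStorage `
    -UserGroups 'COMPANY\Service Team' `
    -ConnectionBroker 'rdslab.company.local'

# Designate a specific (personal) desktop to a named user.
Set-RDPersonalVirtualDesktopAssignment -CollectionName 'Managers' `
    -User 'COMPANY\fred.blogs' `
    -VirtualDesktopName 'RDS-VM-01' `
    -ConnectionBroker 'rdslab.company.local'
```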

The double check of running the system on the Mac platform will no doubt be obligatory, but basically, having a simple solution that you can demonstrate, and more importantly build on, has to be the way forward for IT directing company infrastructure without risking budgets or wasting money. To quote someone:


“If you build it, they will come”