Usually the most work I do with Windows Server is the creation and maintenance of file and IIS servers on private cloud/hosted platforms, delivering the SaaS software that the company I work for provides to its clients. But an opportunity arose to perform a proving exercise for disaster recovery, which I took on as a pet project (no funding, no real project time allocated) involving Remote Desktop delivery.
We had previously used Terminal Services and found it cumbersome and restrictive for our users' purposes, so there was some caution expressed over trying a similar project with similar restrictions. But, as they say, where there's a will, there's always a way.
There were three hurdles: firstly, getting management to understand the benefits of RD working; secondly, and more importantly (although we use cloud heavily), keeping the costs of proving the concept to a minimum; and finally, presenting the advantages of RD as a solution.
There are several clients with a thin client/painted desktop solution, and the one thing I've noticed is the slowness of its operation. The objective in my mind was not to present another 'looks nice but ruddy awful to work with' platform, as we had with Terminal Services, but something more adaptable that suited the needs of not one part of the company but several: operations, implementation, IT support and management.
1. Advantages of RD Working
Providing equipment to users who work from home was becoming a logistical problem: firstly procuring the equipment, followed by configuring the software for the user, then locking down access both at user level and with BitLocker, so as to protect against theft and ensure data security.
So, when you inform management that users can safely work from home on their own machines without risking a security breach, management becomes interested. Follow that up with reduced equipment costs, and the fact that implementing RD Web would allow BYOD safely, and they were hooked. The big clincher was the Mac users: alongside the usual cached-credential problems, we were having to run Office 365 as a solution for them, with all of its drive-mapping problems, so offering a workable alternative and reducing the IT time spent on that platform would be a big advantage.
2. Costs of proving
We have a cloud hosting provider, but commissioning a machine, operating system(s) and storage is an unrecoverable R&D cost, which I didn't want to waste without knowing how successful the RD solution would be.
Thankfully, Microsoft TechNet came to the rescue with a fantastic 180-day evaluation of its 2012 R2 Server. So the cost of the operating system was gone for the evaluation stage of the project; even better, once everything is proven it's a simple matter of buying the licence and, presto, a fully working copy.
Machinery-wise, being on cloud means there's a minimal amount of physical hardware available for testing, and not wanting to run up the cost of a machine that might be scrapped (again wasting money and resources), a spare physical machine to test on was decided upon. Believe it or not, the server acting as broker, virtual machine storage and virtualisation host was an i3 laptop (of all things!) with 8GB of memory that was scheduled for junking: not a heavyweight server by anyone's means. But as the story has a happy ending, you'll be glad to know that it held its own.
Licensing of virtual machines: again, licence costs would only be a burden on the RD project at this early point, so the 'risk' of running with Microsoft's Windows 10 Technical Preview seemed an ideal solution.
The ISO is available for free download, there'd be no need to upgrade a basic OS in a proving exercise, and it's a solid platform for running all of the company's software requirements: another perfect fit for the evaluation.
3. Time and effort
The next big hurdle was the amount of time spent on the project. Yes, there are quiet moments at work, but working in IT/application support they are few and far between. The whole proving exercise was completed in lunch breaks and the last part of a Friday afternoon, so work committed to the project came at minimal cost to company time; after all, I was keen to prove the project worked. Passing the finish line without any excessive cost or lost time was just an added boon.
To simplify the testing exercise, the idea was to put the broker server, virtualisation host and the link to Active Directory all on the one machine. This machine would be placed on the company's user-level domain (we have several, both direct and non-direct internet facing). This would reduce the setup configuration of the virtual machines, as they would all be generated on the user domain and, through their association with Active Directory, automatically pick up the policies the company mandates.
Server 2012 R2 has to be one of the most clearly laid out operating systems I've had the pleasure of using. Future articles will explain why, but configuring Internet Information Services (IIS) and the RD brokering was simple. There's even a quick option (a wizard) to do all of this for you, but following the usual guide gives you an appreciation of how you could incorporate separate servers to deliver the full service without overloading a single one.
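For the curious, the wizard's work can also be done from PowerShell with the RemoteDesktop module. A minimal sketch, assuming a hypothetical server name (not the one used in this exercise); because the laptop wore all three hats, every role points at the same box:

```powershell
# Sketch only: "RDTEST01.corp.example.com" is an illustrative name.
Import-Module RemoteDesktop

# Create a virtual-machine-based deployment with the Connection Broker,
# RD Web Access and RD Virtualization Host roles all on the one machine.
New-RDVirtualDesktopDeployment -ConnectionBroker "RDTEST01.corp.example.com" `
                               -WebAccessServer "RDTEST01.corp.example.com" `
                               -VirtualizationHost "RDTEST01.corp.example.com"
```

In a production deployment each parameter would point at a separate server, which is exactly the appreciation the manual route gives you.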
The next part was the stop-and-think stage: access and security for users. Users of RD services don't have to be domain users, but if you're opening doors to your systems, you want to know who's coming in. A decision had to be made for the proving exercise: do we show off how flexible this method of access can be, or do we stay with a limited delivery and let management decide to take it further once the appetite has been whetted?
Originally there had been one concept for the RD Web service, but now other possibilities were coming into view that had not been considered:
- Disaster recovery – total failure of the office systems
- BYOD/flexible/home workers – the ability to access the company network with no security worries
- External support to clients – the ability to join clients' networks for support purposes, without any fear of passing on malware/viruses, or of picking one up from them
- Machine client app hosting – in addition to the company SaaS solution there are other products; RD offers the opportunity to host desktop machine access and applications on clients' behalf
The little idea was beginning to grow; luckily, taking a breath put both feet back on the ground, and a basic core delivery system would be the scope of the test from then on. Once we had a working model, others could decide which direction(s) it should take.
Provisioning the virtual template for the proving exercise would be simple. All of the company applications are web based, so that was the easy part, but there was the matter of one specific application requiring Java! Office applications were an additional requirement, not only for office working but as an integral part of the company software's interface outputs (mail merge, payroll reporting extracts, etc.); again, this was made much easier by our existing use of Office 365 and the "office" accounts we have.
The virtual template would be constructed on the server under an admin account (no Microsoft sign-in would be necessary); when a virtual machine is requested by a user, it's the user's access level and rights from the domain that are built into the delivered desktop.
Java would be as happy as the proverbial pig in muck at that level, and the company application would run fine from user-level requests to the Java Development Kit.
Office, again installed as local admin, would inherit the domain user account firstname.lastname@example.org.
Sysprep under Windows 10 has its pitfalls: firstly, there are issues with the modern apps that cause the process to fail; secondly, part-updated versions can be a problem; and finally, there are additional flags that benefit the build of the VHD file.
The best method discovered was to grab the latest available version (and run updates), then use PowerShell to strip the OS down to the necessary bones, and finally install Java and Office.
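As a rough sketch of the "strip to the bones" step, run in an elevated PowerShell session on the template machine before Sysprep (exactly which packages you remove is a judgment call; this sweeps the lot):

```powershell
# Remove the provisioned modern apps (Candy Crush and friends) so they are
# not staged for new user profiles; a mismatch between a provisioned package
# and a part-updated per-user copy is a classic cause of Sysprep failure.
Get-AppxProvisionedPackage -Online |
    Remove-AppxProvisionedPackage -Online

# Remove the per-user copies already installed for the build account.
Get-AppxPackage | Remove-AppxPackage
```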
Sysprep then ran through without issue to create a solid basic operating system to work with and, more importantly, without having to trawl the error logs in the Panther directory working out what to delete next. The update step is important; otherwise, trying to work out how to eliminate Candy Crush Soda Saga once it's part-updated could drive you to a stay in hospital for recovery purposes!
The flags for the build were /oobe, for the out-of-the-box experience, so the virtual would build itself from scratch, and /mode:vm, which is essential for easy construction of the VM. And forget about unattend files: with the broker server you can specify that no unattend file is needed, so the VM will build without prompting the user or checking the platform, provided you're satisfied that users will use appropriate equipment.
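Put together, the Sysprep invocation looks something like this (the /mode:vm switch assumes the image will be redeployed to matching virtual hardware, which is the case here):

```
C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /mode:vm /shutdown
```

/shutdown simply powers the machine off afterwards, so the VHD can be copied away to serve as the template.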
End results from the test
To be honest, even with this "two bob" cheap solution, there's a lot to admire in the workings of RD Web. Beyond a basic computer, there's very little additional setup on the client side.
Just visiting the secure https URL, and responding only to valid domain users, got the approval nod from those worried about security, and the delivery and creation of the virtual machine was complete in under a minute, including the various login prompts (remember, we're using a single laptop to represent a three-server delivery).
A colleague tried the system with a Samsung Galaxy phone; the big drum roll of delivering a Windows desktop environment to another platform was under way. After the download came the snag of what to do with the file, as there wasn't a native app available on that phone.
Simply downloading the Microsoft Remote Desktop app from Google Play (it's free) was the solution, and presto: one Windows 10 machine running on an Android phone! Again a test, but a valid one, proving that RD Web is cross platform, and with a more realistic device such as a tablet, users can access systems easily. We also downloaded the same app on an iPad to test, and had RDS capability on the other popular platforms, no longer tying users down to having a PC available.
Using 'shortcuts' made the proving task easier; for example, the server was put on the domain along with the other user machines to reduce the work involved in IP configuration, cross-domain verification and port opening. Thanks to that, the IP address of a generated virtual machine sits on the same section of the network as the other physical users.
All in all, a positive result for a few hours' work and very little cost.
What next?
Seeing is believing, and with the system demonstrated to management and nods of approval given, the next stage is to advance the delivery of the virtual machines and designate specific ones to users (easily done through the domain controller).
Create alternative pools of machines for user groups such as the service team, management, etc.
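In RemoteDesktop-module terms, a pooled collection per user group is one cmdlet. A sketch with hypothetical names (the collection, template, server and group names here are all illustrative assumptions):

```powershell
# Build a managed pool of five desktops for the service team from the
# Sysprepped Windows 10 template; names and pool size are assumptions.
New-RDVirtualDesktopCollection -CollectionName "ServiceTeam" `
    -PooledManaged `
    -VirtualDesktopTemplateName "Win10-Template" `
    -VirtualDesktopTemplateHostServer "RDTEST01.corp.example.com" `
    -VirtualDesktopAllocation @{"RDTEST01.corp.example.com" = 5} `
    -StorageType LocalStorage `
    -ConnectionBroker "RDTEST01.corp.example.com" `
    -UserGroups "CORP\ServiceTeam"
```

Repeat per group (management, implementation, and so on) and the broker takes care of handing the right desktop to the right user.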
The double-check of running the system on the Mac platform will no doubt be obligatory, but basically, having a simple solution that you can demonstrate and, more importantly, build on has to be the way forward for IT directing company infrastructure without risking budgets or wasting money. To quote someone:
“If you build it, they will come”