Using Quick Assist in Windows 10

We’ve all been there, whether you’re a support analyst or not: you’re at home, comfy, and the phone rings with a friend or relative calling to say they have a problem with their computer. Instead of having to talk them through it blind, like the dreaded scenario of talking a passenger into landing a 747 because the pilot has passed out, you can put yourself in front of the computer remotely with a built-in, easy-to-use feature of Windows 10.
This is a secure connection, and the other person must take part in initiating it; it’s not possible to take remote control without their input. Both machines should also be running Windows 10, as Quick Assist is a new take on the older Remote Assistance. If the other person is running an older version of Windows, I’m afraid you’ll have to use that instead.
Offering assistance
Either using Cortana search or by navigating the Start menu, you need to run Quick Assist.
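If you prefer the keyboard, on most Windows 10 builds Quick Assist can also be launched directly; the exact shortcut and executable name may vary by build:

    Ctrl + Win + Q      (keyboard shortcut to launch Quick Assist)
    quickassist         (typed into the Run dialog or a command prompt)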
To initiate the connection as the person offering to help by remotely accessing someone else’s computer, click “Give assistance”.

[screenshot: qa1]

You’ll then have to sign in with your Microsoft account. After you do, you’ll receive a six-digit security code.

[screenshot: qa2]

This code needs to be sent to the person requiring assistance; you can read it out to them over the phone or, as you can see, copy and paste it into an email and send it to them.
Receiving Assistance

The person requiring assistance also needs to run Quick Assist, with one important difference: they’ll need to click “Get assistance” in the Quick Assist window that appears.

[screenshot: qa3]

At this point they’ll be prompted to enter the security code you were given. Once it’s submitted, they’ll see a confirmation prompt detailing that you (your name is displayed from your Microsoft account) want to remote in, and they’ll have to agree to give you access to their PC.

Once Connected

After they confirm, the remote connection will be established. Give it some time, as your home broadband speed may be better or worse than that of the person you’re connecting to.

Once remoted in, you have full access to their computer as if you were sitting in front of it, so you can launch any programs or access any files they could. You’ll have all the privileges the computer’s owner has, so you won’t be restricted from changing any system settings. You can now tinker away to look at the issue…

[screenshot: qa4]
The person receiving assistance will see everything that you’re doing, and can take back control at any time (to type in passwords, for example) or even terminate the connection once you’re done.

At the top right corner of the window, you’ll see icons that let you draw on the screen (handy for the big red arrow!), change the size of the window, remotely restart the computer, open Task Manager, or pause or end the Quick Assist connection.

Ending the session can be done by either user by closing the application from the “Quick Assist” bar at the top of the screen.
The “remote reboot” option is designed to reboot the remote computer and immediately resume the Quick Assist session without any further input. This may not always work properly, however. Be prepared to talk the other person through signing back into their PC and re-initiating the Quick Assist session if there’s a problem and this doesn’t happen automatically.

The computing world is flat

As the deadline for the Windows 10 upgrade offer looms over users, there are still a lot of them declining the upgrade.

Folks and companies declining the offer may have good reason to, but in my opinion a few of them are forming a “the world is flat” camp, and by not moving to Windows 10, either by upgrading or, dare I say it, buying a new device, they will miss out on an awful lot of improvements.

When I say improvements, I don’t just mean new features such as Cortana (nice as it is). Computing needs have moved on, and in my opinion Windows 10 is the OS that meets them. Computing isn’t chained to a desk anymore (office or dining room); it’s now flexible, and the ability to work anywhere is important not just for office work but for the home too.

Take, for example, a touch screen interface instead of a keyboard: the perfect input method for being out and about, or lounging on the sofa. Windows 10 has a native tablet mode to let you take advantage of this interface (hardware allowing, obviously). It’s not impossible in Windows 7; you can use touch there if you don’t mind squinting one eye and sticking your tongue out of the corner of your mouth as you line up your finger on exactly the right point on the screen.

Mobile computing has rapidly become more and more of a requirement, and Windows 10 caters for it well, offering a linked OS across devices, from the home PC to the tablet/2-in-1 and even the mobile phone. The ability to turn the phone into a full PC via the Continuum feature is another work-anywhere advantage.

But it’s not just shiny new methods of working that Windows 10 improves on. Work-wise, Windows 10 also offers better ways to perform standard computing.

For example, multiple desktops allow users to section off areas of work: you can have one desktop open with applications for one customer, and another with the same or different applications open for another customer. This tidy method means that you don’t have a taskbar cluttered with icons, and it reduces the chance of losing that open document or file you’ve been working on by closing the wrong one!

Remote desktop is available via a simple app; a straightforward create-and-save of connection details lets you easily build a library of RDP links you can connect to with a single click, with no need to remember login details, DNS names or IP addresses.

If you’re not just a home user, then the ability to run virtual PCs via the included Hyper-V is an option that allows your computer to become multiple computers. The list can go on and on.
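Client Hyper-V needs a Pro or Enterprise edition and hardware virtualisation support; as a rough sketch, it can be switched on from an elevated PowerShell prompt like so:

    # Enable the Hyper-V feature and its management tools (a reboot is required afterwards)
    Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All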

Windows 7 is clocking on for eight years old. There are already no new developments on that platform, just updates that attempt to keep security issues at bay, so with no new features being added it’s starting to get long in the tooth.

At the end of the day some machines just won’t cut it: old architecture and hardware put Windows 10 at a disadvantage. (That’s why there’s an upgrade check tool, which I recommend you run first to ensure that the machine will update, rather than doing it blindly and moaning in the forums.) The same goes for older software too, it has to be said. But that’s down to the user and the person or company who built the machine.

When other goods get long in the tooth, you replace them. It may well be time to consider hardware replacement if you wish to keep up with changes and meet the requirements of a changing computing environment.

P2V easy as ABC (with Hyper-V)

Having discovered just how easy it is to deploy a DaaS server setup, thanks to Server 2012’s RDS roles, the next big issue is putting meat on the bones: creating the virtual machines for the broker server to work with.

When I completed the pet project I basically started from the ground up, in that the VMs were created from scratch and just the additional required apps were added. But what if you need to virtualise an existing system?

Luckily, there’s a dead handy utility available from Microsoft that allows you to convert both physical machines and existing virtuals into virtual disks that a Hyper-V server can utilise.

Say hello to the sysadmin’s new friend in VM construction for Hyper-V: disk2vhd.exe.

[screenshot: p2v1]

Converting physical to virtual machines under Hyper-V

Having manually migrated physical servers into virtual hosting in the cloud, I know the task can be a long and laborious one: ensuring the initial configuration is right, then applying the applications and data, then looking at the security on top.

The ability to convert a system and just drop it onto the virtual hosting is a godsend, to say the least. In this article I’m focusing on virtualizing PCs as VMs for RDS, as a follow-on to the RDS article, but exactly the same procedure can be used for servers too; just add a little more disk space for storage and a touch more time to prep the VM disk.

What shouldn’t you virtualize

Hyper-V has a “limit” of 127 GB here, so anything larger than that is a no-no, I’m afraid. Also, certain server roles, such as a domain controller, are better suited to being built from scratch and then letting the new server sync with the main DC, rather than duplicating it and letting the ensuing chaos of syncing two matched servers battle it out.

 Obtain the Utility

The utility is available from the Windows Sysinternals page, where you can download it. At the time of writing the facility also exists to run the utility straight from the web, so if you have a fast internet connection you can run it from the cloud.
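If you’d rather not download anything onto the machine you’re converting, the tool can usually be run straight from the Sysinternals Live share; a rough sketch, assuming the standard Sysinternals Live layout at the time of writing:

    \\live.sysinternals.com\tools\disk2vhd.exe      (run from Explorer or a command prompt)
    https://live.sysinternals.com/disk2vhd.exe      (direct download in a browser)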

 

Using disk2vhd.exe

The utility is downloaded as a zip file. You need to extract it on the machine that you wish to convert from physical to virtual.

Tip: always tick the “Use Vhdx” box so the disk is created in the newer VHDX format introduced in Windows Server 2012. Compared to the traditional VHD, VHDX has several improvements, including a special internal log to reduce the chances of data corruption, a bigger capacity (up to 64 TB) and other great features. It’s worth using!

The Filename field sets the path where the virtual hard disk you’re going to create is stored; it’s much better to create the virtual disk on a separate disk from the one you’re virtualizing (avoid the black-hole scenario of copying the copy you’re creating!). Remember to tick every disk/volume you want to virtualize. If you want the result to be a bootable disk, include the system disk plus the boot area (tick the System Reserved volume). Click Create to start the process.

[screenshot: p2v2]
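Disk2vhd can also be driven from the command line, which is handy if you want to script the capture; a minimal sketch based on the usage documented on the Disk2vhd page at the time of writing, where the output paths are just examples (and should be on a separate drive):

    # Run from PowerShell in the folder the zip was extracted to
    .\disk2vhd.exe * E:\Capture\OfficePC.vhdx        # capture all volumes to a VHDX
    .\disk2vhd.exe C: E:\Capture\OfficePC-C.vhdx     # capture just the C: volume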

Convert the disk(s) to VHDX format and copy them to the Hyper-V host

Once the conversion is running, you’ll see the estimated time to completion displayed:

[screenshot: p2v3]

This screenshot shows the disk being copied to a new local drive, but as said you can copy it to accessible server drives, or to the cloud if required.

As a result of the operation you’ll get a VHDX file/disk, which you can now copy to your Hyper-V server and place in the folder where you keep your VM disks.
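If the Hyper-V host is reachable over the network, that copy can be done from PowerShell; a minimal sketch in which the host name and paths are just examples:

    # Copy the captured disk into the host's VM disk folder (example host and paths)
    Copy-Item -Path 'E:\Capture\OfficePC.vhdx' `
              -Destination '\\HV01\D$\Hyper-V\Virtual Hard Disks\OfficePC.vhdx'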

[screenshot: p2v4]

 Create a new VM on a Hyper-V host

Having created your disk, you next need a VM to attach it to. Run the New Virtual Machine wizard in Hyper-V Manager and configure it according to your needs; the configuration options are straightforward.

[screenshot: p2v5]

 

NOTE: Choose your VM generation carefully. Starting with Windows Server 2012 R2, Hyper-V has a new option:

Generation 2 virtual machines. This second generation of firmware for VMs brings a revised set of virtual hardware and new capabilities for users, such as booting from a SCSI device. There are limitations, in that only newer operating systems (Windows 8 and later) are Gen 2 compliant, so if you’re virtualizing older machines take care with your choice. Gen 2 really is best suited to 64-bit builds; I’ve used it for Windows 10 and Server 2012 VMs, for which it’s fine. If you’re not 100% sure, stick to Generation 1.

[screenshot: p2v6]
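The same VM can be created from PowerShell instead of the wizard, attaching the converted disk in one go; a minimal sketch using the built-in Hyper-V cmdlets, where the name, memory, switch and path are just examples:

    # Create a Generation 1 VM and attach the converted VHDX (example values)
    New-VM -Name 'OfficePC-VM' `
           -MemoryStartupBytes 4GB `
           -Generation 1 `
           -VHDPath 'D:\Hyper-V\Virtual Hard Disks\OfficePC.vhdx' `
           -SwitchName 'External vSwitch'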

Connecting up the VHDX disk

While creating the virtual machine you also have to configure the virtual hard disk: choose “Use an existing virtual hard disk”, pick the disk you created with the utility, and complete the rest of the steps in the wizard.

[screenshot: p2v7]
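If you created the VM without a disk, or want to attach the converted disk afterwards, that can also be done from PowerShell; a minimal sketch with example names and paths:

    # Attach the converted VHDX to an existing VM
    Add-VMHardDiskDrive -VMName 'OfficePC-VM' -Path 'D:\Hyper-V\Virtual Hard Disks\OfficePC.vhdx'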

 

After completing the wizard you should be able to run the VM: simply right-click it in Hyper-V Manager, select Start, then right-click again and choose Connect.

If the hardware configuration of the VM is different from that of the machine it was captured from, it may take some time for the VM to boot up. However, in a few minutes you’ll see the welcome screen and be ready to log in to the system. Presto, as simple as that.
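The start-and-connect step can also be scripted; a minimal sketch, where the VM name is just an example:

    Start-VM -Name 'OfficePC-VM'              # power on the converted VM
    vmconnect.exe localhost 'OfficePC-VM'     # open the Virtual Machine Connection console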

 

 

VDI Session reliability

No matter how utopian the idea of VDI, the harsh reality of remote users, and the networks they are working on, can cause issues for VDI delivery and use. Taking the real world into consideration should be part of your VDI delivery plan when implementing a solution.

VDI clients working on overloaded networks, or remotely over WiFi, will be prone to dropouts. So how does the dropout issue get handled by VDI? It’s worth knowing: one of the biggest issues I’ve come across is multiple sessions in use per user, usually caused by packet loss and high latency on slow connections to the server, a recipe for bottlenecked processes. All of which leads to server overload, with the ghost/dead sessions making excessive use of server memory and CPU.

For that reason I’m more in favour of the RD Web/RDS service, as it restricts what remote users can actually do to harm your VDI operation in such events.

RDS
The advantage of the RDS 2012 system is that it uses Microsoft’s standard Remote Desktop technology to connect the VDI client to the server(s) and services.

This means that when a disconnection occurs, the system makes up to a maximum of 20 attempts to re-establish the connection, and the user cannot attempt another connection until those reconnection attempts have succeeded or failed.

There’s a visual warning and a display of the reconnection attempts while the “dropped” session awaits reconnection. Should all 20 attempts fail, the session is flagged as disconnected, and the server housekeeping removes the dead session in the background.
At that point the user can pick up another session from the broker server.

Citrix

Citrix deals with dropouts differently. XenApp/XenDesktop sessions are controlled by Receiver, which will basically keep on trying to reconnect the session, no matter how long it’s been disconnected. But that is where the issue lies with using Citrix ‘without a leash’.

A user working remotely may experience a dropout and be tempted to fire up another session to continue working, whilst in the background Receiver is wrestling to re-establish the lost connection; when that connection is re-established it’s presented back to the originating user.

The issue is that if the VDI session is being used to access other software on back-end servers, the user has now kicked off multiple sessions in that software; losing the client on their side may not terminate the background sessions on those other servers, leaving dead sessions that can consume a lot of the other servers’ resources. So again the latency build-up becomes an issue.

Depending on user stubbornness, this could easily lead to a crippling overload of a server if session dropouts are frequent. Although Citrix utilises Thinwire technology to improve matters on high-latency networks, it can cause “nasty side effects” over the network if not configured appropriately.

A correct Citrix setup/configuration should, by policy, have disconnect timeouts set for situations like these, and idle timeouts are recommended too. Also raise users’ awareness of dropouts and what to do, so that they don’t end up causing your IT team more issues.
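For comparison, on the RDS side the equivalent limits can be applied to a session collection from PowerShell; a minimal sketch, assuming a collection called 'Desktops' and a broker at broker.company.local (both example names, with illustrative values):

    # End disconnected sessions after 1 hour and limit idle sessions to 2 hours (example values)
    Set-RDSessionCollectionConfiguration -CollectionName 'Desktops' `
        -ConnectionBroker 'broker.company.local' `
        -DisconnectedSessionLimitMin 60 `
        -IdleSessionLimitMin 120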

Conclusion
RDS makes for an easier solution, without as much configuration needed for dropout handling; waiting for the green light from the server before proceeding helps to eliminate problems with dropped connections. Citrix, on the other hand, requires some consideration in handling such events.

Not that Citrix has inherent issues: I’ve seen customer setups where Citrix is as solid as a rock within their own network, perfect for delivering a standard painted-desktop solution. The issue lies in not planning for remote workers on external networks.

Windows 10 Phone

The leap of faith

Although I used the Technical Preview on the PC, I stayed away from the phone version of Windows 10 due to the incompatibility issues with the older Nokia phones; after all, I didn’t want to spend a fortune buying a new phone to trial what was, at that time, an “in test” version of the OS.

I’m a big fan of the Technical Preview concept; it’s an ideal way to approach a software release, putting ideas into pre-release testing and allowing users their say. This all makes for a solution tailored to users, but as Windows 10 was supposedly the same across platforms, I didn’t see the point of double-testing both phone and PC.

But, as no doubt others have found, the apps in the store for Windows Phone 8 are now becoming dated, in that newer features are not being brought into them. Obviously Microsoft are concentrating their focus on the Windows 10 platform, one way or another.

So the time was right to look at upgrading the phone, and since I’ve been using the Windows 10 platform on the PC for over a year, it seemed right to take the leap and go for a Windows 10 phone, to ensure that the apps I use are up to date and that, hopefully, I’m using a system that will be around for a while for my mobile/smartphone requirements.

Makes and models

Thankfully there’s a wide variety of Microsoft phones available to choose from, and they’re not all £599.00 mega-phones in both features and price.

Tempted as I was to get the 950 XL, being a phablet fan, pure economics kept me away from that model. The 640 was within reach, and the other half uses and adores that model, but that phone, released in 2015, doesn’t come with Windows 10 installed (you can upgrade); I wanted the out-of-the-box experience with 10, avoiding any OS update wrestling that might ensue.

So the Lumia 650 came into view: not a hole-in-the-pocket burner like the top-of-the-range phone, but a Windows 10 version of the 640 model, albeit with a smaller screen than I’m used to. I thought this a better option than the entry-level 550, as after all I have to use the phone for both work and home.

Additionally, I have some NFC work coming up (I’ve still been using Android for that purpose). So when the opportunity arose to obtain the Lumia 650 at a reasonable price, I grabbed hold of the handset, and off we go into Windows 10 Mobile.

The road ahead

The following future Windows 10 phone articles will look at the changes: not just the layout of the settings screens, but features added to (and taken away from) the platform. This is where the usual “bring back the old version” protests begin, but so far the changes I’ve encountered seem valid and make sense. Change is inevitable, and for a platform to succeed you can’t always build on the ashes of what remained. The biggest hurdle to taking advantage of Windows 10 has been users’ staunch insistence that their old peripherals and old drivers must remain compatible with a newer, more efficient OS, keeping them on old versions. Embracing the change can open up a better OS experience and allow you to work with a truly innovative environment.

VDI Considerations

 

After doing the initial proving of VDI, it’s worth pointing out some important stages to consider when making the system live for use; ignoring these stages can lead to the implementation of such a project failing with its users.

 

The user experience and efficiency.

The design and delivery of the VMs is important: try to deliver what the user expects from their desktop, not only the software but the appearance too. This can save time on training, and dramatically reduce the onslaught of questions such as “where do I find this?”.

There will be differences in the way the client works; anticipating what those are will again provide a smoother and less stressful transition to working with VDI.

 

BYOD and the end-point. VDI allows a lot of diversity in the devices a desktop experience can be delivered to. Stand your ground and ensure that you clearly state what type of device and specification is required (a list of supported devices).

For example, yes, RD Web and VDI will work on an Android platform, but attempting to run a full RDP desktop on an 8 GB phone with a 5-inch screen just isn’t practical, although it may be suitable for virtual apps.

Also ensure you know who is accessing the device, and that they are following corporate security policies and procedures. The link with domain control is essential for validating users.


 

Capacity requirements.

Ensuring your infrastructure is adequate and can cope with a VDI deployment is essential. Keeping graphics, memory and storage requirements to a minimum when creating the VMs helps to keep the delivery and use of VDI in line with user expectations.

I came across the term “boot storm”: a large chunk of the workload being demanded at once (such as all VDI users logging in at the same time), so large that the resulting spike in usage can cripple a storage area network (SAN) that is not prepared for such an event. So ensure your network speed and storage are up to the task.

Servers with solid-state disks (SSD) or onboard flash cache may help prevent spikes in I/O. By monitoring the I/O and controller metrics of the broker and hosts, administrators can make decisions that determine the performance requirements for their storage system.
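A quick way to get a feel for those I/O numbers on a Windows host is the built-in performance counters; a minimal PowerShell sketch, where the counter set and sampling values are just examples:

    # Sample disk throughput and queue length every 5 seconds for a minute
    $counters = '\PhysicalDisk(_Total)\Disk Reads/sec',
                '\PhysicalDisk(_Total)\Disk Writes/sec',
                '\PhysicalDisk(_Total)\Avg. Disk Queue Length'
    Get-Counter -Counter $counters -SampleInterval 5 -MaxSamples 12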

 

Virtualization is the key

A solid management plan can make all the difference to the success of your VDI deployment. For example, golden images and server provisioning help administrators quickly provision resources to VDI workloads. Similarly, hypervisor VM redundancy capabilities (how VMs fail over if there is an event) allow you to create powerful high availability and resiliency policies. Remember, today’s VDI ecosystem is a lot smarter: you can now set specific alerts and monitoring thresholds to find issues before they become major problems.

Security needs thought too. Desktops that require greater levels of insight and visibility suit agent-based AV technologies; however, if you’ve got a kiosk running a locked-down VM with little user interaction, a simple agentless solution can help. Data loss prevention (DLP), intrusion prevention services (IPS) and intrusion detection services (IDS) each provide granular security controls around VDI. For example, if Social Security numbers are getting transferred within a network against protocol, these types of systems can catch it and stop it from happening.

 

RDS Pet project

Usually the most work I do with Windows Server is the creation and maintenance of file and IIS servers on private/hosted cloud, to deliver the SaaS software that the company I work for provides to its clients. But an opportunity arose to perform a proving exercise for disaster recovery, which I took on as a pet project (no funding, no real project time allocated) and which involved Remote Desktop delivery.

We had previously used Terminal Services and found it to be cumbersome and restrictive for the users’ purposes, so there was some caution expressed over trying a similar project, and that posed some restrictions. But, as they say, where there’s a will there’s always a way.

The task

Firstly, get management to understand the benefits of remote desktop working; secondly, and more importantly, keep the costs of proving the concept to a minimum, even though we use the cloud heavily; and finally, present the advantages of RD as a solution.

There are several clients with a thin-client/painted-desktop solution, and the one thing I’ve noticed is the slowness of its operation. The objective in my mind was not to present another “looks nice but ruddy awful to work with” platform, as we had with Terminal Services, but something more adaptable that suited the needs of not one part of the company but several: operations, implementation, IT support and management.

1. Advantages of RD working

Providing equipment to users who can work from home was becoming a logistical problem: firstly procuring the equipment, followed by configuring the software for the user, then locking down access both at user level and with BitLocker, so as to protect against theft and ensure data security.

So when you inform management that users can safely work from home on their own machines without risking a security breach, management becomes interested. Follow that up with reduced equipment costs, and the fact that implementing RD Web would allow BYOD safely, and they were hooked. The big clincher was the Mac users: along with the usual cached-credential problems, and having to run Office 365 as a solution for them with all of its drive-mapping problems, offering a workable solution and reducing the IT time spent on that platform would be a big advantage.

2. Costs of proving

We have a cloud hosting provider, but commissioning a machine, operating system(s) and storage is an unrecoverable R&D cost, which I didn’t want to waste without knowing how successful the RD solution would be.

Thankfully, Microsoft TechNet came to the rescue with a fantastic 180-day evaluation of Server 2012 R2. So the cost of the operating system disappears for the evaluation of the project; even better, once it’s all proven it’s a simple matter of buying the licence and, presto, a fully working copy.

Machinery-wise, being on the cloud means there’s a minimal amount of physical hardware available for testing, and not wanting to run up a cost for a machine that might be scrapped (again wasting money and resources), a physical server to test on had to be found. Believe it or not, the acting server for brokering, virtual machine storage and running the VMs was an i3 laptop (of all things!) with 8 GB of memory that was scheduled for junking; not a heavyweight server by anyone’s means. But as the story has a happy ending, you’ll be glad to know that it held its own.

As for licensing the virtual machines, again the cost of licences would only be an R&D burden at this early point, so the ‘risk’ of running with Microsoft’s Windows 10 Technical Preview seemed an ideal solution.

The ISO is available for free download, there’s no need to upgrade a base OS in a proving exercise, and it’s a solid platform for running all of the company’s software requirements on: another perfect fit for the evaluation.

3. Time and effort

The next big hurdle was the amount of time spent on the project; yes, there are quiet moments at work, but working in IT/application support they are few and far between. The whole proving exercise was completed in lunch breaks and the last part of a Friday afternoon, so the work committed to the project came at minimal cost to company time; after all, I was keen to prove the project worked. Crossing the finish line without any excessive cost or lost time was just an added boon.

The Solution

The idea, so as to simplify the testing exercise, was to put the broker server, the virtualisation host and the link to Active Directory all on the one machine. This machine would be placed on the user-level domain of the company (we have several, both directly and non-directly internet-facing). This would have the advantage of reducing the setup configuration of the virtual machines, as they would all be generated on the user domain and, through their association with Active Directory, automatically pick up the policies the company directs.

Server 2012 R2 has to be one of the most clearly laid-out operating systems I’ve had the pleasure of using. Future articles will explain why, but configuring Internet Information Services (IIS) and the RD brokering was simple. There’s even a quick option (a wizard) to do all of this for you, but following the usual guide gives you an appreciation of how you could incorporate other, separate servers to deliver a full service without overloading a single server.
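The same all-on-one-box deployment can also be driven from PowerShell rather than Server Manager; a minimal sketch, assuming the RemoteDesktop module is available and the single server is called rds01.company.local (an example name):

    # Deploy the connection broker, web access and virtualisation host roles onto one server
    New-RDVirtualDesktopDeployment -ConnectionBroker 'rds01.company.local' `
                                   -WebAccessServer 'rds01.company.local' `
                                   -VirtualizationHost 'rds01.company.local'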

The next part was the stop-and-think stage: user access and security. Users of RD services don’t have to be domain users, but if you’re opening doors to your systems you want to know who’s coming in. A decision had to be made for the proving exercise: do we show off how flexible this method of access can be, or do we stay with a limited delivery and let management decide whether to take it further once their appetite has been whetted?

Originally there had been one concept for the RD Web service, but now other possibilities were coming into view which had not been considered:

  1. Disaster recovery – total failure of the office systems.
  2. BYOD/flexible/home workers – the ability to access the company network with no security worries.
  3. External support to clients – the ability to join a client’s network for support purposes, without any fear of passing on malware/viruses, or of them getting one from the external company.
  4. Machine/client app hosting – in addition to the company’s SaaS solution there are additional products, and RD offers the opportunity to host desktop machine access and applications on clients’ behalf.

The little idea was beginning to grow; luckily, taking a breath put both feet back on the ground, and a basic core delivery system would be the plot of the test from then on. Once we had a working model, others could decide which direction(s) it should take.

Environment Delivery

Provisioning the virtual template for the proving exercise would be simple. Firstly, all of the company applications are web based, so that was the simple part, but there was the small matter of one specific application requiring Java! Office applications were an additional requirement, not only for office working but as an integral part of the interface outputs of the company software (mail merge, payroll reporting extracts, etc.); again, this was made much easier by the existing use of Office 365 and the “office” accounts we have.

Administrator account

The virtual machine would be constructed on the server under an admin account (no Microsoft sign-in would be necessary); when the virtual is requested by a user, it’s the user’s access level and rights from the domain that are built into the delivered desktop.

Java would be as happy as the proverbial pig in muck at that level, and the company application would run fine making user-level requests to the Java development kit.

Office, again installed as local admin, would inherit the domain user account (fred.blogs@company.local).

System preparation

Sysprep under Windows 10 has its pitfalls: firstly, there are issues with the modern apps that cause the process to fail; secondly, part-updated versions of those apps can be a problem; and finally, there are additional flags that benefit the build of the VHD file for use.

The best method I discovered was to grab the latest available build (and run Windows Update), then use PowerShell to strip the OS down to the necessary bones, and finally install Java and Office.
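As a flavour of that stripping step, the provisioned modern apps can be removed with the built-in Appx cmdlets; a minimal sketch, run from an elevated PowerShell prompt, where the Candy Crush filter is just an example:

    # Remove the installed app for the current profile (example filter)
    Get-AppxPackage *candycrush* | Remove-AppxPackage

    # Remove it from the provisioned image so new profiles don't get it either
    Get-AppxProvisionedPackage -Online |
        Where-Object { $_.DisplayName -like '*candycrush*' } |
        Remove-AppxProvisionedPackage -Online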

Sysprep then ran through without issue, creating a solid basic operating system to work with and, more importantly, without my having to go through the error logs in the Panther directory working out what to delete next. The update is important; otherwise, trying to work out how to eliminate Candy Crush Soda Saga once it’s part-updated could drive you to a stay in hospital for recovery purposes!

The flags for the build were /oobe, for out-of-the-box, so the virtual would build itself from scratch. The /mode:vm flag is essential for making construction of the VM easy, and forget about /unattend files: with the broker server you can specify that no unattend file is needed, so the VM will build without prompting the user or checking the platform, provided you’re satisfied that users will use appropriate equipment.
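Pulled together, the sysprep invocation looks roughly like this; a sketch, in which the /generalize and /shutdown switches are my usual additions for an image capture rather than something mandated above:

    # Run from an elevated prompt on the template machine
    C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /mode:vm /shutdown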

 

End results from the test

To be honest, even with this “two bob” cheap solution there’s a lot to admire in the workings of RD Web. Beyond a basic computer there’s very little additional setup needed on the client side.

Just visiting the secure HTTPS URL, and the fact it only responds to valid domain users, got the approving nod from those worried about security, and the delivery and creation of the virtual machine was complete in under a minute, including the various login prompts (remember, we’re using a single laptop to represent a three-server delivery).

A colleague tried the system with a Samsung Galaxy phone; the big drum roll of delivering a Windows desktop environment to another platform was under way. After the download came the snag of what to do with the file, as there wasn’t a native app available on that phone.

Simply downloading the Microsoft Remote Desktop app from Google Play (it’s free) was the solution, and presto: one Windows 10 machine running on an Android phone! Again just a test, but a valid one, proving that RD Web is cross-platform, and that with a more realistic device such as a tablet users can access the systems easily. We also downloaded the same app on an iPad to test, and had RDS capability on the other popular platforms, no longer tying users down to having a PC available.

Using ‘shortcuts’ made the proving task easier; for example, the server was put on the domain along with the other user machines, to reduce the work involved in IP configuration, cross-domain verification and port opening. Thanks to that, the IP address of a generated virtual machine is on the same section of the network as the other physical users.

All in all, a positive result from a few hours’ work, and at very little cost.

What next ?

Seeing is believing, and with the system demonstrated to management and nods of approval given, the next stage is to advance the delivery of the virtual machines and designate specific ones to users (easily done through the domain controller).

We can also create alternative pools of machines for user groups such as the service team, management and so on.

The double-check of running the system on the Mac platform will be obligatory, no doubt, but basically, having a simple solution that you can demonstrate and, more importantly, build on has to be the way forward for IT directing company infrastructure, without risking budgets or wasting costs. To quote someone:

 

“If you build it, they will come”