VDI Session reliability

No matter how utopian the idea of VDI, the harsh reality of remote users, and the networks they work on, can cause issues for VDI delivery and use. Taking the real world into consideration should be part of your VDI delivery plan when implementing a solution.

VDI clients working on overloaded networks, or remotely over WiFi, will be prone to dropouts. So how does VDI handle a dropout? It's worth knowing that one of the biggest issues I've come across is multiple sessions left in use, usually caused by packet loss and high latency on slow connections to the server: a recipe for bottlenecked processes. All of this leads to server overload, with the ghost/dead sessions making excessive use of server memory and CPU.

For that reason I'm more in favour of the RD Web RDS service, as it restricts what remote users can actually do to harm your VDI operation in such events.

The advantage of the RDS 2012 system is that it uses Microsoft's standard Remote Desktop technology to connect the VDI client to the server(s) and services.

This means that when a disconnection occurs, the system makes up to a maximum of 20 reconnection attempts to re-establish the connection, and the user cannot attempt another connection until those attempts succeed or fail.

There's a visual warning and a display of the reconnection attempts while the "dropped" session awaits reconnection. Should all 20 attempts fail, the session is flagged as disconnected, and the server housekeeping removes the dead session in the background.
At that point the user can pick up another session from the broker server.
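If you need to adjust that client-side behaviour, the reconnection settings live in the .rdp file itself. A minimal sketch, using the standard RDP file settings (the values here simply mirror the defaults described above):

```
autoreconnection enabled:i:1
autoreconnect max retries:i:20
```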


Citrix deals with dropouts differently. XenApp/XenDesktop sessions are controlled by the Receiver, which will basically keep on trying to reconnect the session no matter how long it's been disconnected. But that is where the issue lies with using Citrix 'without a leash'.

A user who experiences a dropout while working remotely will be tempted to fire up another session to continue working, whilst in the background the Receiver is wrestling to re-establish the lost connection; when that connection is established, it's presented back to the originating user.

The issue is that if the VDI is being used to access other software on other servers, and the user has kicked off multiple sessions in that software, loss of the client on their side may not terminate the background sessions on those servers, leaving dead sessions that can consume a lot of the other servers' resources. So again the latency build-up becomes an issue.

Depending on user stubbornness, this could easily lead to a crippling overload of a server if session dropouts are frequent. Although Citrix utilises Thinwire technology to improve matters on high-latency networks, it can cause "nasty side effects" over the network if not configured appropriately.

A correct Citrix setup/configuration should, by policy/rules, have disconnect timeouts set for situations like these; idle timeouts are also recommended. Also raise users' awareness of dropouts and what to do, so that they don't end up causing your IT team more issues.
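Citrix session limits are set through its own policies, but the same idea on the RDS side can be pushed out through Group Policy. A sketch in PowerShell of the registry-backed policy values (the timeout values are examples only; verify against your own GPO before relying on this):

```powershell
# Session time limits for RD Session Host, as enforced by Group Policy.
# Values are in milliseconds.
$key = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services'
New-Item -Path $key -Force | Out-Null

# End a disconnected session after 30 minutes
Set-ItemProperty -Path $key -Name 'MaxDisconnectionTime' -Value 1800000 -Type DWord

# Disconnect an idle session after 2 hours
Set-ItemProperty -Path $key -Name 'MaxIdleTime' -Value 7200000 -Type DWord
```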

RDS makes for an easier solution, without as much configuration needed for dropout handling; waiting for the green light from the server before proceeding helps to eliminate problems with dropped connections. Citrix, on the other hand, requires some consideration in handling such events.

Not that Citrix has inherent issues. I've seen customer setups where Citrix is as solid as a rock within their own network, perfect for delivering a standard painted-desktop solution; the issue lies in not planning for remote workers on external networks.


Windows 10 Phone

The leap of faith

Although I used the Technical Preview for PC, I stayed away from the phone version of Windows 10 due to the incompatibility issues with the older Nokia phones; after all, I didn't want to spend a fortune on a new phone to trial an "in test" version of the OS, as it was at that time.

I'm a big fan of the Technical Preview concept; it is an ideal way to approach a software release, putting ideas into pre-release testing and allowing users their say. This all makes for a solution tailored to users, but as Windows 10 was supposedly the same across platforms, I didn't see the point of double-testing both phone and PC.

But, as no doubt others have found, the apps in the store for Windows Phone 8 are now becoming dated, in that newer features are not being brought into them. Obviously Microsoft is concentrating its focus on the Windows 10 platform, one way or another.

So the time was right to look at upgrading the phone, and since I've been using the Windows 10 platform on PC for over a year, it seemed right to take the leap and go for a Windows 10 phone: it ensures that the apps I use are up to date and, after all that, I'm hopefully using a system that will be around for a while for mobile/smartphone requirements.

Makes and models

Thankfully there's a wide variety of Microsoft phones available to choose from, and they are not all £599.00 mega-phones in both features and price.

Tempted as I was to get the 950 XL, being a phablet fan, pure economics kept me away from that model. The 640 was within reach, and the other half uses and adores that model, but that 2015 release does not come with Windows 10 installed (you can upgrade), and I wanted the "out of the box" experience with 10, avoiding any OS-update wrestling that might ensue.

So the Lumia 650 came into view: not a hole-in-the-pocket burner like the top-of-the-range phone, but a Windows 10 version of the 640, with a smaller screen than I'm used to. I thought it a better option than the 550 entry model, as after all I have to use the phone for both work and home.

Additionally, I have some NFC work coming up (I've still been using Android for that purpose). So when the opportunity arose to obtain the Lumia 650 at a reasonable price, I grabbed hold of the handset, and off we go into Windows 10 Mobile.

The road ahead

The following Windows 10 phone articles will look at the changes: not just the layouts of settings screens, but features added to (and taken away from) the platform. This is where the usual "bring back the old version" protests begin, but so far the changes I've encountered seem valid and make sense. Change is inevitable, and for a platform to succeed you can't always build on the ashes of what remained. The biggest hurdle to taking advantage of Windows 10 has been users' staunch insistence that their old peripherals and old drivers must stay compatible with a newer, more efficient OS, keeping them on old versions. Embracing the change can open up a better OS experience and allow you to work in a truly innovative environment.

VDI Considerations


After doing the initial proofing of VDI, it's worth pointing out some important stages for consideration when making the system live; ignoring these stages can lead to the implementation of such a project failing with its users.


The user experience and efficiency.

The design and delivery of the VMs is important: try to deliver what the user expects from their desktop, not only the software but the appearance too. This can save time on training, and dramatically reduce the onslaught of questions such as "where do I find this?".

There will be differences in "the way the client works"; anticipating what those are will again provide a smooth, less stressful transfer to working with VDI.


BYOD and the end-point. VDI allows a lot of diversity in the devices a desktop experience can be delivered to. Stand your ground and ensure that you clearly state what type of device and specification is required (publish a list of supported devices).

For example, yes, RD Web and VDI will work on an Android platform, but attempting to run RDP on an 8 GB phone with a 5-inch screen just isn't practical, although it may be suitable for vApps.

Also ensure you know who is accessing the device, and that they are following corporate security policies and procedures. The link with domain control is essential for validating users



Capacity requirements.

Ensuring your infrastructure is adequate and can cope with a VDI deployment is essential; keeping graphics, memory and storage requirements to a minimum when creating the VM helps to keep the delivery and use of VDI in line with user expectations.

Then there's the term "boot storm": a point where a large part of the workload is demanded at once (such as all VDI users logging in at the same time), so large that the spike in usage can cripple a storage area network (SAN) that is not prepared for such an event. So ensure your network speed and storage are up to the task.

Servers with solid-state disks (SSDs) or onboard flash cache may help absorb spikes in I/O. By monitoring the I/O and controller metrics of the broker, administrators can make decisions that determine the performance specifics of their storage system.
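A quick way to see whether a login peak is stressing storage is to sample the standard Windows performance counters on the host during a busy period; a minimal sketch (the output path is an example only):

```powershell
# Sample disk, memory and CPU pressure every 5 seconds for a minute,
# and save the results for later comparison in Performance Monitor.
Get-Counter -Counter @(
    '\PhysicalDisk(_Total)\Avg. Disk Queue Length',
    '\PhysicalDisk(_Total)\Disk Reads/sec',
    '\PhysicalDisk(_Total)\Disk Writes/sec',
    '\Memory\Available MBytes',
    '\Processor(_Total)\% Processor Time'
) -SampleInterval 5 -MaxSamples 12 |
    Export-Counter -Path 'C:\Temp\bootstorm.blg' -FileFormat BLG
```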


Virtualization is the key

A solid management plan can make all the difference to the success of your VDI deployment. For example, golden images and server provisioning help administrators quickly provision resources to VDI workloads. Similarly, hypervisor VM redundancy capabilities (how VMs fail over if there is an event) allow you to create powerful high-availability and resiliency policies. Remember, today's VDI ecosystem is a lot smarter: you can now set specific alerts and monitoring thresholds to find issues before they become major problems.

Workloads that require greater levels of insight and visibility will want agent-based AV technologies deployed. However, if you've got a kiosk running a locked-down VM with little user interaction, a simple agentless solution can help. Data loss prevention (DLP), intrusion prevention systems (IPS) and intrusion detection systems (IDS) each provide granular security controls around VDI. For example, if Social Security numbers are being transferred within a network against protocol, these types of system can catch it and stop it from happening.


RDS Pet project

Usually the most work I do with Windows Server is the creation and maintenance of file and IIS servers on private cloud/hosted platforms, to deliver the SaaS software that the company I work for provides to its clients. But an opportunity arose to perform a proving exercise for disaster recovery, which I took on as a pet project (no funding, no real project time allocated) that involved Remote Desktop delivery.

We had previously used Terminal Services and found it cumbersome and restrictive for the users' purposes, so some caution was expressed over trying a similar project that posed some restrictions. But, as they say, where there's a will, there's always a way.

The task

Firstly, getting management to understand the benefits of RD working; secondly, and more importantly, although we use cloud heavily, reducing costs to a minimum in proving the concept; and finally, presenting the advantages of RD as a solution.

Several of our clients have a thin-client/painted-desktop solution, and the one thing I've noticed is the slowness of its operation. The objective in my mind was not to present another 'looks nice but ruddy awful to work with' platform, as we had with Terminal Services, but something more adaptable that suited the needs of not one part of the company but several: operations, implementation, IT support and management.

1. Advantages of RD Working

Providing equipment to users who can work from home was becoming a logistical problem: firstly procuring the equipment, followed by configuring the software for the user, then locking down access both at user level and with BitLocker, so as to protect against theft and ensure data security.

So when you inform management that users can safely work from home on their own machines without risking a security breach, management becomes interested. Follow that up with reduced equipment costs, and the fact that implementing RD Web would allow BYOD safely, and they were hooked. The big clincher was the Mac users: with the usual cached-credential problems, and having to run Office 365 as a solution for them with all of its drive-mapping problems, offering a workable solution and reducing IT time on that platform would be a big advantage.

2. Costs of proving

We have a cloud hosting provider, but commissioning a machine, operating system(s) and storage is an unrecoverable R&D cost, which I didn't want to waste without knowing how successful the RD solution would be.

Thankfully Microsoft TechNet came to the rescue, with a fantastic 180-day evaluation of its 2012 R2 Server. So the cost of the operating system is gone for the evaluation of the project; even better, once all is proven it's a simple matter of buying the licence and, presto, a fully working copy.

Machinery-wise, being on cloud means there's a minimal amount of physical hardware available for testing, and not wanting to run up a cost for a machine that might be scrapped (again wasting money and resources), a physical server to test on was decided on. Believe it or not, the acting server for brokering, virtual machine storage and running was an i3 laptop (of all things!) with 8 GB of memory that was scheduled for junking; not a heavyweight server by anyone's means. But as the story has a happy ending, you'll be glad to know that it held its own.

As for licensing of the virtual machines, again the cost of licences would only be a burden on RD at this early point, so the 'risk' of running with Microsoft's Windows 10 Technical Preview seemed an ideal solution.

The ISO is available for free download, there'd be no need to upgrade a base OS in a proving exercise, and it's a solid platform for running all of the company's software requirements: another perfect solution for evaluation.

3. Time and effort

The next big hurdle was the amount of time to spend on this project; yes, there are quiet moments at work, but working in IT/application support they are few and far between. The whole proving exercise was completed in lunch breaks and the last part of a Friday afternoon, so the work committed to the project came at minimal cost in company time; after all, I was keen to prove the project worked. Passing the finish line without any excessive cost or time lost was just an added boon.

The Solution

The idea, so as to simplify the testing exercise, was to put the broker server, virtual hosts and link to Active Directory all on the one machine. This machine would be placed on the user-level domain of the company (we have several, both direct and non-direct internet-facing). This would have the advantage of reducing the setup configuration of the virtual machines, as they would all be generated on the user domain and, through the association with Active Directory, automatically pick up the policies that the company directs.

Server 2012 R2 has to be one of the most clearly laid-out operating systems I've had the pleasure of using. Future articles will explain why, but the configuration of Internet Information Services (IIS) and the RD brokering was simple. There's even a quick option (a wizard) to do all of this for you, but following the usual guide gives you an appreciation of how you could incorporate other separate servers to deliver a full service without overloading a single server.
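For reference, the same single-machine deployment can also be scripted with the RemoteDesktop module that ships with Server 2012 R2; a sketch, with a hypothetical server name standing in for all three roles:

```powershell
# One box playing connection broker, RD Web Access and virtualisation host,
# exactly as in the proving exercise above.
Import-Module RemoteDesktop

New-RDVirtualDesktopDeployment -ConnectionBroker   'rds01.company.local' `
                               -WebAccessServer    'rds01.company.local' `
                               -VirtualizationHost 'rds01.company.local'
```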

The next part was the stop-and-think stage: access and security for users. Users of RD services don't have to be domain users, but if you're opening doors to your systems you want to know who's coming in. A decision had to be made for the proofing: do we show off how flexible this method of access can be, or do we stay with a limited delivery and let management decide to take it further once the appetite has been whetted?

Originally there had been one concept for the RD Web service, but now other possibilities were coming into view which had not been considered:

  1. Disaster recovery – total failure of the office systems
  2. BYOD/flexible/home workers – the ability to access the company network with no worry over security
  3. External support to clients – the ability to join a client's network for support purposes, without any fear of passing on malware/viruses, or of picking one up from the external company
  4. Machine client app hosting – in addition to the company SaaS solution there are additional products; RD offers the opportunity to host desktop machine access and applications on clients' behalf

The little idea was beginning to grow; luckily, taking a breath put both feet back on the ground, and a basic core delivery system would be the plot of the test from then on. Once we had a working model, others could decide which direction(s) it should take.

Environment Delivery

Provisioning the virtual template for the proving exercise would be simple. Firstly, all of the company applications are web-based, so that was the simple part, but there was the one matter of a specific application requiring Java! An additional requirement for Office applications was necessary, not only for office working but as an integral part of the interface outputs of the company software (mail merge, payroll reporting extracts etc.); again this was made much easier by the existing use of Office 365 and the "office" accounts we have.

Administrator account

The virtual machine would be constructed on the server under an admin account (no inclusion of a Microsoft sign-in would be necessary); as the virtual machine is requested by the user, it's the user's access level and rights from the domain that are built into the delivered desktop.

Java would be as happy as the proverbial pig in muck at that level, and the company application would run fine from user-level requests to the Java Development Kit.

Office, again installed as local admin, would inherit the domain user account (fred.blogs@company.local).

System preparation

Sysprep under Windows 10 has its pitfalls: firstly, there are issues with the modern apps that cause the process to fail; secondly, part-updated versions can be a problem; and finally there are additional flags that benefit the build of the VHD file.

The best method discovered was to grab the latest available version (and run updates), then use PowerShell to strip the OS to the necessary bones, and finally install Java and Office.
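The PowerShell strip amounts to removing the provisioned modern apps before Sysprep runs; a minimal sketch (removing everything is heavy-handed, so in practice filter for the apps you actually want gone):

```powershell
# Run elevated inside the template VM. Part-installed or part-updated
# store apps are the usual cause of Sysprep failures, so clear them out.
Get-AppxPackage -AllUsers | Remove-AppxPackage -ErrorAction SilentlyContinue

# Also remove the provisioned copies, so they don't come back for new users.
Get-AppxProvisionedPackage -Online |
    Remove-AppxProvisionedPackage -Online -ErrorAction SilentlyContinue
```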

Sysprep ran through without issue to create a solid basic operating system to work with and, more importantly, without having to go through the error logs in the Panther directory working out what to delete next. The update is important: otherwise, trying to work out how to eliminate Candy Crush Soda Saga once it's part-updated could drive you to a stay in hospital for recovery purposes!

The flags for the build were /oobe for "out of the box", so the virtual machine would build itself from scratch, and /mode:vm, which is essential for easy construction of the VM. And forget about unattend files: with the broker server you can specify that no unattend file is needed, so the VM will build without prompting the user or checking the platform, provided you're satisfied that users will use appropriate equipment.
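Put together, the Sysprep run looks something like this (the /generalize and /shutdown flags are my additions for a template that gets captured afterwards):

```powershell
# Generalise the template ready for capture; /mode:vm skips hardware
# re-detection and is only valid when redeploying to the same virtual hardware.
& "$env:SystemRoot\System32\Sysprep\sysprep.exe" /generalize /oobe /shutdown /mode:vm
```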


End results from the test

To be honest, even with this "two bob" cheap solution, there's a lot to admire in the workings of RD Web. Over and above a basic computer, there's very little additional setup on the client side.

Just visiting the secure https URL, and responding only to valid domain users, got the approval nod from those worried about security, and the delivery and creation of the virtual machine was complete in under a minute, including the various login prompts (remember we're using a single laptop to represent a three-server delivery).

A colleague tried the system with a Samsung Galaxy phone; the big drum roll of delivering a Windows desktop environment to another platform was under way. After the download came the snag of what to do with the file, as there wasn't a native app available on that phone.

Simply downloading the Microsoft Remote Desktop app from Google Play (it's free) was the solution and, presto, one Windows 10 machine running on an Android phone! Again a test, but a valid one, proving that RD Web is cross-platform, and with a more realistic device such as a tablet users can access systems easily. We also downloaded the same app on an iPad to test, and had RDS capability on the other popular platforms, no longer tying users down to having a PC available.

Using 'shortcuts' made the proofing task easier; for example, the server was put on the domain along with the other user machines, to reduce the work involved in IP configuration, cross-domain verification and port opening. Thanks to that, the IP address of the generated virtual machine is on the same section of the network as the other physical users.

All in all a positive result from a few hours' work, and very little cost.

What next ?

Seeing is believing, and with the system demonstrated to management and nods of approval given, the next stage is to advance the delivery of the virtual machines and designate specific ones to users (easily done through the domain controller).

Create alternative pools of machines for user groups such as the service team, management etc., as sketched below.
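A sketch of what one pooled collection per user group might look like with the RemoteDesktop module; the collection, template and group names are all hypothetical, so check the parameters against your own deployment:

```powershell
# Pooled collection for the service team, built from the sysprepped template.
New-RDVirtualDesktopCollection -CollectionName 'ServiceTeam' `
    -PooledManaged `
    -VirtualDesktopTemplateName 'Win10-Template' `
    -VirtualDesktopTemplateHostServer 'rds01.company.local' `
    -VirtualDesktopAllocation @{ 'rds01.company.local' = 5 } `
    -StorageType LocalStorage `
    -UserGroups 'COMPANY\Service Team' `
    -ConnectionBroker 'rds01.company.local'
```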

The double-check of running the system on the Mac platform will no doubt be obligatory, but basically, having a simple solution that you can demonstrate and, more importantly, build on has to be the way forward for IT directing company infrastructure, without risking budgets or wasting money on costs. To quote someone:


“If you build it, they will come”




Cloud myths or considerations

First of all, I'm not a techie Luddite. I'm a huge fan of cloud services and use them regularly both at home and for work purposes. But I don't believe in using cloud for cloud's sake either. Often cloud is glamourised by IT, salespeople and the press into being the best thing since sliced bread.

On the whole, cloud is advantageous for several reasons: sometimes costs, simplification etc., especially now that working flexibly is becoming more and more of a requirement for companies and clients alike. But there are oversights that can make people behave like the rumoured suicidal lemmings: one goes, they all go.

All this article intends to do is point out some all-too-obvious glaring oversights in venturing into the cloud; in short, concentrate on the journey rather than just the destination.

What’s your strategy?

Firstly, do you have one? Many companies still don't have a cloud strategy, and the default is often (stated or not) that they are just doing what their CEO wants. Sometimes the CEO has actually dictated that the cloud is the strategy (without a connection to an actual business goal). Not unlike other examples of "airline magazine syndrome", hype and unrealistic expectations are often behind the interest. This is not a cloud strategy, and it is often based on one or more of the myths outlined in this article.

Assess the need is the simple answer; it's all too easy to bung something on cloud as a solution when you could actually find cheaper alternatives (e.g. hosted services, virtualised solutions etc.).

The usual fear within internal IT is that moving to cloud lessens their workload, and jobs feel under threat. Simply put, that is not the problem: cloud providers seldom have the working knowledge and manpower to replace internal IT. If anything this is an additional string to the bow, expanding work responsibility and keeping the cloud operationally at full functionality.

So ensure that client IT don't think they can sit back and enjoy the ride, or that you can pass everything on to the provider; a good cloud strategy will rely on cooperation with the provider(s).


Cloud is about money

All too often the financial considerations are not part of the IT decision process, especially when that process concerns cloud computing. The myth is that cloud always saves money; that is not always the case, and bad decision-making without financial forethought contributes highly to cloud costs.

Often "what we have, and can it be put in the cloud" is the overall viewpoint, with no thought given to expansion and the additional costs of OS, storage and connection services as the customer base grows. Check the financial implications of the switch from the capital expenditure (capex) of what you have to the operating expenditure (opex) of the expansion requirements once you move to cloud.

If you're providing a private cloud, or using a hosted solution, what have you done to cater for clients' future choices to change provider, amend/expand the network configuration on their site, etc.?

The simple fact is: don't assume you will save money unless you have done the hard work of honestly analysing all of the situations. Utilise total cost of ownership and other models on a case-by-case basis, and segment cloud into use cases (see the next section).

It could well be that to gain the real advantages you require from cloud, you have to look beyond cost issues, and therefore provide a more "expensive" solution to get the working solution required.

The sole vendor scenario

With any new IT implementation it is all too common to fall into the easy option of putting all your eggs in one basket. But cloud computing is not one single concept; there are multiple solutions spanning several concepts or models, such as SaaS, DaaS, even just simplified cloud storage.

Although most cloud providers will say they will cover all bases on your behalf, financially you could be handing out free money for something that can be done more cheaply by another provider, or alternatively by yourself.

This should be looked into thoroughly. For example, SaaS solutions delivered from external "hosted" servers for clients will effectively save you storage costs and end-user/client IT costs. Again look to expansion: with the provisioning of more storage, additional servers etc., additional OS and other hosting costs will apply.

But your own infrastructure could be put on Azure to provide SaaS internally or for smaller client bases, and to run DaaS models providing a work-anywhere scenario, which you could maintain internally without paying external hosting providers dead money for capacity and services that you don't use all the time.

Inter-operational cloud is not impossible; a cloud strategy should be based on aligning business objectives and getting benefits from them. Those goals and benefits differ between use cases and should be the driving force for businesses, rather than any attempt to standardise on one offering or strategy.


If you're not cloud, you're history

Looking at cloud from both sides, or "cloud rinsing", the habit of calling something cloud that isn't (e.g. a hosted service), is another common myth.

Basically, call a spade a spade: there are other capabilities (e.g. automation, virtualisation) and characteristics that can be good and have no need to be "clouded". Let these strategies be if they work for you and the client, and avoid misplaced expectations.

For example, private cloud hosted services may make use of a secure connection to assure privacy/security; this link could be point-to-point between companies, so any opportunity for cloud-style access may be prohibited unless connectivity is taken into consideration.


Virtualisation fog / use cloud for everything?

One term bandied about is hybrid cloud, and in reality it's the better solution. A percentage split between services that benefit from being on cloud and services maintained where they are is often the reality left once your objectives have been assessed.

Where services are kept in situ because cloud is not suitable, then perhaps virtualisation is an alternative. Virtualisation is a commonly used enabling technology for cloud computing, but it is not the only way to implement it (there is established SaaS, while newer approaches such as containerisation are gaining traction).

Not only is it not necessary, it is not sufficient either. Even if virtualisation is used (and used well), the result is not cloud computing. This is most relevant in private cloud discussions, where highly virtualised, automated environments are common and, in many cases, are exactly what is needed. Unfortunately, these are often erroneously described as "private cloud".






Which tablet is really right for your business?

I once heard a business owner say that they equip all of their staff with iPads: "it's Apple and the best thing". Okay; after the grinding of teeth I let him demonstrate how he uses his tablet for business.

To be fair, I was impressed to see Office on there (a fairly standard and essential item of business software) but was slightly concerned to see his iTunes music collection and various games. The response, "kids eh?", was rather more concerning, to be honest!

If you are going to do business work on a tablet, the iPad may be fine for your needs, but I would prefer a Windows tablet any day, in my humble opinion. Apple still seems to be one-size-fits-all: even though there are various sizes of iPad, it's either that or a MacBook if you need the keyboard.

Windows offers a wider variety of tablets, including the Surface and the 2-in-1s which can take on both roles, so you can get a device more suited to your needs, from light/casual right up to heavy processing.

In addition to that, here are some other advantages I noted:

Encryption of data.

I've lost count of the number of times I've seen the "this person stole my phone" message on social media, where the culprit has a photo taken of themselves and broadcast on the internet. Granted, tablets are slightly different, but the question of security has to be the most important.

Windows 10 enables you to encrypt files to protect them from unauthorised access, but it also allows for encryption of entire drives, meaning that should the tablet be lost or stolen, it's a worthless item to the person who took or found it, and more importantly your data is secure.
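Drive encryption is a couple of clicks in Settings, but it can also be scripted; a minimal sketch using the built-in BitLocker cmdlets, assuming the tablet has a TPM:

```powershell
# Add a recovery password first, so the drive can still be unlocked manually
# if the TPM ever refuses, then encrypt the system drive.
Add-BitLockerKeyProtector -MountPoint 'C:' -RecoveryPasswordProtector
Enable-BitLocker -MountPoint 'C:' -UsedSpaceOnly -TpmProtector

# Note the recovery password down and store it away from the tablet.
(Get-BitLockerVolume -MountPoint 'C:').KeyProtector
```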

Shared device

User Account Control has existed in Windows by default since Vista, and account control is a handy way to allow Tom, Dick and Harry each to treat a shared tablet as their own personal one.

Granted, there are additional apps that give the iPad some account control, but not everything is really controlled. Windows account control has some real advantages: accounts can be Microsoft-enabled to download apps etc., or standard local users, and in addition there's the administrator versus standard user choice, securing your tablet from having apps and the like loaded onto it without your knowledge.
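Setting up those separate standard accounts can be done in Settings or, if you prefer, in PowerShell with the local accounts module that ships with Windows 10; the account name here is hypothetical:

```powershell
# Create a standard (non-administrator) local account for a family member.
$password = Read-Host -AsSecureString -Prompt 'Password for Tom'
New-LocalUser -Name 'Tom' -Password $password -FullName 'Tom' `
              -Description 'Shared tablet user'

# Standard users can run apps but cannot install system-wide software.
Add-LocalGroupMember -Group 'Users' -Member 'Tom'
```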


Multi-tasking?

Okay, I'll go into this further in a later article, but Windows tablets feature Snap, allowing you to work with two programs side by side.

Change the mode of the tablet to desktop mode (a simple option) and you can snap four windows. Use the multiple-desktops option from the taskbar (in desktop mode) and you can have another desktop and snap all over again. It gets even better: you can transfer active windows between the multiple desktops.

To a degree, the "I need multiple monitors" argument goes out of the window (pardon the pun).


File Management

Since the dawn of time... okay, well not really, but the Windows file manager has been around for a while through the various incarnations of Windows; there's no need to download and learn an extra app as you'd have to with the iPad. Once again I've sung its praises, but the offering of OneDrive cloud storage allows you access to data both on- and offline too.


Legacy browsing

Like it or not, the need-for-speed browser options sometimes fall flat when you're using apps that have certain requirements. Yes, Microsoft is pushing the Edge browser, but like Chrome, which is now removing its add-ins, you're limited in the extras that may be required.

IE 11 is still available in Windows 10, as it has to cater for the enterprise market. Java and ActiveX, although often regarded as back doors for intrusion, are still widely used by commercial software (they're also secure enough if you adjust the settings, trust me!).

You can use both under Windows with no fuss, so you have the need-for-speed option and the good old faithful as and when needed. It's a nice option too: if you're using Edge and come up against a "wall", take the option to open the page in IE and the hand-over begins.

Utilising Cloud IT

Utilising "Cloud IT" and having a well-supported IT system makes for a much easier, faster and smoother-running IT system for small businesses. Also, with the current price war among cloud providers, it makes economic sense.

By utilising cloud computing and the advantages of SaaS, companies big or small can run software systems equivalent to those that large companies do, without a huge bill for expensive hardware or a dedicated in-house round-the-clock IT team.

In the past, the majority of small businesses could only afford cheap servers and software that were less reliable and secure. In events like the system going down, the harassed IT guy might take hours to rectify the issue and get the system back up and running.

Small businesses can now look elsewhere with the advantage of cloud, replacing old in-house servers and subscribing to cloud-based systems for not only their basic IT needs (Office 365, email etc.) but their vertical software requirements too.

What was once an expensive investment to deploy can now be at your fingertips for a low-cost, simple monthly fee. For small subscription fees, cloud-based services bring SMEs the economies of scale that only huge companies or multinational corporations used to enjoy.

It has now become so affordable, at a fraction of the price it used to be, to give your business the more stable and cutting-edge IT department that the big companies enjoy. This advantage applies not just to email and word processing, but to the necessary back-office chores, from leave management and approval to travel and expense claims and time-tracking, right up to heavy dedicated mainstream applications such as administration, payroll, finance, engineering etc.

The change in computing practices away from the dedicated office, and the growing access to the cloud from everywhere, means harnessing IT (instead of suffering it), essentially giving small businesses a better competitive edge.

Moving your IT department into the cloud means that your IT needs are no longer the domain of just one IT person, but of entire teams in the background. It also doesn't make your internal IT redundant, but gives them more time for other tasks and projects (you still need someone internal to assist with the setup and occasional maintenance/changes to the cloud).

The majority of the large commercial cloud providers, such as Azure, Google and other business-dedicated providers, boast 99.9 per cent uptime, without small businesses having to suffer the headaches, stress and time lost through maintaining an in-house system, as the majority of cloud maintenance is invisible to the end small-business user, carried out outside work hours or swapped over to backup servers while your server is being upgraded etc.

Having access from anywhere, and now via any device, opens up cheap and flexible IT use, all for the price of the subscription cloud service and a good internet connection. Also, by paying via subscription, cloud can be scaled up or down depending on the number of users and the amount of business required (small companies don't need to run payroll every day of the month, for example), so look at subscriptions that offer pay-as-you-use rather than just flat monthly rate charges.

If your existing systems stop you from getting your IT needs fulfilled, then do look seriously at cloud as an option, as cloud applications and IT offer real alternatives and improvements.