Recently, one of our Remote Desktop Reporter customers, a Managed Service Provider (MSP), shared an interesting use case with us about how they planned to put the software to work.
This conversation, along with others we were having at the time, led to two key insights into how MSPs can reduce operating costs: by measuring server-based computing (SBC) metrics, and by minimizing VM images.
Problem: Peak Loads On SBC Platforms
This particular MSP offers Citrix XenApp server-based computing environments to its existing customer base, and leverages Amazon's EC2 platform to host the virtual machines running Citrix XenApp and other applications.
Within Remote Desktop Reporter, there is a subset of reports that track peak concurrent Remote Desktop Services and Citrix session counts. Since Remote Desktop Reporter can show these concurrent user metrics at a variety of time intervals, such as hourly or daily, an MSP can quickly determine the times of "peak load" for their SBC platform.
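To illustrate the kind of metric involved, here is a minimal sketch (not Remote Desktop Reporter itself) of how peak concurrent sessions per hour could be computed from raw login/logout timestamps. The session data is invented for the example:

```python
from datetime import datetime, timedelta

# Invented sample data: (login, logout) times for four RDS/Citrix sessions.
sessions = [
    (datetime(2013, 5, 6, 8, 15), datetime(2013, 5, 6, 11, 40)),
    (datetime(2013, 5, 6, 8, 50), datetime(2013, 5, 6, 17, 5)),
    (datetime(2013, 5, 6, 9, 10), datetime(2013, 5, 6, 12, 0)),
    (datetime(2013, 5, 6, 13, 30), datetime(2013, 5, 6, 16, 45)),
]

def peak_concurrent(sessions, start, end, step=timedelta(minutes=1)):
    """Peak number of simultaneously active sessions in [start, end),
    sampled at one-minute resolution."""
    peak, t = 0, start
    while t < end:
        peak = max(peak, sum(1 for s, e in sessions if s <= t < e))
        t += step
    return peak

# Peak load for each hour of the business day:
for hour in range(8, 18):
    bucket = datetime(2013, 5, 6, hour)
    print(bucket.strftime("%H:00"),
          peak_concurrent(sessions, bucket, bucket + timedelta(hours=1)))
```

Rolling the same calculation up by day or by week reveals the recurring "peak load" windows the MSP cares about.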
Solution: Intelligent Scaling and Scheduling of Server Instances
This particular MSP planned to use the historical concurrent user data to intelligently schedule the scaling of server instances running XenApp or equivalent.
By studying historical usage trends and putting a scheduled scaling program in place, the MSP can immediately decrease cloud computing costs by running only the instances they need, when they need them, to serve their concurrent users. Now that is smart.
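To make the idea concrete, here is a small hypothetical sketch of turning hourly peak-user data into a scaling schedule. The capacity figures (users per instance, headroom factor) are assumptions that an MSP would replace with their own measurements:

```python
import math

# Hypothetical hourly peak concurrent-user counts taken from historical reports.
hourly_peaks = {8: 40, 9: 95, 10: 120, 11: 118, 12: 80,
                13: 90, 14: 110, 15: 105, 16: 70, 17: 30}

USERS_PER_INSTANCE = 30  # assumed capacity of one XenApp instance
HEADROOM = 1.2           # keep 20% spare capacity above the historical peak

def scheduled_capacity(hourly_peaks):
    """Map each hour to the number of instances that should be running."""
    return {hour: max(1, math.ceil(peak * HEADROOM / USERS_PER_INSTANCE))
            for hour, peak in hourly_peaks.items()}

for hour, count in sorted(scheduled_capacity(hourly_peaks).items()):
    print(f"{hour:02d}:00 -> {count} instance(s)")
```

A schedule like this could then drive time-based scaling actions in the cloud provider's auto-scaling service, starting and stopping instances on the clock rather than running a fixed fleet around it.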
Another Problem: VM Image Waste
Typically, in an SBC environment such as a Citrix XenApp farm, different classes of users often have very different needs in terms of the applications they will run in a typical session. In private clouds or on-premise networks, the typical solution is to create multiple virtual machine images that serve different groups of users - regardless of whether or not the underlying technology is oriented towards VDI, SBC, or a hybrid approach.
However, this approach has very costly implications when leveraging a large cloud-based IaaS provider like Amazon EC2 or Windows Azure. Maintaining separate images for different classes of users greatly limits how far scheduled scaling based on historical load data can reduce your instance count.
For instance, you may have a set of programs that need Java support and other dependencies, which cannot coexist with other core applications on the same XenApp or RDS image. This means that you must always have a subset of specialized instances running to serve users with special application needs, which increases computing costs and reduces the potential for economies of scale.
Solution: Minimizing VM Images
Thanks to groundbreaking new technology from FSLogix, there is a way for MSPs to create a single master image for all classes of remote desktop users. Since FSLogix Apps uses per-user policies to dynamically hide or reveal specific applications to specific users at the file-system level, MSPs can move away from the traditional "siloing" technique of creating different types of images for different types of applications.
As a result, the economies of scale from scaling virtualized server instances up and down can be more fully realized.
Measuring historical utilization and minimizing images is a surefire recipe for greater profits and reduced complexity. In the super cost-sensitive world of the MSP, both must be done to remain competitive.