The coronavirus (COVID-19) pandemic has placed remarkable demands on IT teams. Citrix admins have been tasked with supporting new business continuity efforts and remote work scenarios. Among the challenges they’ve faced: limited staff and already-stretched data center capabilities. This has led many of our customers to turn to public cloud solutions to expand their computing footprint. Leveraging public clouds for Citrix workloads enables easy capacity expansion, on-demand billing, and improved IT responsiveness.

As new environments come online, we have received many customer requests for guidance on provisioning and power management operations in public clouds. In this blog post, we provide guidance for optimized, at-scale deployments on Amazon Web Services (AWS).

Like public clouds, the Citrix Virtual Apps and Desktops service is an ever-evolving platform. To support the changing nature of cloud connections and allow for improvements over time, we have exposed connection settings inside Citrix Studio to control the rate of API calls for virtual machine creation and updates. To help customers optimize their environments, we periodically benchmark performance results and share tuning recommendations across cloud providers.

A standard customer deployment for high availability distributes workloads across multiple availability zones. To give customers baseline numbers they can use to design and optimize their environments, we tested operations for the following scenario:

  • 1,500 pooled VDAs equally distributed across two availability zones (using two catalogs) in a single region

Please note:

  • We used default tenancy for the VDAs, so they ran on shared hardware.
  • All operations were performed on both catalogs in parallel: each operation was initiated on both catalogs simultaneously and considered complete only when both catalogs had finished it (see the sketch below).
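
For illustration, here is a minimal sketch of that measurement approach. The run_catalog_operation function is a hypothetical placeholder for the actual service call; the point is simply that the clock stops only when the slower of the two catalogs finishes.

```python
import time
from concurrent.futures import ThreadPoolExecutor


def run_catalog_operation(catalog_name: str, operation: str) -> None:
    """Hypothetical placeholder: trigger `operation` on `catalog_name` and
    block until the service reports that it has finished."""
    ...


def time_parallel_operation(operation: str, catalogs: list) -> float:
    """Start the operation on every catalog at once; stop the clock when the
    slowest catalog completes. Returns elapsed minutes."""
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=len(catalogs)) as pool:
        futures = [pool.submit(run_catalog_operation, c, operation) for c in catalogs]
        for f in futures:
            f.result()  # wait for both catalogs to finish
    return (time.monotonic() - start) / 60


if __name__ == "__main__":
    minutes = time_parallel_operation("start", ["Catalog-AZ1", "Catalog-AZ2"])
    print(f"Start all machines: {minutes:.0f} minutes")
```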

To achieve our recommended scale of 1,500 VMs while staying within AWS request quotas, we used the following connection settings for this scenario (a sketch of what these throttles constrain follows the list):

  • Set “Simultaneous actions (all types), Absolute” to 100
  • Set “Simultaneous actions (all types), Percentage” to 100 percent
  • Set “Maximum new actions per minute” to 50
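
For readers who want a concrete picture of what these settings bound, here is a minimal sketch in Python (emphatically not Citrix’s implementation) of a dispatcher that never has more than 100 actions in flight and never starts more than 50 new actions in any one-minute window. The percentage setting applies the same kind of cap as a share of the machines on the connection, and the lower of the absolute and percentage values is the one that takes effect.

```python
import time
from collections import deque
from concurrent.futures import ThreadPoolExecutor

MAX_ACTIVE = 100          # "Simultaneous actions (all types), Absolute"
MAX_NEW_PER_MINUTE = 50   # "Maximum new actions per minute"


def submit_action(action_id: int) -> None:
    """Hypothetical placeholder for one provisioning or power action."""
    time.sleep(1)  # stand-in for however long the cloud platform takes


def run_throttled(action_ids: list) -> None:
    started = deque()  # monotonic timestamps of recently started actions
    # The executor caps how many actions are in flight at the same time.
    with ThreadPoolExecutor(max_workers=MAX_ACTIVE) as pool:
        for action_id in action_ids:
            now = time.monotonic()
            # Forget starts that happened more than a minute ago.
            while started and now - started[0] > 60:
                started.popleft()
            # If the one-minute budget is spent, wait for the oldest start
            # to age out of the window before launching another action.
            if len(started) >= MAX_NEW_PER_MINUTE:
                time.sleep(60 - (now - started[0]))
            started.append(time.monotonic())
            pool.submit(submit_action, action_id)


if __name__ == "__main__":
    run_throttled(list(range(200)))
```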

Type of catalog: Pooled

Instance type: t3a.medium

Operation                                                          Time (minutes)
Create two catalogs                                                10
Provision 750 machines in each catalog                             249
Start all machines                                                 33
Stop all machines                                                  33
Update catalogs’ images                                            6
Post-update start of all machines (machines initially stopped)    35
Delete catalogs (machines initially stopped)                       34
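
If your catalogs are a different size, a rough back-of-the-envelope extrapolation from these measurements can help with planning. The sketch below assumes roughly linear scaling, which the run-to-run variance discussed below makes only an approximation.

```python
# Measured results from the table above: (total machines, minutes).
MEASURED = {
    "provision": (1500, 249),
    "start": (1500, 33),
    "stop": (1500, 33),
}


def estimate_minutes(operation: str, machine_count: int) -> float:
    """Scale the measured time linearly to a different machine count."""
    machines, minutes = MEASURED[operation]
    return minutes / machines * machine_count


if __name__ == "__main__":
    for op in MEASURED:
        print(f"{op}, 500 machines: ~{estimate_minutes(op, 500):.0f} minutes")
```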

We expect similar numbers when resources are distributed across environments with more than two availability zones. We also anticipate similar performance across different instance types (e.g., A1, M5).

While performing these tests, we found minor performance differences between test runs. Cloud platforms generally show more variance than dedicated on-premises hardware, so the results published above represent an average of multiple runs. Increased demand from the recent spike in cloud adoption may be another factor affecting cloud performance, and the numbers may change as vendors bring new cloud hardware online or as APIs evolve.

It’s worth mentioning that the recent spike in demand for cloud resources can cause deployment challenges on AWS, such as exceeding service quotas. We recommend that customers planning large environments talk to their AWS Technical Account Manager to confirm their account does not have limits that would block the deployment.
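
The Service Quotas API also makes it easy to check current EC2 limits programmatically before starting a large deployment. Below is a minimal boto3 sketch; it assumes L-1216C47A is the quota code for running on-demand standard instances, so verify the code for your account in the Service Quotas console.

```python
import boto3

# Region matters: quotas are tracked per region.
quotas = boto3.client("service-quotas", region_name="us-east-1")

# L-1216C47A is assumed here to be "Running On-Demand Standard (A, C, D, H, I,
# M, R, T, Z) instances"; confirm the code in the Service Quotas console.
resp = quotas.get_service_quota(ServiceCode="ec2", QuotaCode="L-1216C47A")
quota = resp["Quota"]
print(f'{quota["QuotaName"]}: {quota["Value"]:.0f} vCPUs allowed in this region')
```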

It’s also important to note that Citrix Machine Creation Services uses an IAM service account to orchestrate processes such as mastering and cloning, autoscaling, and updating deployed images. Because AWS request quotas are enforced per region, a single service account can be used to create multiple catalogs across multiple regions to scale resources further.
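
As a small illustration of that point, the same credentials can drive each region’s API endpoint independently, with each region’s calls counting against that region’s quotas only. The region names below are just examples.

```python
import boto3

# One IAM service account (one set of credentials)...
session = boto3.Session()

for region in ("us-east-1", "eu-west-1"):
    # ...but a separate client, endpoint, and request quota per region.
    ec2 = session.client("ec2", region_name=region)
    reservations = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )["Reservations"]
    running = sum(len(r["Instances"]) for r in reservations)
    print(f"{region}: {running} running instances (first page only)")
```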

The numbers above are a good baseline for admins deploying machines in a given AWS region. As cloud capabilities change and more capacity is available, Citrix cloud services will continue to adapt, and we will update APIs and service interactions to maximize efficiency. More tuning of these parameters is possible, and we’ll explore that topic in a future blog post.