
Amazon EC2 April 2019 Update

Released on April 30th, 2019

Elastic Fabric Adapter is Now Generally Available

Elastic Fabric Adapter (EFA), a low-latency network adapter for Amazon EC2 instances, is now generally available for production use. EFA was first announced as a preview in November 2018.

EFA is a network interface for Amazon EC2 instances that enables customers to run High Performance Computing (HPC) applications requiring high levels of inter-instance communication, such as computational fluid dynamics, weather modeling, and reservoir simulation, at scale on AWS. It uses a custom-built operating system bypass technique to enhance the performance of inter-instance communications, which is critical to scaling HPC applications. With EFA, HPC applications using popular HPC technologies like Message Passing Interface (MPI) can scale to thousands of CPU cores. EFA supports the industry-standard libfabric APIs, so applications that use a supported MPI library can be migrated to AWS with little or no modification.

EFA is available as an optional EC2 networking feature that you can enable on C5n.18xlarge and P3dn.24xlarge instances at no additional cost. EFA is currently available in US East (Ohio), US East (N. Virginia), US West (Oregon), EU (Ireland), and AWS GovCloud (US). Support for additional instance types and regions will be added in the coming months.
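As a sketch of what enabling the feature looks like, the CloudFormation fragment below requests an EFA interface at launch via a launch template; the subnet and security group IDs are hypothetical placeholders:

```yaml
# Sketch only: a launch template requesting an EFA on a C5n.18xlarge instance.
# The SubnetId and security group ID are hypothetical placeholders.
HpcPlacementGroup:
  Type: AWS::EC2::PlacementGroup
  Properties:
    Strategy: cluster                       # cluster placement is recommended for HPC

EfaLaunchTemplate:
  Type: AWS::EC2::LaunchTemplate
  Properties:
    LaunchTemplateData:
      InstanceType: c5n.18xlarge
      Placement:
        GroupName: !Ref HpcPlacementGroup
      NetworkInterfaces:
        - DeviceIndex: 0
          InterfaceType: efa                # request an Elastic Fabric Adapter
          SubnetId: subnet-0123456789abcdef0
          Groups:
            - sg-0123456789abcdef0
```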

Learn more about using EFA for your HPC workloads here.

Amazon EC2 T3a Instances Are Now Generally Available

Amazon Web Services (AWS) announces general availability of Amazon EC2 T3a instances. T3a instances are variants of T3 instances and feature AMD EPYC processors with an all-core turbo clock speed of up to 2.5 GHz. T3a instances provide an additional option for customers who are looking to achieve a 10% cost savings on their Amazon EC2 compute environment, for applications with moderate CPU usage that may experience temporary spikes in use.

Similar to T3 instances, T3a instances are next-generation burstable general-purpose instances that provide a baseline level of CPU performance with the ability to burst CPU usage at any time for as long as required. T3a instances offer a balance of compute, memory, and network resources for a broad spectrum of general-purpose workloads including microservices, low-latency interactive applications, small and medium databases, virtual desktops, development environments, code repositories, and business-critical applications.

These instances are available today in US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), and Asia Pacific (Singapore) AWS Regions. T3a instances are available in 7 sizes, with 2, 4, and 8 vCPUs. The new instances can be purchased as On-Demand, Reserved or Spot Instances.

To get started, visit the AWS Management Console, AWS Command Line Interface (CLI), and AWS SDKs. To learn more, visit the T3 Instance Page.

Amazon EKS Supports EC2 A1 Instances as a Public Preview

You can now use Amazon Elastic Container Service for Kubernetes (EKS) to run containers on Amazon EC2 A1 Instances as part of a public developer preview. This preview lets you take advantage of the latest EC2 functionality and start validating performance and stability of containerized applications running on the Arm processor architecture.

EC2 A1 instances deliver significant cost savings for scale-out and Arm-based applications such as web servers, containerized microservices, caching fleets, and distributed data stores that are supported by the extensive Arm ecosystem. A1 instances are the first EC2 instances powered by AWS Graviton Processors, which feature 64-bit Arm Neoverse cores and custom silicon designed by AWS.

Starting with Kubernetes version 1.12, the developer preview of A1 instances for Amazon EKS lets you begin to test and validate using Arm-based nodes for containers managed by Amazon EKS.

Visit Amazon's GitHub to learn more about the developer preview and get started launching Arm-based containers with Amazon EKS.

AWS CloudFormation Coverage Updates for Amazon EC2, Amazon ECS, and Elastic Load Balancing

AWS CloudFormation provides a common language for you to describe and provision all the infrastructure resources in your cloud environment. It allows you to use a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all regions and accounts.

The following resource was added as part of this release:


Use the AWS::EC2::CapacityReservation resource to create a Capacity Reservation.
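For example, a minimal template fragment using the new resource might look like the following; the Availability Zone, instance type, and count are illustrative values:

```yaml
# Sketch only: reserves On-Demand capacity for four m5.large Linux instances.
OnDemandReservation:
  Type: AWS::EC2::CapacityReservation
  Properties:
    AvailabilityZone: us-east-1a    # illustrative
    InstanceType: m5.large          # illustrative
    InstancePlatform: Linux/UNIX
    InstanceCount: 4
```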

The following resources were updated:

In the AWS::ECS::TaskDefinition resource:

Use the ProxyConfiguration property to specify the configuration details for an App Mesh proxy.

In the ContainerDefinition property type:

● Use the DependsOn property to specify the dependencies defined for container startup and shutdown.

● Use the StartTimeout property to specify the time duration to wait before giving up on resolving dependencies for a container.

● Use the StopTimeout property to specify the time duration to wait before the container is forcefully killed if it doesn't exit normally on its own.
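A task definition using these new properties might be sketched as follows; the family, container names, and images are hypothetical:

```yaml
# Sketch only: an App Mesh proxy configuration plus container startup ordering.
# Family, container names, and images below are hypothetical placeholders.
AppTaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    Family: app-with-mesh-proxy
    NetworkMode: awsvpc
    ProxyConfiguration:
      Type: APPMESH
      ContainerName: envoy
      ProxyConfigurationProperties:
        - Name: AppPorts
          Value: "8080"
        - Name: ProxyIngressPort
          Value: "15000"
        - Name: ProxyEgressPort
          Value: "15001"
    ContainerDefinitions:
      - Name: app
        Image: my-app:latest            # hypothetical image
        Essential: true
        DependsOn:
          - ContainerName: envoy
            Condition: START            # start the proxy before the app
        StartTimeout: 120               # seconds to wait for dependencies to resolve
        StopTimeout: 30                 # seconds before the container is forcefully killed
      - Name: envoy
        Image: envoy-proxy:latest       # hypothetical proxy image
        Essential: true
```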


In the AWS::ElasticLoadBalancingV2::TargetGroup resource:

Use the HealthCheckEnabled property to indicate whether health checks are enabled.

The Port, Protocol, and VpcId properties are now required only if the target type is instance or IP.
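For instance, a target group for a Lambda target can now omit those properties; the function logical ID below is a hypothetical placeholder:

```yaml
# Sketch only: a Lambda target group; Port, Protocol, and VpcId are omitted.
LambdaTargetGroup:
  Type: AWS::ElasticLoadBalancingV2::TargetGroup
  Properties:
    TargetType: lambda
    HealthCheckEnabled: false           # health checks are optional for lambda targets
    Targets:
      - Id: !GetAtt MyFunction.Arn      # hypothetical Lambda function logical ID
```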

AWS Batch Now Supports GPU Scheduling for Accelerating Batch Jobs

AWS customers can now seamlessly accelerate their High Performance Computing (HPC), machine-learning, and other batch jobs through AWS Batch simply by specifying the number of GPUs each job requires. Starting today, you can use AWS Batch to specify the number and type of accelerators your jobs require as job definition input variables, alongside the current options of vCPU and memory. AWS Batch will scale up instances appropriate for your jobs based on the required number of GPUs and isolate the accelerators according to each job’s needs, so only the appropriate containers can access them.

Hardware-based compute accelerators such as Graphics Processing Units (GPUs) enable users to increase application throughput and decrease latency with purpose-built hardware. Until now, AWS Batch users wanting to take advantage of accelerators needed to build a custom AMI and install the appropriate drivers, and have AWS Batch scale GPU accelerated EC2 P-type instances based on their vCPU and memory characteristics. Now, customers can simply specify the desired number and type of GPUs, similar to how they can specify vCPU and memory, and Batch will launch the EC2 P-type instances needed to run the jobs. Additionally, Batch isolates the GPU to the container, so each container gets the appropriate amount of resources it needs.
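As a sketch, a job definition requesting two GPUs through the `resourceRequirements` field might look like the following (job name, image, and sizes are illustrative), registered with `aws batch register-job-definition --cli-input-json file://gpu-job.json`:

```json
{
  "jobDefinitionName": "gpu-training-job",
  "type": "container",
  "containerProperties": {
    "image": "nvidia/cuda:10.0-base",
    "vcpus": 8,
    "memory": 16384,
    "command": ["nvidia-smi"],
    "resourceRequirements": [
      { "type": "GPU", "value": "2" }
    ]
  }
}
```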

Learn more about GPU support on AWS Batch here.
