Today, AWS is announcing general availability of Image Scanning for Amazon Elastic Container Registry. Amazon ECR is a fully managed container registry that makes it easy for developers to store, manage and deploy container images. Image Scanning is an automated vulnerability assessment feature in ECR that helps improve the security of your application’s container images by scanning them for a broad range of operating system vulnerabilities.
You can enable image scans on push for your repositories to ensure every image is automatically checked against an aggregated set of Common Vulnerabilities and Exposures (CVEs). This can help you automate detection and responses to container image vulnerabilities prior to promoting and deploying into production. You can also scan images using an API command, allowing you to set up periodic scans for running container images to ensure continued monitoring. ECR notifies you when a scan completes, and results are available in the console and over the API.
Image Scanning for Amazon ECR is available at no additional charge, and you can now use it in all commercial AWS Regions and GovCloud (US). To learn more, see Image Scanning in the Amazon ECR User Guide. To get started, go to the ECR console in your AWS account, or use the CLI to enable ‘scan on push’ for your repositories.
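As a sketch of the CLI workflow described above (the repository name and image tag are illustrative):

```shell
# Enable automatic scanning on push for an existing repository
aws ecr put-image-scanning-configuration \
    --repository-name my-app \
    --image-scanning-configuration scanOnPush=true

# Trigger an on-demand scan of a specific image
aws ecr start-image-scan --repository-name my-app --image-id imageTag=latest

# Retrieve the CVE findings once the scan completes
aws ecr describe-image-scan-findings --repository-name my-app --image-id imageTag=latest
```

The same scanOnPush setting can also be supplied at repository creation time via `aws ecr create-repository`.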
Amazon SageMaker Neo is now available in the Middle East (Bahrain) Region. Amazon SageMaker Neo enables developers to train machine learning models once and run them anywhere in the cloud and at the edge. Amazon SageMaker Neo optimizes models to run up to twice as fast, with less than a tenth of the memory footprint, with no loss in accuracy.
Developers spend a lot of time and effort to deliver accurate machine learning models that can make fast, low-latency predictions in real-time. This is particularly important for edge devices where memory and processing power tend to be highly constrained, but latency is very important. Amazon SageMaker Neo automatically optimizes machine learning models. You start with a machine learning model built using MXNet, TensorFlow, PyTorch, or XGBoost and trained using Amazon SageMaker. Then you choose your target hardware platform from Intel, NVIDIA, or ARM. With a single click, SageMaker Neo will then compile the trained model into an executable. The compiler uses a neural network to discover and apply all of the specific performance optimizations that will make your model run most efficiently on the target hardware platform. The model can then be deployed to start making predictions in the cloud or at the edge.
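The compile step above is exposed through the CreateCompilationJob API. A hedged CLI sketch, with the job name, S3 bucket, IAM role ARN, and input shape as placeholder values:

```shell
# Compile a trained MXNet model for an NVIDIA Jetson TX2 edge target
aws sagemaker create-compilation-job \
    --compilation-job-name my-neo-job \
    --role-arn arn:aws:iam::123456789012:role/SageMakerNeoRole \
    --input-config '{"S3Uri":"s3://my-bucket/model/model.tar.gz","DataInputConfig":"{\"data\":[1,3,224,224]}","Framework":"MXNET"}' \
    --output-config '{"S3OutputLocation":"s3://my-bucket/compiled/","TargetDevice":"jetson_tx2"}' \
    --stopping-condition MaxRuntimeInSeconds=900

# Poll for completion
aws sagemaker describe-compilation-job --compilation-job-name my-neo-job
```

The compiled artifact lands in the S3OutputLocation and can then be deployed to a SageMaker endpoint or an edge device.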
Starting today, Amazon EC2 z1d instances are available in the AWS US East (Ohio) and Asia Pacific (Mumbai, Seoul) Regions.
Amazon EC2 z1d instances deliver high single-thread performance due to a custom Intel® Xeon® Scalable processor with a sustained all-core frequency of up to 4.0 GHz, the fastest of any cloud instance. z1d provides both high compute performance and high memory, which is ideal for electronic design automation (EDA), gaming, and certain relational database workloads with high per-core licensing costs.
Amazon EC2 z1d instances come in 7 sizes, including bare metal instances that provide applications with direct access to the Intel® Xeon® Scalable processor and memory resources of the underlying server. These instances are ideal for workloads that require access to the hardware feature set (such as Intel® VT-x), for applications that need to run in non-virtualized environments for licensing or support requirements, or for customers who wish to use their own hypervisor.
With this expansion, EC2 z1d instances are now available in the US East (N. Virginia, Ohio), US West (Oregon and N. California), Europe (London, Frankfurt and Ireland), and Asia Pacific (Singapore, Sydney, Tokyo, Mumbai, Seoul) Regions.
To learn more about EC2 z1d instances, visit our product page.
You can now configure Amazon EC2 instances to mount Amazon EFS file systems using the EC2 Launch Instance Wizard. This integration simplifies the process of configuring EC2 instances to mount EFS file systems at launch time with recommended mount options. Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. Multiple EC2 instances can mount an EFS file system and share file data using standard Linux tools and commands.
Amazon EFS file systems are often used for web serving and content management, application development and testing, media and entertainment workflows, machine learning and data science notebooks, database backups, container storage, and enterprise applications that utilize shared storage.
To get started, open the EC2 Launch Instance Wizard and select the EFS file system you want to mount from Step 3: Configure Instance Details, or click Create new file system. To learn more, see the EC2 Launch Instance Wizard documentation and the Amazon EFS documentation.
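Outside the wizard, the same mount can be performed manually on a running instance. A minimal sketch, assuming the amazon-efs-utils package on Amazon Linux and an illustrative file system ID:

```shell
# Install the EFS mount helper (Amazon Linux / Amazon Linux 2)
sudo yum install -y amazon-efs-utils

# Create a mount point and mount the file system (fs-12345678 is illustrative)
sudo mkdir -p /mnt/efs
sudo mount -t efs fs-12345678:/ /mnt/efs

# Alternatively, mount with TLS to encrypt data in transit
sudo mount -t efs -o tls fs-12345678:/ /mnt/efs
```

Adding the equivalent entry to /etc/fstab makes the mount persist across reboots.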
Amazon EC2 now supports hibernation for Windows Server 2012, Windows Server 2012 R2, Windows Server 2016, and Windows Server 2019. You can now hibernate newly launched EC2 instances running Windows Server, in addition to Amazon Linux, Amazon Linux 2, and Ubuntu 18.04 LTS.
Hibernation provides the convenience of pausing and resuming the instances, saves time by reducing the startup time taken by applications and saves effort in setting up the environment or applications all over again. Instead of having to rebuild the memory footprint, hibernation allows applications to pick up exactly where they left off.
Hibernation is available for On-Demand and Reserved Instances of the M3, M4, M5, C3, C4, C5, R3, R4, and R5 instance families running Amazon Linux 2, Ubuntu 18.04 LTS, Amazon Linux, and Windows. For Windows, hibernation is supported on instances with up to 16 GB of RAM. For other operating systems, hibernation is supported on instances with less than 150 GB of RAM.
The EC2 Hibernation feature is available in the US East (N. Virginia, Ohio), US West (N. California, Oregon), Canada (Central), South America (Sao Paulo), Europe (Frankfurt, London, Ireland, Paris, Stockholm), Asia Pacific (Hong Kong, Mumbai, Seoul, Singapore, Sydney, Tokyo), and China (Beijing – operated by Sinnet, Ningxia – operated by NWCD).
This feature is available through AWS Command Line Interface (CLI), AWS SDKs, AWS Tools for Powershell or the AWS Management Console at no extra charge. To learn more about hibernation, visit this blog. For information about enabling hibernation for your EC2 instances, visit our FAQs or technical documentation.
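A hedged CLI sketch of enabling and using hibernation (the AMI ID, instance type, and instance ID are illustrative; the root EBS volume must be encrypted and large enough to hold the instance's RAM):

```shell
# Launch an instance with hibernation enabled and an encrypted root volume
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type m5.large \
    --hibernation-options Configured=true \
    --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":64,"Encrypted":true}}]'

# Later, hibernate the instance instead of performing a regular stop
aws ec2 stop-instances --instance-ids i-0123456789abcdef0 --hibernate
```

On the next `aws ec2 start-instances`, the in-memory state is restored and applications resume where they left off.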
Starting today, Amazon EC2 C5d, M5d, M5a, M5ad, R5, R5d, R5a, R5ad, T3, T3a, I3 instances are available in South America (Sao Paulo) region. In addition, C5 will now also be available in new 12xlarge, 24xlarge, and Bare Metal sizes and M5 instances will now be available as Bare Metal in South America (Sao Paulo) region.
The 3 new C5 instance sizes are powered by custom 2nd Generation Intel Xeon Scalable processors (based on the Cascade Lake architecture) with a sustained all-core turbo frequency of 3.6 GHz and a maximum single-core turbo frequency of 3.9 GHz. C5 instances are optimized for compute-intensive workloads and deliver cost-effective high performance at a low price per unit of compute. These instances are ideal for high-performance computing (HPC), machine learning, deep learning inference, distributed analytics, batch processing, and much more. With the new 24xlarge size, C5 instances increase available resources by 33% to provide even more resources and performance for those compute-intensive workloads. Furthermore, the new C5 bare metal option provides your applications with direct access to the processor and memory resources of the underlying server. M5 instances offer a balance of compute, memory, and networking resources for a broad range of workloads including web and application servers, back-end servers for enterprise applications, gaming servers, caching fleets, and app development environments.
Amazon EC2 R5 instances are ideally suited for applications such as high-performance databases, distributed in-memory caches, in-memory databases, and big data analytics. R5 instances offer Intel Xeon Platinum 8000 series processors with a sustained all core frequency of up to 3.1 GHz, with up to 50% more vCPUs and 60% more memory over R4 instances. Amazon EC2 T3 instances are the latest generation Amazon Burstable Performance instance types. T3 instances offer a balance of compute, memory, and network resources and are ideal for database workloads with moderate CPU usage that experience temporary spikes in use. Amazon EC2 I3 instances are the latest generation of Storage Optimized High-I/O instances, designed for the most demanding High I/O workloads, featuring low-latency Non-Volatile Memory Express (NVMe) based SSDs. I3 instances are ideal for workloads like transactional processing systems, relational and NoSQL databases, data warehousing applications, analytics workloads and Elasticsearch workloads.
Amazon EC2 C5d, M5d, and R5d all have local NVMe-based SSD block-level storage physically connected to the host server. These instances are a great fit for applications that need high-speed, low-latency local storage for temporary data such as batch and log processing, high-speed caches, and scratch files. C5d instances are ideal for applications such as high-performance web servers, high-performance computing (HPC), batch processing, ad serving, highly scalable multiplayer gaming, video encoding, scientific modeling, distributed analytics, and machine/deep learning inference. M5d instances offer a balance of compute, memory, and networking resources for a broad range of workloads. This includes web and application servers, back-end servers for enterprise applications, gaming servers, caching fleets, and app development environments. R5d instances are well suited for memory-intensive applications such as high-performance databases, distributed web-scale in-memory caches, mid-size in-memory databases, real-time big data analytics, and other enterprise applications.
The AMD-based instances provide additional options for customers who are looking to achieve a 10% cost savings on their Amazon EC2 compute environment for a variety of workloads. M5a and M5ad instances are ideal for business-critical applications, web and application servers, back-end servers for enterprise applications, gaming servers, caching fleets, and app development environments. R5a and R5ad instances are ideal for high performance databases, distributed web scale in-memory caches, mid-size in-memory databases, real time big data analytics, and other enterprise applications. T3a instances offer a balance of compute, memory, and network resources for a broad spectrum of general purpose workloads including micro-services, low-latency interactive applications, small and medium databases, virtual desktops, development environments, code repositories, and business-critical applications.
R5, R5d, and M5d bare metal instances are now available in the South America (Sao Paulo) region. Amazon EC2 bare metal instances provide your applications with direct access to the Intel® Xeon® Scalable processor and memory resources of the underlying server. These instances are ideal for workloads that require access to the hardware feature set (such as Intel® VT-x), for applications that need to run in non-virtualized environments for licensing or support requirements, or for customers who wish to use their own hypervisor. Bare metal instances allow EC2 customers to run applications that benefit from deep performance analysis tools, specialized workloads that require direct access to bare metal infrastructure, legacy workloads not supported in virtual environments, and licensing-restricted Tier 1 business critical applications. Bare metal instances also make it possible for customers to run virtualization secured containers such as Clear Linux Containers. Workloads on bare metal instances continue to take advantage of all the comprehensive services and features of the AWS Cloud, such as Amazon Elastic Block Store (EBS), Elastic Load Balancer (ELB) and Amazon Virtual Private Cloud (VPC).
C5d instances are available in 7 sizes, with 2, 4, 8, 16, 36, 72 vCPUs and bare metal. M5d instances are available in 9 sizes, with 2, 4, 8, 16, 32, 48, 64, 96 vCPUs and bare metal. R5 and R5d instances are available in 9 sizes, with 2, 4, 8, 16, 32, 48, 64, 96 vCPUs and bare metal. M5a, M5ad, R5a and R5ad instances are available in 8 sizes, with 2, 4, 8, 16, 32, 48, 64, and 96 vCPUs, and T3a instances are available in 7 sizes, with 2, 4 and 8 vCPUs. I3 instances come in 6 sizes, with 2, 4, 8, 16, 32 and 64 vCPUs. These new instances can be purchased as On-Demand, Reserved or Spot Instances.
AWS Firewall Manager is a security management tool to centrally configure and manage firewall rules across your accounts and Amazon VPCs. AWS Firewall Manager now supports Amazon VPC security groups, making it easier for security administrators to centrally configure security groups across multiple accounts in their organization, and continuously audit them to detect overly permissive or misconfigured rules.
A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. As customers scale up their number of instances and accounts, security administrators find it difficult to maintain a central view of their security posture across their entire organization. With AWS Firewall Manager support for security groups, administrators now have the ability to centrally create common security groups across the organization and enforce them consistently even as new accounts and resources are created. Administrators can also create audit policies to define what security group rules can or cannot be created across their organization. In addition, AWS Firewall Manager also provides pre-configured policies that detect unused and redundant security groups. Administrators can choose to automatically remediate or get notifications when misconfigured rules are detected.
With AWS Firewall Manager support for security groups, customers can now centrally manage rules applied to EC2-VPC instances and ENI resource types. To get started, see the documentation for more details. See the AWS Region Table for the list of regions where AWS Firewall Manager is currently available. To learn more about AWS Firewall Manager, its features and pricing, please visit the website.
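A sketch of creating a common security group policy through the Firewall Manager PutPolicy API. The JSON below follows the documented policy shape, but the policy name, security group ID, and remediation settings are illustrative placeholders:

```shell
# Write an illustrative common security group policy to a local file
cat > sg-policy.json <<'EOF'
{
  "PolicyName": "common-security-group-policy",
  "SecurityServicePolicyData": {
    "Type": "SECURITY_GROUPS_COMMON",
    "ManagedServiceData": "{\"type\":\"SECURITY_GROUPS_COMMON\",\"revertManualSecurityGroupChanges\":true,\"securityGroups\":[{\"id\":\"sg-0123456789abcdef0\"}]}"
  },
  "ResourceType": "AWS::EC2::Instance",
  "ExcludeResourceTags": false,
  "RemediationEnabled": true
}
EOF

# Create or update the policy from the Firewall Manager administrator account
aws fms put-policy --policy file://sg-policy.json
```

With RemediationEnabled set to true, Firewall Manager automatically reapplies the common security group when manual changes are detected.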
Amazon Web Services (AWS) announces the availability of Amazon EC2 M5n, M5dn, R5n, and R5dn instances that can utilize up to 100 Gbps of network bandwidth, and Elastic Fabric Adapter (EFA) for HPC/ML workloads. These instances offer significantly higher network performance across all instance sizes, ranging from 25 Gbps of network bandwidth on smaller instance sizes to 100 Gbps of network bandwidth on the largest instance size, and support automatic encryption of in-transit traffic between instances. These new instances are designed for workloads such as databases, High Performance Computing, analytics, Big Data, and in-memory cache that can take advantage of improved network bandwidth and packet rate performance.
Based on the next generation AWS Nitro System, M5n, M5dn, R5n, and R5dn instances make 100 Gbps networking available to network-bound workloads without requiring customers to use custom drivers or recompile applications. Customers can also take advantage of this improved network performance to accelerate data transfer to and from Amazon S3, reducing the data ingestion time for applications and speeding up delivery of results.
M5n, M5dn, R5n, and R5dn instances are powered by custom second-generation Intel® Xeon® Scalable processors (Cascade Lake) with a sustained all-core turbo frequency of 3.1 GHz. They also provide support for the new Intel Vector Neural Network Instructions (AVX-512 VNNI), which help speed up typical machine learning operations like convolution and automatically improve inference performance over a wide range of deep learning workloads.
These instances are available today in US East (N. Virginia, Ohio), US West (Oregon), Europe (Frankfurt, Ireland), and Asia Pacific (Singapore) AWS Regions. Customers can use these instances as On-Demand, Reserved, or Spot Instances.
Starting today, new Amazon EC2 A1 bare metal instances are available. Similar to all other A1 instances, the new A1 bare metal instances are powered by AWS Graviton Processors that feature 64-bit Arm Neoverse cores and custom silicon designed by AWS.
A1 bare metal instances provide your applications with direct access to the processor and memory resources of the underlying server. These instances are ideal for workloads that need to run in non-virtualized environments for licensing or support requirements.
All AWS services and features, such as Amazon Machine Images (AMI), Elastic Block Store (EBS) and Auto Scaling, that are supported on other A1 instances are also available on A1 bare metal instances.
A1 instances deliver up to 45% cost savings and are well suited for scale-out applications such as web servers, containerized microservices, caching fleets and distributed data stores. They are also ideal for Arm-based workloads that are supported by the extensive Arm ecosystem. A1 instances are the first EC2 instances powered by AWS Graviton Processors that are custom designed by AWS utilizing Amazon’s extensive expertise in building platform solutions for cloud applications running at scale.
Amazon EC2 A1 instances are available in the AWS US East (N. Virginia and Ohio), US West (Oregon), Europe (Frankfurt and Ireland), and Asia Pacific (Sydney, Tokyo and Mumbai) regions. These instances are available in 5 sizes, with 1, 2, 4, 8, and 16 vCPUs, in addition to the bare metal instance, and are purchasable as On-Demand, Reserved or Spot Instances.
Starting today, Amazon EC2 High Memory instances with 18 TB and 24 TB of memory are generally available.
EC2 High Memory instances now offer 6, 9, 12, 18, and 24 TB of memory in an instance. These instances are purpose-built to run large in-memory databases in the cloud, including production deployments of SAP HANA in-memory databases. EC2 High Memory instances allow you to run large in-memory databases and business applications that rely on these databases in the same, shared Amazon Virtual Private Cloud (VPC), ensuring predictable performance and reducing the management overhead associated with complex networking.
All EC2 High Memory instances are EBS-Optimized by default, offering dedicated storage bandwidth to encrypted and unencrypted EBS volumes. New High Memory instances with 18 TB and 24 TB will offer double the amount of dedicated storage bandwidth as existing sizes, with 28 Gbps of total bandwidth. All EC2 High Memory instances also deliver high networking throughput and low-latency using Amazon Elastic Network Adapter (ENA)-based Enhanced Networking. New High Memory instances with 18 TB and 24 TB offer 4x the amount of network bandwidth as existing sizes, with 100 Gbps of total aggregate network bandwidth. High Memory instances with 18 TB and 24 TB of memory are the first Amazon EC2 instances powered by an 8-socket platform with 2nd Generation Intel® Xeon® Scalable (Cascade Lake) processors.
Amazon EC2 High Memory instances are certified by SAP for running Business Suite on HANA, the next-generation Business Suite S/4HANA, Data Mart Solutions on HANA, Business Warehouse on HANA, and SAP BW/4HANA in production environments.
High Memory instances are available as bare metal instances on EC2 Dedicated Hosts on a 3-year reservation. 6 TB, 9 TB, and 12 TB sizes are available in the US East (N. Virginia), US West (Oregon), Europe (Ireland), Asia Pacific (Tokyo), and AWS GovCloud (US-East and US-West) regions. 18 TB and 24 TB sizes are available in the US East (N. Virginia), with additional regional availability coming soon. For more information, visit this page. To get started with EC2 High Memory instances, contact your AWS account team or use the Contact Us page.
Starting today, customers can queue purchases of EC2 Reserved Instances (RIs) in advance by specifying a time of their choosing in the future to execute those purchases.
In September 2016, AWS launched Regional RIs on EC2, which made RIs easier to use by applying the discount benefit to EC2 instances regardless of the underlying Availability Zone or instance size. Now with the ability to queue purchases, customers can enjoy uninterrupted RI coverage, making Regional RIs even easier to use. By scheduling purchases in the future, customers can conveniently renew their expiring RIs and also plan for future events. This helps customers reduce their overall bill without incurring additional management overhead.
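A hedged sketch of queueing a purchase with the CLI, using an illustrative offering ID and future purchase time:

```shell
# Find an RI offering that matches your needs
aws ec2 describe-reserved-instances-offerings \
    --instance-type m5.large \
    --offering-class standard \
    --product-description "Linux/UNIX"

# Queue the purchase to execute at a future time (offering ID is illustrative)
aws ec2 purchase-reserved-instances-offering \
    --reserved-instances-offering-id 1a2b3c4d-5e6f-7a8b-9c0d-1e2f3a4b5c6d \
    --instance-count 2 \
    --purchase-time "2020-01-01T00:00:00Z"
```

Setting the purchase time to the expiration time of an existing RI keeps the discount coverage continuous.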
AWS is pleased to announce the general availability of the latest release of AWS Thinkbox Deadline, version 10.1. Based on customer requests and trends, this release reflects AWS's ongoing investment in performance and scale, providing the ability to scale bigger and faster than ever to complete your projects.
Deadline 10.1 also adds more support for popular content production tools. SideFX Houdini can now be used in AWS Portal in Deadline, which is a set of Deadline features that enables you to more easily extend your on-premises rendering to AWS, and usage based licensing (UBL) for SideFX Houdini Engine is now available on the AWS Thinkbox Marketplace (Mantra support to follow in an upcoming release). Foundry Modo 13 is now supported in Deadline with updated submitter functionality, and there are a number of updates to plugins for your favorite tools including Autodesk Maya, OTOY Octane, Maxon Cinema 4D, Foundry Nuke, ftrack, and many others.
AWS has also made it easier to use Deadline on AWS. While it has always been the case that you could use Deadline at no charge when paying for the AWS resources used for rendering (EC2 instances, EFS storage, and so forth), it previously required a few extra steps to get credit. With Deadline 10.1, an improved billing process provides AWS Thinkbox Deadline customers with a more streamlined experience by removing the upfront payment requirement. Additionally, Deadline can now automatically detect whether it is running on AWS and no longer requires any licensing setup.
Finally, Deadline has now completely removed Mono as a dependency, leveraging .NET Core on macOS, Windows, and Linux. This change allows us to standardize on a single framework, enabling more frequent Deadline updates with the latest performance and security improvements.