
Nextlink attended the annual AWS re:Invent conference in Las Vegas from December 2 to 6, 2019. This year's re:Invent was amazing, with over 20 new services and features introduced. The new services and features mainly focus on compute, infrastructure, databases, machine learning, and serverless.

Compute

EKS with Fargate

With AWS Fargate, you no longer need to allocate the right amount of compute yourself and can focus on developing your applications. Fargate provisions the infrastructure for your pods, and in the long run you no longer have to manage EC2 instances for your Elastic Kubernetes Service (EKS) cluster, since Fargate handles patching, scaling, and security for you. Fargate closely matches the specified resource requirements when scaling compute, so you only pay for the resources necessary to run your containers. AWS Fargate now works with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS).
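As a rough sketch of what this looks like in practice, the boto3 call below creates a Fargate profile for an existing EKS cluster so that pods in a given namespace are scheduled onto Fargate; the cluster name, namespace, IAM role, and subnet IDs are placeholders.

```python
import boto3

eks = boto3.client("eks", region_name="us-east-1")

# Create a Fargate profile so that pods in the "orders" namespace run on
# Fargate instead of on managed EC2 worker nodes (all values are placeholders).
response = eks.create_fargate_profile(
    clusterName="my-eks-cluster",
    fargateProfileName="orders-profile",
    podExecutionRoleArn="arn:aws:iam::123456789012:role/EKSFargatePodExecutionRole",
    subnets=["subnet-0abc1234", "subnet-0def5678"],  # private subnets of the cluster VPC
    selectors=[{"namespace": "orders"}],
)
print(response["fargateProfile"]["status"])  # typically "CREATING"
```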


EC2-Graviton2 Processor

Graviton2 is the new ARM-based processor designed by AWS. Graviton2 delivers a significant performance improvement over the 5th-generation (M5, C5, R5) EC2 instances, with up to 24% higher performance for HTTPS load balancing with NGINX and up to 43% higher performance for Memcached. A series of new instance types, such as M6g for general-purpose workloads, C6g for compute-intensive applications, and R6g for memory-intensive workloads, will be equipped with the new Graviton2 processor. M6g is now available in preview for testing on non-production workloads.
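Trying Graviton2 is mostly a matter of choosing an m6g instance type together with an ARM64 AMI. The sketch below launches one such instance with boto3; the AMI ID and key pair name are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a single m6g.large (Graviton2, ARM64) instance.
# The AMI must be an arm64 image; the ID and key pair below are placeholders.
result = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="m6g.large",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",
)
print(result["Instances"][0]["InstanceId"])
```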

EC2-Inf1 Instance

The new Inf1 instances, available in four sizes, are powered by AWS Inferentia chips. Inf1 instances are optimized for machine learning workloads that need fast, low-latency inference. AWS Inferentia can run a 32-bit trained model at the speed of a 16-bit model using BFloat16. If you have already built and trained your model on a GPU instance such as P3 or P3dn, you can move it to an Inf1 instance and enjoy up to 2,000 TOPS of throughput.

Compute Optimizer

AWS Compute Optimizer provides recommendations tailored to your resource usage by analyzing the history of your resource consumption with machine learning techniques. Compute Optimizer uses CloudWatch metrics such as CPU utilization, network I/O, and disk I/O to make recommendations from a hypervisor point of view. From these recommendations you can see whether your EC2 instances are currently under-provisioned, optimized, or over-provisioned. By adopting the recommended instance types, you can run a cost-effective cloud infrastructure while maintaining adequate application performance.
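Those findings can also be pulled programmatically. The hypothetical boto3 snippet below lists EC2 instance recommendations and prints each instance's finding along with its top recommended instance type.

```python
import boto3

optimizer = boto3.client("compute-optimizer", region_name="us-east-1")

# Fetch recommendations for the EC2 instances analyzed in this account.
resp = optimizer.get_ec2_instance_recommendations()

for rec in resp["instanceRecommendations"]:
    # Options are ranked; rank 1 is the best recommendation.
    best = min(rec["recommendationOptions"], key=lambda option: option["rank"])
    print(
        rec["instanceArn"],
        rec["finding"],  # whether the instance is under-provisioned, optimized, or over-provisioned
        "->",
        best["instanceType"],
    )
```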


Nitro Enclaves

Nitro Enclaves creates isolated compute environments within your EC2 instances so you can further protect and securely process highly sensitive data such as personally identifiable information (PII), healthcare, financial, and intellectual property data. Cryptographic attestation lets you make sure that only authorized code is running. A preview of Nitro Enclaves is coming soon.


Infrastructure

Outposts

Outposts packs AWS-designed infrastructure into a rack that is deployed in your on-premises datacentre. You can manage the resources of an Outposts rack with the same AWS APIs and tools you use in AWS Regions.

Local Zones

With Local Zones, AWS deploys cloud infrastructure closer to large population and industry centres where no AWS Region exists yet. By using the resources of a Local Zone, which is an extension of an AWS Region, you can deliver single-digit millisecond latency to end users in a specific location for latency-sensitive applications such as real-time gaming or multimedia content creation.

Wavelength Zones

Wavelength is a new platform for 5G edge computing infrastructure. AWS deploys compute and storage infrastructure within telecommunication providers' datacentres at the edge of the 5G cellular network. With Wavelength Zones you can run latency-sensitive applications such as VR/AR and mobile gaming at the edge of the 5G network and serve end users with single-digit millisecond latency. Wavelength will be available soon in the United States, Europe, Japan, and Korea.


Database / Data Warehouse

Managed Apache Cassandra Service

With AWS Managed Apache Cassandra Service, you can easily run a scalable and highly available Cassandra cluster. The service provisions, patches, and manages the servers needed for the cluster on your behalf. You simply use the Cassandra Query Language (CQL) to build applications and let the service scale tables up and down based on traffic.
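Because the service speaks CQL, existing Cassandra drivers should work against it. The sketch below uses the open-source Python cassandra-driver; the service endpoint, port 9142, the TLS setup, and the service-specific credentials are assumptions about how the managed endpoint is reached, so check the documentation for your region.

```python
import ssl
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

# Endpoint, port, certificate file, credentials, and the "demo" keyspace
# are all placeholders / assumptions for this sketch.
ssl_context = ssl.create_default_context(cafile="AmazonRootCA1.pem")
auth = PlainTextAuthProvider(username="cassandra-user", password="cassandra-password")

cluster = Cluster(
    ["cassandra.us-east-1.amazonaws.com"],
    port=9142,
    ssl_context=ssl_context,
    auth_provider=auth,
)
session = cluster.connect()

# Plain CQL, exactly as you would run it against a self-managed cluster.
session.execute(
    "CREATE TABLE IF NOT EXISTS demo.orders ("
    "  order_id uuid PRIMARY KEY,"
    "  customer text,"
    "  total decimal"
    ")"
)
```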

RDS Proxy

RDS Proxy is a managed database proxy that sits between your application and your relational database and relays queries from the application. RDS Proxy manages connections with connection pooling and provides seamless failover if the database goes down. Amazon RDS Proxy currently supports MySQL; support for PostgreSQL is coming soon.
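Creating a proxy goes through the RDS API; the boto3 sketch below is a hypothetical example in which the proxy name, Secrets Manager secret, IAM role, and subnet IDs are placeholders. The application then connects to the proxy endpoint instead of the database endpoint.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create a proxy in front of a MySQL-compatible database.
# Database credentials come from Secrets Manager; ARNs and IDs are placeholders.
rds.create_db_proxy(
    DBProxyName="app-proxy",
    EngineFamily="MYSQL",
    Auth=[{
        "AuthScheme": "SECRETS",
        "SecretArn": "arn:aws:secretsmanager:us-east-1:123456789012:secret:db-credentials",
        "IAMAuth": "DISABLED",
    }],
    RoleArn="arn:aws:iam::123456789012:role/RDSProxySecretsRole",
    VpcSubnetIds=["subnet-0abc1234", "subnet-0def5678"],
    RequireTLS=True,
)

# After registering the database as a proxy target, point the application at
# the proxy endpoint rather than at the RDS instance itself.
```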

Redshift RA3 Nodes

Redshift RA3 nodes separate compute from storage when you scale a Redshift cluster. With the existing DC2 nodes, you have to scale cluster storage in proportion to the compute resources. The new RA3 nodes give you the flexibility to scale compute only, without being limited by local storage. RA3 nodes are equipped with large, high-performance SSDs that act as a local cache, and storage automatically scales out to S3.
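Choosing RA3 is simply a matter of picking the ra3 node type when creating (or resizing) a cluster, as in the hypothetical boto3 sketch below; all identifiers and credentials are placeholders.

```python
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

# Create a two-node cluster on RA3 nodes, which scale compute independently
# of storage (local SSD cache backed by S3). All values are placeholders.
redshift.create_cluster(
    ClusterIdentifier="analytics-ra3",
    NodeType="ra3.16xlarge",
    ClusterType="multi-node",
    NumberOfNodes=2,
    DBName="analytics",
    MasterUsername="awsadmin",
    MasterUserPassword="ChangeMe-1234",
)
```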


Advanced Query Accelerator (AQUA)


Advanced Query Accelerator (AQUA) is a hardware-accelerated cache for AWS Redshift. Since network bandwidth is a bottleneck when moving data back and forth between storage and compute, AQUA runs data-intensive tasks closer to the storage layer to accelerate Redshift queries. With AWS Nitro chips in the storage layer, AQUA can speed up operations such as filtering and aggregation.

Redshift Data Lake Export

Redshift Data Lake Export allows you to save the results of a Redshift query to your S3 data lake as Apache Parquet, an efficient columnar storage format for analytics. By using the UNLOAD command to export in Parquet format, exports can be up to 2 times faster and consume up to 6 times less storage in S3 compared to text formats. The exported data can be analyzed with Amazon Athena, Amazon EMR, and Amazon SageMaker.
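A minimal sketch of such an export is shown below: it sends an UNLOAD ... FORMAT AS PARQUET statement to the cluster over a standard PostgreSQL-protocol connection (psycopg2 here). The connection details, query, bucket, and IAM role are placeholders.

```python
import psycopg2

# Connect to the Redshift cluster endpoint (all connection details are placeholders).
conn = psycopg2.connect(
    host="analytics-ra3.abc123xyz.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="awsadmin",
    password="ChangeMe-1234",
)
conn.autocommit = True

# Export a query result to S3 as Parquet, partitioned by order date.
unload_sql = """
UNLOAD ('SELECT order_id, customer, total, order_date FROM orders')
TO 's3://my-data-lake/orders/'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftUnloadRole'
FORMAT AS PARQUET
PARTITION BY (order_date);
"""

with conn.cursor() as cur:
    cur.execute(unload_sql)
```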

UltraWarm

UltraWarm is a low-cost, warm storage tier for the AWS ElasticSearch Service. UltraWarm offers up to 900 TB of storage at up to 90% lower cost than the existing I3-based hot nodes. UltraWarm lets you query across both hot and UltraWarm data from the Kibana interface. With UltraWarm you can keep huge volumes of machine-generated log data in cost-effective storage and retain years of data for analysis. UltraWarm is now available in preview.
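Enabling UltraWarm is a domain configuration change that adds warm nodes alongside the hot data nodes. The boto3 sketch below is an assumption of how this looks through the Elasticsearch Service configuration API; the domain name, instance types, and node counts are placeholders, and the exact settings may differ while the feature is in preview.

```python
import boto3

es = boto3.client("es", region_name="us-east-1")

# Add UltraWarm warm nodes to an existing Elasticsearch domain
# (domain name, instance types, and counts are placeholders).
es.update_elasticsearch_domain_config(
    DomainName="log-analytics",
    ElasticsearchClusterConfig={
        "InstanceType": "r5.large.elasticsearch",
        "InstanceCount": 3,
        "WarmEnabled": True,
        "WarmType": "ultrawarm1.medium.elasticsearch",
        "WarmCount": 2,
    },
)
```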


Machine Learning (ML)

Fraud Detector

AWS Fraud Detector lets businesses easily identify potential online fraud in real time, such as the registration of fake accounts and abuse of 'Try Before You Buy' programs. By combining your data, machine learning (ML), and 20 years of fraud-detection experience from Amazon.com, Fraud Detector lets you start catching fraud faster. Instead of hiring top data scientists, you can create a fraud detection model with just a few clicks.

SageMaker Studio

AWS SageMaker Studio is an all-in-one integrated development environment (IDE) for machine learning (ML). You can manage your entire ML workflow, from writing code, tracking experiments, and visualizing data to debugging and monitoring, within a unified, web-based user interface. SageMaker Studio boosts your productivity by letting you efficiently upload data, create new notebooks, train and tune models, compare results, and deploy models to production.

SageMaker Autopilot

SageMaker Autopilot automatically builds machine learning (ML) models while giving you full control over the models it produces. Instead of handing you a single model, SageMaker Autopilot gives you the flexibility to trade off accuracy against prediction latency. With just a few clicks, SageMaker Autopilot automatically inspects your raw data, performs feature processing, picks suitable algorithms, trains and tunes multiple models, and ranks the models by their performance. For experienced developers, SageMaker Autopilot can also generate Python code showing how the data was preprocessed, and a generated model can serve as a baseline for further development.
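Under the hood this maps to a single AutoML job. The hypothetical boto3 sketch below starts one against a CSV dataset in S3; the bucket paths, target column, and IAM role are placeholders.

```python
import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

# Start an Autopilot (AutoML) job on a CSV dataset stored in S3.
# Bucket paths, the target column, and the role ARN are placeholders.
sm.create_auto_ml_job(
    AutoMLJobName="churn-autopilot",
    InputDataConfig=[{
        "DataSource": {
            "S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": "s3://my-ml-bucket/churn/train/",
            }
        },
        "TargetAttributeName": "churned",
    }],
    OutputDataConfig={"S3OutputPath": "s3://my-ml-bucket/churn/output/"},
    ProblemType="BinaryClassification",
    AutoMLJobObjective={"MetricName": "F1"},
    RoleArn="arn:aws:iam::123456789012:role/SageMakerAutopilotRole",
)

# The candidate models can later be inspected and ranked with
# list_candidates_for_auto_ml_job.
```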


Kendra

Kendra is a highly accurate enterprise search service powered by machine learning (ML). Users can ask questions in natural language and let Kendra find the correct answer. Besides connecting Kendra to popular data sources, you can also ingest data from other sources through its API. Kendra is now available in preview.
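Querying an index is a single API call. The hypothetical boto3 sketch below runs a natural-language query against an existing index (the index ID is a placeholder) and prints the top results.

```python
import boto3

kendra = boto3.client("kendra", region_name="us-east-1")

# Ask a natural-language question against an existing index (ID is a placeholder).
resp = kendra.query(
    IndexId="12345678-1234-1234-1234-123456789012",
    QueryText="How do I reset my VPN password?",
)

for item in resp["ResultItems"][:3]:
    print(item["Type"], "-", item["DocumentTitle"]["Text"])
```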

DeepComposer

Learning machine learning (ML) usually means wading through math, computer science, and code. DeepComposer lets you get started with ML in a more entertaining way. It provides tutorials, sample code, and training data for building generative AI models without writing a single line of code. You simply input a melody with the 32-key AWS DeepComposer keyboard and generate an original musical composition using the pre-trained genre models.


CodeGuru

CodeGuru uses machine learning (ML) to perform automated code reviews. CodeGuru finds the most computationally expensive lines of code in your application and gives you specific recommendations on how to fix or improve them. By applying the recommendations, your application can run with lower CPU utilization and reduced compute costs. CodeGuru currently supports Java and can find issues such as resource leaks, concurrency race conditions, and wasted CPU cycles. AWS CodeGuru is now in preview.


Serverless

Step Functions Express Workflows

Express Workflows are a new option in Step Functions. They are designed for high-volume, short-duration use cases that need per-account invocation rates of more than 100,000 events per second, making them suitable for IoT data ingestion, streaming data processing, and mobile application backends.
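Creating one uses the same create_state_machine call as a standard workflow, just with the EXPRESS type. In the sketch below the state machine is a trivial placeholder definition and the IAM role ARN is an assumption.

```python
import json
import boto3

sfn = boto3.client("stepfunctions", region_name="us-east-1")

# A trivial pass-through state machine, created as an Express Workflow.
definition = {
    "StartAt": "Record",
    "States": {
        "Record": {"Type": "Pass", "End": True},
    },
}

sfn.create_state_machine(
    name="ingest-events-express",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsExpressRole",
    type="EXPRESS",  # STANDARD is the default; EXPRESS enables the high-volume mode
)
```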

Lambda Provisioned Concurrency

With Provisioned Concurrency, your Lambda functions can serve requests with consistent latency. Without Provisioned Concurrency, a "cold start" occurs when a function has not been invoked for some time or has just been updated. Cold starts introduce latency that may not be acceptable for applications such as web and mobile backends. Provisioned Concurrency prepares the requested number of execution environments in advance so that this latency is avoided.
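Provisioned Concurrency is configured per published version or alias of a function. The boto3 sketch below keeps 50 execution environments warm for the "live" alias of a hypothetical checkout function.

```python
import boto3

lam = boto3.client("lambda", region_name="us-east-1")

# Keep 50 execution environments initialized for the "live" alias of a
# hypothetical "checkout" function (both names are placeholders).
lam.put_provisioned_concurrency_config(
    FunctionName="checkout",
    Qualifier="live",  # must be an alias or a published version, not $LATEST
    ProvisionedConcurrentExecutions=50,
)

# get_provisioned_concurrency_config reports how many environments are ready.
```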


Other

End-of-Support Migration Program for Windows Server

End of support for a Windows operating system (OS) is bad news for many organizations. Migrating legacy Windows Server applications to the latest version of Windows Server can be a challenge, since legacy applications often have tight dependencies on an older OS and the application's installation media may even be missing. The End-of-Support Migration Program (EMP) helps you migrate your legacy applications from Windows Server 2003, 2008, and 2008 R2 to newer, supported versions on AWS.

Braket

AWS Braket provides a single environment for scientists and developers to experiment with quantum computing. Multiple quantum hardware technologies are available through Braket, such as quantum annealing superconducting computers from D-Wave, gate-based superconducting computers from Rigetti, and ion-trap computers from IonQ.


That's it. Do you find it complicated? Don't worry. Nextlink is here to help you get the most out of the AWS cloud platform at an optimized cost.

See you again at AWS re:Invent next year.