Amazon Web Services has made it easier for customers of its DynamoDB cloud database service to manage capacity requirements for their applications.
The company this week introduced a new Auto Scaling for DynamoDB capability that is designed to automate capacity management of database tables and secondary indexes.
Administrators can now simply specify the upper and lower bounds for the read and write capacity of their applications, along with a target utilization rate, and leave it to DynamoDB to take care of the rest, AWS chief evangelist Jeff Barr stated in a blog post.
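DynamoDB's auto scaling settings are exposed through AWS's Application Auto Scaling API, so the same bounds and target utilization can also be set programmatically. The boto3 sketch below is illustrative only; the "Orders" table name, the 5-to-500 capacity range and the 70 percent target are placeholder assumptions rather than values from the announcement.

```python
import boto3

# Application Auto Scaling manages DynamoDB read/write capacity on the table's behalf.
autoscaling = boto3.client("application-autoscaling")

# Register the table's read capacity as a scalable target with lower and upper bounds.
# "Orders" and the 5-500 unit range are hypothetical example values.
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/Orders",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=5,
    MaxCapacity=500,
)

# Attach a target-tracking policy that keeps consumed read capacity near 70% utilization.
autoscaling.put_scaling_policy(
    ServiceNamespace="dynamodb",
    ResourceId="table/Orders",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    PolicyName="OrdersReadScalingPolicy",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
    },
)
```

A matching pair of calls with the write-capacity dimension would cover writes as well; DynamoDB then raises or lowers provisioned throughput whenever utilization drifts from the target.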
Even when administrators are not around, the auto scaling capability monitors tables and indexes to determine when throughput adjustments are needed to handle changes in application traffic. The capability ensures maximum application availability while optimizing the costs associated with DynamoDB, Barr said.
“With Auto Scaling you can get the best of both worlds,” he said. Organizations can get “an automatic response when an increase in demand suggests that more capacity is needed, and another automated response when the capacity is no longer needed.”
Amazon DynamoDB is a NoSQL database service that the company has positioned as ideal for applications that require consistent, single-digit-millisecond response times even at massive scale. Amazon has described the technology as a fully managed cloud database especially suited to gaming, web commerce, mobile, internet-of-things and other applications.
More than 100,000 organizations around the world currently use the technology for a wide range of uses. Amazon’s own retail site uses DynamoDB because it can handle the traffic surges associated with events such as Black Friday and Cyber Monday, according to the company.
Increasingly, according to Barr, many customers have begun using DynamoDB in serverless environments where the computing resources that an application consumes are allocated on the fly based on actual requirements. Customers have been taking advantage of DynamoDB’s provisioned capacity model to set and change the amount of throughput capacity required for their applications, Barr noted.
They have been able to change provisioning for their applications via API calls or simply by clicking on the appropriate buttons in AWS’s Management Console. Auto Scaling for DynamoDB makes this process even simpler, he said.
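Before auto scaling, those API-driven adjustments typically meant calling DynamoDB's UpdateTable operation each time demand shifted. A minimal boto3 sketch of that manual approach, using a hypothetical table and placeholder throughput figures, might look like this:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Manually raise the table's provisioned throughput ahead of an expected traffic spike.
# The table name and capacity numbers are placeholder values, not figures from AWS.
dynamodb.update_table(
    TableName="Orders",
    ProvisionedThroughput={
        "ReadCapacityUnits": 200,
        "WriteCapacityUnits": 100,
    },
)
```

With auto scaling enabled, that kind of call becomes the exception rather than a routine operational chore.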
The new auto-scaling feature is optimized for environments where throughput change requests happen in a relatively predictable, periodic fashion. It is less suited to environments where throughput requirements change in short, unpredictable bursts, Barr noted.
In such situations, organizations should also consider taking advantage of the in-memory acceleration offered by Amazon’s DynamoDB Accelerator (DAX), he said. DAX is a fully managed caching service designed to accommodate read-intensive workloads. Amazon announced a free public preview of DAX earlier this year; the preview is currently available in Amazon’s U.S. East, U.S. West and EU cloud regions.