Several new tools will help developers add key new capabilities to applications and workloads built on Google's Cloud Storage offering.
Several new features in the Google Cloud Storage environment aim to make it easier for developers to manage, access and upload data into the cloud.
The new capabilities, including automatic deletion policies, regional buckets and faster uploads, were revealed in a July 22 post on the Google Cloud Platform Blog by Brian Dorsey, a Google developer programs engineer.
"With a tiny bit of up-front configuration, you can take advantage of these improvements with no changes to your application code—and we know that one thing better than improving your app is improving your app transparently," wrote Dorsey.
The new Object Lifecycle Management feature will allow Cloud Storage to automatically delete objects based on certain conditions, according to Dorsey. "For example, you could configure a bucket so objects older than 365 days are deleted, or only keep the three most recent versions of objects in a versioned bucket. Once you have configured Lifecycle Management, the expected expiration time will be added to object metadata when possible, and all operations are logged in the access log."
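Configuring such a rule is a small, one-time step. The following is a minimal sketch using gsutil's lifecycle command; the bucket name is hypothetical, and the JSON shape follows Cloud Storage's documented lifecycle configuration format:

```sh
# Minimal sketch: delete objects more than 365 days old.
# The bucket name here is hypothetical.
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {
      "action": {"type": "Delete"},
      "condition": {"age": 365}
    }
  ]
}
EOF

# Apply the configuration to the bucket.
gsutil lifecycle set lifecycle.json gs://my-example-bucket
```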
Developers can also use Object Lifecycle Management alongside Object Versioning to limit the number of older versions of objects that are retained, he wrote. "This can help keep your apps cost-efficient while maintaining a level of protection against accidental data loss due to user application bugs or manual user errors."
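A version-pruning rule looks much the same. The sketch below assumes versioning is enabled on a hypothetical bucket and uses the documented numNewerVersions condition to approximate "keep the three most recent versions"; note that the versioning subcommand's exact name has varied across gsutil releases:

```sh
# Minimal sketch: turn on versioning, then delete archived versions
# once at least three newer versions exist (bucket name hypothetical).
gsutil versioning set on gs://my-example-bucket

cat > versions.json <<'EOF'
{
  "rule": [
    {
      "action": {"type": "Delete"},
      "condition": {"numNewerVersions": 3}
    }
  ]
}
EOF

gsutil lifecycle set versions.json gs://my-example-bucket
```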
The Regional Buckets feature allows developers to co-locate Durable Reduced Availability data in the same region as their Google Compute Engine instances to improve performance, wrote Dorsey. "Since Cloud Storage buckets and Compute Engine instances within a region share the same network fabric, this can reduce latency and increase bandwidth to your virtual machines, and may be particularly appropriate for data-intensive computations."
Developers will still have ultimate control over which data centers are used, he wrote. "You can still specify the less-granular United States or European data center locations if you'd like your data spread over multiple regions, which may be a better fit for content distribution use cases."
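In gsutil terms, the choice comes down to the storage-class and location flags passed when making a bucket. A sketch, with hypothetical bucket names and class and region values that should be checked against the current gsutil mb documentation:

```sh
# Sketch: a Durable Reduced Availability bucket pinned to one region,
# suitable for data read by Compute Engine instances in that region.
# The class and location values here are illustrative.
gsutil mb -c DRA -l US-CENTRAL1 gs://my-compute-data

# A less-granular multi-region location spreads data more widely,
# which may suit content-distribution workloads better.
gsutil mb -l US gs://my-cdn-assets
```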
The upload improvements are part of gsutil Version 3.34, the latest release of the command-line tool, which now automatically uploads large objects in parallel for higher throughput, wrote Dorsey. "Achieving maximum TCP throughput on most networks requires multiple connections, and this makes it easy and automatic. The support is built using Composite Objects."
More details about temporary objects can be found in the accompanying Parallel Composite Uploads documentation, wrote Dorsey. "To get started, simply use 'gsutil cp' [command] as usual. Large files are automatically uploaded in parallel."
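In practice that means no new flags are required. A sketch with a hypothetical file and bucket; the setting shown in the comment is the gsutil .boto option governing when parallel composite uploads kick in, and the value is illustrative:

```sh
# Sketch: files above the configured size threshold are split and
# uploaded in parallel automatically (names here are hypothetical).
gsutil cp ./big-dataset.tar gs://my-example-bucket/

# The cutoff is adjustable in the .boto config file, e.g.:
#   [GSUtil]
#   parallel_composite_upload_threshold = 150M
```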
Earlier in July, Google invited developers to participate in its new "Build Day" program for its Cloud platform. Participating developers, who will be selected by Google after responding to a questionnaire, will offer ideas and insights to help improve the Cloud Platform at one of several in-person, hands-on sessions over the next several weeks. The study involves developing an application using Google Cloud Platform services.