The first was Amazon Elastic File System (EFS), a new storage service that allows customers to store their files directly in the AWS cloud.
Previously, Amazon offered only object-based storage via its Simple Storage Service (S3), SAN-style block-based storage via the Elastic Block Store (EBS), and archival storage via the Amazon Glacier service. This new offering mimics a shared file system in the cloud, accessible via the NFSv4 protocol and scalable up to petabytes in size.
"Whether you want to use this for an application that's small, development test, or if you want to use it for something that's very large with high demand and scalability; because it grows to petabyte-scale or more, it handles all of those use cases," Jassy said.
File systems can be created and managed using a GUI, command-line tools, and APIs. Each file system is backed by multiple Elastic Compute Cloud (EC2) instances, with SSD-based storage for maximum performance. To ensure high availability, all files, directories, and links are replicated across multiple Availability Zones.
Fees for the service are straightforward at $0.30 per gigabyte of storage used, billed monthly based on the average usage throughout the month. You'll have to wait to get your hands on it, though; Amazon says EFS will become available in preview "in the near future," but it's accepting applications to try it out now.
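The metered billing model is easy to work out. The sketch below illustrates the arithmetic, using the quoted $0.30 per GB-month rate; the usage figures are made-up examples, not anything published by AWS:

```python
# Rough sketch of EFS metered billing: $0.30 per GB-month, charged on the
# average storage used over the billing period. Sample figures below are
# illustrative, not from AWS.

EFS_RATE_PER_GB_MONTH = 0.30

def efs_monthly_cost(daily_usage_gb):
    """Average the daily storage samples, then apply the per-GB-month rate."""
    average_gb = sum(daily_usage_gb) / len(daily_usage_gb)
    return average_gb * EFS_RATE_PER_GB_MONTH

# A file system that grows linearly from 80 GB to 120 GB over a 30-day month
# averages 100 GB, so it costs about $30 for that month.
usage = [80 + 40 * day / 29 for day in range(30)]  # 80 GB -> 120 GB
print(round(efs_monthly_cost(usage), 2))  # -> 30.0
```

Because billing is based on the average rather than the peak, a short-lived spike in storage has only a proportional effect on the bill.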
Machine learning for the masses
Big data is another topic that has been on Amazon's mind, and on Thursday it announced a new service designed to enable developers to add machine learning to their applications, even if they have no direct experience in the field.
Jassy explained that Amazon has been experimenting with machine learning since its very early days as an online bookseller, for things like recommendation engines and fraud detection. Over the years it has developed in-house tools to make creating new machine learning models easier, and those tools have now become the basis of its new public service, Amazon Machine Learning.
Amazon data scientist Matt Wood took the stage on Thursday to explain how the new service can automatically pull data from S3, Amazon Redshift, or MySQL databases hosted on the AWS Relational Database Service, run that data through its built-in machine learning algorithms, and use it to generate predictive models.
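The workflow Wood described (point the service at a data source, let it train a model, then query the model for predictions) maps onto the Amazon Machine Learning API roughly as sketched below. This is an illustrative sketch only: the payloads mirror the shape of the service's CreateDataSourceFromS3 and CreateMLModel operations, but the bucket names, IDs, and helper functions are made-up examples, not part of any AWS SDK.

```python
# Illustrative sketch of the Amazon Machine Learning workflow: ingest training
# data from S3, then train a model on it. The payloads mirror the
# CreateDataSourceFromS3 / CreateMLModel request shapes, but every name, ID,
# and bucket here is a made-up example, and these helpers are stand-ins for
# real SDK calls (e.g. via boto3's "machinelearning" client).

def build_s3_datasource(datasource_id, s3_uri, schema_uri):
    # Corresponds to the CreateDataSourceFromS3 operation.
    return {
        "DataSourceId": datasource_id,
        "DataSpec": {
            "DataLocationS3": s3_uri,
            "DataSchemaLocationS3": schema_uri,
        },
        "ComputeStatistics": True,  # statistics are needed before training
    }

def build_ml_model(model_id, datasource_id, model_type="BINARY"):
    # Corresponds to the CreateMLModel operation; the service chooses the
    # learning algorithm itself, so the caller specifies only the model type.
    return {
        "MLModelId": model_id,
        "MLModelType": model_type,  # BINARY, MULTICLASS, or REGRESSION
        "TrainingDataSourceId": datasource_id,
    }

# Example: a fraud-detection-style binary classifier trained on CSV data in S3.
source = build_s3_datasource("ds-orders", "s3://example-bucket/orders.csv",
                             "s3://example-bucket/orders.schema.json")
model = build_ml_model("ml-fraud", source["DataSourceId"])
print(model["MLModelType"])  # -> BINARY
```

The design point the service leans on is that the developer never picks an algorithm or tunes hyperparameters; the data source plus a model type is the entire specification.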