But you're right; other people did it. Why couldn't we have done it sooner? I think that really just boils down to being a relatively small company focused first on building a database and building great commercial products. Now we're going after what is, for us, a new segment of the market: people who are not going to spend thousands of dollars per server per year, but for whom the whole thing, bundled with the underlying infrastructure, starts to make a whole lot more sense.
There seemed to be a big reception for this service. Was this move demand-driven? What caused you to do this now?
If you look at the market, just look at the companies you alluded to. Others have been doing this. I think there are four companies to think about. One is IBM, which has a product called Compose. Rackspace has a product called ObjectRocket. There's an independent company called mLab. And then there's Parse, which Facebook acquired a few years ago and announced several months ago that it is winding down, so people have to find a new home. Parse itself has half a million apps in production. mLab has 300,000. I don't know how many Compose and ObjectRocket have, but it's somewhere between 800,000 and a million apps running on comparable services for MongoDB. So there is clearly demand. We invited about 2,000 customers to try out Atlas in a private beta program, and there was overwhelming enthusiasm: yes, finally, what took you guys so long? So I think there is plenty of demand.
Where we have a limitation today is that we're not on Azure and we're not on Google. And we're not on all Amazon regions, but we'll get there. We'll get to additional regions on Amazon in the next few months and the other cloud platforms later. But we know that just with the four regions we're launching on, there's lots and lots of opportunity for us.
What was the impetus for the new Spark Connector?
There's a lot of interest in using Spark with MongoDB. Think about the way people use MongoDB with Hadoop: they've got different operational systems, and their data moves through ETL [Extract, Transform and Load] or some other process into Hadoop, where people then run analytics on it. Spark may be faster than MapReduce, but it still takes all that time to move data out of the operational system into Hadoop. People are saying that with the kind of machine learning and analytics they're doing on data today, they want some of it to run on the operational data as it's being created. That's what's driving the demand for using Spark with MongoDB. So last year, we took our connector for Hadoop and enhanced it so that it would be compatible with Spark. We learned a lot and decided there was enough interest to make an engineering investment in a dedicated connector for Spark.
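The staleness cost of the ETL path can be sketched with a toy example. This is not the Spark Connector API, just a minimal in-memory illustration, with a hypothetical store and metric, of why analytics over a copied-out dataset lag behind analytics run directly on the operational data:

```python
from copy import deepcopy

# A hypothetical operational store: documents written by a live application.
operational = [
    {"user": "a", "amount": 10},
    {"user": "b", "amount": 25},
]

# The Hadoop-style path: an ETL job copies the data out, and all
# analytics run against that copy.
warehouse = deepcopy(operational)

# A new document arrives after the ETL run.
operational.append({"user": "c", "amount": 40})

def total(docs):
    """A stand-in analytic: sum the amounts across documents."""
    return sum(d["amount"] for d in docs)

print(total(warehouse))    # 35 -- the copy is already stale
print(total(operational))  # 75 -- reading the operational data sees everything
```

The direct path is what the Spark connector enables in practice: Spark jobs read MongoDB collections in place, so the analytics see current data without waiting on an intermediate copy.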
I wouldn't be surprised if, several months from now, it is as popular as the Hadoop connector, if not more so. Clearly there is a lot of focus on Spark in the Hadoop community these days. I think it will be quite popular, but give us a couple of months to see what the data looks like.
What's big in MongoDB 3.4 that we will see later this year?
We previewed a couple of things in 3.4. One of them is graph technology, which I think will be really interesting to some folks. What's going to be nice about graph in MongoDB is that you'll be able to take advantage of all the other capabilities, such as availability, scalability, and security, where the dedicated graph databases out there seem not as far along as MongoDB. We'll have some of the core graph analytical capabilities in the database, but we won't have everything a dedicated graph database has. So graph is one thing.
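The graph capability previewed here landed in MongoDB 3.4 as the $graphLookup aggregation stage, which recursively follows links between documents in a collection. As a sketch of the traversal that stage performs, here is the same recursion in plain Python over an in-memory list standing in for a collection (the employees data and its field names are hypothetical):

```python
# In-memory stand-in for a collection; field names are made up for illustration.
employees = [
    {"_id": 1, "name": "Ana",   "reportsTo": None},
    {"_id": 2, "name": "Bo",    "reportsTo": 1},
    {"_id": 3, "name": "Carol", "reportsTo": 2},
    {"_id": 4, "name": "Dan",   "reportsTo": 2},
]

def graph_lookup(docs, start_value, connect_from, connect_to):
    """Mimic $graphLookup: starting from start_value, repeatedly match
    documents whose connect_to field equals a value already reached via
    connect_from, until no new documents are found."""
    by_key = {}
    for d in docs:
        by_key.setdefault(d[connect_to], []).append(d)
    frontier, seen, out = [start_value], set(), []
    while frontier:
        value = frontier.pop()
        for d in by_key.get(value, []):
            if d["_id"] not in seen:
                seen.add(d["_id"])
                out.append(d)
                frontier.append(d[connect_from])
    return out

# Everyone in Ana's reporting chain, however deep.
reports = graph_lookup(employees, 1, "_id", "reportsTo")
print(sorted(d["name"] for d in reports))  # ['Bo', 'Carol', 'Dan']
```

The real stage takes the same pieces as options, roughly `{$graphLookup: {from: "employees", startWith: "$_id", connectFromField: "_id", connectToField: "reportsTo", as: "reports"}}`, and runs the traversal server-side inside an aggregation pipeline.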