Archive | general RSS feed for this section

Production MongoDB Replica Sets now available on Windows Azure!

After many months of development and testing, we are pleased to announce MongoLab’s first production-ready database plans on the Windows Azure platform, with immediate availability in Windows Azure’s East US and West US datacenters.

What does this new plan include?

  • A three-node Replica Set cluster (two data-bearing nodes plus one arbiter node)
  • Dedicated mongod processes on shared Windows Azure virtual machines
  • Up to 8GB of storage
  • High availability via automatic failover if the primary node fails or becomes unreachable
  • Integration with MongoDB Monitoring Service (MMS)
  • Log file access (real-time and historical)

This is in addition to what every MongoLab user enjoys:

  • Continuous monitoring, 24/7
  • The ability to create backup plans (hourly/daily/weekly/monthly) and initiate one-time database snapshots
  • Rich, web-based management tools
  • Thoughtful, timely email support (support@mongolab.com) from real developers
  • Standard driver and REST API support

Continue Reading →

{ "comments": 11 }

MongoDB 2.4 now available on all MongoLab plans

Greetings, mongoers!

The team here at MongoLab is very excited to announce that version 2.4 of MongoDB is now available for all of our plans!

What about current databases, you might ask? Users will receive an email sometime this week containing everything they need to know about the upgrade process. Keep an eye on your inboxes!

Continue Reading →

{ "comments": 5 }

Build your own lead capture page with Meteor and MongoDB in minutes

This is a guest blog post written by Niall O’Higgins and Peter Braden at Frozen Ridge, a full-stack web consultancy offering services around databases, node.js, testing & continuous deployment, mobile web and more. They can be contacted at hello@frozenridge.co.

Meteor is a framework for building real-time client-server applications in JavaScript. It is built from the ground up to work with MongoDB – a JSON database that gives you storage that’s idiomatic for JavaScript.

We were incredibly impressed with how easy it is to write apps with Meteor using MongoLab as our MongoDB provider. With fewer than 100 lines of JavaScript we were able to build a fully functioning newsletter signup application, and with MongoLab we don’t have to think about database management or hosting.

To demonstrate Meteor working with MongoLab, we’ll walk you through building a lead capture web application.

Since MongoDB is a document-oriented database, it is very easy to modify the application to store any data you want. In our example, we are building an email newsletter signup system. However, you could just as easily turn this into a very simple CRM by capturing additional fields like phone number and full name.
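
As a sketch of what we mean (the field names here are illustrative, not from the finished app), a single signup document might look like:

```javascript
// One signup document; MongoDB generates the _id field on insert.
// Extra CRM-style fields can be added later without a schema migration.
var signup = {
  email: "jane@example.com",
  createdAt: new Date(),
  referrer: "https://example.com/landing",
  // optional CRM extras:
  fullName: "Jane Doe",
  phone: "+1-555-0100"
};
```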

Overview of our newsletter signup app

Our newsletter signup app will consist of two views:

  • A user-facing landing page for people to enter their email address
  • An internal-facing page with tabular display of signups and other metadata such as timestamp, referrer, etc.

You can grab the complete source to the finished newsletter signup app on Github here and view a fully-functional, running example of the application here.

Create the Meteor app

First install Meteor:

> curl https://install.meteor.com | sh

Once Meteor is on your system, you can create an app called “app” with the command:

> meteor create app

Now you will have a directory named app which contains the files app.js, app.css, and app.html.

Landing page template

First we need a nice HTML landing page. In the Meteor app you just created, your templates are stored in app.html. At the moment, Meteor only supports Handlebars for templating.

It’s worth noting that everything must be specified in template tags, as Meteor will render everything else immediately. This enforces thinking of your app as a series of views rather than a series of pages.

Let’s look at an example from our finished app to illustrate. We have a “main” template which looks like this:
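
A minimal sketch of such a template (the structure is assumed from the description below, using the template names the post refers to):

```html
<template name="main">
  {{#if showAdmin}}
    {{> admin}}
  {{else}}
    {{> signup}}
  {{/if}}
</template>
```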

Data is bound from client-side code to templates through the Meteor template API.

Hence, the variable showAdmin is actually bound to the return value of the JavaScript function Template.main.showAdmin in the client-side code. In our app.js, the implementation is as follows:
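
A minimal sketch consistent with that description (the standalone helper function is ours, added so the logic reads outside Meteor; the typeof guard only lets the file load outside a Meteor app):

```javascript
// Pure helper: the admin view is visible only when the session flag is set.
function adminVisible(session) {
  return !!session.get("showAdmin");
}

// Inside Meteor, bind the template variable {{showAdmin}} to the flag.
if (typeof Template !== "undefined") {
  Template.main.showAdmin = function () {
    return adminVisible(Session);
  };
}
```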

Due to Meteor’s data bindings, when the session variable “showAdmin” is set to true, the “admin” template will be rendered. Otherwise, the “signup” template will be rendered. Meteor doesn’t have to be explicitly told to switch the views – it will update automatically when the value changes.

This brings us to the client-side code.

Client-side code

Since Meteor shares code between the client and the server, both client and server code live in app.js. We can add client-specific code by testing Meteor.isClient:
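
A sketch of that structure (the collection name "emails" is an assumption on our part; the typeof guard just lets this sketch load outside a Meteor app):

```javascript
// app.js runs on both client and server. The collection is declared in
// shared scope; client-only code goes inside the Meteor.isClient branch.
var Emails;
if (typeof Meteor !== "undefined") {
  Emails = new Meteor.Collection("emails");
  if (Meteor.isClient) {
    // template helpers and event maps are defined here
  }
}
```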

Inserting data on form submit

For the user-facing landing page, we merely need to insert data into the MongoDB collection when the form is submitted. We thus bind to the form’s submit event in the “signup” template and check to see if the email appears to be valid, and if so, we insert it into the data model:
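
A sketch of such a handler (the template name, field selector, and deliberately loose validation regex are our assumptions, not the finished app’s exact code):

```javascript
// Loose sanity check: something@something.something
function looksLikeEmail(value) {
  return /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(value);
}

// Inside Meteor: bind the signup template's submit event and insert the
// address into the client-side collection, which syncs to the server.
if (typeof Template !== "undefined") {
  Template.signup.events({
    "submit form": function (event, template) {
      event.preventDefault();
      var email = template.find("input[name=email]").value;
      if (looksLikeEmail(email)) {
        Emails.insert({ email: email, createdAt: new Date() });
      }
    }
  });
}
```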

One of the nice things about Meteor is that the client- and server-side data model APIs are the same. If we insert the data here on the client, it is transparently synced with the server and persisted to MongoDB.

This is very powerful. Because we can also use any MongoDB client to connect directly to the database, we can easily use this data from other parts of our system. For example, we can later link up mail-merge software to send newsletters to our database of emails.

Adding authentication

Now that we’ve got our newsletter signup form working, we will want the ability to see a list of emails in the database. However, because this is sensitive information, we don’t want it to be publicly visible. We only want a select list of authenticated users to be able to see it.

Fortunately, Meteor makes it easy to add authentication to your application. For demonstration purposes, we piggyback off our Github accounts via OAuth2 – we don’t want to create additional passwords just to view newsletter signups. Instead, we’ll check a hardcoded list of Github usernames allowed to view the admin page:
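
A sketch of that check (the usernames are placeholders, not the real admin list):

```javascript
// Hardcoded allow-list of Github usernames permitted to see the admin view.
var ADMIN_USERS = ["alice", "bob"];

function isAdminUser(username) {
  return ADMIN_USERS.indexOf(username) !== -1;
}

// Inside Meteor this would be called with the logged-in user's Github
// username, e.g. isAdminUser(Meteor.user().services.github.username).
```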

Meteor makes it very easy to add a “login with Github” UI flow to your application with the accounts and accounts-ui packages. You can add these with the command:

> meteor add accounts-ui accounts-github

Once these are added to your app, you can render a “login with Github” button in your templates by adding the special template variable {{loginButtons}}. For example in our finished app we have:
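
A minimal sketch of a template rendering the login flow (markup assumed, matching the signup form described earlier):

```html
<template name="signup">
  {{loginButtons}}
  <form>
    <input type="email" name="email" placeholder="you@example.com">
    <button type="submit">Sign up</button>
  </form>
</template>
```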

Email list view

The data display table is simply a Handlebars template that we’ll populate with data from the database. Meteor likes to live-update data, which means that if you specify your templates in terms of data accessors, the DOM will automatically reflect changes in the underlying data:
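
A sketch of such a table (column names assumed; it would be backed by a helper like Template.admin.emails returning Emails.find(), which Meteor re-runs as the data changes):

```html
<template name="admin">
  <table>
    <tr>
      <th>Email</th><th>Signed up</th><th>Referrer</th>
    </tr>
    {{#each emails}}
      <tr>
        <td>{{email}}</td><td>{{createdAt}}</td><td>{{referrer}}</td>
      </tr>
    {{/each}}
  </table>
</template>
```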

This is a pretty different approach from typical frameworks, where you have to manually specify when a view needs to refresh.

We also make it possible for admin users to toggle the display of the email list in the app by inverting the value of the ‘showAdmin’ Meteor session variable:
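
A sketch of that toggle (the button selector is an assumption; the pure helper is ours, and the typeof guard only lets the sketch load outside Meteor):

```javascript
// Flip a boolean flag stored in a session-like object.
function toggleFlag(session, key) {
  session.set(key, !session.get(key));
}

// Inside Meteor: clicking the toggle inverts the "showAdmin" flag, and
// the reactive "main" template switches views automatically.
if (typeof Template !== "undefined") {
  Template.main.events({
    "click .toggle-admin": function () {
      toggleFlag(Session, "showAdmin");
    }
  });
}
```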

Server-side code

Meteor makes it super easy to handle the server-side component and marshal data between MongoDB and the browser. Our newsletter signup simply has to publish the signups collection for the data display view to be notified of its contents, and it will update the view in real time.

The entire server-side component of our Meteor application consists of:
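
A sketch consistent with the description below (the publication name, placeholder admin list, and pure helper are our assumptions; the guard just lets the sketch load outside Meteor):

```javascript
// Pure check: does this user document belong to an allowed Github account?
function isAdminGithubUser(user, admins) {
  return !!(user && user.services && user.services.github &&
            admins.indexOf(user.services.github.username) !== -1);
}

if (typeof Meteor !== "undefined" && Meteor.isServer) {
  var ADMINS = ["alice", "bob"]; // placeholder usernames

  // Publish the signups collection only to admin users; everyone else
  // gets an empty (but ready) subscription.
  Meteor.publish("emails", function () {
    var user = Meteor.users.findOne(this.userId);
    if (isAdminGithubUser(user, ADMINS)) {
      return Emails.find();
    }
    this.ready();
  });
}
```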

With a unified data model between client and server, Meteor.publish is how you make certain sets of server-side data available to clients. In our case, we wish to make the Github username available in the current user object. We also only wish to publish the emails collection to admin users for security reasons.

Bundling the Meteor app

For deployment, Meteor apps can be translated into Node.js applications using the meteor bundle command, which outputs a tarball archive (e.g. meteor bundle app.tar.gz). To run the application, uncompress the archive and install its only dependency – fibers.

Fibers can be installed with the command

> npm install fibers

Deploying the Meteor app with MongoLab

Now your Meteor application is ready to run. There are a number of configuration options which can be set at start-time via UNIX environment variables. This is where we specify which MongoDB database to use. MongoLab is a great choice, taking a lot of the hassle out of running and managing your database, with a nice free Sandbox plan that you can create in seconds here.

In order to have your Meteor application persist data to your MongoLab database, set the MONGO_URL environment variable to the MongoDB URI provided by MongoLab for your database:

> export MONGO_URL=mongodb://user:password@dsNNNNNN.mongolab.com:port/db

For Meteor to correctly set up authentication with Github, you need to set the ROOT_URL environment variable:

> export ROOT_URL=http://localhost:8080

To run your Meteor application on port 8080, simply execute main.js:

> PORT=8080 node main.js

You should now be able to connect to it at http://localhost:8080!

{ "comments": 2 }

MongoLab now supports Google Cloud Platform!

This week at Google I/O we are launching support for MongoLab’s fifth cloud provider – Google Cloud Platform. You can now use MongoLab to provision and manage MongoDB deployments on Google Compute Engine (GCE)!

So far we are very impressed with the capabilities of the GCE infrastructure.  In particular:

  • The network is fast. I mean really fast. Some of the bandwidth and latency benchmark scores are astounding. Since I/O is king for databases this will be great for connecting your GCE-hosted application to a MongoDB instance hosted by MongoLab.

  • GCE has a global private network connecting GCE regions across the world. This will be great for global multi-region clusters. We don’t support this quite yet, but when we do GCE will provide a high-speed private backbone upon which to build a great solution.

  • The API is clean, and VMs spin-up fast. This is key for automation, and we like to automate.

For now we are in an early access beta, supporting only our free Sandbox database plans in GCE’s us-central1 region. We will be launching support for the rest of our product line in subsequent releases.

We will have a Developer Sandbox (a.k.a. “booth”) at the conference on Friday, May 17th. If you are at Google I/O and into MongoDB, come visit us!

{ "comments": 6 }

MongoSF 2013 : scaling the hyperbola of evolution with MongoDB

Palace Hotel lobby, c. 1930

You know, I attend a fair number of MongoDB events, and frankly I keep expecting them to get stale. But after being at MongoSF this past Friday, I’m happy to say it hasn’t happened yet. The growth and vigor of the Mongo ecosystem was everywhere apparent, and it has never been more encouraging. Our sincere thanks go out to the 10gen team for putting together another fabulous and informative event.

If you were there and managed to stop by MongoLab’s table in the exhibit hall of the super-elegant Palace Hotel, then thanks! It was nice to meet you and/or see you again! Hope you got as much out of the day as we did. If you didn’t — or if you’d just like my personal take on the whole thing — well, please read on.

Ecosystem predicts viability

Setting aside any of the relative merits of MongoDB as a database for just a moment, I have to say my top takeaway continues to be amazement at the size and enthusiasm of the community around MongoDB.

“[I]f an organism or aggregate of organisms sets to work with a focus on its own survival and thinks that that is the way to select its adaptive moves, its “progress” ends up with a destroyed environment. If the organism ends up destroying its environment, it has in fact destroyed itself. … The unit of survival is a flexible organism-in-its-environment.” [1]

History is littered with the scarcely recognizable fossils of good ideas, clever inventions, and even superior products that might have flourished save for one thing: adoption. The modern proving ground for technological species looks less and less like the traditional “marketplace” with pockets of asymmetric information and discrete “deals.”  Today, the landscape has evolved to include open-source transparency, synergies of technologies and ideas, and a globally interconnected (and often, informed) fabric of opinion. A vigorous ongoing conversation (and overlap!) among diversified populations of users and developers is now the surest predictor, I believe, of long-term survival.

So, more than the database technology (which is impressive) or the well-capitalized company devoted to developing it (which is formidable), it is the people and the strength of this community that inspire my confidence that MongoDB will continue to thrive, improving and growing in popularity as a viable or even preëminent database for an ever increasing number of applications.

MongoDB: the Next Generation

Eliot Horowitz, 10gen CTO & Co-founder, kicked things off on a strong note, clearly articulating his focus for the immediate future of MongoDB. In my opinion, these are exactly the right priorities for taking the platform to the next level:

  • Maturity
  • Innovation
  • Operations

If you peer into its internals today, you’ll see the evolutionary legacy of MongoDB: steadily improving and expanding functionality, accreted around a core of pragmatic and sometimes downright scrappy engineering — just what you might expect from a small, clever team with a product rapidly establishing itself in the marketplace. But many of the expedients that accelerate a large piece of software in the short term can eventually bog down development and become obstacles to its further progress. You want a larger team to be able to add and maintain a growing number of features, without commensurate increases in code complexity. At some point, once experience has shown where the grain boundaries lie, there comes a time to refactor (not reinvent!) the core, teasing out clear and minimal abstraction contracts that the new implementations of existing and future features can target.

This engineering story arc is not lost on Eliot. Cleaner factoring, he explains, will be a key enabler to efficiently deliver capabilities that MongoDB has needed for a long time, making it a more “mature,” fully featured general-purpose database. It will also form the groundwork for innovating and building on the strengths of MongoDB as a data substrate for modern applications. Specific examples Eliot mentioned included:

  • non-constant query constraints — e.g., find all documents where the values of fields “a” and “b” are equal.
  • inline aggregation operations — e.g., update each document to set its “total” field to the sum of the “dollarAmt” field of each element of its “lineItems” array.
  • index intersections — e.g., optimize a query like {a: 3, b: 6} by dynamically combining an index on “a” with an index on “b” to yield performance comparable to what today would require an explicit compound index comprising both fields.

So that’s the broad story around Maturity and Innovation — right on. What about the third item: Operations? This of course refers to the realities of keeping a database running and available behind a production system of any kind. Happily, there is another three-item list here:

  • Monitoring
  • Backups
  • Management

Eliot spoke to 10gen’s efforts on each of these facets: MMS, which became available some 18 months ago; the remote backup service, which is in Limited Release now; and a suite of management tools to be announced later this year.

Of course, the topic of production-class operations is near to our hearts: seamlessly handling these three facets for our customers is what MongoLab is all about!

You got your lagerstätten in my Burgess Shale!

Opabinia, c. 505,000,000 BC

Max Schireson, the 10gen CEO who claims to have been born the same year as the relational database, followed up with a pointedly evolutionary perspective on database technologies. He compared today’s landscape to the early part of the Cambrian Explosion, in which biodiversity increased by orders of magnitude in a small fraction of the total history of life on earth up to that point. Of course, the unstated implication was that hitherto more “established” databases (Oracle, MySQL) were the long-dominant single-celled organisms in this analogy, whereas MongoDB would be perhaps more like a sighted predator of some kind.

Schireson quoted some consumption figures from the top of the food chain (e.g., 3 of the top 10 global investment banks use MongoDB) and noted some recent shifts in environmental pressures (e.g., developer-driven decision making). He also cited an amusing factoid: prior to this year’s report, the last Gartner Research update on database technology came out in 2003. That’s right: a full ten years ago. Something new must be going on. (Can you guess what?)

In short, Schireson made it sound like a pretty exciting time to be in databases, with MongoDB figuring prominently on the changing landscape.

Okay, now back to your niche…

After this inspiring keynote, of course, there followed a full day of stimulating talks and sessions at all levels of the mongo-guru ladder — oceans of fresh, insightful, useful stuff.

My personal favorite was probably the session led by Charity Majors, who is responsible for the MongoDB servers at the heart of Parse.com. If you were lucky enough to catch her outstanding talk on the care and feeding of a grown-up mongo deployment, you’ll know that there’s a whole host of operational issues that you’d just rather not worry about — or at the very least, you’d very much like an experienced hand at the helm when you do. Why do I say her talk was outstanding? Because that stuff is our bread and butter. It’s what we do all day every day here at MongoLab: hook you up with the database of tomorrow, so you can use more of your energy to dominate YOUR product’s ecological niche today, and still get a good night’s sleep (assuming your species isn’t nocturnal).

There’s never been a richer ecosystem, or a better time to be a database consumer. And there are more reasons than ever today for your consumption preferences to be of the MongoDB phylum. Yummy! Why not try one right now?

T. Dampier, 2013-05-11

Notes

[1] Source: Gregory Bateson, “Form, Substance and Difference”, 19th Annual Alfred Korzybski Memorial Lecture, 9 January 1970, Oceanic Institute, Hawaii. From the book Ecology and Consciousness, edited by Richard Grossinger, North Atlantic Books, 1978, p. 32.

{ "comments": 1 }

Introducing flip-flop: MongoDB Replica Set demonstration and experimentation service

Greetings adventurers!

A lot of our users upgrade from single-node databases to replica set clusters without fully understanding how their driver, and therefore their application, will react to failover. In fact, we get so many questions about best practices with MongoDB replica sets that we thought it could be cool to host a replica set that anyone can connect to using their MongoDB driver of choice.

Today we invite you to check out flip-flop, a MongoDB Replica Set demonstration and experimentation service.  The flip-flop service consists of:

  • A live replica set that fails over (i.e. “flips” and “flops”) every 60 seconds. This cluster is always running and available to all at the following address:
    mongodb://testdbuser:testdbpass@flip.mongolab.com:53117,flop.mongolab.com:54117/testdb
  • A set of example client scripts (currently just in Python) that simulate client interactions with the cluster that you can use as a starting point for your own experimentation

The flip-flop service is also great for those of you working on third-party drivers. Gustavo Niemeyer, author of mgo, a MongoDB driver for the Go language, told us flip-flop helped him find and quickly fix a small bug in the driver: “This is brilliant. I actually managed to find an edge case coding a trivial example against it due to the timing of the server re-election.” Pretty cool!

Continue Reading →

{ "comments": 2 }

Backup your MongoDB databases with MongoLab

Last year, a lot of folks asked us if they could use MongoLab’s admin tools on databases not hosted with MongoLab. We thought this was a cool idea and released Remote Connections, a feature that allows you to point MongoLab’s web interface at any cloud MongoDB instance. Since then, the feature has received a great response.

Today… Remote Connections got even better! You can now use exactly the same backup tools on remote databases that our users who host with MongoLab know, love, and trust.

MongoLab’s backup system makes it extremely easy to schedule and manage backups. You can use the system to perform one-time backups or create recurring Backup Plans with custom schedules and retention policies. Backups can be stored in MongoLab’s own secure cloud containers or in a container at the cloud storage provider of your choice (e.g. Amazon S3).

Continue Reading →

[“Thinking”, “About”, “Arrays”, “In”, “MongoDB”]

Greetings adventurers!

The growing popularity of MongoDB means more and more people are thinking about data in ways divergent from traditional relational models. For this reason alone, it’s exciting to experiment with new ways of modelling data. However, with additional flexibility comes the need to properly analyze the performance impact of data model decisions.

Embedding arrays in documents is a great example of this. MongoDB’s versatile array operators ($push/$pull, $addToSet, $elemMatch, etc.) offer the ability to manage data sets within documents. However, one must be careful. Data models that call for very large arrays, or arrays with high rates of modification, can often lead to performance problems.
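
As a plain-JavaScript illustration of one of the operators above: $addToSet appends a value to an array field only if no equal element is already present. For scalar values, its behavior can be modeled as:

```javascript
// Model of $addToSet semantics for scalar values: append only if absent.
function addToSet(array, value) {
  if (array.indexOf(value) === -1) {
    array.push(value);
  }
  return array;
}
```

In MongoDB itself this would be an update, e.g. db.things.update({_id: id}, {$addToSet: {tags: "new"}}).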

Continue Reading →

{ "comments": 25 }