
Build your own lead capture page with Meteor and MongoDB in minutes

This is a guest blog post written by Niall O’Higgins and Peter Braden at Frozen Ridge, a full-stack web consultancy offering services around databases, node.js, testing & continuous deployment, mobile web and more. They can be contacted at hello@frozenridge.co.

Meteor is a framework for building real-time client-server applications in JavaScript. It is built from the ground up to work with MongoDB – a JSON database which gives you storage that’s idiomatic for JavaScript.

We were incredibly impressed with how easy it is to write apps with Meteor using MongoLab as our MongoDB provider. With fewer than 100 lines of JavaScript code we were able to build a fully-functioning newsletter signup application, and with MongoLab we don’t have to think about database management or hosting.

To demonstrate Meteor working with MongoLab, we’ll walk you through building a lead capture web application.

Since MongoDB is a document-oriented database, it is very easy to modify the application to store any data you want. In our example, we are building this as an email newsletter signup system. However, you could just as easily make this into a very simple CRM by capturing additional fields like phone number, full name etc.

Overview of our newsletter signup app

Our newsletter signup app will consist of two views:

  • A user-facing landing page for people to enter their email address
  • An internal-facing page with tabular display of signups and other metadata such as timestamp, referrer, etc.

You can grab the complete source to the finished newsletter signup app on Github here and view a fully-functional, running example of the application here.

Create the Meteor app

First install Meteor:

> curl https://install.meteor.com | sh

Once Meteor is on your system, you can create an app called “app” with the command:

> meteor create app

Now you will have a directory named app which contains the files app.js, app.css and app.html.

Landing page template

First we need a nice HTML landing page. In the Meteor app you just created, your templates are stored in app.html. At the moment, Meteor only supports handlebars for templating.

It’s worth noting that everything must be specified in template tags, as Meteor will render everything else immediately. This enforces thinking of your app as a series of views rather than a series of pages.

Let’s look at an example from our finished app to illustrate. We have a “main” template which looks like this:
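
The original listing isn’t reproduced in this archive; a minimal sketch consistent with the description below (the admin/signup template names and the showAdmin helper come from the surrounding text) would be:

<template name="main">
  {{#if showAdmin}}
    {{> admin}}
  {{else}}
    {{> signup}}
  {{/if}}
</template>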

Data is bound from client-side code to templates through the Meteor template API.

Hence, the variable showAdmin is actually bound to the return value of the JavaScript function Template.main.showAdmin in the client-side code. In our app.js, the implementation is as follows:
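
The listing itself is missing here; a one-line helper matching that description (a sketch, assuming the "showAdmin" Session variable discussed next) would be:

// app.js (client) -- helper backing the {{showAdmin}} variable in the "main" template
Template.main.showAdmin = function () {
  return Session.get("showAdmin");
};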

Due to Meteor’s data bindings, when the session variable “showAdmin” is set to true, the “admin” template will be rendered. Otherwise, the “signup” template will be rendered. Meteor doesn’t have to be explicitly told to switch the views – it will update automatically when the value changes.

This brings us to the client-side code.

Client-side code

Since Meteor shares code between the client and the server, both client and server code are contained in app.js. We can add client-specific code by testing Meteor.isClient:
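
In outline:

if (Meteor.isClient) {
  // Everything in this block runs only in the browser:
  // template helpers, event handlers, Session logic, etc.
}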

Inserting data on form submit

For the user-facing landing page, we merely need to insert data into the MongoDB collection when the form is submitted. We thus bind to the form’s submit event in the “signup” template and check to see if the email appears to be valid, and if so, we insert it into the data model:
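
The handler itself isn’t shown in this archive. A sketch along those lines (assuming a shared collection declared as Emails = new Meteor.Collection("emails") and an input named email in the signup form) might look like:

Template.signup.events({
  'submit form': function (evt) {
    evt.preventDefault();
    var email = $(evt.target).find('input[name=email]').val();
    // Naive validity check -- good enough for a demo.
    if (email && /.+@.+\..+/.test(email)) {
      Emails.insert({
        email: email,
        referrer: document.referrer,   // metadata captured alongside the address
        ts: new Date()
      });
    }
  }
});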

One of the nice things about Meteor is that the client and server side data model APIs are the same. If we insert the data here in the client, it is transparently synced with the server and persisted to MongoDB.

This is very powerful. Because we can use any MongoDB client to also connect directly to the database, we can easily use this data from other parts of our system. For example, we can later hook up mail-merge software to our database of emails to send newsletters.

Adding authentication

Now that we’ve got our newsletter signup form working, we will want the ability to see a list of emails in the database. However, because this is sensitive information, we don’t want it to be publicly visible. We only want a select list of authenticated users to be able to see it.

Fortunately, Meteor makes it easy to add authentication to your application. For demonstration purposes, we piggy-back off our Github accounts via OAuth2 – we don’t want to create additional passwords just to view newsletter signups. Instead, we’ll use a hardcoded list of Github usernames that are allowed to view the admin page:
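
A sketch of that allow-list (the usernames are placeholders, and the check assumes the Github username ends up under services.github on the user document, as described in the server-side section below):

// Shared code, visible to both client and server.
var ADMIN_USERS = ["your-github-username", "another-github-username"];

var isAdmin = function (user) {
  return !!(user && user.services && user.services.github &&
            ADMIN_USERS.indexOf(user.services.github.username) !== -1);
};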

Meteor makes it very easy to add a “login with Github” UI flow to your application with the accounts and accounts-ui packages. You can add these with the command:

> meteor add accounts-ui accounts-github

Once these are added to your app, you can render a “login with Github” button in your templates by adding the special template variable {{loginButtons}}. For example, in our finished app we have:
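
For instance, it could sit at the top of the “main” template (the wrapper markup here is purely illustrative):

<div class="header">
  {{loginButtons}}
</div>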

Email list view

The data display table is simply a handlebars table that we’ll populate with data from the database. Meteor likes to live-update data, which means if you specify your templates in terms of data accessors, when the underlying data changes, the DOM will automatically reflect the changes:
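
The finished app’s table isn’t reproduced here; a sketch of an “admin” template in that style, backed by a helper that returns a cursor over the emails collection, might look like:

<template name="admin">
  <table>
    <tr><th>Email</th><th>Referrer</th><th>Signed up</th></tr>
    {{#each emails}}
      <tr><td>{{email}}</td><td>{{referrer}}</td><td>{{ts}}</td></tr>
    {{/each}}
  </table>
</template>

On the JavaScript side, a helper along the lines of Template.admin.emails = function () { return Emails.find(); } supplies the cursor; because the cursor is reactive, new signups appear in the table as they arrive.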

This is a very different approach from typical frameworks, where you have to manually specify when a view needs to refresh.

We also make it possible for admin users to toggle the display of the email list in the app by inverting the value of the ‘showAdmin’ Meteor session variable:
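
For example (the .toggle-admin selector is a hypothetical button class, not taken from the original app):

Template.main.events({
  'click .toggle-admin': function () {
    Session.set("showAdmin", !Session.get("showAdmin"));
  }
});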

Server-side code

Meteor makes it super easy to handle the server-side component and to marshal data between MongoDB and the browser. Our newsletter signup simply has to publish the signups collection; the data display view is notified of its contents and updates in real time.

The entire server-side component of our Meteor application consists of:
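
That listing isn’t reproduced here; under the assumptions used in the sketches above (an Emails collection, an ADMIN_USERS list, and the Github username stored under services.github), it would look roughly like:

if (Meteor.isServer) {
  // Expose the Github username on the logged-in user's document.
  Meteor.publish("userData", function () {
    return Meteor.users.find(this.userId,
      { fields: { "services.github.username": 1 } });
  });

  // Publish the signups only to admin users.
  Meteor.publish("emails", function () {
    var user = Meteor.users.findOne(this.userId);
    if (isAdmin(user)) {
      return Emails.find();
    }
  });
}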

With a unified data model between client and server, Meteor.publish is how you make certain sets of server-side data available to clients. In our case, we wish to make the Github username available in the current user object. We also only wish to publish the emails collection to admin users for security reasons.

Bundling the Meteor app

For deployment, Meteor apps can be translated to Node.js applications using the meteor bundle command. This will output a tarball archive. To run this application, uncompress it and install its only dependency – fibers.
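
A typical sequence looks like this (the archive name is arbitrary, and the extracted directory layout can vary slightly between Meteor versions):

> meteor bundle app.tar.gz
> tar -xzf app.tar.gz
> cd bundle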

Fibers can be installed with the command:

> npm install fibers

Deploying the Meteor app with MongoLab

Now your Meteor application is ready to run. There are a number of configuration options that can be set at start time via UNIX environment variables. This is where we specify which MongoDB database to use. MongoLab is a great choice, taking a lot of the hassle out of running and managing your database, with a nice free Sandbox plan that you can create in seconds here.

In order to have your Meteor application persist data to your MongoLab database, set the MONGO_URL environment variable to the MongoDB URI provided by MongoLab for your database:

> export MONGO_URL=mongodb://user:password@dsNNNNNN.mongolab.com:port/db

For Meteor to correctly set up authentication with Github, you need to set the ROOT_URL environment variable:

> export ROOT_URL=http://localhost:8080

To run your Meteor application on port 8080, simply execute main.js:

> PORT=8080 node main.js

You should now be able to connect to it at http://localhost:8080!

{ "comments": 2 }

MongoLab Discount for JS.everywhere() 2012

MongoLab is happy to be sponsoring the JS.everywhere() conference in Silicon Valley at the end of October. If you’re interested in joining us, please use this discount code on the registration page: “mongolabJS”. You’ll get a 50% discount on attendance. We are looking forward to seeing you there.

MongoDB’s native support for JSON of course makes it a natural fit for working with JavaScript. JavaScript’s growing popularity beyond browser clients is driving the need for a scalable JSON persistence layer. Providing that cloud persistence layer at MongoLab, we get to see many interesting new projects in enterprises large and small, so we’re excited to be reaching out to meet new users.

Details on our events page:

URL: http://www.jseverywhere.org/
Registration URL: http://jse2012.eventbrite.com/
Discount Code: mongolabJS
Dates: October 26-27, 2012
Location: San Jose, CA

Nodestack


We’re excited to be part of the Oct 17 Nodestack.org online conference with Joyent, Clock Ltd, 10gen, and Nodejitsu.

What is Nodestack?

If you’re a Web developer you may have felt the same thing in the last year or so: JavaScript is winning. More precisely, Joyent’s Node.js, supported by 10gen’s JSON-centric MongoDB database for persistence, is winning. And by using SmartOS as the host for Node.js, Joyent brings the inspectability, performance, and debuggability of DTrace and ZFS to Nodestack.

Parochially, I’ve included a Google Trends widget above comparing “node”, “ruby”, and “java” when searched with “mongodb”. As of this writing, “node” had just crossed the “ruby” trend line and was heading up to challenge “java”.

Why Nodestack?

There are many reasons why Nodestack is emerging as a leading developer choice, including:

  • developer familiarity with Javascript from front-end browser domains
  • the battle-tested underlying Google V8 Javascript engine for high performance
  • a harmonious non-blocking asynchronous IO environment resulting in efficient CPU utilization
  • good fitness for demanding near real-time dynamic web and mobile applications
  • effortless JSON-awareness across the stack means fewer developer cycles wasted on data translation
  • a well-supported package management system with growing library of components for basic and advanced needs
  • a deep-bench ecosystem of infrastructure, platform and consulting services from vendors like Joyent Cloud, Nodejitsu and Clock Ltd. for even easier design, development, and production.
  • mdb_v8, DTrace and flame graphs (visual temporal call graphs) on SmartOS for fast root-cause analysis / debugging.

Nodestack Conference

At the Oct 17 online conference, you’ll hear from:

  • Nodejitsu’s Nuno Job on “Crazy, Cool Things You can do with Node.js”
  • 10gen’s Aaron Heckmann on “Node.js + MongoDB = Love” and why these technologies fit so well together.
  • Joyent’s Bryan Cantrill on “Stack Foundation = SmartOS”, covering SmartOS’ hypervisor benefits for Nodestack, including the flexibility of KVM virtualization
  • A panel including 10gen’s Jared Rosoff, Joyent’s Jason Hoffman, Clock Ltd’s Paul Serby and yours truly on the economic benefits of Nodestack.

So please sign up here to join us. The webcast is scheduled to start at 9am PT on Oct 17, 2012.

*If you are local in San Francisco, CA, we’re also inviting a few folks to join us as part of the studio audience. Email ben at mongolab dot com if you’re interested. See our Events page for other events.

Updated: 2012-09-28 with exact start time. Grammar fix ^less^fewer. Added link to Aaron’s preview post; mdb_v8, David Pacheco deck.

2012-10-01 fixed broken SmartOS link.

Why is MongoDB wildly popular? It’s a data structure thing.

Updated 11/7/14: Fixed typos

“Show me your code and conceal your data structures, and I shall continue to be mystified. Show me your data structures, and I won’t usually need your code; it’ll be obvious.” – Eric Raymond, in The Cathedral and the Bazaar, 1997

Linguistic innovation

The fundamental task of programming is telling a computer how to do something.  Because of this, much of the innovation in the field of software development has been linguistic innovation; that is, innovation in the ease and effectiveness with which a programmer is able to instruct a computer system.

While machines operate in binary, we don’t talk to them that way. Every decade has introduced higher-level programming languages, and with each, an advancement in the ability of programmers to express themselves. These advancements include improvements in how we express data structures as well as how we express algorithms.

The Object-Relational impedance mismatch

Almost all modern programming languages support OO, and when we model entities in our code, we usually model them using a composition of primitive types (ints, strings, etc…), arrays, and objects.

While each language might handle the details differently, the idea of nested object structures has become our universal language for describing ‘things’.

The data structures we use to persist data have not evolved at the same rate. For the past 30 years the primary data structure for persistent data has been the Table – a set of Rows composed of Columns containing scalar values (ints, strings, etc…). This is the world of the relational database, popularized in the 1980s by its transactionality, speedy queries, space efficiency over other contemporary database systems, and a meat-eating ORCL salesforce.

The difference between the way we model things in code, via objects, and the way they are represented in persistent storage, via tables, has been the source of much difficulty for programmers. Millennia of man-effort have been put toward solving the problem of changing the shape of data from the object form to the relational form and back.

Tools called Object-Relational Mapping systems (ORMs) exist for every object-oriented language in existence, and even with these tools, almost any programmer will complain that doing O/R mapping in any meaningful way is a time-consuming chore.

Ted Neward hit it spot on when he said:

“Object-Relational mapping is the Vietnam of our industry”

There were attempts made at object databases in the 90s, but there was no technology that ever became a real alternative to the relational database. The document database, and in particular MongoDB, is the first successful Web-era object store, and because of that, represents the first big linguistic innovation in persistent data structures in a very long time. Instead of flat, two-dimensional tables of records, we have collections of rich, recursive, N-dimensional objects (a.k.a. documents) for records.

An Example: the Blog Post

Consider the blog post. Most likely you would have a class / object structure for modeling blog posts in your code, but if you are using a relational database to store your blog data, each entry would be spread across a handful of tables.

As a developer, you need to know how to convert each ‘BlogPost’ object to and from the set of tables that house them in the relational model.

A different approach

Using MongoDB, your blog posts can be stored in a single collection, with each entry looking like this:

{
    _id: 1234,
    author: { name: "Bob Davis", email : "bob@bob.com" },
    post: "In these troubled times I like to …",
    date: { $date: "2010-07-12 13:23UTC" },
    location: [ -121.2322, 42.1223222 ],
    rating: 2.2,
    comments: [
       { user: "jgs32@hotmail.com",
         upVotes: 22,
         downVotes: 14,
         text: "Great point! I agree" },
       { user: "holly.davidson@gmail.com",
         upVotes: 421,
         downVotes: 22,
         text: "You are a moron" }
    ],
    tags: [ "Politics", "Virginia" ]
 }

With a document database your data is stored almost exactly as it is represented in your program. There is no complex mapping exercise (although one often chooses to bind objects to instances of particular classes in code).
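
To make that concrete, here is a hypothetical mongo-shell round trip for the document above (assuming it is held in a shell variable named post):

// Insert the object exactly as built in application code -- no mapping step.
db.posts.insert(post)

// Read it back; nested fields are ordinary object properties.
db.posts.findOne({ _id: 1234 }).comments[1].upVotes   // 421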

What’s MongoDB good for?

MongoDB is great for modeling many of the entities that back most modern web-apps, either consumer or enterprise:

  • Account and user profiles: can store arrays of addresses with ease
  • CMS: the flexible schema of MongoDB is great for heterogeneous collections of content types
  • Form data: MongoDB makes it easy to evolve the structure of form data over time
  • Blogs / user-generated content: can keep data with complex relationships together in one object
  • Messaging: vary message meta-data easily per message or message type without needing to maintain separate collections or schemas
  • System configuration: just a nice object graph of configuration values, which is very natural in MongoDB
  • Log data of any kind: structured log data is the future
  • Graphs: just objects and pointers – a perfect fit
  • Location based data: MongoDB understands geo-spatial coordinates and natively supports geo-spatial indexing

Looking forward: the data is the interface

There is a famous quote by Eric Raymond, in The Cathedral and the Bazaar (rephrasing an earlier quote by Fred Brooks from the famous The Mythical Man-Month):

“Show me your code and conceal your data structures, and I shall continue to be mystified. Show me your data structures, and I won’t  usually need your code; it’ll be obvious.”

Data structures embody the essence of our programs and our ideas. Therefore, as programmers, we are constantly inviting innovation in the ease with which we can define expressive data structures to model our application domain.

People often ask me why MongoDB is so wildly popular. I tell them it’s a data structure thing.

While MongoDB may have ridden onto the scene under the banner of scalability with the rest of the NoSQL database technologies,  the disproportionate success of MongoDB is largely based on its innovation as a data structure store that lets us more easily and expressively model the ‘things’ at the heart of our applications. For this reason MongoDB, or something very like it, will become the dominant database paradigm for operational data storage, with relational databases filling the role of a specialized tool.

Having the same basic data model in our code and in the database is the superior method for most use-cases, as it dramatically simplifies the task of application development, and eliminates the layers of complex mapping code that are otherwise required. While a JSON-based document database may in retrospect seem obvious (if it doesn’t yet, it will), doing it right, as the folks at 10gen have, represents a major innovation.

will@mongolab

{ "comments": 47 }

Remote Dex: Index Analysis Using the Profile Collection

(also posted to the 10gen blog: here)

Greetings Adventurers!

I’m excited to report that Dex (github) is now equipped with his first planned upgrade. For those of you who haven’t met him, Dex is an open-source Python tool that suggests indexes for your MongoDB database. The initial release of Dex supported logfile-based analysis only. Now, Dex can be run remotely by leveraging MongoDB’s built-in database profiler. This new feature is invoked with the -p or --profile option. When you run Dex with -p, it connects to the specified MongoDB database and analyzes the queries already logged to your system.profile collection.

As the diagram below shows, this is particularly good news for MongoLab’s shared plan customers and anyone who does not have direct terminal access to their database machine.

In case you missed my introduction of Dex at the San Francisco MongoDB User Group Meetup, I’ll be presenting Dex at the Silicon Valley MongoDB Users Group on July 17 at 10gen’s offices in Palo Alto!

Here’s a quick set of steps to get you started.

  1. If you haven’t already, get Dex:
    sudo pip install dex
  2. Or, if you already have Dex:
    sudo pip install dex --upgrade
  3. Log into your database through the mongo shell and run db.setProfilingLevel(1). If your MongoDB is hosted with us at MongoLab, you can also enable profiling through our UI.
  4. Let your app run for a while. With profiling enabled, MongoDB deposits documents into system.profile. By default, each of these documents represents a database operation that took more than 100ms to complete. For apps that perform specific operations at specific times, you will need to profile during those times. Once you feel that your profile collection is populated with a representative set of data, you’re ready to run Dex!
  5. Run Dex with --profile or -p (instead of -f):
    dex <mongodb-uri> -p

    Where <mongodb-uri> is your database connection string (ex: mongodb://me:mypassword@myserver.mongolab.com:27227/mydb)

    Note: If you use a Sandbox plan on MongoLab (or do not have an admin URI for other reasons) you must provide a -n/--namespace filter to narrow your request, or your Dex attempt will fail for authentication reasons. (ex: -n "mydb.*")

    > dex mongodb://me:mypassword@myserver.mongolab.com:27227/mydb -p -n "mydb.*"

  6. Dex outputs index recommendations and corresponding creation syntax. Because Dex relies on heuristics that don’t take your data into account, you’ll want to validate and sanity-check Dex’s suggestions before implementing them.
  7. Run db.setProfilingLevel(0) to disable profiling when you’re done. Profiling requires a small bit of overhead and is entirely diagnostic, so you don’t need to leave it running. If you like, you can also drop the system.profile collection afterwards.
  8. Enjoy!

As always: if you have any questions, bug reports, or feature requests, please email us at support@mongolab.com.

Until next time, good luck out there!

Sincerely,
Eric@MongoLab

(updates 2012-07-16: fixed missing URI; 2012-07-17: added 10gen cross-post URL)

{ "comments": 2 }

MongoDB Users Group Events July 2012

Silicon Valley July 17, 2012: Dex and Fluentd

http://www.meetup.com/MongoDB-SV-User-Group/events/72760092/

10gen Palo Alto
555 University Avenue
Palo Alto, CA 94301

Query performance is critical for most applications.  Proper MongoDB index creation can mean over two orders of magnitude in latency improvement.   At the Silicon Valley MongoDB Users Group (SVMUG), MongoLab Engineer Eric Sedor will be presenting Dex, the Index Robot and the query optimizing rules that went into making Dex.  Dex is available under a liberal MIT open source license.

San Francisco July 25, 2012: MongoDB 2.2 and MongoCtl

http://www.meetup.com/San-Francisco-MongoDB-User-Group/events/60532682/

Mozilla HQ
2 Harrison Street
San Francisco, CA 94105

At the San Francisco MongoDB Users Group (SFMUG), MongoLab CEO Will Shulman will be presenting mongoctl, our open source (MIT License) MongoDB replica set cluster configuration tool.  Mongoctl uses declarative statements (optionally in JSON) to simplify creation, maintenance, and deprovisioning of MongoDB servers.

Thank you to 10gen, Mozilla HQ and the rest of the sponsors for hosting us! Hope to see you there!

(Updated: 2012-07-12 Giving thanks)

{ "comments": 1 }

Aggregation Framework Example

(also posted to the 10gen blog here)

In this blog post, we run a concise set of aggregation framework examples in the mongo JavaScript shell against a MongoLab-hosted 2.2 database. The framework includes the aggregation operators $project, $unwind, $group, and others. These operators allow you to calculate values across documents in a collection, like averages and sums. They also let you reshape documents, unpacking nested structures and regrouping them as needed.

The aggregation framework, one of the most powerful and highly anticipated features in the forthcoming production MongoDB 2.2 release, lets you construct a server-side processing pipeline to be run on a collection.  A rich set of operations are available for incorporation in the pipeline so as to achieve various kinds of collection transforms, ranging from simple multi-document calculations (e.g., sums and averages) to complex projections and pivots.
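
To give a flavor of such a pipeline in the 2.2 shell, here is a hypothetical example; the posts collection and its fields are illustrative and not part of the walkthrough that follows:

// Average rating per tag: unwind the tags array, then group by tag value.
db.posts.aggregate([
    { $unwind: "$tags" },
    { $group: { _id: "$tags",
                avgRating: { $avg: "$rating" },
                count: { $sum: 1 } } }
])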

The framework fits nicely into the range of data manipulation tools available in MongoDB, from basic built-in functions like document counts, to map-reduce and JavaScript, to custom code and language-specific packages, including Hadoop.

Overview

  1. Create a 2.2 MongoLab database
Continue Reading →
{ "comments": 5 }

Introducing Dex: the Index Bot

(update 2012-07-19: A new remote feature detailed here.)
(update 2012-10-09: Version 0.5 detailed here.)

Greetings adventurers! MongoLab is pleased to introduce Dex! We hope he assists you on your quests.


Dex is a MongoDB performance tuning tool, open-sourced under the MIT license, that compares logged queries to available indexes in the queried collection(s) and generates index suggestions based on a simple rule of thumb. To use Dex, you provide a path to your MongoDB log file and a connection URI for your database. Dex is also registered with PyPI, so you can install it easily with pip.

Quick Start

pip install dex

THEN

dex -f mongodb.log mongodb://localhost

Dex provides runtime output to STDERR as it finds recommendations:

{
"index": "{'simpleIndexedField': 1, 'simpleUnindexedFieldThree': 1}",
"namespace": "dex_test.test_collection",
"shellCommand": "db.test_collection.ensureIndex(
  {'simpleIndexedField': 1, 'simpleUnindexedFieldThree': 1},
  {'background': true})"
}

As well as summary statistics:

Total lines read: 7
Understood query lines: 7
Unique recommendations: 5
Lines impacted by recommendations: 5

Just copy and paste each shellCommand value into your MongoDB shell to create the suggested indexes.

Dex also provides the complete analysis on STDOUT when it’s done, so you will see this information repeated before Dex exits. The output to STDOUT is an entirely JSON version of the above, so Dex can be part of an automated toolchain.

For more information check out the README.md and tour the source code at https://github.com/mongolab/dex. Or if you’re feeling extra adventurous, fiddle with the source yourself!

git clone https://github.com/mongolab/dex.git

The motivation behind Dex

MongoLab manages tens of thousands of MongoDB databases, heightening our sensitivity to slow queries and their impact on CPU. What started as a set of
Continue Reading →

{ "comments": 17 }