A Quick-Start Guide to the MEAN Stack (on a Mac)

Want to get a web application up and running quickly? Consider using the MEAN stack – MongoDB, ExpressJS, AngularJS, and Node.js. In this quick-start guide I’ll walk you through the process of getting everything set up properly with the MEAN stack on your Mac, then we’ll jump right into developing a simple web application.

What are the technologies in the MEAN stack?

The MEAN stack consists of:

  • MongoDB – a NoSQL database. A MongoDB database consists of a set of Collections, which hold a set of Documents. Each Document is a set of key-value pairs, and is analogous to a JSON object (see the sample document after this list). One key difference between a Document and a JSON object is that a Document is stored in BSON format, which is a binary form of JSON.
  • ExpressJS – a Node.js web application framework. Provides a robust set of features for building single-page, multi-page, and hybrid web applications.
  • AngularJS – a Javascript UI framework for web apps. It lets you extend HTML’s syntax and works very well for single-page applications.
  • Node.js – a server-side Javascript runtime environment. Node.js uses an event-driven, non-blocking I/O model based on the V8 Javascript engine.
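
For example, a Document describing a user might look like this (purely illustrative; the field names and values are made up):

{
  "_id": ObjectId("55f5a1d2e4b0a1b2c3d4e5f6"),
  "name": "Ryan",
  "languages": ["Javascript", "Java", "Python"],
  "address": { "city": "Chicago", "state": "IL" }
}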


Install Homebrew

Before installing the MEAN stack on your Mac, you’ll want to ensure Homebrew is installed. Homebrew is a package manager for Mac that will make installing everything in the MEAN stack much simpler. First, check if you have Homebrew installed by opening up the terminal and typing:

brew help

If it is installed, proceed to install Node and npm (node package manager). Otherwise, install it by running this code:

ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

Then run:

brew doctor

Finally, you may have to set up the permissions of your /usr/local folder by running this code:

sudo chgrp -R staff /usr/local
sudo chmod -R g+w /usr/local
sudo chgrp -R staff /Library/Caches/Homebrew
sudo chmod -R g+w /Library/Caches/Homebrew/

The code above assumes your user’s group is staff. You can check your user’s group by typing this in the terminal:

id

It should print out a long list of different variables, with the 2nd and 3rd ones being gid=xx(staff) groups=xx(staff). If your group is different from ‘staff’, then modify the provided code accordingly with your group name.

Install Node and npm

Run this:

brew update
brew install node

Check that it succeeded using:

node -v
npm -v

Install MongoDB

brew install mongodb

Next, create the data directory for the mongod process to write data:

sudo mkdir -p /data/db

Then update the permissions of the directory:

sudo chgrp -R staff /data
sudo chmod -R g+w /data
Check MongoDB is working

Start the MongoDB server with its HTTP interface enabled:

mongod --httpinterface

The --httpinterface flag enables MongoDB’s HTTP status interface, which is disabled by default, so that you can access it via your browser. Navigate to http://localhost:28017/. You should see a web interface that provides overview, clients, dbtop, and writelock % rows, as well as the log.

Go back to your terminal and hit control + c to shut down the server.

Install Bower

Next, install Bower, which you’ll use to manage your front-end packages:

npm install -g bower

Install Grunt

Up next is Grunt, which is a task runner that helps you automate your development process (minification, compilation, unit testing, etc.):

npm install -g grunt-cli

Install Yeoman

We’re not done installing programs yet. Let’s set you up with Yeoman, a generator that provides scaffolding for projects:

npm install -g yo

Install MEAN.js

You’re finally ready to install MEAN.js, a generator that helps you create a MEAN.js application, a CRUD (Create, Read, Update and Delete) module, Angular module, and other AngularJS/Express entities.

npm install -g generator-meanjs

Start creating your first application

Now that you have everything set up, let’s start working on a simple application. First, create a new project folder. My folder is located at /Users/ryanwolniak/Development/MEAN/p1.

cd /Users/ryanwolniak/Development/MEAN/p1

If you’re unfamiliar with how to create a new directory, you’ll want to run a command similar to this first:

mkdir -p /Users/ryanwolniak/Development/MEAN/p1

With your project directory created, generate your application with yo:

yo meanjs

You will be asked a number of questions. For the MEAN.js version I chose 0.4.1. Press enter when asked for a folder, then call your project whatever you like. I called mine p1. For the description, describe it however you wish. Add simple keywords if you like, and put your name down as the author of the application. When asked if you want to generate a CRUD module and a chat example, type yes for both. Then wait for the files to be generated.

What to do if the generator fails

If the generator fails, try running the following code:

npm config set registry http://registry.npmjs.org/ --global

Then ensure that you have accepted the XCode license by running:

sudo xcodebuild -license

Scroll all the way to the bottom of the license and type ‘agree’.

If prompted, close and reopen your terminal. Then install nvm:

curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.29.0/install.sh | bash

Switch your node version to v0.12.7:

nvm install 0.12.7
nvm use 0.12.7

If prompted, close and reopen your terminal.

After running the above code, delete the mean folder that was created from the first time you ran ‘yo meanjs’ (it should be in the directory you ran ‘yo meanjs’ from). Then retry the generator:

yo meanjs

After that, you are almost ready to run your application. Change directories to the directory you generated the files at:

cd /Users/ryanwolniak/Development/MEAN/p1/mean

Then install the ruby gem sass:

sudo gem install sass

Run your application

Let’s start your application up!

First, start MongoDB:

mongod &

Note: the & symbol will allow mongod to run in the background so that you can start up your application from the same terminal. Speaking of which, start the application by running grunt (MEAN.js’s default grunt task starts the server):

grunt

Your application should now be up and running. Navigate to http://localhost:3000/

You should see a screen similar to this one:

MEAN.JS Sample Application

Stopping your application

Want to end your application? If your terminal is still up, hit ‘control + c’. That should stop the application. But mongod will still be running in the background. To stop mongod, first run ‘ps’ to find a list of processes running on your machine:

ps
Then run:

kill 59178

where 59178 is mongod’s process id (PID) from the ps output. If the process is still hung, you can force it to stop with kill -9 59178.


Since things sometimes vary from machine to machine, feel free to reach out via email or in the comments if you’re having trouble getting any of this working.

Getting Started with SparkSQL on the Hortonworks Sandbox I: Installing Zeppelin

SparkSQL is a module for Spark which allows you to run SQL queries over the Spark engine. It works alongside Hive, in the sense that it reuses the Hive frontend and metastore to give you complete compatibility with any of your Hive UDFs, data, or queries. The advantage of SparkSQL over classic Hive (which used MapReduce to run queries) is speed and integration with Spark.
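
To make that concrete, here’s a minimal sketch of querying an existing Hive table through SparkSQL’s Java API. The assumptions: Spark 1.3.x (the version we build Zeppelin against below) and the sample_07 table that ships with the sandbox’s Hive; swap in any table you have:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.hive.HiveContext;

public class SparkSQLHiveExample {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("SparkSQLHiveExample");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // HiveContext reuses the Hive metastore, so existing Hive tables,
        // UDFs, and HiveQL queries work unchanged.
        HiveContext sqlContext = new HiveContext(sc.sc());

        // sample_07 is assumed here; any Hive table in your metastore works.
        DataFrame results = sqlContext.sql(
            "SELECT code, description, salary FROM sample_07 ORDER BY salary DESC LIMIT 10");
        results.show();

        sc.stop();
    }
}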

Before diving into SparkSQL, it’s worth noting that Hive on Apache Tez, whose integration was brought about by Hortonworks’ Stinger Initiative, increased Hive’s performance on interactive queries. This slide deck by Hortonworks even suggests that Hive on Tez outperforms SparkSQL for short running queries, ETL, large joins and aggregates, and resource utilization. However, SparkSQL still makes for a great way to explore the data on your cluster. With APIs in Java, Python, and Scala, as well as a web based interpreter called Zeppelin, it offers a multitude of options for running queries and even visualizing data.

To start you off easy, we’ll begin this guide by installing Zeppelin so that we can use the web based interpreter. If you’re familiar with R, think of Zeppelin as a bit like RStudio. It allows for data ingestion, discovery, analytics, and visualization.

Install Zeppelin

Download Zeppelin

Let’s start by downloading the Zeppelin files. To do this, we’ll clone it from its Github repository:

git clone https://github.com/apache/incubator-zeppelin.git

Install Maven

In order to build Zeppelin, you’ll need Maven on your sandbox. To install it, first download it:

wget http://apache.cs.utah.edu/maven/maven-3/3.3.3/binaries/apache-maven-3.3.3-bin.tar.gz

Extract it:

tar xvf apache-maven-3.3.3-bin.tar.gz

Move it:

mv apache-maven-3.3.3 /usr/local/apache-maven

Add these environment variables to the end of your ~/.bashrc file:

export M2_HOME=/usr/local/apache-maven
export M2=$M2_HOME/bin
export PATH=$M2:$PATH

Run this command:

source ~/.bashrc

Then verify that Maven works by running this command:

mvn -version

Build Zeppelin

With Maven installed, you’re now ready to build Zeppelin:

mvn clean install -DskipTests -Pspark-1.3 -Dspark.version=1.3.1 -Phadoop-2.6 -Pyarn

Prepare for it to take about 15-20 minutes to build. Let it run.

When the build finishes you should get a screen that looks like this:

Zeppelin Maven Build

Next, create the zeppelin-env.sh file by copying the zeppelin-env.sh.template:

cp conf/zeppelin-env.sh.template conf/zeppelin-env.sh

The above code assumes you are in the directory that you downloaded Zeppelin to.

Configure Zeppelin

Next, edit the zeppelin-env.sh file:

vi conf/zeppelin-env.sh

Hit i to enter edit/insert mode. Add these lines at the end of the file:

export HADOOP_CONF_DIR=/etc/hadoop/conf
export ZEPPELIN_PORT=10008
export ZEPPELIN_JAVA_OPTS="-Dhdp.version="

Note: the value after -Dhdp.version= should be the HDP version that you are running. If you are running the 2.3 version of the sandbox, your version will be the same as mine. To figure out your version in other cases, run this code:

hadoop version

You should get something that looks like this:

Hadoop version

The version you’ll want to type in comes after the first three numbers (2.7.1).

Once you know your version, go back and edit the zeppelin-env.sh file as instructed. Hit escape, then ‘:wq’ to save and quit.

Next, copy the hive-site.xml file to the conf folder:

cp /etc/hive/conf/hive-site.xml conf

The above code assumes you are in the directory you downloaded Zeppelin to.

Switch to the hdfs user next:

su hdfs

Then create a directory in HDFS for zeppelin:

hdfs dfs -mkdir /user/zeppelin
hdfs dfs -chown zeppelin:hdfs /user/zeppelin

You’re almost ready to start Zeppelin. But first you need to change your port forwarding settings on your sandbox.

Add Port Forwarding

Power off your sandbox, then navigate to Machine > Settings while your Hortonworks sandbox is selected in the VirtualBox Manager. Click on Network once in the settings. You should be in NAT mode. Click on Advanced > Port Forwarding.

Port Forwarding

Next, add a rule by clicking the green plus sign. Call the rule zeppelin, set the Host Port to 10008 and the Guest Port to 10008, and leave the IP fields blank.

Zeppelin Port

Click OK twice, then start your sandbox back up.

Start Zeppelin

Ready to start Zeppelin? Navigate to wherever you downloaded Zeppelin (the incubator-zeppelin folder), then type this code into your command line:

bin/zeppelin-daemon.sh start

Congratulations! You should now have Zeppelin up and running on port 10008, i.e. at http://127.0.0.1:10008/.

If you want to stop it, run this code:

bin/zeppelin-daemon.sh stop

With Zeppelin up and running, it’s time to start exploring SparkSQL. Check out Part II of this guide for an introduction to SparkSQL.

Setting up IPython Notebook with Apache Spark on the Hortonworks Sandbox

Apache Spark, if you haven’t heard of it, is a fast, in-memory data processing engine that can be used with data on Hadoop. It offers excellent performance and can handle tasks such as batch processing, streaming, interactive queries and machine learning.

Spark offers APIs in Java, Scala, and Python. Today, I’ll be covering how to get an IPython notebook set up on your Hortonworks sandbox so you can use it for ad-hoc queries, data exploration, analysis, and visualization over your data.

Let’s get started


First, make sure your sandbox’s network adapter is in NAT mode.

While your sandbox is powered off, navigate to the settings of your Hortonworks sandbox by clicking Machine > Settings while ‘Hortonworks Sandbox with HDP 2.3_1’ is highlighted.

Navigate to Network once in the settings. Ensure that your NAT network adapter is turned on, and that it is the only network adapter turned on:

Enable NAT Adapter

The default settings should be fine.

With the proper network adapter enabled, start up your machine.

Install IPython

Next, use yum to install a number of necessary dependencies:

yum install nano centos-release-SCL zlib-devel bzip2-devel openssl-devel ncurses-devel sqlite-devel readline-devel tk-devel gdbm-devel db4-devel libpcap-devel xz-devel libpng-devel libjpeg-devel atlas-devel

Make sure you include all of them. When you’re prompted with a y/N question, be sure to type y.

Next, install the development tools dependency for Python 2.7:

yum groupinstall "Development tools"

Then, install Python 2.7:

yum install python27

Since there are now multiple Python versions on your Hortonworks sandbox, you need to switch to Python 2.7:

source /opt/rh/python27/enable

Don’t worry, the Python version will switch back to its default the next time you power off your sandbox.

Next, download ez_setup.py, which installs setuptools and its easy_install tool. You’ll then use easy_install to install pip. It’s a bit of jumping through hoops, but these tools will get you what you need. Start by navigating to a test directory, or any directory you feel comfortable downloading files to. I chose to create a directory called /test_dev/:

mkdir /test_dev/
cd /test_dev/

Next, download ez_setup.py:

wget http://bitbucket.org/pypa/setuptools/raw/bootstrap/ez_setup.py

Run ez_setup.py:

python ez_setup.py

Now install pip:

easy_install-2.7 pip

With pip installed, you can now install some of the packages you’ll want to use with IPython. You want to install the following packages:

numpy scipy pandas scikit-learn tornado pyzmq pygments matplotlib jinja2 jsonschema

You can install them by running this code, followed by the packages you want to install:

pip install

For example:

pip install scipy pandas scikit-learn

I recommend installing only a few at a time, as you may run into issues if you try to install them all at once. The first couple of packages take a good amount of time to install.

Install IPython Notebook next:

pip install "ipython[notebook]"

Next, create an IPython profile for Spark:

ipython profile create pyspark

Before we continue any further, you should set up port forwarding for port 8889 on your sandbox.

Setting up port forwarding

Power off your sandbox. Go into the settings of your Hortonworks sandbox, then navigate to Network:

Network settings

Click Port Forwarding. Then click the plus sign to add a new rule. Call the rule ipython, set the host port to 8889 and the guest port to 8889, and leave the host IP and guest IP blank. It should look like this when you are done:

Port Forwarding in VirtualBox

Click OK, then OK again.

Start your sandbox back up.

Create a shell script to launch IPython Notebook

To make things easier for you to launch IPython Notebook, let’s create a shell script. Navigate to the folder you’d like to launch IPython Notebook from. For me that was the /test_dev/ directory created earlier:

cd /test_dev/

Launch nano to create the shell script:

nano start_ipython_notebook.sh

Enter this code in nano:

source /opt/rh/python27/enable
IPYTHON_OPTS="notebook --port 8889 --notebook-dir='/usr/hdp/' --ip='*' --no-browser" pyspark

Type control + o to write/save the file. Then control + x to quit.

Run it using this code:

sh start_ipython_notebook.sh

It should look like this:

IPython Notebook running on Hortonworks sandbox

Congratulations! You’re now running IPython Notebook on the Hortonworks sandbox in conjunction with Spark.

Using your new IPython Notebook

To start exploring with your new notebook, go to this web address:

http://127.0.0.1:8889/

That’s it for this guide. In the next guide I’ll cover some basics of IPython Notebook and how you can get started using it with Spark.


Credit goes out to Hortonworks for writing their own guide, which I used as a basis of knowledge for this post. Since their guide was outdated at the time of writing, this post has updates and modifications which ensure a seamless installation of IPython Notebook with Apache Spark on Hortonworks Sandbox 2.3 and 2.3.1.

Connecting to an Accumulo Instance Remotely via Java Client Code

In my last guide, I showed you how to properly set up Accumulo on a Hortonworks sandbox. This time, I’ll be showing you how to remotely connect to that Accumulo instance via a Java client.

Setting up

If you haven’t already set up Eclipse for big data development, go look at this post. It will cover how to set up Maven, as well as how to connect to Github (if you wish).

Before we get started, create a new Java project in Eclipse, then convert it to a Maven project. Recall that to convert to a Maven project you can right click on it, then click Configure > Convert to Maven Project.

Edit the pom.xml file

In order to run the code you’ll be using, you need to add the proper jars to your pom.xml file. Double click your pom.xml file, then click Dependencies.

Click ‘Add…’ then fill out the following information:

Accumulo-core dependency

You can leave the scope as Compile.

Click Done.
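
If you prefer to edit the XML directly, the dependency block looks like this (the version shown is an assumption; match it to the Accumulo version installed on your sandbox):

<dependency>
  <groupId>org.apache.accumulo</groupId>
  <artifactId>accumulo-core</artifactId>
  <version>1.7.0</version>
</dependency>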

Add your sandbox IP to your hosts file

To make it easier to connect to your sandbox, we’ll add the IP address to your hosts file. To do this on a Mac, open up your terminal and type:

sudo nano /etc/hosts

Add a line that looks similar to this (the IP address may vary based on your sandbox’s IP address):

192.168.56.101 sandbox.hortonworks.com

Hosts file

Save and exit.

Edit the Authorizations of the root user

Since the code in this guide covers an Accumulo concept known as Authorizations, you’ll need to give your root user the proper authorizations in order to use it.

To give your root user an authorization to modify Accumulo rows that have a “public” authorization, first power on your sandbox and start up Accumulo by running its start-all shell script, as covered in the previous guide (here <version> is a placeholder for your HDP build version):

/usr/hdp/<version>/accumulo/bin/start-all.sh

Log on to the accumulo shell:

accumulo shell

Enter your password (hadoop).

Then run this code to tell Accumulo to give the root user an authorization for any rows with the ‘public’ authorization:

setauths -s public -u root

Finally, quit out of the accumulo shell:

quit

Create a log4j.properties file

This step is optional, but if you choose to ignore it you’ll have to edit out a few lines of code in the Java program you create. A log4j.properties file will allow you to use a Logger with your Java program, which is useful for debugging.

See Customizing Log/Print Statements in the HBase guide for information on how to create the log4j.properties file.
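
If you just want something minimal to get going, a log4j.properties along these lines should work (a basic console appender; adjust the level to taste):

# Send INFO and above to the console
log4j.rootLogger=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %-5p %c{1} - %m%n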

Create the Java class

Now that everything is set up, create the class you’ll be using to connect to Accumulo. I called my class AccumuloConnection, but call it whatever you like.

Next, add this code to your class.
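
Since the original snippet isn’t reproduced here, below is a minimal sketch of such a client. It assumes the instance name accumulo-instance chosen in the previous guide, the root user with the password hadoop, Zookeeper listening at sandbox.hortonworks.com:2181, and an illustrative table named demo_table:

import java.util.Map.Entry;

import org.apache.accumulo.core.client.BatchWriter;
import org.apache.accumulo.core.client.BatchWriterConfig;
import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.Scanner;
import org.apache.accumulo.core.client.ZooKeeperInstance;
import org.apache.accumulo.core.client.security.tokens.PasswordToken;
import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.Mutation;
import org.apache.accumulo.core.data.Range;
import org.apache.accumulo.core.data.Value;
import org.apache.accumulo.core.security.Authorizations;
import org.apache.accumulo.core.security.ColumnVisibility;
import org.apache.hadoop.io.Text;
import org.apache.log4j.Logger;
import org.apache.log4j.PropertyConfigurator;

public class AccumuloConnection {

    private static final Logger log = Logger.getLogger(AccumuloConnection.class);

    public static void main(String[] args) throws Exception {
        // Point log4j at the properties file created earlier
        PropertyConfigurator.configure("log4j.properties");

        // Connect to Accumulo through Zookeeper; sandbox.hortonworks.com
        // resolves thanks to the /etc/hosts entry added earlier
        ZooKeeperInstance instance =
            new ZooKeeperInstance("accumulo-instance", "sandbox.hortonworks.com:2181");
        Connector connector = instance.getConnector("root", new PasswordToken("hadoop"));

        // Create the table if it doesn't already exist
        String table = "demo_table";
        if (!connector.tableOperations().exists(table)) {
            connector.tableOperations().create(table);
            log.info("Created table " + table);
        }

        // Write a single mutation (row) with the 'public' visibility label.
        // A BatchWriter can take many mutations; here we just write one.
        BatchWriter writer = connector.createBatchWriter(table, new BatchWriterConfig());
        Mutation mutation = new Mutation("row1");
        mutation.put(new Text("colFam"), new Text("colQual"),
            new ColumnVisibility("public"), new Value("myValue".getBytes()));
        writer.addMutation(mutation);
        writer.close();

        // Scan the data back, authorized to see 'public' entries
        Scanner scanner = connector.createScanner(table, new Authorizations("public"));
        scanner.setRange(new Range("row1"));           // the range to scan over
        scanner.fetchColumnFamily(new Text("colFam")); // narrow down the search

        // Iterate through the entries the scanner returns and print them
        for (Entry<Key, Value> entry : scanner) {
            System.out.println(entry.getKey() + " -> " + entry.getValue());
        }
        scanner.close();
    }
}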

Examine the comments of the code to understand how it works. Some notes about the code:

  • It uses a Logger to keep track of system messages as well as any message the developer wants to print
  • It sets up the Logger configuration using the configuration file we created earlier
  • It connects to Accumulo and Zookeeper, then tries to create a table if the table doesn’t already exist
  • Next, it writes a mutation (row) to the server using a BatchWriter
    • Multiple mutations could be written using the BatchWriter, but for this example we just write one
  • Next, a scanner is created
    • Authorizations for the scanner are specified
    • A range to scan over is given
    • A column family is specified to further narrow down the search
  • Finally, the entries the scanner returns are iterated through and printed to console

Hope you enjoyed learning about Accumulo development using Java. If you have any questions, feel free to reach out in the comments or via email.

How to Start Accumulo on the Hortonworks Sandbox

The Hortonworks sandbox is a great virtual environment for learning about technologies in the Hadoop ecosystem. It comes bundled with the ability to start and stop services such as MapReduce, Hive, HBase, Kafka, Spark and more in just a few clicks.

Unfortunately, installing and starting Accumulo on the Hortonworks sandbox is a little trickier than that. Luckily for you, all you have to do is follow the steps in this guide and you’ll be up and running Accumulo in no time.

Setting up

Before we get started, make sure your Hortonworks sandbox has the proper network adapter settings.

While your sandbox is powered off, navigate to the settings of your Hortonworks sandbox by clicking Machine > Settings while ‘Hortonworks Sandbox with HDP 2.3_1’ is highlighted. Navigate to Network once in the settings and ensure that your NAT network adapter is turned on, and that it is the only network adapter turned on:

Enable NAT Adapter

The default settings should be fine.

Once you’ve enabled the correct network adapter, start up your virtual machine.

Installing Accumulo

Since Accumulo doesn’t come bundled with the Hortonworks sandbox you’ll have to install it. Run this code:

yum install accumulo

It will install Accumulo under a directory similar to /usr/hdp/<version>/accumulo, where <version> is your HDP build version. I’ll use <version> as a placeholder in the paths below; substitute the actual directory name you find under /usr/hdp/.

If the install code fails, run this command, then try again:

sudo sed -i "s/mirrorlist=https/mirrorlist=http/" /etc/yum.repos.d/epel.repo

Switch to Host-only adapter

Before we go any further, switch to the host-only adapter on your Hortonworks virtual machine. This will allow you to set things up properly in case you later decide you want to remotely connect to your Accumulo instance.

First, power down your VM.

Navigate to your Hortonworks virtual machine settings: click the Hortonworks sandbox in the VirtualBox Manager, then click Machine > Settings. Navigate to Network and ensure the NAT network adapter is unchecked.

Ensure NAT is turned off

Next, turn on your Host-only network adapter:

Host-only network adapter

The default settings should be fine. If you were able to follow these steps, move on to Copy a Configuration Example.

If you don’t see an option for a Host-only network adapter after Name, navigate to VirtualBox > Preferences > Network > Host-only Networks:

Adding a new host-only network

Click the plus sign to add a new Host-only network. Then go back and ensure that your Hortonworks virtual machine settings (Machine > Settings > Network) are correct. That is, ensure NAT is unchecked, and create a new network adapter for which you select Host-only and vboxnet0. Refer to the screenshots/steps above if you’re still lost, or reach out to me to ask.

Copy a configuration example

Next, you’ll want to copy the files from one of the configuration examples provided by Accumulo to Accumulo’s config directory:

cp /usr/hdp/<version>/accumulo/conf/examples/1GB/standalone/* /usr/hdp/<version>/accumulo/conf/

You can choose different sizes ranging from 512MB to 3GB based on your available memory. I chose 1GB since my sandbox system is on the smaller side.

Edit accumulo-env.sh

Next, open accumulo-env.sh in a text editor. I’ll use vi:

vi /usr/hdp/<version>/accumulo/conf/accumulo-env.sh

Press i to enter insert/edit mode in vi.

Edit the line about your JAVA_HOME to be:

test -z "$JAVA_HOME" && export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk.x86_64

The line above is most likely sandwiched inside an if...else statement. Don’t modify the if...else statements themselves; just modify the line that looks like the one I gave you. The important thing that’s changed in the line I gave you is the path. Everything else should be as-is.

Next edit ZOOKEEPER_HOME to be:

test -z "$ZOOKEEPER_HOME" && export ZOOKEEPER_HOME=/usr/hdp/

Then, find the line with HADOOP_PREFIX and change it to:

test -z "$HADOOP_PREFIX" && export HADOOP_PREFIX=/usr/hdp/

Finally, uncomment this line so that the Accumulo monitor binds to all network interfaces:

export ACCUMULO_MONITOR_BIND_ALL="true"

Press the escape key, then type ‘:wq’ to save and exit vi.

Edit accumulo-site.xml

Open up accumulo-site.xml in vi:

vi /usr/hdp/<version>/accumulo/conf/accumulo-site.xml

Press i to enter edit/insert mode.

Change the instance.secret property’s value to hadoop:

<property>
  <name>instance.secret</name>
  <value>hadoop</value>
  <description>A secret unique to a given instance that all servers must know in order to communicate with one another. Change it before initialization. To change it later use ./bin/accumulo org.apache.accumulo.server.util.ChangeSecret --old [oldpasswd] --new [newpasswd], and then update this file.</description>
</property>

Then scroll down and change trace.token.property.password to hadoop:

<property>
  <name>trace.token.property.password</name>
  <value>hadoop</value>
</property>

Press escape then ‘:wq’ to save and exit

Edit gc, masters, monitor, slaves, and tracers files

To ensure you are able to use your Accumulo instance with a client program (Java) you need to replace ‘localhost’ in all of the following files with your sandbox’s IP address:

  • gc
  • masters
  • monitor
  • slaves
  • tracers

The files are located in your accumulo conf folder:

cd /usr/hdp/<version>/accumulo/conf/

Recall that you can figure out what your IP address is by typing this in your sandbox’s terminal:

ifconfig

Edit Accumulo User Properties

Now you need to change the accumulo user properties. Edit your password file:

vi /etc/passwd

Press i to edit. Scroll all the way to the bottom and edit the accumulo line to read:

accumulo:x:496:501::/home/accumulo:/bin/bash

Don’t worry if the third entry isn’t 496. The important thing is to change the 4th entry to 501 and the 6th entry to /home/accumulo. Press escape then ‘:wq’ to save and exit.

Create a home directory for accumulo

The next step we’ll be doing is creating a home directory for the accumulo data to reside in on your local filesystem and hadoop filesystem. Think of this like a development directory. Create it using this code:

mkdir -p /home/accumulo/data

Then create the matching directory on HDFS:

hadoop fs -mkdir -p /home/accumulo/data

Change the permissions:

chown -R accumulo:hadoop /home/accumulo/data
sudo -u hdfs hadoop fs -chown -R accumulo:hadoop /home/accumulo/data

Initialize Accumulo

Now that you have everything set up, it’s time to initialize Accumulo. Run the following lines of code in your Hortonworks sandbox:

su - accumulo
. /usr/hdp/<version>/accumulo/conf/accumulo-env.sh
accumulo init

Once you run accumulo init a few messages will come up on your screen, followed by a message asking you to give accumulo an instance name. I kept it simple and chose accumulo-instance as mine, but choose whatever you like.

Next, enter the password from earlier: hadoop

Change file permissions of Accumulo folder on HDFS

In order for Accumulo to actually be able to run, we need to change the file permissions of the Accumulo folder on HDFS. To do that, exit the accumulo user by typing ‘su’ into your Hortonworks sandbox terminal.

You will be prompted for a password. Type hadoop.

Now run this code:

sudo -u hdfs hadoop fs -chmod 777 /accumulo

Run Accumulo

If you followed all the above steps, you should now be ready to run Accumulo. Enter this code into your terminal to run the start-all shell script:

/usr/hdp/<version>/accumulo/bin/start-all.sh

You’re done! Accumulo should now be successfully running on your VM. To check, go to the Accumulo monitor page in your browser (the monitor listens on port 50095):

http://<your-sandbox-ip>:50095

If you notice that your instance name is null the first time you load that page, simply reload the page. It should then display properly.

If you just wanted to get Accumulo up and running, congratulations! You’ve successfully completed this guide.

As a bonus, here’s how to stop Accumulo.

Stopping Accumulo

Use this code to stop Accumulo:

/usr/hdp/<version>/accumulo/bin/stop-all.sh

Starting Accumulo Back Up

Want to start Accumulo back up? Run the start-all script again:

/usr/hdp/<version>/accumulo/bin/start-all.sh

Hope you enjoyed reading, and, as always, feel free to reach out with questions. In my next guide I’ll show you how to connect to Accumulo remotely.