To connect to an Amazon EC2 node, such as the master nodes for the Hadoop clusters you will be creating, you need an SSH key pair. To create and install one, do the following:
Create the key pair in the AWS Management Console and save the private key file; we will refer to it as </path/to/saved/keypair/file.pem> in the following instructions. Then make the file readable only by you:
$ chmod 600 </path/to/saved/keypair/file.pem>
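You can check that the permissions took effect with ls -l; the mode column for the file should read -rw------- (owner read/write only):
$ ls -l </path/to/saved/keypair/file.pem>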
To run a Pig job on AWS, you need to start up an AWS cluster using the AWS Management Console and then connect to the Hadoop master node, as follows:
$ ssh -o "ServerAliveInterval 10" -i </path/to/saved/keypair/file.pem> hadoop@<master.public-dns-name.amazonaws.com>
$ pig
(Note that you type plain pig here, instead of pig -x local: without -x local, Pig runs your queries on the Hadoop cluster rather than in local mode.) You are now at the grunt> prompt. This is the interactive mode where you type in Pig queries; a sketch of what such a session looks like is shown below. Here you will cut and paste the contents of example.pig, but only after you read "Managing the results of your Pig queries" below. In this homework we will use Pig only interactively. (The alternative is to have Pig read the program from a file.)
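As an illustration only, a grunt> session has roughly the following shape. The LOAD path, the schema, and the intermediate relation names are hypothetical (we do not reproduce the actual contents of example.pig here); only the last store command is taken from the script:
grunt> raw = LOAD '/user/hadoop/input.txt' USING PigStorage('\t') AS (subject:chararray, predicate:chararray, object:chararray);  -- hypothetical input file and schema
grunt> grouped = GROUP raw BY object;
grunt> count_by_object = FOREACH grouped GENERATE group, COUNT(raw) AS cnt;
grunt> count_by_object_ordered = ORDER count_by_object BY cnt DESC;
grunt> store count_by_object_ordered into '/user/hadoop/example-results' using PigStorage();
grunt> quit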
To exit Pig, type quit at the grunt> prompt. To terminate the ssh session, type exit at the unix prompt; after that you must terminate the AWS cluster (see next).

After you are done, shut down the AWS cluster: return to the AWS Management Console and terminate the cluster there.
Pay attention to this step. If you fail to terminate the cluster and only close the browser, or log off AWS, the cluster will continue to run, and AWS will continue to charge you: for hours, days, weeks, and when your credit is exhausted, it charges your credit card. Make sure you don't leave the console until you have confirmation that the job is terminated.
You are required in this homework to monitor the running Hadoop jobs on your AWS cluster using the master node's job tracker web UI. There are two ways to do this: using lynx or using your own browser with a SOCKS proxy.
To use lynx, a text-based browser, open an ssh connection to the AWS master node and type:
% lynx http://localhost:9100/
Navigate in lynx as follows: the up/down arrows move through the links (the current link is highlighted), enter follows a link, and the left arrow returns to the previous page.

To use your own browser instead, install the FoxyProxy extension for Firefox and copy the foxyproxy.xml configuration file from the hw6/ folder into your Firefox profile folder. If FoxyProxy does not work with this configuration, try deleting the foxyproxy.xml you copied into your profile, and using Amazon's instructions to set up FoxyProxy manually. Then, in a new terminal window on your local machine, open an ssh tunnel that runs the SOCKS proxy:
$ ssh -o "ServerAliveInterval 10" -i </path/to/saved/keypair/file.pem> -ND 8888 hadoop@<master.public-dns-name.amazonaws.com>
(The -N option tells ssh not to start a shell, and the -D 8888 option tells ssh to start the proxy and have it listen on port 8888.) With the tunnel running and FoxyProxy enabled, you can open the cluster's web UIs in your own browser at:
http://<master.public-dns-name.amazonaws.com>:9100/
http://<master.public-dns-name.amazonaws.com>:9101/
If you want to kill Pig, first type Ctrl-C, which kills only the Pig client; the Hadoop job keeps running. Next, kill the Hadoop job as follows: from the job tracker interface, find the Hadoop job_id, then type:
% hadoop job -kill job_id
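If you prefer the command line to the job tracker UI, you can also look up the job id on the master node itself; the id below is made up for illustration:
% hadoop job -list
% hadoop job -kill job_201301011234_0001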
Your pig program stores the results in several files in a directory. You have two options: (1) store these files in the Hadoop File System, or (2) store these files in S3. In both cases you need to copy them to your local machine.
The output location is specified through the following Pig command (used in example.pig):
store count_by_object_ordered into '/user/hadoop/example-results' using PigStorage();
Before you run the pig query, you need to (A) create the /user/hadoop directory. After you run the query you need to (B) copy this directory to the local directory of the AWS master node, then (C) copy this directory from the AWS master node to your local machine.
To create a /user/hadoop directory on the AWS cluster's HDFS file system, run this from the AWS master node:
% hadoop dfs -mkdir /user/hadoop
Check that the directory was created by listing it with this command:
% hadoop dfs -ls /user/hadoop
You may see some output from either command, but you should not see any errors. Now you can run example.pig.
The result of a Pig script is stored in the Hadoop directory specified by the store command. That is, for example.pig, the output will be stored at /user/hadoop/example-results, as specified in the script.
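You can verify that the results are in place before copying them; the listing should show one or more part-* files (depending on the Hadoop version, _logs or _SUCCESS entries may appear as well):
% hadoop dfs -ls /user/hadoop/example-results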
HDFS is separate from the master node's file system, so
before you can copy this to your local machine, you must copy the directory
from HDFS to the master node's Linux file system:
% hadoop dfs -copyToLocal /user/hadoop/example-results example-results
This will create a directory example-results with part-* files in it, which you can copy to your local machine with scp.
For the example, there may be only one part-* file, but generally you will have several. You can then concatenate all the part-*
files to get
a single results file, perhaps sorting the results if you like.
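For instance, once the directory is on your local machine (see scp below), the following produces a single file; sorting numerically on the second column assumes the counts are in that column, which may not match your data:
$ cat example-results/part-* > all-results.txt
$ sort -t$'\t' -k2,2 -nr all-results.txt > all-results-sorted.txt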
Use hadoop dfs -help or see the hadoop dfs guide to learn how to manipulate HDFS. (Note that hadoop fs is the same as hadoop dfs.)
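A few hadoop dfs subcommands you are likely to need, all run on the master node (the paths are examples):
% hadoop dfs -cat /user/hadoop/example-results/part-00000   (print a result file)
% hadoop dfs -rmr /user/hadoop/example-results   (remove a directory, e.g. before re-running a query)
% hadoop dfs -get /user/hadoop/example-results example-results   (like -copyToLocal above)
To copy an individual file from the AWS master node to your local machine, run scp on your local computer: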
$ scp -o "ServerAliveInterval 10" -i </path/to/saved/keypair/file.pem> hadoop@<master.public-dns-name.amazonaws.com>:<file_path> .
where <file_path> can be absolute or relative to the AWS master node's home folder. The file will be copied into your current directory ('.') on your local computer. To copy an entire directory, such as example-results, add the -r flag; type the following on your local computer:
$ scp -o "ServerAliveInterval 10" -i </path/to/saved/keypair/file.pem> -r hadoop@<master.public-dns-name.amazonaws.com>:example-results .
This seems much easier to use. Go to your AWS Management Console, click on Create Bucket, and create a new bucket (a bucket is S3's equivalent of a directory). Since bucket names share a global namespace, give it a name that you don't mind being public; let's say you call it superman-hw6. Click on the Properties button, then the Permissions tab, and make sure you have all the permissions.
Modify the store command of example.pig to:
store count_by_object_ordered into 's3n://superman-hw6/example-results';
Run your Pig program. When it terminates, you should see the new directory example-results in your S3 console. Click on individual files to download them. The number of files depends on the number of reduce tasks and may vary from one to a few dozen. The only disadvantage of using S3 is that you have to click on each file separately to download it.
Note that S3 is permanent storage, and you are charged for it. You can safely store all your query answers for several weeks without exceeding your credit; at some point in the future remember to delete them.