Free Cheat-test Samples and Demo Questions Download

Cloudera CCA-505 Exam

Free CCA-505 Sample Questions:

Q: 1
Which process instantiates user code, and executes map and reduce tasks on a cluster running MapReduce V2 (MRv2) on YARN?
A. NodeManager
B. ApplicationMaster
C. ResourceManager
D. TaskTracker
E. JobTracker
F. DataNode
G. NameNode
Answer: A

Q: 2
A user comes to you, complaining that when she attempts to submit a Hadoop job, it fails. There is a directory in HDFS named /data/input. The JAR is named j.jar, and the driver class is named DriverClass. She runs the command:
hadoop jar j.jar DriverClass /data/input /data/output
The error message returned includes the line:
PrivilegedActionException as:training (auth:SIMPLE)
cause: org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/data/input
What is the cause of the error?
A. The Hadoop configuration files on the client do not point to the cluster
B. The directory name is misspelled in HDFS
C. The name of the driver has been spelled incorrectly on the command line
D. The output directory already exists
E. The user is not authorized to run the job on the cluster
Answer: A
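The file:/ scheme in the error shows the client resolved the input path against the local filesystem instead of HDFS, which is what happens when fs.defaultFS is unset or left at file:/// on the client. A minimal sketch of a client-side core-site.xml that would direct unqualified paths to the cluster (the hostname and port are placeholders, not values from the question):

```xml
<!-- core-site.xml on the client machine; namenode.example.com:8020 is
     a placeholder for the cluster's actual NameNode address. -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode.example.com:8020</value>
  </property>
</configuration>
```

With this in place, /data/input resolves to hdfs://namenode.example.com:8020/data/input rather than file:/data/input.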

Q: 3
A slave node in your cluster has four 2TB hard drives installed (4 x 2TB). The DataNode is configured to store HDFS blocks on the disks. You set the value of the dfs.datanode.du.reserved parameter to 100GB. How does this alter HDFS block storage?
A. A maximum of 100 GB on each hard drive may be used to store HDFS blocks
B. All hard drives may be used to store HDFS blocks as long as at least 100 GB in total is available on the node
C. 100 GB on each hard drive may not be used to store HDFS blocks
D. 25 GB on each hard drive may not be used to store HDFS blocks
Answer: C
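Because dfs.datanode.du.reserved is applied per volume, the reservation is taken out of each disk, not out of the node as a whole. A toy arithmetic sketch of the effect on this node (decimal units assumed for simplicity):

```python
# Toy arithmetic, not Hadoop code: dfs.datanode.du.reserved is a
# per-volume setting, so each of the 4 disks gives up the reserved
# amount. Decimal TB/GB are assumed here for simplicity.
TB = 10**12
GB = 10**9

disks = 4
disk_size = 2 * TB
reserved_per_disk = 100 * GB  # dfs.datanode.du.reserved

total_reserved = disks * reserved_per_disk
total_usable = disks * (disk_size - reserved_per_disk)

print(total_reserved // GB)  # 400 (GB withheld across the node)
print(total_usable // GB)    # 7600 (GB usable for HDFS blocks)
```

The node loses 4 x 100 GB = 400 GB of HDFS capacity, i.e. 100 GB on each drive may not be used for blocks.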

Q: 4
You are working on a project where you need to chain together MapReduce and Pig jobs. You also need the ability to use forks, decision points, and path joins. Which ecosystem project should you use to perform these actions?
A. Oozie
B. Zookeeper
C. HBase
D. Sqoop
Answer: A
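Oozie expresses such a pipeline as a workflow DAG defined in XML, with dedicated fork, join, and decision control nodes. A rough sketch of a workflow.xml wiring a MapReduce action and a Pig action in parallel (all node names and the ${skipJobs} variable are illustrative; action bodies are elided):

```xml
<!-- Sketch of an Oozie workflow with a decision, a fork, and a join;
     node names and the skipJobs property are illustrative only. -->
<workflow-app name="demo-wf" xmlns="uri:oozie:workflow:0.4">
  <start to="decision-node"/>
  <decision name="decision-node">
    <switch>
      <case to="end">${skipJobs}</case>
      <default to="fork-node"/>
    </switch>
  </decision>
  <fork name="fork-node">
    <path start="mr-action"/>
    <path start="pig-action"/>
  </fork>
  <action name="mr-action">
    <map-reduce><!-- job configuration elided --></map-reduce>
    <ok to="join-node"/>
    <error to="fail"/>
  </action>
  <action name="pig-action">
    <pig><!-- script and parameters elided --></pig>
    <ok to="join-node"/>
    <error to="fail"/>
  </action>
  <join name="join-node" to="end"/>
  <kill name="fail">
    <message>Workflow failed</message>
  </kill>
  <end name="end"/>
</workflow-app>
```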

Q: 5
Which is the default scheduler in YARN?
A. Fair Scheduler
B. FIFO Scheduler
C. Capacity Scheduler
D. YARN doesn’t configure a default scheduler. You must first assign an appropriate scheduler class in yarn-site.xml
Answer: A
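Note that the answer depends on the distribution: CDH (the platform this exam targets) ships with the Fair Scheduler, while stock Apache Hadoop 2 defaults to the Capacity Scheduler. The scheduler can be selected explicitly in yarn-site.xml, for example:

```xml
<!-- yarn-site.xml: selecting the Fair Scheduler explicitly -->
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
</property>
```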

Q: 6
You have a cluster running with the Fair Scheduler enabled. There are currently no jobs running on the cluster, and you submit Job A, so that only Job A is running on the cluster. A while later, you submit Job B. Now Job A and Job B are running on the cluster at the same time. How will the Fair Scheduler handle these two jobs?
A. When Job A gets submitted, it consumes all the task slots.
B. When Job A gets submitted, it does not consume all the task slots.
C. When Job B gets submitted, Job A has to finish first, before Job B can be scheduled.
D. When Job B gets submitted, it will get assigned tasks, while Job A continues to run with fewer tasks.
Answer: D
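The Fair Scheduler's core idea is that each running job's instantaneous fair share is the cluster's capacity divided evenly among the active jobs (ignoring weights and queues). A toy sketch of that rule (this is illustrative arithmetic, not Hadoop code; the function name is made up):

```python
# Toy model of the Fair Scheduler's even-split rule: with no weights or
# queue configuration, each active job's fair share is total capacity
# divided by the number of running jobs.
def fair_shares(total_slots, jobs):
    share = total_slots // len(jobs)
    return {job: share for job in jobs}

print(fair_shares(100, ["A"]))       # {'A': 100}
print(fair_shares(100, ["A", "B"]))  # {'A': 50, 'B': 50}
```

So when Job B arrives, Job A is not killed or forced to finish; as Job A's tasks complete, freed capacity is assigned to Job B until both converge on their fair shares.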

© 2014, All Rights Reserved