Free Cheat-test Samples and Demo Questions Download

Cloudera CCA-500 Exam - Cheat-Test.com

Free CCA-500 Sample Questions:

Q: 1
You have a cluster running with the FIFO Scheduler enabled. You submit a large job A to the cluster, which you expect to run for one hour. Then, you submit job B to the cluster, which you expect to run for only a couple of minutes.
You submit both jobs with the same priority.
Which two best describe how the FIFO Scheduler arbitrates the cluster resources for the jobs and their tasks?
A. Because there is more than a single job on the cluster, the FIFO Scheduler will enforce a limit on the percentage of resources allocated to a particular job at any given time
B. Tasks are scheduled in the order of their job submission
C. The order of execution of jobs may vary
D. Given jobs A and B submitted in that order, all tasks from job A are guaranteed to finish before all tasks from job B
E. The FIFO Scheduler will give, on average, an equal share of the cluster resources over the job lifecycle
F. The FIFO Scheduler will pass an exception back to the client when job B is submitted, since all slots on the cluster are in use
Answer: A,E
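
For reference, a minimal way to reproduce this scenario on a YARN cluster with the FIFO Scheduler enabled (the jar files and job class names below are hypothetical placeholders):

  # submit the large job first, then the small one
  hadoop jar job-a.jar JobA &
  hadoop jar job-b.jar JobB &
  # list both applications and their current scheduling state
  yarn application -list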

Q: 2
Your Hadoop cluster contains nodes in three racks. You have not configured the dfs.hosts property in the NameNode’s configuration file. What is the result?
A. The NameNode will update the dfs.hosts property to include machines running the DataNode daemon on the next NameNode reboot or with the command dfsadmin -refreshNodes
B. No new nodes can be added to the cluster until you specify them in the dfs.hosts file
C. Any machine running the DataNode daemon can immediately join the cluster
D. Presented with a blank dfs.hosts property, the NameNode will permit DataNodes specified in mapred.hosts to join the cluster
Answer: A
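
For context, a sketch of how the dfs.hosts include file is typically used once you do configure it (the include-file path below is an assumption, not a Cloudera default):

  # add a host to the include file referenced by dfs.hosts in hdfs-site.xml
  echo "worker-01.example.com" >> /etc/hadoop/conf/dfs.hosts.include
  # make the NameNode re-read its include/exclude files without a restart
  hdfs dfsadmin -refreshNodes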

Q: 3
Choose three reasons why you should run the HDFS balancer periodically.
A. To ensure that there is capacity in HDFS for additional data
B. To ensure that all blocks in the cluster are 128MB in size
C. To help HDFS deliver consistent performance under heavy loads
D. To ensure that there is consistent disk utilization across the DataNodes
E. To improve data locality for MapReduce
Answer: D
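
A minimal sketch of a periodic balancer run; the 10% threshold shown is the tool's documented default, so the flag is optional:

  # move blocks between DataNodes until each node's utilization is within
  # 10 percentage points of the cluster-wide average
  hdfs balancer -threshold 10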

Q: 4
Your cluster implements HDFS High Availability (HA). Your two NameNodes are named nn01 and nn02. What occurs when you execute the command: hdfs haadmin -failover nn01 nn02?
A. nn02 is fenced, and nn01 becomes the active NameNode
B. nn01 is fenced, and nn02 becomes the active NameNode
C. nn01 becomes the standby NameNode and nn02 becomes the active NameNode
D. nn02 becomes the standby NameNode and nn01 becomes the active NameNode
Answer: B
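
A short sketch of the failover workflow using the serviceIds from the question; the -getServiceState calls are simply a way to confirm the roles before and after:

  # confirm the current roles
  hdfs haadmin -getServiceState nn01
  hdfs haadmin -getServiceState nn02
  # fail over: the first argument is transitioned to standby (fenced if necessary),
  # the second becomes the active NameNode
  hdfs haadmin -failover nn01 nn02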

Q: 5
Your Hadoop cluster is configured with HDFS and MapReduce version 2 (MRv2) on YARN. Can you configure a worker node to run a NodeManager daemon but not a DataNode daemon and still have a functional cluster?
A. Yes. The daemon will receive data from the NameNode to run Map tasks
B. Yes. The daemon will get data from another (non-local) DataNode to run Map tasks
C. Yes. The daemon will receive Map tasks only
D. Yes. The daemon will receive Reducer tasks only
Answer: A
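
As a sketch, on a tarball-style Hadoop 2 install you would start only the NodeManager on such a worker (assuming $HADOOP_HOME is set; package-based installs use service scripts instead):

  # start YARN's NodeManager on this worker
  $HADOOP_HOME/sbin/yarn-daemon.sh start nodemanager
  # the HDFS DataNode daemon is deliberately not started here:
  #   $HADOOP_HOME/sbin/hadoop-daemon.sh start datanode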

Q: 6
You are planning a Hadoop cluster and considering implementing 10 Gigabit Ethernet as the network fabric. Which workloads benefit the most from faster network fabric?
A. When your workload generates a large amount of output data, significantly larger than the amount of intermediate data
B. When your workload consumes a large amount of input data, relative to the entire capacity of HDFS
C. When your workload consists of processor-intensive tasks
D. When your workload generates a large amount of intermediate data, on the order of the input data itself
Answer: A

Q: 7
On a cluster running CDH 5.0 or above, you use the hadoop fs -put command to write a 300 MB file into a previously empty directory using an HDFS block size of 64 MB. Just after this command has finished writing 200 MB of this file, what would another user see when they look in the directory?
A. The directory will appear to be empty until the entire file write is completed on the cluster
B. They will see the file with a ._COPYING_ extension on its name. If they view the file, they will see contents of the file up to the last completed block (as each 64MB block is written, that block becomes available)
C. They will see the file with a ._COPYING_ extension on its name. If they attempt to view the file, they will get a ConcurrentFileAccessException until the entire file write is completed on the cluster
D. They will see the file with its original name. If they attempt to view the file, they will get a ConcurrentFileAccessException until the entire file write is completed on the cluster
Answer: D
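
To observe the behaviour the answer choices describe, you can reproduce the scenario from two terminals (the file and directory names below are hypothetical):

  # terminal 1: write a 300 MB file into an empty directory
  hadoop fs -mkdir -p /user/alice/demo
  hadoop fs -put big-300mb.dat /user/alice/demo/
  # terminal 2: while the put is still running, list the directory to see
  # how the partially written file appears
  hadoop fs -ls /user/alice/demo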

Q: 8
Which two features does Kerberos security add to a Hadoop cluster?
A. User authentication on all remote procedure calls (RPCs)
B. Encryption for data during transfer between the Mappers and Reducers
C. Encryption for data on disk (“at rest”)
D. Authentication for user access to the cluster against a central server
E. Root access to the cluster for users hdfs and mapred but non-root access for clients
Answer: B,D
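
For context, once a cluster is Kerberized (hadoop.security.authentication set to kerberos in core-site.xml), every client RPC must carry a Kerberos ticket. A minimal client-side sketch, with a hypothetical principal:

  # obtain a ticket-granting ticket for the user
  kinit alice@EXAMPLE.COM
  # confirm the ticket is in the credential cache
  klist
  # this HDFS RPC is now authenticated against the KDC-backed identity
  hadoop fs -ls /user/alice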


© 2014 Cheat-Test.com, All Rights Reserved