
Cloudera CCD-410 Exam - Cheat-Test.com

Free CCD-410 Sample Questions:

Q: 1
You have just executed a MapReduce job. Where is intermediate data written to after being emitted from the Mapper’s map method?
A. Intermediate data is streamed across the network from the Mapper to the Reducer and is never written to disk.
B. Into in-memory buffers on the TaskTracker node running the Mapper that spill over and are written into HDFS.
C. Into in-memory buffers that spill over to the local file system of the TaskTracker node running the Mapper.
D. Into in-memory buffers that spill over to the local file system (outside HDFS) of the TaskTracker node running the Reducer.
E. Into in-memory buffers on the TaskTracker node running the Reducer that spill over and are written into HDFS.
Answer: C
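
For context, the size of that in-memory buffer and the local (non-HDFS) directories the spills land in are governed by configuration. A minimal mapred-site.xml sketch using the MRv1 property names this exam assumes (the buffer size and paths are illustrative, not part of the exam):

  <property>
    <name>io.sort.mb</name>
    <value>100</value>               <!-- size (MB) of the in-memory map-output buffer -->
  </property>
  <property>
    <name>mapred.local.dir</name>    <!-- local disk, outside HDFS, where spills are written -->
    <value>/data/1/mapred/local,/data/2/mapred/local</value>
  </property>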

Q: 2
You want to understand more about how users browse your public website, such as which pages they visit prior to placing an order. You have a farm of 200 web servers hosting your website. How will you gather this data for your analysis?
A. Ingest the server web logs into HDFS using Flume.
B. Write a MapReduce job, with the web servers for mappers, and the Hadoop cluster nodes for reducers.
C. Import all users’ clicks from your OLTP databases into Hadoop, using Sqoop.
D. Channel these clickstreams into Hadoop using Hadoop Streaming.
E. Sample the weblogs from the web servers, copying them into Hadoop using curl.
Answer: A
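
For reference, a minimal sketch of a Flume agent that tails a web server's access log into HDFS (the agent name a1, the log path, and the HDFS path are assumptions for illustration):

  a1.sources = r1
  a1.channels = c1
  a1.sinks = k1
  # tail the web server access log (path is an assumption)
  a1.sources.r1.type = exec
  a1.sources.r1.command = tail -F /var/log/httpd/access_log
  a1.sources.r1.channels = c1
  a1.channels.c1.type = memory
  # land events in HDFS for later analysis
  a1.sinks.k1.type = hdfs
  a1.sinks.k1.channel = c1
  a1.sinks.k1.hdfs.path = /user/flume/weblogs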

Q: 3
MapReduce v2 (MRv2/YARN) is designed to address which two issues?
A. Single point of failure in the NameNode.
B. Resource pressure on the JobTracker.
C. HDFS latency.
D. Ability to run frameworks other than MapReduce, such as MPI.
E. Reduce complexity of the MapReduce APIs.
F. Standardize on a single MapReduce API.
Answer: B,D

Q: 4
You need to run the same job many times with minor variations. Rather than hardcoding all job configuration options in your driver code, you've decided to have your Driver subclass org.apache.hadoop.conf.Configured and implement the org.apache.hadoop.util.Tool interface.
Identify which invocation correctly passes mapred.job.name with a value of Example to Hadoop.
A. hadoop “mapred.job.name=Example” MyDriver input output
B. hadoop MyDriver mapred.job.name=Example input output
C. hadoop MyDriver -D mapred.job.name=Example input output
D. hadoop setproperty mapred.job.name=Example MyDriver input output
E. hadoop setproperty (“mapred.job.name=Example”) MyDriver input output
Answer: C
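
A minimal sketch of such a driver (class name and argument handling are illustrative); ToolRunner's GenericOptionsParser strips -D options out of args and folds them into the Configuration before run() is called:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.conf.Configured;
  import org.apache.hadoop.util.Tool;
  import org.apache.hadoop.util.ToolRunner;

  public class MyDriver extends Configured implements Tool {
    @Override
    public int run(String[] args) throws Exception {
      // -D mapred.job.name=Example has already been parsed into getConf()
      Configuration conf = getConf();
      System.out.println("job name: " + conf.get("mapred.job.name"));
      // args now holds only the remaining arguments: input and output
      return 0;
    }

    public static void main(String[] args) throws Exception {
      System.exit(ToolRunner.run(new MyDriver(), args));
    }
  }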

Q: 5
You are developing a MapReduce job for sales reporting. The mapper will process input keys representing the year (IntWritable) and input values representing product identifiers (Text).
Identify what determines the data types used by the Mapper for a given job.
A. The key and value types specified in the JobConf.setMapInputKeyClass and JobConf.setMapInputValuesClass methods
B. The data types specified in HADOOP_MAP_DATATYPES environment variable
C. The mapper-specification.xml file submitted with the job determines the mapper's input key and value types.
D. The InputFormat used by the job determines the mapper’s input key and value types.
Answer: D
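
To see why D is correct: the Mapper's input type parameters must match what the job's InputFormat produces. A sketch assuming the (year, product) pairs arrive as a SequenceFile of (IntWritable, Text) records (the storage format is an assumption for illustration):

  import java.io.IOException;
  import org.apache.hadoop.io.IntWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.Mapper;

  // The driver selects the matching InputFormat, e.g.
  //   job.setInputFormatClass(SequenceFileInputFormat.class);
  public class SalesMapper extends Mapper<IntWritable, Text, IntWritable, Text> {
    @Override
    protected void map(IntWritable year, Text productId, Context context)
        throws IOException, InterruptedException {
      context.write(year, productId);  // pass (year, productId) through unchanged
    }
  }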

Q: 6
Identify the MapReduce v2 (MRv2/YARN) daemon responsible for launching application containers and monitoring application resource usage.
A. ResourceManager
B. NodeManager
C. ApplicationMaster
D. ApplicationMasterService
E. TaskTracker
F. JobTracker
Answer: B
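
For context, the limits the NodeManager enforces while monitoring its containers are set in yarn-site.xml; a minimal sketch (values are illustrative defaults):

  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>8192</value>   <!-- total memory this node's containers may use -->
  </property>
  <property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>2.1</value>    <!-- virtual/physical memory ratio enforced per container -->
  </property>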


© 2014 Cheat-Test.com, All Rights Reserved