Latest HDPCD Practice Tests with Actual Questions

Get the complete pool of questions with Premium PDF and Test Engine

Exam Code : HDPCD
Exam Name : Hortonworks Data Platform Certified Developer
Vendor Name : Hortonworks











https://killexams.com/pass4sure/exam-detail/HDPCD



Question: 97

You write a MapReduce job to process 100 files in HDFS. Your MapReduce algorithm uses TextInputFormat; the mapper applies a regular expression to each input value and emits key-value pairs, with the key consisting of the matching text and the value containing the filename and byte offset. Determine the difference between setting the number of reducers to one and setting the number of reducers to zero.


  1. There is no difference in output between the two settings.

  2. With zero reducers, no reducer runs and the job throws an exception. With one reducer, instances of matching patterns are stored in a single file on HDFS.

  3. With zero reducers, all instances of matching patterns are gathered together in one file on HDFS. With one reducer, instances of matching patterns are stored in multiple files on HDFS.

  4. With zero reducers, instances of matching patterns are stored in multiple files on HDFS. With one reducer, all instances of matching patterns are gathered together in one file on HDFS.




Answer: D



Explanation:

With zero reducers the job is map-only: each map task writes its output directly to HDFS, producing one file per mapper, so the matching patterns end up spread across multiple files. With one reducer, all map output is shuffled to a single reduce task, which writes one file containing every match.

For reference, the default Mapper.run() method that drives each map task:

public void run(Context context) throws IOException, InterruptedException {
  setup(context);
  while (context.nextKeyValue()) {
    map(context.getCurrentKey(), context.getCurrentValue(), context);
  }
  cleanup(context);
}


setup(Context) - Perform any setup for the mapper. The default implementation is a no-op method.

map(Key, Value, Context) - Perform a map operation on the given Key/Value pair. The default implementation calls Context.write(Key, Value).

cleanup(Context) - Perform any cleanup for the mapper. The default implementation is a no-op method.


Reference:

Hadoop/MapReduce/Mapper
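
To make the difference concrete, here is a minimal driver sketch (the GrepDriver class name and argument paths are illustrative, not part of the exam item); the reducer count is set with a single call on the Job:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class GrepDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "grep");
        job.setJarByClass(GrepDriver.class);
        job.setInputFormatClass(TextInputFormat.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        // job.setMapperClass(RegexMapper.class); // the regex mapper described above (not shown)

        // setNumReduceTasks(0): map-only job; each mapper writes its own
        // part-m-NNNNN file, so matches land in multiple files on HDFS.
        // setNumReduceTasks(1): all map output is shuffled to one reducer,
        // which writes a single part-r-00000 file containing every match.
        job.setNumReduceTasks(1);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}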



Question: 102

Which one of the following files is required in every Oozie Workflow application?


  1. job.properties

  2. config-default.xml

  3. workflow.xml

  4. oozie.xml




Answer: C
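
Explanation:

Every Oozie workflow application must include a workflow.xml that defines its actions and transitions; job.properties is supplied at submission time and config-default.xml is optional. A minimal sketch (the app name, action name, and variables are illustrative):

<workflow-app name="demo-wf" xmlns="uri:oozie:workflow:0.4">
    <start to="mr-node"/>
    <action name="mr-node">
        <map-reduce>
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
        </map-reduce>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>Action failed</message>
    </kill>
    <end name="end"/>
</workflow-app>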



Question: 103

Which one of the following statements is FALSE regarding the communication between DataNodes and a federation of NameNodes in Hadoop 2.2?


  1. Each DataNode receives commands from one designated master NameNode.

  2. DataNodes send periodic heartbeats to all the NameNodes.

  3. Each DataNode registers with all the NameNodes.

  4. DataNodes send periodic block reports to all the NameNodes.




Answer: A
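
Explanation:

In a federated cluster there is no single designated master for a DataNode: each DataNode registers with, heartbeats to, and sends block reports to all NameNodes, and handles commands from any of them. A sketch of the hdfs-site.xml that drives this behavior (nameservice IDs and hostnames are hypothetical; the property names are the real federation settings):

<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>ns1,ns2</value>
  </property>
  <property>
    <!-- Each DataNode reads every nameservice listed above and
         registers with each NameNode's RPC address. -->
    <name>dfs.namenode.rpc-address.ns1</name>
    <value>nn1.example.com:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns2</name>
    <value>nn2.example.com:8020</value>
  </property>
</configuration>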



Question: 104

In a MapReduce job with 500 map tasks, how many map task attempts will there be?


  1. It depends on the number of reducers in the job.

  2. Between 500 and 1000.

  3. At most 500.

  4. At least 500.

  5. Exactly 500.




Answer: D



Explanation:

From the Cloudera training course: a task attempt is a particular instance of an attempt to execute a task. Every task requires at least one attempt, and a failed or speculatively executed task spawns additional attempts, so a job with 500 map tasks yields at least 500 map task attempts.
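
A brief sketch of the settings behind this (property names are the MRv2 names; the values shown are the usual defaults):

import org.apache.hadoop.conf.Configuration;

public class AttemptConfig {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Speculative execution may launch a second attempt of a slow task.
        conf.setBoolean("mapreduce.map.speculative", true);
        // A failed task is retried up to this many times before the job fails.
        conf.setInt("mapreduce.map.maxattempts", 4);
        // Every task needs at least one attempt; failures and speculation
        // only add more, so 500 map tasks means at least 500 attempts.
        System.out.println(conf.get("mapreduce.map.maxattempts"));
    }
}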



Question: 105

Review the following 'data' file and Pig code.



Which one of the following statements is true?


  1. The output of the DUMP D command is (M,{(M,62,95102),(M,38,95111)})

  2. The output of the DUMP D command is (M,{(38,95111),(62,95102)})

  3. The code executes successfully but there is no output because the D relation is empty

  4. The code does not execute successfully because D is not a valid relation




Answer: A



Question: 106

Which one of the following is NOT a valid Oozie action?


  1. mapreduce

  2. pig

  3. hive

  4. mrunit




Answer: D



Question: 107

Examine the following Hive statements:



Assuming the statements above execute successfully, which one of the following statements is true?


  1. Each reducer generates a file sorted by age

  2. The SORT BY command causes only one reducer to be used

  3. The output of each reducer is only the age column

  4. The output is guaranteed to be a single file with all the data sorted by age




Answer: A
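
Explanation:

In Hive, SORT BY orders rows within each reducer only, so each reducer's output file is sorted but there is no global order across files; ORDER BY forces a single reducer to produce one totally ordered result. A hedged illustration (the people table and columns are hypothetical, since the original statements are not reproduced here):

-- SORT BY: each reducer's output file is sorted by age; no global order.
SELECT name, age FROM people SORT BY age;

-- ORDER BY: Hive uses a single reducer, producing one fully sorted output.
SELECT name, age FROM people ORDER BY age;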



Question: 108

Your client application submits a MapReduce job to your Hadoop cluster. Identify the Hadoop daemon on which the Hadoop framework will look for an available slot to schedule a MapReduce operation.


  1. TaskTracker

  2. NameNode

  3. DataNode

  4. JobTracker

  5. Secondary NameNode




Answer: D



Explanation:

The JobTracker is the daemon service for submitting and tracking MapReduce jobs in Hadoop. Only one JobTracker process runs on any Hadoop cluster, in its own JVM; in a typical production cluster it runs on a separate machine. Each slave node is configured with the JobTracker node's location. The JobTracker is a single point of failure for the Hadoop MapReduce service: if it goes down, all running jobs are halted. The JobTracker performs the following actions (from the Hadoop wiki):

Client applications submit jobs to the JobTracker.

The JobTracker talks to the NameNode to determine the location of the data.

The JobTracker locates TaskTracker nodes with available slots at or near the data.

The JobTracker submits the work to the chosen TaskTracker nodes.

The TaskTracker nodes are monitored. If they do not submit heartbeat signals often enough, they are deemed to have failed and the work is scheduled on a different TaskTracker.

A TaskTracker notifies the JobTracker when a task fails. The JobTracker decides what to do then: it may resubmit the job elsewhere, it may mark that specific record as something to avoid, and it may even blacklist the TaskTracker as unreliable.

When the work is completed, the JobTracker updates its status. Client applications can poll the JobTracker for information.
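
As a sketch of that last point, a client can poll the JobTracker for cluster slot information through the classic MRv1 JobClient API (the SlotReport class name is illustrative; the configuration is picked up from the classpath):

import org.apache.hadoop.mapred.ClusterStatus;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class SlotReport {
    public static void main(String[] args) throws Exception {
        JobClient client = new JobClient(new JobConf());
        ClusterStatus status = client.getClusterStatus();
        // Slots are a fixed per-TaskTracker resource; the JobTracker
        // schedules a map attempt only where a map slot is free.
        System.out.println("TaskTrackers: " + status.getTaskTrackers());
        System.out.println("Map slots:    " + status.getMaxMapTasks());
        System.out.println("Reduce slots: " + status.getMaxReduceTasks());
    }
}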


Reference:

24 Interview Questions & Answers for Hadoop MapReduce developers: What is a JobTracker in Hadoop? How many instances of JobTracker run on a Hadoop cluster?