Latest HCAHD Practice Tests with Actual Questions

Get Complete pool of questions with Premium PDF and Test Engine

Exam Code : HCAHD
Exam Name : Apache Hadoop Developer
Vendor Name : Hitachi











https://killexams.com/pass4sure/exam-detail/HCAHD



Question: 24


Assuming the following Hive query executes successfully:



Which one of the following statements describes the result set?


  1. A bigram of the top 80 sentences that contain the substring "you are" in the lines column of the inputdata table.

  2. An 80-value ngram of sentences that contain the words "you" or "are" in the lines column of the inputdata table.

  3. A trigram of the top 80 sentences that contain "you are" followed by a null space in the lines column of the inputdata table.

  4. A frequency distribution of the top 80 words that follow the subsequence "you are" in the lines column of the inputdata table.




Answer: D
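The screenshot of the query was not reproduced above. A query matching answer D would use Hive's context_ngrams UDF; the table and column names below (inputdata, lines) are taken from the answer options, and the exact original query is an assumption:

```sql
-- Top 80 words that follow "you are" in the lines column; the NULL marks
-- the position whose most frequent fillers we want returned.
SELECT context_ngrams(sentences(lower(lines)), array("you", "are", NULL), 80)
FROM inputdata;
```

context_ngrams returns the fillers for the NULL slot ranked by frequency, which is exactly the "frequency distribution of the top 80 words that follow 'you are'" described in answer D.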
Question: 25

Given the following Pig commands:



Which one of the following statements is true?


  1. The $1 variable represents the first column of data in 'my.log'

  2. The $1 variable represents the second column of data in 'my.log'

  3. The severe relation is not valid

  4. The grouped relation is not valid




Answer: B
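The Pig commands themselves were not reproduced above. A minimal sketch consistent with answer B — positional references are zero-based, so $1 is the second column — might look like the following; the relation and field names are assumptions:

```pig
-- $0 is the first column, $1 the second, and so on.
logs    = LOAD 'my.log' AS (date, level, message);
severe  = FILTER logs BY $1 == 'severe';
grouped = GROUP severe BY $1;
```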
Question: 26

What does Pig provide to the overall Hadoop solution?


  1. Legacy language Integration with MapReduce framework

  2. Simple scripting language for writing MapReduce programs

  3. Database table and storage management services

  4. C++ interface to MapReduce and data warehouse infrastructure




Answer: B
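As an illustration of answer B, the classic word count, which would otherwise require a full Java MapReduce program, is only a few lines of Pig Latin (the file names here are placeholders):

```pig
lines  = LOAD 'input.txt' AS (line:chararray);
words  = FOREACH lines GENERATE FLATTEN(TOKENIZE(line)) AS word;
grpd   = GROUP words BY word;
counts = FOREACH grpd GENERATE group AS word, COUNT(words) AS n;
STORE counts INTO 'wordcount_out';
```

Pig compiles this script into one or more MapReduce jobs behind the scenes.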


Question: 27


What types of algorithms are difficult to express in MapReduce v1 (MRv1)?


  1. Algorithms that require applying the same mathematical function to large numbers of individual binary records.

  2. Relational operations on large amounts of structured and semi-structured data.

  3. Algorithms that require global, sharing states.

  4. Large-scale graph algorithms that require one-step link traversal.

  5. Text analysis algorithms on large collections of unstructured text (e.g., web crawls).




Answer: C



Explanation:

Limitations of MapReduce, i.e., where not to use MapReduce:


While very powerful and applicable to a wide variety of problems, MapReduce is not the answer to every problem. In particular, MRv1 tasks run in isolation and communicate only through the shuffle, so algorithms that require a global, shared state are difficult to express as MapReduce jobs.



Question: 28


You need to create a job that does frequency analysis on input data. You will do this by writing a Mapper that uses TextInputFormat and splits each value (a line of text from an input file) into individual characters. For each one of these characters, you will emit the character as a key and an IntWritable as the value.


As this will produce proportionally more intermediate data than input data, which two resources should you expect to be bottlenecks?


  1. Processor and network I/O

  2. Disk I/O and network I/O

  3. Processor and RAM

  4. Processor and disk I/O




Answer: B
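Outside Hadoop, the mapper's per-character emission can be sketched in plain Java to show why the intermediate data balloons: every input character becomes one key-value record, so the shuffle's disk and network I/O dominate. The class and method names below are illustrative:

```java
import java.util.HashMap;
import java.util.Map;

// Simulates what the mapper in question 28 produces: one (character, 1)
// record per input character. The intermediate data volume is therefore
// proportional to input size times per-record overhead, which is why the
// shuffle's disk and network I/O become the bottleneck.
public class CharFrequency {

    // Local aggregation of the emitted (character, 1) pairs,
    // the way a combiner would sum them before the shuffle.
    public static Map<Character, Integer> count(String line) {
        Map<Character, Integer> freq = new HashMap<>();
        for (char c : line.toCharArray()) {
            freq.merge(c, 1, Integer::sum);
        }
        return freq;
    }

    public static void main(String[] args) {
        System.out.println(count("you are"));
    }
}
```

In a real job, configuring a Combiner that performs this same local summing would shrink the intermediate data before it hits disk and the network.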
Question: 29

Which one of the following statements regarding the components of YARN is FALSE?


  1. A Container executes a specific task as assigned by the ApplicationMaster

  2. The ResourceManager is responsible for scheduling and allocating resources

  3. A client application submits a YARN job to the ResourceManager

  4. The ResourceManager monitors and restarts any failed Containers




Answer: D
Question: 30

You are developing a combiner that takes as input Text keys, IntWritable values, and emits Text keys, IntWritable values.


Which interface should your class implement?


  1. Combiner <Text, IntWritable, Text, IntWritable>

  2. Mapper <Text, IntWritable, Text, IntWritable>

  3. Reducer <Text, Text, IntWritable, IntWritable>

  4. Reducer <Text, IntWritable, Text, IntWritable>

  5. Combiner <Text, Text, IntWritable, IntWritable>




Answer: D
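A minimal sketch of such a combiner, assuming the older org.apache.hadoop.mapred API (where Reducer is an interface to implement rather than a class to extend); it requires the Hadoop libraries on the classpath, and the class name is illustrative:

```java
import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

// A combiner implements the same Reducer interface; input and output
// key/value types are both <Text, IntWritable> here.
public class SumCombiner extends MapReduceBase
        implements Reducer<Text, IntWritable, Text, IntWritable> {

    public void reduce(Text key, Iterator<IntWritable> values,
                       OutputCollector<Text, IntWritable> output,
                       Reporter reporter) throws IOException {
        int sum = 0;
        while (values.hasNext()) {
            sum += values.next().get();   // sum partial counts locally
        }
        output.collect(key, new IntWritable(sum));
    }
}
```

The type parameters are (input key, input value, output key, output value), which is why answer D's ordering is the correct one.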
Question: 31

Which one of the following Hive commands uses an HCatalog table named x?


  1. SELECT * FROM x;

  2. SELECT x.* FROM org.apache.hcatalog.hive.HCatLoader('x');

  3. SELECT * FROM org.apache.hcatalog.hive.HCatLoader('x');

  4. Hive commands cannot reference an HCatalog table




Answer: A
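For reference: Hive and HCatalog share the same metastore, so a Hive query refers to an HCatalog-managed table directly by name, whereas HCatLoader is a Pig load function (org.apache.hcatalog.pig.HCatLoader), not Hive syntax. A sketch using the table name x from the question:

```sql
-- Hive: an HCatalog-managed table is queried directly by name.
SELECT * FROM x;
```

The Pig equivalent would be `A = LOAD 'x' USING org.apache.hcatalog.pig.HCatLoader();`.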
Question: 32

Given the following Pig command:


logevents = LOAD 'input/my.log' AS (date:chararray, level:chararray, code:int, message:chararray);

Which one of the following statements is true?

  1. The logevents relation represents the data from the my.log file, using a comma as the parsing delimiter

  2. The logevents relation represents the data from the my.log file, using a tab as the parsing delimiter

  3. The first field of logevents must be a properly-formatted date string, or the statement returns an error

  4. The statement is not a valid Pig command




Answer: B
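As a sketch of why answer B holds: with no USING clause, Pig falls back to PigStorage with a tab delimiter, so the two statements below are equivalent (the schema here mirrors the question's):

```pig
logevents = LOAD 'input/my.log'
            AS (date:chararray, level:chararray, code:int, message:chararray);

-- Identical behavior, with the delimiter made explicit:
logevents = LOAD 'input/my.log' USING PigStorage('\t')
            AS (date:chararray, level:chararray, code:int, message:chararray);
```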
Question: 33

Consider the following two relations, A and B.


  1. C = DOIN B BY a1, A by b2;

  2. C = JOIN A by a1, B by b2;

  3. C = JOIN A a1, B b2;

  4. C = JOIN A $0, B $1;




Answer: B
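The relations A and B were not reproduced above. A self-contained sketch of the valid form in option 2 (JOIN with a BY clause naming one field from each relation), with assumed schemas and file names:

```pig
A = LOAD 'a.txt' AS (a1:int, a2:chararray);
B = LOAD 'b.txt' AS (b1:chararray, b2:int);

-- Inner join on A.a1 == B.b2; Pig keywords are case-insensitive.
C = JOIN A by a1, B by b2;
DUMP C;
```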
Question: 34

Given the following Hive commands:



Which one of the following statements is true?


  1. The file mydata.txt is copied to a subfolder of /apps/hive/warehouse

  2. The file mydata.txt is moved to a subfolder of /apps/hive/warehouse

  3. The file mydata.txt is copied into Hive's underlying relational database

  4. The file mydata.txt does not move from its current location in HDFS




Answer: A
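The Hive commands from the screenshot were not reproduced above. For reference, LOAD DATA's copy-versus-move behavior depends on the LOCAL keyword (paths and the table name below are placeholders):

```sql
-- With LOCAL: the file is COPIED from the local filesystem
-- into the table's warehouse directory.
LOAD DATA LOCAL INPATH '/tmp/mydata.txt' INTO TABLE mytable;

-- Without LOCAL: the file already lives in HDFS and is MOVED,
-- not copied, into the warehouse directory.
LOAD DATA INPATH '/user/me/mydata.txt' INTO TABLE mytable;
```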
Question: 35

In a MapReduce job, the reducer receives all values associated with the same key. Which statement best describes the ordering of these values?

  1. The values are in sorted order.

  2. The values are arbitrarily ordered, and the ordering may vary from run to run of the same MapReduce job.

  3. The values are arbitrarily ordered, but multiple runs of the same MapReduce job will always have the same ordering.

  4. Since the values come from mapper outputs, the reducers will receive contiguous sections of sorted values.




Answer: B



Explanation:


Reduce


In this phase the reduce(WritableComparable, Iterator, OutputCollector, Reporter) method is called for each <key, (list of values)> pair in the grouped inputs.


The output of the reduce task is typically written to the FileSystem via OutputCollector.collect(WritableComparable, Writable).


Applications can use the Reporter to report progress, set application-level status messages and update Counters, or just indicate that they are alive.


The output of the Reducer is not sorted.



Question: 39


In Hadoop 2.0, which one of the following statements is true about a standby NameNode? The Standby NameNode:

  1. Communicates directly with the active NameNode to maintain the state of the active NameNode.

  2. Receives the same block reports as the active NameNode.

  3. Runs on the same machine and shares the memory of the active NameNode.

  4. Processes all client requests and block reports from the appropriate DataNodes.




Answer: B
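Answer B reflects how HDFS high availability is wired: DataNodes are configured with the addresses of both NameNodes and send block reports and heartbeats to each, keeping the standby's block map current. A hedged hdfs-site.xml sketch (the nameservice and host names are placeholders):

```xml
<property><name>dfs.nameservices</name><value>mycluster</value></property>
<property><name>dfs.ha.namenodes.mycluster</name><value>nn1,nn2</value></property>
<property><name>dfs.namenode.rpc-address.mycluster.nn1</name><value>nn1-host:8020</value></property>
<property><name>dfs.namenode.rpc-address.mycluster.nn2</name><value>nn2-host:8020</value></property>
```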
Question: 40

In the reducer, the MapReduce API provides you with an iterator over Writable values. What does calling the next() method return?

  1. It returns a reference to a different Writable object each time.

  2. It returns a reference to a Writable object from an object pool.

  3. It returns a reference to the same Writable object each time, but populated with different data.

  4. It returns a reference to a Writable object. The API leaves unspecified whether this is a reused object or a new object.

  5. It returns a reference to the same Writable object if the next value is the same as the previous value, or a new Writable object otherwise.




Answer: C



Explanation:


Calling Iterator.next() will always return the SAME EXACT instance of IntWritable, with the contents of that instance replaced with the next value.


Reference: manipulating iterators in MapReduce
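The reuse pattern can be demonstrated without Hadoop; the IntHolder class below is a stand-in for Hadoop's IntWritable, and all names are illustrative:

```java
import java.util.Iterator;
import java.util.List;

// Demonstrates the object-reuse pattern of Hadoop's reducer value iterator:
// next() returns the SAME holder instance every call, with its contents
// replaced in place.
public class ReuseIterator {

    static class IntHolder {
        int value;
        void set(int v) { value = v; }
        int get() { return value; }
    }

    static Iterator<IntHolder> over(List<Integer> data) {
        Iterator<Integer> it = data.iterator();
        IntHolder holder = new IntHolder();          // one shared instance
        return new Iterator<IntHolder>() {
            public boolean hasNext() { return it.hasNext(); }
            public IntHolder next() {
                holder.set(it.next());               // overwrite in place
                return holder;                       // same object every call
            }
        };
    }

    public static void main(String[] args) {
        Iterator<IntHolder> it = over(List.of(1, 2, 3));
        IntHolder first = it.next();
        IntHolder second = it.next();
        System.out.println(first == second);   // true: identical instance
        System.out.println(second.get());      // 2: contents were replaced
    }
}
```

This is why reducer code must copy a value (not just keep the reference) if it needs the value after advancing the iterator.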