Exam Code : ARA-C01
Exam Name : SnowPro Advanced Architect Certification
Vendor Name : Snowflake
Question: 191
Which conditions should be true for a table to be considered for search optimization?
The table size is at least 100 GB
The table is not clustered OR The table is frequently queried on columns other than the primary cluster key
The table can be of any size
Search optimization works best to improve the performance of a query when the following conditions are true for the table being queried: the table size is at least 100 GB, and the table is either not clustered or is frequently queried on columns other than the primary cluster key.
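To make this concrete, here is a minimal sketch of enabling search optimization on a table; the table name my_events is hypothetical:

-- Enable search optimization on an existing table
ALTER TABLE my_events ADD SEARCH OPTIMIZATION;

-- Verify: the SEARCH_OPTIMIZATION column shows ON once it is enabled
SHOW TABLES LIKE 'my_events';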
Question: 192
One of your queries is taking a long time to finish. When you open the query profiler, you see that a lot of data is spilling to the remote disk (Bytes spilled to remote storage).
What may be the cause of this?
The amount of memory available for the servers used to execute the operation might not be sufficient to hold intermediate results
The size of the AWS bucket used to hold the data is not sufficient for the query
Number of disks attached to the virtual warehouse is not enough for the processing
This is again a question based on work experience. One variation of it may be that you are given a query profile snapshot showing a huge number against Bytes spilled to remote storage, and you are asked to find the possible cause.
Queries Too Large to Fit in Memory
For some operations (e.g. duplicate elimination for a huge data set), the amount of memory available for the servers used to execute the operation might not be sufficient to hold intermediate results. As a result, the query processing engine will start spilling the data to local disk. If the local disk space is not sufficient, the spilled data is then saved to remote disks.
This spilling can have a profound effect on query performance (especially if remote disk is used for spilling). To alleviate this, the documentation recommends using a larger warehouse (effectively increasing the available memory and local disk space for the operation) and/or processing the data in smaller batches.
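As a quick illustration, here is a minimal sketch of resizing a warehouse so a spilling query gets more memory; the warehouse name my_wh is hypothetical:

-- Move the workload to a larger size so intermediate results fit in memory/local disk
ALTER WAREHOUSE my_wh SET WAREHOUSE_SIZE = 'LARGE';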
Question: 193
While loading data into a table from a stage, which are valid values for the ON_ERROR copy option?
CONTINUE
SKIP_FILE
SKIP_FILE_<NUM>
SKIP_FILE_<NUM>%
ABORT_STATEMENT
ERROR_STATEMENT
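A minimal usage sketch of one of these values, assuming a table my_table and a stage my_stage (both hypothetical):

-- Skip any file whose error rate exceeds 10% of its rows
COPY INTO my_table FROM @my_stage ON_ERROR = 'SKIP_FILE_10%';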
Question: 194
For this object, Snowflake executes code outside Snowflake; the executed code is known as a remote service. What is this object called?
External procedure
External function
External Script
External job
An external function calls code that executes outside Snowflake; the executed code is known as a remote service.
Users can write and call their own remote services, or call remote services written by third parties. These remote services can be written using any HTTP server stack, including cloud serverless compute services such as AWS Lambda.
From the perspective of a user running a SQL statement, an external function behaves like any other scalar function. A SQL statement performs the following actions:
Calls the function, optionally passing parameters.
Receives a value back from the function.
In SQL statements, external functions generally behave like UDFs (user-defined functions). For example, external functions follow these rules:
Inside Snowflake, an external function is represented as a database object. That object is created in a specific database and schema, and can be referenced using dot notation (e.g. MY_DATABASE.MY_SCHEMA.MY_EXTERNAL_FUNCTION()).
An external function can appear in any clause of a SQL statement in which other types of functions can appear (e.g. the WHERE clause).
The returned value can be a compound value, such as a VARIANT that contains JSON.
External functions can be overloaded; two different functions can have the same name but different signatures (different numbers or data types of input parameters).
An external function can be part of a more complex expression: select upper(zipcode_to_city_external_function(zipcode)) from address_table;
https://docs.snowflake.com/en/sql-reference/external-functions-introduction.html#what-is-an-external-function
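To make the object concrete, here is a minimal sketch of creating an external function; the integration name my_api_integration and the endpoint URL are hypothetical and assume an API integration has already been set up:

-- The remote service behind this URL (e.g. an AWS Lambda) does the actual work
create external function zipcode_to_city_external_function(zipcode varchar)
    returns variant
    api_integration = my_api_integration
    as 'https://example.execute-api.us-east-1.amazonaws.com/prod/zipcode_to_city';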
Question: 195
VALIDATION_MODE can take which of the below options?
RETURN_<n>_ROWS
RETURN_ERRORS
RETURN_ALL_ERRORS
RETURN_SEVERE_ERRORS_ONLY
VALIDATION_MODE = RETURN_n_ROWS | RETURN_ERRORS | RETURN_ALL_ERRORS
String (constant) that instructs the COPY command to validate the data files instead of loading them into the specified table; i.e. the COPY command tests the files for errors but does not load them. The command validates the data to be loaded and returns results based on the validation option specified:
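A minimal usage sketch, assuming a table my_table and a stage my_stage (both hypothetical):

-- Validate the staged files and return all errors found, without loading any rows
COPY INTO my_table FROM @my_stage VALIDATION_MODE = RETURN_ERRORS;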
Question: 196
Which semi-structured data function interprets an input string as a JSON document, producing a VARIANT value?
PARSE_JSON
PARSE_XML
STRIP_JSON
Try a hands-on exercise to understand this
create or replace table vartab (n number(2), v variant);

insert into vartab
select column1 as n, parse_json(column2) as v
from values (1, 'null'),
            (2, null),
            (3, 'true'),
            (4, '-17'),
            (7, '"Om ara pa ca na dhih"'),
            (8, '[-1, 12, 289, 2188, false,]'),
            (9, '{ "x" : "abc", "y" : false, "z": 10}') as vals;

select n, v, typeof(v) from vartab;
Question: 197
A remote service in an external function can be an AWS Lambda function.
TRUE
FALSE
A remote service is stored and executed outside Snowflake, and returns a value. For example, remote services can be implemented as:
An AWS Lambda function.
An HTTPS server (e.g. Node.js) running on an EC2 instance.
To be called by the Snowflake external function feature, the remote service must:
Expose an HTTPS endpoint.
Accept JSON inputs and return JSON outputs.
Question: 198
Bytes spilled to remote storage in the query profile indicates the volume of data spilled to remote disk.
TRUE
FALSE
This question may come in various formats in the exam, so rather than memorizing it, let us understand what it means.
When you run large aggregations or sorts in Snowflake, the processing usually happens in the memory of the virtual warehouse. But if the virtual warehouse is not properly sized and does not have enough memory, the intermediate results start spilling to local disk, and then to remote disk (in AWS, that is S3). When this happens, query performance suffers because you are now retrieving intermediate results from remote disk instead of memory. If this is a regular occurrence, you may need to move to a bigger warehouse.
Also read this section referred in the link
https://docs.snowflake.com/en/user-guide/ui-query-profile.html#queries-too-large-to-fit-in-memory
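If you want to spot such queries yourself, here is a minimal sketch using the ACCOUNT_USAGE query history view (note that this view can lag by up to 45 minutes):

-- Find the queries that spilled the most data to remote storage
select query_id,
       warehouse_name,
       bytes_spilled_to_local_storage,
       bytes_spilled_to_remote_storage
from snowflake.account_usage.query_history
where bytes_spilled_to_remote_storage > 0
order by bytes_spilled_to_remote_storage desc
limit 10;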
Question: 199
{"stuId":2000,"stuCourse":"Snowflake"}
How will you write a query that will check if stuId in JSON in #1 is also there in JSON in#2
with stu_demography as (select parse_json(column1) as src, src:stuId as ID
                        from values('{"stuId":2000, "stuName":"Amy"}')),
     stu_course as (select parse_json(column1) as src, src:stuId as ID
                    from values('{"stuId":2000,"stuCourse":"Snowflake"}'))
select case when stdemo.ID in (select ID from stu_course) then 'True' else 'False' end as result
from stu_demography stdemo;

with stu_demography as (select parse_json(column1) as src, src['stuId'] as ID
                        from values('{"stuId":2000, "stuName":"Amy"}')),
     stu_course as (select parse_json(column1) as src, src['stuId'] as ID
                    from values('{"stuId":2000,"stuCourse":"Snowflake"}'))
select case when stdemo.ID in (select ID from stu_course) then 'True' else 'False' end as result
from stu_demography stdemo;

SELECT CONTAINS('{"stuId":2000, "stuName":"Amy"}', '{"stuId":2000,"stuCourse":"Snowflake"}');

with stu_demography as (select parse_json(column1) as src, src['STUID'] as ID
                        from values('{"stuId":2000, "stuName":"Amy"}')),
     stu_course as (select parse_json(column1) as src, src['stuId'] as ID
                    from values('{"stuId":2000,"stuCourse":"Snowflake"}'))
select case when stdemo.ID in (select ID from stu_course) then 'True' else 'False' end as result
from stu_demography stdemo;
I would like you to try these out in your Snowflake instance and find the answer yourself.
Please note that the question may not appear in this exact form in the certification exam, but working through it hands-on builds the understanding of semi-structured data access (colon vs. bracket notation, and the case sensitivity of element names) that the exam expects.
Question: 200
In the default access control hierarchy, both SECURITYADMIN and SYSADMIN are owned by ACCOUNTADMIN.
TRUE
FALSE
Role hierarchy is an important concept that you should read thoroughly; more than one question may appear in the exam on this topic. Please remember that in Snowflake you cannot assign a privilege to a user directly. You need to create a role, grant privileges to the role, and then grant the role to users. A role can also be granted to another role, which is what creates the hierarchy.
https://docs.snowflake.com/en/user-guide/security-access-control-overview.html#role-hierarchy-and-privi lege-inheritance
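A minimal sketch of this pattern; the role analyst, database my_db, and user amy are hypothetical:

-- Privileges go to roles, never directly to users
create role analyst;
grant usage on database my_db to role analyst;

-- Granting the role to SYSADMIN places it in the recommended hierarchy
grant role analyst to role sysadmin;

-- Users get access by being granted the role
grant role analyst to user amy;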
Question: 201
You are running a large join on Snowflake. You ran it on a Medium warehouse and it took almost an hour to finish. You then tried running the join on a Large warehouse, but the performance still did not improve.
What is the most likely cause of this?
There may be skew in your data, where one value of the join column occurs significantly more often than the rest of the values in the column
Your warehouses do not have enough memory
Since you have configured a warehouse with a low auto-suspend value, your warehouse is suspending frequently
In the SnowPro Advanced Architect exam, 40% of the questions are based on work experience, and this is one such question; you need a very good hold on Snowflake concepts. So, what may be happening here? When the join key is heavily skewed, most of the matching rows end up being processed by a single worker, so the work cannot be parallelized, and adding more compute by moving to a larger warehouse does not help.
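If you suspect skew, a quick diagnostic is to profile the distribution of the join key. A minimal sketch; the table orders and the join column customer_id are hypothetical:

-- A heavily dominant value here suggests join skew
select customer_id, count(*) as row_count
from orders
group by customer_id
order by row_count desc
limit 10;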