
IBM 000-418 Exam

Free 000-418 Sample Questions:

Q: 1 Your job uses the MQ connector stage to read messages from an MQ queue. The job should retrieve the message ID into the MessageID field and parse the payload into two fields: Name receives the first ten characters, and Description receives the remaining characters.
Which column definitions will accomplish this?
A. First column is MessageID as Binary 24; second column is Name as Binary 10; select WSMG.MSPAYLOAD data element; third column is Description as VarBinary 200; select WSMG.MSPAYLOAD data element.
B. First column is MessageID; select the WSMQ.MSGID data element for the Message ID field; second column is Description as VarBinary 200; third column is Name as Binary 10.
C. First column is MessageID; select the WSMQ.MSGID data element for the Message ID field; second column is Name; select WSMG.MSPAYLOAD data element; third column is Description; select WSMG.MSPAYLOAD data element.
D. First column is MessageID; select the WSMQ.MSGID data element for the Message ID field; second column is Name as Binary 10; third column is Description as VarBinary 200.
Answer: D
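The fixed-width split that option D's column definitions produce can be illustrated outside DataStage. The sketch below is plain Python, not connector configuration; the payload string and the `parse_payload` helper are hypothetical.

```python
# Illustrative sketch only (not DataStage code): split an MQ payload the way
# option D's column definitions would, with Name taking the first ten
# characters and Description taking the rest.
def parse_payload(payload: str) -> dict:
    return {
        "Name": payload[:10],          # Binary 10 -> first ten characters
        "Description": payload[10:],   # VarBinary 200 -> remaining characters
    }

record = parse_payload("ACME-00042 replacement parts for line 3")
print(record["Name"])         # first ten characters of the payload
print(record["Description"])  # everything after them
```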

Q: 2 Which two methods can be used for adding messages to a message handler? (Choose two.)
A. Import message handler from existing message handler dsx.
B. Drag and drop a message from the job log onto the message handler.
C. Type in the message rule by hand.
D. Use the add rule to message handler interface.
Answer: C, D

Q: 3 Which two steps are required to change from a normal lookup to a sparse lookup in an ODBC Enterprise stage? (Choose two.)
A. Change the lookup option in the stage properties to "Sparse".
B. Replace columns at the beginning of a SELECT statement with a wildcard asterisk (*).
C. Establish a relationship between the key field column in the source stage with the database table field.
D. Sort the data on the reference link.
Answer: A, C

Q: 4 Which two describe the column information specified in Orchestrate schemas? (Choose two.)
A. C++ data types, such as string[max 145]
B. column properties, such as nullability
C. SQL data types, such as Char(20)
D. record format information, such as record delimiter
Answer: A, B
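Both correct options show up directly in schema text: `string[max=145]` is the C++-style data type (option A) and `nullable` is a column property (option B). A minimal sketch of such a schema, with invented column names, might look like:

```
record
  (
    CustomerName: nullable string[max=145];
    OrderCount: int32;
  )
```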

Q: 5 In which two situations would you use the Web Services Client stage? (Choose two.)
A. You want to deploy a service.
B. You need the Web service to act as either a data source or a data target during an operation.
C. You do not need both input and output links in a single web service operation.
D. You need to create a WSDL.
Answer: B, C

Q: 6 Which three statements are true about File Sets? (Choose three.)
A. File sets are partitioned.
B. File sets are unpartitioned.
C. File sets are stored as a single file.
D. File sets are readable by external applications.
E. File sets are stored as header file and data files.
Answer: A, D, E

Q: 7 When tuning a parallel process, it is necessary to measure the amount of system resources that are used by each instance of a stage. Which two methods enable the collection of CPU time used by each instance of a stage? (Choose two.)
A. Set the environment variable $APT_PM_PLAYER_TIMING=true.
B. Invoke vmstat before the job run and after the job completes.
C. Select the Record job performance data check box from Job Properties.
D. Set the environment variable $DS_MAKE_JOB_REPORT=2.
Answer: A, C
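For option A, the variable is set in the job's or project's environment before the run. A minimal sketch, assuming a Unix shell session on the engine tier:

```shell
# Enable per-instance CPU timing so each player process reports the CPU
# time it used in the job log (option A).
export APT_PM_PLAYER_TIMING=true
```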

Q: 8 Which three lookup types may be performed in the Lookup stage? (Choose three.)
A. Equality match
B. Negative match
C. Range on stream link
D. Range on the reject link
E. Range on the reference link
Answer: A, C, E

Q: 9 Which three property areas must be configured when using the ODBC connector stage as a target in your job design? (Choose three.)
A. Define the connection properties to an ODBC data source.
B. Define columns for the output link.
C. Specify properties for the input link.
D. Define columns for the input link data.
E. Specify the remote server property.
Answer: A, C, D

Q: 10 When invoking a job from a third-party scheduler, it is often desirable to invoke a job and wait for its completion in order to return the job's completion status. Which three commands would invoke a job named "BuildWarehouse" in project DevProject and wait for the job's completion? (Choose three.)
A. dsjob -run -log DevProject BuildWarehouse
B. dsjob -run -jobstatus DevProject BuildWarehouse
C. dsjob -run -userstatus DevProject BuildWarehouse
D. dsjob -run DevProject BuildWarehouse
E. dsjob -run -wait DevProject BuildWarehouse
Answer: B, C, E
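On the command line, the three waiting forms from the answer look like the following. This is a sketch only: `dsjob` must be on the PATH and the DataStage engine reachable, and the project and job names are the ones given in the question.

```shell
dsjob -run -jobstatus  DevProject BuildWarehouse   # waits; exit status reflects the job status
dsjob -run -userstatus DevProject BuildWarehouse   # waits; exit status derived from the user status
dsjob -run -wait       DevProject BuildWarehouse   # waits for completion; query status separately
```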

Q: 11 You are working on a job in which a sequential file cannot be read in parallel. In an attempt to improve job performance, you first define a single large string column for the non-parallel sequential file read. Which stage may be used to parse the large string in parallel?
A. the Column Import stage
B. the Column Export stage
C. the Make Vector stage
D. the Split Vector stage
Answer: A

Q: 12 What is the lowest-CPU-cost partitioning method when data flows from one parallel stage to another?
A. Range
B. Modulus
C. Entire
D. Same
Answer: D

Q: 13 A job design reads from a complex flat file, performs some transformations on the data, and outputs the results to a WISD output stage. What are two ways that parameter values can be passed to this job at run-time? (Choose two.)
A. Pass the parameter values at the time of the service request.
B. Change the properties of the information provider and redeploy.
C. Include the parameter values in the data.
D. Execute a DSSetParam with the values at job execution time.
Answer: A, B

Q: 14 A parallel job combines rows from a source DB2 table with historical information maintained in two separate Oracle tables. Only rows in the DB2 source whose key values match either Oracle table are output to a target Teradata table. Both Oracle tables have identical column definitions and are stored in the same Oracle instance. Which two design techniques would satisfy this requirement? (Choose two.)
A. Using a master DB2 Enterprise stage, merge by dropping unmatched masters against a single Oracle Enterprise stage with custom SQL with UNION ALL.
B. Combine the inputs from the DB2 Enterprise stage and two Oracle Enterprise stages using the Sort options of the Funnel stage defined on the key columns.
C. Use a separate Oracle Enterprise stage for each source table to a Funnel stage and then perform an inner join with rows from a DB2 Enterprise stage.
D. Use a Lookup stage to combine the DB2 Enterprise input with each Oracle Enterprise reference link using range partitioning to limit each reference by historical data values.
Answer: A, C

Q: 15 A DataStage job is sourcing a flat file that contains a VARCHAR field. This field needs to be mapped to a target field which is a date. Which technique will accomplish this?
A. Use a Column Exporter to perform the type conversion.
B. DataStage handles the type conversion automatically.
C. Use the TimestampFromDateTime function in a Transformer.
D. Use the Modify stage to perform the type conversion.
Answer: D

Q: 16 Your job is to setup credential mappings for DataStage developers within DataStage. Which two statements are true? (Choose two.)
A. You must be an Information Server Suite administrator to complete this task.
B. You can create Information Server Suite users and groups in the Web console.
C. You can create new Information Server Suite users by using the DataStage Administrator.
D. You can create new users in the operating system level and map these credentials within DataStage Administrator.
Answer: A, B

Q: 17 A Data Set was created earlier with one partition. A subsequent job using a 2-node configuration file reads from that Data Set passing data to a Transformer stage with "Same" partitioning. Which statement is true?
A. Setting the Preserve Partitioning flag to "Clear" will generate two instances of the Transformer stage.
B. DataStage will repartition the data and run one instance of the Transformer stage.
C. Setting the Preserve Partitioning flag to "Set" will run one instance of the Transformer stage.
D. The DataStage job will abort.
Answer: A

Q: 18 Which three actions can improve sort performance in a DataStage job? (Choose three.)
A. Specify only the key columns which are necessary.
B. Use the stable-sort option to avoid the random ordering of non-key data.
C. Minimize the number of sorts used within a job flow.
D. Adjust the "Restrict Memory Usage" option in the Sort stage.
E. Run the job sequentially so that only one sort process is invoked.
Answer: A, C, D

Q: 19 You have created a parallel job in which there are several stages that you want to be able to re-use in other jobs. You decided to create a parallel shared container from these stages. Identify two things that are true about this shared container. (Choose two.)
A. It can be used in sequencer jobs.
B. It can take advantage of Run Time Column Propagation (RCP).
C. It can be used in Transformer stage derivations.
D. It can have job parameters to resolve stage property values.
Answer: B, D

Q: 20 You are assigned to write a job which reads a sequential file, applies business logic, and writes the results to one or more flat files. However, the number and names of the input columns may vary from one input file to the next. You are guaranteed that a core set of columns required to perform the business logic will always be present, though not necessarily in the same place on the input record. Which two features would you use to build this job? (Choose two.)
A. Data Set
B. Schema File
D. Runtime Column Propagation
Answer: B, D

Q: 21 You are given a source file that was created by a COBOL program on z/OS. The corresponding COBOL copybook has hierarchical relationships on multiple levels.
Which scenario would properly de-normalize the data into a single tabular output with best performance with a 4-node configuration file?
A. Use the Complex Flat File stage, setting the "Read from Multiple Nodes" property.
B. Define the output of a Complex Flat File stage as a single column, then pass it to a Column Import stage running in parallel to parse the output into multiple columns.
C. Use the Sequential File stage, setting the "Number of Readers Per Node" property greater than one, then pass to a Split Vector stage running in parallel.
D. Use the External Source stage, running in parallel, to invoke a COBOL program to parse the source file.
Answer: A

Q: 22 Which Oracle Enterprise stage read property can be set using db options to tune job performance?
A. memsize
B. arraysize
C. partitionsize
D. transactsize
Answer: B

Q: 23 Which import option can be used to import metadata from a data modeling tool such as Erwin or Rational Data Architect?
A. Import using bridges.
B. Import a table definition using the Connector wizard.
C. Import DataStage components.
D. Import a table definition using the Plug-In Meta Data facility.
Answer: A

Q: 24 You need to invoke a multi-instance enabled job from the command line. What is the correct syntax to start a multi-instance job?
A. dsjob -run -mode NORMAL -instance <instance> <project> <job>
B. dsjob -run -mode NORMAL -wait -instance <instance> <project> <job>
C. dsjob -run -mode NORMAL <project> <job>.<instance>
D. dsjob -run -mode MULTI <project> <job>.<instance>
Answer: C

Q: 25 A client must support multiple languages in selected text columns when reading from a DB2 database. Which two actions will allow selected columns to support such data? (Choose two.)
A. Choose Unicode setting in the extended column attribute.
B. Click NLS support within the advanced column tab.
C. Choose NVar/NVarchar as data types.
D. NLS must be added in the Additional Connection Options of the database operator.
Answer: A, C

© 2014, All Rights Reserved