Question 1
Question
What is Latency?
Answer
-
is amount of data that can be in flight at the same time (Little’s Law)
-
is the number of accesses per unit time – If m instructions are loads/stores, 1 + m memory accesses
per instruction, CPI = 1 requires at least 1 + m memory accesses per cycle
-
is time for a single access – Main memory latency is usually >> than processor cycle time
Question 2
Question
What occurs at instruction fetches when we speak about Common and Predictable Memory Reference Patterns?
Answer
-
n loop iterations
-
subroutine call
-
vector access
Question 3
Question
What occurs at stack accesses when we speak about Common and Predictable Memory Reference Patterns?
Answer
-
n loop iterations
-
subroutine call
-
vector access
Question 4
Question
What occurs at data accesses when we speak about Common and Predictable Memory Reference Patterns?
Answer
-
subroutine call
-
n loop iterations
-
vector access
Question 5
Answer
-
No Write Allocate, Write Allocate
-
Write Through, Write Back
Question 6
Answer
-
No Write Allocate, Write Allocate
-
Write Through, Write Back
Question 7
Question
Average Memory Access Time is equal to:
Answer
-
Hit Time * ( Miss Rate + Miss Penalty )
-
Hit Time - ( Miss Rate + Miss Penalty )
-
Hit Time / ( Miss Rate - Miss Penalty )
-
Hit Time + ( Miss Rate * Miss Penalty )
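To illustrate the formula in Question 7 (Hit Time + Miss Rate × Miss Penalty), here is a minimal Python sketch; the 1-cycle hit time, 5% miss rate, and 100-cycle miss penalty are assumed example values, not numbers from the question bank.

```python
# Average Memory Access Time (AMAT) = hit_time + miss_rate * miss_penalty
def amat(hit_time, miss_rate, miss_penalty):
    """All times in processor cycles; miss_rate is a fraction between 0 and 1."""
    return hit_time + miss_rate * miss_penalty

# Assumed numbers: 1-cycle hit, 5% miss rate, 100-cycle miss penalty
print(amat(1, 0.05, 100))  # -> 6.0 cycles on average per access
```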
Question 8
Answer
-
cache is too small to hold all data needed by program, occur even under perfect replacement policy (loop over 5 cache
lines)
-
first-reference to a block, occur even with infinite cache
-
misses that occur because of collisions due to less than full associativity (loop over 3 cache lines)
Question 9
Answer
-
cache is too small to hold all data needed by program, occur even under perfect replacement policy (loop over 5 cache
lines)
-
misses that occur because of collisions due to less than full associativity (loop over 3 cache lines)
-
first-reference to a block, occur even with infinite cache
Question 10
Answer
-
first-reference to a block, occur even with infinite cache
-
misses that occur because of collisions due to less than full associativity (loop over 3 cache lines)
-
cache is too small to hold all data needed by program, occur even under perfect replacement policy (loop over 5 cache
lines)
Question 11
Question
Algorithm for Cache HIT:
Answer
-
Processor issues load request to cache -> Replace victim block in cache with new block -> return copy of data from
cache
-
Processor issues load request to cache -> Read block of data from main memory -> return copy of data from cache
-
Processor issues load request to cache -> Compare request address to cache tags and see if there is a match -> return
copy of data from cache
Question 12
Question
Algorithm for Cache MISS:
Answer
-
Processor issues load request to cache -> Compare request address to cache tags and see if there is a match -> Read
block of data from main memory -> Replace victim block in cache with new block -> return copy of data from cache
-
Processor issues load request to cache -> Read block of data from main memory -> return copy of data from cache
-
Processor issues load request to cache -> Replace victim block in cache with new block -> return copy of data from
cache
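The hit and miss sequences in Questions 11 and 12 can be summarized in one small sketch of a direct-mapped cache lookup. This is only an illustrative model: the 4-line cache, one-word blocks, and the dictionary standing in for main memory are assumptions.

```python
# Minimal direct-mapped cache lookup illustrating the HIT and MISS paths above.
NUM_LINES = 4                  # assumed tiny cache: 4 lines
cache_tags = [None] * NUM_LINES
cache_data = [None] * NUM_LINES

def load(address, memory):
    index = address % NUM_LINES      # select a cache line
    tag = address // NUM_LINES       # remaining address bits form the tag
    if cache_tags[index] == tag:     # compare request address to cache tag
        return cache_data[index]     # HIT: return copy of data from cache
    # MISS: read block from main memory, replace the victim block, return data
    data = memory[address]
    cache_tags[index] = tag
    cache_data[index] = data
    return data

memory = {i: i * 10 for i in range(16)}  # toy "main memory"
print(load(5, memory))   # first access: miss, fills line 1 -> 50
print(load(5, memory))   # second access: hit -> 50
```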
Question 13
Question
The formula of “Iron Law” of Processor Performance:
Answer
-
time/program = instruction/program * cycles/instruction * time/cycle
-
time/program = instruction/program * cycles/instruction + time/cycle
-
time/program = instruction/program + cycles/instruction * time/cycle
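A quick numeric sketch of the correct Iron Law form from Question 13 (time/program = instructions/program × cycles/instruction × time/cycle); the instruction count, CPI, and clock period are assumed example values.

```python
# "Iron Law": time/program = instructions/program * cycles/instruction * time/cycle
def cpu_time(instr_count, cpi, clock_period_s):
    return instr_count * cpi * clock_period_s

# Assumed: 1e9 instructions, CPI = 1.5, 1 GHz clock (1 ns cycle time)
print(cpu_time(1e9, 1.5, 1e-9))  # -> 1.5 seconds per program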
Question 14
Question
Structural Hazard:
Answer
-
An instruction depends on a data value produced by an earlier instruction
-
An instruction in the pipeline needs a resource being used by another instruction in the pipeline
-
Whether or not an instruction should be executed depends on a control decision made by an earlier instruction
Question 15
Answer
-
Whether or not an instruction should be executed depends on a control decision made by an earlier instruction
-
An instruction in the pipeline needs a resource being used by another instruction in the pipeline
-
An instruction depends on a data value produced by an earlier instruction
Question 16
Answer
-
Whether or not an instruction should be executed depends on a control decision made by an earlier instruction
-
An instruction depends on a data value produced by an earlier instruction
-
An instruction in the pipeline needs a resource being used by another instruction in the pipeline
Question 17
Question
What is Bandwidth?
Answer
-
is the number of accesses per unit time – If m instructions are loads/stores, 1 + m memory accesses per instruction, CPI = 1 requires at least 1 + m memory accesses per cycle
-
is time for a single access – Main memory latency is usually >> than processor cycle time
-
is amount of data that can be in flight at the same time (Little’s Law)
Question 18
Question
What is the Bandwidth-Delay Product?
Answer
-
is the number of accesses per unit time – If m instructions are loads/stores, 1 + m memory accesses per instruction, CPI
= 1 requires at least 1 + m memory accesses per cycle
-
is time for a single access – Main memory latency is usually >> than processor cycle time
-
is amount of data that can be in flight at the same time (Little’s Law)
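A small numeric illustration of the bandwidth-delay product / Little's Law idea from Question 18; the 16 bytes-per-nanosecond bandwidth and 100 ns latency are assumed example values.

```python
# Bandwidth-delay product = amount of data that must be "in flight" to keep the
# memory system busy (Little's Law: outstanding data = bandwidth * latency).
bandwidth_bytes_per_ns = 16      # assumed sustained bandwidth
latency_ns = 100                 # assumed main-memory latency

in_flight_bytes = bandwidth_bytes_per_ns * latency_ns
print(in_flight_bytes)           # -> 1600 bytes must be outstanding at once
```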
Question 19
Question
What is Computer Architecture?
Answer
-
the programs used to direct the operation of a computer, as well as documentation giving instructions on how to use
them
-
is the design of the abstraction/implementation layers that allow us to execute information processing applications
efficiently using manufacturing technologies
-
is a group of computer systems and other computing hardware devices that are linked together through communication
channels to facilitate communication and resource-sharing among a wide range of users
Question 20
Question
Least Recently Used (LRU):
Answer
-
FIFO with exception for most recently used block(s)
-
Used in highly associative caches
-
cache state must be updated on every access
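The option "cache state must be updated on every access" is exactly how a true LRU list behaves. Below is a minimal sketch of LRU replacement for a single set; the 2-way associativity and the access pattern are assumptions for illustration only.

```python
from collections import OrderedDict

# LRU for one 2-way set: every access moves the block to the most-recently-used
# position, so the cache state is updated on every access; the least recently
# used block is the victim on a miss.
class LRUSet:
    def __init__(self, ways=2):           # assumed 2-way set
        self.ways = ways
        self.blocks = OrderedDict()        # insertion order tracks recency

    def access(self, tag):
        if tag in self.blocks:
            self.blocks.move_to_end(tag)   # update recency on a hit
            return "hit"
        if len(self.blocks) >= self.ways:
            self.blocks.popitem(last=False)  # evict least recently used
        self.blocks[tag] = True
        return "miss"

s = LRUSet()
print([s.access(t) for t in ["A", "B", "A", "C", "B"]])
# -> ['miss', 'miss', 'hit', 'miss', 'miss']  (C evicts B, then B evicts A)
```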
Question 21
Answer
-
Write Through – write both cache and memory, generally higher traffic but simpler to design
-
write cache only, memory is written when evicted, dirty bit per block avoids unnecessary write backs, more complicated
-
No Write Allocate – only write to main memory
Question 22
Question
Reduce Miss Rate: Large Cache Size. Empirical Rule of Thumb:
Answer
-
None of them
-
If cache size is doubled, miss rate usually drops by about √2
-
Direct-mapped cache of size N has about the same miss rate as a two-way set-associative cache of size N/2
Question 23
Question
Reduce Miss Rate: High Associativity. Empirical Rule of Thumb:
Answer
-
Direct-mapped cache of size N has about the same miss rate as a two-way set-associative cache of size N/2
-
None of them
-
If cache size is doubled, miss rate usually drops by about √2
Question 24
Question
Exploit temporal locality:
Question 25
Question
Exploit spatial locality:
Question 26
Question
Structural Hazard:
Answer
-
An instruction depends on a data value produced by an earlier instruction
-
An instruction in the pipeline needs a resource being used by another instruction in the pipeline
-
Whether or not an instruction should be executed depends on a control decision made by an earlier instruction
Question 27
Answer
-
An instruction in the pipeline needs a resource being used by another instruction in the pipeline
-
Whether or not an instruction should be executed depends on a control decision made by an earlier instruction
-
An instruction depends on a data value produced by an earlier instruction
Question 28
Question
What is the access time?
Answer
-
Describes the technology inside the memory chips and those innovative, internal organizations
-
Time between when a read is requested and when the desired word arrives
-
The minimum time between requests to memory.
-
None of them
Question 29
Question
What is the cycle time?
Answer
-
The minimum time between requests to memory
-
Time between when a read is requested and when the desired word arrives
-
The maximum time between requests to memory.
-
None of them
Question 30
Question
What does SRAM stand for?
Answer
-
System Random Access memory
-
Static Random Access memory
-
Short Random Access Memory
-
None of them
Question 31
Question
What does DRAM stand for?
Answer
-
Dataram Random Access memory
-
Dual Random Access memory
-
Dynamic Random Access memory
Question 32
Question
What does DDR stand for?
Answer
-
None of them
-
Double data reaction
-
Dual data rate
-
Double data rate
Question 33
Question
What is a kernel process?
Answer
-
Provide at least two modes, indicating whether the running process is a user process or an
operating system process
-
Provide a portion of the processor state that a user process can use but not write
-
Provide at least five modes, indicating whether the running process is a user process or an
operating system process
-
None of them
Question 34
Question
Which one does NOT concern a pitfall?
Answer
-
Implementing a virtual machine monitor on an instruction set architecture that wasn’t designed to
be virtualizable
-
Simulating enough instructions to get accurate performance measures of the memory hierarchy
-
Predicting cache performance of one program from another
-
Over emphasizing memory bandwidth in DRAMs
Question 35
Question
Which one concerns a fallacy?
Answer
-
Over emphasizing memory bandwidth in DRAMs
-
Implementing a virtual machine monitor on an instruction set architecture that wasn’t designed to
be virtualizable
-
Predicting cache performance of one program from another
-
Simulating enough instructions to get accurate performance measures of the memory hierarchy
Question 36
Question
If we talk about storage systems, an interaction or transaction with a computer is divided into parts; what is the “System response time”?
Answer
-
The time from the reception of the response until the user begins to enter the next command
-
The time between when the user enters the command and the complete response is displayed
-
The time for the user to enter the command
Question 37
Question
If we talk about storage systems, an interaction or transaction with a computer is divided into parts; what is the “Think time”?
Answer
-
The time from the reception of the response until the user begins to enter the next command
-
The time between when the user enters the command and the complete response is displayed
-
The time for the user to enter the command
Question 38
Question
Little’s Law and a series of definitions lead to several useful equations for “Time
server” - :
Answer
-
Average time per task in the queue
-
Average time/task in the system, or the response time, which is the sum of Time queue and Time server
-
Average time to service a task; average service rate is 1/Time server traditionally represented
by the symbol μ in many queuing texts
Question 39
Question
Little’s Law and a series of definitions lead to several useful equations for “Time
queue” - :
Answer
-
Average time to service a task; average service rate is 1/Time server traditionally
represented by the symbol μ in many queuing texts
-
Average time per task in the queue
-
Average time/task in the system, or the response time, which is the sum of Time queue and Time server
Question 40
Question
Little’s Law and a series of definitions lead to several useful equations for “Time
system” - :
Answer
-
Average time/task in the system, or the response time, which is the sum of Time queue and Time server
-
Average time to service a task; average service rate is 1/Time server traditionally
represented by the symbol μ in many queuing texts
-
Average time per task in the queue
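The three definitions in Questions 38–40 fit together through Little's Law. A small worked example, where the arrival rate, service time, and queueing time are assumed values:

```python
# Little's Law: Length = arrival_rate * Time, and Time_system = Time_queue + Time_server.
arrival_rate = 40.0          # assumed: 40 tasks arrive per second
time_server  = 0.02          # assumed: 20 ms average service time per task
time_queue   = 0.03          # assumed: 30 ms average wait in the queue

time_system   = time_queue + time_server    # response time per task
length_queue  = arrival_rate * time_queue   # average tasks waiting
length_server = arrival_rate * time_server  # average tasks in service
print(time_system, length_queue, length_server)  # -> 0.05 1.2 0.8
```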
Question 41
Question
Little’s Law and a series of definitions lead to several useful equations for
“Length server” - :
Question 42
Question
Little’s Law and a series of definitions lead to several useful equations for
“Length queue” -:
Question 43
Question
How many issue queues are used in Centralized Superscalar 2 and Exceptions?
Question 44
Question
How many issue queues are used in Distributed Superscalar 2 and Exceptions?
Question 45
Question
How many instructions are used in Distributed Superscalar 2 and Exceptions?
Question 46
Question
How many issue queues are used in Centralized Superscalar 2 and Exceptions?
Question 47
Question
Which of the following formulas is true about the Issue Queue condition for “Instruction Ready”?
Answer
-
Instruction Ready = (!Vsrc1 || !Psrc1)&&(!Vsrc0 || !Psrc0)&& no structural hazards
-
Instruction Ready = (!Vsrc0 || !Psrc1)&&(!Vsrc1 || !Psrc0)&& no structural hazards
-
Instruction Ready = (!Vsrc0 || !Psrc0)&&(!Vsrc1 || !Psrc1)&& no structural hazards
-
Instruction Ready = (!Vsrc1 || !Psrc1)&&(!Vsrc0 || !Psrc1)&& no structural hazards
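A small sketch of the readiness test in which each source pairs its own valid (V) and pending (P) bits, i.e. (!Vsrc0 || !Psrc0) && (!Vsrc1 || !Psrc1) && no structural hazards. The boolean encoding below (True = bit set) and the example operand states are assumptions for illustration.

```python
# Issue-queue readiness check: each source is either unused (V == 0)
# or no longer pending (P == 0), and no structural hazard blocks issue.
def instruction_ready(v_src0, p_src0, v_src1, p_src1, structural_hazard):
    return ((not v_src0 or not p_src0) and
            (not v_src1 or not p_src1) and
            not structural_hazard)

# src0 unused, src1 needed but its producer has written back, FU available:
print(instruction_ready(False, False, True, False, False))  # -> True
# src1 still waiting on its producer:
print(instruction_ready(False, False, True, True, False))   # -> False
```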
Question 48
Question 49
Question
What does ROB stand for?
Answer
-
Read Only Buffer
-
Reorder Buffer
-
Reload Buffer
-
Recall Buffer
Question 50
Question
What does FSB stand for?
Answer
-
Finished Star Buffer
-
Finished Stall Buffer
-
Finished Store Buffer
-
Finished Stack Buffer
Question 51
Question
What does PRF stand for?
Answer
-
Pure Register File
-
Physical Register File
-
Pending Register File
-
Pipeline Register File
Question 52
Answer
-
Scalebit
-
Scaleboard
-
Scorebased
-
Scoreboard
Question 53
Question
How many stages are used in a Superscalar (Pipeline)?
Question 54
Question
In a Superscalar pipeline, what does “F-D-X-M-W” mean?
Answer
-
Fetch, Decode, Instruct, Map, Write
-
Fetch, Decode, Execute, Memory, Writeback
-
Fetch, Decode, Excite, Memory, Write
-
Fetch, Decode, Except, Map, Writeback
Question 55
Question
Speculating on Exceptions “Prediction mechanism” is -
Answer
-
None of them
-
Only write architectural state at commit point, so can throw away partially executed instructions after
exception
-
Exceptions are rare, so simply predicting no exceptions is very accurate
-
Exceptions detected at end of instruction execution pipeline, special hardware for various exception types
Question 56
Question
Speculating on Exceptions “Check prediction mechanism” is -
Answer
-
The way in which an object is accessed by a subject
-
Exceptions detected at end of instruction execution pipeline, special hardware for various exception types
-
Exceptions are rare, so simply predicting no exceptions is very accurate
-
None of them
Question 57
Question
Speculating on Exceptions “Recovery mechanism” is
Answer
-
None of them
-
An entity capable of accessing objects
-
Exceptions are rare, so simply predicting no exceptions is very accurate
-
Only write architectural state at commit point, so can throw away partially executed instructions after
exception
Question 58
Answer
-
Rename Table
-
Recall Table
-
Relocate Table
-
Remove Table
Question 59
Answer
-
Free Launch
-
Free List
-
Free Leg
-
Free Last
Question 60
Answer
-
Internal Queue
-
Instruction Queue
-
Issue Queue
-
Interrupt Queue
Question 61
Question
For VLIW “Superscalar Control Logic Scaling”, which parameters are used?
Answer
-
Width and Height
-
Width and Lifetime
-
Time and Cycle
-
Length and Addition
Question 62
Question
For the Out-of-Order Control Complexity of the MIPS R10000, which element is in the Control Logic?
Answer
-
Register name
-
Instruction cache
-
Data tags
-
Data cache
Question 63
Question
For the Out-of-Order Control Complexity of the MIPS R10000, which element is not in the Control Logic?
Answer
-
Integer Datapath
-
CLK
-
Address Queue
-
Free List
Question 64
Question 65
Question
For VLIW “performance and loop iteration”, which takes a longer time?
Answer
-
Loop Unrolled
-
Software Pipelined
Question 66
Question
For VLIW “performance and loop iteration”, which takes a shorter time?
Answer
-
Loop Unrolled
-
Software Pipelined
Question 67
Question
At VLIW Speculative Execution, which of these solutions is true for the problem “Branches restrict compiler code motion”?
Question 68
Question
At VLIW Speculative Execution, which of these solutions is true for the problem “Possible memory hazards limit code scheduling”?
Question 69
Question
What is an ALAT?
Answer
-
Advanced Load Address Table
-
Allocated Link Address Table
-
Allowing List Address Table
-
Addition Long Accessibility Table
Question 70
Question
At VLIW Multi-Way Branches, which of these solutions is true for the problem “Long instructions provide few opportunities for branches”?
Question 71
Question
What is a compulsory miss?
Answer
-
first-reference to a block, occur even with infinite cache
-
cache is too small to hold all data needed by program, occur even under perfect replacement policy
-
misses that occur because of collisions due to less than full associativity
Question 72
Question
What is a capacity miss?
Answer
-
cache is too small to hold all data needed by program, occur even under perfect replacement policy
-
misses that occur because of collisions due to less than full associativity
-
first-reference to a block, occur even with infinite cache
Question 73
Question
What is a conflict miss?
Answer
-
misses that occur because of collisions due to less than full associativity
-
first-reference to a block, occur even with infinite cache
-
cache is too small to hold all data needed by program, occur even under perfect replacement policy
Question 74
Question
In Multilevel Caches, “Local miss rate” equals:
Answer
-
misses in cache / accesses to cache
-
misses in cache / CPU memory accesses
-
misses in cache / number of instructions
Question 75
Question
In Multilevel Caches, “Global miss rate” equals:
Answer
-
misses in cache / CPU memory accesses
-
misses in cache / accesses to cache
-
misses in cache / number of instructions
Question 76
Question
In Multilevel Caches, “Misses per instruction” equals:
Answer
-
misses in cache / number of instructions
-
misses in cache / accesses to cache
-
misses in cache / CPU memory accesses
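A worked comparison of the three ratios from Questions 74–76 for a hypothetical L2 cache; all event counts below are assumed example numbers.

```python
# Assumed event counts for an L2 cache:
instructions        = 1_000_000
cpu_memory_accesses = 300_000    # loads/stores issued by the CPU
l2_accesses         = 30_000     # only L1 misses reach L2
l2_misses           = 3_000

local_miss_rate  = l2_misses / l2_accesses          # misses / accesses to this cache
global_miss_rate = l2_misses / cpu_memory_accesses  # misses / CPU memory accesses
misses_per_instr = l2_misses / instructions         # misses / number of instructions
print(local_miss_rate, global_miss_rate, misses_per_instr)  # -> 0.1 0.01 0.003
```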
Question 77
Question
In the Non-Blocking Cache Timeline, the sequence for “Blocking Cache” is:
Answer
-
CPU time-Cache Miss-Miss Penalty-CPU time
-
CPU time->Cache Miss->Hit->Stall on use->Miss Penalty->CPU time
-
CPU time->Cache Miss->Miss->Stall on use->Miss Penalty->CPU time
Question 78
Question
In the Non-Blocking Cache Timeline, the sequence for “Hit Under Miss” is:
Answer
-
CPU time->Cache Miss->Hit->Stall on use->Miss Penalty->CPU time
-
CPU time-Cache Miss-Miss Penalty-CPU time
-
CPU time->Cache Miss->Miss->Stall on use->Miss Penalty->CPU time
Question 79
Question
In the Non-Blocking Cache Timeline, the sequence for “Miss Under Miss” is:
Answer
-
CPU time->Cache Miss->Miss->Stall on use->Miss Penalty->Miss Penalty->CPU time
-
CPU time-Cache Miss-Miss Penalty-CPU time
-
CPU time->Cache Miss->Hit->Stall on use->Miss Penalty->CPU time
Question 80
Question
What does MSHR mean?
Answer
-
Miss Status Handling Register
-
Map Status Handling Reload
-
Mips Status Hardware Register
-
Memory Status Handling Register
Question 81
Question
What does MAF stand for?
Answer
-
Miss Address File
-
Map Address File
-
Memory Address File
Question 82
Question
At Critical Word First for Miss Penalty, choose the correct “Order of fill” sequence for a Basic Blocking Cache:
Answer
-
0,1,2,3,4,5,6,7
-
3,4,5,6,7,0,1,2
Question 83
Question
At Critical Word First for Miss Penalty, choose the correct “Order of fill” sequence for a Blocking Cache with Critical Word First:
Answer
-
3,4,5,6,7,0,1,2
-
0,1,2,3,4,5,6,7
Question 84
Question
Storage Systems, “Larger block size to reduce miss rate”
Answer
-
The simplest way to reduce the miss rate is to take advantage of spatial locality and increase
the block size
-
The obvious way to reduce capacity misses is to increase cache capacity
-
Obviously, increasing associativity reduces conflict misses
Question 85
Question
Storage Systems, “Bigger caches to reduce miss rate” -
Answer
-
The obvious way to reduce capacity misses is to increase cache capacity
-
Obviously, increasing associativity reduces conflict misses
-
The simplest way to reduce the miss rate is to take advantage of spatial locality and increase
the block size
Question 86
Question
Storage Systems, “Higher associativity to reduce miss rate” -
Answer
-
Obviously, increasing associativity reduces conflict misses
-
The obvious way to reduce capacity misses is to increase cache capacity
-
The simplest way to reduce the miss rate is to take advantage of spatial locality and increase
the block size
Question 87
Question
In Non-Blocking Caches, what does “Critical Word First” mean?
Answer
-
Request the missed word first from memory and send it to the processor as soon as it arrives;
let the processor continue execution while filling the rest of the words in the block
-
Fetch the words in normal order, but as soon as the requested word of the block arrives,
send it to the processor and let the processor continue execution
Question 88
Question
In Non-Blocking Caches, what does “Early restart” mean?
Answer
-
Fetch the words in normal order, but as soon as the requested word of the block arrives, send
it to the processor and let the processor continue execution
-
Request the missed word first from memory and send it to the processor as soon as it
arrives; let the processor continue execution while filling the rest of the words in the block
Question 89
Question
A virus classification by target includes the following categories. What is a File infector?
Answer
-
A typical approach is as follows
-
Infects files that the operating system or shell consider to be executable
-
The key is stored with the virus
-
Far more sophisticated techniques are possible
Question 90
Question
What is a RAID 0?
Answer
-
It has no redundancy and is sometimes nicknamed JBOD, for “just a bunch of disks,” although
the data may be striped across the disks in the array
-
Also called mirroring or shadowing, there are two copies of every piece of data
-
This organization was inspired by applying memory-style error correcting codes to disks
Question 91
Question
What is a RAID 1?
Answer
-
Also called mirroring or shadowing, there are two copies of every piece of data
-
It has no redundancy and is sometimes nicknamed JBOD, for “just a bunch of disks,”
although the data may be striped across the disks in the array
-
This organization was inspired by applying memory-style error correcting codes to disks
Question 92
Question
What is a RAID 2?
Answer
-
This organization was inspired by applying memory-style error correcting codes to disks
-
It has no redundancy and is sometimes nicknamed JBOD, for “just a bunch of disks,”
although the data may be striped across the disks in the array
-
Also called mirroring or shadowing, there are two copies of every piece of data
Question 93
Question
What is a RAID 3?
Answer
-
Since the higher-level disk interfaces understand the health of a disk, it’s easy to figure out which disk failed
-
Many applications are dominated by small accesses
-
Also called mirroring or shadowing, there are two copies of every piece of data
Question 94
Question
What is a RAID 4?
Answer
-
Many applications are dominated by small accesses
-
Since the higher-level disk interfaces understand the health of a disk, it’s easy to figure out which disk
failed
-
Also called mirroring or shadowing, there are two copies of every piece of data
Question 95
Question
In storage systems, Gray and Siewiorek classify faults; what are “Hardware faults”?
Answer
-
Devices that fail, such as perhaps due to an alpha particle hitting a memory cell
-
Faults in software (usually) and hardware design (occasionally)
-
Mistakes by operations and maintenance personnel
Question 96
Question
In storage systems, Gray and Siewiorek classify faults; what are “Design faults”?
Answer
-
Faults in software (usually) and hardware design (occasionally)
-
Devices that fail, such as perhaps due to an alpha particle hitting a memory cell
-
Mistakes by operations and maintenance personnel
Question 97
Question
In storage systems, Gray and Siewiorek classify faults; what are “Operation faults”?
Answer
-
Mistakes by operations and maintenance personnel
-
Devices that fail, such as perhaps due to an alpha particle hitting a memory cell
-
Faults in software (usually) and hardware design (occasionally)
Question 98
Question
In storage systems, Gray and Siewiorek classify faults; what are “Environmental faults”?
Answer
-
Fire, flood, earthquake, power failure, and sabotage
-
Faults in software (usually) and hardware design (occasionally)
-
Devices that fail, such as perhaps due to an alpha particle hitting a memory cell
Question 99
Question
If we talk about storage systems, an interaction or transaction with a computer is divided into parts; what is the “Entry time”?
Answer
-
The time for the user to enter the command
-
The time between when the user enters the command and the complete response is displayed
-
The time from the reception of the response until the user begins to enter the next command
Question 100
Question
If we talk about storage systems, an interaction or transaction with a computer is divided into parts; what is the “System response time”?
Answer
-
The time between when the user enters the command and the complete response is displayed
-
The time for the user to enter the command
-
The time from the reception of the response until the user begins to enter the next command
Question 101
Question
If we talk about storage systems, an interaction or transaction with a computer is divided into parts; what is the “Think time”?
Answer
-
The time from the reception of the response until the user begins to enter the next command
-
The time for the user to enter the command
-
The time between when the user enters the command and the complete response is displayed
Question 102
Question
Little’s Law and a series of definitions lead to several useful equations for
“Time server” - :
Answer
-
Average time to service a task; average service rate is 1/Time server traditionally represented
by the symbol μ in many queuing texts
-
Average time/task in the system, or the response time, which is the sum of Time queue and Time server
-
Average time per task in the queue
Question 103
Question
Little’s Law and a series of definitions lead to several useful equations for
“Time queue” - :
Answer
-
Average time per task in the queue
-
Average time to service a task; average service rate is 1/Time server traditionally
represented by the symbol μ in many queuing texts
-
Average time/task in the system, or the response time, which is the sum of Time queue and Time server
Question 104
Question
Little’s Law and a series of definitions lead to several useful equations for
“Time system” - :
Answer
-
Average time/task in the system, or the response time, which is the sum of Time queue and Time server
-
Average time to service a task; average service rate is 1/Time server traditionally
represented by the symbol μ in many queuing texts
-
Average time per task in the queue
Question 105
Question
Little’s Law and a series of definitions lead to several useful equations for
“Length server” - :
Question 106
Question
Little’s Law and a series of definitions lead to several useful equations for
“Length queue” -:
Question 107
Question
Approximately what size is an L1 cache?
Question 108
Question
Approximately what size is an L2 cache?
Question 109
Question
Approximately what size is an L3 cache?
Question 110
Question
How many main levels of Cache Memory?
Question 111
Question
What is a “Synchronization” in Cache Memory?
Question 112
Question
What is a “Kernel” in Cache Memory?
Question 113
Question
What is a “Synchronization” in Cache Memory?
Question 114
Question
Network performance depends on what?
Question 115
Question
The time between the start and the completion of an event, such as milliseconds for a disk access, is...
Answer
-
latency
-
bandwidth
-
throughput
-
performance
Question 116
Question
Total amount of work done in a given time, such as megabytes per second for disk transfer...
Answer
-
bandwidth
-
latency
-
throughput
-
performance
Question 117
Question
The learning curve itself is best measured by change in...
Question 118
Question
Products that are sold by multiple vendors in large volumes and are essentially identical
Answer
-
commodities
-
boxes
-
folders
-
files
Question 119
Question
Integrated circuit processes are characterized by the
Answer
-
feature size
-
permanent size
-
complex size
-
fixed size
Question 120
Question
For CMOS chips, the traditional dominant energy consumption has been in
switching transistors, called ____
Answer
-
dynamic power
-
physical energy
-
constant supply
-
simple battery
Question 121
Question
Manufacturing costs that decrease over time are ____
Answer
-
the learning curve
-
the cycled line
-
the regular option
-
the final loop
Question 122
Question
Volume is a ________ key factor in determining cost
Question 123
Question
Most companies spend only ____________ of their income on R&D, which includes all engineering.
Answer
-
4% to 12%
-
15% to 30%
-
1% to 17%
-
30% to 48%
Question 124
Question
Systems alternate between two states of service with respect to an SLA:
Answer
-
1. Service accomplishment, where the service is delivered as specified
2. Service interruption, where the delivered service is different from the SLA
-
1. Service accomplishment, where the service is not delivered as specified
2. Service interruption, where the delivered service is different from the SLA
-
1. Service accomplishment, where the service is not delivered as specified
2. Service interruption, where the delivered service is not different from the SLA
-
1. Service accomplishment, where the service is delivered as specified
2. Service interruption, where the delivered service is not different from the SLA
Question 125
Question
Desktop benchmarks divide into __ broad classes:
Question 126
Question
What does MTTF mean?
Answer
-
mean time to failure
-
mean time to feature
-
mean this to failure
-
my transfers to failure
Question 127
Question
A widely held rule of thumb is that a program spends __ of its execution time in
only __ of the code.
Answer
-
90% 10%
-
50% 50%
-
70% 30%
-
89% 11%
Question 128
Question
(Performance for entire task using the enhancement when possible) / (Performance
for entire task without using the enhancement) is equal to:
Answer
-
Speedup
-
Efficiency
-
Probability
-
Ratio
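A numeric sketch of the speedup ratio in Question 128, written in the Amdahl's Law form; the enhanced fraction (40%) and the enhancement factor (10×) are assumed example values.

```python
# Speedup = performance with the enhancement / performance without it,
# equivalently old execution time / new execution time.
def speedup(old_time, new_time):
    return old_time / new_time

# Assumed: an enhancement makes 40% of the work 10x faster (Amdahl's Law)
fraction_enhanced, factor = 0.4, 10
new_time = (1 - fraction_enhanced) + fraction_enhanced / factor
print(speedup(1.0, new_time))  # -> 1.5625 overall speedup
```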
Question 129
Question
Which of the following descriptions corresponds to static power?
Answer
-
Grows proportionally to the transistor count (whether or not the transistors are switching)
-
Proportional to the product of the number of switching transistors and the switching rate
-
Probability
-
All of the above
Question 130
Question
Which of the following descriptions corresponds to dynamic power?
Answer
-
Proportional to the product of the number of switching transistors and the switching rate
-
Grows proportionally to the transistor count (whether or not the transistors are switching)
-
Certainly a design concern
-
None of the above
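For reference, the standard CMOS approximations behind Questions 129 and 130: static power grows with the transistor count (leakage), while dynamic power depends on switching activity. The capacitance, voltage, frequency, and leakage current below are assumed example values.

```python
# Dynamic power ~ 1/2 * capacitive_load * voltage^2 * switching_frequency
# Static power  ~ leakage_current * voltage (grows with transistor count)
def dynamic_power(cap_load_farads, voltage, freq_hz):
    return 0.5 * cap_load_farads * voltage**2 * freq_hz

def static_power(leakage_current_amps, voltage):
    return leakage_current_amps * voltage

# Assumed example values:
print(dynamic_power(1e-9, 1.0, 2e9))   # -> 1.0 W of switching power
print(static_power(0.5, 1.0))          # -> 0.5 W of leakage power
```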
Question 131
Question
Which of the options written below does NOT increase power consumption?
Answer
-
Increasing multithreading
-
Increasing performance
-
Increasing multiple cores
-
Increasing multithreading (this is how it appears in the question bank)
Question 132
Question
Growing performance gap between peak and sustained performance translates to
increasing energy per unit of performance, when:
Answer
-
The number of transistors switching will be proportional to the peak issue rate, and the
performance is proportional to the sustained rate
-
The number of transistors switching will be proportional to the sustained rate, and the performance is proportional to the peak issue rate
-
The number of transistors switching will be proportional to the sustained rate
-
The performance is proportional to the peak issue rate
Question 133
Question
If we want to sustain four instructions per clock
Answer
-
We must fetch more, issue more, and initiate execution on more than four instructions
-
We must fetch less, issue more, and initiate execution on more than two instructions
-
We must fetch more, issue less, and initiate execution on more than three instructions
-
We must fetch more, issue more, and initiate execution on less than five instructions
Question 134
Question
If speculation were perfect, it could save power, since it would reduce the execution time and
save _____________, while adding some additional overhead to implement
Answer
-
Static power
-
Dynamic power
-
Processing rate
-
Processor state
Question 135
Question
When speculation is not perfect, it rapidly becomes energy inefficient, since it requires
additional ___________ both for the incorrect speculation and for the resetting of the processor
state
Answer
-
Dynamic power
-
Static power
-
Processing rate
-
Processor state
Question 136
Question
Which of these concepts is NOT illustrated in the case study by Wen-mei W. Hwu and John W. Sias?
Answer
-
Achievable ILP with software resource constraints
-
Limited ILP due to software dependences
-
Achievable ILP with hardware resource constraints
-
Variability of ILP due to software and hardware interaction
Question 137
Question
What is a hash table?
Answer
-
Popular data structure for organizing a large collection of data items so that one can quickly
answer questions
-
Popular data structure for updating large collections, so that one can hardly answer questions
-
Popular tables for organizing a large collection of data structure
-
Popular data structure for deleting small collections of data items so that one can hardly answer questions
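A minimal sketch of the kind of structure the correct option describes: a hash table with separate chaining, so lookup questions can be answered quickly. The bucket count of 8 and the example key are assumptions.

```python
# A tiny hash table with separate chaining.
class HashTable:
    def __init__(self, num_buckets=8):        # assumed small table
        self.buckets = [[] for _ in range(num_buckets)]

    def put(self, key, value):
        bucket = self.buckets[hash(key) % len(self.buckets)]
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)      # update an existing key
                return
        bucket.append((key, value))           # otherwise insert a new pair

    def get(self, key):
        bucket = self.buckets[hash(key) % len(self.buckets)]
        for k, v in bucket:
            if k == key:
                return v
        return None                           # key not present

t = HashTable()
t.put("cpi", 1.5)
print(t.get("cpi"))   # -> 1.5
```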
Question 138
Question
Which of these is NOT a characteristic of recent high-performance microprocessors?
Question 139
Question
What is this process called: “Operations execute as soon as their operands are available”?
Question 140
Question
What is the reorder buffer used for?
Answer
-
To pass results among instructions that may be speculated
-
To pass parameters through instructions that may be speculated
-
To get additional registers in the same way as the reservation stations
-
To control registers
Question 141
Question
How many fields does an entry in the ROB contain?
Question 142
Question
Choose correct fields of entry in the ROB:
Answer
-
the instruction type, the destination field, the value field, and the ready field
-
the source type, the destination field, the value field, and the ready field
-
the program type, the ready field, the parameter field, the destination field
-
the instruction type, the destination field, and the ready field
Question 143
Question
Choose the steps of instruction execution:
Answer
-
issue, execute, write result, commit
-
execution, commit, rollback
-
issue, execute, override, exit
-
begin, write, interrupt, commit
Question 144
Question
Which one is not a major flavor of multiple-issue processors?
Answer
-
statistically superscalar processors
-
dynamically scheduled superscalar processors
-
statically scheduled superscalar processors
-
VLIW (very long instruction word) processors
Question 145
Question
Which multiple-issue processor does not have hardware hazard detection?
Answer
-
EPIC
-
Superscalar(dynamic)
-
Superscalar(static)
-
Superscalar(speculative)
Question 146
Question
Examples of EPIC:
Question 147
Question
Examples of superscalar(static):
Question 148
Question
Examples of superscalar(dynamic) :
Question 149
Question
Examples of VLIW/LIW:
Question 150
Question
A branch-prediction cache that stores the predicted address for the next instruction after a
branch
Answer
-
branch-target buffer
-
data buffer
-
frame buffer
-
optical buffer
Question 151
Question
Buffering the actual target instructions allows us to perform an optimization which is called:
Answer
-
branch folding
-
Branch prediction
-
Target instructions
-
Target address
Question 152
Question
Which is not a function of the integrated instruction fetch unit?
Answer
-
Instruction memory commit
-
Integrated branch prediction
-
Instruction prefetch
-
Instruction memory access and buffering
Question 153
Question
What is the simple technique that predicts whether two stores or a load and a store refer to
the same memory address:
Answer
-
Address aliasing prediction
-
Branch prediction
-
Integrated branch prediction
-
Dynamic branch prediction
Question 154
Question
What does RISC stand for?
Answer
-
Reduced Instruction Set Computer
-
Recall Instruction Sell Communication
-
Rename Instruction Sequence Corporation
-
Red Instruction Small Computer
Question 155
Question
The ideal pipeline CPI is a measure of …
Answer
-
the maximum performance attainable by the implementation
-
the maximum performance attainable by the instruction
-
the minimum performance attainable by the implementation
-
the minimum performance attainable by the instruction
Question 156
Question
What is the Pipeline CPI equal to?
Answer
-
Ideal pipeline CPI + Structural stalls + Data hazard stalls + Control stalls
-
Ideal pipeline CPU + Data hazard stalls + Control stalls
-
Ideal pipeline CPU + Ideal pipeline CPI + Data hazard stalls + Control stalls
-
Structural stalls + Data hazard stalls + Control stalls
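A numeric sketch of the correct formula from Question 156; the per-instruction stall contributions are assumed example values.

```python
# Pipeline CPI = Ideal pipeline CPI + Structural stalls + Data hazard stalls + Control stalls
ideal_cpi        = 1.0
structural_stall = 0.05   # assumed stall cycles/instruction from structural hazards
data_stall       = 0.20   # assumed stall cycles/instruction from data hazards
control_stall    = 0.15   # assumed stall cycles/instruction from control hazards

pipeline_cpi = ideal_cpi + structural_stall + data_stall + control_stall
print(pipeline_cpi)       # -> 1.4 cycles per instruction
```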
Question 157
Question
The simplest and most common way to increase the ILP is …?
Answer
-
to exploit parallelism among iterations of a loop
-
to exploit minimalism among iterations of a loop
-
to destroy iterations of a loop
-
to decrease the minimalism of risk
Question 158
Question
The simplest and most common way to increase the ILP is to exploit parallelism among
iterations of a loop. What is this often called?
Question 159
Question
In parallelism, there are three different types of dependences; name them:
Answer
-
data dependences, name dependences, and control dependences
-
data dependences, name dependences, and surname dependences
-
datagram dependences, name dependences, and animal dependences
-
no correct answers
Question 160
Question
What is Name dependence?
Answer
-
name dependence occurs when two instructions use the same register or memory location
-
name dependence occurs when five or more instructions use the same register or memory location
-
name dependence occurs when instructions use the same name
-
All answers are correct
Question 161
Question
When does an output dependence occur?
Answer
-
When instruction i and instruction j write the same register or memory location
-
when instruction i and instruction j write the same name
-
when instruction i and instruction j write the same address or memory location
-
All answers are correct
Question 162
Question
What is RAW (read after write)?
Answer
-
when j tries to read a source before i writes it, so j incorrectly gets the old value
-
when i tries to read a source before j writes it, so j correctly gets the old value
-
when j tries to write a source before i writes it
-
when a tries to write a source before b read it, so a incorrectly gets the old value
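The RAW (true dependence) case from the correct option, shown as two ordinary statements: instruction i writes x and instruction j reads it, so j must execute after i. The variable names and values are assumptions used only for illustration.

```python
# RAW (read after write): instruction j reads a value that instruction i writes.
a, b = 2, 3
x = a + b        # instruction i: writes x
y = x * 10       # instruction j: reads x, so it must run after i writes it
print(y)         # -> 50; moving j before i would make it read a stale value
```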
Question 163
Question
Which of the given is not a hazard?
Question 164
Question
A simple scheme for increasing the number of instructions relative to the branch and
overhead instructions is…?
Answer
-
loop unrolling
-
RAR
-
loop-level
-
loop rolling
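A before/after sketch of loop unrolling, the correct option in Question 164: unrolling by 4 does four elements of work per iteration, so the loop-overhead (branch and index) work per element drops. The unroll factor of 4 and the trip count of 1000 (a multiple of 4) are assumptions, and Python stands in for compiler-level unrolling.

```python
# Original loop: one element of work plus loop overhead per iteration.
def scale(a, s):
    for i in range(len(a)):
        a[i] *= s

# Unrolled by 4: four elements per iteration, one quarter of the loop overhead
# (assumes len(a) is a multiple of 4).
def scale_unrolled(a, s):
    for i in range(0, len(a), 4):
        a[i]     *= s
        a[i + 1] *= s
        a[i + 2] *= s
        a[i + 3] *= s

data = [1.0] * 1000
scale_unrolled(data, 2.0)
print(data[0], data[999])   # -> 2.0 2.0
```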
Question 165
Question
The effect that results from instruction scheduling in large code segments is called…?
Answer
-
register pressure
-
loop unrolling
-
loop-level
-
registration
Question 166
Question
The simplest dynamic branch-prediction scheme is a
Answer
-
branch-prediction buffer
-
branch buffer
-
All answers are correct
-
registration
Question 167
Question
Branch predictors that use the behavior of other branches to make a prediction are called
Question 168
Question
How many branch-selected entries are in a (2,2) predictor that has a total of 8K bits in the prediction buffer, if we know that 2² × 2 × (number of prediction entries selected by the branch) = 8K?
Answer
-
the number of prediction entries selected by the branch = 1K.
-
the number of prediction entries selected by the branch = 2K.
-
the number of prediction entries selected by the branch = 8K.
-
the number of prediction entries selected by the branch = 4K.
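The arithmetic behind the correct option in Question 168: a (2,2) predictor keeps 2-bit counters and uses 2 bits of global history, so each branch-selected entry holds 2² counters of 2 bits each.

```python
# (m, n) predictor: n-bit counters, 2**m counters per branch-selected entry.
total_bits = 8 * 1024    # 8K bits in the prediction buffer
m, n       = 2, 2        # (2,2) predictor

entries = total_bits // (n * 2**m)
print(entries)           # -> 1024, i.e. 1K branch-selected entries
```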
Question 169
Question
What is compulsory in the 3 Cs model?
Answer
-
The very first access to a block cannot be in the cache, so the block must be brought into the cache.
Compulsory misses are those that occur even if you had an infinite cache
-
If the cache cannot contain all the blocks needed during execution of a program, capacity misses
(in addition to compulsory misses) will occur because of blocks being discarded and later
retrieved
-
The number of accesses that miss divided by the number of accesses.
-
None of them
Question 170
Question
What is capacity in the 3 Cs model?
Answer
-
If the cache cannot contain all the blocks needed during execution of a program, capacity misses
(in addition to compulsory misses) will occur because of blocks being discarded and later retrieved
-
The very first access to a block cannot be in the cache, so the block must be brought into the
cache. Compulsory misses are those that occur even if you had an infinite cache.
-
The number of accesses that miss divided by the number of accesses.
-
None of them
Question 171
Question
What is conflict in the 3 Cs model?
Answer
-
If the block placement strategy is not fully associative, conflict misses (in addition to compulsory
and capacity misses) will occur because a block may be discarded and later retrieved if conflicting
blocks map to its set
-
The very first access to a block cannot be in the cache, so the block must be brought into the
cache. Compulsory misses are those that occur even if you had an infinite cache.
-
If the cache cannot contain all the blocks needed during execution of a program, capacity misses
(in addition to compulsory misses) will occur because of blocks being discarded and later
retrieved
-
None of them
Question 172
Question
Choose the benefit of Cache Optimization.
Answer
-
Larger block size to reduce miss rate
-
Bigger caches to increase miss rate
-
Single level caches to reduce miss penalty
-
None of them
Question 173
Question
Choose the strategy of Seventh Optimization.
Question 174
Question
Choose the Eighth Optimization
Answer
-
Merging Write Buffer to Reduce Miss Penalty
-
Critical word first
-
Nonblocking Caches to Increase Cache Bandwidth
-
Trace Caches to Reduce Hit Time
Question 175
Question
Choose the Eleventh Optimization
Answer
-
Compiler-Controlled Prefetching to Reduce Miss Penalty or Miss Rate
-
Merging Write Buffer to Reduce Miss Penalty
-
Hardware Prefetching of Instructions and Data to Reduce Miss Penalty or Miss Rate
-
None of them
Question 176
Question
What is the access time?
Answer
-
Time between when a read is requested and when the desired word arrives
-
The minimum time between requests to memory.
-
Describes the technology inside the memory chips and those innovative, internal organizations
-
None of them
Question 177
Question
What is the cycle time?
Answer
-
The minimum time between requests to memory
-
Time between when a read is requested and when the desired word arrives
-
The maximum time between requests to memory.
-
None of them
Question 178
Question
Single-processor performance improvement has dropped to less than what percentage?
Question 179
Question
How many elements does the Instruction Set Architecture (ISA) have?
Question 180
Question
What is Thread-Level Parallelism?
Answer
-
Exploits either data-level parallelism or task-level parallelism in a tightly coupled hardware model that
allows for interaction among parallel threads.
-
Exploit data-level parallelism by applying a single instruction to a collection of data in parallel.
-
Exploits parallelism among largely decoupled tasks specified by the programmer or operating system.
Question 181
Question
What is the PMD in computer classes?
Answer
-
Personal mobile device
-
Powerful markup distance
-
Percentage map device
Question 182
Question
What is Instruction-Level Parallelism?
Answer
-
Exploits data-level parallelism at modest levels with compiler help using ideas like pipelining and at a
medium levels using ideas like speculative execution.
-
Exploits parallelism among largely decoupled tasks specified by the programmer or operating system
-
Exploit data-level parallelism by applying a single instruction to a collection of data in parallel
Question 183
Question
How many elements are there in Trends in Technology?
Question 184
Question
What are Vector Architectures and Graphics Processing Units (GPUs)?
Answer
-
Exploit data-level parallelism by applying a single instruction to a collection of data in parallel.
-
Exploits data-level parallelism at modest levels with compiler help using ideas like pipelining and at a
medium levels using ideas like speculative execution
-
Exploits parallelism among largely decoupled tasks specified by the programmer or operating system.
Question 185
Question
How many optimizations of cache memory performance are there?
Question 186
Question
Which kind of optimization is “Reducing the Miss Rate”?
Answer
-
Time Optimization
-
Compiler Optimization
-
Performance Optimization
Question 187
Question
What is the Spatial Locality?
Question 188
Question
What is the Temporal Locality?
Question 189
Question
What is the correct formula for Module availability (MTTF – mean time to failure, MTTR – mean time to repair)?
Answer
-
MTTF / (MTTF + MTTR)
-
MTTF * (MTTF + MTTR)
-
MTTF * (MTTF - MTTR)
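A numeric sketch of the correct formula, MTTF / (MTTF + MTTR); the MTTF of 1,000,000 hours and MTTR of 24 hours are assumed example figures.

```python
# Module availability = MTTF / (MTTF + MTTR)
def availability(mttf_hours, mttr_hours):
    return mttf_hours / (mttf_hours + mttr_hours)

# Assumed: a module with MTTF = 1,000,000 hours and MTTR = 24 hours
print(availability(1_000_000, 24))   # -> ~0.999976 (about "four nines")
```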