AP Computer Science Principles
Big Idea 4: Computer Systems and Networks
4.4 Parallel and Distributed Computing
Parallel and distributed computing both aim to increase computational efficiency.
True
Multiple processors or cores in parallel computing perform tasks in
parallel
What is a key advantage of distributed computing in handling workloads?
Scalability
What are parallel algorithms designed for in parallel computing?
Multiple processors
Fault tolerance in distributed computing ensures operation continues even if some computers fail.
True
What type of memory does distributed computing use?
Distributed
What is the primary goal of parallel computing?
Reduce execution time
Distributed computing uses message-based synchronization.
True
Parallel computing requires message-based synchronization.
False
What type of memory architecture is used in parallel computing?
Shared memory
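A minimal sketch of the shared-memory model in Python (the counter and worker names are illustrative, not from the card set): two processes update one integer that lives in memory both can see.

```python
from multiprocessing import Process, Value

def add_many(counter, times):
    # Both workers update the SAME memory location.
    for _ in range(times):
        with counter.get_lock():  # lock-based sync, not message passing
            counter.value += 1

if __name__ == "__main__":
    counter = Value("i", 0)  # an int placed in shared memory
    workers = [Process(target=add_many, args=(counter, 100_000))
               for _ in range(2)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print(counter.value)  # 200000: two processors, one shared memory
```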
What is the primary method of communication in distributed computing?
Messages
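A minimal sketch of message-based communication (the queue stands in for a real network link, which is an assumption of this example): the two processes share no variables and exchange only messages.

```python
from multiprocessing import Process, Queue

def worker(inbox, outbox):
    msg = inbox.get()          # receive a message
    outbox.put(msg.upper())    # reply with another message

if __name__ == "__main__":
    to_worker, from_worker = Queue(), Queue()
    p = Process(target=worker, args=(to_worker, from_worker))
    p.start()
    to_worker.put("hello")     # no shared state, only messages
    print(from_worker.get())   # HELLO
    p.join()
```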
What is a limitation of message-based synchronization in distributed computing?
Increased complexity
Parallel computing reduces execution time by using multiple processors simultaneously.
True
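A minimal sketch of that idea using a Python process pool (the slow_square task is illustrative): four workers evaluate the calls simultaneously instead of one after another.

```python
from multiprocessing import Pool

def slow_square(n):
    # Pretend this is an expensive calculation; running several
    # copies at once is what cuts the total execution time.
    return n * n

if __name__ == "__main__":
    with Pool(processes=4) as pool:              # 4 workers in parallel
        print(pool.map(slow_square, range(8)))   # [0, 1, 4, ..., 49]
```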
Parallel computing and distributed computing are interchangeable approaches for all computing tasks.
False
The Distributed Memory Model uses message
passing
Match the architecture with its example:
Shared Memory ↔️ SMP systems
Distributed Memory ↔️ Clusters, Supercomputers
Cloud-Based ↔️ AWS, Azure
Client-Server ↔️ Web servers
The Distributed Memory Model is less scalable than the Shared Memory Model.
False
Which parallel algorithm is used for sorting a list by splitting it into sublists and merging them in parallel?
Parallel Merge Sort
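A minimal Parallel Merge Sort sketch, assuming a single two-way split (real implementations recurse further): the two sublists are sorted in parallel processes and then merged.

```python
from multiprocessing import Pool

def merge(left, right):
    # Sequential merge of two already-sorted lists.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    return out + left[i:] + right[j:]

if __name__ == "__main__":
    data = [5, 3, 8, 1, 9, 2, 7, 4]
    halves = [data[:4], data[4:]]
    with Pool(2) as pool:
        left, right = pool.map(sorted, halves)  # sort sublists in parallel
    print(merge(left, right))  # [1, 2, 3, 4, 5, 7, 8, 9]
```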
Match the computing type with its feature:
Parallel Computing ↔️ Multiple processors or cores
Distributed Computing ↔️ Networked computers
Parallel computing has higher fault tolerance compared to distributed computing.
False
Distributed computing is suitable for problems requiring high fault tolerance.
True
Synchronization in parallel computing is
complex
Synchronization in distributed computing is
message-based
Scalability and fault tolerance in distributed computing are achieved through message-based synchronization.
True
Match the parallel architecture with its description:
Shared Memory Model ↔️ Processors share a single memory
Distributed Memory Model ↔️ Processors have their own memory
In the Peer-to-Peer (P2P) model, computers communicate directly without a central server.
True
Cloud-based systems provide infrastructure and resources over the
internet
Arrange the common design patterns for parallel algorithms in their correct order.
1️⃣ Divide and Conquer
2️⃣ MapReduce
3️⃣ Pipeline Processing
The Parallel Prefix Sum algorithm computes cumulative
sums
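A sketch of one way to compute it, a Hillis–Steele-style scan (an assumption; other prefix-sum algorithms exist): after about log2(n) sweeps, position i holds the sum of the first i + 1 elements, and every addition within a sweep is independent, so a parallel machine could run them simultaneously.

```python
def prefix_sums(xs):
    out = list(xs)
    step = 1
    while step < len(out):
        # Each addition in this sweep could run on its own processor.
        out = [out[i] + (out[i - step] if i >= step else 0)
               for i in range(len(out))]
        step *= 2
    return out

print(prefix_sums([3, 1, 4, 1, 5]))  # [3, 4, 8, 9, 14]
```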
Leader election improves efficiency through centralized
control
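A minimal sketch of one common election rule (an assumption, not the only algorithm): the node with the highest ID wins, and every node computes the same winner, so all agree on who centralizes control.

```python
# Illustrative node IDs; a real system would exchange these as messages.
node_ids = [17, 4, 23, 9]
leader = max(node_ids)      # every node picks the same highest ID
print(f"Node {leader} coordinates the others")  # Node 23 coordinates the others
```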
Network topologies describe the physical and logical arrangement of
nodes
A mesh topology offers high fault
tolerance
Steps involved in replication for fault tolerance:
1️⃣ Create multiple copies of data
2️⃣ Synchronize data across copies
3️⃣ Ensure consistency and redundancy
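A toy sketch of those three steps (the in-memory dictionaries stand in for copies on separate machines, which is an assumption of this example):

```python
primary = {"balance": 100}

# 1. Create multiple copies of the data.
replicas = [dict(primary) for _ in range(3)]

# 2. Synchronize: apply every update to all copies.
def write(key, value):
    primary[key] = value
    for r in replicas:
        r[key] = value

write("balance", 250)

# 3. Consistency and redundancy: any copy can answer if others fail.
assert all(r == primary for r in replicas)
print(replicas[0]["balance"])  # 250, even if the primary is lost
```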
Parallel and distributed systems differ in terms of memory and
location
How does parallel computing reduce execution time?
Multiple calculations simultaneously
Match the term with its definition:
Parallel computing ↔️ Simultaneous execution of tasks
Multiple processors ↔️ Processing units in parallel computing
Synchronization mechanisms ↔️ Tools to coordinate parallel tasks
Synchronization mechanisms in parallel computing ensure data consistency and task
order
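A minimal sketch of one such mechanism, a lock (the deposit task is illustrative): it keeps a shared total consistent while two threads update it at the same time.

```python
import threading

total = 0
lock = threading.Lock()

def deposit(times):
    global total
    for _ in range(times):
        with lock:        # only one thread may update at a time
            total += 1

threads = [threading.Thread(target=deposit, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(total)  # 200000 every run; without the lock, updates could be lost
```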
In parallel computing, tasks are executed within a single
system
What type of synchronization is used in distributed computing?
Message-based
Match the computing type with its location:
Parallel computing ↔️ Single system
Distributed computing ↔️ Multiple networked computers