Concurrency & Parallelism

How does concurrency work?

Concurrency means that multiple tasks can be executed in overlapping time periods. One task can begin before the preceding one has completed; however, they won’t be running at the same instant. The CPU allocates time slices to each task and switches contexts accordingly. That’s why this concept is quite complicated to implement and, especially, to debug.

 

While the current thread or process is waiting for input/output operations, database transactions, or the launch of an external program, another process or thread receives the CPU allocation. On the kernel side, the OS sends an interrupt to the active task to stop it.
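As a rough illustration, here is a minimal Python sketch (task names and wait times are invented) of two I/O-bound tasks whose waiting periods overlap: while one thread is blocked on simulated I/O, the other gets the CPU.

```python
# Minimal concurrency sketch: while one thread waits on simulated I/O
# (time.sleep), the scheduler hands the CPU to the other thread, so the
# tasks overlap in time without running in the same instant.
import threading
import time

def io_bound_task(name, wait_seconds):
    print(f"{name}: started, waiting on I/O...")
    time.sleep(wait_seconds)   # simulated blocking I/O
    print(f"{name}: I/O finished, resuming")

start = time.perf_counter()
threads = [
    threading.Thread(target=io_bound_task, args=("task-1", 1.0)),
    threading.Thread(target=io_bound_task, args=("task-2", 1.0)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Each task waits ~1 second, yet the total elapsed time is close to
# 1 second rather than 2, because the waiting periods overlap.
print(f"elapsed: {time.perf_counter() - start:.2f}s")
```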

If jobs are running on the same CPU core (in a single- or multi-core system), they access the same resources in overlapping time slices. To share the core efficiently, task-scheduling algorithms are used, such as FIFO, SJF (shortest job first), and RR (round robin).
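As a hedged sketch of one of these policies, the toy simulation below (job names, burst times, and the quantum are made up) shows how a round-robin scheduler grants fixed time slices and requeues jobs that still have work left.

```python
# Toy round-robin (RR) scheduler: each job runs for at most one quantum,
# then is preempted and sent to the back of the queue if unfinished.
from collections import deque

def round_robin(jobs, quantum):
    """Return the order in which time slices are granted."""
    queue = deque(jobs.items())       # (job name, remaining burst time)
    schedule = []
    while queue:
        name, remaining = queue.popleft()
        schedule.append(name)         # job runs for min(quantum, remaining)
        remaining -= quantum
        if remaining > 0:             # not finished: requeue at the back
            queue.append((name, remaining))
    return schedule

# Three jobs with burst times of 3, 5 and 2 time units, quantum of 2:
print(round_robin({"A": 3, "B": 5, "C": 2}, quantum=2))
# ['A', 'B', 'C', 'A', 'B', 'B']
```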

How does parallelism work?

Parallelism is the ability to execute independent tasks of a program at the same instant in time. In contrast to concurrent tasks, parallel tasks can run simultaneously on another processor core, another processor, or an entirely different computer, as in a distributed system.

For example, a distributed computing system consists of multiple computers but runs as a single system. The computers in such a system can be physically close to each other and connected by a local network, or they can be distant and connected by a wide-area network. Either way, parallelism is a must for performance gains.
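As a minimal illustration of parallelism on a single machine (the function, inputs, and pool size are placeholders), the sketch below maps a CPU-bound function over a process pool so the work can run simultaneously on separate cores.

```python
# Parallelism sketch: a CPU-bound function mapped over a process pool,
# so independent tasks can execute at the same instant on multiple cores.
from multiprocessing import Pool

def cpu_bound(n):
    # Deliberately heavy arithmetic to keep a core busy.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    inputs = [2_000_000, 2_000_000, 2_000_000, 2_000_000]
    with Pool(processes=4) as pool:            # one worker process per task
        results = pool.map(cpu_bound, inputs)  # tasks execute in parallel
    print(results)
```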

We can implement it at different levels of abstraction:

  • distributed systems are one of the most important examples of parallel systems; they are essentially independent computers, each with its own memory and I/O.

  • process pipelining.

  • even at the chip level, parallelism can increase concurrency in operations.

  • using multiple cores on the same computer. This makes various edge devices, like mobile phones, possible.

 

Pitfalls in concurrency and parallelism

Concurrency and parallelism are complex ideas and require advanced development skills. Implemented carelessly, they introduce risks that can jeopardize a system’s reliability.

In concurrent programs, there can be:

  • deadlocks: situations in which processes block each other through resource acquisition, and none of them makes any progress because each waits for a resource held by another.

  • race conditions: conditions in which a program’s behavior depends on the relative timing or interleaving of multiple threads or processes. One or more of the possible outcomes may be undesirable, resulting in a bug; we refer to this kind of behavior as nondeterministic.

  • starvation: the outcome for a process that is unable to gain regular access to the shared resources it requires to complete a task and is thus unable to make any progress.
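As an illustration of that nondeterminism, the hedged sketch below (thread count and iteration counts are invented) loses updates because two threads interleave an unsynchronized read-modify-write on a shared counter.

```python
# Race condition sketch: two threads perform an unsynchronized
# read-modify-write on a shared counter. A deliberate yield between the
# read and the write makes the interleaving visible, so updates get lost
# and the final value is nondeterministic.
import threading
import time

counter = 0

def unsafe_increment(times):
    global counter
    for _ in range(times):
        current = counter      # read the shared value
        time.sleep(0)          # give the other thread a chance to run here
        counter = current + 1  # write back; may overwrite the other thread's update

threads = [threading.Thread(target=unsafe_increment, args=(50_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Expected 100_000 if the updates were atomic; the actual result is usually
# smaller and varies between runs. Guarding the read-modify-write with a
# threading.Lock makes the result deterministic again.
print(counter)
```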

In parallel programs, there can be memory corruption, memory leaks (objects that remain in the heap although they are no longer used, because the garbage collector is unable to reclaim them), or other errors.