How Does Concurrency Work?
Concurrency means that multiple tasks can be executed in an overlapping time period. One of the tasks can begin before the preceding one has completed; however, they won't be running at the same instant. The CPU assigns a time slice to each task and switches contexts accordingly. That's why this concept is quite complicated to implement and, above all, to debug.
While the current thread or process is waiting for input/output operations, a database transaction, or the launch of an external program, another process or thread receives the CPU allocation. On the kernel side, the OS sends an interrupt to the active task to stop it.
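As a minimal sketch of this interleaving, written in Go with made-up task names and wait times, the program below confines the runtime to a single OS thread: the two goroutines overlap in time while one waits on simulated I/O, yet they never execute at the same instant.

```go
package main

import (
	"fmt"
	"runtime"
	"time"
)

// Simulate an I/O-bound task: while it "waits", the scheduler
// hands the single OS thread to another goroutine.
func task(name string, wait time.Duration, done chan<- string) {
	fmt.Println(name, "started")
	time.Sleep(wait) // stand-in for a blocking I/O call
	done <- name
}

func main() {
	// Restrict execution to one OS thread: tasks overlap in time
	// (concurrency) but never run at the same instant (no parallelism).
	runtime.GOMAXPROCS(1)

	done := make(chan string)
	go task("A", 100*time.Millisecond, done)
	go task("B", 50*time.Millisecond, done)

	// B usually finishes first: it began before A completed.
	fmt.Println(<-done, "finished")
	fmt.Println(<-done, "finished")
}
```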
When jobs run on the same CPU core (whether the machine is single- or multi-core), they compete for the same resources. To share the core efficiently, a task-scheduling algorithm is needed; common choices are FIFO, SJF (Shortest Job First), and RR (Round Robin).
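As a toy illustration of Round Robin, the sketch below (the job type, its fields, and the quantum value are hypothetical) gives each task a fixed time slice and sends preempted tasks to the back of the queue until they finish:

```go
package main

import "fmt"

// A toy task: a name and the CPU time it still needs.
type job struct {
	name      string
	remaining int // time units of work left
}

// roundRobin gives each job a fixed quantum, then moves to the next,
// cycling until every job has finished.
func roundRobin(jobs []job, quantum int) {
	queue := append([]job(nil), jobs...)
	for len(queue) > 0 {
		j := queue[0]
		queue = queue[1:]
		slice := quantum
		if j.remaining < slice {
			slice = j.remaining
		}
		j.remaining -= slice
		fmt.Printf("ran %s for %d units (%d left)\n", j.name, slice, j.remaining)
		if j.remaining > 0 {
			queue = append(queue, j) // preempted: back of the queue
		}
	}
}

func main() {
	roundRobin([]job{{"A", 5}, {"B", 2}, {"C", 4}}, 2)
}
```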
How Does Parallelism Work?
Parallelism is the ability to execute independent tasks of a program at the same instant of time. Unlike concurrent tasks, parallel tasks can run simultaneously on another processor core, another processor, or an entirely different machine, such as a node of a distributed system.
For example, a distributed computing system consists of multiple computers, yet it runs as a single system. The computers in such a system can be physically close to each other and connected by a local network, or they can be geographically distant and connected by a wide-area network. Parallelism is a must for performance gains.
We can implement it at different levels of abstraction:
- Distributed systems are one of the most important examples of parallel systems. They're essentially independent computers with their own memory and I/O.
- Process pipelining.
- Even at the chip level, parallelism can increase concurrency in operations.
- Using multiple cores on the same computer, which makes various edge devices, like mobile phones, possible (see the sketch after this list).
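To make the multi-core case concrete, here is a minimal Go sketch; the parallelSum helper and its chunking scheme are assumptions for illustration, not a standard API. It splits a sum across one goroutine per available core, so on a multi-core machine the workers truly run at the same instant:

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// parallelSum splits the slice into one chunk per CPU core and
// sums the chunks in parallel.
func parallelSum(nums []int) int {
	workers := runtime.NumCPU() // one goroutine per available core
	chunk := (len(nums) + workers - 1) / workers

	var wg sync.WaitGroup
	partial := make([]int, workers)
	for w := 0; w < workers; w++ {
		lo := w * chunk
		hi := lo + chunk
		if lo >= len(nums) {
			break
		}
		if hi > len(nums) {
			hi = len(nums)
		}
		wg.Add(1)
		go func(w, lo, hi int) {
			defer wg.Done()
			for _, n := range nums[lo:hi] {
				partial[w] += n // each worker owns its own slot
			}
		}(w, lo, hi)
	}
	wg.Wait()

	total := 0
	for _, p := range partial {
		total += p
	}
	return total
}

func main() {
	nums := make([]int, 1_000_000)
	for i := range nums {
		nums[i] = 1
	}
	fmt.Println(parallelSum(nums)) // prints 1000000
}
```

Because each worker writes only to its own slot of the partial slice, no mutex is needed; one goroutine per core keeps the work genuinely parallel rather than merely concurrent.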