...
How Does Concurrency Work?
Concurrency means that multiple tasks can be executed in an overlapping time period. One task can begin before the preceding one completes; however, the two won't be running at the same instant. The CPU assigns a time slice to each task and switches contexts between them as needed. That's why concurrent code is quite complicated to implement and, especially, to debug.
While the current thread or process is waiting for an I/O operation, a database transaction, or an external program to finish, another thread or process receives the CPU. On the kernel side, the OS sends an interrupt to the active task to stop it:
...
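A minimal sketch of this idea in Python, using `time.sleep` as a stand-in for a blocking I/O call (the task names and timings are illustrative, not from the original text). While one thread waits, the others get the CPU, so the total elapsed time is close to one wait, not the sum of all three:

```python
import threading
import time

def io_task(name, results):
    # time.sleep simulates a blocking I/O call (network, disk, database)
    time.sleep(0.2)
    results.append(name)

results = []
start = time.time()
threads = [threading.Thread(target=io_task, args=(f"task-{i}", results))
           for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - start

# The three 0.2 s waits overlap, so elapsed is roughly 0.2 s, not 0.6 s
print(f"finished {len(results)} tasks in {elapsed:.2f}s")
```

Note that this is overlapping execution, not simultaneous execution: in CPython only one thread runs Python bytecode at a time, but threads still interleave efficiently around blocking calls.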
For example, a distributed computing system consists of multiple computers, but it runs as a single system. The computers in the system can be physically close to each other and connected by a local network, or they can be geographically distant and connected by a wide-area network.

Parallelism is a must for performance gains.
...
We can implement parallelism at different levels of abstraction:
- Distributed systems are one of the most important examples of parallel systems. They're basically independent computers with their own memory and I/O.
- Process pipelining.
- Even at the chip level, parallelism can increase concurrency in operations.
- Using multiple cores on the same computer. This makes various edge devices, like mobile phones, possible.
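As a sketch of the multi-core case, we can run a CPU-bound function in separate processes with Python's `multiprocessing` module, so each process can execute on its own core (the function and the input sizes here are illustrative assumptions, not part of the original text):

```python
import multiprocessing as mp
import os

def cpu_task(n):
    # a CPU-bound computation; separate processes can run on separate cores
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # use up to four worker processes, capped by the available core count
    workers = min(4, os.cpu_count() or 1)
    with mp.Pool(processes=workers) as pool:
        results = pool.map(cpu_task, [100_000] * 4)
    print(results)
```

Unlike the threading example above, this is true parallelism: each worker is a separate OS process with its own interpreter, so the computations genuinely run at the same time on a multi-core machine.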