...
Means that multiple tasks can be executed in overlapping time periods: one task can begin before the preceding one has completed, but the tasks are never running at the same instant. The scheduler assigns each task a time slice and switches contexts between them, which is why this model is complicated to implement and, especially, to debug.
While the current thread or process waits for an input/output operation, a database transaction, or an external program to finish, another thread or process receives the CPU. On the kernel side, the OS interrupts the active task to preempt it:
...
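A minimal sketch of this behavior in Python, using threads (the task names and sleep duration are illustrative, not from the original): three tasks each "wait on IO" for 0.1 seconds, and because the waits overlap, the total wall-clock time stays close to 0.1 seconds rather than 0.3.

```python
import threading
import time

def io_bound_task(name, results):
    # Simulate waiting on IO (e.g. a database query); while this
    # thread sleeps, the scheduler hands the CPU to other threads.
    time.sleep(0.1)
    results.append(name)

results = []
threads = [
    threading.Thread(target=io_bound_task, args=(f"task-{i}", results))
    for i in range(3)
]

start = time.perf_counter()
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start

# The three 0.1 s waits overlap, so elapsed is roughly 0.1 s, not 0.3 s.
print(f"{len(results)} tasks finished in {elapsed:.2f}s")
```

Note that the tasks still run one at a time on the CPU; only their *waiting* overlaps, which is exactly the concurrency-without-parallelism case described above.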
- Distributed systems are among the most important examples of parallel systems. They are essentially independent computers, each with its own memory and IO.
- Process pipelining, where different stages of a computation run simultaneously on different pieces of data.
- Even at the chip level, parallelism can increase the concurrency of operations.
- Using multiple cores on the same computer, which is what makes various edge devices, like mobile phones, practical.
Pitfalls in concurrency and parallelism