Scale-up or Scale-out

One of the core pillars of the Tensor Networks architecture is the convergence of the data network with high-performance computing to lower application latency for distributed parallel processing. This architecture enables the Matrix-Platform to scale up and scale out application services elastically, across participating devices in a cluster or across a private WAN of participating nodes delivering high-performance computing services. We deliver this in our Matrix-Private Cloud solution, which empowers an enterprise to extract the maximum computational benefit from its assets.


The following overview is quoted from Wikipedia's article on parallel computing: https://en.wikipedia.org/wiki/Parallel_computing


Parallel computing is a type of computation where many calculations or the execution of processes are carried out simultaneously.[1] Large problems can often be divided into smaller ones, which can then be solved at the same time. There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism has long been employed in high-performance computing, but has gained broader interest due to the physical constraints preventing frequency scaling.[2] As power consumption (and consequently heat generation) by computers has become a concern in recent years,[3] parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multi-core processors.[4]
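To make the "divide a large problem into smaller ones solved at the same time" idea concrete, here is a minimal sketch of data parallelism in Python. It is an illustration rather than anything from the quoted article or the Matrix-Platform: the problem size, chunking scheme, and worker count are arbitrary choices made for the example.

```python
# Minimal data-parallelism sketch: a large summation is split into chunks,
# worker processes compute the chunks simultaneously, and the partial
# results are combined at the end.
from multiprocessing import Pool

def partial_sum(bounds):
    """Sum the integers in the half-open range [start, stop)."""
    start, stop = bounds
    return sum(range(start, stop))

if __name__ == "__main__":
    n = 10_000_000          # size of the "large problem" (illustrative)
    workers = 4             # number of parallel workers (illustrative)
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], n)   # ensure the last chunk reaches n

    with Pool(processes=workers) as pool:
        partials = pool.map(partial_sum, chunks)   # chunks run in parallel

    print(sum(partials))    # combine the partial results
```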

Parallel computing is closely related to concurrent computing—they are frequently used together, and often conflated, though the two are distinct: it is possible to have parallelism without concurrency (such as bit-level parallelism), and concurrency without parallelism (such as multitasking by time-sharing on a single-core CPU).[5][6] In parallel computing, a computational task is typically broken down into several, often many, very similar sub-tasks that can be processed independently and whose results are combined afterwards, upon completion. In contrast, in concurrent computing, the various processes often do not address related tasks; when they do, as is typical in distributed computing, the separate tasks may have a varied nature and often require some inter-process communication during execution.
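The sketch below (again an illustration under assumed names, not part of the quoted article) runs the same CPU-bound sub-tasks two ways with Python's concurrent.futures: a thread pool, where work on a single CPython interpreter is interleaved by time-sharing (concurrency without true parallelism, because of the global interpreter lock), and a process pool, where the independent sub-tasks run simultaneously and their results are combined afterwards (parallelism).

```python
# Hypothetical illustration of concurrency vs. parallelism: identical
# CPU-bound sub-tasks executed with threads (time-shared on one interpreter)
# and with processes (truly simultaneous), results combined at the end.
import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def busy(n):
    """A small CPU-bound sub-task: sum of squares below n."""
    return sum(i * i for i in range(n))

def run(executor_cls, tasks):
    start = time.perf_counter()
    with executor_cls(max_workers=4) as ex:
        results = list(ex.map(busy, tasks))   # independent sub-tasks
    return sum(results), time.perf_counter() - start   # combine afterwards

if __name__ == "__main__":
    tasks = [2_000_000] * 4
    _, t_threads = run(ThreadPoolExecutor, tasks)
    _, t_procs = run(ProcessPoolExecutor, tasks)
    print(f"threads:   {t_threads:.2f}s (time-shared, concurrency only)")
    print(f"processes: {t_procs:.2f}s (sub-tasks run in parallel)")
```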


