Grid Computing - Virtual SuperComputer

Grid computing uses a group of networked computers that work together as a virtual supercomputer to perform large tasks, such as analyzing huge datasets or modeling the weather. Through the cloud, you can assemble and use vast computer grids for specific time periods and purposes, paying only for what you use and saving the time and expense of purchasing and deploying the necessary resources yourself. By splitting a task across multiple machines, processing time is significantly reduced, which increases efficiency and minimizes wasted resources.
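As a rough illustration of the splitting idea, the sketch below (plain Python, standard library only) divides one large dataset into independent chunks and processes them in parallel; on a real grid, the local process pool would be replaced by remote worker nodes.

from concurrent.futures import ProcessPoolExecutor

def analyze_chunk(chunk):
    # Stand-in for a heavy computation on one slice of the data.
    return sum(x * x for x in chunk)

def split(data, n_chunks):
    # Divide the dataset into roughly equal, independent work units.
    size = max(1, len(data) // n_chunks)
    return [data[i:i + size] for i in range(0, len(data), size)]

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = split(data, n_chunks=8)
    with ProcessPoolExecutor() as pool:            # one worker process per CPU core by default
        partial_results = list(pool.map(analyze_chunk, chunks))
    print(sum(partial_results))                    # combine the partial results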

Unlike parallel computing, grid computing projects typically have no time dependency associated with them. They use computers that are part of the grid only when those machines are idle, and the machines' operators can perform unrelated tasks at any time. Security must be considered when using computer grids, because controls on member nodes are usually very loose. Redundancy should also be built in, since many computers may disconnect or fail during processing.

Grid Computing vs. Supercomputers:

“Distributed” or “grid” computing is, in general, a special type of parallel computing that relies on complete computers (with onboard CPUs, storage, power supplies, network interfaces, etc.) connected to a network (private, public, or the Internet) by a conventional network interface. This commodity-hardware approach trades some efficiency for a much lower cost than designing and constructing a small number of custom supercomputers. The primary performance disadvantage is that the various processors and local storage areas do not have high-speed connections to one another. The arrangement is therefore well suited to applications in which multiple parallel computations can take place independently, without the need to communicate intermediate results between processors. The high-end scalability of geographically dispersed grids is generally favorable, because nodes need relatively little connectivity between them compared with the capacity of the public Internet.

There are also some differences in programming and deployment. It can be costly and difficult to write programs that run in the environment of a supercomputer, which may have a custom operating system or require the program to address concurrency issues. If a problem can be adequately parallelized, a “thin” layer of “grid” infrastructure can allow conventional, standalone programs, each given a different part of the same problem, to run on multiple machines. This makes it possible to write and debug on a single conventional machine, and it eliminates the complications that arise when multiple instances of the same program run in the same shared memory and storage space at the same time.
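To make the “thin layer” idea concrete, here is a minimal sketch in Python. The analyze.py script, the slice_*.dat inputs, and the node numbering are hypothetical; the point is that the application stays an ordinary standalone program, and the grid layer only decides which part of the problem each node receives.

import subprocess
import sys

# Hypothetical list of pre-partitioned work units (one input file per unit).
WORK_UNITS = ["slice_000.dat", "slice_001.dat", "slice_002.dat", "slice_003.dat"]

def run_work_unit(input_file):
    # The unmodified, standalone application is simply invoked on its own
    # part of the problem; nodes never exchange intermediate results.
    subprocess.run(
        [sys.executable, "analyze.py", input_file, input_file + ".out"],
        check=True,
    )

if __name__ == "__main__":
    node_id = int(sys.argv[1])        # this node's index, e.g. 0, 1, 2, ...
    total_nodes = int(sys.argv[2])    # how many nodes share the problem
    # Static partitioning: node k processes every total_nodes-th work unit.
    for unit in WORK_UNITS[node_id::total_nodes]:
        run_work_unit(unit)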

Disadvantages of Grid Computing:

One disadvantage of grid computing is that the computers actually performing the calculations might not be entirely trustworthy. The designers of the system must therefore introduce measures to prevent malfunctioning or malicious participants from producing false, misleading, or erroneous results, and from using the system as an attack vector. This often involves assigning work randomly to different nodes (presumably with different owners) and checking that at least two different nodes report the same answer for a given work unit; discrepancies identify malfunctioning and malicious nodes. However, due to the lack of central control over the hardware, there is no way to guarantee that nodes will not drop out of the network at random times. Some nodes (like laptops or dial-up Internet customers) may also be available for computation but not network communications for unpredictable periods. These variations can be accommodated by assigning large work units (thus reducing the need for continuous network connectivity) and by reassigning work units when a given node fails to report its results in the expected time.
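A rough sketch of the verification scheme described above: each work unit is dispatched to two randomly chosen nodes and accepted only when the answers agree. The compute callable and the node names are placeholders; a real grid would dispatch over the network and track nodes whose results repeatedly disagree.

import random
from collections import Counter

def verify_work_unit(unit, nodes, compute, replicas=2):
    # Run the same work unit on `replicas` distinct, randomly chosen nodes.
    chosen = random.sample(nodes, replicas)
    results = [compute(node, unit) for node in chosen]
    answer, votes = Counter(results).most_common(1)[0]
    if votes == replicas:
        return answer                                   # all replicas agree: accept the result
    raise RuntimeError(f"Result mismatch on unit {unit!r}; suspect nodes {chosen}")

if __name__ == "__main__":
    # Toy demonstration with an in-process "computation".
    square = lambda node, unit: unit * unit
    print(verify_work_unit(7, nodes=["node-a", "node-b", "node-c"], compute=square))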

CPU-scavenging:

CPU-scavenging, cycle-scavenging, or shared computing creates a “grid” from the idle resources in a network of participants (whether worldwide or internal to an organization). Typically, this technique exploits the “spare” instruction cycles resulting from the intermittent inactivity that occurs at night, during lunch breaks, or even during the (comparatively minuscule, though numerous) moments of idle waiting that modern desktop CPUs experience throughout the day, when the computer is waiting on I/O from the user, network, or storage.
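A minimal sketch of such a scavenging policy, assuming the third-party psutil package is installed; detecting keyboard and mouse idle time is platform specific and is left out, so recent CPU load stands in for “the machine looks idle.”

import time
import psutil

IDLE_CPU_THRESHOLD = 20.0      # percent; what counts as "idle enough" is a policy choice
BACKOFF_SECONDS = 30           # how long to wait before re-checking a busy machine

def host_is_idle(sample_seconds=1.0):
    # Treat the host as idle when its recent CPU usage is below the threshold.
    return psutil.cpu_percent(interval=sample_seconds) < IDLE_CPU_THRESHOLD

def scavenge(work_units, process):
    # Only run grid work while the machine appears idle; back off otherwise.
    for unit in work_units:
        while not host_is_idle():
            time.sleep(BACKOFF_SECONDS)
        process(unit)

if __name__ == "__main__":
    scavenge(range(3), process=lambda unit: print("processed unit", unit))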

In practice, participating computers also donate a supporting amount of disk storage space, RAM, and network bandwidth in addition to raw CPU power. Many projects, such as BOINC, use the CPU-scavenging model. Since nodes are likely to go “offline” from time to time, as their owners use their resources for their primary purpose, this model must be designed to handle such contingencies. Creating an opportunistic environment is another implementation of CPU-scavenging, in which a special workload management system harvests idle desktop computers for compute-intensive jobs; this arrangement is also referred to as an Enterprise Desktop Grid (EDG).
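The offline-node problem can be handled with report deadlines, roughly as in the sketch below: every assignment carries a deadline, and units whose node has gone quiet are put back on the pending queue for another node. The one-hour deadline is an arbitrary placeholder.

import time
from dataclasses import dataclass, field

DEADLINE_SECONDS = 3600        # placeholder: how long a node may hold a unit before it is requeued

@dataclass
class Assignment:
    unit_id: int
    node: str
    issued_at: float = field(default_factory=time.time)

def requeue_expired(assignments, pending_queue, now=None):
    # Move overdue assignments back onto the pending queue and keep the rest.
    now = time.time() if now is None else now
    still_active = []
    for a in assignments:
        if now - a.issued_at > DEADLINE_SECONDS:
            pending_queue.append(a.unit_id)     # the node went offline or is too slow
        else:
            still_active.append(a)
    return still_active

if __name__ == "__main__":
    pending = []
    active = [Assignment(1, "laptop-07", issued_at=time.time() - 7200),   # overdue
              Assignment(2, "desk-03")]                                   # still within its deadline
    active = requeue_expired(active, pending)
    print("requeued:", pending, "still active:", [a.unit_id for a in active])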

Generally, cyber security is the practice of safeguarding computers, servers, and networks from malicious attacks, yet many malware attacks still occur. Innumerable attacks are executed on social media platforms such as Instagram, Twitter, and Facebook, where large amounts of personal data are compromised. To counter these cyberattacks, blockchain technology integrated with cyber security can be implemented in such messaging systems; it potentially strengthens cybersecurity and turns a suspicious environment into a trusted one.

For instance, HTCondor, the open-source high-throughput computing software framework for coarse-grained distributed parallelization of computationally intensive tasks, can be configured to use only desktop machines where the keyboard and mouse are idle, effectively harnessing wasted CPU power from otherwise idle desktop workstations. Like other full-featured batch systems, HTCondor provides a job queueing mechanism, scheduling policy, priority scheme, resource monitoring, and resource management. It can be used to manage workload on a dedicated cluster of computers, or it can seamlessly integrate both dedicated resources (rack-mounted clusters) and non-dedicated desktop machines (cycle scavenging) into one computing environment.
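As a rough illustration of the job-queueing side, here is a minimal sketch using HTCondor's Python bindings (the htcondor package); it assumes a schedd is reachable on the local machine, and the analyze.py script and its input file are hypothetical. The idle-desktop policy itself is normally enforced in the execute nodes' configuration rather than in the submit description.

import htcondor

# Describe one job for the queue; the file names here are placeholders.
job = htcondor.Submit({
    "executable": "/usr/bin/python3",
    "arguments": "analyze.py slice_000.dat",
    "output": "job.out",
    "error": "job.err",
    "log": "job.log",
})

schedd = htcondor.Schedd()       # connect to the local job queue
result = schedd.submit(job)      # place the job in the queue
print("submitted cluster", result.cluster())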

Conclusion

Grid computing has numerous advantages. By networking ordinary computers, it can effectively act as a virtual supercomputer. Grid computing has become one of the major trends in modern technology.

“Blockchain technology can change the world more than people imagine.”