Load Rebalancing for Distributed File Systems in the Cloud
Keywords:
Load Management, Algorithm Design and Analysis, Cloud Computing Structure

Abstract
Distributed file systems are key building blocks for cloud computing applications based on the MapReduce programming paradigm. In such file systems, nodes simultaneously serve computing and storage functions: a file is partitioned into a number of chunks allocated to distinct nodes so that MapReduce tasks can be performed in parallel over the nodes. In a cloud computing environment, failure is the norm; with so many servers connected, nodes are continually upgraded, replaced, and added to the system, and files can be created, deleted, and edited. This results in load imbalance in the distributed file system; that is, the file chunks are no longer distributed as uniformly as possible among the nodes.
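To make the imbalance concrete, the sketch below (a simplified, hypothetical illustration, not the algorithm proposed in this paper) computes each node's ideal chunk load and greedily migrates chunks from overloaded to underloaded nodes. The node names, chunk identifiers, and the greedy policy are all assumptions made for the example.

```python
# Minimal sketch of chunk-load rebalancing, assuming an in-memory view of
# which chunks each node holds. Real systems (e.g., HDFS) track this state
# in a central namespace server; here a plain dict stands in for it.

def rebalance(chunks_per_node):
    """Greedily move chunks from overloaded to underloaded nodes until
    every node holds at most ceil(total / nodes) chunks. Returns the
    migration plan as (chunk, source, destination) tuples."""
    total = sum(len(c) for c in chunks_per_node.values())
    target = -(-total // len(chunks_per_node))  # ceil of the ideal load
    heavy = [n for n, c in chunks_per_node.items() if len(c) > target]
    light = [n for n, c in chunks_per_node.items() if len(c) < target]
    plan = []
    for src in heavy:
        while len(chunks_per_node[src]) > target and light:
            dst = light[0]
            chunk = chunks_per_node[src].pop()
            chunks_per_node[dst].append(chunk)
            plan.append((chunk, src, dst))
            if len(chunks_per_node[dst]) >= target:
                light.pop(0)  # destination has reached its fair share
    return plan

if __name__ == "__main__":
    # Hypothetical cluster state after file creations and deletions:
    # node A is overloaded while node C holds no chunks at all.
    state = {"A": ["c1", "c2", "c3", "c4", "c5"], "B": ["c6"], "C": []}
    for chunk, src, dst in rebalance(state):
        print(f"move {chunk}: {src} -> {dst}")
```

This centralized, single-pass view is only for illustration; a production rebalancer must also account for replica placement and migration cost, which the distributed setting studied here makes harder.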
References
- J. Dean and S. Ghemawat, "MapReduce: Simplified Data Processing on Large Clusters," in Proc. 6th Symp. Operating Systems Design and Implementation (OSDI'04), Dec. 2004, pp. 137–150.
- S. Ghemawat, H. Gobioff, and S.-T. Leung, "The Google File System," in Proc. 19th ACM Symp. Operating Systems Principles (SOSP’03), Oct. 2003, pp. 29–43.
- Hadoop Distributed File System, http://hadoop.apache.org/hdfs/.
- VMware, http://www.vmware.com/.
- Xen, http://www.xen.org/.
- Apache Hadoop, http://hadoop.apache.org/.
License
Copyright (c) IJSRCSEIT

This work is licensed under a Creative Commons Attribution 4.0 International License.