Allocating Work Scheduler for Various Processors by using Map Reducing

Authors (2) : P. Meghana, G. Sivaranjan

Abstract

The performance of modern multi-core processors is often constrained by a given power budget that forces designers to evaluate different trade-offs, e.g., to choose between many slow, power-efficient cores, fewer fast, power-hungry cores, or a mix of the two. Here, we model and evaluate a new Hadoop scheduler, called DyScale, that exploits the capabilities offered by heterogeneous cores within a single multi-core processor to achieve a variety of performance objectives. A typical MapReduce workload contains jobs with different performance goals: large, batch jobs that are throughput-oriented, and smaller interactive jobs that are response-time sensitive. Heterogeneous multi-core processors enable the creation of virtual resource pools, based on "slow" and "fast" cores, for multi-class priority scheduling. Since the same data can be accessed with either "slow" or "fast" slots, spare resources (slots) can be shared between the different resource pools. Using measurements on a real experimental testbed and via simulation, we argue for heterogeneous multi-core processors: they achieve faster (up to 40%) processing of small, interactive MapReduce jobs while offering improved throughput (up to 40%) for large, batch jobs. We evaluate the performance benefits of DyScale against the FIFO and Capacity job schedulers that are widely used in the Hadoop community.
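The core idea above — two slot pools built over "fast" and "slow" cores, with interactive jobs preferring fast slots, batch jobs preferring slow slots, and spare slots lent across pools — can be illustrated with a toy sketch. This is not the DyScale implementation; the class name, pool labels, and dispatch policy are illustrative assumptions for a minimal two-pool slot scheduler.

```python
from collections import deque

class TwoPoolScheduler:
    """Toy sketch (not DyScale itself): interactive jobs prefer 'fast'
    slots, batch jobs prefer 'slow' slots; spare slots in the other
    pool may be borrowed when the preferred pool is exhausted."""

    def __init__(self, fast_slots, slow_slots):
        self.free = {"fast": fast_slots, "slow": slow_slots}
        self.queues = {"interactive": deque(), "batch": deque()}

    def submit(self, job_id, job_class):
        self.queues[job_class].append(job_id)

    def _take_slot(self, preferred, fallback):
        # Use the preferred pool first; borrow a spare slot otherwise.
        for pool in (preferred, fallback):
            if self.free[pool] > 0:
                self.free[pool] -= 1
                return pool
        return None

    def dispatch(self):
        """Place queued jobs onto slots; returns [(job_id, pool), ...].
        Interactive jobs are served first (multi-class priority)."""
        placements = []
        prefs = {"interactive": ("fast", "slow"), "batch": ("slow", "fast")}
        for job_class, (preferred, fallback) in prefs.items():
            queue = self.queues[job_class]
            while queue:
                pool = self._take_slot(preferred, fallback)
                if pool is None:
                    break  # no slot left anywhere for this class
                placements.append((queue.popleft(), pool))
        return placements

sched = TwoPoolScheduler(fast_slots=2, slow_slots=2)
for job_id, job_class in [("q1", "interactive"), ("b1", "batch"),
                          ("q2", "interactive"), ("b2", "batch"),
                          ("b3", "batch")]:
    sched.submit(job_id, job_class)
# Both interactive jobs land on fast slots, two batch jobs on slow
# slots; b3 stays queued until a slot frees up.
print(sched.dispatch())
```

Under this policy the response-time-sensitive jobs monopolize the fast cores while batch work keeps the slow cores busy, which is the intuition behind the dual speed-up/throughput gains reported in the abstract.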

Authors and Affiliations

P. Meghana
Student, Department of Master of Computer Applications, Rayalaseema Institute of Information and Management Sciences, Tirupati, India

G. Sivaranjan
Assistant Professor, Department of Master of Computer Applications, Rayalaseema Institute of Information and Management Sciences, Tirupati, India

Keywords : MapReduce, Hadoop, heterogeneous systems, scheduling, performance, power.


Publication Details

Published in : Volume 4 | Issue 2 | March-April 2018
Date of Publication : 2018-03-31
License : This work is licensed under a Creative Commons Attribution 4.0 International License.
Page(s) : 476-480
Manuscript Number : CSEIT184183
Publisher : Technoscience Academy

ISSN : 2456-3307

Cite This Article :

P. Meghana, G. Sivaranjan, "Allocating Work Scheduler for Various Processors by using Map Reducing", International Journal of Scientific Research in Computer Science, Engineering and Information Technology (IJSRCSEIT), ISSN : 2456-3307, Volume 4, Issue 2, pp. 476-480, March-April 2018.
Journal URL :

