Secure Data Duplication Process for Better Performance of Primary Databases

Authors (2): Mohd Nadeem, Md Ateeq Ur Rahman

In real-time scenarios, data duplication is common, but deduplication is rarely implemented dynamically. The purpose of this paper is to study data deduplication and its effect on performance, especially when dealing with remote servers. Remote servers are typically not capable of detecting duplicate data, since they are designed simply to accept data from many users around the globe. Only at the client side can a scheme be implemented that detects duplication and informs the data owner, thereby conserving cloud infrastructure. Detecting duplicated data in cloud servers frees resources and improves performance. Storing data on remote servers also requires attention to both security and consistency. With the proposed approach, the data owner can check, before uploading any new content from the client side, whether that data is already stored in the cloud server. By introducing a novel technique, this paper achieves the goal of detecting and reporting data duplication in the cloud server before outsourcing.
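The client-side detection idea described in the abstract can be sketched as follows. This is an illustrative outline only: the fixed block size, the use of SHA-256 digests as tokens, and the `server_tokens` set standing in for a query to the cloud provider are all assumptions, not the paper's exact SecureDeDup protocol.

```python
import hashlib

BLOCK_SIZE = 4096  # assumed fixed block size; the paper's scheme may differ


def block_tokens(data: bytes, block_size: int = BLOCK_SIZE):
    """Split the data into fixed-size blocks and hash each block into a token."""
    return [hashlib.sha256(data[i:i + block_size]).hexdigest()
            for i in range(0, len(data), block_size)]


def detect_duplicates(data: bytes, server_tokens: set):
    """Client-side check: partition the block tokens into duplicates
    (already present on the server) and new blocks that need uploading."""
    tokens = block_tokens(data)
    duplicates = [t for t in tokens if t in server_tokens]
    to_upload = [t for t in tokens if t not in server_tokens]
    return duplicates, to_upload


# server_tokens would normally come from querying the cloud server;
# here it is a local stand-in set for illustration.
server_tokens = set(block_tokens(b"A" * 8192))
dups, new = detect_duplicates(b"A" * 4096 + b"B" * 4096, server_tokens)
```

In this sketch the data owner learns, before outsourcing, which blocks are duplicates (`dups`) and uploads only the genuinely new blocks (`new`), saving both bandwidth and cloud storage.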

Authors and Affiliations

Mohd Nadeem
M.Tech Scholar, Department of Computer Science & Engineering, Shadan College of Engineering & Technology, Hyderabad, Telangana, India
Md Ateeq Ur Rahman
Professor, Department of Computer Science & Engineering, Shadan College of Engineering & Technology, Hyderabad, Telangana, India

Keywords

Data Duplication, Deduplication, iDedup, Select iDedup, SecureDeDup, Data Blocks, Token Number, Infrastructure as a Service.

References

  1. M. Fu, D. Feng, Y. Hua, X. He, Z. Chen, W. Xia, F. Huang, and Q. Liu, "Accelerating Restore and Garbage Collection in Deduplication-based Backup Systems via Exploiting Historical Information," in USENIX'14, Jun. 2014.
  2. J. Lofstead, M. Polte, G. Gibson, S. Klasky, K. Schwan, R. Oldfield, M. Wolf, and Q. Liu, "Six Degrees of Scientific Data: Reading Patterns for Extreme Scale Science IO," in HPDC'11, Jun. 2011.
  3. C. Zhang, X. Yu, A. Krishnamurthy, and Randolph Y. Wang, "Configuring and Scheduling an Eager-Writing Disk Array for a Transaction Processing Workload," in FAST'02, Jan. 2002.
  4. F. Chen, T. Luo, and X. Zhang, "CAFTL: A Content-Aware Flash Translation Layer Enhancing the Lifespan of Flash Memory based Solid State Drives," in FAST'11, pages 77-90, Feb. 2011.
  5. E. Rozier and W. Sanders, "A Framework for Efficient Evaluation of the Fault Tolerance of Deduplicated Storage Systems," in DSN'12, Jun. 2012.

Publication Details

Published in : Volume 2 | Issue 6 | November-December 2017
Date of Publication : 2017-12-31
License : This work is licensed under a Creative Commons Attribution 4.0 International License.
Page(s) : 159-163
Manuscript Number : CSEIT172666
Publisher : Technoscience Academy

ISSN : 2456-3307

Cite This Article :

Mohd Nadeem, Md Ateeq Ur Rahman, "Secure Data Duplication Process for Better Performance of Primary Databases", International Journal of Scientific Research in Computer Science, Engineering and Information Technology (IJSRCSEIT), ISSN : 2456-3307, Volume 2, Issue 6, pp.159-163, November-December-2017.