Ajou University repository

Small File Indexing Scheme for HDFS with Erasure Coding
  • TEREFE ANENE BEKUMA
Advisor
Sangyoon Oh
Affiliation
Graduate School, Ajou University
Department
Department of Computer Engineering, Graduate School
Publication Year
2018-08
Publisher
The Graduate School, Ajou University
Keyword
Distributed File Systems; Small File Storage; Hadoop Distributed File System; Erasure Coding
Description
Thesis (Master's) -- Graduate School, Ajou University: Department of Computer Engineering, 2018. 8
Alternative Abstract
The Hadoop Distributed File System (HDFS) is designed to store and manage large files. It keeps the file system metadata in the NameNode's memory for high performance. Since an HDFS cluster has a single NameNode, processing a massive number of small files leads to high memory usage on the NameNode. This is referred to as 'the small file problem'. It occurs because HDFS stores each small file in a separate storage block on a DataNode and maintains individual metadata for it on the NameNode. Researchers have suggested merging small files into a large file the size of one HDFS block to reduce the memory usage of the NameNode, but they considered only HDFS with a contiguous block layout. However, Hadoop adopts a striped block layout when Erasure Coding is enabled. In this layout, a file is divided into 1 MB cells that are distributed across multiple storage blocks on the DataNodes. This creates an opportunity to further reduce the memory usage of small files by increasing the merged file size to fully fill the multiple storage blocks while keeping the metadata size unchanged. Therefore, we propose a new scheme for the small file problem that further reduces the memory usage of the NameNode by exploiting the striped block layout. This introduces a new challenge, since novel file indexing and file extracting methods are needed to access the small files. We introduce a program named Small File Processor (SFP), which performs merging and indexing of small files, and we implement a file extracting algorithm that reads the small files from their corresponding merged file. The experimental results show that the proposed scheme reduces the memory usage of the NameNode and improves the write access speed of small files compared to the default HDFS with Erasure Coding.
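To make the merge-and-index idea concrete, the following is a minimal illustrative sketch, not the thesis's actual SFP implementation, using the standard Hadoop FileSystem API: small files are appended into one merged file, each file's byte offset and length are recorded in an in-memory index, and a small file is later extracted by seeking to its recorded offset. The class, method, and path names (SmallFileMergerSketch, IndexEntry, /tmp/small-files, /tmp/merged.bin) are hypothetical.

```java
// Illustrative sketch of small-file merging and index-based extraction on HDFS.
// Names and paths are hypothetical; this is not the thesis's SFP code.
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class SmallFileMergerSketch {

    /** Index entry: byte offset and length of one small file inside the merged file. */
    static final class IndexEntry {
        final long offset;
        final long length;
        IndexEntry(long offset, long length) { this.offset = offset; this.length = length; }
    }

    /** Appends every file under smallFileDir into mergedFile and records its offset/length. */
    static Map<String, IndexEntry> merge(FileSystem fs, Path smallFileDir, Path mergedFile)
            throws IOException {
        Map<String, IndexEntry> index = new HashMap<>();
        try (FSDataOutputStream out = fs.create(mergedFile, true)) {
            for (FileStatus status : fs.listStatus(smallFileDir)) {
                if (!status.isFile()) continue;
                long offset = out.getPos();                  // where this small file starts
                try (FSDataInputStream in = fs.open(status.getPath())) {
                    IOUtils.copyBytes(in, out, 4096, false); // append the small file's bytes
                }
                index.put(status.getPath().getName(), new IndexEntry(offset, status.getLen()));
            }
        }
        return index;
    }

    /** Reads one small file back out of the merged file using its index entry. */
    static byte[] extract(FileSystem fs, Path mergedFile, IndexEntry entry) throws IOException {
        byte[] data = new byte[(int) entry.length];          // small files fit in memory
        try (FSDataInputStream in = fs.open(mergedFile)) {
            in.seek(entry.offset);                           // jump to the recorded offset
            in.readFully(data);                              // read exactly the recorded length
        }
        return data;
    }

    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path smallFiles = new Path("/tmp/small-files");      // hypothetical input directory
        Path merged = new Path("/tmp/merged.bin");           // hypothetical merged file

        Map<String, IndexEntry> index = merge(fs, smallFiles, merged);
        for (Map.Entry<String, IndexEntry> e : index.entrySet()) {
            byte[] content = extract(fs, merged, e.getValue());
            System.out.printf("%s -> %d bytes%n", e.getKey(), content.length);
        }
    }
}
```

In the thesis's setting the merged file would be sized to fill the multiple striped blocks of an erasure-coded file, so a single metadata entry on the NameNode covers many small files; the index structure used by SFP and its persistence are not shown here.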
Language
eng
URI
https://dspace.ajou.ac.kr/handle/2018.oak/14060
Type
Thesis