---
license: other
task_categories:
- text-classification
tags:
- anomaly-detection
- log-analysis
- hdfs
pretty_name: HDFS v1 Block-Level Logs
size_categories:
- 100K<n<1M
---

# HDFS v1 Block-Level Logs

Each example concatenates all log lines belonging to one HDFS block, with every line formatted as `<level> <component>: <content>`.

Example:

```
INFO dfs.DataNode$DataXceiver: Receiving block blk_-1608999687919862906 src: /10.251.73.220:42557 dest: /10.251.73.220:50010
INFO dfs.DataNode$DataXceiver: Receiving block blk_-1608999687919862906 src: /10.251.73.220:55213 dest: /10.251.73.220:50010
INFO dfs.FSNamesystem: BLOCK* NameSystem.allocateBlock: /mnt/hadoop/mapred/system/job_200811092030_0001/job.jar. blk_-1608999687919862906
INFO dfs.DataNode$PacketResponder: PacketResponder 1 for block blk_-1608999687919862906 terminating
INFO dfs.DataNode$PacketResponder: Received block blk_-1608999687919862906 of size 67108864 from /10.251.73.220
```

## Source Data

- **Original Dataset**: [logfit-project/HDFS_v1](https://huggingface.co/datasets/logfit-project/HDFS_v1)
- **Original Source**: [LogPAI/loghub](https://github.com/logpai/loghub/tree/master/HDFS#hdfs_v1)

## Dataset Creation

1. Loaded the original line-level HDFS_v1 dataset
2. Formatted each log line as `<level> <component>: <content>`
3. Grouped by `block_id` and concatenated log entries (ordered by line number)
4. Aggregated anomaly labels (max per block)
5. Created stratified 80/10/10 train/dev/test splits preserving class distribution

## Citation

```bibtex
@inproceedings{xu2009detecting,
  title={Detecting Large-Scale System Problems by Mining Console Logs},
  author={Xu, Wei and Huang, Ling and Fox, Armando and Patterson, David and Jordan, Michael},
  booktitle={SOSP 2009},
  year={2009}
}

@inproceedings{zhu2023loghub,
  title={Loghub: A Large Collection of System Log Datasets for AI-driven Log Analytics},
  author={Zhu, Jieming and He, Shilin and He, Pinjia and Liu, Jinyang and Lyu, Michael R.},
  booktitle={ISSRE 2023},
  year={2023}
}
```

## License

See the original dataset's license.
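The formatting and grouping steps under *Dataset Creation* (steps 2–4) can be sketched with pandas. This is an illustrative sketch on a tiny synthetic sample, not the actual build script; the column names (`block_id`, `line_id`, `level`, `component`, `content`, `label`) are assumptions for illustration, not necessarily the original schema.

```python
import pandas as pd

# Tiny synthetic sample mimicking a line-level log layout.
# Column names here are illustrative assumptions, not the original schema.
lines = pd.DataFrame({
    "block_id":  ["blk_1", "blk_1", "blk_2"],
    "line_id":   [2, 1, 1],
    "level":     ["INFO", "INFO", "WARN"],
    "component": ["dfs.DataNode$DataXceiver", "dfs.FSNamesystem", "dfs.DataNode"],
    "content":   ["Receiving block blk_1", "allocateBlock blk_1", "Slow block blk_2"],
    "label":     [0, 0, 1],  # 1 = anomalous line
})

# Step 2: format each line as "<level> <component>: <content>"
lines["text"] = lines["level"] + " " + lines["component"] + ": " + lines["content"]

# Steps 3-4: sort by line number, group by block, concatenate the formatted
# lines, and take the max label so any anomalous line marks the whole block.
blocks = (
    lines.sort_values("line_id")
         .groupby("block_id")
         .agg(text=("text", "\n".join), label=("label", "max"))
         .reset_index()
)
```

The max aggregation in step 4 encodes the usual block-level labeling rule: a block is anomalous if at least one of its lines is.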
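The stratified 80/10/10 split in step 5 is commonly produced as two successive stratified splits. A minimal sketch with scikit-learn, using made-up block labels (the real split may have used a different tool or seed):

```python
from sklearn.model_selection import train_test_split

# Hypothetical block-level labels: 90 normal (0), 10 anomalous (1).
labels = [0] * 90 + [1] * 10
indices = list(range(len(labels)))

# First split off 20% as a held-out pool, stratifying on the label;
# then halve that pool into dev and test, again stratified.
train_idx, rest_idx, train_y, rest_y = train_test_split(
    indices, labels, test_size=0.2, stratify=labels, random_state=0
)
dev_idx, test_idx, dev_y, test_y = train_test_split(
    rest_idx, rest_y, test_size=0.5, stratify=rest_y, random_state=0
)
```

Stratifying at both steps preserves the class ratio in every split, which matters here because anomalous blocks are a small minority of the data.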