papers AI Learner

BigDataBench: a Big Data Benchmark Suite from Web Search Engines

2013-07-01
Wanling Gao, Yuqing Zhu, Zhen Jia, Chunjie Luo, Lei Wang, Zhiguo Li, Jianfeng Zhan, Yong Qi, Yongqiang He, Shiming Gong, Xiaona Li, Shujie Zhang, Bizhu Qiu

Abstract

This paper presents our joint research efforts on big data benchmarking with several industrial partners. Considering the complexity, diversity, workload churn, and rapid evolution of big data systems, we take an incremental approach to big data benchmarking. As a first step, we focus on search engines, the most important domain of Internet services in terms of page views and daily visitors. However, search engine service providers treat their data, applications, and web access logs as business confidential, which prevents outsiders from building benchmarks. To overcome these difficulties, together with several industry partners we extensively investigated open-source search engine solutions and obtained permission to use anonymized Web access logs. Moreover, after two years of effort, we created a semantic search engine named ProfSearch (available from this http URL). These efforts pave the way for our big data benchmark suite built from search engines, BigDataBench, which is released on the web page (this http URL). We report a detailed analysis of search engine workloads and present our benchmarking methodology. An innovative data generation methodology and tool are proposed to generate scalable volumes of big data from a small seed of real data while preserving the semantics and locality of the data. We also preliminarily report two case studies using BigDataBench for both system and architecture research.

URL

https://arxiv.org/abs/1307.0320

PDF

https://arxiv.org/pdf/1307.0320

