FlashPage: A read cache for low-latency SSDs in web proxy servers
Engineering Science and Technology, an International Journal ( IF 5.7 ) Pub Date : 2024-02-19 , DOI: 10.1016/j.jestch.2024.101639
Junhee Ryu , Dong Kun Noh , Kyungtae Kang

The paper introduces FlashPage, a high-speed SSD caching system designed for ultra-fast media, with the goal of accelerating web page delivery in proxy servers. Traditional SSD caching schemes, designed primarily for slow HDD-based primary storage, perform poorly when capacity-class SSDs serve as the primary storage, leaving the performance of the caching media underexploited. To address this, FlashPage operates within the Linux virtual filesystem layer, shortening the hit-handling path and minimizing lookup overhead, and it uses a compact radix tree to locate cached data quickly. Together, these techniques reduce the software overhead of a 4 KB read hit by more than a factor of five. FlashPage also employs novel admission and eviction policies that minimize flash wear while maintaining a high hit rate. Operating as a second-level storage cache, it predicts the hotness of potential demotion candidates in the first-level storage cache (the page cache), achieving a 10.1% higher hit rate and 10.4% less write traffic than LRU. Evaluations with the Varnish and Squid HTTP caches demonstrate its effectiveness: web requests are processed up to 29.6% and 38.2% faster than with Bcache and DM-Cache, the state-of-the-art caching schemes in the mainline Linux kernel.
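The abstract mentions two mechanisms without detailing them: a compact radix tree for locating cached data, and an admission policy that predicts the hotness of pages demoted from the page cache. The sketch below is illustrative only, with hypothetical class and method names; it is not FlashPage's kernel implementation. It models a fixed-fanout radix tree mapping 4 KB page indices to SSD cache slots, and a simple stand-in admission filter that admits a demoted page only if it was re-referenced while resident in the first-level cache.

```python
# Illustrative sketch of the two ideas named in the abstract. All names
# (RadixTree, HotnessAdmission, etc.) are hypothetical, not FlashPage's API.

class RadixTree:
    """Fixed-fanout radix tree over page indices, 6 bits per level."""
    FANOUT_BITS = 6
    LEVELS = 3  # covers indices below 2**18 in this toy example

    def __init__(self):
        self.root = {}

    def _path(self, index):
        # Yield one 6-bit key per level, most-significant first.
        for shift in range((self.LEVELS - 1) * self.FANOUT_BITS, -1,
                           -self.FANOUT_BITS):
            yield (index >> shift) & ((1 << self.FANOUT_BITS) - 1)

    def insert(self, index, slot):
        node = self.root
        *inner, leaf = self._path(index)
        for key in inner:
            node = node.setdefault(key, {})
        node[leaf] = slot  # leaf maps page index -> SSD cache slot

    def lookup(self, index):
        node = self.root
        *inner, leaf = self._path(index)
        for key in inner:
            node = node.get(key)
            if node is None:
                return None  # miss: no interior node on the path
        return node.get(leaf)


class HotnessAdmission:
    """Admit a page demoted from the first-level cache only if it was
    accessed at least twice there -- a toy stand-in for FlashPage's
    hotness prediction, intended to cut SSD write traffic."""

    def __init__(self):
        self.access_count = {}

    def on_access(self, index):
        self.access_count[index] = self.access_count.get(index, 0) + 1

    def should_admit(self, index):
        return self.access_count.get(index, 0) >= 2
```

A lookup walks at most `LEVELS` dictionary hops, which is the property that keeps the hit path short; the real system additionally avoids block-layer indirection by sitting in the VFS layer.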
