Abstract
I. INTRODUCTION
II. EXPLORING CACHE LEAKAGE REDUCTION TECHNIQUES
III. EVALUATION METHODOLOGY
IV. EXPERIMENT RESULTS
V. RELATED WORK
VI. CONCLUSION
REFERENCES
The following are papers recommended by DBpia based on similarity to this article; related work worth reading alongside it.
Enhancing GPU Performance by Efficient Hardware-Based and Hybrid L1 Data Cache Bypassing. Journal of Computing Science and Engineering, 2017.06.
Workload Characteristics-based L1 Data Cache Switching-off Mechanism for GPUs. 한국컴퓨터정보학회논문지, 2018.10.
Dynamic Probabilistic Caching Algorithm with Content Priorities for Content-Centric Networks. ETRI Journal, 2017.10.
Cascaded Cache Based on Recently Used Order for Latency Optimization for IoT. Journal of Computing Science and Engineering, 2021.09.
Preventing Fast Wear-out of Flash Cache with An Admission Control Policy. JOURNAL OF SEMICONDUCTOR TECHNOLOGY AND SCIENCE, 2015.10.
GPU+HBM 컴퓨팅 플랫폼에서의 Last-Level Cache 유무에 따른 성능 분석. 대한전자공학회 학술대회, 2018.06.
Maximizing GPU Cache Utilization with Adjustable Cache Line Management. 한국정보과학회 학술발표논문집, 2019.06.
차세대 CPU를 위한 캐시 메모리 시스템 설계. 대한임베디드공학회논문지, 2016.12.
D2D 캐싱 시스템에서 마이크로 D2D 캐싱의 비율. 한국통신학회 학술대회논문집, 2022.11.
Study of Cache Performance on GPGPU. IEIE Transactions on Smart Processing & Computing, 2015.04.
Exploiting L2 Cache Sensitivity in Artificial Neural Network on GPUs. 대한전자공학회 학술대회, 2017.01.
Warp-Based Load/Store Reordering to Improve GPU Time Predictability. Journal of Computing Science and Engineering, 2017.06.
선택적 캐시 플러시를 위한 캐시 구현. 대한전자공학회 학술대회, 2015.11.
Exploring Time-Predictable and High-Performance Last-Level Caches for Hard Real-Time Integrated CPU-GPU Processors. Journal of Computing Science and Engineering, 2020.09.
Distributed Learning-Based Proactive Content Caching for Improved Quality-of-Experience (QoE). 한국정보과학회 학술발표논문집, 2021.06.
When Learning-Based Caching at the Edge Meets Data Market. 한국정보과학회 학술발표논문집, 2021.06.
Performance and energy efficiency analysis of Cache Memory Architecture in GPGPU. INTERNATIONAL CONFERENCE ON FUTURE INFORMATION & COMMUNICATION ENGINEERING, 2019.06.
A Locality-Aware Write Filter Cache for Energy Reduction of STTRAM-Based L1 Data Cache. JOURNAL OF SEMICONDUCTOR TECHNOLOGY AND SCIENCE, 2016.02.
Variable latency L1 data cache architecture design in multi-core processor under process variation. 한국컴퓨터정보학회논문지, 2015.09.
Improving CPU and GPU Performance through Sample-Based Dynamic LLC Bypassing. Journal of Computing Science and Engineering, 2018.06.