Committee | Date Time | Place | Paper Title / Authors | Abstract | Paper #
CPSY, DC, RECONF, IPSJ-ARC
2024-08-08 17:50 |
Tokushima |
Awagin Hall (Primary: On-site, Secondary: Online) |
Basic Evaluation of Throughput and Power Efficiency by Offloading Image Recognition Processing to FPGA Eisuke Okazaki, Gai Nagahashi (Tokai Univ.), Yanzhi Li, Midori Sugaya (SIT), Takeshi Ohkawa (Kumamoto Univ.), Mikiko Sato (Tokai Univ.) CPSY2024-26 DC2024-26 RECONF2024-26 |
Multi-access Edge Computing (MEC) is a computing infrastructure that provides low-latency communication with nearby edge...
CPSY2024-26 DC2024-26 RECONF2024-26 pp.52-57 |
CPSY, IPSJ-ARC, IPSJ-HPC |
2023-12-05 10:55 |
Okinawa |
Okinawa Industry Support Center (Primary: On-site, Secondary: Online) |
Performance Improvements of Multi-Platform Parallel Computing System Based on Web Technologies Soki Imaizumi, Kanemitsu Ootsu, Takashi Yokota (Utsunomiya Univ.) CPSY2023-27
Web browsers can be used as architecture-independent execution environments, and nowadays they can provide the same func...
CPSY2023-27 pp.1-6 |
CPSY, IPSJ-ARC, IPSJ-HPC |
2023-12-06 17:15 |
Okinawa |
Okinawa Industry Support Center (Primary: On-site, Secondary: Online) |
An Efficient Sparse Matrix Storage Format for Sparse Matrix-Vector Multiplication and Sparse Matrix-Transpose-Vector Multiplication on GPUs Ryohei Izawa, Yasushi Inoguchi (JAIST) CPSY2023-37 |
The utilization of sparse matrix storage formats is widespread across various fields, including scientific computing, ma...
CPSY2023-37 pp.58-63 |
RECONF |
2020-09-10 13:30 |
Online |
Online |
Highly Effective Communication for Distributed Deep Learning with GPU-FPGA Heterogeneous Computing Kenji Tanaka, Yuki Arikawa, Tsuyoshi Ito, Kazutaka Morita, Naru Nemoto, Fumiaki Miura, Kazuhiko Terada, Junji Teramoto, Takashi Sakamoto (NTT) RECONF2020-19
In distributed deep learning (DL), collective communication (Allreduce) used to share training results between GPUs is a...
RECONF2020-19 pp.1-6 |
IPSJ-ARC, VLD, CPSY, RECONF, IPSJ-SLDM
2018-01-18 14:25 |
Kanagawa |
Raiosha, Hiyoshi Campus, Keio University |
VLD2017-71 CPSY2017-115 RECONF2017-59 |
Interconnection networks are key components for parallel and distributed computing systems, and they often become limiti...
VLD2017-71 CPSY2017-115 RECONF2017-59 pp.53-58 |
CCS |
2016-08-10 12:30 |
Hokkaido |
Yoichi Chuo kominkan |
[Invited Lecture] Topology-based Approach for Network Sensing and its Applications Kazuki Nakada (Hiroshima City Univ.), Keiji Miura (Kwansei Gakuin Univ.) CCS2016-25
In this presentation, we review a topology-based approach for network sensing and its applications. Here we explain rece...
CCS2016-25 pp.47-52 |
NLP |
2016-07-22 10:00 |
Hokkaido |
Hokkaido Univ. Centennial Hall |
An Artificial Bee Colony Algorithm Suited for Parallel Distributed Processing Yu Isono, Tomoyuki Sasaki, Hidehiro Nakano, Arata Miyauchi (Tokyo City Univ.) NLP2016-40 |
As approximate solutions to large-scale problems are obtained by evolutionary computation algorithms, many search indivi...
NLP2016-40 pp.33-38 |
IE, IMQ, MVE, CQ (Joint)
2016-03-08 09:15 |
Okinawa |
|
Dynamically Assigned Distributed Processing System with SDN Switches Masahiko Kitamura, Hiroyuki Kimiyama, Tomoko Sawabe, Tatsuya Fujii (NTT), Kazunari Kojima, Mitsuru Maruyama (KAIT) CQ2015-134 |
Cloud services that enable us to use computing and networking resources on demand drive many workflows into dist...
CQ2015-134 pp.147-151 |
R |
2015-05-22 16:40 |
Shimane |
Okinoshima-Bunka-Kaikan |
Rejuvenation Strategies in Time Warp-Based Distributed Systems Satoshi Fukumoto, Mamoru Ohara (Tokyo Metropolitan Univ.) R2015-9 |
Software rejuvenation is a technique to prevent aging of running software. Recently, benefits of software rejuvenation i...
R2015-9 pp.45-48 |
SC |
2015-03-28 10:55 |
Fukushima |
Aizu Univ. |
Parallel Processing of Large-Scale Graphs Using Spark on GPGPU Yuki Inamoto, Mikio Aoyama (Nanzan Univ.) SC2014-19 |
We propose a high-performance computing method for large-scale graphs using Spark on GPGPU. RDD, multiple sets of ab...
SC2014-19 pp.31-36 |