BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:America/Denver
X-LIC-LOCATION:America/Denver
BEGIN:DAYLIGHT
TZOFFSETFROM:-0700
TZOFFSETTO:-0600
TZNAME:MDT
DTSTART:19700308T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0600
TZOFFSETTO:-0700
TZNAME:MST
DTSTART:19701101T020000
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240116T191658Z
LOCATION:505
DTSTART;TZID=America/Denver:20231112T095600
DTEND;TZID=America/Denver:20231112T100100
UID:submissions.supercomputing.org_SC23_sess421_ws_rsdha109@linklings.com
SUMMARY:NVMe-Backed GNN Training on GPU Leveraging a Paged UVM Memory Syst
 em
DESCRIPTION:Workshop\n\nBenjamin Wagley (Colorado School of Mines), Pak Ma
 rkthub (NVIDIA Corporation), and Bo Wu and Mehmet Belviranli (Colorado Sch
 ool of Mines)\n\nGraph Neural Networks (GNNs) are powerful machine learnin
 g models that learn on graph data by extracting embeddings that represent 
 vertex and edge features, as well as graph topology. With graph data scale
  increasing and high memory pressure generated by GNN feature data, we tur
 n to out-of-core training methods for many real-world graphs. Current sta
 te-of-the-art methods for large-graph GNN training leverage mini-batches,
  distributed or parallel environments, and memory-aware partitioning and s
 ampling. These methods, however, require custom training architectures an
 d pipelines. Here, we propose Kirin, a framework for large-graph out-of-c
 ore training on a single machine with a single GPU on pre-sampled graphs.
  Kirin leverages Dragon-direct, allowing for NVMe-backed tensors for out-o
 f-core training through driver-managed allocations. Building on UVM, Drag
 on-direct utilizes a page-based unified memory system, resulting in memor
 y management that is largely invisible to the user. We showcase Kirin and
  analyze its performance and effectiveness for GNN workloads.\n\nTag: Acce
 lerators, Edge Computing, Heterogeneous Computing\n\nRegistration Categor
 y: Workshop Reg Pass\n\nSession Chairs: Ali Akoglu (University of Arizona
 ), Mehmet E Belviranli (Colorado School of Mines), and Seyong Lee (Oak Ri
 dge National Laboratory (ORNL))
END:VEVENT
END:VCALENDAR
