Is your feature request related to a problem? Please describe.
Recently, RAPIDS/CCCL nightlies began failing due to a change in CCCL's memory resources. See NVIDIA/cccl#5313
Describe the solution you'd like
RMM should support CCCL's new memory resources, which are targeting CCCL 3.2. RAPIDS is currently using CCCL 3.1.
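To make the shape of the change concrete, here is a hedged sketch contrasting the legacy RMM allocation interface with the CCCL `resource_ref` interface. Header paths and aliases are as in current RMM; exact signatures may differ across RMM/CCCL releases, so treat this as illustrative rather than the final API.

```cpp
// Sketch only: legacy RMM interface vs. CCCL resource_ref interface.
// Headers/aliases follow current RMM; details may shift between releases.
#include <rmm/cuda_stream.hpp>
#include <rmm/mr/device/cuda_memory_resource.hpp>
#include <rmm/mr/device/device_memory_resource.hpp>
#include <rmm/resource_ref.hpp>

#include <cuda/stream_ref>

int main()
{
  rmm::mr::cuda_memory_resource mr;
  rmm::cuda_stream stream;

  // Legacy interface: raw base-class pointer, allocate(bytes, stream),
  // virtual dispatch through do_allocate.
  rmm::mr::device_memory_resource* legacy = &mr;
  void* p1 = legacy->allocate(256, stream.view());
  legacy->deallocate(p1, 256, stream.view());

  // CCCL interface: non-owning, type-erased resource_ref; allocate_async
  // takes an explicit alignment and a cuda::stream_ref.
  rmm::device_async_resource_ref ref{mr};
  cuda::stream_ref sref{stream.value()};
  void* p2 = ref.allocate_async(256, 256, sref);
  ref.deallocate_async(p2, 256, 256, sref);
  return 0;
}
```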
Current plan for adoption:
Allocation Interfaces
This list of tasks requires CCCL 3.1+, so we can ship these changes in 25.12.
- `allocate` updates (Support building with CCCL 3.1.0 #2017) (25.10)
- `allocate` signature (RMM internal refactoring): Use CCCL MR interface internally #2112 (25.12) (see the sketch after this list)
- `allocate` signature: Migrate RAPIDS to CCCL MR interface (new allocation APIs) #2126 (25.12)
- `allocate` signature: Add deprecation warnings for legacy MR interface #2128 (25.12)
- `allocate` interfaces: Remove legacy memory resource interface in favor of CCCL interface #2150 (26.02)

We need to verify that all of RAPIDS builds with CCCL 3.1 with these RMM changes in place, and ask the Spark team to test with the same pre-release of CCCL 3.1. The goal is to unblock adoption of CCCL 3.1 for RAPIDS.
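For illustration, below is a rough sketch of a memory resource written directly against the CCCL interface (the direction of #2112/#2150), with no `device_memory_resource` inheritance. It assumes C++20, CUDA 11.2+, and the `cuda::mr` design shipped in recent CCCL; the new resources targeting CCCL 3.2 may change some spellings. `sketch_cuda_resource` is a hypothetical name.

```cpp
// Hedged sketch: a resource implementing the CCCL interface directly.
// Assumes C++20 and the cuda/memory_resource design in recent CCCL.
#include <cuda/memory_resource>
#include <cuda/stream_ref>
#include <cuda_runtime_api.h>

#include <cstddef>

struct sketch_cuda_resource {
  void* allocate(std::size_t bytes, std::size_t /*alignment*/)
  {
    void* p{};
    cudaMalloc(&p, bytes);  // error handling omitted in this sketch
    return p;
  }
  void deallocate(void* p, std::size_t /*bytes*/, std::size_t /*alignment*/) noexcept
  {
    cudaFree(p);
  }
  void* allocate_async(std::size_t bytes, std::size_t /*alignment*/, cuda::stream_ref stream)
  {
    void* p{};
    cudaMallocAsync(&p, bytes, stream.get());  // requires CUDA 11.2+
    return p;
  }
  void deallocate_async(void* p, std::size_t /*bytes*/, std::size_t /*alignment*/, cuda::stream_ref stream)
  {
    cudaFreeAsync(p, stream.get());
  }
  bool operator==(sketch_cuda_resource const&) const = default;  // C++20

  // Advertise device accessibility so resource_ref<device_accessible> accepts it.
  friend void get_property(sketch_cuda_resource const&, cuda::mr::device_accessible) {}
};

// Compile-time check against the CCCL concept (name as of recent CCCL).
static_assert(cuda::mr::async_resource_with<sketch_cuda_resource, cuda::mr::device_accessible>);
```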
Memory Resource Handling
This list of tasks requires CCCL 3.2+, so we will need to work on that migration in 26.02.
- `device_memory_resource*` (#2143)
- `any_resource` in device-resource global mapping: Store any_resource in device-resource global mapping #2200 (26.02)
- `any_resource` in custom containers: Store any_resource in device_buffer and device_uvector #2201 (26.02) (see the sketch after this list)
- `resource_ref`s should instead store `cuda::mr::any_resource`
- polyfill namespace (26.04)
- `cuda::shared_resource` (staging, 26.06)
- `owning_wrapper` (staging, 26.06)
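As a hedged illustration of the ownership change in #2200/#2201 above, the sketch below shows a container member going from a non-owning pointer to an owning, type-erased `cuda::mr::any_resource`. `any_resource` targets CCCL 3.2, so its header and exact spelling may change before release; `sketch_buffer` is a hypothetical stand-in for `device_buffer`.

```cpp
// Hedged sketch: cuda::mr::any_resource is the owning, type-erased handle
// named in the plan above; it targets CCCL 3.2 and may not exist in your
// CCCL yet, or may land under a different spelling.
#include <cuda/memory_resource>

#include <cstddef>

struct sketch_buffer {  // hypothetical container, stand-in for device_buffer
  // Before: non-owning; dangles if the upstream resource is destroyed first.
  // rmm::mr::device_memory_resource* mr{};

  // After: owning, value-semantic handle that keeps the upstream alive.
  cuda::mr::any_resource<cuda::mr::device_accessible> mr;

  void* data{};
  std::size_t size{};
};
```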
Remove `device_memory_resource` and legacy interface
- Migrate `device_memory_resource*` to `resource_ref`/`any_resource`
- Migrate `rmm::cuda_stream` to `cuda::stream_ref` (see the sketch after this list)
- Remove `device_memory_resource` inheritance from all C++ memory resources
- Remove `device_memory_resource`
- Remove `cccl_adaptors.hpp` and use raw CCCL `resource_ref` types
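The `rmm::cuda_stream` to `cuda::stream_ref` migration boils down to passing CCCL's non-owning stream wrapper through APIs. A minimal sketch, with a hypothetical `enqueue_work`:

```cpp
// Hedged sketch of the stream migration named above: APIs take CCCL's
// non-owning stream wrapper, which wraps a cudaStream_t without owning it.
#include <cuda/stream_ref>
#include <cuda_runtime_api.h>

// After the migration, interfaces take cuda::stream_ref directly.
void enqueue_work(cuda::stream_ref stream)
{
  // stream_ref is a thin wrapper over cudaStream_t; no ownership implied.
  cudaStreamSynchronize(stream.get());  // stand-in for real work
}

int main()
{
  cudaStream_t raw{};
  cudaStreamCreate(&raw);  // error handling omitted in this sketch
  enqueue_work(cuda::stream_ref{raw});  // implicit conversion also works
  cudaStreamDestroy(raw);
  return 0;
}
```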
Post-tasks
- Remove `is_resource_adaptor.hpp` and its test usages (no longer meaningful after `shared_resource` adoption)
- Use `cuda_mr` or `cuda_async_mr` in tests rather than `get_current_device_resource_ref` (see comment)
- Remove `device_memory_resource`, `do_allocate`, and virtual dispatch from Doxygen and Python docstrings
- Remove `#include` directives for deleted headers (`device_memory_resource.hpp`, `device_memory_resource_view.hpp`, `cccl_adaptors.hpp`)
- Switch adaptors to be property-agnostic (e.g. support host-accessible pools) and expose `Upstream& upstream_resource()` (see the sketch after this list)
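For that last post-task, a hedged sketch of a property-agnostic adaptor exposing `Upstream& upstream_resource()`. The adaptor name and counting behavior are hypothetical, and the property forwarding assumes CCCL's `cuda::has_property`; RMM's real adaptors will differ.

```cpp
// Hedged sketch: an adaptor that forwards allocation to whatever upstream it
// wraps (device- or host-accessible) and exposes the upstream by reference.
#include <cuda/memory_resource>
#include <cuda/stream_ref>

#include <cstddef>
#include <type_traits>
#include <utility>

template <class Upstream>
class counting_adaptor {  // hypothetical adaptor, for illustration only
 public:
  explicit counting_adaptor(Upstream upstream) : upstream_{std::move(upstream)} {}

  // Post-task: expose the wrapped resource by reference.
  Upstream& upstream_resource() noexcept { return upstream_; }

  void* allocate_async(std::size_t bytes, std::size_t alignment, cuda::stream_ref stream)
  {
    ++count_;
    return upstream_.allocate_async(bytes, alignment, stream);
  }

  void deallocate_async(void* ptr, std::size_t bytes, std::size_t alignment, cuda::stream_ref stream)
  {
    upstream_.deallocate_async(ptr, bytes, alignment, stream);
  }

  bool operator==(counting_adaptor const& other) const { return upstream_ == other.upstream_; }

  // Property-agnostic: declare get_property only for properties the upstream
  // itself advertises (e.g. device_accessible or host_accessible).
  template <class Property, std::enable_if_t<cuda::has_property<Upstream, Property>, int> = 0>
  friend void get_property(counting_adaptor const&, Property) noexcept {}

 private:
  Upstream upstream_;
  std::size_t count_{0};
};
```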
Update RAPIDS libraries
The merge train 🚂🚃🚃 is tracked in #2364.
Tree 1: Structured Data Processing
Tree 2: Vector Search, ML, Graph, Optimization