Managing the GPU memory usage #117

@AlkaSaliss

Description

Hi

I'm trying to do hyperparameter optimization on a GPU machine with tensorflow-gpu installed.
In my Keras models I manage the GPU memory with the following code snippet (without it, TensorFlow grabs all available GPU memory by default):

import tensorflow as tf
import keras

# Start from a clean graph/session state.
keras.backend.clear_session()

# Allocate GPU memory on demand instead of reserving it all up front.
config_gpu = tf.ConfigProto()
config_gpu.gpu_options.allow_growth = True
sess = tf.Session(config=config_gpu)

# Make this session the one Keras uses.
keras.backend.set_session(sess)

However, since I have no idea how gpflowopt uses TensorFlow under the hood, I can't manage its GPU memory usage, and I run out of memory each time I launch an optimization experiment.

Do you have any suggestions about how (or where) I can modify the gpflowopt code to manage GPU memory allocation? The only workaround I could think of is sketched below.
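
For reference, here is a minimal sketch of what I had in mind: create a session with allow_growth enabled and make it the default before building the models. This assumes gpflowopt/GPflow pick up the default TensorFlow session, which I have not verified:

import tensorflow as tf

# Session that grows GPU memory on demand instead of reserving it all.
config_gpu = tf.ConfigProto()
config_gpu.gpu_options.allow_growth = True
sess = tf.Session(config=config_gpu)

# Assumption (not verified): gpflowopt/GPflow use the default TensorFlow
# session when models are built and optimized inside this context.
with sess.as_default():
    # ... build the GPflow model and run the gpflowopt optimizer here ...
    pass

If gpflowopt creates its own sessions internally, this probably has no effect, which is why I'm asking where in the code I should intervene instead.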
