I'm playing around with this module inside a Flask web API: a /train route trains a textgenrnn object on some training data and saves the model weights to a cloud bucket, so that a /generate route can later download them and generate output.
For /train, there is understandably a spike in memory usage while the model trains, but once training completes I want that memory freed to keep the server's footprint as small as possible. However, memory used by previous textgenrnn training runs (~120 MB each) seems to stick around and stack on top of later memory usage.
Here is a profile of the server's memory usage while making the exact same call to /train twice in a row. Notice how the memory usage strictly increases on the second call, instead of resetting after the first call.
I use the model roughly like so:

import gc

import tensorflow as tf
from textgenrnn import textgenrnn

model = textgenrnn()
model.train_on_texts(training_strings)
# do API stuff with the model

# now throw the kitchen sink at it, trying to free up memory
del model
gc.collect()
tf.keras.backend.clear_session()
tf.compat.v1.reset_default_graph()
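For reference, here is roughly how that cleanup sits inside the Flask route. This is only a sketch: the "texts" request field and upload_to_bucket() are placeholders for my actual request handling and cloud-storage client, not real APIs.

import gc

import tensorflow as tf
from flask import Flask, jsonify, request
from textgenrnn import textgenrnn

app = Flask(__name__)
WEIGHTS_PATH = "textgenrnn_weights.hdf5"

def upload_to_bucket(path):
    # Placeholder: push the weights file to the cloud bucket.
    pass

@app.route("/train", methods=["POST"])
def train():
    training_strings = request.get_json()["texts"]  # placeholder request schema

    model = textgenrnn()
    model.train_on_texts(training_strings)
    model.save(WEIGHTS_PATH)        # textgenrnn's weight-saving helper
    upload_to_bucket(WEIGHTS_PATH)

    # Try to release the training memory before returning.
    del model
    gc.collect()
    tf.keras.backend.clear_session()
    tf.compat.v1.reset_default_graph()

    return jsonify({"status": "ok"})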
Am I missing something?
Hello. I don't have any programming knowledge, but if you have access to the script you're running, you could make it run cleanup commands to free your memory once the system is done with the script.
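Along those lines, one workaround often suggested for this kind of leak (not from this thread, since TensorFlow can hold onto memory even after clear_session()) is to run each training job in a short-lived child process, so the OS reclaims everything when the process exits. A minimal sketch, assuming the same textgenrnn usage as above:

import multiprocessing as mp

def _train_worker(training_strings, weights_path):
    # Import inside the worker so TensorFlow is only ever loaded
    # in the child process.
    from textgenrnn import textgenrnn

    model = textgenrnn()
    model.train_on_texts(training_strings)
    model.save(weights_path)

def train_isolated(training_strings, weights_path="textgenrnn_weights.hdf5"):
    # "spawn" gives the child a fresh interpreter with no inherited
    # TensorFlow state from the parent.
    ctx = mp.get_context("spawn")
    proc = ctx.Process(target=_train_worker, args=(training_strings, weights_path))
    proc.start()
    proc.join()  # all of the child's memory is returned to the OS here
    if proc.exitcode != 0:
        raise RuntimeError("training subprocess failed")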