
AttributeError: 'Context' object has no attribute '_rng' #1

Open
ChAoss0910 opened this issue Jan 28, 2022 · 9 comments

Comments

@ChAoss0910

ChAoss0910 commented Jan 28, 2022

Hi, I got an issue when running the training code below.

> wt_rnn(train_data = test_data,test_data = test_data,type="LSTM",catchment = "CM1",model_name = "LSTM")
*** Starting RNN computation for catchment CM1 ***
CM1 train data was sucessfully splitted!
number of missing days: 0 
CM1 validation data was sucessfully splitted!
number of missing days: 6 
CM1 test data was sucessfully splitted!
number of missing days: 6 
Mean and standard deviation used for feature scaling are saved under CM1/RNN/LSTM/LSTM/scaling_values.csv

Random hyperparameter sampling:
RNN type = LSTM, layers = 1, units = 200, dropout = 0.175, batch_size = 115, timesteps = 153, ensemble_runs = 1, Error in py_call_impl(callable, dots$args, dots$keywords) : 
  AttributeError: 'Context' object has no attribute '_rng'

I've double-checked that there's no data-format issue. I'm not sure whether it's a bug in the source code.

I also tested with the original data provided in the repo and got the same error:

> train_data <- feather::read_feather("test_catchment/train_data.feather")
> 
> test_data <- feather::read_feather("test_catchment/test_data.feather")
> 
> wt_fnn(train_data = train_data,test_data = test_data,catchment = "test_catchment",model_name = "fnn")
*** Starting FNN computation for catchment test_catchment ***
Mean and standard deviation used for feature scaling are saved under test_catchment/FNN/fnn/scaling_values.csv

Random hyperparameter sampling:
layers = 3, units = 53, dropout = 0.05, batch_size = 91, ensemble_runs = 1, Error in py_call_impl(callable, dots$args, dots$keywords) : 
  AttributeError: 'Context' object has no attribute '_rng'

Could you check whether the code needs updating for the newest versions of its dependencies? It looks like something is wrong with the random number generator.
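For anyone puzzled by the traceback: this pattern of AttributeError typically means an attribute that is only created lazily (here, apparently once a seed is supplied) was accessed before it existed. A minimal pure-Python illustration, where the `Context` class is a toy stand-in and not TensorFlow's actual implementation:

```python
class Context:
    """Toy stand-in for a context whose RNG is created lazily,
    only when a seed is supplied."""

    def set_seed(self, seed):
        import random
        self._rng = random.Random(seed)  # attribute exists only after seeding

    def sample(self):
        # Accessing self._rng before set_seed() raises
        # AttributeError: 'Context' object has no attribute '_rng'
        return self._rng.random()


ctx = Context()
try:
    ctx.sample()
except AttributeError as e:
    print(e)  # 'Context' object has no attribute '_rng'

ctx.set_seed(42)
print(0.0 <= ctx.sample() < 1.0)  # True once the RNG is initialized
```

This would explain why passing an explicit seed (see the workaround further down in this thread) makes the error disappear: seeding is what forces the RNG to be created.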

@mcvta

mcvta commented Feb 1, 2022

Hi, I'm having the same problem as Hao Chen while testing the original data. I don't know what "_rng" is. Could this be related to the TensorFlow version? I'm using tensorflow 2.7.0.9000.
Thank you

data(test_catchment)
wt_preprocess(test_catchment)
train_data <- feather::read_feather("test_catchment/train_data.feather")
test_data <- feather::read_feather("test_catchment/test_data.feather")


wt_fnn(
  train_data,
  test_data = NULL,
  catchment = NULL,
  model_name = NULL,
  seed = NULL,
  n_iter = 40,
  n_random_initial_points = 20,
  epochs = 100,
  early_stopping_patience = 5,
  ensemble_runs = 5,
  bounds_layers = c(1, 5),
  bounds_units = c(5, 200),
  bounds_dropout = c(0, 0.2),
  bounds_batch_size = c(5, 150),
  initial_grid_from_model_scores = TRUE
)

wt_fnn(train_data, test_data, "test_catchment", "standard_FNN")

OUTPUT

wt_fnn(train_data, test_data, "test_catchment", "standard_FNN")
*** Starting FNN computation for catchment test_catchment ***
Mean and standard deviation used for feature scaling are saved under test_catchment/FNN/standard_FNN/scaling_values.csv

Random hyperparameter sampling:
layers = 3, units = 105, dropout = 0.025, batch_size = 27, ensemble_runs = 1,
Error in py_call_impl(callable, dots$args, dots$keywords) :
AttributeError: 'Context' object has no attribute '_rng'

@mcvta

mcvta commented Feb 1, 2022

I know the error is related to the random number generator (rng).

@ChAoss0910
Author

ChAoss0910 commented Feb 1, 2022

I know the error is related to the random number generator (rng).

Hi, I just found out this can be resolved by adding a random seed to the input parameters, like this:
> wt_fnn(train_data = train_data,test_data = test_data,catchment = "CM1",seed = 42,model_name = "fnn")
The default seed is NULL in the source code.
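A more permanent fix would be for the package to fall back to a generated seed instead of leaving it NULL. A hedged sketch of that defensive pattern in plain Python (the function name `make_rng` is illustrative, not part of wateRtemp or TensorFlow):

```python
import random


def make_rng(seed=None):
    """Return a seeded RNG. If the caller supplies no seed, draw one
    ourselves so the generator is always initialized (the same effect
    as passing seed = 42 explicitly to wt_fnn)."""
    if seed is None:
        seed = random.SystemRandom().randrange(2**31)
    return random.Random(seed)


rng = make_rng()  # works even without an explicit seed
print(make_rng(42).random() == make_rng(42).random())  # True: same seed, same stream
```

The design point: defaulting to a generated seed keeps unseeded runs non-deterministic while guaranteeing the RNG exists before any sampling happens.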

@mcvta

mcvta commented Feb 1, 2022

Hi, can you run this code?

library("wateRtemp")
library(tensorflow)
data(test_catchment)


wt_preprocess(test_catchment)
train_data <- feather::read_feather("test_catchment/train_data.feather")
test_data <- feather::read_feather("test_catchment/test_data.feather")


wt_fnn(
  train_data,
  test_data = NULL,
  catchment = NULL,
  model_name = NULL,
  seed = NULL,
  n_iter = 40,
  n_random_initial_points = 20,
  epochs = 100,
  early_stopping_patience = 5,
  ensemble_runs = 5,
  bounds_layers = c(1, 5),
  bounds_units = c(5, 200),
  bounds_dropout = c(0, 0.2),
  bounds_batch_size = c(5, 150),
  initial_grid_from_model_scores = TRUE
)

wt_fnn(train_data,test_data,catchment = "test_catchment",seed = 42,model_name = "fnn")

@mcvta

mcvta commented Feb 1, 2022

Hi,
It's running... great!

@mcvta

mcvta commented Feb 1, 2022

Now I'm getting another error:

Error in py_call_impl(callable, dots$args, dots$keywords) :
TypeError: Exception encountered when calling layer "alpha_dropout_174" (type AlphaDropout).

'>' not supported between instances of 'dict' and 'float'

Call arguments received:
• inputs=tf.Tensor(shape=(None, 42), dtype=float32)
• training=None
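This TypeError suggests the dropout rate reached the AlphaDropout layer as a dict rather than a float, so an internal comparison of the rate against a number fails. The comparison itself is trivially reproducible in plain Python without TensorFlow (the sanity check shown is an assumption about what the layer does internally):

```python
rate = {"dropout": 0.05}  # a dict accidentally passed where a float rate belongs
try:
    if rate > 0.0:  # roughly what a dropout layer's rate check does
        pass
except TypeError as e:
    print(e)  # '>' not supported between instances of 'dict' and 'float'
```

If that is the cause, the bug would be upstream of Keras: somewhere in the hyperparameter plumbing, the whole parameter dict (or a one-entry dict) is being forwarded instead of the scalar dropout value.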

@ChAoss0910
Author

Now I'm getting another error:

Error in py_call_impl(callable, dots$args, dots$keywords) :
TypeError: Exception encountered when calling layer "alpha_dropout_174" (type AlphaDropout).

'>' not supported between instances of 'dict' and 'float'

Call arguments received:
• inputs=tf.Tensor(shape=(None, 42), dtype=float32)
• training=None

Got the same error... looking into it now

@mcvta

mcvta commented Feb 1, 2022

Could there be missing values in the dataset?

@mcvta

mcvta commented Feb 2, 2022

These are the parameters for the best model:
layers = 3
units = 200
max_epoc = 100
early_stopping_patience = 5
batch_size = 60
dropout = 2.22044604925031E-16
ensemble = 1

Could this problem be related to the very small dropout value of 2.22044604925031E-16?
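Worth noting: 2.22044604925031E-16 is (to printed precision) double-precision machine epsilon, i.e. 2^-52, `.Machine$double.eps` in R. That suggests the optimizer drove dropout to its lower bound of 0 and the tiny offset is just rounding, not a meaningful rate, so this value itself is unlikely to be the cause of the dict-vs-float TypeError. A quick check in Python:

```python
import sys

eps = sys.float_info.epsilon  # smallest x such that 1.0 + x != 1.0
print(eps)             # 2.220446049250313e-16
print(eps == 2.0**-52)  # True
# So dropout = 2.22044604925031E-16 is effectively "0 plus rounding error",
# i.e. the optimizer hit the lower end of bounds_dropout = c(0, 0.2).
```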
