Lasagne issue while running on GPU #4
Comments
I am also getting this error and I am using
The error originates from
I am trying to learn more about this problem. After some hacking, the code can run now:
I am not sure if the modification is right; I will wait to see the result!
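For reference, the modification the error message itself points at would look roughly like this (a sketch, not the commenter's confirmed patch, assuming the failing call is the `lasagne.updates.adam` line in `dmn_tied.py`):

```python
import lasagne

# In dmn_tied.py, instead of passing self.params directly:
#     updates = lasagne.updates.adam(self.loss, self.params)
# first flatten any parameter *expressions* down to the underlying
# Theano shared variables, as the ValueError suggests:
params = lasagne.utils.collect_shared_vars(self.params)
updates = lasagne.updates.adam(self.loss, params)
```

`get_or_compute_grads` refuses anything in `params` that is not a Theano shared variable, so if `self.params` contains derived expressions (e.g. a tied or masked parameter), `collect_shared_vars` recovers the shared variables behind them.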
After following your instructions and installing the prerequisites for running DMN+, I get the following error:
```
(keras-dmn)user1@dpl04:~/keras/Improved-Dynamic-Memory-Networks-DMN-plus$ python main.py --network dmn_tied --mode train --babi_id 1
Using gpu device 2: GeForce GTX TITAN X (CNMeM is enabled with initial size: 98.0% of memory, CuDNN not available)
==> parsing input arguments
==> Loading test from /home/IAIS/user1/keras/Improved-Dynamic-Memory-Networks-DMN-plus/data/tasks_1-20_v1-2/en-10k/qa1_single-supporting-fact_train.txt
==> Loading test from /home/IAIS/user1/keras/Improved-Dynamic-Memory-Networks-DMN-plus/data/tasks_1-20_v1-2/en-10k/qa1_single-supporting-fact_test.txt
==> not using minibatch training in this mode
==> not used params in DMN class: ['shuffle', 'network', 'babi_id', 'batch_size', 'epochs', 'prefix', 'load_state', 'log_every', 'babi_test_id', 'save_every']
==> building input module
==> creating parameters for memory module
==> building episodic memory module (fixed number of steps: 3)
==> building answer module
==> collecting all parameters
==> building loss layer and computing updates
Traceback (most recent call last):
  File "main.py", line 194, in <module>
    args, network_name, dmn = dmn_mid(args)
  File "main.py", line 84, in dmn_mid
    dmn = dmn_tied.DMN_tied(**args_dict)
  File "/home/IAIS/user1/keras/Improved-Dynamic-Memory-Networks-DMN-plus/dmn_tied.py", line 225, in __init__
    updates = lasagne.updates.adam(self.loss, self.params)
  File "/home/IAIS/user1/anaconda2/envs/keras-dmn/lib/python2.7/site-packages/lasagne/updates.py", line 583, in adam
    all_grads = get_or_compute_grads(loss_or_grads, params)
  File "/home/IAIS/user1/anaconda2/envs/keras-dmn/lib/python2.7/site-packages/lasagne/updates.py", line 114, in get_or_compute_grads
    raise ValueError("params must contain shared variables only. If it "
ValueError: params must contain shared variables only. If it contains arbitrary parameter expressions, then lasagne.utils.collect_shared_vars() may help you.
```
I used your theanorc file, adjusting only the CUDA root. Thanks!