
Question about training the G network #7

Open

Rheelt opened this issue Jul 5, 2018 · 3 comments

Comments

Rheelt commented Jul 5, 2018

The relevant excerpt from G_pretrain.m is as follows:
```matlab
% Evaluate each of the nine 3x3 masks and keep the one that yields
% the lowest summed target probability.
prob_k = zeros(9,1);
for k = 1:9
    row = floor((k-1)/3) + 1;
    col = mod(k-1,3) + 1;

    for i = 1:nBatches
        batch = pos_data(:,:,:,opts.batchSize*(i-1)+1:min(end,opts.batchSize*i));
        batch(col,row,:,:) = 0;   % zero out one cell of the 3x3 grid
        if (opts.useGpu)
            batch = gpuArray(batch);
        end
        res = vl_simplenn(net_fc, batch, [], [], ...
            'disableDropout', true, ...
            'conserveMemory', true, ...
            'sync', true);

        f = gather(res(end).x);
        if ~exist('feat','var')
            feat = zeros(size(f,1), size(f,2), size(f,3), n, 'single');
        end
        feat(:,:,:,opts.batchSize*(i-1)+1:min(end,opts.batchSize*i)) = f;
    end

    % Softmax over the channel dimension
    X = feat;
    E = exp(bsxfun(@minus, X, max(X,[],3)));
    L = sum(E,3);
    Y = bsxfun(@rdivide, E, L);
    prob_k(k) = sum(Y(1,1,1,:));   % summed probability of the target class
end
[~,idx] = min(prob_k);
```

Here, the `idx` selected by `[~,idx] = min(prob_k)` corresponds to the mask that produces the smallest D-network loss, not the largest. This selection differs from the selection method described in the paper.
@22wei22
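To make the selection step concrete, here is a rough NumPy re-expression of the loop above. The shapes, the `forward` stand-in for `vl_simplenn(net_fc, ...)`, and the single-batch simplification are assumptions for illustration; only the 3x3 mask sweep, the channel-wise softmax, and the `argmin` mirror the MATLAB excerpt:

```python
import numpy as np

def softmax_channel(x):
    # Softmax over the channel axis (axis 2), matching the
    # bsxfun/exp/sum sequence in the MATLAB code.
    e = np.exp(x - x.max(axis=2, keepdims=True))
    return e / e.sum(axis=2, keepdims=True)

def select_mask(pos_data, forward):
    # pos_data: (H, W, C, N) input features; forward: stand-in for
    # the vl_simplenn forward pass, returning (1, 1, num_classes, N).
    prob_k = np.zeros(9)
    for k in range(9):
        row, col = k // 3, k % 3
        batch = pos_data.copy()
        # Zero one cell of the 3x3 grid (note the col/row index
        # order, as in the MATLAB code).
        batch[col, row, :, :] = 0
        scores = forward(batch)
        probs = softmax_channel(scores)
        # Summed probability of the target class across samples.
        prob_k[k] = probs[0, 0, 0, :].sum()
    # Mask with the lowest summed target probability.
    return int(np.argmin(prob_k))
```

With a toy `forward` whose target-class score grows with the input sum, the selected mask is the one covering the most informative cell.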

Rheelt (Author) commented Jul 5, 2018

@yl1991

ybsong00 (Owner) commented Nov 3, 2018

We apply a softmax after the network output, so `prob_k` is the predicted target probability rather than the loss.
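Assuming the standard softmax cross-entropy loss L = −log(p), probability and loss are inversely monotone, so taking the minimum of `prob_k` selects the same mask as taking the maximum of the loss. A minimal check, with made-up stand-in values for `prob_k`:

```python
import numpy as np

# Hypothetical per-mask target probabilities (stand-ins for prob_k).
prob_k = np.array([0.9, 0.4, 0.7, 0.2, 0.8])

# Softmax cross-entropy loss per mask: L = -log(p).
loss_k = -np.log(prob_k)

# argmin over probabilities picks the same mask as argmax over losses.
assert np.argmin(prob_k) == np.argmax(loss_k)
```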

vincennnnt commented

@Rheelt Hi, where in the code is the cost-sensitive loss invoked? Isn't the mask weight supposed to be updated according to this loss? I could not find where it is called in the code.


3 participants