Implementing AssembleNet: Searching for Multi-Stream Neural Connectivity in Video Architectures, explained using PyTorch

leaderj1001/AssembleNet

Reference

This is not the official repository.

  • Paper Link
  • Authors: Michael S. Ryoo, AJ Piergiovanni, Mingxing Tan, Anelia Angelova
  • Organization: Robotics at Google, Google Research

Usage

  • Make graph

from make_graph import Graph
import pprint

p = pprint.PrettyPrinter(width=160, indent=4)
g = Graph()
p.pprint(g.graph)  # print the sampled connectivity graph
  • Make Network

import torch
import pprint
from make_graph import Graph
# Model is the network class defined in this repository.

g = Graph()
m = Model(g.graph)
pprint.pprint(m.graph, width=160)

x = torch.randn([2, 3, 16, 256, 256])  # (batch, channels, frames, height, width)
print(m(x).size())
  • Network Evolution

import torch
import pprint
from make_graph import Graph
# Model is the network class defined in this repository.

g = Graph()
m = Model(g.graph)
pprint.pprint(m.graph, width=160)
m._evolution()  # mutate the connectivity graph in place
pprint.pprint(m.graph, width=160)

x = torch.randn([2, 3, 16, 256, 256])
print(m(x).size())
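To build intuition for what `Graph()` samples and what the pretty-printed output represents, here is a minimal standalone sketch of a random multi-stream connectivity graph as an adjacency dict. This is a hypothetical illustration of the idea only, not the repo's actual `Graph` format; `make_random_graph` is an invented helper.

```python
import pprint
import random

def make_random_graph(num_nodes=8, seed=0):
    """Sample a random DAG: every node except the input connects to
    one or more earlier nodes, like streams in a multi-stream video net."""
    rng = random.Random(seed)
    graph = {}
    for node in range(num_nodes):
        if node == 0:
            graph[node] = []  # input node: no parents
        else:
            k = rng.randint(1, min(3, node))           # 1-3 inbound edges
            graph[node] = sorted(rng.sample(range(node), k))
    return graph

g = make_random_graph()
pprint.pprint(g, width=160)
```

Because every node only draws parents from earlier node indices, the sampled graph is guaranteed to be acyclic.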

Evolution

[Screenshot, 2020-09-28: example of the graph before and after evolution]

Work in Progress

  • Connection-Learning-Guided Mutation
  • Evolution
  • Training
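For orientation while these items are in progress: the paper's connection-learning-guided mutation preferentially rewires connections whose learned weights are small. Below is a rough standalone sketch of that idea; the function name `mutate_connections` and the edge-weight format are invented for illustration and are not this repo's API.

```python
import random

def mutate_connections(graph, edge_weights, n_mutations=2, seed=0):
    """Connection-learning-guided mutation (toy version): pick edges to
    rewire with probability inversely related to their learned weight,
    then add a random valid replacement edge from an earlier node."""
    rng = random.Random(seed)
    graph = {n: list(ps) for n, ps in graph.items()}  # work on a copy
    edges = [(n, p) for n, ps in graph.items() for p in ps]
    # Removal probability favors weak (low-weight) connections.
    scores = [1.0 - edge_weights.get(e, 0.5) for e in edges]
    for _ in range(n_mutations):
        node, parent = rng.choices(edges, weights=scores, k=1)[0]
        if parent in graph[node] and len(graph[node]) > 1:
            graph[node].remove(parent)        # keep every node connected
        # Add a new parent among earlier nodes, preserving the DAG.
        candidates = [p for p in range(node) if p not in graph[node]]
        if candidates:
            graph[node].append(rng.choice(candidates))
    return graph

graph = {0: [], 1: [0], 2: [0, 1], 3: [1, 2]}
weights = {(2, 0): 0.9, (2, 1): 0.1, (3, 1): 0.8, (3, 2): 0.2}
print(mutate_connections(graph, weights))
```

In the paper, the edge weights come from the learned connection weights of a trained candidate network, so mutation spends its budget on connections the network found least useful.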
