Create Machine Learning Tutorials with Flux.jl #107

Open
logankilpatrick opened this issue Oct 5, 2021 · 9 comments

@logankilpatrick (Member) commented Oct 5, 2021

Hello, prospective Hacktoberfest contributor! The FluxML community would welcome new tutorials for the Flux website; the existing tutorials can generally be found at https://fluxml.ai/tutorials.html.

You can find the source code for the tutorials here: https://github.com/FluxML/fluxml.github.io/tree/main/tutorials/_posts. They are just Markdown files.

What we are looking for

We would be open to pull requests that provide a tutorial on a topic not already covered by the existing tutorials. But there is no need to re-invent the wheel here: if you have a favorite tutorial that you want to try to re-create using Flux, we would love to help and to see it!

Find out more about contributing here: https://github.com/FluxML/fluxml.github.io/blob/main/CONTRIBUTING.md, and about more general ways of contributing (which may not be open Hacktoberfest issues, but we can happily turn them into issues if that helps you) here: https://github.com/FluxML/Flux.jl/blob/master/CONTRIBUTING.md.

Another good starting place would be the Model Zoo, https://github.com/FluxML/model-zoo, where we have a bunch of existing models but usually without tutorials built around them.

@Dsantra92 (Contributor) commented Oct 5, 2021

I would love to see Flux versions of some of my favourite TensorFlow and PyTorch tutorials. I propose to write Flux versions of these tutorials:

I would love to hear your feedback on these tutorials.

P.S.: These are the tutorials I can think of off the top of my head. If these PRs go well, I would love to add more tutorials in the future.

@logankilpatrick (Member, Author)

This would be incredible! Let us know how we can help.

@Fernando23296

I propose to write a Flux version of this tutorial:

What do you think?

@logankilpatrick (Member, Author)

@Fernando23296 yes! Let's do it.

@logankilpatrick (Member, Author)

@Fernando23296 there is still time if you want to try to get a tutorial created during Hacktoberfest!

@kailukowiak

You never realize how much you rely on tutorials to build novel models until you try to build one without any similar examples 🙈

@logankilpatrick (Member, Author)

@kailukowiak I highly encourage you to use other popular tutorials (in TF, Keras, PyTorch, etc.) if you are trying to make a Flux version. This should help a ton!
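
For a rough flavour of what such a port looks like (an illustrative sketch only, not taken from any existing tutorial; layer sizes are placeholders, and the `=>` syntax assumes a recent Flux release), a small PyTorch-style MLP maps to Flux roughly like this:

using Flux

# Rough Flux counterpart of a small PyTorch classifier such as
# nn.Sequential(nn.Flatten(), nn.Linear(784, 32), nn.ReLU(), nn.Linear(32, 10)).
model = Chain(
    Flux.flatten,              # ≈ nn.Flatten()
    Dense(784 => 32, relu),    # ≈ nn.Linear(784, 32) followed by ReLU
    Dense(32 => 10),           # ≈ nn.Linear(32, 10); returns logits
)

# ≈ nn.CrossEntropyLoss applied to logits
loss(x, y) = Flux.logitcrossentropy(model(x), y)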

@kailukowiak commented Mar 27, 2022

@logankilpatrick I've been working on converting this tutorial to Julia. I've got a working version for the CPU, but my RNN uses a loop with integer indexes.

function rnn(input, hidden, model)
    combined = [input; hidden] |> model.device   # concatenate input with the hidden state
    out = model.in2out(combined)                 # output logits for this timestep
    hidden = model.in2hidden(combined)           # next hidden state
    return out, hidden
end

function predict(X, model::Model)
    hidden = zeros(model.hidden_size) |> model.device   # zero initial hidden state
    local ŷ
    for i = 1:size(X, 2)   # one column of X per timestep
        ŷ, hidden = rnn(X[:, i], hidden, model) #  |> model.device
    end
    return ŷ   # prediction from the final timestep
end

This throws an error when I try to take the gradient, and it straight-up fails if I set CUDA.allowscalar(false). I could use the built-in Flux functions, but I wanted to stay closer to the method in the tutorial, as I think it gives good intuition into what's actually going on. Do you have any idea how I could get around the issues with integer slicing a GPU array?
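
For reference, an untested variant of predict that iterates over columns with eachcol instead of indexing X[:, i], so the loop never does integer indexing on the (possibly GPU-resident) array. It reuses the Model struct and rnn function above; the name predict_eachcol is only illustrative, and whether it actually resolves the gradient error is unverified:

# Untested sketch: iterate timesteps via eachcol rather than X[:, i].
# Reuses rnn(input, hidden, model) and the Model struct from above;
# Float32 zeros match Flux's default parameter eltype.
function predict_eachcol(X, model::Model)
    hidden = zeros(Float32, model.hidden_size) |> model.device
    local ŷ
    for x in eachcol(X)          # one column = one timestep, no integer index
        ŷ, hidden = rnn(x, hidden, model)
    end
    return ŷ                     # output at the final timestep
end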

@ToucheSir (Member)

Hi @kailukowiak, this kind of question is better suited to Discourse (we try to keep the issue tracker to bugs and feature requests). If you wouldn't mind opening a thread there (there's a GitHub login option) and letting us know, we can pick up there.
