Build GPU Variants of Current Images #1557
@sdwalker62 First of all, thank you for your suggestion! I see growing interest in GPU-related docker stacks, because people ask about them more and more. But right now it's not easy to add a new image, and in this case it's a whole set of new images.
Adding new images would also add time to our build workflow.
I think you can actually get rid of this patching, because we have a way to pass custom build arguments. Please tell me if this helps. Update 2024-01-17: fixed the command to use a more recent image.
This works because the base image can be overridden with a build argument.
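For illustration, here is a hedged sketch of such a build-argument override, assuming the foundation image still exposes its base image as the ROOT_CONTAINER build argument (the path and CUDA tag below are only examples):

```bash
# Sketch: build the docker-stacks foundation image on top of a CUDA runtime
# image by overriding its base-image build argument. Adjust the path and the
# CUDA tag to match your checkout and driver/toolkit versions.
docker build \
  --build-arg ROOT_CONTAINER=nvidia/cuda:11.8.0-cudnn8-runtime-ubuntu22.04 \
  -t my-docker-stacks-foundation:cuda \
  ./images/docker-stacks-foundation
```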
@sdwalker62 If you're still interested in this issue - we've changed our build system completely, so it should probably be easier now to add CUDA-based images.
It would be awesome if such CUDA-based images were available.
I think it's actually possible right now to have CUDA-based images without increasing build time at all.
@sdwalker62 You may be interested in my/b-data's CUDA-enabled JupyterLab Python docker stack: https://github.com/b-data/jupyterlab-python-docker-stack
P.S.: I have whitelisted your GitHub account at https://demo.cuda.jupyter.b-data.ch so you may quickly test online.
I updated some of the advice in my comments above, so it works with the current set of images. If someone comes across this issue and feels they need GPU images, please upvote this issue - this will make it clearer to me and the other maintainers that the community is really interested in such images.
New CUDA-enabled pytorch-notebook images are now available.
Hi, I am interested in having a docker image for Jupyter + PyTorch + GPU, because docker is quite convenient and deep learning really needs a GPU. So I wonder: are there any shortcomings when using these? I would love to hear from you experts!
@sdwalker62 @KopfKrieg @joglekara @TylerSpears @romainrossi @ChristofKaufmann @fzyzcjy @mfreeman451 @markusschmaus @benz0li These images are still based on top of regular Ubuntu images, but install GPU versions of PyTorch. I would like to thank @johanna-reiml-hpi for implementing general support for building CUDA-enabled images.
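As a quick, hedged sanity check that the GPU build of PyTorch inside such an image can actually see a GPU (the image tag below is illustrative; check the registry for the current CUDA tags):

```bash
# Runs a one-liner inside the container; prints True plus the CUDA version
# if the GPU is visible to PyTorch.
docker run --rm --gpus all quay.io/jupyter/pytorch-notebook:cuda12-latest \
  python -c "import torch; print(torch.cuda.is_available(), torch.version.cuda)"
```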
@mathbunnyru do you think it would be a useful idea to make a blog post on the Jupyter blog announcing these?
Everything branded NVIDIA and/or CUDA comes with proprietary licenses. Whether or not you are aware of it, you agree to their End User License Agreement (EULA).
Thanks for the image!
My two cents: I have not tried it yet, but IIRC some pip packages require a proper CUDA environment to compile C++ code that is generated on the fly to maximize performance (maybe https://github.com/microsoft/DeepSpeed or some other library, I do not remember very clearly).
I am wondering whether it is possible to mark Jupyter's corresponding images with that license, while the other images keep the original license.
Thanks for the suggestion. I’ll do it!
Cross reference: Uploading of container developed on top of nvidia/cuda images (#224) · Issues · nvidia / container-images / cuda · GitLab
Sent for review.
@benz0li @yuvipanda could you please tell me what I need to do to make GPU support work? I don't use the GPU versions of the images, so I don't have any hands-on experience; sorry about that.
NVIDIA GPU + NVIDIA Linux driver + NVIDIA Container Toolkit

ℹ️ The host running the GPU-accelerated images only requires the NVIDIA driver; the CUDA toolkit does not have to be installed.

Prerequisites: see https://github.com/b-data/jupyterlab-python-docker-stack/blob/83e8c3b830db83b8c73457487605f44be9e4e487/CUDA.md#prerequisites
Yes.
The above information is for Linux hosts. If you are working on a Windows host, GPU support goes through Docker Desktop's WSL 2 backend together with the NVIDIA driver for Windows.
ℹ️ Current Apple hardware: No NVIDIA GPU = No CUDA support.
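A quick, hedged way to verify the host-side prerequisites described above (the CUDA base-image tag is only an example):

```bash
# 1. Driver check on the host: should list the installed GPU(s).
nvidia-smi

# 2. NVIDIA Container Toolkit check: run nvidia-smi inside a stock CUDA container.
docker run --rm --gpus all nvidia/cuda:12.3.1-base-ubuntu22.04 nvidia-smi
```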
According to the NVIDIA docs: when the environment variable NVIDIA_VISIBLE_DEVICES is set in the image, the listed GPUs should be made accessible inside the container.
@ChristofKaufmann This only works if either
the NVIDIA container runtime is configured as Docker's default runtime,
or
the container is explicitly started with the NVIDIA runtime (e.g. --runtime=nvidia).
ℹ️ Specifying --gpus when starting the container makes the GPUs available through the NVIDIA Container Toolkit, regardless of the environment variables baked into the image.
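To make the distinction concrete, a hedged sketch of the two invocation styles discussed above (my-cuda-notebook is a placeholder image name):

```bash
# Rely on NVIDIA_VISIBLE_DEVICES baked into the image: this only takes effect
# when the NVIDIA runtime is used, either as Docker's default runtime or
# explicitly as below.
docker run --rm --runtime=nvidia my-cuda-notebook nvidia-smi

# Request GPUs explicitly: works with Docker 19.03+ and the NVIDIA Container
# Toolkit, regardless of the environment variables set in the image.
docker run --rm --gpus all my-cuda-notebook nvidia-smi
```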
Thanks for the clarification!
Thanks, @benz0li 👍
Hello everyone! First, I just want to say that at work we almost exclusively use Jupyter as our data science platform, and we appreciate everything that the community and the Jupyter team have done to create such a wonderful tool!
I have searched through the current list of issues and pull requests open in this repository and have not found any that match this request, so I am starting a new one. Apologies if I have missed any. Our work necessitates the use of CUDA-enabled containers, as we primarily train deep learning models. We have made a slight modification to the makefile that allows us to re-use most of the infrastructure already in place to build the various levels of notebooks.
I am aware of the fantastic work being done at https://github.com/iot-salzburg/gpu-jupyter/ and suggest that anyone who needs GPU-enabled containers check it out as a first stop. That solution will work for many, and it contains some additional features which are nice to have, but it is slightly different from what we propose.
The current docker-stacks starts with the base-notebook, which is built from an Ubuntu 20.04 layer. Our proposal is to build a separate set of notebooks based on the nvidia/cuda:11.3.0-cudnn8-runtime-ubuntu20.04 image (we chose this for compatibility with PyTorch). This is similar to what the iot-salzburg repository does in their images, but it maintains the minimalism found in the base notebooks for those who are looking for a solid foundation for their own custom images. With the change to the makefiles it would be easy to build two sets of images: one for the standard notebooks as they exist now, and another for the CUDA-enabled variants.
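For illustration, a hypothetical sketch of the proposed dual build (BASE_IMAGE and the image tags here are made up for the example; they are not actual Makefile targets or build arguments):

```bash
# Standard variant, as built today.
docker build -t base-notebook ./base-notebook

# CUDA-enabled variant: same Dockerfile, different base layer.
docker build \
  --build-arg BASE_IMAGE=nvidia/cuda:11.3.0-cudnn8-runtime-ubuntu20.04 \
  -t base-notebook:cuda \
  ./base-notebook
```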
To address the selection criteria found in the documentation for new features:
I would be happy to answer any questions about this proposal!