Installation Guide
Installation guide for the Tensorleap free 14-day trial.
Installing the Tensorleap platform is quick and easy. Simply run the installation command and follow the prompts. The entire process should take less than 10 minutes.
The installation script will:
  • Install K3D, a lightweight wrapper that runs k3s (Lightweight Kubernetes) inside Docker.
  • Deploy the Tensorleap platform in the K3D cluster.
The installed platform does not have access to any local data, unless you explicitly provide it (see Mount Local Path). All the provided data is stored within the cluster and is not accessible from the outside.
The platform can be uninstalled at any time by running the provided uninstall command. This will remove all of the docker containers and images from your machine.

Minimum Requirements

  • Supported OS: macOS, Linux
  • Available Storage: 15GB
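Both requirements can be checked before running the installer. The following pre-flight sketch is an illustration (the 15GB threshold comes from the list above, and /var is checked because project data is stored under /var/lib/tensorleap/standalone, as noted in the Uninstalling section):

```shell
#!/usr/bin/env sh
# Pre-flight sketch: check the OS and free disk space against the
# minimum requirements above before running the installer.
OS="$(uname -s)"
case "$OS" in
  Linux|Darwin) OS_OK=yes ;;
  *)            OS_OK=no ;;
esac
echo "OS: $OS (supported: $OS_OK)"

# Free space in GB on the filesystem holding /var
# (project data is stored under /var/lib/tensorleap/standalone).
FREE_GB=$(df -Pk /var | awk 'NR == 2 { printf "%d", $4 / 1024 / 1024 }')
if [ "$FREE_GB" -ge 15 ]; then
  echo "Disk: ${FREE_GB}GB free (OK)"
else
  echo "Disk: ${FREE_GB}GB free (need at least 15GB)"
fi
```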

Installing

Installing Tensorleap Standalone

Install the Tensorleap standalone version on your local machine by executing the following command:
Linux/macOS
bash <(curl -s https://helm.tensorleap.ai/install.sh)
Linux with GPU
Follow these steps to install a standalone version of Tensorleap with GPU support:
  1.
    Install the NVIDIA device drivers: Run the following command to install your NVIDIA drivers:
    sudo apt update
    sudo apt install nvidia-driver-515
    To check on the current configuration, you can run the following command:
    nvidia-smi
    It should present the name of the GPU. In case it fails, try rebooting the machine.
  2.
    Install the nvidia-docker2 package:

    Docker Prerequisite:

    First, make sure docker is installed on your machine by running the following command:
    sudo docker ps
    If docker is NOT installed, get it at https://docs.docker.com/get-docker/ or install it using the following command:
    curl https://get.docker.com | sh \
      && sudo systemctl --now enable docker \
      && sudo usermod -aG docker $USER
    Once docker is set, enter the following commands to install nvidia-docker2:
    distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
    && curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
    && curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | \
    sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
    sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
    sudo apt update
    sudo apt install nvidia-docker2
    sudo systemctl restart docker
    This should allow docker to recognize the GPU, and you can validate that it does by running the following command:
    sudo docker run --rm --gpus all nvidia/cuda:11.0.3-base-ubuntu18.04 nvidia-smi
    This command should present the name of the GPU when detected. In case of failure, try rebooting the machine.
  3.
    Install Tensorleap with GPU support by running the following command:
    USE_GPU=true bash <(curl -s https://helm.tensorleap.ai/install.sh)
After running the command, you will be given the option to experience Tensorleap with your local data, see below:
Enter a path to be mounted and accessible by scripts (default: /Users/me/tensorleap/data):
Enter the local path where your dataset is located or press Enter to use the default path.
If a local path was set, you will be given the option to set the location at which it can be accessed within the platform. You can press Enter to use the same path.
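Behind the scenes, these two answers become a K3D volume mapping (K3D's `--volume HOST:CONTAINER` flag). The sketch below is a hypothetical illustration of that translation — the paths are placeholders and the `k3d cluster create` line is only echoed, not executed; the installer's actual flags may differ:

```shell
# Hypothetical illustration of how the two mount prompts map to a K3D volume.
# HOST_PATH and CLUSTER_PATH stand in for the two answers you give.
HOST_PATH="/Users/me/tensorleap/data"
CLUSTER_PATH="$HOST_PATH"   # pressing Enter reuses the same path

# K3D exposes host directories with --volume HOST:CONTAINER; the installer
# passes something equivalent when it (re)creates the cluster.
VOLUME_FLAG="--volume ${HOST_PATH}:${CLUSTER_PATH}"
echo "k3d cluster create tensorleap ${VOLUME_FLAG}"   # echoed, not run
```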
The installation process will guide you through a few configuration steps to initiate and download all images needed to run within the local cluster. Depending on your internet connection, this process may take up to 10 minutes.
If you wish to update the mounting path, simply reinstall using the following commands:
k3d cluster delete tensorleap
bash <(curl -s https://helm.tensorleap.ai/install.sh)
or for the GPU version:
USE_GPU=true bash <(curl -s https://helm.tensorleap.ai/install.sh)

In the Event of Timeout

If a timeout occurs (e.g. due to a poor internet connection), the cluster will automatically resume downloading the images.

Monitoring

To monitor the current state of the Tensorleap installation cluster, you can use the following command:
docker exec k3d-tensorleap-server-0 kubectl get pods --namespace=tensorleap
In case you installed the GPU version, you can verify that the docker node detects your GPU by using the following command:
docker exec -it k3d-tensorleap-server-0 kubectl describe nodes
Where nvidia/gpu: 1 should be listed under Resources.
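Rather than re-running the command by hand, you can poll until no pod is still starting. The counting logic is sketched below against captured sample output (the pod names are made up); in practice you would pipe the `docker exec … kubectl get pods` output from above into `count_pending` inside a loop:

```shell
# Counts pods whose STATUS column is not yet Running (or Completed),
# given `kubectl get pods` output on stdin.
count_pending() {
  # Skip the header row; the STATUS column is field 3.
  awk 'NR > 1 && $3 != "Running" && $3 != "Completed" { n++ } END { print n + 0 }'
}

# Demonstrated against captured sample output (pod names are invented):
SAMPLE='NAME                     READY   STATUS    RESTARTS   AGE
engine-5d9c7f6b4-abcde   1/1     Running   0          5m
web-ui-7f8b9c5d6-fghij   0/1     Pending   0          5m'
printf '%s\n' "$SAMPLE" | count_pending   # prints 1
```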

Getting Started

Once installation of the standalone version is complete, access the platform using the following URL and follow the instructions in the wizard:
Welcome Page

Your First Project

The Standalone version comes with a few relatively simple example projects you can explore.
🚀 You can quickly Get started with our MNIST Walkthrough or IMDB Walkthrough, where we train a model, present metrics, and run analyses.
Moreover, you can also integrate your own custom model and data by following the Integration Script and the Full Guides.

Pausing / Resuming Docker Containers

You may pause Tensorleap's docker containers with the following command:
docker pause k3d-tensorleap-tools k3d-tensorleap-serverlb k3d-tensorleap-server-0
You can resume the docker containers with the following command:
docker unpause k3d-tensorleap-tools k3d-tensorleap-serverlb k3d-tensorleap-server-0

Uninstalling

To remove the installed cluster from your local machine, run this command:
k3d cluster delete tensorleap
To permanently remove all stored project data, run this command:
rm -rf /var/lib/tensorleap/standalone
To remove the downloaded images, run this command:
docker system prune -a --volumes
Note that this operation removes all unused docker images and volumes on your machine, not only Tensorleap's!
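If you want to avoid deleting unrelated images, a narrower (hypothetical) alternative is to remove only images whose repository name mentions tensorleap. The filter is demonstrated against sample `docker images` output — the image names below are invented for illustration:

```shell
# Select only images whose repository name mentions tensorleap,
# formatted as REPOSITORY:TAG for `docker rmi`.
list_tensorleap_images() {
  grep -i 'tensorleap' | awk '{ print $1 ":" $2 }'
}

# Demonstrated against captured `docker images` output; in practice:
#   docker images | list_tensorleap_images | xargs docker rmi
SAMPLE='REPOSITORY          TAG     IMAGE ID       CREATED       SIZE
tensorleap/engine   1.0.0   aaaa1111bbbb   2 weeks ago   2.1GB
ubuntu              22.04   cccc2222dddd   3 weeks ago   77MB'
printf '%s\n' "$SAMPLE" | list_tensorleap_images   # prints tensorleap/engine:1.0.0
```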