Installation
This page describes how to install the Tensorleap on-prem server.
The Tensorleap setup is usually composed of two parts:
Server - where all of the workloads are computed
Client - used by users to upload models and code to the server
Where should you install the server and client?
The Tensorleap server should be installed on a machine that:
is accessible to all Tensorleap users.
has access to the datasets you would like to use with Tensorleap.
can expose port 4589 to any client station that needs to access Tensorleap. When the server and client are the same station (i.e., a local installation), no port-forwarding is needed.
runs Ubuntu or macOS (installing the server on a Windows machine is currently not supported).
Tensorleap clients should be installed on each local station that a user works on to develop models and write the code that trains and evaluates them.
The client station can run Ubuntu, macOS, or Windows.
Installing the Tensorleap Server
Tensorleap Requirements
For installing Tensorleap, we recommend a server with the following specifications:

|         | Minimum | Recommended |
| ------- | ------- | ----------- |
| CPU     | 4       | 16          |
| RAM     | 16 GB   | 32 GB       |
| Storage | 200 GB  | 1 TB        |
| GPU     | -       | 1           |
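As a quick way to compare a candidate machine against the requirements above, the sketch below prints the host's CPU, RAM, and free-disk figures (a Linux-only sketch; on macOS, `sysctl` and `df` would be used instead):

```shell
# Print this machine's specs for comparison against the requirements table.
echo "CPU cores : $(nproc)"
echo "RAM (GB)  : $(free -g | awk '/^Mem:/{print $2}')"
echo "Free disk : $(df -h / | awk 'NR==2{print $4}')"
```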
Drivers and packages requirements
We expect NVIDIA drivers and nvidia-docker2 to be installed on the server prior to the Tensorleap installation. It is usually best to keep your current setup if it already works for model training and inference. If you are missing some basic GPU-related dependencies, you can follow this guide for a complete setup of the requirements on an Ubuntu-based server.
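As a sanity check before installing, the sketch below reports whether the NVIDIA driver and Docker are visible on the server; the container-level GPU test is left commented out so it is opt-in (the CUDA image tag shown there is only an example):

```shell
# Check the GPU prerequisites described above; prints OK/MISSING per dependency.
# This does not install anything.
for cmd in nvidia-smi docker; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd: OK"
  else
    echo "$cmd: MISSING"
  fi
done
# If both are OK, you can additionally confirm that containers see the GPU:
#   docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```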
Tensorleap CLI installation
In order to install the Tensorleap CLI, run the following command in your shell:
curl -s https://raw.githubusercontent.com/tensorleap/leap-cli/master/install.sh | bash
To verify that the installation succeeded, confirm that you get the CLI help menu when running leap -h
from your terminal.
Tensorleap Server
Once you have the CLI, the Tensorleap server can be installed by running:
leap server install
The installation process will interactively prompt you to select the following:
Which GPUs the system should access
What the dataset volumes should be (i.e., the locations that hold all of the datasets Tensorleap should have access to)
It will then pull the required assets and install the Tensorleap server on the machine.
Once the installation is done, you should see the following message:
Tensorleap installed on local k3d cluster
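Since the message above refers to a local k3d cluster, one optional follow-up check is to list the clusters k3d knows about. The sketch below assumes nothing beyond a shell; if the k3d CLI is not present on the machine, it prints a note instead of failing:

```shell
# Optional post-install check: is the local k3d cluster visible?
if command -v k3d >/dev/null 2>&1; then
  k3d cluster list   # the Tensorleap cluster should appear here
else
  echo "k3d CLI not found; skipping cluster check"
fi
```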
Tensorleap client
For each user, we recommend installing the CLI (without installing the server) on the local machine where they browse the web and write code. This CLI serves as a client that allows the user to upload models and codebases to the platform.
Accessing the Tensorleap Platform
To access the Tensorleap platform from your local machine, follow these steps:
Local Installation:
If installed locally, access the platform directly via:
http://localhost:4589/
Server Access:
Port-Forwarding:
Set up port-forwarding between your local machine and the server to expose port 4589. Access via:
http://localhost:4589/
DNS Setup:
Assign the server a DNS name or a fixed IP address, enabling direct access:
http://SERVER-IP:4589/
This method bypasses the need for port-forwarding.