Installation

This page describes how to install the Tensorleap on-prem server.

This page covers an on-prem or local server installation. If you would instead like to set up a new EC2 server dedicated to Tensorleap, you can check this guide.

The Tensorleap setup is usually composed of two parts:

  • Server - where all of the workloads are computed

  • Client - used by users to upload models and code to the server

Where should you install the server and client?

  • The Tensorleap server should be installed on a machine that:

    • is accessible to all Tensorleap users.

    • has access to the datasets you would like to use with Tensorleap.

    • can expose port 4589 to any client station that needs to access Tensorleap. In cases where the server and client are the same station (i.e. a local installation), no port-forwarding is needed.

    • runs Ubuntu or macOS (installing the server on a Windows machine is currently not supported).

  • Tensorleap clients should be installed on each local station used by a user to develop models and write the code that trains and checks them.

    • The station can run Ubuntu, macOS, or Windows.

Installing the Tensorleap Server

Tensorleap Requirements

To install Tensorleap, we recommend a server that meets the following requirements:

| Resource  | Minimal | Recommended |
| --------- | ------- | ----------- |
| CPU cores | 4       | 16          |
| RAM       | 16 GB   | 32 GB       |
| Storage   | 200 GB  | 1 TB        |
| GPU       | -       | 1           |

For an installation that should support multiple concurrent user workloads, we recommend increasing resources further.

Drivers and packages requirements

We expect the NVIDIA drivers and nvidia-docker2 to be installed on the server prior to the Tensorleap installation. It is usually best to keep your current setup if it already works for model training and inference. If you are missing some basic GPU-related dependencies, you can follow this guide for a complete setup of the requirements on an Ubuntu-based server.
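As a rough pre-install sanity check, you can verify both prerequisites from a shell. This is only a sketch: the CUDA image tag is an example, and the checks are wrapped in guards so the script is safe to run on machines where the tools are missing.

```shell
# Pre-install GPU checks (illustrative; adapt to your environment).

driver_ok=no
if command -v nvidia-smi >/dev/null 2>&1; then
  # Driver check: nvidia-smi should list your GPUs without errors.
  nvidia-smi && driver_ok=yes
fi

runtime_ok=no
if command -v docker >/dev/null 2>&1; then
  # nvidia-docker2 check: a CUDA container should also see the GPUs.
  # (Image tag is an example; any CUDA base image you have will do.)
  docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi \
    && runtime_ok=yes
fi

echo "driver: ${driver_ok}, container runtime: ${runtime_ok}"
```

If either check reports `no`, fix the drivers or container runtime before installing Tensorleap.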

Tensorleap CLI installation

To install the Tensorleap CLI, run the following command in your shell:

curl -s https://raw.githubusercontent.com/tensorleap/leap-cli/master/install.sh | bash

To verify that the installation succeeded, check that you get the CLI help menu when running leap -h from your terminal.

Tensorleap Server

This installation requires internet connectivity. In case your station is air-gapped, please contact the Tensorleap team for support.

Once you have the CLI, the Tensorleap server can be installed by running:

leap server install

The installation process will interactively prompt you to select the following:

  • Which GPUs should the system access

  • Which dataset volumes to mount (i.e. the locations that hold all of the datasets Tensorleap should have access to)

It will then pull the required assets and install the Tensorleap server on the machine.

Once the installation is done, you should see the following message:

Tensorleap installed on local k3d cluster
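Since the success message mentions a local k3d cluster, you can optionally run a couple of post-install sanity checks. The commands below are assumptions based on that message (standard k3d and kubectl invocations), not documented Tensorleap steps, and are guarded so they are skipped where the tools are absent.

```shell
# Optional post-install sanity checks (assumes the installer created a
# local k3d cluster, as the success message suggests).

if command -v k3d >/dev/null 2>&1; then
  k3d cluster list     # the Tensorleap cluster should appear here
fi

if command -v kubectl >/dev/null 2>&1; then
  kubectl get pods -A  # pods should eventually reach the Running state
fi

sanity_done=yes
```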

Tensorleap client

For each user, we recommend installing the CLI (without installing the server) on the local machine on which they browse the web and write code. This CLI serves as a client that allows the user to upload models and codebases to the platform.

In cases where the server is installed in a central location, port 4589 should be forwarded to the local station in order to communicate with the server.
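One common way to forward the port is an SSH tunnel. The sketch below only prints the command rather than running it, so it is safe to copy anywhere; the hostname and user are placeholders for your environment.

```shell
# Hypothetical values - replace with your server's hostname and SSH user.
SERVER_HOST="tensorleap-server.internal"
SSH_USER="ubuntu"
TENSORLEAP_PORT=4589

# Keep the resulting ssh session open while using the platform; the UI
# then becomes reachable at http://localhost:4589/ on your station.
# (Printed here so the sketch is safe to run; paste it into your shell.)
echo "ssh -N -L ${TENSORLEAP_PORT}:localhost:${TENSORLEAP_PORT} ${SSH_USER}@${SERVER_HOST}"
```

`-N` tells ssh to forward the port without opening a remote shell; `-L` binds local port 4589 to port 4589 on the server.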

Accessing the Tensorleap Platform

To access the Tensorleap platform from your local station:

  • If you would like to access the platform from the server station (i.e. a local installation), you can just access the app via http://localhost:4589/

  • Otherwise, we recommend setting up a port-forward between your local station and the server that exposes port 4589 from the server. You should then be able to access http://localhost:4589/ from your local computer.

  • You can also create a DNS entry that maps the server to an IP address reachable from your local station. You can then skip the port-forwarding and access the platform directly via http://SERVER-IP:4589/.
