This guide will instruct you on how to set up a worker node on the AI Network.
[CAUTION] AI Network Worker on the AIN Blockchain is in beta.
Through AI Network Worker, you can provide your machine's computing power to decentralized applications on the AI Network Blockchain.
How To Run AIN Worker
Requirements
Docker
Ubuntu 18.04 or above
Minimum storage: 50 GB
If you want to provide GPU computing power, you also need:
GPU
Nvidia-docker
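Before moving on, you can quickly verify the basic requirements. This is a minimal sketch that assumes a standard Ubuntu setup; in the last command, check whichever disk will hold the worker's data.
# Docker should be installed and reachable.
$ docker --version
# The Ubuntu release should be 18.04 or above.
$ lsb_release -ds
# At least 50 GB of free space should be available.
$ df -h /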
1. (Optional) Check Graphics Driver
Before running a GPU-supported worker, you should check that the GPU requirements are met. If you want to run a non-GPU worker, please skip this part. First, check that the graphics driver is installed correctly by entering the following command:
$ nvidia-smi
The results will be printed in the following form, and you can check the CUDA version supported by your driver.
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.80.02    Driver Version: 450.80.02    CUDA Version: 11.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla K80           Off  | 00002DE1:00:00.0 Off |                    0 |
| N/A   44C    P0    69W / 149W |      0MiB / 11441MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes:                                                                   |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
If the driver is not installed, or the supported CUDA version is lower than 10.1, refer to the "Installing the graphics driver" section at the end of this guide.
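If you prefer to check this from a script, here is a minimal sketch. It assumes only that nvidia-smi is on your PATH once the driver is installed; the 10.1 threshold comes from the requirement above.
# Prints the driver version, or fails if the driver is not installed.
$ nvidia-smi --query-gpu=driver_version --format=csv,noheader
# Shows the CUDA version reported in the nvidia-smi header; it should be 10.1 or higher.
$ nvidia-smi | grep "CUDA Version"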
2. (Optional) Check Nvidia Docker
If you want to run a non-GPU worker, please skip this part. The next step is to check whether Docker and Nvidia docker are installed; Nvidia docker is what allows Docker containers to use the GPU. Please enter the following command:
$ sudo docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi
After you run the above command, you should see something similar to this:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.80.02    Driver Version: 450.80.02    CUDA Version: 11.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla K80           Off  | 00002DE1:00:00.0 Off |                    0 |
| N/A   44C    P0    69W / 149W |      0MiB / 11441MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes:                                                                   |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
If you're having trouble with the installation, refer to the "Installing Nvidia Docker" section at the end of this guide.
Installing the graphics driver
List the available drivers and find the version tagged 'recommended'. In this example, nvidia-driver-455 is the version tagged 'recommended'.
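The driver list itself is not reproduced in this guide; on Ubuntu it can usually be printed with the command below (a sketch, the exact tooling may differ on your system). The recommended version is marked 'recommended' in its output.
$ ubuntu-drivers devices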
Now you can install the appropriate graphics driver with the following command.
# Change `455` to the number recommended for your system.
$ sudo apt install nvidia-driver-455
After installation is complete, reboot the system.
$ sudo reboot
Use the nvidia-smi command to confirm that the driver installation was successful.
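For example, run it again and check that the driver and CUDA versions appear in the header of the output.
$ nvidia-smi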
Installing Nvidia Docker
Install Nvidia docker by following NVIDIA's installation guide for your distribution. Once the installation is complete, re-run the check from step 2 (the docker run command above) and make sure you see an output similar to the example shown there.
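As a reference sketch only, the commonly documented steps for installing nvidia-docker2 on Ubuntu around the CUDA 11.0 era looked like the following. The repository URLs and package name here are assumptions; prefer NVIDIA's current documentation if they have changed.
# Register NVIDIA's package repository for your Ubuntu release.
$ distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
$ curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
$ curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
# Install the runtime and restart Docker so it picks up the new runtime.
$ sudo apt-get update && sudo apt-get install -y nvidia-docker2
$ sudo systemctl restart docker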