GPU computing offers additional power for translation. This article describes how to configure an additional TRM on your Systran Server computing node using the GPU, in order to exploit both the CPU and GPU of your server. In that case, one TRM will be dedicated to GPU computing and another TRM to CPU computing.
Nvidia installation
The Nvidia setup consists of installing the Nvidia drivers to address the embedded GPU board and the Nvidia CUDA Toolkit to execute specific GPU instructions.
Based on the GPU model plugged into the server, go to the Nvidia Drivers Download webpage, then download the required rpm package using the correct filters:
1. Product Type (e.g. Tesla)
2. Product Series (e.g. P Series)
3. Product (e.g. P100)
4. Operating System (e.g. Linux 64-bit RHEL 6 or 7)
5. CUDA Toolkit = 8.0 (Version 8.0 is MANDATORY, not higher or lower)
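If you are unsure which board is installed, the PCI bus can be queried first to determine the Product Type, Series and Product filters (a minimal sketch; the grep pattern is an assumption, adjust as needed):

```shell
# List Nvidia devices on the PCI bus to determine the driver download filters.
# Prints a fallback message on machines without lspci or without an Nvidia board.
lspci 2>/dev/null | grep -i nvidia || echo "no Nvidia device found (or lspci unavailable)"
```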
Nvidia CUDA Toolkit
Go to the Nvidia CUDA Toolkit Download webpage, then download the required rpm according to your architecture:
1. Select CUDA Toolkit version 8.0 GA2
2. Select Linux
3. Select x86_64
4. Select RHEL
5. Select 6 or 7
6. Select rpm (local)
7. Download the Base Installer and any available Patches
On the system, install both rpm packages using the following command:
rpm -ivh <nvidia_driver>.rpm <nvidia_cuda_toolkit_base_installer>.rpm
Then reboot the system.
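After the reboot, the installation can be sanity-checked. The snippet below is a guarded sketch: nvidia-smi -L should list each GPU with its index, and nvcc --version should report release 8.0; the PATH hint assumes the toolkit's default /usr/local/cuda-8.0 prefix.

```shell
# Post-reboot sanity checks, guarded so they degrade gracefully on non-GPU hosts.
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi -L                       # one line per GPU; the index is the gpu_id
else
  echo "nvidia-smi not found: driver installation incomplete"
fi
if command -v nvcc >/dev/null 2>&1; then
  nvcc --version                      # should report release 8.0
else
  echo "nvcc not found: try adding /usr/local/cuda-8.0/bin to PATH"
fi
```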
Configuration of the new TRM using the GPUs
Before continuing, we assume that the package "systran-translation-resource-monitor" is already installed, with the service configured and running on the CPU of your Systran Server.
If not, please consult the article below:
TRM Data duplication
In order to run two TRMs on the same machine, the TRM configuration files must be duplicated as described below. The "gpu_id" will be 0 if there is only one GPU on the server.
Note: if you have more than one GPU in your server, repeat the steps below for each GPU: the second "gpu_id" will be 1, the third will be 2, and so on.
Duplicate the TRM SYSTRAN folder:
cp -Rp /opt/systran/translation-resource-monitor/workspace /opt/systran/translation-resource-monitor/workspace-gpu<gpu_id>
Duplicate the TRM Configuration file:
cp -p /opt/systran/translation-resource-monitor/etc/trm.cfg /opt/systran/translation-resource-monitor/etc/trm-gpu<gpu_id>.cfg
Edit the file /opt/systran/translation-resource-monitor/etc/trm-gpu<gpu_id>.cfg and update the following entries:
port = 895<gpu_id>
workspace-directory = /opt/systran/translation-resource-monitor/workspace-gpu<gpu_id>
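For example, with a single GPU (gpu_id 0), the duplicated configuration would read as follows, following the port = 895<gpu_id> pattern above (all other entries left unchanged):

```
port = 8950
workspace-directory = /opt/systran/translation-resource-monitor/workspace-gpu0
```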
Duplicate the Systemd configuration file:
cp -p /usr/lib/systemd/system/systran-translation-resource-monitor.service /usr/lib/systemd/system/systran-translation-resource-monitor-gpu<gpu_id>.service
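For servers with several GPUs, the duplication steps above can be scripted. This is a minimal sketch that uses a scratch directory (TRM_HOME) so it can be dry-run anywhere; on the real server, set TRM_HOME=/opt/systran/translation-resource-monitor.

```shell
# Sketch: duplicate the workspace and config once per GPU.
# TRM_HOME points at a scratch copy here for a safe dry run;
# on the real server, set TRM_HOME=/opt/systran/translation-resource-monitor.
TRM_HOME="$(mktemp -d)"
mkdir -p "$TRM_HOME/workspace" "$TRM_HOME/etc"
printf 'port = 0000\n' > "$TRM_HOME/etc/trm.cfg"   # stand-in for the real trm.cfg

NB_GPUS=2   # hypothetical GPU count, e.g. from: nvidia-smi -L | wc -l
for gpu_id in $(seq 0 $((NB_GPUS - 1))); do
  cp -Rp "$TRM_HOME/workspace"   "$TRM_HOME/workspace-gpu$gpu_id"
  cp -p  "$TRM_HOME/etc/trm.cfg" "$TRM_HOME/etc/trm-gpu$gpu_id.cfg"
done
ls "$TRM_HOME/etc"   # trm.cfg plus one trm-gpu<N>.cfg per GPU
```

Remember to then edit each trm-gpu<gpu_id>.cfg (port and workspace-directory) as described above.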
Then edit the following entry in "/usr/lib/systemd/system/systran-translation-resource-monitor-gpu<gpu_id>.service" as follows:
ExecStart=/opt/systran/translation-resource-monitor/bin/TranslationResourceMonitor --config /opt/systran/translation-resource-monitor/etc/trm-gpu<gpu_id>.cfg
Enable the new TRM service (use systemctl on RHEL 7, chkconfig on RHEL 6):
systemctl enable systran-translation-resource-monitor-gpu<gpu_id>.service
chkconfig systran-translation-resource-monitor-gpu<gpu_id> on
Then start the new TRM service (use systemctl on RHEL 7, service on RHEL 6):
systemctl start systran-translation-resource-monitor-gpu<gpu_id>.service
service systran-translation-resource-monitor-gpu<gpu_id> start
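Once started, the new service and its port can be checked. The snippet below shows gpu_id 0 and port 8950; both checks are guarded so they print a message instead of failing on hosts without systemd or ss.

```shell
# Check that the new TRM unit is active and listening on its port (gpu_id 0 shown).
svc=systran-translation-resource-monitor-gpu0.service
if command -v systemctl >/dev/null 2>&1; then
  systemctl is-active "$svc" || echo "$svc is not active yet"
else
  echo "systemctl unavailable (RHEL 6: use 'service ... status')"
fi
if command -v ss >/dev/null 2>&1; then
  ss -ltn | grep ':8950' || echo "nothing listening on port 8950 yet"
fi
```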
Computing node registration
The last step consists of registering this computing node in your Systran Server.
- In the Systran console, go to Menu > Advanced Configuration > Services, click + Register new service, select Computing Node and type localhost:895<gpu_id> (e.g. localhost:8950 for the first GPU, matching the port configured above). You should then see the new Computing Node running: