
Installing/emulating SLURM on an Ubuntu 16.04 desktop: slurmd fails to start

Edit

What I am really looking for is an interactive, reasonably user-friendly way to emulate SLURM that I can install locally.


Original post

I want to test-drive a few minimal examples with SLURM, so I am trying to install everything on my local machine running Ubuntu 16.04. I am following the most recent slurm installation guide I could find, and I got as far as "start slurmd with sudo /etc/init.d/slurmd start".

[....] Starting slurmd (via systemctl): slurmd.serviceJob for slurmd.service failed because the control process exited with error code. See "systemctl status slurmd.service" and "journalctl -xe" for details.
 failed!

I do not know how to interpret the systemctl log:

● slurmd.service - Slurm node daemon
   Loaded: loaded (/lib/systemd/system/slurmd.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Thu 2017-10-26 22:49:27 EDT; 12s ago
  Process: 5951 ExecStart=/usr/sbin/slurmd $SLURMD_OPTIONS (code=exited, status=1/FAILURE)

Oct 26 22:49:27 Haggunenon systemd[1]: Starting Slurm node daemon...
Oct 26 22:49:27 Haggunenon systemd[1]: slurmd.service: Control process exited, code=exited status=1
Oct 26 22:49:27 Haggunenon systemd[1]: Failed to start Slurm node daemon.
Oct 26 22:49:27 Haggunenon systemd[1]: slurmd.service: Unit entered failed state.
Oct 26 22:49:27 Haggunenon systemd[1]: slurmd.service: Failed with result 'exit-code'.
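
The status output above only says that the control process exited; the underlying reason usually ends up in the unit's journal or in the daemon's own log. A minimal sketch of how to dig further (paths assume the Debian/Ubuntu slurm-llnl layout used in this post):

# Full journal for the failed unit, not just the summary shown by systemctl status
journalctl -u slurmd.service --no-pager

# Run the daemon in the foreground with verbose output
sudo slurmd -Dvvv

# Daemon log file, as configured by SlurmdLogFile in slurm.conf
sudo tail -n 50 /var/log/slurm-llnl/slurmd.log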

Here is what lsb_release -a says. (Yes, I know: strictly speaking, KDE Neon is not exactly Ubuntu.)

No LSB modules are available.
Distributor ID: neon
Description:    KDE neon User Edition 5.11
Release:        16.04
Codename:       xenial

Unlike what the guide says, I used my own user name, wlandau, and I made sure to chown /var/lib/slurm-llnl and /var/run/slurm-llnl. Here is my /etc/slurm-llnl/slurm.conf:

# slurm.conf file generated by configurator.html.
# Put this file on all nodes of your cluster.
# See the slurm.conf man page for more information.
#
ControlMachine=linux0
#ControlAddr=
#BackupController=
#BackupAddr=
#
AuthType=auth/munge
CacheGroups=0
#CheckpointType=checkpoint/none
CryptoType=crypto/munge
#DisableRootJobs=NO
#EnforcePartLimits=NO
#Epilog=
#EpilogSlurmctld=
#FirstJobId=1
#MaxJobId=999999
#GresTypes=
#GroupUpdateForce=0
#GroupUpdateTime=600
#JobCheckpointDir=/var/lib/slurm-llnl/checkpoint
#JobCredentialPrivateKey=
#JobCredentialPublicCertificate=
#JobFileAppend=0
#JobRequeue=1
#JobSubmitPlugins=1
#KillOnBadExit=0
#LaunchType=launch/slurm
#Licenses=foo*4,bar
#MailProg=/usr/bin/mail
#MaxJobCount=5000
#MaxStepCount=40000
#MaxTasksPerNode=128
MpiDefault=none
#MpiParams=ports=#-#
#PluginDir=
#PlugStackConfig=
#PrivateData=jobs
ProctrackType=proctrack/pgid
#Prolog=
#PrologFlags=
#PrologSlurmctld=
#PropagatePrioProcess=0
#PropagateResourceLimits=
#PropagateResourceLimitsExcept=
#RebootProgram=
ReturnToService=1
#SallocDefaultCommand=
SlurmctldPidFile=/var/run/slurm-llnl/slurmctld.pid
SlurmctldPort=6817
SlurmdPidFile=/var/run/slurm-llnl/slurmd.pid
SlurmdPort=6818
SlurmdSpoolDir=/var/lib/slurm-llnl/slurmd
SlurmUser=wlandau
#SlurmdUser=root
#SrunEpilog=
#SrunProlog=
StateSaveLocation=/var/lib/slurm-llnl/slurmctld
SwitchType=switch/none
#TaskEpilog=
TaskPlugin=task/none
#TaskPluginParam=
#TaskProlog=
#TopologyPlugin=topology/tree
#TmpFS=/tmp
#TrackWCKey=no
#TreeWidth=
#UnkillableStepProgram=
#UsePAM=0
#
#
# TIMERS
#BatchStartTimeout=10
#CompleteWait=0
#EpilogMsgTime=2000
#GetEnvTimeout=2
#HealthCheckInterval=0
#HealthCheckProgram=
InactiveLimit=0
KillWait=30
#MessageTimeout=10
#ResvOverRun=0
MinJobAge=300
#OverTimeLimit=0
SlurmctldTimeout=120
SlurmdTimeout=300
#UnkillableStepTimeout=60
#VSizeFactor=0
Waittime=0
#
#
# SCHEDULING
#DefMemPerCPU=0
FastSchedule=1
#MaxMemPerCPU=0
#SchedulerRootFilter=1
#SchedulerTimeSlice=30
SchedulerType=sched/backfill
SchedulerPort=7321
SelectType=select/linear
#SelectTypeParameters=
#
#
# JOB PRIORITY
#PriorityFlags=
#PriorityType=priority/basic
#PriorityDecayHalfLife=
#PriorityCalcPeriod=
#PriorityFavorSmall=
#PriorityMaxAge=
#PriorityUsageResetPeriod=
#PriorityWeightAge=
#PriorityWeightFairshare=
#PriorityWeightJobSize=
#PriorityWeightPartition=
#PriorityWeightQOS=
#
#
# LOGGING AND ACCOUNTING
#AccountingStorageEnforce=0
#AccountingStorageHost=
#AccountingStorageLoc=
#AccountingStoragePass=
#AccountingStoragePort=
AccountingStorageType=accounting_storage/none
#AccountingStorageUser=
AccountingStoreJobComment=YES
ClusterName=cluster
#DebugFlags=
#JobCompHost=
#JobCompLoc=
#JobCompPass=
#JobCompPort=
JobCompType=jobcomp/none
#JobCompUser=
#JobContainerPlugin=job_container/none
JobAcctGatherFrequency=30
JobAcctGatherType=jobacct_gather/none
SlurmctldDebug=3
SlurmctldLogFile=/var/log/slurm-llnl/slurmctld.log
SlurmdDebug=3
SlurmdLogFile=/var/log/slurm-llnl/slurmd.log
#SlurmSchedLogFile=
#SlurmSchedLogLevel=
#
#
# POWER SAVE SUPPORT FOR IDLE NODES (optional)
#SuspendProgram=
#ResumeProgram=
#SuspendTimeout=
#ResumeTimeout=
#ResumeRate=
#SuspendExcNodes=
#SuspendExcParts=
#SuspendRate=
#SuspendTime=
#
#
# COMPUTE NODES
NodeName=linux[1-32] CPUs=1 State=UNKNOWN
PartitionName=debug Nodes=linux[1-32] Default=YES MaxTime=INFINITE State=UP

Follow-up

After rewriting my slurm.conf with help from @damienfrancois, slurmd now starts. Unfortunately, sinfo hangs when I call it, and I see the same error message as before.

$ sudo /etc/init.d/slurmctld stop
[ ok ] Stopping slurmctld (via systemctl): slurmctld.service.
$ sudo /etc/init.d/slurmctld start
[ ok ] Starting slurmctld (via systemctl): slurmctld.service.
$ sinfo
slurm_load_partitions: Unable to contact slurm controller (connect failure)
$ slurmd -Dvvv
slurmd: fatal: Frontend not configured correctly in slurm.conf.  See man slurm.conf look for frontendname.

Next, I tried to restart the daemons, but I could not get slurmd going again:

$ sudo /etc/init.d/slurmd start
[....] Starting slurmd (via systemctl): slurmd.serviceJob for slurmd.service failed because the control process exited with error code. See "systemctl status slurmd.service" and "journalctl -xe" for details.
 failed!
Answer (damienfrancois)

The value of ControlMachine must match the output of hostname -s on the machine on which slurmctld starts. The same goes for NodeName: it must match the output of hostname -s on the machine on which slurmd starts. Since you only have one machine, and it appears to be named Haggunenon, the relevant lines of slurm.conf should be:

ControlMachine=Haggunenon
[...]
NodeName=Haggunenon CPUs=1 State=UNKNOWN
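
A quick sanity check (a minimal sketch; slurmd -C prints the hardware line the node would report, if your slurmd build supports it):

# Short hostname that ControlMachine and NodeName must match
hostname -s

# What this node actually reports (NodeName, CPUs, memory, ...)
sudo slurmd -C

# Compare with the active lines in slurm.conf
grep -E '^(ControlMachine|NodeName)' /etc/slurm-llnl/slurm.conf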

If you want to start multiple slurmd daemons to emulate a larger cluster, you need to launch slurmd with the -N option (but this only works if Slurm was compiled with the --enable-multiple-slurmd configure option).
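
A minimal sketch of what that could look like, assuming Slurm was built with --enable-multiple-slurmd; the node names and ports below are purely illustrative:

# In slurm.conf: one entry per emulated node, all pointing at the real host,
# each with its own port
NodeName=node1 NodeHostname=Haggunenon Port=17001 CPUs=1 State=UNKNOWN
NodeName=node2 NodeHostname=Haggunenon Port=17002 CPUs=1 State=UNKNOWN
PartitionName=debug Nodes=node[1-2] Default=YES MaxTime=INFINITE State=UP

# Then start one slurmd per emulated node, selecting its name with -N
sudo slurmd -N node1
sudo slurmd -N node2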


Update. Here is a walkthrough. I set up a VM with Vagrant and VirtualBox (vagrant init ubuntu/xenial64 ; vagrant up) and, after vagrant ssh, ran the following commands:

ubuntu@ubuntu-xenial:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 16.04.3 LTS
Release:    16.04
Codename:   xenial
ubuntu@ubuntu-xenial:~$ sudo apt-get update
Hit:1 http://archive.ubuntu.com/ubuntu xenial InRelease
Get:2 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
[...]
Get:35 http://archive.ubuntu.com/ubuntu xenial-backports/universe Translation-en [3,060 B]
Fetched 23.6 MB in 4s (4,783 kB/s)
Reading package lists... Done
ubuntu@ubuntu-xenial:~$ sudo apt-get install munge libmunge2
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  libmunge2 munge
0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
Need to get 102 kB of archives.
After this operation, 351 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu xenial-updates/universe amd64 libmunge2 amd64 0.5.11-3ubuntu0.1 [18.4 kB]
Get:2 http://archive.ubuntu.com/ubuntu xenial-updates/universe amd64 munge amd64 0.5.11-3ubuntu0.1 [83.9 kB]
Fetched 102 kB in 0s (290 kB/s)
Selecting previously unselected package libmunge2.
(Reading database ... 57914 files and directories currently installed.)
Preparing to unpack .../libmunge2_0.5.11-3ubuntu0.1_amd64.deb ...
Unpacking libmunge2 (0.5.11-3ubuntu0.1) ...
Selecting previously unselected package munge.
Preparing to unpack .../munge_0.5.11-3ubuntu0.1_amd64.deb ...
Unpacking munge (0.5.11-3ubuntu0.1) ...
Processing triggers for libc-bin (2.23-0ubuntu9) ...
Processing triggers for man-db (2.7.5-1) ...
Processing triggers for systemd (229-4ubuntu21) ...
Processing triggers for ureadahead (0.100.0-19) ...
Setting up libmunge2 (0.5.11-3ubuntu0.1) ...
Setting up munge (0.5.11-3ubuntu0.1) ...
Generating a pseudo-random key using /dev/urandom completed.
Please refer to /usr/share/doc/munge/README.Debian for instructions to generate more secure key.
Processing triggers for libc-bin (2.23-0ubuntu9) ...
Processing triggers for systemd (229-4ubuntu21) ...
Processing triggers for ureadahead (0.100.0-19) ...
ubuntu@ubuntu-xenial:~$ sudo apt-get install slurm-wlm slurm-wlm-basic-plugins
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  fontconfig fontconfig-config fonts-dejavu-core freeipmi-common libcairo2 libdatrie1 libdbi1 libfontconfig1 libfreeipmi16 libgraphite2-3 
[...]
  python-minimal python2.7 python2.7-minimal slurm-client slurm-wlm slurm-wlm-basic-plugins slurmctld slurmd
0 upgraded, 43 newly installed, 0 to remove and 0 not upgraded.
Need to get 20.8 MB of archives.
After this operation, 87.3 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://archive.ubuntu.com/ubuntu xenial/main amd64 fonts-dejavu-core all 2.35-1 [1,039 kB]
[...]
Get:43 http://archive.ubuntu.com/ubuntu xenial/universe amd64 slurm-wlm amd64 15.08.7-1build1 [6,482 B]
Fetched 20.8 MB in 3s (5,274 kB/s)
Extracting templates from packages: 100%
Selecting previously unselected package fonts-dejavu-core.
(Reading database ... 57952 files and directories currently installed.)
[...]
Processing triggers for libc-bin (2.23-0ubuntu9) ...
Processing triggers for systemd (229-4ubuntu21) ...
Processing triggers for ureadahead (0.100.0-19) ...
ubuntu@ubuntu-xenial:~$ sudo vim /etc/slurm-llnl/slurm.conf
ubuntu@ubuntu-xenial:~$ grep -v \# /etc/slurm-llnl/slurm.conf
ControlMachine=ubuntu-xenial
AuthType=auth/munge
CacheGroups=0
CryptoType=crypto/munge
MpiDefault=none
ProctrackType=proctrack/pgid
ReturnToService=1
SlurmctldPidFile=/var/run/slurm-llnl/slurmctld.pid
SlurmctldPort=6817
SlurmdPidFile=/var/run/slurm-llnl/slurmd.pid
SlurmdPort=6818
SlurmdSpoolDir=/var/lib/slurm-llnl/slurmd
SlurmUser=ubuntu
StateSaveLocation=/var/lib/slurm-llnl/slurmctld
SwitchType=switch/none
TaskPlugin=task/none
InactiveLimit=0
KillWait=30
MinJobAge=300
SlurmctldTimeout=120
SlurmdTimeout=300
Waittime=0
FastSchedule=1
SchedulerType=sched/backfill
SchedulerPort=7321
SelectType=select/linear
AccountingStorageType=accounting_storage/none
AccountingStoreJobComment=YES
ClusterName=cluster
JobCompType=jobcomp/none
JobAcctGatherFrequency=30
JobAcctGatherType=jobacct_gather/none
SlurmctldDebug=3
SlurmctldLogFile=/var/log/slurm-llnl/slurmctld.log
SlurmdDebug=3
SlurmdLogFile=/var/log/slurm-llnl/slurmd.log
NodeName=ubuntu-xenial CPUs=1 State=UNKNOWN
PartitionName=debug Nodes=ubuntu-xenial Default=YES MaxTime=INFINITE State=UP
ubuntu@ubuntu-xenial:~$ sudo chown ubuntu /var/log/slurm-llnl
ubuntu@ubuntu-xenial:~$ sudo chown ubuntu /var/lib/slurm-llnl/slurmctld
ubuntu@ubuntu-xenial:~$ sudo chown ubuntu /var/run/slurm-llnl
ubuntu@ubuntu-xenial:~$ sudo /etc/init.d/slurmctld start
[ ok ] Starting slurmctld (via systemctl): slurmctld.service.
ubuntu@ubuntu-xenial:~$ sudo /etc/init.d/slurmd start
[ ok ] Starting slurmd (via systemctl): slurmd.service.

And in the end it gives me the expected result:

ubuntu@ubuntu-xenial:~$ sinfo
PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
debug*       up   infinite      1   idle ubuntu-xenial
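
From there, a quick test job (a minimal sketch, run in the same VM) should confirm that jobs are actually scheduled and executed:

# Run a trivial command through the scheduler; it should print the node's hostname
srun -N1 hostname

# Or submit a small batch job and watch the queue
sbatch --wrap="sleep 30 && hostname"
squeue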

If following the exact steps above does not solve the problem for you, try running

sudo slurmctld -Dvvv

and

sudo slurmd -Dvvv

The messages should be clear enough.
