
Connection refused to a Kubernetes service

I am testing minikube and trying to build a demo application with three services. The idea is to have a web UI that communicates with the other services. Each service is written in a different language: Node.js, Python, and Go.

I created three Docker images, one per app, and tested the code; each one basically exposes a very simple REST endpoint. I then deployed them with minikube.
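Because the manifests below use imagePullPolicy: Never, the images need to already exist inside minikube's Docker daemon; a typical way to get them there looks roughly like this (a sketch; the build-context paths are illustrative):

# Point the local Docker CLI at minikube's Docker daemon
eval $(minikube docker-env)

# Build each image so the cluster can run it without pulling from a registry
# (the ./web-gateway etc. paths are illustrative)
docker build -t silvam11/web-gateway ./web-gateway
docker build -t silvam11/vcsa-manager ./vcsa-manager
docker build -t silvam11/repo-manager ./repo-manager

With the images in place, here is my current deployment YAML file: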

---
apiVersion: v1
kind: Namespace
metadata:
  name: ngci

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web-gateway
  namespace: ngci
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: web-gateway
    spec:
      containers:
      - env:
        - name: VCSA_MANAGER
          value: http://vcsa-manager-service:7070
        name: web-gateway
        image: silvam11/web-gateway
        imagePullPolicy: Never
        ports:
          - containerPort: 8080 
        readinessProbe:
          httpGet:
            path: /status
            port: 8080
          periodSeconds: 5           

---
apiVersion: v1
kind: Service
metadata:
  name: web-gateway-service
  namespace: ngci
spec:
  selector:
    app: web-gateway
  ports:
    - protocol: "TCP"
      # Port accessible inside cluster
      port: 8080
      # Port forward to inside the pod
      #targetPort did not work with nodePort, why?
      #targetPort: 9090
      # Port accessible outside cluster
      nodePort: 30001
      #name: grpc
  type: LoadBalancer

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: vcsa-manager
  namespace: ngci
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: vcsa-manager
    spec:
      containers:
        - name: vcsa-manager
          image: silvam11/vcsa-manager
          imagePullPolicy: Never
          ports:
            - containerPort: 8080

---
apiVersion: v1
kind: Service
metadata:
  name: vcsa-manager-service
  namespace: ngci
spec:
  selector:
    app: vcsa-manager
  ports:
    - protocol: "TCP"
      # Port accessible inside cluster
      port: 7070
      # Port forward to inside the pod
      #targetPort did not work with nodePort, why?
      targetPort: 9090
      # Port accessible outside cluster
      #nodePort: 30001
      #name: grpc

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: repo-manager
  namespace: ngci
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: repo-manager
    spec:
      containers:
        - name: repo-manager
          image: silvam11/repo-manager
          imagePullPolicy: Never
          ports:
            - containerPort: 8000

---
apiVersion: v1
kind: Service
metadata:
  name: repo-manager-service
  namespace: ngci
spec:
  selector:
    app: repo-manager
  ports:
    - protocol: "TCP"
      # Port accessible inside cluster
      port: 9090
      # Port forward to inside the pod
      #targetPort did not work with nodePort, why?
      #targetPort: 9090
      # Port accessible outside cluster
      #nodePort: 30001
      #name: grpc

As you can see, I created the services, but only the web gateway is defined as type LoadBalancer. It exposes two endpoints. The one named /status, which lets me test that the service is up, works and is reachable.
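A quick way to verify that from outside the cluster is the URL minikube assigns to the NodePort (a sketch; it assumes the manifests above have been applied):

# minikube prints the externally reachable URL for the service
curl "$(minikube service web-gateway-service -n ngci --url)/status"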

The second endpoint, named /user, talks to another k8s service. The code is very simple:

// Assumed setup (not shown in the original snippet): an Express app with JSON
// body parsing and the 'request' package for outgoing HTTP calls.
const express = require('express');
const request = require('request');
const app = express();
app.use(express.json());

app.post('/user', (req, res) => {
  console.log("/user called.");
  console.log("/user req.body : " + req.body);

  if(!req || !req.body)
  {
     var errorMsg = "Invalid argument sent";
     console.log(errorMsg);
     return res.status(500).send(errorMsg);
  }

  console.log("calling " + process.env.VCSA_MANAGER);
  const options = {
    url: process.env.VCSA_MANAGER,
    method: 'GET',
    headers: {
      'Accept': 'application/json'
    }
  };

  request(options, function(err, resDoc, body) {
    console.log("callback : " + body);
    if(err)
    {
      console.log("ERROR: " + err);
      return res.send(err);
    }

    console.log("statusCode : " + resDoc.statusCode);
    if(resDoc.statusCode != 200)
    {
      console.log("ERROR code: " + res.statusCode);
      return res.status(500).send(resDoc.statusCode);
    }

    return res.send({"ok" : body});
  });

});

The main idea of this snippet is to use the environment variable process.env.VCSA_MANAGER to send a request to the other service. That variable is defined in my k8s deployment YAML file as http://vcsa-manager-service:7070.

The problem is that this request returns a connection error. At first I thought it was a DNS issue, but the name seems to resolve fine from the web gateway pod:

kubectl exec -it web-gateway-7b4689bff9-rvbbn -n ngci -- ping vcsa-manager-service

PING vcsa-manager-service.ngci.svc.cluster.local (10.99.242.121): 56 data bytes

The ping command from the web gateway pod resolved DNS correctly, and as shown below the IP matches the service:

kubectl get svc -n ngci

NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
repo-manager-service   ClusterIP      10.102.194.179   <none>        9090/TCP         35m
vcsa-manager-service   ClusterIP      10.99.242.121    <none>        7070/TCP         35m
web-gateway-service    LoadBalancer   10.98.128.210    <pending>     8080:30001/TCP   35m
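Note that ping only proves that the service name resolves to the ClusterIP; it does not exercise the TCP port the Service forwards to. A more direct in-cluster check (assuming wget, or curl, is available in the web-gateway image) would be something like:

# Talk to the Service on its service port from inside the web-gateway pod
kubectl exec -it web-gateway-7b4689bff9-rvbbn -n ngci -- wget -qO- http://vcsa-manager-service:7070/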

Also, as suggested, here are their descriptions:

kubectl describe pods -n ngci
Name:           repo-manager-6cf98f5b54-pd2ht
Namespace:      ngci
Node:           minikube/10.0.2.15
Start Time:     Wed, 09 May 2018 17:53:54 +0100
Labels:         app=repo-manager
                pod-template-hash=2795491610
Annotations:    <none>
Status:         Running
IP:             172.17.0.10
Controlled By:  ReplicaSet/repo-manager-6cf98f5b54
Containers:
  repo-manager:
    Container ID:   docker://d2d54e42604323c8a6552b3de6e173e5c71eeba80598bfc126fbc03cae93d261
    Image:          silvam11/repo-manager
    Image ID:       docker://sha256:dc6dcbb1562cdd5f434f86696ce09db46c7ff5907b991d23dae08b2d9ed53a8f
    Port:           8000/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 10 May 2018 10:32:49 +0100
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Wed, 09 May 2018 17:53:56 +0100
      Finished:     Wed, 09 May 2018 18:31:24 +0100
    Ready:          True
    Restart Count:  1
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-tbkms (ro)
Conditions:
  Type           Status
  Initialized    True 
  Ready          True 
  PodScheduled   True 
Volumes:
  default-token-tbkms:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-tbkms
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason                 Age   From               Message
  ----    ------                 ----  ----               -------
  Normal  Scheduled              16h   default-scheduler  Successfully assigned repo-manager-6cf98f5b54-pd2ht to minikube
  Normal  SuccessfulMountVolume  16h   kubelet, minikube  MountVolume.SetUp succeeded for volume "default-token-tbkms"
  Normal  Pulled                 16h   kubelet, minikube  Container image "silvam11/repo-manager" already present on machine
  Normal  Created                16h   kubelet, minikube  Created container
  Normal  Started                16h   kubelet, minikube  Started container
  Normal  SuccessfulMountVolume  3m    kubelet, minikube  MountVolume.SetUp succeeded for volume "default-token-tbkms"
  Normal  SandboxChanged         3m    kubelet, minikube  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled                 3m    kubelet, minikube  Container image "silvam11/repo-manager" already present on machine
  Normal  Created                3m    kubelet, minikube  Created container
  Normal  Started                3m    kubelet, minikube  Started container


Name:           vcsa-manager-8696b44dff-mzq5q
Namespace:      ngci
Node:           minikube/10.0.2.15
Start Time:     Wed, 09 May 2018 17:53:54 +0100
Labels:         app=vcsa-manager
                pod-template-hash=4252600899
Annotations:    <none>
Status:         Running
IP:             172.17.0.14
Controlled By:  ReplicaSet/vcsa-manager-8696b44dff
Containers:
  vcsa-manager:
    Container ID:   docker://3e19fd8ca21a678e18eda3cb246708d10e3f1929a31859f0bb347b3461761b53
    Image:          silvam11/vcsa-manager
    Image ID:       docker://sha256:1a9cd03166dafceaee22586385ecda1c6ad3ed095b498eeb96771500092b526e
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 10 May 2018 10:32:54 +0100
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Wed, 09 May 2018 17:53:56 +0100
      Finished:     Wed, 09 May 2018 18:31:15 +0100
    Ready:          True
    Restart Count:  1
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-tbkms (ro)
Conditions:
  Type           Status
  Initialized    True 
  Ready          True 
  PodScheduled   True 
Volumes:
  default-token-tbkms:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-tbkms
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason                 Age   From               Message
  ----    ------                 ----  ----               -------
  Normal  Scheduled              16h   default-scheduler  Successfully assigned vcsa-manager-8696b44dff-mzq5q to minikube
  Normal  SuccessfulMountVolume  16h   kubelet, minikube  MountVolume.SetUp succeeded for volume "default-token-tbkms"
  Normal  Pulled                 16h   kubelet, minikube  Container image "silvam11/vcsa-manager" already present on machine
  Normal  Created                16h   kubelet, minikube  Created container
  Normal  Started                16h   kubelet, minikube  Started container
  Normal  SuccessfulMountVolume  3m    kubelet, minikube  MountVolume.SetUp succeeded for volume "default-token-tbkms"
  Normal  SandboxChanged         3m    kubelet, minikube  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled                 3m    kubelet, minikube  Container image "silvam11/vcsa-manager" already present on machine
  Normal  Created                3m    kubelet, minikube  Created container
  Normal  Started                3m    kubelet, minikube  Started container


Name:           web-gateway-7b4689bff9-rvbbn
Namespace:      ngci
Node:           minikube/10.0.2.15
Start Time:     Wed, 09 May 2018 17:53:55 +0100
Labels:         app=web-gateway
                pod-template-hash=3602456995
Annotations:    <none>
Status:         Running
IP:             172.17.0.12
Controlled By:  ReplicaSet/web-gateway-7b4689bff9
Containers:
  web-gateway:
    Container ID:   docker://677fbcbc053c57e4aa24c66d7f27d3e9910bc3dbb5fda4c1cdf5f99a67dfbcc3
    Image:          silvam11/web-gateway
    Image ID:       docker://sha256:b80fb05c087934447c93c958ccef5edb08b7c046fea81430819823cc382337dd
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 10 May 2018 10:32:54 +0100
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Wed, 09 May 2018 17:53:57 +0100
      Finished:     Wed, 09 May 2018 18:31:16 +0100
    Ready:          True
    Restart Count:  1
    Readiness:      http-get http://:8080/status delay=0s timeout=1s period=5s #success=1 #failure=3
    Environment:
      VCSA_MANAGER:  http://vcsa-manager-service:7070
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-tbkms (ro)
Conditions:
  Type           Status
  Initialized    True 
  Ready          True 
  PodScheduled   True 
Volumes:
  default-token-tbkms:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-tbkms
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                 Age              From               Message
  ----     ------                 ----             ----               -------
  Normal   Scheduled              16h              default-scheduler  Successfully assigned web-gateway-7b4689bff9-rvbbn to minikube
  Normal   SuccessfulMountVolume  16h              kubelet, minikube  MountVolume.SetUp succeeded for volume "default-token-tbkms"
  Normal   Pulled                 16h              kubelet, minikube  Container image "silvam11/web-gateway" already present on machine
  Normal   Created                16h              kubelet, minikube  Created container
  Normal   Started                16h              kubelet, minikube  Started container
  Warning  Unhealthy              16h              kubelet, minikube  Readiness probe failed: Get http://172.17.0.13:8080/status: dial tcp 172.17.0.13:8080: getsockopt: connection refused
  Normal   SuccessfulMountVolume  3m               kubelet, minikube  MountVolume.SetUp succeeded for volume "default-token-tbkms"
  Normal   SandboxChanged         3m               kubelet, minikube  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled                 3m               kubelet, minikube  Container image "silvam11/web-gateway" already present on machine
  Normal   Created                3m               kubelet, minikube  Created container
  Normal   Started                3m               kubelet, minikube  Started container
  Warning  Unhealthy              3m (x3 over 3m)  kubelet, minikube  Readiness probe failed: Get http://172.17.0.12:8080/status: dial tcp 172.17.0.12:8080: getsockopt: connection refused

The pods in the ngci namespace are:

silvam11@ubuntu:~$ kubectl get pods -n ngci
NAME                            READY     STATUS    RESTARTS   AGE
repo-manager-6cf98f5b54-pd2ht   1/1       Running   1          16h
vcsa-manager-8696b44dff-mzq5q   1/1       Running   1          16h
web-gateway-7b4689bff9-rvbbn    1/1       Running   1          16h

What am I missing here? Is it a firewall?

Mauro

3
Mauro Silva

You have misconfigured the port numbers.

First, vcsa-manager is exposed on port 8080, but you tried to map the vcsa-manager-service Service to port 9090. Then, repo-manager is exposed on port 8000, but you commented out targetPort, so the Service is not mapped to any pod port. You need to map each Service to the correct container port.

The fixed configuration would look like this:

---
apiVersion: v1
kind: Namespace
metadata:
  name: ngci

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web-gateway
  namespace: ngci
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: web-gateway
    spec:
      containers:
      - env:
        - name: VCSA_MANAGER
          value: http://vcsa-manager-service:7070
        name: web-gateway
        image: silvam11/web-gateway
        imagePullPolicy: Never
        ports:
          - containerPort: 8080 
        readinessProbe:
          httpGet:
            path: /status
            port: 8080
          periodSeconds: 5           

---
apiVersion: v1
kind: Service
metadata:
  name: web-gateway-service
  namespace: ngci
spec:
  selector:
    app: web-gateway
  ports:
    - protocol: "TCP"
      port: 8080
      targetPort: 8080
      nodePort: 30001

  type: LoadBalancer

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: vcsa-manager
  namespace: ngci
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: vcsa-manager
    spec:
      containers:
        - name: vcsa-manager
          image: silvam11/vcsa-manager
          imagePullPolicy: Never
          ports:
            - containerPort: 8080

---
apiVersion: v1
kind: Service
metadata:
  name: vcsa-manager-service
  namespace: ngci
spec:
  selector:
    app: vcsa-manager
  ports:
    - protocol: "TCP"
      port: 7070
      targetPort: 8080

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: repo-manager
  namespace: ngci
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: repo-manager
    spec:
      containers:
        - name: repo-manager
          image: silvam11/repo-manager
          imagePullPolicy: Never
          ports:
            - containerPort: 8000

---
apiVersion: v1
kind: Service
metadata:
  name: repo-manager-service
  namespace: ngci
spec:
  selector:
    app: repo-manager
  ports:
    - protocol: "TCP"
      port: 9090
      targetPort: 8000

I have fixed all the ports in the configuration.
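After applying the corrected manifests, one way to confirm the Service-to-pod wiring (a sketch; ngci.yaml is an illustrative filename for the manifests above) is to check the Endpoints, which should now list each pod IP together with the corrected targetPort:

kubectl apply -f ngci.yaml    # illustrative filename
kubectl get endpoints -n ngci

# Expected shape (illustrative, using the pod IPs from the question):
# NAME                   ENDPOINTS          AGE
# repo-manager-service   172.17.0.10:8000   1m
# vcsa-manager-service   172.17.0.14:8080   1m
# web-gateway-service    172.17.0.12:8080   1m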

1
Nick Rak

I am not sure whether this is the error, but the message Readiness probe failed: Get http://172.17.0.13:8080/status: dial tcp 172.17.0.13:8080: getsockopt: connection refused is caused by this: since the targetPort of the "web-gateway-service" Service is commented out, it is forwarding requests to the default port 80, which is closed.

You need to do one of the following:

  1. Uncomment the targetPort in the "web-gateway-service" Service and set it to port 8080.
  2. Change the containerPort in the "web-gateway" Deployment to port 80.

I would probably go with the first one.
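If you prefer not to re-apply the whole manifest, a minimal sketch of that first option as an in-place patch (a JSON merge patch replaces the whole ports list, so the existing nodePort is repeated) would be:

# Set targetPort to 8080 on the existing Service
kubectl patch svc web-gateway-service -n ngci --type merge \
  -p '{"spec":{"ports":[{"protocol":"TCP","port":8080,"targetPort":8080,"nodePort":30001}]}}'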

One note about Services and their ports:

  • port: PORT -> this is the Service port. If your Service is running at 10.0.0.10, you can hit it at 10.0.0.10:PORT.
  • targetPort: PORT -> this is the port the Service forwards requests to, so it should always match the pod's containerPort (see the illustration below).
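As a concrete illustration of that mapping, using the ClusterIP and pod IP reported in the question (and assuming wget is available in the web-gateway image and the corrected targetPort: 8080 has been applied), both of the following should reach the same vcsa-manager container, first via the Service port and then via the pod's containerPort:

# Service port (ClusterIP of vcsa-manager-service)
kubectl exec -it web-gateway-7b4689bff9-rvbbn -n ngci -- wget -qO- http://10.99.242.121:7070/
# containerPort / targetPort (vcsa-manager pod IP directly)
kubectl exec -it web-gateway-7b4689bff9-rvbbn -n ngci -- wget -qO- http://172.17.0.14:8080/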
1
suren