
When trying to connect to the cluster via Lens: Failed to get /version for clusterId=id Internal Server Error #8057

Open
AjayEdupuganti opened this issue Jun 7, 2024 · 5 comments
Labels
bug Something isn't working

Comments

@AjayEdupuganti

AjayEdupuganti commented Jun 7, 2024

I am trying to connect to my Kubernetes cluster, which was spun up with kubeadm on AWS instances.

I am using the Lens desktop app on Windows.

My kubeconfig file:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://<private ip address of the master>:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: DATA+OMITTED
    client-key-data: DATA+OMITTED

I think the issue is that the server field uses the private IP address of the master. Kindly help me with this error:

E0607 15:01:56.696813 3604 proxy_server.go:147] Error while proxying request: dial tcp :6443: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. Failed to get /version for clusterId=: Internal Server Error

If I change the config file to use the public IP address instead, I get another error:

E0607 16:25:02.787250 484 proxy_server.go:147] Error while proxying request: tls: failed to verify certificate: x509: certificate is valid for privateip1, privateip2, not publicip of the master

Failed to get /version for clusterId=clusterid: Internal Server Error
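For context on this x509 error: it means the API server's serving certificate does not list the master's public IP among its Subject Alternative Names (SANs), so TLS verification over the public address fails. A quick way to see exactly which addresses a certificate covers is openssl. A minimal sketch, using a throwaway self-signed certificate as a stand-in (on a kubeadm control plane node the real file is typically /etc/kubernetes/pki/apiserver.crt; the IPs and paths below are illustrative only):

```shell
# Create a throwaway cert with two SANs, just to have something to inspect.
# Requires OpenSSL 1.1.1+ for the -addext flag.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/demo.key -out /tmp/demo.crt -days 1 \
  -subj "/CN=kube-apiserver" \
  -addext "subjectAltName=IP:10.0.0.5,DNS:kubernetes"

# Print the SAN list. The public IP must appear here for TLS
# verification to succeed when connecting over that address.
openssl x509 -in /tmp/demo.crt -noout -ext subjectAltName
```

Running the same `openssl x509` inspection against the real apiserver.crt would show only the private IPs from the error message, which is why verification fails over the public IP.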

@AjayEdupuganti AjayEdupuganti added the bug Something isn't working label Jun 7, 2024
@Christiaanvdl

Hello @AjayEdupuganti, thank you for reaching out!

@AjayEdupuganti
Author

Hi @Christiaanvdl,
Can you help me with this issue, please?

I changed my admin.conf file: I removed the certificate, added the public IP address of the master, and set the insecure-skip-tls-verify: true flag, and then it worked.

server: https://:6443
insecure-skip-tls-verify: true

@Christiaanvdl

Hello, I have contacted our development team and will get back to you as soon as we have an update!

@Nokel81
Collaborator

Nokel81 commented Jun 12, 2024

You said that you switched to the public address and then it worked; were you previously using a private address? I assume you were using some sort of VPN to connect? That might be what is blocking the connection.

If you were to run kubectl proxy -p 12000 for that cluster (when trying to connect via the private address) and then run curl http://localhost:12000/api/v1/namespaces, what do you get?

@AjayEdupuganti
Author

@Nokel81
This is how I created my cluster:

sudo kubeadm init --pod-network-cidr=192.168.0.0/16

By default, when the cluster is created, the admin.conf file has the private IP address of the master as the server, so I can't access it from outside. Now:

  1. Why is it created on the private IP, how can we ensure it will use the public IP, and what are the differences between the two?
  2. I haven't used any VPN. It is just that when I change the config file to use the public IP address, it throws an error saying the certificate only supports the private IP addresses of the master. So I removed the certificate and tried accessing it over the public IP with insecure-skip-tls-verify: true, and it worked, but I can't use this.
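On question 1, one approach (not confirmed by anyone in this thread, so treat it as a sketch): kubeadm lets you add extra SANs to the apiserver certificate via a ClusterConfiguration, so the cert also verifies over the public IP without disabling TLS verification. The IP 203.0.113.10 below is a placeholder for the master's public IP, and the kubeadm commands are shown as comments because they must run on the control plane node:

```shell
# Placeholder public IP of the master -- substitute your own.
PUBLIC_IP=203.0.113.10

# Write a minimal kubeadm ClusterConfiguration that adds the
# public IP as an extra SAN on the apiserver serving certificate.
cat > kubeadm-config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs:
    - "${PUBLIC_IP}"
EOF

# On the control plane node (not runnable here), back up the old
# cert and regenerate it with the extra SAN:
#   sudo mv /etc/kubernetes/pki/apiserver.crt /etc/kubernetes/pki/apiserver.key /root/
#   sudo kubeadm init phase certs apiserver --config kubeadm-config.yaml
```

For a brand-new cluster, the equivalent at creation time would be passing --apiserver-cert-extra-sans=&lt;public-ip&gt; to kubeadm init. Either way, after the cert covers the public IP, the kubeconfig can point at the public address without insecure-skip-tls-verify.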