I started maintaining Kubernetes on an M4 mac, but the performance wasn't great. So I now keep two environments going: Kubernetes inside arm64 linux on the mac, and Kubernetes inside x86 linux on windows.
Going with microk8s & multipass gets you arm64 ubuntu, so on the mac I installed UTM, run ubuntu inside it, and run microk8s in there.
Maybe my settings are to blame, but sadly it performs worse than the microk8s I used on the intel mac...
Since I no longer force x86 linux to run under arm64, both the x86 and arm64 setups run happily without slowdowns.
Kubernetes history notes with microk8s#
I used to keep the microk8s work history around here, and the minikube history around here.
Since the setup changed, I'm recording the history here from now on. arm64 and x86 get done almost simultaneously.
I'll keep adding and updating around April, August, and December.
| Time | k8s environment on the mac | Cluster |
| --- | --- | --- |
| Sep 2025 | microk8s v1.34 (containerd v1.7.28) | Kubernetes v1.34/stable |
| May 2025 | microk8s v1.33 (containerd v1.7.27) | Kubernetes v1.33/stable |
| Dec 2024 | microk8s v1.32 (containerd v1.6.28) | Kubernetes v1.32/stable |
| Nov 2024 | microk8s v1.31 (containerd v1.6.28) | Kubernetes v1.31/stable |
The latest local state looks like this. This output is from windows11 - vmware - x86 - ubuntu, but the arm64 ubuntu running under UTM on the mac shows the same.
```
root@kubelinux:/microk8s/script# onlchk
----- cluster status ------
microk8s is running
high-availability: no
datastore master nodes: 127.0.0.1:19001
datastore standby nodes: none
addons:
------ cluster node ------
PRETTY_NAME="Ubuntu 24.04.3 LTS"
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/ubuntu--vg-ubuntu--lv 57G 31G 24G 57% /
/dev/sda2 2.0G 77M 1.8G 5% /boot
//192.168.1.40/kubernetes 932G 312G 620G 34% /microk8s
----- recent cluster ver -----
latest/stable: v1.32.3 2025-04-07 (7964) 172MB classic
installed:          v1.34.0                2025-08-28  (8384) 183MB classic ⭐️it's in!!
1.34/stable: v1.34.0 2025-08-28 (8384) 183MB classic
1.33/stable: v1.33.0 2025-04-24 (8205) 177MB classic
------- images in ctr -------
docker.io/library/save-django:gvis-saved 1.4 GiB
docker.io/library/save-xrdpubu:gvis-saved 6.3 GiB
-------kubectl version -------
clientVersion:
gitVersion: v1.34.0
serverVersion:
gitVersion: v1.34.0
----kubectl po/svc/configmap status ----
NAME READY STATUS RESTARTS AGE
pod/cl-ubun 1/1 Running 1 (4m10s ago) 7m30s
pod/sv-django 1/1 Running 1 (4m10s ago) 7m19s
pod/sv-https-portal 1/1 Running 2 (3m12s ago) 7m14s
pod/sv-mariadb 1/1 Running 1 (4m10s ago) 7m30s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 67m
service/sv-django ClusterIP 10.152.183.188 <none> 38080/TCP 66m
service/sv-https-portal ClusterIP 10.152.183.76 <none> 30080/TCP,30443/TCP 66m
service/sv-mariadb ClusterIP 10.152.183.176 <none> 13306/TCP 66m
NAME DATA AGE
configmap/kube-root-ca.crt 1 67m
configmap/sv-mariadb-txt 5 66m
-------kubectl PV -------
NAME CAPACITY ACCESS RECLAIM
gvis-pv-django-sslcerts 1Gi RWO Bound
gvis-pv-django-uwsgi-nginx 1Gi RWO Bound
gvis-pv-mariadb 20Gi RWO Bound
gvis-pv-mariadbconf 5Gi RWO Bound
gvis-pv-ubun 10Gi RWO Bound
pvc-e31b5f9e-303a-43e8-a101-845dd10f4a71 30Gi RWX Bound
-------kubectl forward -------
port-forward --address 0.0.0.0 cl-ubun 33389:3389
port-forward --address 0.0.0.0 sv-django 38080:8080
port-forward --address 0.0.0.0 sv-https-portal 30443:443
port-forward --address 0.0.0.0 sv-mariadb 13306:3306
root@kubelinux:/microk8s/script#
```
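By the way, onlchk in the prompt is a homemade wrapper, not a standard command. A minimal sketch of a script producing those sections might look like this (an assumed reconstruction; the real script isn't shown in this post):

```bash
#!/bin/sh
# Sketch of an onlchk-style status script (assumed reconstruction, not the real one).
echo '----- cluster status ------'
microk8s status | head -7
echo '------ cluster node ------'
grep PRETTY_NAME /etc/os-release
df -h / /boot /microk8s
echo '----- recent cluster ver -----'
snap info microk8s | grep classic | grep -v tracking | sort -r | head -4
echo '------- images in ctr -------'
microk8s.ctr images ls | grep save | awk '{print $1, $4, $5}'
echo '-------kubectl version -------'
microk8s.kubectl version -o yaml | grep 'Version:'
echo '----kubectl po/svc/configmap status ----'
microk8s.kubectl get pod,service,configmap
echo '-------kubectl PV -------'
microk8s.kubectl get pv
```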
Checking which kubernetes cluster versions are available through microk8s#
There it is.
```
root@kubelinux:/microk8s/script# snap info microk8s | egrep 'stable|installed|stable' | grep classic | grep -v 'tracking' | sort -r | head -4
latest/stable: v1.32.3 2025-04-07 (7964) 172MB classic
installed: v1.32.3 2025-04-22 (8148) 172MB classic
1.33/stable:        v1.33.0        2025-04-24  (8205) 177MB classic ⭐️a newer one's out
1.32/stable: v1.32.3 2025-04-22 (8148) 172MB classic
root@kubelinux:/microk8s/script#
```
Rewriting the cluster-recreate script#
Now that the microk8s version can be bumped in one shot, I don't use this script anymore.
```bash
## -------------------------------------------------------------------------
## Script Name : 300_kubeClusterRecreate.sh
## Created by : T.Naritomi
## on : 2023.08.26
## Updated by : T.Naritomi
##          on : 2025.05.10
## Parameters :
## Return Code : 0=Normal End
## Comments : change driver hyperkit -> qemu2 , minikube -> microk8s
## -------------------------------------------------------------------------
## ---define----------------------------------------------------------------
EXEC_HOME=/microk8s/script # Execute Home directory
KUBE_HOME=/microk8s # kubernetes Home directory
LOG_FILE=/microk8s/log/kube.log # Log file
GVIS_VER=1.33/stable ⭐️rewrite this per target version
GVIS_USER=nari
## ---detail----------------------------------------------------------------
read -p "--- kube Data save ready ? ---(y/N):" yn
case "$yn" in [yY]*) ;; *) echo "abort." ; exit ;; esac
read -p "--- kube Recreate cluster ready ? ---(y/N):" yn
case "$yn" in [yY]*) ;; *) echo "abort." ; exit ;; esac
echo '---Recreate start---' >> ${LOG_FILE}
echo -------- `date +%F_%T` -------- >> ${LOG_FILE}
echo ${LOG_FILE}
snap remove microk8s >> ${LOG_FILE} ⭐️the cluster gets blown away here
rm -fR ~/.kube
echo -------- `date +%F_%T` -------- >> ${LOG_FILE}
snap install microk8s --channel=${GVIS_VER} --classic >> ${LOG_FILE}
## snap install microk8s --stable
mkdir -p /data
chmod 777 /data
microk8s enable registry --size 30Gi >> ${LOG_FILE} ⭐️my container images don't fit unless the registry is large; grow it if needed
microk8s enable hostpath-storage >> ${LOG_FILE}
microk8s enable host-access >> ${LOG_FILE}
echo -------- kubernetes cluster created -------- >> ${LOG_FILE}
:(snip) ⭐️configmap/service/pv/pvc get created after this
```
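The snipped tail just recreates the kubernetes objects. Going by the manifest file names that appear later in this post, it presumably looks something like this (a guess, not the actual script; the configmap/service manifests aren't named in this post):

```bash
## Assumed tail of 300_kubeClusterRecreate.sh (reconstruction, not the real thing):
## recreate the pv/pvc from the manifests used elsewhere in this post.
cd ${KUBE_HOME}
for f in gvis-PersistentVol-mariadbconf.yaml gvis-PersistentVol-mariadb.yaml \
         gvis-PersistentVol-sv_django-uwsgi-nginx.yaml ; do
    microk8s.kubectl create -f ${f} >> ${LOG_FILE}
done
microk8s.kubectl get nodes >> ${LOG_FILE}
```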
Preserving the persistent areas#
tar.gz the persistent areas and keep copies.
```
root@kubelinux:/data# cd /data
root@kubelinux:/data# ls
gvis-pv-django-sslcerts gvis-pv-mariadb gvis-pv-ubun sv_django-uwsgi-nginx
gvis-pv-django-uwsgi-nginx gvis-pv-mariadbconf gvis-pv-ubun.tar.gz sv_django-uwsgi-nginx.tar.gz
root@kubelinux:/data# rm gvis-pv-ubun.tar.gz
root@kubelinux:/data# tar czf gvis-pv-ubun.tar.gz gvis-pv-ubun/
root@kubelinux:/data# ls -lh gvis-pv-ubun*
-rw-r--r-- 1 root root 14M Dec 19 06:12 gvis-pv-ubun.tar.gz
gvis-pv-ubun:
total 12K
drwxrwxrwx 3 nari nari 4.0K Dec 15 06:40 download
drwxrwxrwx 2 nari nari 4.0K May 14 2021 _old
drwxrwxrwx 2 nari nari 4.0K Dec 21 2022 script
root@kubelinux:/data#
root@kubelinux:/data# mv gvis-pv-ubun.tar.gz /microk8s/nariDockerDat/
root@kubelinux:/data# ls -l /microk8s/nariDockerDat/*gvis-pv-ubun*
-rwx------ 1  501 dialout  1254020 Oct 20 05:56 /microk8s/nariDockerDat/20241020_gvis-pv-ubun.tar.gz ⭐️only the previous generation is kept
-rw-r--r-- 1 root root    14558769 Dec 19 06:12 /microk8s/nariDockerDat/gvis-pv-ubun.tar.gz ⭐️keep this one and the teraterm macro does the rest
root@kubelinux:/data#
```
Copy the tar.gz file off to the storage location.
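For reference, the archive plus one-generation rotation could be collapsed into a few lines like this (a sketch; in practice a teraterm macro drives it, and the dated 20241020_ prefix above is the kept previous generation):

```bash
# Sketch: archive the pv directory, keeping the old archive as a dated generation.
cd /data
DEST=/microk8s/nariDockerDat
[ -f ${DEST}/gvis-pv-ubun.tar.gz ] && \
    mv ${DEST}/gvis-pv-ubun.tar.gz ${DEST}/$(date +%Y%m%d)_gvis-pv-ubun.tar.gz
tar czf gvis-pv-ubun.tar.gz gvis-pv-ubun/
mv gvis-pv-ubun.tar.gz ${DEST}/
# prune anything older than the newest dated generation
ls -1t ${DEST}/*_gvis-pv-ubun.tar.gz | tail -n +2 | xargs -r rm -f
```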
Rebuilding the cluster#
Even though it's called a version upgrade, I've been blowing the cluster away and rebuilding it (I've since learned that snap refresh microk8s --channel=x.xx/stable can raise the version in place, but I won't trust it as stable until I've done it two or three more times...).
The persistent areas are kept separately, so copying them back in gets things running just as before.
Pods or no pods, everything gets wiped, so I start by stopping the cluster.
It always spits out something error-looking.
```
root@kubelinux:/microk8s/script# sh ./301_kubeStop.sh
/microk8s/log/kube.log
error: lost connection to pod
error: lost connection to pod
error: lost connection to pod
error: lost connection to pod
:(snip)
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['/snap/microk8s/7449/microk8s-kubectl.wrapper', 'port-forward', '-n', 'kube-system', 'service/kubernetes-dashboard', '10443:443', '--address', '0.0.0.0']' returned non-zero exit status 1.
root@kubelinux:/microk8s/script#
```
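301_kubeStop.sh itself isn't shown in this post. The "lost connection to pod" noise is just the kubectl port-forward sessions dying as the cluster goes down, so the script presumably amounts to something like this (assumed shape):

```bash
# Assumed shape of 301_kubeStop.sh (not the actual script).
LOG_FILE=/microk8s/log/kube.log
echo ${LOG_FILE}
echo '---- microk8s stop ----' >> ${LOG_FILE}
microk8s stop >> ${LOG_FILE}    # port-forwards and the dashboard proxy die here
```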
After confirming the cluster has stopped, stash the log away too, just in case.
```
root@kubelinux:/microk8s/log# pwd
/microk8s/log
root@kubelinux:/microk8s/log# ls
20241204kube.log kube.log
root@kubelinux:/microk8s/log# tail -f kube.log
2024-12-18T09:44:52+09:00 INFO Waiting for "snap.microk8s.daemon-kubelite.service" to stop.
Stopped.
-------- 2024-12-18_09:45:50 --------
-------- 2024-12-19_05:17:28 --------
---- microk8s start ----
-------- 2024-12-19_05:18:27 --------
-------- 2024-12-19_06:22:35 --------
---- microk8s stop ----
Stopped.
-------- 2024-12-19_06:22:41 --------
^C
root@kubelinux:/microk8s/log#
```
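The dated 20241204kube.log above is such a stashed copy; one line does it (sketch):

```bash
# Stash the current log under a date prefix, matching the naming above.
cd /microk8s/log && cp -p kube.log "$(date +%Y%m%d)kube.log"
```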
Run the rebuild script and off it goes. It finishes in under five minutes.
```
root@kubelinux:/microk8s/script# uname -a
Linux kubelinux 6.8.0-51-generic #52-Ubuntu SMP PREEMPT_DYNAMIC Thu Dec  5 13:09:44 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux ⭐️double-check, just in case
root@kubelinux:/microk8s/script# cat 300_kubeClusterRecreate.sh | grep VER
GVIS_VER=1.32/stable ⭐️double-check, just in case
snap install microk8s --channel=${GVIS_VER} --classic >> ${LOG_FILE}
root@kubelinux:/microk8s/script#
root@kubelinux:/microk8s/script# ./300_kubeClusterRecreate.sh ⭐️here we go, do your best
--- kube Data save ready ? ---(y/N):y
--- kube Recreate cluster ready ? ---(y/N):y
/microk8s/log/kube.log
Infer repository core for addon registry
Infer repository core for addon hostpath-storage
Infer repository core for addon hostpath-storage
Infer repository core for addon host-access
Error: ipv4: Address already assigned.
persistentvolume/gvis-pv-mariadb created
persistentvolumeclaim/gvis-pv-mariadb-claim created
persistentvolume/gvis-pv-mariadbconf created
persistentvolumeclaim/gvis-pv-mariadbconf-claim created
persistentvolume/gvis-pv-django-sslcerts created
persistentvolumeclaim/gvis-pv-django-sslcerts-claim created
persistentvolume/gvis-pv-django-uwsgi-nginx created
persistentvolumeclaim/gvis-pv-django-uwsgi-nginx-claim created
persistentvolume/gvis-pv-ubun created
persistentvolumeclaim/gvis-pv-ubun-claim created
configmap/sv-mariadb-txt created
service/sv-django created
service/sv-https-portal created
service/sv-mariadb created
NAME STATUS ROLES AGE VERSION
kubelinux Ready <none> 73s v1.32.0
root@kubelinux:/microk8s/script# Checking if Dashboard is running.
Infer repository core for addon dashboard
Infer repository core for addon metrics-server
Waiting for Dashboard to come up.
root@kubelinux:/microk8s/script# Trying to get token from microk8s-dashboard-token
Waiting for secret token (attempt 0)
Dashboard will be available at https://127.0.0.1:10443
Use the following token to login:
(the token string is printed here)
```
Note the token down; it's used to log in when opening the dashboard in a browser.
The log the script wrote out looks like this.
Came out properly.
```
-------- 2024-12-19_06:31:42 --------
microk8s (1.32/stable) v1.32.0 from Canonical✓ installed
Enabling default storage class.
WARNING: Hostpath storage is not suitable for production environments.
A hostpath volume can grow beyond the size limit set in the volume claim manifest.
deployment.apps/hostpath-provisioner created
storageclass.storage.k8s.io/microk8s-hostpath created
serviceaccount/microk8s-hostpath created
clusterrole.rbac.authorization.k8s.io/microk8s-hostpath created
clusterrolebinding.rbac.authorization.k8s.io/microk8s-hostpath created
Storage will be available soon.
namespace/container-registry created
persistentvolumeclaim/registry-claim created
deployment.apps/registry created
service/registry created
configmap/local-registry-hosting configured
The registry will be created with the size of 30Gi.
Default storage class will be used.
Addon core/hostpath-storage is already enabled
Setting 10.0.1.1 as host-access
-------- kubernetes cluster created --------
-------- 2024-12-19_06:34:11 --------
```
Right after the cluster is rebuilt, the partitions and the /data persistent area look like this.
```
nari@kubelinux:/data$ df -h | grep -v tmpfs | grep -v common
Filesystem Size Used Avail Use% Mounted on
efivarfs 256K 29K 223K 12% /sys/firmware/efi/efivars
/dev/mapper/ubuntu--vg-ubuntu--lv 59G 25G 32G 44% /
/dev/sda2 2.0G 183M 1.7G 11% /boot
/dev/sda1 1.1G 6.2M 1.1G 1% /boot/efi
share 461G 325G 137G 71% /microk8s
nari@kubelinux:/data$ sudo du -shc *
52K gvis-pv-django-sslcerts
8.4M gvis-pv-django-uwsgi-nginx
3.9G gvis-pv-mariadb
2.5G gvis-pv-mariadbconf
15M gvis-pv-ubun
1.7M sv_django-uwsgi-nginx
13M sv_django-uwsgi-nginx.tar.gz
6.4G total
nari@kubelinux:/data$
```
After that, pour in the Pod images and the persistent areas using the method from before, and check everything starts up.
Got an error on kubernetes 1.33#
Rebuilding the cluster errored out at the dashboard startup step.
It happens on 1.34 too.
```
Checking if Dashboard is running.
Infer repository core for addon dashboard
Waiting for Dashboard to come up.
Error from server (NotFound): deployments.apps "kubernetes-dashboard" not found ⭐️oi, what are you on about
Traceback (most recent call last):
File "/snap/microk8s/8205/scripts/wrappers/dashboard_proxy.py", line 111, in <module>
dashboard_proxy()
File "/snap/microk8s/8205/usr/lib/python3/dist-packages/click/core.py", line 764, in __call__
return self.main(*args, **kwargs)
File "/snap/microk8s/8205/usr/lib/python3/dist-packages/click/core.py", line 717, in main
rv = self.invoke(ctx)
File "/snap/microk8s/8205/usr/lib/python3/dist-packages/click/core.py", line 956, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/snap/microk8s/8205/usr/lib/python3/dist-packages/click/core.py", line 555, in invoke
return callback(*args, **kwargs)
File "/snap/microk8s/8205/scripts/wrappers/dashboard_proxy.py", line 79, in dashboard_proxy
check_output(command)
File "/snap/microk8s/8205/usr/lib/python3.8/subprocess.py", line 415, in check_output
return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
File "/snap/microk8s/8205/usr/lib/python3.8/subprocess.py", line 516, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['/snap/microk8s/8205/microk8s-kubectl.wrapper', '-n', 'kube-system', 'wait', '--timeout=240s', 'deployment', 'kubernetes-dashboard', '--for', 'condition=available']' returned non-zero exit status 1.
```
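The wrapper is waiting for a deployment named kubernetes-dashboard in the kube-system namespace, but (as the kong log in the next section hints) the newer dashboard addon seems to deploy into its own kubernetes-dashboard namespace. A quick way to see where things actually landed:

```bash
# Find the dashboard objects regardless of namespace.
microk8s.kubectl get deployments -A | grep -i dashboard
microk8s.kubectl get pods -n kubernetes-dashboard
```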
The kubernetes pod called kong isn't running#
The logs might tell me something. Checking during the 1.33 upgrade, some component called kong was stuck retrying.
```
root@kubelinux:/microk8s/script# microk8s.kubectl logs pod/kubernetes-dashboard-kong-648658d45f-chcwt -n kubernetes-dashboard
Defaulted container "proxy" out of: proxy, clear-stale-pid (init)
2025/05/09 21:12:43 [warn] 1#0: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /kong_prefix/nginx.conf:7
nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /kong_prefix/nginx.conf:7
2025/05/09 21:12:43 [notice] 1#0: [lua] init.lua:791: init(): [request-debug] token for request debugging: b1946751-38dd-40b3-b93f-a54663acd748
2025/05/09 21:12:43 [emerg] 1#0: bind() to unix:/kong_prefix/sockets/we failed (98: Address already in use)
nginx: [emerg] bind() to unix:/kong_prefix/sockets/we failed (98: Address already in use)
2025/05/09 21:12:43 [notice] 1#0: try again to bind() after 500ms
2025/05/09 21:12:43 [emerg] 1#0: bind() to unix:/kong_prefix/sockets/we failed (98: Address already in use)
nginx: [emerg] bind() to unix:/kong_prefix/sockets/we failed (98: Address already in use)
2025/05/09 21:12:43 [notice] 1#0: try again to bind() after 500ms
2025/05/09 21:12:43 [emerg] 1#0: bind() to unix:/kong_prefix/sockets/we failed (98: Address already in use)
nginx: [emerg] bind() to unix:/kong_prefix/sockets/we failed (98: Address already in use)
2025/05/09 21:12:43 [notice] 1#0: try again to bind() after 500ms
2025/05/09 21:12:43 [emerg] 1#0: bind() to unix:/kong_prefix/sockets/we failed (98: Address already in use)
nginx: [emerg] bind() to unix:/kong_prefix/sockets/we failed (98: Address already in use)
2025/05/09 21:12:43 [notice] 1#0: try again to bind() after 500ms
2025/05/09 21:12:43 [emerg] 1#0: bind() to unix:/kong_prefix/sockets/we failed (98: Address already in use)
nginx: [emerg] bind() to unix:/kong_prefix/sockets/we failed (98: Address already in use)
2025/05/09 21:12:43 [notice] 1#0: try again to bind() after 500ms
2025/05/09 21:12:43 [emerg] 1#0: still could not bind()
nginx: [emerg] still could not bind()
root@kubelinux:/microk8s/script#
```
Someone out there hit the same thing.
KIC fails to start. All pods down: nginx [emerg] 1#0: bind() to unix:/kong_prefix/sockets/we failed …
What a pain...
Working around it#
It's 1.33.0 for now; maybe if I wait until 1.33.1 or so comes out the error goes away, but isn't there some way around it?
Reading through the microk8s issues on github, I found this one.
1.33 kubelite crash loop · Issue #5057 · canonical/microk8s · GitHub
It's about kubelite, but in there someone runs sudo snap refresh microk8s --channel=1.33/stable.
Wait, what!?
So microk8s can upgrade the cluster in place?
I went back through the microk8s docs.
MicroK8s - Upgrading MicroK8s
Yep, I'd missed it. A cluster is not, in fact, something you always have to rebuild.
Using refresh instead of upgrade to raise the cluster version isn't exactly intuitive, though.
The plan: go up one minor version at a time, and take proper backups of the persistent areas first.
First, put the cluster-recreate script back, build the cluster once more, then stop it right away.
```bash
GVIS_VER=1.32/stable ⭐️rewritten from 1.33 back to 1.32 for now
```
The pods are gone, but the container images and persistent areas get poured back in later, so no matter. Focus on just the cluster.
After building the 1.32 cluster, upgrade it here.
```
root@kubearm:/microk8s/script# snap refresh microk8s --channel=1.33/stable
microk8s (1.33/stable) v1.33.0 from Canonical✓ refreshed
root@kubearm:/microk8s/script#
root@kubearm:/microk8s/script# snap info microk8s | grep classic | grep install
installed: v1.33.0 (8206) 155MB classic
root@kubearm:/microk8s/script#
```
For 1.34 I built at 1.32 first, then stepped the version up to 1.34.
```
root@kubelinux:/microk8s/script# snap refresh microk8s --channel=1.33/stable
microk8s (1.33/stable) v1.33.0 from Canonical✓ refreshed
root@kubelinux:/microk8s/script# snap refresh microk8s --channel=1.34/stable
microk8s (1.34/stable) v1.34.0 from Canonical✓ refreshed
root@kubelinux:/microk8s/script#
```
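Since refresh is supposed to go one minor at a time, the channel hops can be scripted like this (a sketch; stop the cluster first, as above):

```bash
# Sketch: step the snap channel up one minor version per iteration.
for ch in 1.33/stable 1.34/stable ; do
    snap refresh microk8s --channel=${ch}
    microk8s status --wait-ready    # let the cluster settle before the next hop
done
```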
Rolling back to 1.32 every time is a hassle, so from next time maybe I'll do it with this one command line.
Tried it on the UTM side on macos and it went fine.
```
root@kubearm:/microk8s/script# sh ./301_kubeStop.sh
/microk8s/log/kube.log
root@kubearm:/microk8s/script# snap refresh microk8s --channel=1.34/stable
microk8s (1.34/stable) v1.34.0 from Canonical✓ refreshed
root@kubearm:/microk8s/script#
```
It went in cleanly and the cluster itself runs again.
```
root@kubearm:/microk8s/script# sh ./302_kubeStart.sh
/microk8s/log/kube.log
root@kubearm:/microk8s/script# Checking if Dashboard is running.
Infer repository core for addon dashboard
Waiting for Dashboard to come up.
Trying to get token from microk8s-dashboard-token
Waiting for secret token (attempt 0)
Dashboard will be available at https://127.0.0.1:10443
Use the following token to login:
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```
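302_kubeStart.sh isn't shown in this post either; from the log markers and the token output, it presumably boils down to something like this (assumed shape, not the actual script):

```bash
# Assumed shape of 302_kubeStart.sh.
LOG_FILE=/microk8s/log/kube.log
echo ${LOG_FILE}
echo '---- microk8s start ----' >> ${LOG_FILE}
microk8s start >> ${LOG_FILE}
microk8s dashboard-proxy &    # prints the dashboard URL and login token
```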
The dashboard starts fine.
(screenshot: the kubernetes dashboard)
State of microk8s running on ubuntu#
The OS is ubuntu. Nearly identical on x86 and arm64.
```
root@kubelinux:/microk8s/script# cat /etc/os-release | grep PRETTY
PRETTY_NAME="Ubuntu 24.04.2 LTS"
root@kubelinux:/microk8s/script#
```
I might eventually stop maintaining one of the two, but here's pstree checking the running processes: arm64 on the left, x86 on the right.
(screenshot: pstree output, arm64 on the left, x86 on the right)
It's containerd, not docker, doing the work.
```
root@kubelinux:/microk8s/script# pstree
systemd─┬─VGAuthService
├─agetty
├─bash───sleep
├─bash───cluster-agent───6*[{cluster-agent}]
├─containerd───19*[{containerd}]
├─containerd-shim─┬─pause
│ ├─runsvdir─┬─4*[runsv───calico-node───9*[{calico-node}]]
│ │ └─runsv───calico-node───16*[{calico-node}]
│ └─12*[{containerd-shim}]
├─containerd-shim─┬─pause
│ ├─supervisord─┬─nginx───6*[nginx]
│ │ └─uwsgi───4*[uwsgi]
│ └─12*[{containerd-shim}]
├─containerd-shim─┬─hostpath-provis───8*[{hostpath-provis}]
│ ├─pause
│ └─11*[{containerd-shim}]
├─containerd-shim─┬─metrics-server───11*[{metrics-server}]
│ ├─pause
│ └─12*[{containerd-shim}]
├─containerd-shim─┬─pause
│ ├─registry───7*[{registry}]
│ └─11*[{containerd-shim}]
├─containerd-shim─┬─pause
│ ├─s6-svscan─┬─s6-supervise───s6-linux-init-s
│ │ ├─2*[s6-supervise]
│ │ ├─s6-supervise───s6-ipcserverd
│ │ ├─s6-supervise───sh───cron
│ │ ├─s6-supervise───sh─┬─inotifywait
│ │ │ └─sh
│ │ └─s6-supervise───sh───nginx───nginx
│ └─11*[{containerd-shim}]
├─containerd-shim─┬─metrics-sidecar───9*[{metrics-sidecar}]
│ ├─pause
│ └─12*[{containerd-shim}]
├─containerd-shim─┬─dashboard───7*[{dashboard}]
│ ├─pause
│ └─11*[{containerd-shim}]
├─containerd-shim─┬─coredns───10*[{coredns}]
│ ├─pause
│ └─11*[{containerd-shim}]
├─containerd-shim─┬─pause
│ ├─xrdp───xrdp-sesman
│ └─12*[{containerd-shim}]
├─containerd-shim─┬─mariadbd───7*[{mariadbd}]
│ ├─pause
│ └─11*[{containerd-shim}]
├─containerd-shim─┬─kube-controller───7*[{kube-controller}]
│ ├─pause
│ └─12*[{containerd-shim}]
├─cron
├─dbus-daemon
├─k8s-dqlite───17*[{k8s-dqlite}]
├─kubelite───25*[{kubelite}]
├─multipathd───6*[{multipathd}]
├─polkitd───3*[{polkitd}]
├─rsyslogd───3*[{rsyslogd}]
├─snapd───12*[{snapd}]
├─sshd───sshd───sshd───bash───sudo───sudo───su───bash───pstree
├─systemd───(sd-pam)
├─systemd-journal
├─systemd-logind
├─systemd-network
├─systemd-resolve
├─systemd-timesyn───{systemd-timesyn}
├─systemd-udevd
├─udisksd───5*[{udisksd}]
├─unattended-upgr───{unattended-upgr}
└─vmtoolsd───3*[{vmtoolsd}]
root@kubelinux:/microk8s/script#
```
On the kubernetes dashboard you can see the configmap, pv/pvc and the rest all created.
(screenshot: the dashboard showing configmaps and pv/pvc)
After the script has run, check the node and services.
```
root@kubelinux:/microk8s/script# kubectl get nodes,services
NAME STATUS ROLES AGE VERSION
node/kubelinux Ready <none> 22h v1.33.0
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 22h
service/sv-django ClusterIP 10.152.183.103 <none> 38080/TCP 22h
service/sv-https-portal ClusterIP 10.152.183.122 <none> 30080/TCP,30443/TCP 22h
service/sv-mariadb ClusterIP 10.152.183.22 <none> 13306/TCP 22h
root@kubelinux:/microk8s/script#
```
A teraterm macro does this part, but next the persistent areas and images go in.
Copy from the mothership to the macmini, then via UTM's shared folder turn the docker image (around 9GB) from tar.gz into tar.
On the windows host there's x86 ubuntu inside vmware, so the host running microk8s gets the docker images that way.
In the same way, rancher desktop runs on the macmini, so its docker images are made usable from the arm64 kubernetes environment.
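The moving parts are just a docker save on the source side and a gunzip on the microk8s host, roughly like this (a sketch; tag names match the listing below):

```bash
# On the source machine (rancher desktop / docker): export and compress the image.
docker save save-xrdpubu:gvis-saved | gzip > save-xrdpubu.tar.gz
# On the microk8s host: back to a plain tar so microk8s.ctr can import it.
gunzip save-xrdpubu.tar.gz    # leaves save-xrdpubu.tar
```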
```
root@kubearm:/microk8s/nariDockerDat/DockerImages# ls -lh
total 11G
-rw-r--r-- 1 501 dialout 1.5G May 5 08:14 save-django.tar
-rw-r--r-- 1 501 dialout 420M May 5 08:13 save-mariadb.tar
-rw-r--r-- 1 501 dialout 8.5G May 5 08:14 save-xrdpubu.tar
root@kubearm:/microk8s/nariDockerDat/DockerImages#
```
Copying and untarring takes about 10 minutes, importing the tar into ctr about 20, so roughly 30 minutes in total.
```
nari@kubearm:~$ sudo su -
[sudo] password for nari:
root@kubearm:~# cd /data
root@kubearm:/data# rm -fR gvis-pv-ubun ; sync ; cp /microk8s/nariDockerDat/gvis-pv-ubun.tar.gz /data ; tar xzf gvis-pv-ubun.tar.gz
root@kubearm:/data# chown -R nari:nari gvis-pv-ubun ; chmod -R 777 gvis-pv-ubun ; sync
root@kubearm:/data# cd /microk8s/nariDockerDat/DockerImages
root@kubearm:/microk8s/nariDockerDat/DockerImages# microk8s.ctr images rm docker.io/library/save-django:gvis-saved
docker.io/library/save-django:gvis-saved
root@kubearm:/microk8s/nariDockerDat/DockerImages# microk8s.ctr images rm docker.io/library/save-xrdpubu:gvis-saved
docker.io/library/save-xrdpubu:gvis-saved
root@kubearm:/microk8s/nariDockerDat/DockerImages# microk8s.ctr images import save-django.tar
unpacking docker.io/library/save-django:gvis-saved (sha256:4f0c9d6737b035232342873cfda28086358fc60daa26b24076aea3c831ed9681)...done
root@kubearm:/microk8s/nariDockerDat/DockerImages# microk8s.ctr images import save-xrdpubu.tar
unpacking docker.io/library/save-xrdpubu:gvis-saved (sha256:143b709b18b89cfc48dd53b69d5196dadfa9b9e757129bb5aaeee890732a965b)...done
root@kubearm:/microk8s/nariDockerDat/DockerImages# microk8s.ctr images ls | grep save
docker.io/library/save-django:gvis-saved application/vnd.oci.image.manifest.v1+json sha256:4f0c9d6737b035232342873cfda28086358fc60daa26b24076aea3c831ed9681 1.4 GiB linux/arm64 io.cri-containerd.image=managed
docker.io/library/save-xrdpubu:gvis-saved application/vnd.oci.image.manifest.v1+json sha256:143b709b18b89cfc48dd53b69d5196dadfa9b9e757129bb5aaeee890732a965b 8.4 GiB linux/arm64 io.cri-containerd.image=managed
root@kubearm:/microk8s/nariDockerDat/DockerImages#
```
Watching in nmon, the CPU kept grinding through the image import and the database dump load.
```
┌nmon──────────────[H for help]───Hostname=kubelinux────Refresh= 2secs ───07:31.29──────────────────────────────────────────────┐
│ CPU Utilisation ─ │
│---------------------------+-------------------------------------------------+ │
│CPU User% Sys% Wait% Idle|0 |25 |50 |75 100| │
│ 1 13.6 22.5 0.6 63.3|UUUUUUsssssssssss > | │
│ 2 9.5 20.7 0.0 69.8|UUUUssssssssss > | │
│ 3 13.1 20.2 0.0 66.7|UUUUUUssssssssss > | │
│ 4 14.8 35.2 0.0 50.0|UUUUUUUsssssssssssssssss > | │
│ 5 12.9 17.1 0.0 70.0|UUUUUUssssssss > | │
│ 6 12.0 31.1 0.6 56.3|UUUUUsssssssssssssss > | │
│---------------------------+-------------------------------------------------+ │
│Avg 12.6 24.4 0.1 63.0|UUUUUUssssssssssss > | │
│---------------------------+-------------------------------------------------+ │
│ Memory and Swap ─ │
│ PageSize:4KB RAM-Memory Swap-Space High-Memory Low-Memory │
│ Total (MB) 9940.8 4096.0 - not in use - not in use │
│ Free (MB) 259.3 4095.5 │
│ Free Percent 2.6% 100.0% │
│ Linux Kernel Internal Memory (MB) │
│ Cached= 7969.3 Active= 1684.9 │
│ Buffers= 181.5 Swapcached= 0.3 Inactive = 7397.1 │
│ Dirty = 181.7 Writeback = 0.0 Mapped = 592.8 │
│ Slab = 437.2 Commit_AS = 3275.4 PageTables= 12.6 │
│ Disk I/O ──/proc/diskstats──Requested KB/s────Warning:may contains duplicates─────────────────────────────────────────────────│
│DiskName Busy Read Write |0 |25 |50 |75 100| │
│sda 3% 2977.6 704.0|RR │
│sda3 3% 2977.6 704.0|RR │
│dm-0 3% 2977.6 704.0|RR │
│Totals Read-MB/s=8.7 Writes-MB/s=2.1 Transfers/sec=279.7 │
└────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
```
A teraterm macro pours mariadb's config file, the dump scripts, and the dump data into the /data persistent area.
```
root@kubearm:/microk8s# kubectl delete -f sv-mariadb-pod.yaml
kubectl delete -f gvis-PersistentVol-mariadbconf.yaml
Error from server (NotFound): error when deleting "sv-mariadb-pod.yaml": pods "sv-mariadb" not found
root@kubearm:/microk8s# kubectl delete -f gvis-PersistentVol-mariadbconf.yaml
kubectl delete -f gvis-PersistentVol-mariadb.yaml
persistentvolume "gvis-pv-mariadbconf" deleted
persistentvolumeclaim "gvis-pv-mariadbconf-claim" deleted
root@kubearm:/microk8s# kubectl delete -f gvis-PersistentVol-mariadb.yaml
persistentvolume "gvis-pv-mariadb" deleted
persistentvolumeclaim "gvis-pv-mariadb-claim" deleted
root@kubearm:/microk8s# cd /data
root@kubearm:/data# rm -fR ./gvis-pv-mariadb ; rm -fR ./gvis-pv-mariadbconf ; sync ; sync
root@kubearm:/data# mkdir -p /data/gvis-pv-mariadbconf/nari/fullback/ ; mkdir -p gvis-pv-mariadb
root@kubearm:/data# cd /microk8s/nariDockerDat/
root@kubearm:/microk8s/nariDockerDat# cp -p /tmp/gvis.cnf /microk8s/nariDockerDat/sv_mariadb11conf/
root@kubearm:/microk8s/nariDockerDat# mv /tmp/gvis.cnf /data/gvis-pv-mariadbconf/
root@kubearm:/microk8s/nariDockerDat# cp -p ./sv_mariadb11conf/nari/fullback/2_fullRecover.sh /data/gvis-pv-mariadbconf/nari/fullback/
root@kubearm:/microk8s/nariDockerDat# cp -p ./sv_mariadb11conf/nari/fullback/4_nariDB_DjangoRecover.sh /data/gvis-pv-mariadbconf/nari/fullback/
root@kubearm:/microk8s/nariDockerDat# mv /tmp/FullBackup_nariDB_1st.sql /data/gvis-pv-mariadbconf/nari/
root@kubearm:/microk8s/nariDockerDat# mv /tmp/FullBackup_nariDB_Django.sql /data/gvis-pv-mariadbconf/nari/
root@kubearm:/microk8s/nariDockerDat# cd /data
root@kubearm:/data# chmod -R 777 gvis-pv-mariadbconf
root@kubearm:/data# chmod -R 777 gvis-pv-mariadb
root@kubearm:/data# chmod 644 /data/gvis-pv-mariadbconf/gvis.cnf
```
Recreate mariadb's pv/pvc/pod and load the dumps.
The machine isn't very powerful, so after creating the pod I wait 30 seconds for mariadb's initialization to finish before feeding the dumps in.
```
root@kubearm:/data# m8
root@kubearm:/microk8s# kubectl create -f gvis-PersistentVol-mariadbconf.yaml
persistentvolume/gvis-pv-mariadbconf created
persistentvolumeclaim/gvis-pv-mariadbconf-claim created
root@kubearm:/microk8s# kubectl create -f gvis-PersistentVol-mariadb.yaml
persistentvolume/gvis-pv-mariadb created
persistentvolumeclaim/gvis-pv-mariadb-claim created
root@kubearm:/microk8s# kubectl create -f sv-mariadb-pod.yaml
pod/sv-mariadb created
root@kubearm:/microk8s# sync ; sync ; sleep 30
root@kubearm:/microk8s# kubectl exec -it `kubectl get pod | grep mariadb | awk '{print $1}'` -- bash
root@svmariadb:/# sync ; sleep 20 ; sync
/bin/sh /etc/mysql/conf.d/nari/fullback/2_fullRecover.sh
root@svmariadb:/# /bin/sh /etc/mysql/conf.d/nari/fullback/2_fullRecover.sh
sync ; sleep 20 ; sync
root@svmariadb:/# sync ; sleep 20 ; sync
root@svmariadb:/# /bin/sh /etc/mysql/conf.d/nari/fullback/4_nariDB_DjangoRecover.sh
sync ; sleep 20 ; sync
root@svmariadb:/# sync ; sleep 20 ; sync
mariadb -unari -pXXXXXXXX
root@svmariadb:/# mariadb -unari -pXXXXXXX
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 5
Server version: 11.4.5-MariaDB-ubu2404-log mariadb.org binary distribution
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> show variables like 'max_allowed_packet' ;
+--------------------+------------+
| Variable_name | Value |
+--------------------+------------+
| max_allowed_packet | 1073741824 |
+--------------------+------------+
1 row in set (0.000 sec)
MariaDB [(none)]> show databases ;
+--------------------+
| Database |
+--------------------+
| information_schema |
| nariDB_1st |
| nariDB_Django |
+--------------------+
3 rows in set (0.001 sec)
MariaDB [(none)]> use nariDB_1st ;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
MariaDB [nariDB_1st]> select count(*) from GVIS_keihi ;
+----------+
| count(*) |
+----------+
| 12178 |
+----------+
1 row in set (0.002 sec)
MariaDB [nariDB_1st]> exit
Bye
root@svmariadb:/# exit
exit
root@kubearm:/microk8s# rm -f /data/gvis-pv-mariadbconf/nari/FullBackup_nariDB_1st.sql
root@kubearm:/microk8s# rm -f /data/gvis-pv-mariadbconf/nari/FullBackup_nariDB_Django.sql
root@kubearm:/microk8s#
```
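2_fullRecover.sh and 4_nariDB_DjangoRecover.sh aren't shown in this post. Given where the dumps were placed, each presumably just feeds its dump to the server, along these lines (a guess; gvis-pv-mariadbconf is mounted at /etc/mysql/conf.d inside the pod):

```bash
# Hypothetical reconstruction of 2_fullRecover.sh (the real script isn't shown).
mariadb -unari -pXXXXXXXX < /etc/mysql/conf.d/nari/FullBackup_nariDB_1st.sql
```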
Pour in django's persistent area too. If pip3 shows packages to update, the mothership's next regular update deals with them.
```
nari@kubearm:~$ sudo su -
[sudo] password for nari:
root@kubearm:~# rm -f /tmp/sv_django-uwsgi-nginx.tar.gz
root@kubearm:~# ps -ef |grep -v grep |grep -c scp
0
root@kubearm:~# echo SCP finish
SCP finish
root@kubearm:~# cd /microk8s/nariDockerDat ; rm -f sv_django-uwsgi-nginx.tar.gz ; mv /tmp/sv_django-uwsgi-nginx.tar.gz ./
rm -fR ./sv_django-uwsgi-nginx
root@kubearm:/microk8s/nariDockerDat# rm -fR ./sv_django-uwsgi-nginx
root@kubearm:/microk8s/nariDockerDat# m8
root@kubearm:/microk8s# kubectl delete -f sv-django-pod.yaml
kubectl delete -f gvis-PersistentVol-sv_django-uwsgi-nginx.yaml
Error from server (NotFound): error when deleting "sv-django-pod.yaml": pods "sv-django" not found
root@kubearm:/microk8s# kubectl delete -f gvis-PersistentVol-sv_django-uwsgi-nginx.yaml
persistentvolume "gvis-pv-django-uwsgi-nginx" deleted
persistentvolumeclaim "gvis-pv-django-uwsgi-nginx-claim" deleted
root@kubearm:/microk8s# cd /data
root@kubearm:/data# rm -fR ./gvis-pv-django-uwsgi-nginx ; rm -fR ./gvis-pv-django-sslcerts ; sync
root@kubearm:/data# cp -p /microk8s/nariDockerDat/sv_django-uwsgi-nginx.tar.gz ./
root@kubearm:/data# tar xzf sv_django-uwsgi-nginx.tar.gz
root@kubearm:/data# mv ./sv_django-uwsgi-nginx/app ./gvis-pv-django-uwsgi-nginx ; mkdir gvis-pv-django-sslcerts
root@kubearm:/data# /bin/sh /data/sv_django-uwsgi-nginx/kubearmCopy.sh
root@kubearm:/data# chmod 777 gvis-pv-django-uwsgi-nginx ; chmod 777 gvis-pv-django-sslcerts
root@kubearm:/data# rm -fR ./sv_django-uwsgi-nginx/
root@kubearm:/data# m8
root@kubearm:/microk8s# kubectl create -f gvis-PersistentVol-sv_django-uwsgi-nginx.yaml
persistentvolume/gvis-pv-django-uwsgi-nginx created
persistentvolumeclaim/gvis-pv-django-uwsgi-nginx-claim created
root@kubearm:/microk8s# kubectl create -f sv-django-pod.yaml
pod/sv-django created
root@kubearm:/microk8s# sleep 10
root@kubearm:/microk8s# kubectl exec -it `kubectl get pod | grep sv-django | awk '{print $1}'` -- bash
bash: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)
root@sv-django:/# pip3 list -o
Package Version Latest Type
---------- ------- ------ -----
Django 5.2 5.2.1 wheel
fonttools 4.57.0 4.58.0 wheel
matplotlib 3.10.1 3.10.3 wheel
setuptools 80.3.1 80.4.0 wheel
root@sv-django:/# exit
exit
root@kubearm:/microk8s# sc
root@kubearm:/microk8s/script# sh ./415_ReCreateHTTPSpod.sh
NAME READY STATUS RESTARTS AGE
sv-django 1/1 Running 0 14s
sv-mariadb 1/1 Running 0 34m
Error from server (NotFound): error when deleting "/microk8s/sv-https-portal-pod.yaml": pods "sv-https-portal" not found
pod/sv-https-portal created
NAME READY STATUS RESTARTS AGE
sv-django 1/1 Running 0 14s
sv-https-portal 0/1 ContainerCreating 0 0s
sv-mariadb 1/1 Running 0 34m
root@kubearm:/microk8s/script#
```
The xrdp pod gets started by hand.
```
root@kubearm:/microk8s/script# sh ./413_ReCreateXRDPpod.sh
NAME READY STATUS RESTARTS AGE
sv-django 1/1 Running 0 2m10s
sv-https-portal 1/1 Running 0 116s
sv-mariadb 1/1 Running 0 36m
Error from server (NotFound): error when deleting "/microk8s/cl-ubun-pod.yaml": pods "cl-ubun" not found
pod/cl-ubun created
NAME READY STATUS RESTARTS AGE
cl-ubun 0/1 ContainerCreating 0 1s
sv-django 1/1 Running 0 2m11s
sv-https-portal 1/1 Running 0 117s
sv-mariadb 1/1 Running 0 36m
root@kubearm:/microk8s/script#
root@kubearm:/microk8s/script# kubectl get po
NAME READY STATUS RESTARTS AGE
cl-ubun 1/1 Running 0 32s
sv-django 1/1 Running 0 2m42s
sv-https-portal 1/1 Running 0 2m28s
sv-mariadb 1/1 Running 0 37m
root@kubearm:/microk8s/script#
```
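413_ReCreateXRDPpod.sh isn't shown, but from its output it clearly just deletes and recreates the pod from its manifest (assumed shape):

```bash
# Assumed shape of 413_ReCreateXRDPpod.sh, inferred from its output above.
microk8s.kubectl get pod
microk8s.kubectl delete -f /microk8s/cl-ubun-pod.yaml
microk8s.kubectl create -f /microk8s/cl-ubun-pod.yaml
microk8s.kubectl get pod
```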
A light functional check#
With the docker images loaded and applied, a light check.
Restart the Pods and look at their state.
(screenshot: pod status after restart)
Seems to be running, so have a look from the xrdp Pod.
(screenshot: desktop session on the xrdp Pod)
speedtest gets nowhere near full speed there. From a docker container or from safari on the mac I get around 800Mbps, though...
(screenshot: speedtest results)
django's matplotlib draws the pie chart, and the database is readable and usable, so it all seems to be working.
speedtest is showing proper speed.
Now on sequoia with the M4, I've got kubernetes environments maintained on both x86 and arm64.