This ticket could even serve as a general WSL issues tracker if the maintainers would rather not have multiple separate WSL issue tickets.
Error response from daemon: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: rootfs_linux.go:76: mounting "/mnt/wsl/rancher-desktop/run/docker-mounts/73b946ab-5af2-4925-8a46-b424c8ad947f" to rootfs at "/etc/prometheus/prometheus.yml" caused: mount through procfd: not a directory: unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
I get this error when running docker compose up -d on WSL with Rancher Desktop on Windows.
So I tried commenting out the Prometheus service section in docker-compose.yml, since it doesn't seem essential.
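For reference, the mount in question is the usual single-file bind mount for the Prometheus config. This is only a hypothetical excerpt (the actual service definition in tidb-docker-compose may differ) showing what I commented out, plus a possible directory-level mount that might avoid the error:

```yaml
# Hypothetical excerpt -- the real prometheus service in docker-compose.yml may differ.
services:
  prometheus:
    image: prom/prometheus
    volumes:
      # Single-file bind mount; under Rancher Desktop's WSL mount handling this ends up
      # as a directory on the container side, triggering the "not a directory" error above.
      - ./config/prometheus.yml:/etc/prometheus/prometheus.yml:ro
      # Possible workaround instead of commenting the service out entirely:
      # mount the whole config directory and let Prometheus read the file inside it.
      # - ./config:/etc/prometheus:ro
```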
❯ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
bb625cacf856 pingcap/tispark:v2.1.1 "/opt/spark/sbin/sta…" 3 minutes ago Up 3 minutes 0.0.0.0:38081->38081/tcp, :::38081->38081/tcp tidb-docker-compose-tispark-slave0-1
a31d098b53d9 pingcap/tispark:v2.1.1 "/opt/spark/sbin/sta…" 3 minutes ago Up 3 minutes 0.0.0.0:7077->7077/tcp, :::7077->7077/tcp, 0.0.0.0:8080->8080/tcp, :::8080->8080/tcp tidb-docker-compose-tispark-master-1
d378577ed56a pingcap/tidb:latest "/tidb-server --stor…" 3 minutes ago Up 3 minutes 0.0.0.0:4000->4000/tcp, :::4000->4000/tcp, 0.0.0.0:10080->10080/tcp, :::10080->10080/tcp tidb-docker-compose-tidb-1
2e7cc0c7ff2b pingcap/tikv:latest "/tikv-server --addr…" 3 minutes ago Restarting (101) 38 seconds ago tidb-docker-compose-tikv0-1
c0cd0025b162 pingcap/tikv:latest "/tikv-server --addr…" 3 minutes ago Up 3 minutes 20160/tcp tidb-docker-compose-tikv1-1
311c9363ac04 pingcap/tikv:latest "/tikv-server --addr…" 3 minutes ago Restarting (101) 39 seconds ago tidb-docker-compose-tikv2-1
99aa0b1abbc6 pingcap/tidb-vision:latest "/bin/sh -c 'sed -i …" 3 minutes ago Up 3 minutes 80/tcp, 443/tcp, 2015/tcp, 0.0.0.0:8010->8010/tcp, :::8010->8010/tcp tidb-docker-compose-tidb-vision-1
94e073e82fec pingcap/pd:latest "/pd-server --name=p…" 3 minutes ago Up 3 minutes 2380/tcp, 0.0.0.0:49166->2379/tcp, :::49166->2379/tcp tidb-docker-compose-pd0-1
97ef79ecad60 pingcap/pd:latest "/pd-server --name=p…" 3 minutes ago Restarting (1) 40 seconds ago tidb-docker-compose-pd1-1
047fd4212062 prom/pushgateway:v0.3.1 "/bin/pushgateway --…" 3 minutes ago Up 3 minutes 9091/tcp tidb-docker-compose-pushgateway-1
1ed8c5019307 grafana/grafana:6.0.1 "/run.sh" 3 minutes ago Up 3 minutes 0.0.0.0:3000->3000/tcp, :::3000->3000/tcp tidb-docker-compose-grafana-1
c6f4959e767e pingcap/pd:latest "/pd-server --name=p…" 3 minutes ago Restarting (1) 38 seconds ago tidb-docker-compose-pd2-1
❯ docker compose ps
NAME COMMAND SERVICE STATUS PORTS
tidb-docker-compose-grafana-1 "/run.sh" grafana running 0.0.0.0:3000->3000/tcp, :::3000->3000/tcp
tidb-docker-compose-pd0-1 "/pd-server --name=p…" pd0 running 0.0.0.0:49166->2379/tcp, :::49166->2379/tcp
tidb-docker-compose-pd1-1 "/pd-server --name=p…" pd1 restarting
tidb-docker-compose-pd2-1 "/pd-server --name=p…" pd2 restarting
tidb-docker-compose-pushgateway-1 "/bin/pushgateway --…" pushgateway running 9091/tcp
tidb-docker-compose-tidb-1 "/tidb-server --stor…" tidb running 0.0.0.0:4000->4000/tcp, 0.0.0.0:10080->10080/tcp, :::4000->4000/tcp, :::10080->10080/tcp
tidb-docker-compose-tidb-vision-1 "/bin/sh -c 'sed -i …" tidb-vision running 0.0.0.0:8010->8010/tcp, :::8010->8010/tcp
tidb-docker-compose-tikv0-1 "/tikv-server --addr…" tikv0 restarting
tidb-docker-compose-tikv1-1 "/tikv-server --addr…" tikv1 running 20160/tcp
tidb-docker-compose-tikv2-1 "/tikv-server --addr…" tikv2 restarting
tidb-docker-compose-tispark-master-1 "/opt/spark/sbin/sta…" tispark-master running 0.0.0.0:7077->7077/tcp, 0.0.0.0:8080->8080/tcp, :::7077->7077/tcp, :::8080->8080/tcp
tidb-docker-compose-tispark-slave0-1 "/opt/spark/sbin/sta…" tispark-slave0 running 0.0.0.0:38081->38081/tcp, :::38081->38081/tcp
As you can see above, the next issue is that the following containers get stuck in restart loops:
tidb-docker-compose-pd1-1
tidb-docker-compose-pd2-1
tidb-docker-compose-tikv0-1
tidb-docker-compose-tikv2-1
They stay in this 'Restarting' state for exactly 60 seconds, switch to the 'Up' state for a moment, then go back to 'Restarting', and the cycle repeats.
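In case it helps with triage, the exit codes behind the 'Restarting (101)' / 'Restarting (1)' states and the restart policy can be inspected with something like:

```bash
# Last log lines before the most recent crash of one of the looping containers.
docker logs --tail 50 tidb-docker-compose-tikv0-1

# Exit code and accumulated restart count for the same container.
docker inspect --format '{{.State.ExitCode}} {{.RestartCount}}' tidb-docker-compose-tikv0-1

# Restart policy that keeps putting it back into the 'Restarting' state.
docker inspect --format '{{.HostConfig.RestartPolicy.Name}}' tidb-docker-compose-tikv0-1
```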
I assume this is related to the third issue:
❯ mysql -h 127.0.0.1 -P 4000 -u root
ERROR 2013 (HY000): Lost connection to server at 'handshake: reading initial communication packet', system error: 11
I'm not sure which of these are expected behaviors.
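My guess (unconfirmed) is that tidb-server itself is up but cannot complete the handshake because its TiKV/PD backends are unhealthy. The HTTP endpoints mapped in the output above can be used to check that, roughly like this:

```bash
# tidb-server's HTTP status port (mapped to 10080 above) responds even when the
# MySQL handshake fails, so it separates "tidb-server down" from "backends not ready".
curl -s http://127.0.0.1:10080/status

# Store health as PD sees it (pd0's client port 2379 is mapped to 49166 above).
curl -s http://127.0.0.1:49166/pd/api/v1/stores
```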
Update:
I had been running docker compose up inside a WSL distribution using Rancher Desktop, which has a mount point issue (rancher-sandbox/rancher-desktop#2231).
That accounts for part of the issues.
Running it directly from a Windows terminal fixes some of the issues, but:
Current status:
- All 3 tikv containers are still stuck in a restart loop with the following error (a possible workaround is sketched after this list):
[2022/05/28 07:48:38.004 +00:00] [FATAL] [lib.rs:465] ["Failed to reserve space for recovery: Operation not supported (os error 95)."] [backtrace=" 0: tikv_util::set_panic_hook::{{closure}}\n at home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tikv/components/tikv_util/src/lib.rs:464:18\n 1: std::panicking::rust_panic_with_hook\n at rustc/2faabf579323f5252329264cc53ba9ff803429a3/library/std/src/panicking.rs:626:17\n 2: std::panicking::begin_panic_handler::{{closure}}\n at rustc/2faabf579323f5252329264cc53ba9ff803429a3/library/std/src/panicking.rs:519:13\n 3: std::sys_common::backtrace::__rust_end_short_backtrace\n at rustc/2faabf579323f5252329264cc53ba9ff803429a3/library/std/src/sys_common/backtrace.rs:141:18\n 4: rust_begin_unwind\n at rustc/2faabf579323f5252329264cc53ba9ff803429a3/library/std/src/panicking.rs:515:5\n 5: std::panicking::begin_panic_fmt\n at rustc/2faabf579323f5252329264cc53ba9ff803429a3/library/std/src/panicking.rs:457:5\n 6: server::server::TiKVServer<ER>::init_fs::{{closure}}\n at home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tikv/components/server/src/server.rs:445:26\n 7: core::result::Result<T,E>::map_err\n at rustc/2faabf579323f5252329264cc53ba9ff803429a3/library/core/src/result.rs:835:27\n server::server::TiKVServer<ER>::init_fs\n at home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tikv/components/server/src/server.rs:441:13\n server::server::run_tikv\n at home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tikv/components/server/src/server.rs:155:9\n 8: tikv_server::main\n at home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tikv/cmd/tikv-server/src/main.rs:190:5\n 9: core::ops::function::FnOnce::call_once\n at rustc/2faabf579323f5252329264cc53ba9ff803429a3/library/core/src/ops/function.rs:227:5\n std::sys_common::backtrace::__rust_begin_short_backtrace\n at rustc/2faabf579323f5252329264cc53ba9ff803429a3/library/std/src/sys_common/backtrace.rs:125:18\n 10: main\n 11: __libc_start_main\n 12: <unknown>\n"] [location=components/server/src/server.rs:445] [thread_name=main]
- Connecting with the mysql client still fails:
❯ mysql -h 127.0.0.1 -P 4000 -u root
ERROR 2013 (HY000): Lost connection to server at 'handshake: reading initial communication packet', system error: 11
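The fatal TiKV error above comes from TiKV pre-allocating a placeholder file for recovery, which apparently isn't supported on the Windows-backed mount (os error 95 = operation not supported). A possible workaround, assuming the compose setup mounts a tikv.toml into the tikv containers and the image's TiKV version supports the option, would be to disable that reservation, or to move the data volume onto the native Linux filesystem:

```toml
# Hypothetical snippet for the tikv.toml mounted into the tikv containers
# (exact file name and path in tidb-docker-compose may differ).
[storage]
# The reservation uses fallocate, which fails with "Operation not supported (os error 95)"
# on Windows-backed mounts; "0MB" disables the placeholder file entirely.
reserve-space = "0MB"
```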