
What operating systems are supported?

Today, gVisor requires Linux.

What CPU architectures are supported?

gVisor currently supports x86_64/AMD64 compatible processors. Preliminary support is also available for ARM64.

Do I need to modify my Linux application to use gVisor?

No. gVisor is capable of running unmodified Linux binaries.

What binary formats does gVisor support?

gVisor supports Linux ELF binaries.

Binaries run in gVisor should be built for the AMD64 or AArch64 CPU architectures.

Can I run Docker images using gVisor?

Yes. Please see the Docker Quick Start.

Can I run Kubernetes pods using gVisor?

Yes. Please see the Kubernetes Quick Start.

How do I integrate gVisor in my existing production stack?

See the Production guide.

What’s the security model?

See the Security Model.


My container runs fine with runc but fails with runsc

If you’re having problems running a container with runsc it’s most likely due to a compatibility issue or a missing feature in gVisor. See Debugging.

When I run my container, docker fails with: open /run/containerd/.../<containerid>/log.json: no such file or directory

You are using an older version of Linux that doesn’t support the memfd_create(2) system call.

This is tracked in bug #268.
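memfd_create(2) first appeared in Linux 3.17, so a quick sanity check is to print the running kernel version (this threshold refers to the syscall’s introduction, not to gVisor’s overall minimum kernel requirement):

```shell
# Prints the running kernel release; a version older than 3.17 lacks memfd_create(2).
uname -r
```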

When I run my container, docker fails with: flag provided but not defined: -console

You’re using an old version of Docker. See Docker Quick Start.

I can’t see a file copied with: docker cp

For performance reasons, gVisor caches directory contents, and therefore it may not realize a new file was copied to a given directory. To invalidate the cache and force a refresh, create a file under the directory in question and list the contents again.

As a workaround, a shared root filesystem can be enabled. See Filesystem.

This bug is tracked in bug #4.

Note that kubectl cp works because it does the copy by exec’ing inside the sandbox, and thus gVisor’s internal cache is made aware of the new files and directories.
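The cache-invalidation workaround above can be sketched as follows; the container name and destination path are hypothetical, so adjust them to your setup:

```shell
# "mycontainer" and /tmp are hypothetical; substitute your container and path.
docker cp ./data.txt mycontainer:/tmp/data.txt
# The copied file may not be visible inside the sandbox yet. Creating a file in
# the same directory invalidates gVisor's directory cache, after which a
# listing shows the copied file:
docker exec mycontainer sh -c 'touch /tmp/.refresh && ls /tmp'
```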

I’m getting an error like: panic: unable to attach: operation not permitted or fork/exec /proc/self/exe: invalid argument: unknown

Make sure that the permissions on the runsc binary are correct:

sudo chmod a+rx /usr/local/bin/runsc

If your kernel is configured with the YAMA LSM (see https://www.kernel.org/doc/Documentation/security/Yama.txt and https://man7.org/linux/man-pages/man2/ptrace.2.html), gVisor may fail with this error in certain modes (i.e., systrap and/or directfs) when /proc/sys/kernel/yama/ptrace_scope is set to 2. If this is the case, try setting /proc/sys/kernel/yama/ptrace_scope to 1 or lower:

sudo cat /proc/sys/kernel/yama/ptrace_scope
sudo bash -c 'echo 1 > /proc/sys/kernel/yama/ptrace_scope'
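Writing to /proc/sys directly does not survive a reboot. To make the setting persistent, you can place it in a sysctl configuration file; the filename below is an assumption (any .conf file under /etc/sysctl.d/ works):

```
# /etc/sysctl.d/10-ptrace.conf (assumed filename)
# Applied at boot; apply immediately with: sudo sysctl --system
kernel.yama.ptrace_scope = 1
```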

I’m getting an error like mount submount "/etc/hostname": creating mount with source ".../hostname": input/output error: unknown.

There is a bug in Linux kernel versions 5.1 to 5.3.15, 5.4.2, and 5.5. Upgrade to a newer kernel or add the following to /lib/systemd/system/containerd.service as a workaround.


And run systemctl daemon-reload && systemctl restart containerd to restart containerd.

See issue #1765 for more details.

I’m getting an error like RuntimeHandler "runsc" not supported

This error indicates that the Kubernetes CRI runtime was not set up to handle runsc as a runtime handler. Please ensure that containerd configuration has been created properly and containerd has been restarted. See the containerd quick start for more details.
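For reference, a minimal runtime-handler entry in /etc/containerd/config.toml looks roughly like the sketch below. The section path follows the containerd 1.x CRI plugin conventions; verify the exact plugin name and shim binary against your containerd version and the containerd quick start:

```toml
# Registers "runsc" as a runtime handler for the CRI plugin, dispatched to the
# containerd-shim-runsc-v1 binary on the PATH.
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runsc]
  runtime_type = "io.containerd.runsc.v1"
```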

If you have ensured that containerd is set up properly and you used kubeadm to create your cluster, check whether Docker is also installed on that system. Kubeadm prefers using Docker if both Docker and containerd are installed.

Please recreate your cluster and set the --cri-socket option on kubeadm commands. For example:

kubeadm init --cri-socket=/var/run/containerd/containerd.sock ...

To fix an existing cluster, edit the /var/lib/kubelet/kubeadm-flags.env file: set the --container-runtime flag to remote and set the --container-runtime-endpoint flag to point to the containerd socket (e.g., /var/run/containerd/containerd.sock).
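The resulting line in /var/lib/kubelet/kubeadm-flags.env would look roughly like the sketch below; exact flags vary by Kubernetes version (in particular, the --container-runtime flag was removed in newer releases):

```
KUBELET_KUBEADM_ARGS="--container-runtime=remote --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock"
```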

My container cannot resolve another container’s name when using Docker user defined bridge

This is normally indicated by errors like bad address 'container-name' when trying to communicate to another container in the same network.

Docker user defined bridge uses an embedded DNS server bound to the loopback interface on address 127.0.0.11. Resolving names therefore requires access to the host network in order to reach the DNS server. The runsc network is isolated from the host and cannot access the DNS server on the host network without breaking the sandbox isolation. There are a few different workarounds you can try:

  • Use default bridge network with --link to connect containers. Default bridge doesn’t use embedded DNS.
  • Use the --network=host option in runsc; beware, however, that it uses the host network stack and is less secure.
  • Use IPs instead of container names.
  • Use Kubernetes. Container name lookup works fine in Kubernetes.
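As an example of the first workaround, containers connected over the default bridge with --link resolve each other through /etc/hosts entries rather than the embedded DNS server; the container names and images below are hypothetical:

```shell
# Hypothetical names/images; both containers run on the default bridge.
docker run -d --runtime=runsc --name backend nginx
# --link adds an /etc/hosts entry for "backend" instead of relying on the
# embedded DNS server, so name resolution works inside the sandbox.
docker run --rm --runtime=runsc --link backend:backend alpine ping -c 1 backend
```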

I’m getting an error like dial unix /run/containerd/s/09e4...8cff: connect: connection refused: unknown

This error may happen when using gvisor-containerd-shim with a containerd that does not contain the fix for CVE-2020-15257. To resolve the issue, update containerd to 1.3.9 or 1.4.3 (or newer, respectively).