For example, I have these commands:
lxc launch ubuntu:18.04 memsql1
lxc exec memsql1 -- wget -O - 'https://release.memsql.com/release-aug2018.gpg' 2>/dev/null | sudo apt-key add - && apt-key list
lxc exec memsql1 apt-cache policy apt-transport-https
lxc exec memsql1 -- apt -y install apt-transport-https
lxc exec memsql1 -- echo "deb [arch=amd64] https://release.memsql.com/production/debian memsql main" | sudo tee /etc/apt/sources.list.d/memsql.list
The commands above (2nd and last lines) pipe into the host instead of the container. I know that I can use lxc exec memsql1 bash and then run the commands inside that shell, but I want to make a script out of these commands.
|
Never mind, found it:
lxc exec memsql1 -- bash -c "the command with | pipe"
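The same rule applies to every line of the script: quote the whole pipeline so it becomes a single argument to the container's shell instead of being split by the host's. Here is a local demonstration of the quoting (no container needed), with the rewritten lines from the question shown as comments since they need a running container:

```shell
# Quoted, the whole pipeline is one argument; the inner bash runs the pipe.
out=$(bash -c "printf 'deb memsql main\n' | tr a-z A-Z")
echo "$out"   # -> DEB MEMSQL MAIN

# Applied to the two problematic lines from the question:
# lxc exec memsql1 -- bash -c \
#   "wget -O - 'https://release.memsql.com/release-aug2018.gpg' 2>/dev/null | apt-key add -"
# lxc exec memsql1 -- bash -c \
#   'echo "deb [arch=amd64] https://release.memsql.com/production/debian memsql main" > /etc/apt/sources.list.d/memsql.list'
```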
| How to run lxc exec for commands with pipe? |
I have 2 NICs: eth0 & eth1 on a host OS which runs some VMs (LXC) under it.
The host's eth0 is connected to a private network and configured as the primary interface.
The host's eth1 is connected to the DMZ.
Each VM has a static IP that bridges to eth0 on the host.
Some VMs, which need a public IP, have a 2nd virtual NIC bridged to the host's eth1.
Question:
Can I remove the IP address on the host's eth1 NIC? The host has absolutely no need for a public IP address; I only need to bridge the interface to selected guest VMs. It's the guest VM which will host a service on whatever public IP it's assigned.
Cursory attempts at removing the IP on the host's eth1 have generated errors.
|
It should work, as long as the interface is still up:
ifconfig eth0 | grep UP
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
A bridge is a "switch", and it doesn't need to have an IP. But please check whether you have firewall rules on eth0:
iptables -L -vnx
if rp_filter is off
cat /proc/sys/net/ipv4/conf/eth1/rp_filter
0
and if all else fails, try enabling STP:
brctl stp br0 on
and enable promiscuous mode on that interface:
ifconfig eth0 promisc
(ifconfig eth0 -promisc to remove it)
Of course, also check that no other configuration is still using the removed IP :)
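With the iproute2 tools, dropping the host's address while keeping the port usable for bridging looks roughly like this (a sketch, run as root; assumes eth1 is enslaved to a bridge br0 as in the question):

```shell
ip addr flush dev eth1   # remove the public IP from the host side
ip link set eth1 up      # keep the interface up so it still forwards frames
ip addr show dev eth1    # should now list no IPv4 address
```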
| Can I have a bridged interface without an IP? |
I have a LXC container on my Debian system. I want to setup a public Git server on it so that it's accessible to other people. How can I do this?
UPDATE #1
Link to apache2.conf: http://pastebin.com/Nvh4SsSH.
|
Give this howto a look. It's a little dated but should have the general steps you need to set up a Git server. The howto is titled: How To Install A Public Git Repository On A Debian Server.
General steps
Install git + gitweb
$ sudo apt-get install git-core gitweb
Setup gitweb directories
$ sudo mkdir /var/www/git
$ [ -d "/var/cache/git" ] || sudo mkdir /var/cache/git
Setup gitweb's Apache config
$ sudo vim /etc/apache2/conf.d/git
contents of file:
<Directory /var/www/git>
Allow from all
AllowOverride all
Order allow,deny
Options ExecCGI
<Files gitweb.cgi>
SetHandler cgi-script
</Files>
</Directory>
DirectoryIndex gitweb.cgi
SetEnv GITWEB_CONFIG /etc/gitweb.conf
Copy gitweb files to Apache
$ sudo mv /usr/share/gitweb/* /var/www/git
$ sudo mv /usr/lib/cgi-bin/gitweb.cgi /var/www/git
Setup gitweb.conf
$ sudo vim /etc/gitweb.conf
Contents of gitweb.conf:
$projectroot = '/var/cache/git/';
$git_temp = "/tmp";
#$home_link = $my_uri || "/";
$home_text = "indextext.html";
$projects_list = $projectroot;
$stylesheet = "/git/gitweb.css";
$logo = "/git/git-logo.png";
$favicon = "/git/git-favicon.png";
Reload/Restart Apache
$ sudo /etc/init.d/apache2 reload
Setup Git Repository
$ mkdir -p /var/cache/git/project.git && cd /var/cache/git/project.git
$ git init
Configure Repository
$ echo "Short project's description" > .git/description
$ git config --global user.name "Your Name"
$ git config --global user.email "[email protected]"
$ git commit -a
$ cd /var/cache/git/project.git && touch .git/git-daemon-export-ok
Start Git Daemon
$ git daemon --base-path=/var/cache/git --detach --syslog --export-all
Test clone the Repository (from a secondary machine)
$ git clone git://server/project.git project
Adding additional Repos + Users
To add more repos simply repeat steps #7 - #9. To add users just create Unix accounts for each additional user.
| How to setup Git server on Linux Container in Debian |
According to the Ubuntu LXC documentation, the following statement can be found at the time of this writing:
A NIC can only exist in one namespace at a time, so a physical NIC passed into the container is not usable on the host.
Now one can have a single physical network card (NIC) share several IPs like this in /etc/network/interfaces (Debian/Ubuntu):
auto eth0 eth0:1
iface eth0 inet static
address 192.168.0.100/24
gateway 192.168.0.1
iface eth0:1 inet static
address 192.168.0.200
netmask 255.255.255.0
The same can be done with the respective configuration on other distros as well.
Now the question: can eth0 and eth0:1 be assigned to different namespaces, or will assigning either one automatically limit the other to the same namespace?
|
It should be possible to assign eth0 and eth0:1 to different namespaces, but keep in mind there are security implications, because you are exposing a physical network device to your container.
Because of that, I would just use veth and a bridge. Create a bridge br0 and attach the eth0 device to it by default. Then configure your LXC container like this:
lxc.network.type=veth
lxc.network.ipv4=192.168.0.200
lxc.network.link=br0
This will have the same result, but the container will use a virtual Ethernet interface, and thanks to the bridge it will also be able to reach the same network that your LXC host is on.
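For reference, the bridge itself can be created like this before starting the container (iproute2 syntax, run as root; the 192.168.0.100/24 address is the one from the question's interfaces file):

```shell
ip link add name br0 type bridge
ip link set dev eth0 master br0
ip link set dev br0 up
ip addr add 192.168.0.100/24 dev br0   # the host's address moves from eth0 to the bridge
```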
| LXC container to use "virtual" interface from host (namespace semantics) |
This question is similar to my question about how to list namespaces. In addition, I'd like to know how to move processes from one namespace to another. E.g. I have the processes of the current session in one namespace and the processes of an LXC container in a different namespace. I want to run a program (e.g. links) in the cgroup of that container (easily done with cgexec) and then move it into the container's namespace, because I have to run this process in the container without executing it directly inside it. Can this be done in Linux, or is it impossible?
|
You don't need to run the process in particular control groups if you are already in the right namespace; instead, you manipulate the namespaces themselves. New processes started in a namespace «inherit» all control groups associated with it.
Moving a process between namespaces can be done with the setns() system call, or you can use the nsenter command from util-linux to enter the target namespaces and then run new tasks there. All you need to know is the PID of a process that is already in the namespace you want to enter; then (in case you want to run links):
# nsenter --target <pid> --mount --uts --ipc --net --pid links
This is a bit of a cheat: you are not moving an existing process, you are entering the namespace and starting a new one. But it lets you enter a given namespace and then fork a process there that runs alongside the processes already in it.
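Put together for an LXC container, it could look like this (a sketch; mycontainer is a placeholder name, and the PID is parsed from lxc-info's "PID: <n>" output):

```shell
# Find the container's init PID, then enter its namespaces and run links.
PID=$(lxc-info -n mycontainer -p | awk '{print $2}')
nsenter --target "$PID" --mount --uts --ipc --net --pid links
```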
| How to move process from one namespace to other? |
I successfully created an archlinux container on an archlinux host with lxc. However, whenever I start a container via
lxc-start -n GUESTNAME
the keyboard layout changes to the default US layout on the host and in the container. But I want it to be de-latin1. What is surprising is that this keeps happening despite the fact that in
/etc/vconsole.conf
on the host and in the container I have set the options
KEYMAP=de-latin1
The cause of this problem seems to be that the systemd service responsible for setting the vconsole options is not running inside the container:
systemctl status systemd-vconsole-setup
● systemd-vconsole-setup.service - Setup Virtual Console
Loaded: loaded (/usr/lib/systemd/system/systemd-vconsole-setup.service; static)
Active: inactive (dead)
start condition failed at Mon 2014-06-02 20:53:10 UTC; 27s ago
ConditionPathExists=/dev/tty0 was not met
Docs: man:systemd-vconsole-setup.service(8)
man:vconsole.conf(5)
Somehow it states that
/dev/tty0 was not met
but I am unsure what it is trying to tell me. The Arch Linux wiki page on Linux Containers (https://wiki.archlinux.org/index.php/Linux_Containers#Terminal_settings) is not helping me. Can someone please explain the error and how to solve it?
UPDATE:
(1) The keyboard layout does not change when I start the container directly from the console (e.g. by starting tmux tmux new -s stoic then running sudo lxc-start -n stoic and then detaching from the tmux session via CTRL-a-d) i.e. before logging into X.
This points to another possible explanation: When I log into X the keyboard layout gets set by my .xinitrc which has the content:
setxkbmap -model pc105 -layout de -variant ,qwertz -option lv3:caps_switch
if [ -s ~/.Xmodmap ]; then
xmodmap ~/.Xmodmap
fi
[[ -f ~/.Xresources ]] && xrdb -merge ~/.Xresources
If I then run a container via sudo lxc-start -n stoic within X, it will also boot to the console and not to X (that's the way I set up all my systems). Hence it resets the keyboard layout, I guess. But this wouldn't be that big a problem if the container would at least respect the aforementioned /etc/vconsole.conf.
(2) I use a privileged container.
(3) Here is my current config-file:
lxc.utsname=stoic
lxc.autodev=1
lxc.tty=1
lxc.pts=1024
lxc.mount=/var/lib/lxc/stoic/fstab
lxc.cap.drop=sys_module mac_admin mac_override sys_time
lxc.kmsg=0
lxc.stopsignal=SIGRTMIN+4
#networking
lxc.network.type=veth
lxc.network.link=br0
lxc.network.flags=up
lxc.network.name=eth0
lxc.network.mtu=1500
#cgroups
lxc.cgroup.devices.deny = a
lxc.cgroup.devices.allow = c *:* m
lxc.cgroup.devices.allow = b *:* m
lxc.cgroup.devices.allow = c 1:3 rwm
lxc.cgroup.devices.allow = c 1:5 rwm
lxc.cgroup.devices.allow = c 1:7 rwm
lxc.cgroup.devices.allow = c 1:8 rwm
lxc.cgroup.devices.allow = c 1:9 rwm
lxc.cgroup.devices.allow = c 4:1 rwm
lxc.cgroup.devices.allow = c 5:0 rwm
lxc.cgroup.devices.allow = c 5:1 rwm
lxc.cgroup.devices.allow = c 5:2 rwm
lxc.cgroup.devices.allow = c 136:* rwm
lxc.rootfs = /var/lib/lxc/stoic/rootfs
lxc.pts = 1024
|
This has been fixed in releases of LXC >= 1.1 with the introduction of the FUSE filesystem lxcfs. Simply install lxcfs and set up LXC to use it.
| Host system keyboard layout changes when starting LXC container |
I'm trying to setup a NFS server on an Alpine Linux LXC running on Proxmox by following the instructions as outlined here, but rpc.statd refuses to start. Here's an excerpt from /var/log/messages showing the error:
Nov 26 03:08:25 nfs daemon.notice rpc.statd[226]: Version 2.1.1 starting
Nov 26 03:08:25 nfs daemon.warn rpc.statd[226]: Flags: TI-RPC
Nov 26 03:08:25 nfs daemon.err rpc.statd[226]: Unable to prune capability 0 from bounding set: Operation not permitted
Nov 26 03:08:25 nfs daemon.err /etc/init.d/rpc.statd[224]: start-stop-daemon: failed to start `/sbin/rpc.statd'
Nov 26 03:08:25 nfs daemon.err /etc/init.d/rpc.statd[210]: ERROR: rpc.statd failed to start
Nov 26 03:08:25 nfs daemon.err /etc/init.d/nfs[228]: ERROR: cannot start nfs as rpc.statd would not start
I've created a custom apparmor profile for the LXC (found here) to give the service enough permissions to run but that hasn't helped.
|
It turns out I needed the CAP_SETPCAP capability to run the NFS server.
This can be done by editing the container's configuration file in /etc/pve/lxc/CTID.conf (where CTID is your container ID) as follows:
....
# clear cap.drop
lxc.cap.drop:
# copy drop list from /usr/share/lxc/config/common.conf
lxc.cap.drop = mac_admin mac_override sys_time sys_module sys_rawio
# copy drop list from /usr/share/lxc/config/alpine.common.conf with setpcap commented
lxc.cap.drop = audit_write
lxc.cap.drop = ipc_owner
lxc.cap.drop = mknod
# lxc.cap.drop = setpcap
lxc.cap.drop = sys_nice
lxc.cap.drop = sys_pacct
lxc.cap.drop = sys_ptrace
lxc.cap.drop = sys_rawio
lxc.cap.drop = sys_resource
lxc.cap.drop = sys_tty_config
lxc.cap.drop = syslog
lxc.cap.drop = wake_alarm
And voila!
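After restarting the container, you can confirm from inside it that setpcap is back in the bounding set (a hedged check; capsh ships with the libcap tools and may need to be installed first):

```shell
capsh --print | grep -i setpcap   # should list cap_setpcap in the bounding set
```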
| Unable to start NFS server on Alpine Linux LXC |
I've made a new installation of Debian 11. For LXC, I copied the working setup from my Debian 10 computer. I use a separate user, lxcuser, which I su to in order to run lxc-start.
The configuration, ~/.config/lxc/default.conf
lxc.idmap = u 0 165536 65536
lxc.idmap = g 0 165536 65536
lxc.apparmor.profile = unconfined
lxc.mount.auto = proc:mixed sys:ro cgroup:mixed
lxc.net.0.type = veth
lxc.net.0.link = lxcbr0
lxc.net.0.flags = up
lxc.net.0.hwaddr = 00:FF:xx:xx:xx:xx
#lxc.include = /etc/lxc/default.conf
File system permissions are set using ACLs, as I did on my previous setup.
lxc-checkconfig
LXC version 4.0.6
Kernel configuration not found at /proc/config.gz; searching...
Kernel configuration found at /boot/config-5.10.0-7-amd64
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
Network namespace: enabled
--- Control groups ---
Cgroups: enabled
Cgroup v1 mount points:
Cgroup v2 mount points:
/sys/fs/cgroup
Cgroup v1 systemd controller: missing
Cgroup v1 freezer controller: missing
Cgroup namespace: required
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled
--- Misc ---
Veth pair device: enabled, not loaded
Macvlan: enabled, not loaded
Vlan: enabled, not loaded
Bridges: enabled, loaded
Advanced netfilter: enabled, loaded
CONFIG_NF_NAT_IPV4: missing
CONFIG_NF_NAT_IPV6: missing
CONFIG_IP_NF_TARGET_MASQUERADE: enabled, not loaded
CONFIG_IP6_NF_TARGET_MASQUERADE: enabled, not loaded
CONFIG_NETFILTER_XT_TARGET_CHECKSUM: enabled, not loaded
CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled, not loaded
FUSE (for use with lxcfs): enabled, loaded
--- Checkpoint/Restore ---
checkpoint restore: enabled
CONFIG_FHANDLE: enabled
CONFIG_EVENTFD: enabled
CONFIG_EPOLL: enabled
CONFIG_UNIX_DIAG: enabled
CONFIG_INET_DIAG: enabled
CONFIG_PACKET_DIAG: enabled
CONFIG_NETLINK_DIAG: enabled
File capabilities:
Note : Before booting a new kernel, you can check its configuration
usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig
After running with debugging option, I think I've pinned down the error on these lines:
DEBUG cgfsng - cgroups/cgfsng.c:cgfsng_monitor_create:1355 - Failed to create cgroup "(null)"
WARN cgfsng - cgroups/cgfsng.c:mkdir_eexist_on_last:1152 - Permission denied - Failed to create directory "/sys/fs/cgroup/user.slice/user-1000.slice/session-1.scope/lxc.monitor.arch"
Changing permissions on the /sys/fs/cgroup/user.slice/user-1000.slice/session-1.scope directory has no effect; even with sudo I cannot write there.
I believe the issue has arisen due to cgroup v2, which is enabled by default on Debian 11. I tried various workarounds I found on the net; nothing has worked so far.
Any ideas? Either how to make unprivileged LXC work with cgroup v2, or the proper way to disable cgroup v2 and enable cgroup v1 on Debian 11 (or imitate Debian 10's cgroup setup). Other solutions welcome, of course!
Some links:
Same issue, unanswered
My blog post on how I set up unprivileged LXC on Debian 10 (the setup I copied)
Update: adding systemd.unified_cgroup_hierarchy=false systemd.legacy_systemd_cgroup_controller=false to the kernel parameters helped to start containers. But I still get this error from inside the container:
Arch Linux:
Welcome to Arch Linux!
Failed to create /init.scope control group: Permission denied
Failed to allocate manager object: Permission denied
[!!!!!!] Failed to allocate manager object.
Exiting PID 1...
Centos 8:
Welcome to CentOS Linux 8!
Failed to read AF_UNIX datagram queue length, ignoring: No such file or directory
Failed to install release agent, ignoring: No such file or directory
Failed to create /init.scope control group: Permission denied
Failed to allocate manager object: Permission denied
[!!!!!!] Failed to allocate manager object, freezing.
Freezing execution.
|
The very last version of the Debian bullseye LXC package (1:4.0.6-2, from Fri, 11 Jun 2021) warns, somewhat late, about changes in starting unprivileged containers on Debian 11 using cgroup v2 and LXC 4.x:
lxc (1:4.0.6-2) unstable; urgency=medium
A new way of handling unprivileged containers starting and attachment has
been made available through the lxc-unpriv-start and lxc-unpriv-attach
commands. See /usr/share/doc/lxc/README.Debian.gz for more details.
-- Pierre-Elliott Bécue [email protected] Fri, 11 Jun 2021 15:12:15 +0200
First parts in the README appear to have already been addressed by OP. The relevant part for OP's issue is at 7) Starting containers:
Starting containers
Under the unified groups hierarchy (default in systemd starting with
Debian 11/bullseye), a non-root user needs lxc-start to have some
additional privileges to start container as a non-root user. The
easiest way to do that is via systemd. You can either start the
container via a user defined service that sets Delegate=true
property, or do it explicitly with systemd-run:
$ systemd-run --scope --quiet --user --property=Delegate=yes \
lxc-start -n mycontainer
or, lastly, you can use the helper script Debian made available:
lxc-unpriv-start. It'll care about using the systemd-run command
properly and also to make sure the required environment variables are
set properly.
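The "user defined service that sets Delegate=true property" variant could be sketched like this (the unit name and mycontainer are placeholders, not from the README):

```ini
# ~/.config/systemd/user/lxc-mycontainer.service
[Unit]
Description=Unprivileged LXC container mycontainer

[Service]
Type=oneshot
RemainAfterExit=yes
Delegate=yes
ExecStart=/usr/bin/lxc-start -n mycontainer
ExecStop=/usr/bin/lxc-stop -n mycontainer

[Install]
WantedBy=default.target
```

It would then be started with systemctl --user start lxc-mycontainer.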
The part "3) Permissions checking" is also worth mentioning (with the right value(s) to adapt):
$ setfacl --modify user:100000:x . .local .local/share
Examples with systemd or with Debian's wrapper:
$ lxc-create -n busybox-amd64 -t busybox
$ lxc-start -n busybox-amd64
lxc-start: busybox-amd64: lxccontainer.c: wait_on_daemonized_start: 859 Received container state "ABORTING" instead of "RUNNING"
lxc-start: busybox-amd64: tools/lxc_start.c: main: 308 The container failed to start
lxc-start: busybox-amd64: tools/lxc_start.c: main: 311 To get more details, run the container in foreground mode
lxc-start: busybox-amd64: tools/lxc_start.c: main: 313 Additional information can be obtained by setting the --logfile and --logpriority options
$ systemd-run --scope --quiet --user --property=Delegate=yes lxc-start -n busybox-amd64
$ lxc-ls --active
busybox-amd64
$ lxc-stop -n busybox-amd64
$ lxc-unpriv-start -n busybox-amd64
Running scope as unit: run-r1c8a4b4fd0294f688f9f63069414fbf0.scope
$ lxc-ls --active
busybox-amd64
This information was previously buried in a few bug reports and places that were a bit difficult to piece together:
CGroup V2 controllers are unusable by an unprivileged container on Linux booted with unified_cgroup_hierarchy #3206
sequence of manipulations to allow unprivileged container's UID 0 to enable controllers
Delegation
Note:
Of course this successfully starts real OSes (Debian, CentOS ...) the same.
As a side note, and unrelated to this Q/A: today (2021-06-26), using the download template, it appears hkp://pool.sks-keyservers.net is out of service. To create a container, I first had to override the default keyserver URL used in /usr/share/lxc/templates/lxc-download:
$ export DOWNLOAD_KEYSERVER=hkp://keys.openpgp.org
$ lxc-create -n centos8-amd64 -t download -- -d centos -r 8 -a amd64
[...]
You just created a Centos 8 x86_64 (20210626_07:08) container.
| Cannot start unprivileged LXC containers on Debian 11 Bullseye |
Trying to start a Linux container, I get the following:
lxc-start: No cgroup mounted on the system
OS is Debian 7.
|
LXC (or other uses of the cgroups facility) requires the cgroups filesystem to be mounted (see §2.1 in the cgroups kernel documentation). It seems that as of Debian wheezy, this doesn't happen automatically.
Add the following line to /etc/fstab:
cgroup /sys/fs/cgroup cgroup defaults 0 0
For a one-time thing, mount it manually:
mount -t cgroup cgroup /sys/fs/cgroup
| lxc-start: No cgroup mounted on the system |
I'm looking for some reference documentation to explain what each of the settings are, for each control group.
For example, there's cpuset.cpus; I think setting this to 0 means use all CPUs, and setting it to 1 limits you to 1 core. And cpu.shares, how is that configured exactly?
Surely there's a reference doc that simply explains each of the settings somewhere right? Anyone have a link?
|
A big thanks to Red Hat, I finally tracked down the reference documentation I was looking for in their documentation. I expect that there's no difference between Red Hat and other distros on this point.
Subsystems and Tunable Parameters | Red Hat Customer Portal
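For instance, here are two of the tunables described there in action (cgroup v1 paths, run as root; mygroup is a placeholder group):

```shell
# Create a cpuset group pinned to CPUs 0-1 (cpuset.mems must be set too),
# and a cpu group with a relative weight of 512 (the default is 1024).
mkdir -p /sys/fs/cgroup/cpuset/mygroup
echo 0-1 > /sys/fs/cgroup/cpuset/mygroup/cpuset.cpus
echo 0   > /sys/fs/cgroup/cpuset/mygroup/cpuset.mems
mkdir -p /sys/fs/cgroup/cpu/mygroup
echo 512 > /sys/fs/cgroup/cpu/mygroup/cpu.shares
```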
| Reference documentation for cgroups (control groups) settings |
I've got 2 LXC containers with these cgroup settings:
lxc.cgroup.blkio.weight = 200
lxc.cgroup.cpu.shares = 200
and
lxc.cgroup.blkio.weight = 800
lxc.cgroup.cpu.shares = 800
I have verified in /sys/fs/cgroup/blkio/lxc/test1-lxccontainer/blkio.weight is indeed set to 200 on the host OS.
I have verified that cpu.shares are split 20% to container 1 and 80% to container 2.
But when I run this command in both containers:
# write a 10GB file to disk
dd bs=1M count=10000 if=/dev/zero of=1test conv=fdatasync
I ran a similar test on reads:
davidparks21@test-cgroups1:/tmp$ time sh -c "dd if=1test of=/dev/null bs=1M"
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 37.9176 s, 277 MB/s
real 0m37.939s
user 0m0.004s
sys 0m24.306s
The IO speeds seen in iotop on the host OS are virtually the same for the two containers.
I expected container 2 to command 80% of the IO bandwidth in this case.
|
The problem here is that you need to use the CFQ ("completely fair queuing") I/O scheduler; the blkio proportional weights only take effect under CFQ. I was using the wrong scheduler and had misread a setting (I thought I was using CFQ, but really wasn't). Switching to the correct I/O scheduler fixed the problem.
To change the IO scheduler (taken from here):
echo cfq > /sys/block/{DEVICE-NAME}/queue/scheduler
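Before changing it, you can check which scheduler a device is currently using; the active one is shown in brackets (sda is a placeholder device name):

```shell
cat /sys/block/sda/queue/scheduler
# e.g.: noop deadline [cfq]
```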
| cgroups: blkio.weight doesn't seem to have the expected effect |
I'm trying to make Ansible set the InnoDB buffer pool size to some percentage of available memory. But ansible_memtotal_mb and free report how much memory the host has. How do I figure out how much memory is available from inside the container? The container name is not known in advance.
Update: I'm running Debian jessie, and I pass the cgroup_enable=memory parameter to the kernel.
host
====
# lxc-checkconfig
Kernel configuration not found at /proc/config.gz; searching...
Kernel configuration found at /boot/config-3.16.0-4-amd64
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
Network namespace: enabled
Multiple /dev/pts instances: enabled
--- Control groups ---
Cgroup: enabled
Cgroup clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled
--- Misc ---
Veth pair device: enabled
Macvlan: enabled
Vlan: enabled
File capabilities: enabled
Note : Before booting a new kernel, you can check its configuration
usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig
# grep cgroup /var/lib/lxc/sta/config
lxc.cgroup.memory.limit_in_bytes = 1000M
# mount | grep memory
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
# cd /sys/fs/cgroup/memory
# cat memory.limit_in_bytes
18446744073709551615
# cat lxc/sta/memory.limit_in_bytes
1048576000
container
=========
$ cat /proc/self/cgroup
9:perf_event:/lxc/sta
8:blkio:/
7:net_cls,net_prio:/lxc/sta
6:freezer:/lxc/sta
5:devices:/
4:memory:/
3:cpu,cpuacct:/
2:cpuset:/lxc/sta
1:name=systemd:/user.slice/user-0.slice/session-10304.scope/system.slice/ssh.service
# mount | grep memory
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
# cd /sys/fs/cgroup/memory
# cat memory.limit_in_bytes
18446744073709551615
# cat lxc/sta/memory.limit_in_bytes
1048576000
|
tl;dr
cat /sys/fs/cgroup/memory$(cat /proc/self/cgroup | grep memory | cut -d: -f3)/memory.limit_in_bytes
or
cat $(mount | grep cgroup | grep memory | cut -d' ' -f3)$(cat /proc/self/cgroup | grep memory | cut -d: -f3)/memory.limit_in_bytes
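The field extraction in the tl;dr is plain text processing; here it is on a sample /proc/self/cgroup line, so the cut fields are visible (the path after the second colon is the container's cgroup path):

```shell
# "4:memory:/cv/my-firefox" -> hierarchy-id : controller : cgroup path
line="4:memory:/cv/my-firefox"
rel=$(printf '%s\n' "$line" | cut -d: -f3)
echo "/sys/fs/cgroup/memory${rel}/memory.limit_in_bytes"
# -> /sys/fs/cgroup/memory/cv/my-firefox/memory.limit_in_bytes
```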
If your default container configuration exposes the host's cgroup info inside the container (this depends on the lxc.mount.auto setting), you can simply parse the cgroup info as shown below.
Check your cgroup info from /proc/self/cgroup
root@my-firefox:/# grep memory /proc/self/cgroup
4:memory:/cv/my-firefox
Now, based on your cgroup mount point (you can locate it in /proc/mounts), verify the memory limit file content:
root@my-firefox:/# cd /sys/fs/cgroup/memory/cv/my-firefox/
root@my-firefox:/sys/fs/cgroup/memory/cv/my-firefox# cat memory.limit_in_bytes
268435456
In my case above, the cgroup root was mounted at /sys/fs/cgroup, so with that info, appending the path /memory/cv/my-firefox let me query all memory limits set for the container.
In this case the limit is 256M.
PS: free and ansible_memtotal_mb are host-based and not container-aware. I'm not familiar with Ansible, but I assume it has something similar to facts in Puppet, where you could write a custom fact to gather this info.
| How to find out how much memory lxc container is allowed to consume? |
I'm using LXC to have an Ubuntu 16.04 development environment on my 18.04 laptop. When I do a parallel build (ninja -j) in the container, my computer becomes unresponsive and never recovers; I have to restart it when this happens. I know this is vague; I suspect it has something to do with memory usage or some other resource that is better managed when building on the host. If I remember to use -j 4 (GNU compilers) it does not lock up.
I've setup lxc in the simplest way where I have to run it as root. It does not have access to network devices and I "mount" folders on it using the config file to share repos with it. Below is my config file:
# Distribution configuration
lxc.include = /usr/share/lxc/config/common.conf
# For Ubuntu 16.04
lxc.mount.entry = /sys/kernel/debug sys/kernel/debug none bind,optional 0 0
lxc.mount.entry = /sys/kernel/security sys/kernel/security none bind,optional 0 0
lxc.mount.entry = /sys/fs/pstore sys/fs/pstore none bind,optional 0 0
lxc.mount.entry = mqueue dev/mqueue mqueue rw,relatime,create=dir,optional 0 0
lxc.arch = linux64
# Container specific configuration
lxc.rootfs.path = dir:/var/lib/lxc/u2/rootfs
lxc.uts.name = u2
# Network configuration
lxc.net.0.type = none
lxc.net.0.flags = down
# Share Display for gui applications
lxc.mount.entry = /dev/dri dev/dri none bind,optional,create=dir
lxc.mount.entry = /dev/snd dev/snd none bind,optional,create=dir
lxc.mount.entry = /tmp/.X11-unix tmp/.X11-unix none bind,optional,create=dir
lxc.mount.entry = /dev/video0 dev/video0 none bind,optional,create=file
# Share folders
lxc.mount.entry = /home/tyler/workspace /var/lib/lxc/u2/rootfs/home/ubuntu/workspace none bind 0 0
My question has two parts, how can I isolate what is causing the lockup, and how can I configure my LXC container to keep it from locking up my computer when doing parallel builds?
|
It sounds like your LXC host is running out of memory and the system is killing processes. You have a couple of options:
Add more memory or add a swap file / increase swap to your host
Limit your LXC dev container to one or more CPU cores to make parallel ninja builds less aggressive
For option 2, assuming you have a 4-core CPU, the following LXD commands will limit your container to 2 cores and give it scheduling time on 50% of those cores (effectively a 75% reduction in CPU access).
lxc config set container1 limits.cpu 2
lxc config set container1 limits.cpu.allowance 50%
(container1 above is your lxc dev container name)
Adjust the number of CPUs first; the cpu.allowance setting will probably have less impact on your issue if your host is running out of memory.
With fewer CPU cores available to the guest container, ninja should kick off fewer parallel build commands and thus use fewer system resources (memory in particular).
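Independently of cgroups, ninja's parallelism can also be capped explicitly instead of running a bare -j; a sketch that leaves one core free for the host (assumes coreutils' nproc):

```shell
n=$(nproc)
jobs=$(( n > 1 ? n - 1 : 1 ))   # never below 1
echo "ninja -j $jobs"           # run the build with the capped job count
```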
Edit
To make these changes without using LXD commands, edit the container's config file and add the following line:
lxc.cgroup.cpuset.cpus = 0-3
This will give the container cores 0 through 3; adjust to suit.
Here is some further reading on lxc cgroup config parameters.
| computer locks up when building on lxc container |
Debian 8.3,
LXC 1:1.0.6-6+deb8u2
When LXC downloads the base packages for a Debian container, are the packages verified? Verifying a Debian CD involves comparing signed checksum files against the checksum of the downloaded file (and also that the signature itself is valid) as described here. Apt also verifies packages automatically. Does the following LXC command download the base packages 'outside' of the apt system, making LXC a security weakness to the host with LXC installed?
lxc-create -n mycontainer -t debian
|
No. The debian template for lxc-create internally uses debootstrap, which verifies the downloaded packages against the release signatures in the repository, just like apt does.
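The same verification can be seen when invoking debootstrap directly: it aborts if the Release file signature doesn't match the keyring (a sketch; needs root and the debian-archive-keyring package, and /tmp/rootfs is a placeholder target):

```shell
debootstrap --keyring=/usr/share/keyrings/debian-archive-keyring.gpg \
    stable /tmp/rootfs http://deb.debian.org/debian
```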
| LXC: Are downloaded templates verified? |
PROBLEM:
So I've been trying to get a usb device (primesense - the OEM reference for the Kinect) to passthrough to an LXC container so I can develop without worrying about polluting my stable system with experimental libraries.
I think I've done everything necessary, but applications running inside the container cannot access the device.
I'm using Ubuntu 12.04 x64 host with LXC 1.0.0, container is created from the 12.04 template. (I am active over on askubuntu, but I believe the question fits here more)
Question:
How do you pass through usb to a (privileged) LXC container.
Actions Taken:
My udev rules for the host and the udev rules for the container are the same
SUBSYSTEM=="usb", ATTR{idProduct}=="0609", ATTR{idVendor}=="1d27",
MODE:="0666", OWNER:="root", GROUP:="video"
On the host the device node is visible as:
$ ls -l /dev/bus/usb/001/015
crw-rw-rw- 1 root video 189, 14 Jun 18 15:27 /dev/bus/usb/001/015
In the container the device node is visible as:
$ ls -l /dev/bus/usb/001/015
crw-rw-rw- 1 root video 189, 14 Jun 18 22:07 /dev/bus/usb/001/015
Additionally, I have passed
sudo lxc-cgroup -n CN1 devices.allow "c 189:* rwm"
In order to whitelist usb devices for lxc
When I run an application on the host, the device is recognized and works as expected. Unfortunately, running the same application in the container (with the same relevant libraries) fails to find the device, even when I explicitly pass the URI.
I'm trying to narrow down the issue to either a library bug (which I could fix but I don't want to commit down that rabbit hole yet) or something I'm missing with the permissions for LXC containers.
|
Adding a whitelist rule through lxc-cgroup is not persistent. While testing my LXC containers I reset the container at some point and did not re-add the rule. The device node is created in the container correctly even without explicit whitelisting (c *:* m is a default LXC rule), but the container is denied access to the device when it tries to use it; without the right cgroup permissions it fails to work.
The workaround is to add
lxc.cgroup.devices.allow = c 189:* rwm
to the relevant lxc.conf for your system.
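If the device node still disappears across re-plugs, the cgroup rule is often paired with a bind mount of the USB bus (a hedged addition; this is not part of the accepted answer):

```
lxc.cgroup.devices.allow = c 189:* rwm
lxc.mount.entry = /dev/bus/usb dev/bus/usb none bind,optional,create=dir
```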
| USB passthrough for LXC containers |
So far I have been using various xautomation/xdotool scripts in a KVM virtual machine (Linux guest), letting them do their work while I work uninterruptedly. I am using a VirtIO disk, but the performance of the guest is still slow most of the time.
Can I do the same in an LXC container, e.g. using docker?
|
I can't say anything about the performance, but while researching this I came across this SO Q&A titled: can you run GUI apps in a docker container? It shows 3 methods for accomplishing this.
Running AppX over VNC
This method shows using the following Dockerfile:
# Firefox over VNC
#
# VERSION 0.1
# DOCKER-VERSION 0.2
from ubuntu:12.04
# make sure the package repository is up to date
run echo "deb http://archive.ubuntu.com/ubuntu precise main universe" > /etc/apt/sources.list
run apt-get update
# Install vnc, xvfb in order to create a 'fake' display and firefox
run apt-get install -y x11vnc xvfb firefox
run mkdir /.vnc
# Setup a password
run x11vnc -storepasswd 1234 ~/.vnc/passwd
# Autostart firefox (might not be the best way to do it, but it does the trick)
run bash -c 'echo "firefox" >> /.bashrc'
And then running the Docker instance like so:
$ docker run -p 5900 creack/firefox-vnc x11vnc -forever -usepw -create
Use Docker + Subuser
Using Subuser + Docker you can directly launch Docker VMs with just single applications within then, granting them narrow access to specific folders from the physical host.
Subuser is meant to be easily installed and in and of itself technically insignificant. It is just a wrapper around docker, nothing more.
Subuser launches docker containers with volumes shared between the host and the child container. That's all.
Here's a video showing Subuser in action.
Running X11 over SSH
This last technique shows how to setup a Docker instance with X11 + SSH services running within. This setup then allows any X11 apps to be tunneled out over SSH.
The ssh is used to forward X11 and provide you encrypted data communication between the docker container and your local machine.
This method then goes on to setup Xpra + Xephyr on the local side.
Xpra + Xephyr allows to display the applications running inside of the container such as Firefox, LibreOffice, xterm, etc. with recovery session capabilities. So, you can open your desktop anywhere without losing the status of your applications, even if the connection drops.
Xpra also uses a custom protocol that is self-tuning and relatively latency-insensitive, and thus is usable over worse links than standard X.
The applications can be rootless, so the client machine manages the windows that are displayed.
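One prerequisite for the SSH method: sshd inside the container must allow X11 forwarding. The relevant /etc/ssh/sshd_config lines might be (the second line is an assumption that helps in some container setups where the forwarded display can't bind to localhost):

```
X11Forwarding yes
X11UseLocalhost no
```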
Source: DOCKER DESKTOP: YOUR DESKTOP OVER SSH RUNNING INSIDE OF A DOCKER CONTAINER
References
can you run GUI apps in a docker?
| X and xdotool in LXC instead of KVM |
1,364,456,540,000 |
I just installed lxc in Arch Linux, but the qemu-debootstrap binary seems missing,
This command sudo lxc-create -n test -t ubuntu -P /run/shm/1 complains about that.
I couldn't find it with either pacman or yaourt.
Any ideas how to fix that? I have the debootstrap script installed and that works though
|
Debootstrap is in the aur/debootstrap package. After the installation you will have to make a symlink in /usr/bin:
cd /usr/bin ; ln -sf debootstrap qemu-debootstrap
After that do what ouzmoutous suggests.
Anyway I always advise to use downloaded templates.
HTH
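For reference, creating a container from a downloaded template looks roughly like this (distribution, release and architecture are example values, not prescriptions):

```
lxc-create -n test -t download -- -d debian -r jessie -a amd64
```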
| No qemu-debootstrap in Arch Linux |
1,364,456,540,000 |
(Please note that this question is about LXC 1.x, whereas this one is about LXC 2.x/LXD)
I scoured the web for an answer to this one, but couldn't come up with any reasonably non-hacky answer.
What I am looking for is an approach to fashion an existing template a way I'd like to. In particular what I'm after is to customize the upstream Ubuntu cloud image by making various changes in its root FS and adding/changing configuration.
So my current approach is to lxc launch ubuntu:lts CONTAINER and then use lxc exec CONTAINER -- ... to run a script I authored (after pushing it into the container) to perform my customizations.
What I get using this approach is a reasonably customized container. Alas, there's a catch. The container at this point has been primed by cloud-init and it's a container instance, not an image/template.
So this is where I'm at a loss now. What I would need is to turn my container back into an image (should be doable by using lxc publish) and either undo the changes done to it by cloud-init or at least "cock" cloud-init again so it triggers the next time the image is used as source for lxc init or lxc launch. Alternatively, maybe there's a way to completely disable cloud-init when I lxc launch from the upstream image?
Is there an authoritative way? Even though I looked through all kinds of documentation, including the Markdown documentation in the LXD repository as well as the blog series by Stéphane Graber (LXD project lead), especially [5/12], I was unable to find a suitable approach. Perhaps I just missed it (that's to say, I'll be happy to read through more documentation if you know some that describes what I need).
LXC version used is 2.20 (i.e. I'm using the LXD frontend).
|
On the linked page [5/12] by Stéphane Graber, you can find a second approach:
Manually building an image
Building your own image is also pretty simple.
Generate a container filesystem. This entirely depends on the distribution you’re using. For Ubuntu and Debian, it would be by using
debootstrap.
Configure anything that’s needed for the distribution to work properly in a container (if anything is needed).
Make a tarball of that container filesystem, optionally compress it.
Write a new metadata.yaml file based on the one described above.
Create another tarball containing that metadata.yaml file.
Import those two tarballs as a LXD image with lxc image import <metadata tarball> <rootfs tarball> --alias my-image.
This way, you don't have to start the container, before you publish the image. You can start with an existing image:
$ lxc image copy ubuntu:16.04/amd64 local: --alias ubuntu
$ mkdir export-directory
$ lxc image export ubuntu export-directory
$ cd export-directory
$ ls
5f364e2e3f460773a79e9bec2edb5e993d236f035f70267923d43ab22ae3bb62.squashfs
meta-5f364e2e3f460773a79e9bec2edb5e993d236f035f70267923d43ab22ae3bb62.tar.xz
$ mkdir meta squashfs
$ tar -xf *.tar.xz -C meta
$ sudo unsquashfs -f -d squashfs/ *.squashfs
Now you can adjust files or even chroot into the squashfs directory. Then you can tar both directories and import the adjusted image with:
lxc image import <metadata tarball> <rootfs tarball> --alias my-adjusted-image
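The pack-and-import step can be rehearsed on throwaway directories before touching a real image — a sketch (gzip is used here for portability; xz via `-cJf` works the same, and the final `lxc image import` is left commented out since it needs a live LXD daemon):

```shell
# stand-ins for the meta/ and squashfs directories prepared above
mkdir -p demo/meta demo/rootfs demo/check
echo 'architecture: x86_64' > demo/meta/metadata.yaml
echo 'hello' > demo/rootfs/hello.txt

# pack each directory so its contents sit at the tarball root,
# which is the layout lxc image import expects
tar -czf demo/meta.tar.gz -C demo/meta .
tar -czf demo/rootfs.tar.gz -C demo/rootfs .

# sanity-check the round trip
tar -xzf demo/meta.tar.gz -C demo/check
cat demo/check/metadata.yaml   # -> architecture: x86_64

# with a live LXD daemon the import itself would be:
# lxc image import demo/meta.tar.gz demo/rootfs.tar.gz --alias my-adjusted-image
```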
| Creating a custom template, based on some existing LXC template after running the instance at least once |
1,364,456,540,000 |
I'm trying to provide a service to the LXC guests, but do not want to expose it from the host. I also don't want to put up firewall rules for the service, so loopback appears to be the most straightforward solution.
Is there a way to have a service listening on lo (loopback) shared with LXC guests, e.g. similar to bind-mounting directories into place?
|
There are different ways to achieve your goal.
If the guests share a virtual network (i.e. are not just bridged to the physical interface) it's easy. Just tell your services to listen on that interface - or create a new guest and let that one host the service.
If the guests are bridged to ethX, you might still want to consider creating a virtual guest+host-only interface as that kind of encapsulation makes sense for all kinds of services (internal mail-server, any database server, local DNS, etc.)
(And obviously there's the way you already discarded for some reason: firewall rules)
As for lo: each LXC guest has its own loopback, and that's a good thing IMO.
My lxc guests all share a virtual interface and for each service that should be exposed to the public internet, I create port-forwarding rules in the host's iptables. And I try to run as few services as possible on the host itself. That way there's little to no risk of accidentally exposing any services.
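Such a port-forwarding rule might look roughly like this sketch (external interface, port and guest address are placeholders, written in the same 192.168.x.y notation as the config below):

```
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
    -j DNAT --to-destination 192.168.x.y:80
iptables -A FORWARD -d 192.168.x.y -p tcp --dport 80 -j ACCEPT
```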
And for the sake of completeness, here's my config:
My interfaces file (debian stable):
auto br0
iface br0 inet static
bridge_maxwait 0
bridge_fd 0
bridge_ports dummy0
address 192.168.x.1
netmask 255.255.255.0
# if there are lxc clients that need a public IP, add something like this (a.b.c.d being the public IP) and set the client's `lxc.network.ipv4` config parameter to the same address:
#post-up route add a.b.c.d dev br0
The relevant part of the client config:
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.veth.pair = lxc-apache # each client gets their own .pair name
lxc.network.ipv4 = 192.168.x.y/24 # ... and of course their own address
| Is there a way to share a service listening on loopback of the host with an LXC guest? |
1,364,456,540,000 |
Is it possible to build ‘Linux From Scratch’ (LFS) inside of an LXC container, as opposed to creating a dedicated partition per the LFS instructions?
|
LFS runs its own kernel. In an LXC container, or any container-based virtualization, the guest system shares the host's kernel, so LFS can't be run inside a container-based VM.
Furthermore, in the absence of a dedicated kernel the guest suffers several restrictions inside a container: it can't load its own kernel modules (i.e. drivers), can't drop caches, etc.
Another plan may be to use the host kernel and build the other LFS packages inside the host. But that's not a full-featured LFS installation, rather something like a chroot. Further, you can't replace the existing file system, as you have no access to the virtual disk while the guest is off. I believe this approach will also suffer serious driver issues, unless the LFS builder has profound experience in virtualization.
However, LFS should work fine under KVM- or Xen-based virtualization, as they allow the guest machine to run its own kernel.
| Linux From Scratch Inside of an LXC Container |
1,364,456,540,000 |
I am trying to experiment with mount namespace in Ubuntu. So far I can create an empty mount namespace using the following:
# mkdir test
# unshare --mount
# mount none test -t tmpfs
# cd test
# pivot_root . .
# cd / <--- test becomes /
When I check a LXC Ubuntu container, the mount command displays the following:
Since the mount namespace initially gets a copy of the mount points, I presume the /dev/sda1 inside the container is the global /dev/sda1 (because there is no /dev/sda1 inside the container once it is started), and yet the contents of / inside the container correspond to its rootfs. Can someone familiar with LXC please explain what mount operations LXC does before its pivot_root inside the container?
|
To see what LXC actually does, let's create a new container and trace its startup process via strace(1):
[root@localhost /]# lxc-create -n testcontainer -t debian
[root@localhost /]# strace -e trace=clone,chdir,mount,pivot_root,execve \
-f -o lxclog \
lxc-start -n testcontainer
The resulting trace is written to lxclog file, and here are the most relevant parts of it (ellipses are added by me where some non-significant calls are omitted):
14671 clone(child_stack=0x7fff9379eb80, flags=CLONE_NEWNS|CLONE_NEWUTS|CLONE_NEWIPC|CLONE_NEWPID|CLONE_NEWNET|SIGCHLD) = 14677
<...>
14677 mount("/var/lib/lxc/testcontainer/rootfs", "/usr/lib64/lxc/rootfs", 0x7fe4c2d10eac, MS_BIND|MS_REC, NULL) = 0
<...>
14677 chdir("/usr/lib64/lxc/rootfs") = 0
14677 pivot_root(".", "/usr/lib64/lxc/rootfs/lxc_putold") = 0
14677 chdir("/")
<...>
14677 execve("/sbin/init", ["/sbin/init"], [/* 1 var */]) = 0
First, a new process (PID 14677) is spawned by lxc-start (PID 14671) using clone(2) and is placed in new mount namespace (CLONE_NEWNS flag). Then inside this new mount namespace the root file system of the container (/var/lib/lxc/testcontainer/rootfs) is bind-mounted (MS_BIND flag) to /usr/lib64/lxc/rootfs, which then becomes the new root. Finally, when the container initialization is finished, the process 14677 becomes the container's init.
The important thing here is that the root directory of container's mount namespace is the bind mount of the directory belonging to the host's root FS. This is why the root mount of the container still has /dev/sda1 as a source in the mount(8) output. However, there also is a difference which is not shown by mount(8) - to see it, try findmnt(8) inside the container:
root@testcontainer:~# findmnt
TARGET SOURCE FSTYPE OPTIONS
/ /dev/sda1[/var/lib/lxc/testcontainer/rootfs]
Compare this to the output of findmnt(8) from the host system:
[root@localhost /]# findmnt
TARGET SOURCE FSTYPE OPTIONS
/ /dev/sda1
Note that the source is the same, but inside the container you also see the source directory of the bind mount.
| How does LXC setup its root mount point? |
1,364,456,540,000 |
I have a number of ZFS sub-filesystems (so that I can granularly manage snapshots and ZFS options) like so:
tank/media
tank/media/pictures
tank/media/pictures/photos
tank/media/movies
tank/media/music
tank/media/documents
tank/media/documents/public
I am running Debian GNU/Linux 8.6 (jessie) with ZFS-on-Linux, kernel 4.4.19-1-pve. My goal is to share the parent ZFS filesystem (tank/media) with a LXC container via a bind mount and have the sub-filesystems be accessible.
If I bind mount tank/media inside the container, then the sub ZFS filesystems (E.G. tank/media/pictures) do not show up. I need to mount --make-rshared tank/media in order for the sub-mounts to also appear.
How can I make ZFS sub filesystems be mounted make-rshared by default using ZFS on Linux?
|
I have found that mounting with the rbind (rather than bind) option in the lxc mount line solves the issue (syntax for proxmox):
lxc.mount.entry: /tank/media media none rbind,create=dir,optional 0 0
Going off the Red Hat documentation on sharing mounts, rbind replicates the mounts under the source into the bound directory (which is what we need); the difference is that make-rshared additionally lets a mount made under the bind target be reflected back in the source.
Just came across the issue myself, and this is the only relevant result on google, so I thought it was appropriate to give an answer despite the age of the question.
| How can I automatically make ZFS filesyatems mount shared / rshared? |
1,364,456,540,000 |
I normally create containers with:
lxc-create -n mycontainer -t debian
However I want to bake a few items into that "debian" default template.
New user with my ssh key, can sudo without a password.
Have python installed.
Basically this is the bare bones needed for ansible. I then want to provision my container via ansible from there.
However, I can't find any information on how to customize an lxc template. I have seen a few tutorials about creating a template from scratch, but that is not what I want to do. I want to simply customize an existing template.
OS is debian 8, both host and guest.
Thanks!
|
If you want to add a package, edit:
/usr/share/lxc/templates/lxc-debian
and search for download_debian(). Add your package to that section along with the other packages (I see ifupdown, locales, etc). If you make changes to the package list, you will need to clear the cache. I do that by doing:
rm -rf /var/cache/lxc/debian/
Of course the next container you create will take some time to download packages.
If you want to run a command in the container, add the following:
chroot $rootfs <command>
at the end of configure_debian(). You can also copy files from the host into $rootfs as well.
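For the goals in the question (a user with passwordless sudo, an SSH key, Python), the commands appended to configure_debian() might look like this sketch — the username deploy and the key path are assumptions, and $rootfs is the variable the template already uses:

```
# appended inside configure_debian(); deploy and the key path are hypothetical
chroot $rootfs apt-get install -y sudo python
chroot $rootfs useradd -m -s /bin/bash deploy
mkdir -p $rootfs/home/deploy/.ssh
cp /root/.ssh/id_rsa.pub $rootfs/home/deploy/.ssh/authorized_keys
chroot $rootfs chown -R deploy:deploy /home/deploy/.ssh
echo "deploy ALL=(ALL) NOPASSWD:ALL" > $rootfs/etc/sudoers.d/deploy
chmod 0440 $rootfs/etc/sudoers.d/deploy
```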
| How do I make changes to an lxc template? |
1,364,456,540,000 |
TL:DR
How can I make a bridge permanent (survive reboot) without adding a network device to the bridge config in /etc/network/interfaces?
Hi, I just started playing around with lxc on ubuntu 14.04.
The setup I would like to accomplish is, one container with haproxy, one with nginx.
I will dnat external requests via iptables to the haproxy and from there to nginx.
All of the containers will be in their own subnet. Routing/packetfiltering between the containers will be done by the host.
I've managed half of the setup so far.
I created two bridges with brctl and added IPs to the bridges.
br-haproxy: 10.100.0.1/24
br-nginx: 10.100.3.1/24
I then added the respective bridge to the corresponding container via the lxc config.
nginx got br-nginx
haproxy got br-haproxy
Then I configured IP addresses in the containers.
haproxy: 10.100.0.10/24 GW 10.100.0.1
nginx: 10.100.3.10/24 GW 10.100.3.1
I was now able to ping between the two containers and so on.
I now denied access by setting the forward policy from iptables to deny.
I was now able to control traffic between the two containers via iptables.
Ok so far so good. What I now want to achieve is, make the bridges permanent.
I added the bridgeconfig to /etc/network/interfaces but since I don't have a network device to add to the bridge I left this part out.
When I now try to initiate the bridge I get an error stating that the device e.g. br-haproxy couldn't be found.
I figured out, that the problem is the missing device in the bridge config. When I add eth0 from the host to the bridge config I can initiate the bridge and it comes up quite nice. But that's not what I need.
LXC dynamically creates and adds the interfaces from the container on startup of the container to the corresponding bridge.
So here comes my question. How can I make the bridges permanent without adding a network device to the bridge on boot?
Hope I made it somewhow clear what the problem is. :-)
Thanks in advance.
|
What if you use bridge_ports none? That brings the bridge up on boot without the need to add any member interface:
auto br-haproxy
iface br-haproxy inet static
bridge_ports none
bridge_fd 0
bridge_waitport 0
address 10.100.0.1
netmask 255.255.255.0
bridge_fd and bridge_waitport are set to avoid the forwarding delay whenever a member port comes online, and to avoid waiting at boot for a port to be online.
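The second bridge from the question gets an analogous stanza:

```
auto br-nginx
iface br-nginx inet static
bridge_ports none
bridge_fd 0
bridge_waitport 0
address 10.100.3.1
netmask 255.255.255.0
```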
| Setup permanent bridge for dynamic network devices from lxc container? |
1,573,048,801,000 |
I've (tried to) set things up to allow myself to have some unprivileged containers I can start (and then use) as needed.
Now I can create new containers with lxc-create, but when I try to start one, this happens
> lxc-start --name frisk-buster ~
lxc-start: frisk-buster: lxccontainer.c: wait_on_daemonized_start: 842 Received container state "ABORTING" instead of "RUNNING"
lxc-start: frisk-buster: tools/lxc_start.c: main: 330 The container failed to start
lxc-start: frisk-buster: tools/lxc_start.c: main: 333 To get more details, run the container in foreground mode
lxc-start: frisk-buster: tools/lxc_start.c: main: 336 Additional information can be obtained by setting the --logfile and --logpriority options
> lxc-start --name frisk-buster -F
lxc-start: frisk-buster: network.c: lxc_create_network_unpriv_exec: 2178 lxc-user-nic failed to configure requested network: cmd/lxc_user_nic.c: 1296: main: Quota reached
lxc-start: frisk-buster: start.c: lxc_spawn: 1777 Failed to create the configured network
lxc-start: frisk-buster: start.c: __lxc_start: 1951 Failed to spawn container "frisk-buster"
lxc-start: frisk-buster: tools/lxc_start.c: main: 330 The container failed to start
lxc-start: frisk-buster: tools/lxc_start.c: main: 336 Additional information can be obtained by setting the --logfile and --logpriority options
(I have no clue why line breaks look like that, when I run the container in foreground mode, but it doesn't really matter)
The only thing I've found online when searching for that error is advice to edit /etc/lxc/lxc-usernet to contain "your-username veth lxcbr0 10", but on my system it already does (except that right now I've set the limit to 25 to verify that it wasn't the problem).
What can be wrong?
|
The "Quota reached" message was actually right. For some reason (probably copy-pasting from different sources that did things differently, which I hadn't noticed), the configuration file for my container said to attach to a bridge called virbr0, while the lxc-usernet configuration only allowed me to create interfaces on lxcbr0. Changing vir to lxc in the container's config fixed things.
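In other words, the bridge named in the container's config has to match one listed in /etc/lxc/lxc-usernet — e.g. (LXC 3.x keys, values from the question; the config path is the usual unprivileged default):

```
# ~/.local/share/lxc/<name>/config — or wherever your container config lives
lxc.net.0.type = veth
lxc.net.0.link = lxcbr0   # was virbr0, which /etc/lxc/lxc-usernet does not allow
```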
| lxc(-net ?) problems when starting container on debian Buster |
1,573,048,801,000 |
By default, Ctrl+Alt+F1-F6 lead to a virtual console.
A lxc container is running on my computer. How to configure the host so that Ctrl+Alt+F6 goes to the virtual console of the container?
Moreover, how to configure the host so that Ctrl+Alt+F6 goes to an x server running inside the container?
|
I've figured this out, mainly inspired by this post on arch forum.
Disable the getty currently running behind tty6 by removing /etc/init/tty6.conf, this would take effect after rebooting.
Allow the container to access tty6 by adding lxc.cgroup.devices.allow = c 4:6 rwm to the container's configuration
Autostart getty in the container, by editing /etc/init/tty6.conf inside the container
start on runlevel [23] # and not-container <- not-container is commented out
stop on runlevel [!23]
respawn
exec /sbin/getty -8 38400 tty6
Now Ctrl+Alt+F6 is container's console.
Additional operations are needed for tty[1-4], as /dev/tty[1-4] in the container are not tty devices.
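Collected in one place, the container-side configuration from the steps above — the mount entry is my assumption, for setups where /dev/tty6 must be bind-mounted so the container sees a real tty:

```
lxc.cgroup.devices.allow = c 4:6 rwm
lxc.mount.entry = /dev/tty6 dev/tty6 none bind,optional,create=file
```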
| Ctrl+Alt+F6 to access a linux container? |
1,573,048,801,000 |
Is it possible to make ssh connection to lxc container by its name?
I have been using IP addresses but then I though how easy it would be to use ssh test01 (here test01 being container name) instead of remembering its IP address, or starting the container and looking its IP.
|
Create a file ~/.ssh/config, with contents:
Host test01
Hostname 192.168.3.1
Then chmod it to 0600 and enjoy. You can add any ssh_config(5) options there (changing the remote username is a particularly useful one), and you can have as many Host sections as you like.
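An entry with a couple of those options filled in (user name and key path are placeholders):

```
Host test01
    Hostname 192.168.3.1
    User ubuntu
    IdentityFile ~/.ssh/id_rsa
```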
| How to make ssh connection to LXC container by its name? |
1,573,048,801,000 |
I'm trying to set up unprivileged LXC containers and failing at every turn. I think I've followed every relevant step of the guide:
Normal users are allowed to create unprivileged containers:
$ sysctl kernel.unprivileged_userns_clone
kernel.unprivileged_userns_clone = 1
The control groups PAM module is enabled:
$ grep -F pam_cgfs.so /etc/pam.d/system-login
session optional pam_cgfs.so -c freezer,memory,name=systemd,unified
The UID and GID mappings are set up:
$ cat /etc/lxc/default.conf
lxc.idmap = u 0 100000 65536
lxc.idmap = g 0 100000 65536
lxc.net.0.type = veth
lxc.net.0.link = lxcbr0
lxc.net.0.flags = up
lxc.net.0.hwaddr = 00:16:3e:xx:xx:xx
$ cat /etc/subuid
root:100000:65536
$ cat /etc/subgid
root:100000:65536
The network is set up:
$ grep --invert-match --regexp='^#' --regexp='^$' /etc/default/lxc-net
USE_LXC_BRIDGE="true"
LXC_BRIDGE="lxcbr0"
LXC_ADDR="10.0.3.1"
LXC_NETMASK="255.255.255.0"
LXC_NETWORK="10.0.3.0/24"
LXC_DHCP_RANGE="10.0.3.2,10.0.3.254"
LXC_DHCP_MAX="253"
The services look fine:
$ systemctl status --lines=0 --no-pager lxc.service lxc-net.service
● lxc.service - LXC Container Initialization and Autoboot Code
Loaded: loaded (/usr/lib/systemd/system/lxc.service; disabled; vendor preset: disabled)
Active: active (exited) since Fri 2019-03-08 15:31:47 NZDT; 40min ago
Docs: man:lxc-autostart
man:lxc
Main PID: 4147 (code=exited, status=0/SUCCESS)
Tasks: 0 (limit: 4915)
Memory: 0B
CGroup: /system.slice/lxc.service
● lxc-net.service - LXC network bridge setup
Loaded: loaded (/usr/lib/systemd/system/lxc-net.service; enabled; vendor preset: disabled)
Active: active (exited) since Fri 2019-03-08 15:31:45 NZDT; 40min ago
Main PID: 4099 (code=exited, status=0/SUCCESS)
Tasks: 1 (limit: 4915)
Memory: 8.4M
CGroup: /system.slice/lxc-net.service
└─4121 dnsmasq -u dnsmasq --strict-order --bind-interfaces --pid-file=/run/lxc/dnsm…
The packages are up to date and I've just rebooted.
Even so, I can't create containers:
$ lxc-create -n test -t download
lxc-create: test: parse.c: lxc_file_for_each_line_mmap: 100 No such file or directory - Failed to open file "/home/user/.config/lxc/default.conf"
lxc-create: test: conf.c: chown_mapped_root: 3179 No uid mapping for container root
lxc-create: test: lxccontainer.c: do_storage_create: 1310 Error chowning "/home/user/.local/share/lxc/test/rootfs" to container root
lxc-create: test: conf.c: suggest_default_idmap: 4801 You do not have subuids or subgids allocated
lxc-create: test: conf.c: suggest_default_idmap: 4802 Unprivileged containers require subuids and subgids
lxc-create: test: lxccontainer.c: do_lxcapi_create: 1891 Failed to create (none) storage for test
lxc-create: test: tools/lxc_create.c: main: 327 Failed to create container test
Is there anything obviously wrong with this setup? There's no mention anywhere in the linked article about ~/.config/lxc/default.conf, and I don't understand why it says I haven't allocated subuids and subgids.
Additional info:
Running lxc-create as root works, but this is explicitly about creating containers as a normal user.
cp /etc/lxc/default.conf ~/.config/lxc/default.conf gets rid of the complaint about the configuration file, but results in this message instead:
lxc-create: playtime: conf.c: chown_mapped_root: 3279 lxc-usernsexec failed: No such file or directory - Failed to open ttyNo such file or directory - Failed to open tt
|
Is this a new project, or do you have a choice? Why not use LXD instead of LXC - much easier to use and you get to the same place. I started out with lxc and quickly made the switch because I was interested in running unprivileged containers which is not easy in LXC, but is the default in LXD.
Take a look here to start:
https://discuss.linuxcontainers.org/t/comparing-lxd-vs-lxc/24
It's been a few months since I last installed/used it, but here are my notes on installation:
As LXD evolves quite rapidly, we recommend Ubuntu users use our PPA:
# add-apt-repository ppa:ubuntu-lxc/lxd-stable
# apt-get update
# apt-get dist-upgrade
# apt-get install lxd
The package creates a new “lxd” group which contains all users allowed to
talk to lxd over the local unix socket. All members of the “admin” and
“sudoers” groups are automatically added. If your user isn’t a member of
one of these groups, you’ll need to manually add your user to the lxd
group.
Because group membership is only applied at login, you then either need to
close and re-open your user session or use the newgrp lxd command in the
shell you’re going to interact with lxd from.
newgrp lxd
https://blog.ubuntu.com/2015/03/20/installing-lxd-and-the-command-line-tool
2018/10/22
To the best of my knowledge you can even run LXD in a virtual machine so you can give it a quick try without messing up whatever system you are working on.
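Once installed, a first unprivileged container is roughly (a sketch; the image alias is just an example):

```
lxd init                       # accept the defaults for a quick start
lxc launch ubuntu:18.04 first  # unprivileged by default under LXD
lxc exec first -- bash
```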
Not exactly the answer to the question you asked, but I hope you find it a helpful alternative.
| How do I configure unprivileged Linux containers? |
1,573,048,801,000 |
I have an LXC container running docker. Many containers are running successfully but I am unable to add more; I am trying to deploy a new docker container and getting the following error:
container init caused "join session keyring: create session key: disk quota exceeded": unknown
But the container has plenty of free space, as does the host. I confirmed this with df -h and df -i (so, it's not inodes)
What does this error mean and how is it resolved?
|
It's not the root filesystem that's the issue here, it's the kernel keyring. This LXC thread explains it well, and has the following solution: on the LXC host (not inside the LXC container), raise the maximum number of keys with:
echo 5000 | sudo tee /proc/sys/kernel/keys/maxkeys
5000 is admittedly arbitrary; select a number that's greater than what you have now.
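The echo ... | tee change does not survive a reboot; to persist it, the equivalent sysctl fragment is (the file name below is a conventional choice, not mandated):

```
# /etc/sysctl.d/99-keys.conf
kernel.keys.maxkeys = 5000
```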
Quoting Stéphane Graber, maintainer of LXD, from the thread:
Kinda sounds like Docker may be attempting to use the kernel keyring?
That’d certainly be a new behavior from them…
and credit to simos also from that thread for the provided command, which resolved this for me.
Further reference on GitHub
| disk quota exceeded when trying to deploy Docker container inside LXC |
1,573,048,801,000 |
I am building a system with a read-only rootfs.
This rootfs is shared between the host and created containers.
The host cannot have any network related service or file and that means I cannot bridge the host connection.
I am currently using an USB network adapter.
How can I start this device and the network service inside the container only?
Any container can have new RW mount points for /etc /var and so on, that way, any file needed will be reachable in another partition. But the host continues to be RO and with limited files.
|
Forget the network card as a specific hardware. Just consider you have a network interface and it has to be moved to a container. This is not USB or PCI pass-through. Let's call it interface pass-through and it's specifically handled by the network namespace logic using for example ip link:
netns NETNSNAME | PID
move the device to the network namespace associated with name NETNSNAME or process PID.
Some devices are not allowed to change network namespace: loopback, bridge, ppp, wireless. These are network namespace local
devices. In such case ip tool will return "Invalid argument" error. It
is possible to find out if device is local to a single network
namespace by checking netns-local flag in the output of the ethtool:
ethtool -k DEVICE
To change network namespace for wireless devices the iw tool can be used. But it allows to change network namespace only for physical
devices and by process PID.
The interface has to appear initially on the host, but you can move it to the container. There's no option to have it directly appear in a container when plugged (of course at container start, with a specific configuration for the container, the interface can be moved, see later), but it's probably scriptable with udev. Moreover, the kernel being unique across host and containers, the host, even if not using network at all, must of course have all required network options compiled-in and is still in charge of loading relevant kernel modules if needed (it's usually transparently done).
So if in the end your card is called eth0 on the host and is really ethernet (not wireless), this command will move it to the target namespace:
ip link set eth0 netns NETNSNAME
Where NETNSNAME can be either a pid of a process in the target network namespace or a network namespace "mounted" and handled by ip netns add NETNSNAME as described above.
For two common container technology, LXC and Docker, here's how to replace NETNSNAME with a target container named containername:
LXC:
ip link set eth0 netns $(lxc-info -H -p -n containername)
Docker:
ip link set eth0 netns $(docker inspect --format '{{.State.Pid}}' containername)
For wireless the (not very well documented) command would be if there's only one wireless interface, then clearly associated with phy0:
iw phy phy0 set netns $(lxc-info -H -p -n containername)
But this will work only if the driver supports it, displaying this:
# iw phy0 info|grep netns
* set_wiphy_netns
This should probably not be done manually with the commands above, but with the specific container's configuration (LXC, Docker...). Eg, for LXC 3.0 (syntax changed from LXC 2.x) the configuration file would include lines like:
lxc.net.1.type = phys
lxc.net.1.link = eth0
The same config handles both ethernet and wireless. Whenever the container is started, the interface is swallowed by the container. When the container (more precisely, its network namespace) stops, the interface is returned to the host (there is no way to hand it directly to another container).
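A slightly fuller LXC 3.0 stanza, with the interface brought up automatically (the index 1 assumes net.0 is defined elsewhere in the config; the flags line is my addition):

```
lxc.net.1.type = phys
lxc.net.1.link = eth0
lxc.net.1.flags = up
```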
Wireguard describes some use cases.
| Container with network interface but host without, how to expose eth0 to container |
1,573,048,801,000 |
So you configured your unprivileged LXC guest, by defining
lxc.id_map = u 0 1000000000 10000
lxc.id_map = g 0 1000000000 10000
and of course assigning those subordinate UID/GID ranges to an existing user (usermod --add-sub-uids ...).
However, whenever you ssh host you get:
Read from socket failed: Connection reset by peer
However, inside the guest you can clearly see that the sshd is running (e.g. with lsof -i :22).
What could possibly be wrong?
|
General troubleshooting advice for OpenSSH
First of all I refer you to this short troubleshooting guide for sshd which I am using as a recipe time and time again.
The plot thickens
The only difference in this case: I used lxc-console to attach to the guest, logged in, stopped the running sshd and then started my own instance on the default port 22. Then I connected from the host to the guest with heightened verbosity:
$ ssh -vvvvvvvv host.lan
OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014
debug1: Reading configuration data /home/joe/.ssh/config
debug1: /home/joe/.ssh/config line 2: Applying options for *
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug2: ssh_connect: needpriv 0
debug1: Connecting to host.lan [10.0.3.223] port 22.
debug1: Connection established.
debug1: identity file /home/joe/.ssh/id_rsa type -1
debug1: identity file /home/joe/.ssh/id_rsa-cert type -1
debug1: identity file /home/joe/.ssh/id_dsa type -1
debug1: identity file /home/joe/.ssh/id_dsa-cert type -1
debug1: identity file /home/joe/.ssh/id_ecdsa type -1
debug1: identity file /home/joe/.ssh/id_ecdsa-cert type -1
debug1: identity file /home/joe/.ssh/id_ed25519 type -1
debug1: identity file /home/joe/.ssh/id_ed25519-cert type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2
debug1: Remote protocol version 2.0, remote software version OpenSSH_6.6.1p1 Ubuntu-2ubuntu2
debug1: match: OpenSSH_6.6.1p1 Ubuntu-2ubuntu2 pat OpenSSH_6.6.1* compat 0x04000000
debug2: fd 3 setting O_NONBLOCK
debug3: load_hostkeys: loading entries for host "host.lan" from file "/home/joe/.ssh/known_hosts"
debug3: load_hostkeys: found key type ED25519 in file /home/joe/.ssh/known_hosts:7
debug3: load_hostkeys: loaded 1 keys
debug3: order_hostkeyalgs: prefer hostkeyalgs: [email protected],ssh-ed25519
debug1: SSH2_MSG_KEXINIT sent
Read from socket failed: Connection reset by peer
Hmm, there's nothing enlightening in that output. Let's check the server side output of our connection attempt:
# $(which sshd) -Dddddddp22
debug2: load_server_config: filename /etc/ssh/sshd_config
debug2: load_server_config: done config len = 724
debug2: parse_server_config: config /etc/ssh/sshd_config len 724
debug3: /etc/ssh/sshd_config:5 setting Port 22
debug3: /etc/ssh/sshd_config:9 setting Protocol 2
debug3: /etc/ssh/sshd_config:11 setting HostKey /etc/ssh/ssh_host_rsa_key
debug3: /etc/ssh/sshd_config:12 setting HostKey /etc/ssh/ssh_host_dsa_key
debug3: /etc/ssh/sshd_config:13 setting HostKey /etc/ssh/ssh_host_ed25519_key
debug3: /etc/ssh/sshd_config:15 setting UsePrivilegeSeparation yes
debug3: /etc/ssh/sshd_config:18 setting KeyRegenerationInterval 3600
debug3: /etc/ssh/sshd_config:19 setting ServerKeyBits 1024
debug3: /etc/ssh/sshd_config:22 setting SyslogFacility AUTH
debug3: /etc/ssh/sshd_config:23 setting LogLevel INFO
debug3: /etc/ssh/sshd_config:26 setting LoginGraceTime 120
debug3: /etc/ssh/sshd_config:27 setting PermitRootLogin without-password
debug3: /etc/ssh/sshd_config:28 setting StrictModes yes
debug3: /etc/ssh/sshd_config:30 setting RSAAuthentication yes
debug3: /etc/ssh/sshd_config:31 setting PubkeyAuthentication yes
debug3: /etc/ssh/sshd_config:35 setting IgnoreRhosts yes
debug3: /etc/ssh/sshd_config:37 setting RhostsRSAAuthentication no
debug3: /etc/ssh/sshd_config:39 setting HostbasedAuthentication no
debug3: /etc/ssh/sshd_config:44 setting PermitEmptyPasswords no
debug3: /etc/ssh/sshd_config:48 setting ChallengeResponseAuthentication no
debug3: /etc/ssh/sshd_config:51 setting PasswordAuthentication no
debug3: /etc/ssh/sshd_config:63 setting X11Forwarding yes
debug3: /etc/ssh/sshd_config:64 setting X11DisplayOffset 10
debug3: /etc/ssh/sshd_config:65 setting PrintMotd no
debug3: /etc/ssh/sshd_config:66 setting PrintLastLog yes
debug3: /etc/ssh/sshd_config:67 setting TCPKeepAlive yes
debug3: /etc/ssh/sshd_config:74 setting AcceptEnv LANG LC_*
debug3: /etc/ssh/sshd_config:76 setting Subsystem sftp /usr/lib/openssh/sftp-server
debug3: /etc/ssh/sshd_config:87 setting UsePAM yes
debug1: sshd version OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014
debug3: Incorrect RSA1 identifier
debug3: Incorrect RSA1 identifier
debug3: Could not load "/etc/ssh/ssh_host_rsa_key" as a RSA1 public key
debug1: private host key: #0 type 1 RSA
debug3: Incorrect RSA1 identifier
debug3: Incorrect RSA1 identifier
debug3: Could not load "/etc/ssh/ssh_host_dsa_key" as a RSA1 public key
debug1: private host key: #0 type 1 RSA
debug3: Incorrect RSA1 identifier
debug3: Incorrect RSA1 identifier
debug3: Could not load "/etc/ssh/ssh_host_dsa_key" as a RSA1 public key
debug1: private host key: #1 type 2 DSA
debug3: Incorrect RSA1 identifier
debug3: Incorrect RSA1 identifier
debug3: Could not load "/etc/ssh/ssh_host_ed25519_key" as a RSA1 public key
debug1: private host key: #2 type 4 ED25519
debug1: rexec_argv[0]='/usr/sbin/sshd'
debug1: rexec_argv[1]='-Dddddddp22'
debug3: oom_adjust_setup
Set /proc/self/oom_score_adj from 0 to -1000
debug2: fd 3 setting O_NONBLOCK
debug1: Bind to port 22 on 0.0.0.0.
Server listening on 0.0.0.0 port 22.
debug2: fd 4 setting O_NONBLOCK
debug3: sock_set_v6only: set socket 4 IPV6_V6ONLY
debug1: Bind to port 22 on ::.
Server listening on :: port 22.
debug3: fd 5 is not O_NONBLOCK
debug1: Server will not fork when running in debugging mode.
debug3: send_rexec_state: entering fd = 8 config len 724
debug3: ssh_msg_send: type 0
debug3: send_rexec_state: done
debug1: rexec start in 5 out 5 newsock 5 pipe -1 sock 8
debug1: inetd sockets after dupping: 3, 3
Connection from 10.0.3.1 port 51448 on 10.0.3.223 port 22
debug1: Client protocol version 2.0; client software version OpenSSH_6.6.1p1 Ubuntu-2ubuntu2
debug1: match: OpenSSH_6.6.1p1 Ubuntu-2ubuntu2 pat OpenSSH_6.6.1* compat 0x04000000
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2
debug2: fd 3 setting O_NONBLOCK
debug2: Network child is on pid 558
debug3: preauth child monitor started
debug3: privsep user:group 101:65534 [preauth]
setgroups: Invalid argument [preauth]
debug1: do_cleanup [preauth]
debug3: PAM: sshpam_thread_cleanup entering [preauth]
debug1: monitor_read_log: child log fd closed
debug3: mm_request_receive entering
debug1: do_cleanup
debug3: PAM: sshpam_thread_cleanup entering
debug1: Killing privsep child 558
Pay special attention to the following lines from the above output:
debug1: Killing privsep child 558
indicating some issue with the privilege separation feature of OpenSSH (configuration directive UsePrivilegeSeparation yes), and:
debug3: privsep user:group 101:65534 [preauth]
setgroups: Invalid argument [preauth]
indicating that an attempt was made to change the effective GID of the process to 65534.
Reviewing the container configuration
Now have a look again at the stanzas from the container configuration file:
lxc.id_map = u 0 1000000000 10000
lxc.id_map = g 0 1000000000 10000
which tells LXC to create a user namespace (userns) with 10000 IDs for both, group and user IDs respectively, starting at 1000000000. Inside that namespace, the UID 1000000000 becomes 0, i.e. superuser.
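The arithmetic of such a mapping is easy to sketch. The following minimal Python model (the base/range values mirror the configuration above; the function name is just for illustration) shows why an ID like 65534 ends up with no host-side translation:

```python
def container_to_host(cid, ns_base=0, host_base=1000000000, count=10000):
    """Map a container UID/GID to the host ID, or None if it is unmapped."""
    if ns_base <= cid < ns_base + count:
        return host_base + (cid - ns_base)
    return None  # outside the mapped range

print(container_to_host(0))      # container root maps to host UID 1000000000
print(container_to_host(65534))  # None: "nobody/nogroup" is not mapped
```

With only 10000 IDs mapped, any attempt by sshd's privilege-separation code to switch to GID 65534 has no valid host-side translation, which is exactly the setgroups: Invalid argument failure seen above.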
The solution
There are effectively two solutions to the problem:
fix the container configuration and allow for at least 65535 subordinate IDs in the mapped range, or
set the configuration option UsePrivilegeSeparation no in sshd_config
Background
The script container-userns-convert, which is hosted on Launchpad (checkout with bzr branch lp:~serge-hallyn/+junk/nsexec) and written by Serge Hallyn, one of the important contributors to LXC, uses uidmapshift from the same repository and will assign only 10k subordinate IDs for the mapping by default.
This tripped me up. Normally I assign a block of 100000 IDs (as it's easier to read) or 65535 myself.
| Can't connect to the sshd in my unprivileged LXC guest. What to do? |
1,573,048,801,000 |
I have a Lxc container inside a Proxmox host. I need to prevent that the Proxmox admin can login inside my container. To do so I disabled root login in /etc/passwd. When the Proxmox admin uses pct enter to login as root, the message "This account is currently not available" is correctly displayed.
I wish to add my custom message. There is any way to customize this message?
|
If the root login has its shell set to nologin, instead of logging in it runs the nologin command.
Check out the manpage of nologin for more information.
As pointed out in the manpage, you can create a text file at /etc/nologin.txt to display a custom message. Otherwise it will use the default message.
| Custom message to "This account is currently not available" when user login is disabled |
1,573,048,801,000 |
I have a need to point devices to sync their clocks to my CentOS-7 server running inside LXC.
On the server I'm trying to use ntpd from ntp package, but open to other products.
This question is about the setting up the ntpd or equivalent on the server.
So far I tried this in /etc/ntp.conf
driftfile /var/lib/ntp/drift
restrict default nomodify notrap nopeer noquery
restrict 127.0.0.1
restrict ::1
server 127.127.1.1 iburst
fudge 127.127.1.1 stratum 8
disable monitor
There are TWO problem here.
ntpd terminates after logging cap_set_proc() failed to drop root privileges: Operation not permitted
ntpd is trying to adjust the local time. It fails, but it tries. If this was the only problem and I had error message in the log I could accept that.
The full output from /var/log/messages caused by attempt to start ntpd:
systemd: Starting Network Time Service...
ntpd[20154]: ntpd [email protected] Wed Apr 12 21:24:06 UTC 2017 (1)
ntpd[20155]: proto: precision = 0.120 usec
ntpd[20155]: ntp_adjtime() failed: Operation not permitted
systemd: Started Network Time Service.
ntpd[20155]: 0.0.0.0 c01d 0d kern kernel time sync enabled
ntpd[20155]: Listen and drop on 0 v4wildcard 0.0.0.0 UDP 123
ntpd[20155]: Listen and drop on 1 v6wildcard :: UDP 123
ntpd[20155]: Listen normally on 2 lo 127.0.0.1 UDP 123
ntpd[20155]: Listen normally on 3 eth0 hidden:A.B.C.D UDP 123
ntpd[20155]: Listen normally on 4 tun0 hidden:E.F.G.H UDP 123
ntpd[20155]: Listening on routing socket on fd #21 for interface updates
ntpd[20155]: 0.0.0.0 c016 06 restart
ntpd[20155]: ntp_adjtime() failed: Operation not permitted
ntpd[20155]: 0.0.0.0 c012 02 freq_set kernel 0.000 PPM
ntpd[20155]: 0.0.0.0 c011 01 freq_not_set
ntpd[20155]: cap_set_proc() failed to drop root privileges: Operation not permitted
systemd: ntpd.service: main process exited, code=exited, status=255/n/a
systemd: Unit ntpd.service entered failed state.
systemd: ntpd.service failed.
|
As discussed in comments, chrony recently received a new option -x to not attempt to alter the system clock, making it especially suitable for container operations. Alas, the first version (3.2) to receive this option wasn't thorough and still requests the Linux capability, so still fails.
Stracing chronyd from package chrony version 3.2-2.el7 in a CentOS7 LXC container (with non-CentOS host) with option -x, indeed the bugfix is not here:
# strace /usr/sbin/chronyd -x -d
[...]
[pid 571] capget({_LINUX_CAPABILITY_VERSION_3, 0}, NULL) = 0
[pid 571] capset({_LINUX_CAPABILITY_VERSION_3, 0}, {1<<CAP_NET_BIND_SERVICE|1<<CAP_SYS_TIME, 1<<CAP_NET_BIND_SERVICE|1<<CAP_SYS_TIME, 0}) = -1 EPERM (Operation not permitted)
So if you could prevent the unmodifiable chronyd binary from requesting a forbidden capability, it would run (that's what the 3.3 bugfix is about). Good news: it's possible with an LD_PRELOAD/dlsym() wrapper.
Compile the following code, called capsetwrapper.c, on another Linux system (the compilation was actually done on the Debian 9 host and the result ran on the CentOS7 container as-is), with structure definitions found there for example (these didn't change from kernel 3.10 either).
#define _GNU_SOURCE 1
#include <dlfcn.h>
#include <sys/capability.h>
int capset(cap_user_header_t hdrp, const cap_user_data_t datap) {
int (*orig_capset)(cap_user_header_t,const cap_user_data_t)=dlsym(RTLD_NEXT,"capset");
datap->effective &= ~ (1<<CAP_SYS_TIME);
datap->permitted &= ~ (1<<CAP_SYS_TIME);
return orig_capset(hdrp, datap);
}
using this specific way (to make a library suitable for LD_PRELOAD usage):
gcc -shared -fPIC -o libcapsetwrapper.so capsetwrapper.c -ldl
And it's working as seen here:
[root@centos7-amd64bis ~]# LD_PRELOAD=/root/libcapsetwrapper.so /usr/sbin/chronyd -x -d
2019-03-24T10:09:58Z chronyd version 3.2 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SECHASH +SIGND +ASYNCDNS +IPV6 +DEBUG)
2019-03-24T10:09:58Z Disabled control of system clock
2019-03-24T10:09:58Z Frequency 0.000 +/- 1000000.000 ppm read from /var/lib/chrony/drift
checking its capabilities while running:
# egrep '^Cap(Prm|Eff)' /proc/$(pidof chronyd)/status
CapPrm: 0000000000000400
CapEff: 0000000000000400
shows 0x400 which is the remaining 1<<CAP_NET_BIND_SERVICE as seen above.
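The mask can be sanity-checked by hand; the bit numbers below come from linux/capability.h (CAP_NET_BIND_SERVICE is bit 10, CAP_SYS_TIME is bit 25):

```python
CAP_NET_BIND_SERVICE = 10   # bind to ports < 1024
CAP_SYS_TIME = 25           # set the system clock

mask = 0x400  # CapPrm/CapEff value read from /proc/PID/status
print(bool(mask & (1 << CAP_NET_BIND_SERVICE)))  # True: still held
print(bool(mask & (1 << CAP_SYS_TIME)))          # False: masked by the wrapper
```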
To integrate this in the system:
place the libcapsetwrapper.so wrapper as /usr/local/lib/libcapsetwrapper.so
with systemctl edit chronyd, override the CAP_SYS_TIME check and the executable start with this:
[Unit]
ConditionCapability=
[Service]
ExecStart=
ExecStart=/bin/sh -c 'export LD_PRELOAD=/usr/local/lib/libcapsetwrapper.so; exec /usr/sbin/chronyd -x'
Sorry I wasn't able to reuse the $OPTIONS parameter (which is empty and should be receiving the -x option, loaded from /etc/sysconfig/chronyd), but with more systemd knowledge it should be possible.
Working result:
# systemctl status chronyd
● chronyd.service - NTP client/server
Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/chronyd.service.d
`-override.conf
Active: active (running) since Sun 2019-03-24 10:24:26 UTC; 13min ago
Docs: man:chronyd(8)
man:chrony.conf(5)
Process: 843 ExecStartPost=/usr/libexec/chrony-helper update-daemon (code=exited, status=0/SUCCESS)
Process: 839 ExecStart=/bin/sh -c export LD_PRELOAD=/usr/local/lib/libcapsetwrapper.so; exec /usr/sbin/chronyd -x (code=exited, status=0/SUCCESS)
Main PID: 841 (chronyd)
CGroup: /system.slice/chronyd.service
`-841 /usr/sbin/chronyd -x
What wasn't tested is if the default SELinux environment (not available here) allows the preload operation or if something more should be done with /usr/local/lib/libcapsetwrapper.so (be sure to use restorecon on it).
| How to run ntpd as a server only, without adjusting local machine time, under LXC |
1,573,048,801,000 |
I ran rm -rf on /var/cache/lxc, not realizing it was full of symlinks. I've lost a bunch of files, including most of /dev. I have a mlocate.db from 16 hours ago. How do I compare the list of files from mlocate.db to what still exists to get a complete list of what is missing? locate -e says it will give me files that still exist, I basically need the opposite.
edit:
Thank you cas. Took me a while, but I finally found the problem:
#mount | grep /var/cache/lxc
devtmpfs on /var/cache/lxc/fedora/x86_64/bootstrap/dev type devtmpfs (rw,nosuid,seclabel,size=74173740k,nr_inodes=18543435,mode=755)
proc on /var/cache/lxc/fedora/x86_64/bootstrap/proc type proc (rw,relatime)
proc on /var/cache/lxc/yakkety/rootfs-amd64/proc type proc (rw,relatime)
|
Make a backup copy of /var/lib/mlocate/mlocate.db now, before the mlocate updatedb cron job runs again.
Dump mlocate.db to a text file:
mlocate / | sort > /var/lib/mlocate/mlocate-old.txt
Update your mlocate.db. How to do this varies slightly according to what kind of unix clone or linux distribution you're using. e.g. on a Debian box, run /etc/cron.daily/mlocate, or just updatedb.mlocate.
Dump the new mlocate.db to a file:
mlocate / | sort > /var/lib/mlocate/mlocate-new.txt
See the changes with, e.g., diff -u /var/lib/mlocate/mlocate-{old,new}.txt.
The output is likely to be huge, so redirect to a file or pipe into less.
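If the diff output is too noisy, the deleted files alone (present in the old listing, gone from the new) can be extracted with comm. A self-contained demo with fake listings (the demo filenames are placeholders, not your real database dumps):

```shell
# comm -23 prints lines unique to the first sorted file, i.e. files that
# existed at the time of the old database but are gone now.
printf '%s\n' /dev/null /dev/sda /etc/passwd | sort > mlocate-old-demo.txt
printf '%s\n' /dev/sda /etc/passwd | sort > mlocate-new-demo.txt
comm -23 mlocate-old-demo.txt mlocate-new-demo.txt   # -> /dev/null
```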
| How do I compare mlocate.db to what exists now? |
1,573,048,801,000 |
Context
In my "quest" to get LXC to run on Raspbian I may be forced to disable loading the seccomp configuration at container startup, by commenting it out in /usr/share/lxc/config/debian.common.conf:
# Blacklist some syscalls which are not safe in privileged
# containers
# lxc.seccomp = /usr/share/lxc/config/common.seccomp
As (at the moment) only then does the container start (otherwise an error is raised).
Turning off such a basic security setting that is so heavily tied to containerization/sandboxing is, to some extent, defeating the purpose of LXC. From a security/stability point of view I would very much like to keep blacklisting most of the system calls when running the LXC containers (as configured by LXC defaults in /usr/share/lxc/config/common.seccomp):
2
blacklist
[all]
kexec_load errno 1
open_by_handle_at errno 1
init_module errno 1
finit_module errno 1
delete_module errno 1
Questions
Does not 'loading seccomp rules for LXC containers' yield:
significant * security issues?
any other technical (application or stability) issues?
*Assuming I am the only one using the "mother" system and its LXC containers (otherwise it would be evident..)
|
Well, the seccomp rules prevent a container from modifying the host kernel. Without them, UID 0 in a container can use kexec (if that even works on Raspbian, I'm not sure) to load a new kernel (apparently not to start it) and insmod/rmmod to load/unload modules, among other things, as these syscalls don't take user namespaces into account correctly.
Whether this is a significant security issue is up to you - you just need to keep in mind that now UID 0 in the container can effectively become UID 0 outside of the container, i.e. it's possible for root to escape the container by loading a crafted module for example.
| How dangerous is it not to load `seccomp` rules for LXC containers? |
1,573,048,801,000 |
By mistake, I deleted an lxc image file. The container is still running and the file is therefore not yet actually deleted until I stop the container. I'd like to avoid stopping the container as it is quite sensitive.
I tried to find the deleted file with:
for i in $(ls /proc/|grep '^[0-9]*$'); do ls -l /proc/$i/fd|grep delete; done
But this doesn't find my loop device. Same with a simple lsof | grep vm-
If I run lsof on another image that is not deleted, it doesn't show me any process using it: lsof /var/lib/vz/images/100/vm-100-disk-0.raw. Probably because it's open by the kernel, not a process.
As suggested in comment:
# losetup -l
NAME SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE DIO
/dev/loop1 0 0 1 0 /var/lib/vz/images/200/vm-200-disk-0.raw (deleted) 0
/dev/loop0 0 0 1 0 /var/lib/vz/images/100/vm-100-disk-0.raw 0
I tried:
debugfs /dev/mapper/pve-data
debugfs: cd images/200
debugfs: lsdel
Inode Owner Mode Size Blocks Time deleted
0 deleted inodes found.
I guess that's because it's not deleted yet. It is a bit risky to just let it get deleted and hope that it appears here and doesn't get corrupted (it's >300Gb)
Inside the container, mount gives:
/var/lib/vz/images/200/vm-200-disk-0.raw (deleted) on / type ext4 (rw,relatime,data=ordered)
Any solution apart from dumping the entire filesystem and recreating the container entirely? (Also, the host drive is almost full, I don't have enough space right now to create a second container next to it. I'm afraid that downsizing the storage would actually result in the actual deletion. :(
|
[not a complete answer, but too long to put in a comment]
You can find the inode of a (possibly deleted) backing file of a loop device with the LOOP_GET_STATUS or LOOP_GET_STATUS64 ioctls: it's the .lo_inode field of the loop_info and loop_info64 structs.
As I wasn't able to find any command line utility exposing that info, here is perl one-liner that should do it:
perl -le 'ioctl STDIN, 0x4C05, $s = pack "a512" or die "ioctl: $!"; print unpack "x[Q]Q", $s' </dev/loop1
1179684
More info in the loop(4) manpage and in the /usr/include/linux/loop.h file.
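The same field extraction can be reproduced without an ioctl by unpacking a synthetic loop_info64 buffer. The struct begins with two __u64 fields, lo_device then lo_inode; the 232-byte size and little-endian layout used here are assumptions matching x86-64:

```python
import struct

# Fake LOOP_GET_STATUS64 result: lo_device, lo_inode, rest of the
# 232-byte struct zeroed out.
fake = struct.pack("<QQ", 0x801, 1179684) + b"\0" * 216
lo_device, lo_inode = struct.unpack_from("<QQ", fake)
print(lo_inode)  # the backing file's inode, as in the perl one-liner
```

This mirrors what the perl `unpack "x[Q]Q"` does: skip one 64-bit word, read the next.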
But I don't know if there's any safe way to resurrect a deleted file by its inode: I don't think that you can use debugfs(8) on a mounted live file system without corrupting it beyond repair, and there's no way to create a link to a deleted file.
The only safe way I can think of is to copy the whole loop device / partition while it's still live:
cp --sparse=always /dev/loop1 /path/where/to/save/it
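Note that --sparse=always keeps the copy from using more host disk than the data actually allocated, which matters with the host drive nearly full. A quick, standalone demonstration with a hole-only file (the filenames are placeholders):

```shell
# A 100 MiB file that is all hole: apparent size 100M, ~0 blocks on disk.
truncate -s 100M sparse-demo.img
cp --sparse=always sparse-demo.img sparse-copy.img
stat -c '%s bytes, %b blocks' sparse-demo.img sparse-copy.img
du -h sparse-demo.img sparse-copy.img
```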
| Recover deleted but mounted loop file/filesystem |
1,573,048,801,000 |
NB: This question relates directly to this one and in particular to this answer, but it's not a duplicate.
I'd like to share a folder from the host with the guest, but make sure that the guest's root cannot accidentally write to that folder.
The folder in my case is /toolchains, both on the host and the guest. And it contains a number of GCC-based toolchains used for targeting different platforms.
Now the sharing itself is trivial:
lxc config device add CONTAINER toolchains disk source=/toolchains path=toolchains
Technically it seems to be a bind-mount. However, inside the container a remount to make it readonly fails:
# mount -o remount,ro /toolchains
mount: cannot mount /dev/sda1 read-only
Unfortunately this doesn't provide a great level of detail.
For good measure I also tried this alternative:
# mount -o remount,ro,bind /toolchains
mount: cannot mount /dev/sda1 read-only
which was mentioned in mount(8) under mount --bind,ro foo foo ...
What options do I have to achieve what I want? I.e. share the host folder as readonly with the guest. Should I use some kind of union FS here or is my only true chance of getting a readonly mount to 1.) use a CIFS share or 2.) use some hook to bind-mount the host folder via the mount command from the host into the guest root?
I'm using LXC 2.20.
|
What happens if you bind-mount your directory read-only on the host and then share it with the LXC container?
mount --bind /toolchains /toolchains-ro
mount -o remount,ro,bind /toolchains-ro
lxc config device add CONTAINER toolchains disk source=/toolchains-ro path=toolchains
Technically everything what is read-only on the host level should remain read-only in the container.
| Mount host folder to guest with LXC 2.x, but do it readonly? |
1,573,048,801,000 |
I want to run a program inside a container with a specific user. By default, when using lxc-attach, the user is root, but I don't want to execute the program as root.
Command I want to execute:
lxc-attach -n container -- python3 some_program.py
When attached to the container, I want the user to be uid=1000 not uid=0 (root)
I know it's possible with lxc-execute, with lxc.init_uid and lxc.init_gid [src: LXC.CONTAINER.CONF(5)], but with lxc-execute I don't have network connection (because the container is not running?).
|
You have to use su to change the user:
$ sudo lxc-attach -n test -- su ubuntu -c 'whoami'
ubuntu
Your command would look like this (if you don't know the username):
lxc-attach -n container -- 'su $(getent passwd 1000| cut -f1 -d:) -c "python3 some_program.py"'
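The getent-based username lookup in that command can be sanity-checked on its own (using UID 0 here, since UID 1000 may not exist on every system):

```shell
# Resolve a username from a numeric UID, as the lxc-attach command does:
uid=0
name=$(getent passwd "$uid" | cut -d: -f1)
echo "$name"   # root on virtually every system
```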
| Running a program inside an LXC container with a specific user |
1,573,048,801,000 |
If I run top on the guest, the load average values seem to be exactly the same as running top on the host.
Is a Docker (LXC) guest's load average the same as the host's load average?
|
Looking at the code for /proc/loadavg - yes, it's the same. The load average is read out from global variables.
seq_printf(m, "%lu.%02lu %lu.%02lu %lu.%02lu %ld/%d %d\n",
LOAD_INT(avnrun[0]), LOAD_FRAC(avnrun[0]),
LOAD_INT(avnrun[1]), LOAD_FRAC(avnrun[1]),
LOAD_INT(avnrun[2]), LOAD_FRAC(avnrun[2]),
nr_running(), nr_threads,
task_active_pid_ns(current)->last_pid);
http://lxr.free-electrons.com/source/fs/proc/loadavg.c#L13
void get_avenrun(unsigned long *loads, unsigned long offset, int shift)
{
loads[0] = (avenrun[0] + offset) << shift;
loads[1] = (avenrun[1] + offset) << shift;
loads[2] = (avenrun[2] + offset) << shift;
}
http://lxr.free-electrons.com/source/kernel/sched/proc.c#L79
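Reading it back from user space is a one-liner; the parsed fields match the seq_printf format above, and (as the answer explains) they come out identical inside or outside a container:

```python
# /proc/loadavg: "1min 5min 15min running/threads last_pid"
with open("/proc/loadavg") as f:
    one, five, fifteen, runq, last_pid = f.read().split()

print(float(one), float(five), float(fifteen), runq, int(last_pid))
```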
| Is an LXC guest load average the same as the host's load average? |
1,573,048,801,000 |
I want to use LXC to 'container' plugins my application is loading. Am I able to do this through C? I have been Googling a lot about it, but there don't seem to be any headers, only scripts that can be called through the terminal.
I know I can execute the scripts inside C, but I'd rather use the headers if there are any.
|
If you look at the LXC homepage, you'll notice liblxc referred to, implying there's an ABI, and if you look further down, you'll notice a link to the C API documentation.
That page looks empty at first because it has been done (rather lazily, I think) with doxygen. However, if you start clicking around you'll find stuff. Just keep in mind, again, that it's auto-generated from source and perhaps not a huge effort was put into annotating that in a doxygen friendly way. Another perhaps confusing thing is all the actual functions are documented via function pointers in data structures (looks like an OO-ish interface).
But if you already know how to use LXC on the command line you should be able to deduce some correlations.
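For completeness, here is a minimal sketch of what driving a container through liblxc looks like. The container name and the download-template arguments are placeholders, this has not been compiled against any particular setup, and it needs appropriate privileges; link with -llxc:

```c
#include <stdio.h>
#include <lxc/lxccontainer.h>

int main(void) {
    /* Allocate a container handle; NULL means the default config path. */
    struct lxc_container *c = lxc_container_new("plugin-sandbox", NULL);
    if (!c) {
        fprintf(stderr, "failed to set up container struct\n");
        return 1;
    }

    if (!c->is_defined(c)) {
        /* Create a rootfs with the "download" template; the -d/-r/-a
         * arguments are illustrative, adjust to your distro/arch. */
        if (!c->createl(c, "download", NULL, NULL, LXC_CREATE_QUIET,
                        "-d", "ubuntu", "-r", "trusty", "-a", "amd64", NULL)) {
            fprintf(stderr, "failed to create container rootfs\n");
            lxc_container_put(c);
            return 1;
        }
    }

    /* Start it (0 = run the container's init, NULL = default command). */
    if (!c->start(c, 0, NULL)) {
        fprintf(stderr, "failed to start container\n");
        lxc_container_put(c);
        return 1;
    }
    printf("state: %s, init pid: %d\n", c->state(c), c->init_pid(c));

    c->shutdown(c, 30);   /* clean shutdown with a 30 s timeout */
    lxc_container_put(c); /* drop our reference */
    return 0;
}
```

The method-through-function-pointer calls (c->start(c, ...)) are the OO-ish interface mentioned above.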
| Can you use LXC through C? |
1,573,048,801,000 |
Debian 12.2 in an unprivileged LXC (proxmox). It's almost 11:45 AM local time. At 5:00 AM in the morning, cron started a script:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
jan 26633 0.0 0.0 8500 2056 ? S 05:00 0:00 /usr/sbin/CRON -f
I'm using pgrep -f CRON -O 600 and I expect pgrep to return PID 26633, because the process is way older than 600 seconds. But pgrep returns nothing. If I leave out the -O, it correctly returns the PID.
Doing the same on the host machine, i.e. outside of the LXC, it works correctly.
As pgrep uses procps, I looked there.
ps -o etime -p $pid in the LXC: 441077225-02:04:48 (wrong, because since 5:00, ~6:45h passed)
ps -o etime -p $pid on the host: 06:43:29 (correct)
Would that be a bug in procps or does it rather have to do with LXC?
|
LXC mounts a fake /proc/uptime to simulate the uptime of the container rather than the host's uptime, since this property is not namespaced. Here on a (root) LXC container:
# findmnt /proc/uptime
TARGET SOURCE FSTYPE OPTIONS
/proc/uptime lxcfs[/proc/uptime] fuse.lxcfs rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other
But there's no such provision for each process' stat pseudo-file in /proc/PID/stat.
So when pgrep compares time, it makes the difference between current /proc/uptime and the starttime field of /proc/PID/stat as described in proc(5) and procps sources:
PIDS_TIME_ELAPSED, // real * derived from stat: (/proc/uptime - start_time) / hertz
Since the container's /proc/uptime is faked by LXC (I guess by using host's /proc/uptime minus LXC container's pid 1 start time), the final result gets the start time of the container subtracted, resulting in a negative value initially (and for some time, but still wrong anyway if/when it becomes positive later) which is unexpected, since the system uptime is supposed to be greater than the target's process start_time (possibly adjusted by a factor of $(getconf CLK_TCK)). procps tools cannot cope with this correctly.
I don't know about a workaround: if /proc/uptime is restored to its host value, then pgrep -O or ps -o etime -p will compute the correct value, but any tool using the system uptime will now get the host's uptime instead of the container's (faked) uptime.
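The computation procps performs can be reproduced in a few lines. On a host (or in a container with an honest /proc/uptime) the result is non-negative; a negative value is exactly the symptom described above:

```python
import os

hertz = os.sysconf("SC_CLK_TCK")

with open("/proc/uptime") as f:
    uptime = float(f.read().split()[0])

with open("/proc/self/stat") as f:
    # comm (field 2) may contain spaces, so split after the closing ')'
    rest = f.read().rpartition(")")[2].split()

starttime = int(rest[19]) / hertz    # field 22 of /proc/PID/stat, in ticks
elapsed = uptime - starttime         # what ps -o etime / pgrep -O compare
print(round(elapsed, 2))             # seconds this process has been alive
```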
| Why does "pgrep -O 600" fail in an LXC? procps bug? |
1,573,048,801,000 |
I have a bridge network br0 that has the IP 10.0.0.1/24, and a client with the IP 10.0.0.2 connected to it. I also have a VPN connection (tun0) that has an IP assigned by a DHCP, so its IP may vary. The VPN connection is not the default route of the system, therefore all the traffic on the device goes through the regular eth0 route (not the VPN one). (IPv4 forwarding is enabled on the host)
What I'm trying to achieve is that any client connected to br0 (in my case an LXC container) with the gateway set to 10.0.0.1 should have its traffic NATed and routed through the VPN connection.
As it is not possible to directly attach the tun0 device to br0, I tried to get the traffic forwarded by using iptables.
So the steps I believe I have to take is to force the traffic from br0 get masqueraded then get forwarded to tun0 by using these commands:
iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE
iptables -A FORWARD -i br0 -o tun0 -j ACCEPT
I added also state tracking, but it didn't work:
iptables -A INPUT -i br0 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A INPUT -i tun0 -m state --state RELATED,ESTABLISHED -j ACCEPT
To add a secondary default route I added in the file " /etc/iproute2/rt_tables" a "1 vpnout" entry, and added the default route:
ip route add default dev vpn-out table vpnout
which didn't work, and the next commands gave the same result
ip route add default via dev vpn-out table vpnout
ip rule add from 10.0.0.0/24 table vpnout
ip rule add to 10.0.0.0/24 table vpnout
But even after that I still cannot ping 8.8.8.8 from the client connected to br0. Is there something that I'm missing?
|
So in the end I finally found the missing settings, thanks to the suggestions of @Fiisch.
Here is the final commands to make it work:
iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE
iptables -A FORWARD -i br0 -o tun0 -j ACCEPT
iptables -A FORWARD -i tun0 -o br0 -j ACCEPT
iptables -A INPUT -i tun0 -m state --state RELATED,ESTABLISHED -j ACCEPT
ip route add default dev tun0 table vpnout
ip route add 10.0.0.0/24 dev br0 table vpnout
ip rule add from 10.0.0.0/24 table vpnout
ip rule add to 10.0.0.0/24 table vpnout
| How to route traffic from br0 to tun0 when tun0 is not the default route of the system |
1,573,048,801,000 |
I know that the command $ lxc-snapshot exists somewhere, but it seems that it's not available on Debian? Forgive my Linux ignorance - but how do I get a hold of this capability? Is there an alternative for Debian?
Also, if you need this info, the result of $ uname -a:
Linux debian 3.2.0-4-amd64 #1 SMP Debian 3.2.65-1+deb7u1 x86_64 GNU/Linux
|
lxc-snapshot is listed in the version of lxc in testing/unstable, which at this time of writing are both the same, namely 1:1.0.6-6.
You can see the list of packages at
File list of package lxc/jessie/amd64.
Backporting lxc should not be a problem. It does not have much by way of dependencies. Comment if you need more details about this. Generic instructions are at
How can I install more recent versions of software than what Debian provides?
I haven't checked to see if you need to upgrade your kernel, but I think 3.2 probably offers sufficiently recent support for the 1.0.6 version of the LXC userspace tools.
| Snapshotting a Linux container in Debian |
1,573,048,801,000 |
I'm trying to create a nice playground for Docker in Vagrant based on Vagrant's precise64 box. (Code is available at GitHub: rfkrocktk/docker-vagrant-playground)
Here's my Puppet provisioning dependencies for the instance:
# Puppet for Docker Vagrant Box
node default {
# apt
class { 'apt': }
apt::source { 'docker':
location => "http://get.docker.io/ubuntu",
key => "36A1D7869245C8950F966E92D8576A8BA88D21E9",
release => "docker",
repos => "main",
include_src => false
}
package { 'raring-kernel':
name => 'linux-image-generic-lts-raring',
ensure => present
}
package { 'lxc-docker':
require => [apt::source["docker"], Package["raring-kernel"]]
}
}
(This follows Docker's guide on installing on Ubuntu 12.04 LTS.)
Unfortunately, I'm running into problems with this, as Docker more-or-less requires a later kernel (>=3.9), which is why the linux-image-generic-lts-raring package is declared as a dependency. It's also necessary to be running this kernel to be able to use LXC (and by extension, Docker) properly.
After running vagrant up or vagrant provision, I restart the box to be able to boot into the new kernel.
Unfortunately, the VirtualBox Guest Additions don't seem to be registered with
DKMS properly:
Failed to mount folders in Linux guest. This is usually because
the "vboxsf" file system is not available. Please verify that
the guest additions are properly installed in the guest and
can work properly. The command attempted was:
mount -t vboxsf -o uid=`id -u vagrant`,gid=`getent group vagrant | cut -d: -f3` /vagrant /vagrant
mount -t vboxsf -o uid=`id -u vagrant`,gid=`id -g vagrant` /vagrant /vagrant
Is there a simple way to get a box based on precise64 with the Raring kernel running and installed properly? I'd like to be able to quickly get going with a virtualized environment ready for Docker experimentation.
|
Evidently, Phusion packages their own Ubuntu 12.04 Vagrant boxes which run the required 3.8 kernel to make it easier to use Docker. They also provide the memory and swap accounting kernel init parameters to make these features available to LXC.
To use these boxes, simply update the box name and URL in your Vagrantfile:
# ...
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
config.vm.box = "phusion-open-ubuntu-12.04-amd64"
config.vm.box_url = "https://oss-binaries.phusionpassenger.com/vagrant/boxes/ubuntu-12.04.3-amd64-vbox.box"
# ...
end
Note that it's still necessary to provision the Docker package and repository as above.
Note also that in order to resolve the Hiera warning, a solution can be found in this answer on another question.
Now it should be extremely easy to start playing around with Docker by using Vagrant:
$ git clone [email protected]:rfkrocktk/docker-vagrant-playground.git
$ cd docker-vagrant-playground
$ vagrant up
$ vagrant ssh
Hopefully this helps someone in the future.
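After the box boots into the new kernel, you can sanity-check the version before installing Docker. A sketch (the 3.8 minimum comes from the context above; `sort -V` does the version comparison):

```shell
# Check whether the running kernel is new enough for Docker.
required="3.8"
kver=$(uname -r | cut -d- -f1)
# sort -V orders version strings; if the required version sorts first
# (or is equal), the running kernel is new enough.
if [ "$(printf '%s\n%s\n' "$required" "$kver" | sort -V | head -n1)" = "$required" ]; then
    msg="kernel $kver >= $required: ok"
else
    msg="kernel $kver is too old for Docker"
fi
echo "$msg"
```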
| Creating a Vagrant box with Docker installed |
1,573,048,801,000 |
I'm using LXC containers, and resolving CONTAINERNAME.lxd to the IP of the specified container, using:
sudo resolvectl dns lxdbr0 $bridge_ip
sudo resolvectl domain lxdbr0 '~lxd'
This works great! But the changes don't persist over a host reboot.
(I've described 'things I've tried' as answers to this question, which have varying degrees of success.)
I'm on Pop!_OS 22.04, which is based on Ubuntu 22.04.
How should I be making these resolvectl changes persistent across reboots?
|
The LXD docs describe a solution:
Put this in /etc/systemd/system/lxd-dns-lxdbr0.service:
[Unit]
Description=LXD per-link DNS configuration for lxdbr0
BindsTo=sys-subsystem-net-devices-lxdbr0.device
After=sys-subsystem-net-devices-lxdbr0.device
[Service]
Type=oneshot
ExecStart=/usr/bin/resolvectl dns lxdbr0 BRIDGEIP
ExecStart=/usr/bin/resolvectl domain lxdbr0 '~lxd'
ExecStopPost=/usr/bin/resolvectl revert lxdbr0
RemainAfterExit=yes
[Install]
WantedBy=sys-subsystem-net-devices-lxdbr0.device
(Substituting your own BRIDGEIP, from lxc network show lxdbr0 | grep ipv4.address)
Then apply those settings without having to reboot using:
sudo systemctl daemon-reload
sudo systemctl enable --now lxd-dns-lxdbr0
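The BRIDGEIP value can also be filled in programmatically rather than copied by hand. A sketch; the YAML shape shown for `lxc network show lxdbr0` is an assumption, so adjust the pattern if yours differs:

```shell
# Sample output in the shape `lxc network show lxdbr0` prints (assumed format).
# In practice you would use: sample=$(lxc network show lxdbr0)
sample='config:
  ipv4.address: 10.155.92.1/24
  ipv6.address: none'

# Extract the bridge IP, stripping the /24 prefix length, for use as BRIDGEIP.
bridge_ip=$(printf '%s\n' "$sample" | awk '/ipv4.address/ {split($2, a, "/"); print a[1]}')
echo "$bridge_ip"
```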
| Persist resolvectl changes across reboots |
1,573,048,801,000 |
From everything I have read in the unshare and nsenter man pages, I should be able to bind-mount a directory to itself, mount --make-private the directory, and then use files within that directory to hold refs for persistent namespaces. Here is what I'm doing, basically the same as the example in man unshare but with different directories and using --pid=file in addition to --mount=file
Terminal 1:
# mkdir -p /mnt/jails/debian/bookworm/.ns
# mount --bind /mnt/jails/debian/bookworm/.ns /mnt/jails/debian/bookworm/.ns
# touch /mnt/jails/debian/bookworm/.ns/{mount,pid}
# mount --make-private /mnt/jails/debian/bookworm/.ns
# unshare --fork --mount-proc --mount=/mnt/jails/debian/bookworm/.ns/mount --pid=/mnt/jails/debian/bookworm/.ns/pid /bin/sh & echo $!; fg
[1] 151299
151299
sh-4.4# echo $$
1
sh-4.4# grep NS /proc/self/status
NStgid: 3
NSpid: 3
NSpgid: 3
NSsid: 0
So far so good, the container above is working. While that runs:
Terminal 2:
# nsenter --mount=/mnt/jails/debian/bookworm/.ns/mount --pid=/mnt/jails/debian/bookworm/.ns/pid /bin/sh
sh-4.4# ps ax
Error, do this: mount -t proc proc /proc
# ls /proc/1/exe -l
lrwxrwxrwx. 1 root root 0 Jul 21 18:49 /proc/1/exe -> /usr/bin/bash
sh-4.4# mount -t proc proc /proc
sh-4.4# ps ax|head
<shows pids from the host OS, not from the container>
sh-4.4# grep NS /proc/self/status
NStgid: 156987
NSpid: 156987
NSpgid: 156987
NSsid: 156921
I've also tried this in Terminal 2 (note the pid from Terminal 1) with the exact same results:
# nsenter -t 151299 -a /bin/sh
sh-4.4# ps ax
Error, do this: mount -t proc proc /proc
# ls /proc/1/exe -l
lrwxrwxrwx. 1 root root 0 Jul 21 18:49 /proc/1/exe -> /usr/bin/bash
sh-4.4# mount -t proc proc /proc
sh-4.4# ps ax|head
<shows pids from the host OS, not from the container>
sh-4.4# grep NS /proc/self/status
NStgid: 155356
NSpid: 155356
NSpgid: 155356
NSsid: 143538
For some reason nsenter is entering the host OS's PID space. However, it does seem to see the namespace of the correct /proc directory, but it is invalid for sh in Terminal 2 because the PID namespace isn't working, so (I think) that's why ps ax gives an error. Also, I've tried both with and without --mount-proc
Questions:
How can I enter the PID namespace from Terminal 1?
What am I doing wrong here?
(Host linux kernel is 5.18 running Oracle Linux 8.)
|
There is a bug in util-linux versions before v2.36 that was patched in this commit:
0d5260b66 unshare: Fix PID and TIME namespace persistence
Use a version of util-linux containing the patch!
Here is a test-script to verify if you have this bug:
umount -l /mnt/jails/*/*/.ns/* /mnt/jails/*/*/.ns/
sleep 1
mkdir -p /mnt/jails/debian/bookworm/.ns
mount --bind /mnt/jails/debian/bookworm/.ns /mnt/jails/debian/bookworm/.ns
touch /mnt/jails/debian/bookworm/.ns/{mount,pid}
mount --make-private /mnt/jails/debian/bookworm/.ns
unshare --fork --mount-proc --mount=/mnt/jails/debian/bookworm/.ns/mount --pid=/mnt/jails/debian/bookworm/.ns/pid sleep 99 &
upid=$!
sleep 1
if nsenter --mount=/mnt/jails/debian/bookworm/.ns/mount --pid=/mnt/jails/debian/bookworm/.ns/pid [ -d /proc/self ]; then
kill $upid
echo worked
exit 0
else
kill $upid
echo didnt
exit 1
fi
| Why is the Linux command `unshare --pid=p --mount=m` not creating a persistent namespace? |
1,573,048,801,000 |
I am running OpenStack on LXC containers and I found that inside my LXC container the network is very slow, but from the host it's very fast
HOST
[root@ostack-infra-01 ~]# time wget http://mirror.cc.columbia.edu/pub/linux/centos/7.5.1804/updates/x86_64/repodata/0d7e660988dcc434ec5dec72067655f9b0ef44e6164d3fb85bda2bd1b09534db-primary.sqlite.bz2
--2018-08-04 00:24:09-- http://mirror.cc.columbia.edu/pub/linux/centos/7.5.1804/updates/x86_64/repodata/0d7e660988dcc434ec5dec72067655f9b0ef44e6164d3fb85bda2bd1b09534db-primary.sqlite.bz2
Resolving mirror.cc.columbia.edu (mirror.cc.columbia.edu)... 128.59.59.71
Connecting to mirror.cc.columbia.edu (mirror.cc.columbia.edu)|128.59.59.71|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4515677 (4.3M) [application/x-bzip2]
Saving to: ‘0d7e660988dcc434ec5dec72067655f9b0ef44e6164d3fb85bda2bd1b09534db-primary.sqlite.bz2’
100%[===========================================================================================================================================>] 4,515,677 23.1MB/s in 0.2s
2018-08-04 00:24:09 (23.1 MB/s) - ‘0d7e660988dcc434ec5dec72067655f9b0ef44e6164d3fb85bda2bd1b09534db-primary.sqlite.bz2’ saved [4515677/4515677]
real 0m0.209s
user 0m0.008s
sys 0m0.014s
LXC container on same host
[root@ostack-infra-01 ~]# lxc-attach -n ostack-infra-01_neutron_server_container-fbf14420
[root@ostack-infra-01-neutron-server-container-fbf14420 ~]# time wget http://mirror.cc.columbia.edu/pub/linux/centos/7.5.1804/updates/x86_64/repodata/0d7e660988dcc434ec5dec72067655f9b0ef44e6164d3fb85bda2bd1b09534db-primary.sqlite.bz2
--2018-08-04 00:24:32-- http://mirror.cc.columbia.edu/pub/linux/centos/7.5.1804/updates/x86_64/repodata/0d7e660988dcc434ec5dec72067655f9b0ef44e6164d3fb85bda2bd1b09534db-primary.sqlite.bz2
Resolving mirror.cc.columbia.edu (mirror.cc.columbia.edu)... 128.59.59.71
Connecting to mirror.cc.columbia.edu (mirror.cc.columbia.edu)|128.59.59.71|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4515677 (4.3M) [application/x-bzip2]
Saving to: ‘0d7e660988dcc434ec5dec72067655f9b0ef44e6164d3fb85bda2bd1b09534db-primary.sqlite.bz2’
100%[===========================================================================================================================================>] 4,515,677 43.4KB/s in 1m 58s
2018-08-04 00:26:31 (37.3 KB/s) - ‘0d7e660988dcc434ec5dec72067655f9b0ef44e6164d3fb85bda2bd1b09534db-primary.sqlite.bz2’ saved [4515677/4515677]
real 1m59.121s
user 0m0.002s
sys 0m0.361s
I don't have any fancy configuration or limits set for networking. I have another host which works fine at max speed. What do you think is wrong here?
kernel version Linux ostack-infra-01 3.10.0-862.3.3.el7.x86_64 #1 SMP
CentOS 7.5
|
Solution
The host machine had the following setting, which was flooding my dmesg with kernel error stack traces (7 is the debug log level).
[root@lxc ~]# cat /proc/sys/kernel/printk
7 4 1 3
I have changed it to:
[root@lxc ~]# cat /proc/sys/kernel/printk
3 4 1 3
Later I found I had --checksum-fill rules in iptables, which were generating lots of checksum errors and flooding dmesg with kernel stack traces.
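To make the quieter console log level survive reboots, the usual mechanism is a sysctl drop-in. A sketch; it writes to a temp file so it can run unprivileged, but the real path (/etc/sysctl.d/10-printk.conf, the filename being a convention) needs root:

```shell
# Persist the quieter console log level across reboots with a sysctl drop-in.
conf_file=$(mktemp)            # in practice: /etc/sysctl.d/10-printk.conf
printf 'kernel.printk = 3 4 1 3\n' > "$conf_file"
cat "$conf_file"
# then: sudo sysctl -p /etc/sysctl.d/10-printk.conf   (or simply reboot)
```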
| LXC container network speed issue |
1,573,048,801,000 |
I have my rootfs snapshot in /mnt/mydisk/my_test_snapshot. It is completely writable copy of my current system.
I want to boot a virtual machine (possibly an LXC container or maybe something else) that will use /mnt/mydisk/my_test_snapshot as its root (/) folder.
In the end, I need to boot a virtual machine that:
uses a regular folder as its root filesystem.
can mount some permitted folders in its virtual environment
will use bridged networking (with zero configuration on the host's iptables)
hopefully will have a X window to use GUI applications within
Is there any LXC recipe (or something else) for that purpose?
Purpose
It might be re-inventing Docker or something, I don't know, but I need the following good bits:
Since we are using BTRFS for the root partition, we can take a snapshot of the current system at no cost, boot a virtual machine, and play around (by installing new software, deleting something, etc.)
If we liked what we did in the virtual machine, we can boot our real operating system from that snapshot (as modified by the VM)
We can clone any VM at no cost (time, CPU, or disk space)
We can use this virtual machine as a time machine, for example one that serves a database server from a backup. The good part is we can bring all services online at once, within a minute. Good for disaster recovery.
We can use it for specific applications (that we use for business) that have to run any time we need them, no matter what upgrades or operating system changes we make. This completely sandboxes each application without the disk-space cost and brings the BTRFS benefits (like snapshotting, etc.)
|
As a partial answer, I created the following tool to create an LXC container from a subvolume: https://github.com/aktos-io/lxc-to-the-future
if [[ "$(grep br0 /etc/network/interfaces)" == "" ]]; then
cat <<ONETIME
ERROR: No br0 bridge device found in /etc/network/interfaces file.
Edit your /etc/network/interfaces file and add/replace the following section
in place of "eth0" section
auto br0
iface br0 inet dhcp
bridge-ifaces eth0
bridge-ports eth0
up ifconfig eth0 up
iface eth0 inet manual
Then run the following:
sudo ifup br0
ONETIME
exit
fi
echo "creating the container directory: $NAME"
mkdir $DIR/$NAME
echo "creating a writable snapshot of given subvolume"
btrfs sub snap $SUBVOL $DIR/$NAME/rootfs
echo "emptying the /etc/fstab file"
echo > $DIR/$NAME/rootfs/etc/fstab
echo "creating the config file"
cat <<CONFIG > $DIR/$NAME/config
# Distribution configuration
lxc.include = /usr/share/lxc/config/debian.common.conf
lxc.arch = x86_64
# Container specific configuration
lxc.rootfs = /var/lib/lxc/$NAME/rootfs
lxc.rootfs.backend = dir
lxc.utsname = $NAME
# Network configuration
lxc.network.type = veth
lxc.network.link = br0
lxc.network.hwaddr = 00:16:3e:7e:11:ac
lxc.network.flags = up
CONFIG
| How to boot a virtual machine from a regular folder? |
1,573,048,801,000 |
I'm playing around with LXC on my Arch Linux workstation as a learning experience. I'm following the guide on the LXC page on the Archwiki and setting up a static ip for the container. This is what my network config is like:
/etc/netctl/lxcbridge
---------------------
Description="LXC Bridge"
Interface=br0
Connection=bridge
BindsToInterfaces=(enp1s0)
IP=static
Address=('192.168.0.20/24')
Gateway='192.168.0.1'
DNS=('192.168.0.1')
And the container config:
/var/lib/lxc/testcontainer/config
---------------------------------
lxc.network.type = veth
lxc.network.link = br0
lxc.network.ipv4 = 192.168.0.198/24
However, according to lxc-ls -f it gets assigned an extra IP address.
NAME STATE AUTOSTART GROUPS IPV4 IPV6
testcontainer RUNNING 0 - 192.168.0.198, 192.168.0.220 -
I only want 192.168.0.198. I'm not sure why it's getting the second one assigned to it.
|
So after a little bit more research I've determined why this is happening. I'm using the default Ubuntu and Debian templates to create the containers, and their networking is set up to use DHCP to ask for an IP from the router. So initially the static IP is set via the LXC container config, and then when the container starts it queries the router (or whatever DHCP server you have) for a secondary IP that is then assigned to it.
The most logical way to stop this is likely to just assign the static IP inside the container. So on Debian-based templates, edit /etc/network/interfaces:
auto eth0
iface eth0 inet static
address 192.168.0.15
netmask 255.255.255.0
gateway 192.168.0.1
And then remove the ipv4 line from the lxc config /var/lib/lxc/testcontainer/config:
lxc.network.type = veth
lxc.network.link = br0
Another method is to let the host set the IP by keeping the ipv4 line in /var/lib/lxc/testcontainer/config and telling the container explicitly not to touch the interface by setting it to manual:
auto eth0
iface eth0 inet manual
Apparently there are some issues with this second method if the host is suspended and then resumed. Probably best to use the first method.
| LXC container gets two IP addresses assigned to it |
1,573,048,801,000 |
I want to create a set of containers for simulating network traffic. Inside each of the containers, I would like to set a different network delay. Right now I am doing it manually using this command, after logging into the container:
sudo tc qdisc add dev eth0 root netem delay 128ms
I want it to be done automatically. Right now I am thinking about doing it like:
ssh root@container "my_commands"
but I am thinking about creating an instance of the container automatically (note that I am going to create many containers, each having a different delay), so that later I would only have to start it.
What would be the correct way to configure it?
|
lxc-attach allows you to run a command in a container without logging in.
lxc-attach -n container-name -- <command>
So I guess you need to run:
lxc-attach -n container-name -- sudo tc qdisc add dev eth0 root netem delay 128ms
the output of the command, if any, is redirected to your standard outputs.
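Automating this over a set of containers is then just a loop. A sketch; the container names and delays here are hypothetical, and DRY_RUN=1 only prints the commands so you can review them before running for real:

```shell
# Set a different netem delay in each container without logging in.
DRY_RUN=1
cmds=""
for spec in node1:50ms node2:100ms node3:128ms; do
    name=${spec%%:*}
    delay=${spec##*:}
    cmd="lxc-attach -n $name -- tc qdisc add dev eth0 root netem delay $delay"
    cmds="$cmds$cmd
"
    if [ "$DRY_RUN" = 1 ]; then echo "$cmd"; else $cmd; fi
done
```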
| What is the correct way to start a container with a given command? |
1,573,048,801,000 |
I rent a dedicated server and want to use LXC instead of KVM. I want to buy IPs for every single container. For now I have two external IPs:
193.X.X.30/32
213.X.X.31/32
I prefer a routing solution instead of NAT.
My last try is like this:
-------------------
| INTERNET |
-------------------
|
V
----------------------------------------------
| ------------------- ------- [HOST] |
| | br0: 193.X.X.30 | <--- | em1 | |
| ------------------- ------- |
| | |
| V |
| ------------------- |
| | vethXXXX | |
| ------------------- |
| | |
| V |
| -------------------------------------- |
| | -------------------- [CONTAINER] | |
| | | eth0: 213.X.X.31 | | |
| | -------------------- | |
| | | |
| -------------------------------------- |
----------------------------------------------
Network configuration on my host:
auto br0
iface br0 inet static
bridge_ports em1
bridge_fd 0
address 193.X.X.30
netmask 255.255.255.0
gateway 193.X.X.1
dns-nameservers 8.8.8.8 8.8.4.4
My container configuration:
lxc.network.type = veth
lxc.network.link = br0
lxc.network.ipv4 = 213.X.X.31/24
lxc.network.ipv4.gateway = 213.X.X.1
My container network configuration:
auto eth0
iface eth0 inet static
address 213.X.X.31
netmask 255.255.255.0
gateway 213.X.X.1
dns-nameservers 8.8.8.8
dns-nameservers 8.8.4.4
I didn't succeed in connecting to the containers directly. What would be the right configuration/topology so that the containers can successfully host services like Web/Mail/DNS?
|
I don't know if this is the right way or the best solution, but it works without NAT. The network topology is the same. We have one physical NIC (em1) and multiple IPs, one for every container. Maybe later I can buy a subnet, but for now I'll buy 4-5 IPs.
-------------------
| INTERNET |
-------------------
|
V
----------------------------------------------
| ------------------- ------- [HOST] |
| | br0: 193.X.X.30 | <--- | em1 | |
| ------------------- ------- |
| | |
| V |
| ------------------- |
| | vethMyContainer | |
| ------------------- |
| | |
| V |
| -------------------------------------- |
| | -------------------- [CONTAINER] | |
| | | eth0: 213.X.X.31 | | |
| | -------------------- | |
| | | |
| -------------------------------------- |
----------------------------------------------
This is my network configuration on host (/etc/network/interfaces):
auto lo
iface lo inet loopback
auto br0
iface br0 inet static
bridge_ports em1
bridge_fd 0
address 193.X.X.30
netmask 255.255.255.0
gateway 193.X.X.1
dns-nameservers 8.8.8.8 8.8.4.4
Configuration file for the container (/var/lib/lxc/my-container/config):
lxc.include = /usr/share/lxc/config/ubuntu.common.conf
lxc.rootfs = /var/lib/lxc/my-container/rootfs
lxc.utsname = my-container
lxc.arch = amd64
lxc.network.type = veth
lxc.network.veth.pair = vethMyContainer
lxc.network.link = br0
lxc.network.ipv4 = 213.X.X.31/32
lxc.network.ipv4.gateway = 193.X.X.1
lxc.network.script.up = /var/lib/lxc/my-container/script-up.sh
lxc.network.flags = up
lxc.network.hwaddr = 00:16:3e:aa:bb:cc
lxc.cgroup.memory.limit_in_bytes = 2048M
We must name our veth device because we will use the name in the script file. Packets cannot automatically be routed from br0 to the veth device, so I added a routing rule. Also, my ARP table couldn't update automatically, so I added a static ARP record.
The script file (/var/lib/lxc/my-container/script-up.sh):
#!/bin/bash
route del 213.X.X.31 br0
route add 213.X.X.31 br0
The network configuration on my container:
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet manual
So I can ping my container directly without using NAT. I'll update the answer if I find a way to avoid the arp and route commands.
| LXC external IP configuration for containers |
1,573,048,801,000 |
I am trying to do a more complicated task inside a running lxc ubuntu container, but my problem can be explained using this simple example. When I run
sudo lxc-attach -n container1 -- echo "test" > test.txt
inside of a shell script, I expect to find test.txt inside of my container, but instead I find it on my host machine! What has gone wrong?
|
After some playing around I figured out the issue. I'll leave my question and answer here for the poor soul who inevitably runs into the same problem in the future and finds this question.
The key is to only attach to the container when accessing the file, not before running the whole command. Counterintuitively, the shell does not first connect into the container and execute the command, but rather interprets the > and creates a file locally first. To get around this, we use tee to connect to the container only when we need to. The solution is below.
echo "test" | sudo lxc-attach container1 -- tee test.txt
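An alternative, assuming a shell exists inside the container, is to quote the whole command so the inner shell performs the redirection, e.g. lxc-attach -n container1 -- sh -c 'echo test > test.txt'. The principle is easy to demonstrate with a plain sh standing in for the container:

```shell
# The unquoted '>' is processed by the outer (host) shell before the command
# after '--' ever runs; quoting hands the redirection to the inner shell.
tmp=$(mktemp -d)
sh -c "echo test > $tmp/test.txt"   # inner sh performs the redirection
result=$(cat "$tmp/test.txt")
echo "$result"
rm -r "$tmp"
```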
| Command executing outside container with lxc-attach? |
1,598,624,210,000 |
Since yesterday I've been struggling with a problem limiting CPU resources per LXC container. In my case a command like lxc-cgroup -n srv50 cpu.shares 100 doesn't have any effect - the containers still use the CPU equally.
I'm using CentOS 7 & LXC 1.0.8. All machines I checked behave the same way: setting cpu.shares doesn't do anything.
Here's systemd-cgtop screen, from my 2 cores VM:
Path Tasks %CPU Memory Input/s Output/s
/ 178 199.7 360.8M - -
/lxc - 198.0 16.8M - -
/lxc/srv51 7 99.8 8.4M - -
/lxc/srv50 7 98.2 8.4M - -
/system.slice/NetworkManager.service 2 - - - -
/system.slice/auditd.service 1 - - - -
Container srv50 has cpu.shares set to 100, whereas srv51 is set to 50. Both containers run the command dd if=/dev/urandom | bzip2 -9 >> /dev/null. I was expecting one container to take 66% and the other 133% of the CPU (or something like that), but both use 100%.
One hint: while trying to find which container uses the most CPU, I noticed in htop that all containers have the same cgroup: :name=systemd:/user.slice/user-0.slice/session-1.scope - not sure whether this is correct or not, just noticed it.
Limiting memory works, CPU nope.
I've just tested cgroups directly and I can't set cpu.shares for any process (by moving it to some group), so it's consistent. Smells like some missing kernel switch.
2: There's a bug in my example. To see a difference in load on a 2-core machine, we must have at least 2 processes running at 100% per container. Anyway, this is not the issue.
|
Yes, the issue in this case was with how I tested this feature; it works as expected. The only remaining issue was on another cloud VM with 2 cores, but as I don't need it, I won't think about it any more. :) Cheers!
| LXC cpu.shares doesn't work |
1,598,624,210,000 |
Server: Ubuntu 16.04 Server, x64, no internet connection
I have LXD installed but am having trouble getting an image to that server to use as a baseline for containers. I have tried two options so far, with failed results.
1) Export an image (Xenial; meta.tar and rootfs.tar) from a machine with internet access and burn it to CD. The import works fine, but starting the newly built container fails with the log showing
lxc_utils - utils.c:safe_mount:1692 - Operation not permitted - Failed to mount proc onto /usr/lib/x86_64-linux-gnu/lxc/proc
2) Download the meta and root tarballs from the linuxcontainers.org repo and burn them to a CD. Importing gives an error that metadata.yaml does not exist (which appears to be true).
What other options do I have? I have 16.04 server on a disc if creating an image from that is a possibility.
|
Update: Tried the exact same 2 methods with the same files on a newly created VM and BOTH worked just fine. Issue must be in my VM, therefore I will be migrating to a new (working) VM.
Edit: The root cause actually lies somewhere in xen-guest-tools, which provide additional functionality while running on Citrix XenServer (which was the case here). Before installing them, containers worked fine; after installation, I got the errors above.
| Create LXD containers on machine with no internet connection |
1,598,624,210,000 |
I am having the following configuration:
I have 5 LXC containers that are running nginx. On each container there are a couple of virtual hosts set up in nginx. That means for a container I have multiple virtual hosts that are available through port 80.
Each container has an IP like 10.0.3.100, 10.0.3.101, etc.
On the host machine, I also have a nginx server running that has virtual hosts defined.
I would like to know how can I achieve the following: The nginx virtual hosts on the host machine to map on each virtual host on the containers.
For example:
HOST: d1.example.com -> CONTAINER1: d1.example.com
HOST: d2.example.com -> CONTAINER1: d2.example.com
HOST: d3.example.com -> CONTAINER2: d3.example.com
All of them should be available on port 80.
Is there any way to achieve this setup?
|
This is a reverse-proxy and the directive you are looking for is proxy_pass. The host instance of nginx will have multiple server containers like:
server {
listen 80;
server_name d1.example.com;
location / {
proxy_pass http://10.0.3.100;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Accept-Encoding "";
}
}
See the proxy module documentation and the WebSocket documentation (if applicable).
| Nginx virtual hosts for multiple LXC containers |
1,598,624,210,000 |
Docker is based on Linux containers and control groups. But I would like to know which implementation of Linux containers Docker is using. Is it using the native/default LXC execution environment of Linux, or do they have their own implementation?
|
Docker uses their own libcontainer library after they switched from using LXC in 2014.
LXC uses their (www.linuxcontainers.org) liblxc library.
Both libraries utilize the linux kernel namespaces technology.
LWN had a multi-part blog on namespaces: https://lwn.net/Articles/531114/
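Whichever library does the work, the kernel namespaces themselves are visible to any process as symlinks under /proc; a quick look that needs no container tooling at all:

```shell
# Each namespace a process belongs to shows up as a link like pid:[4026531836];
# two processes in the same namespace see the same inode number.
pid_ns=$(readlink /proc/$$/ns/pid)
net_ns=$(readlink /proc/$$/ns/net)
echo "pid namespace: $pid_ns"
echo "net namespace: $net_ns"
```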
| Which container implementation docker is using |
1,598,624,210,000 |
lxc-autostart won't start unprivileged containers in Debian 11 Bullseye.
Starting an unprivileged container on Debian 11 Bullseye was solved in this answer by using lxc-unpriv-start instead of lxc-start, but I was not able to take advantage of this approach when using lxc-autostart.
|
Basic solution
OK, so after few sleepless nights I ended up with a simple systemd unit file for each container. An example one may look like this:
[Unit]
Description=Linux container my-container-name
After=network.target
[Service]
Type=forking
ExecStart=/usr/bin/lxc-start -n my-container-name
ExecStop=/usr/bin/lxc-stop -n my-container-name
StandardOutput=journal
User=my-lxc-user
Group=my-lxc-user
Delegate=yes
[Install]
WantedBy=multi-user.target
The Delegate=yes line is a simple follow-up to the recommendation posted here and also in the answer I already linked above.
User lingering not required (mentioned here).
A sweet side-effect of this solution is that shutting down the (unprivileged) containers no longer delays the host shutdown (as described here), because the /usr/bin/lxc-stop -n my-container-name defined in ExecStop is used instead of sending signals.
Tuning - Systemd templates
Thanks to systemd template unit files it is possible to use single unif file for all containers. My final template unit file looks like this:
[Unit]
Description=Linux container %I
After=network.target
[Service]
Type=forking
ExecStart=/usr/bin/lxc-start -n %i
ExecStop=/usr/bin/lxc-stop -n %i
StandardOutput=journal
User=lxc
Group=lxc
Delegate=yes
[Install]
WantedBy=multi-user.target
Since I named the file lxc@.service and placed it in /etc/systemd/system/, I can control all my containers using systemctl COMMAND lxc@my-container-name
(Just beware that lxc.service is the original one, responsible for lxc-autostart)
Any improvements to the unit file etc. are welcome!! - as I'm no expert and I basically used the official docs and also this great answer.
Tuning - Systemd user service
Another step forward would be to use systemd user services so there's no need to act as root when a new container is deployed.
The unit file would be slightly different:
[Unit]
Description=LXC container %I
After=network.target
[Service]
Type=forking
ExecStart=/usr/bin/lxc-start -n %i
ExecStop=/usr/bin/lxc-stop -n %i
StandardOutput=journal
Delegate=yes
[Install]
WantedBy=default.target
Since multi-user.target is not available for user services we must use default.target instead.
User lingering must be enabled this time so that the service starts on boot and not on user log in. Lingering can be enabled from root account using: loginctl enable-linger <my-lxc-user>
I saved the service file to .config/systemd/user/lxc@.service and enabled it using systemctl --user enable lxc@my-container-name
| Cannot autostart unprivileged LXC containers on Debian 11 Bullseye |
1,598,624,210,000 |
Here is the origin of my question:
I'm running Linux containers with the snap version of LXD on Ubuntu 22.04 on a VPS. The root file system of the VPS is Ext4 and there is no additional storage attached. So the default LXD storage pool was created with the dir option.
When I take a snapshot of one of these containers, all of its data is duplicated - i.e. if the container is 6G, the snapshot becomes another 6G.
I think if it were an LVM filesystem the snapshots would be created in a different way.
So my question is:
Is it possible to do something like fallocate -l 16G /lvm.fs, then format it as LVM, mount it, and use it as a storage pool for LXD? And of course, how can I do that if it is possible?
Some notes:
The solution provided by @larsks works as expected! Later I found that when we use lxc storage create pool-name lvm without additional options and parameters, it does almost the same thing. I didn't test this before publishing the question because I thought the lvm driver would necessarily require a separate partition.
However, in both cases this approach, in my opinion, has many more cons than pros, for example:
Write speed decreases by about 10% compared to using the dir driver.
It is hard to recover when no space is left on the disk, even when the excess data is located in /tmp... In contrast, when the dir driver is used, LXD prevents consumption of the entire host's file system space, so your system and containers remain operational. This is much more convenient in my VPS case.
|
Is it possible to do something like fallocate -l 16G /lvm.fs, then format it as LVM, mount it, and use it as a storage pool for LXD? And of course, how can I do that if it is possible?
Start by making your file. I like to place it in a directory other than /, so I created a /vol directory for this purpose:
truncate -s 16G /vol/pv0
(As @LustreOne notes in comments, using truncate rather than
fallocate doesn't preallocate blocks for the file, so it starts out
using zero bytes and only consumes as much disk space as is written to
it).
Configure that file as a block device using losetup:
losetup -fnP --show /vol/pv0
That will output the name of a loop device (probably /dev/loop0, but if not, adjust the following commands to match).
Set up LVM on that device:
pvcreate /dev/loop0
vgcreate vg0 /dev/loop0
lvcreate ...
Congratulations, you have a file-backed LVM VG!
Unfortunately, if you were to reboot at this point, you would find that the VG was missing: loop devices aren't persistent, so we need to add some tooling to configure things when the system starts up.
Put the following into /usr/local/bin/activate-vg.sh:
#!/bin/sh
losetup -fnP /vol/pv0
vgchange -ay
And make sure it's executable:
chmod a+x /usr/local/bin/activate-vg.sh
Add a systemd unit to activate the service. Put the following into /etc/systemd/system/activate-vg.service:
[Unit]
DefaultDependencies=no
Requires=local-fs.target local-fs-pre.target
After=local-fs-pre.target
Before=local-fs.target
[Service]
Type=oneshot
ExecStart=/usr/local/bin/activate-vg.sh
[Install]
WantedBy=local-fs.target
Enable the service:
systemctl enable activate-vg
Now your file-backed LVM VG should be available when you reboot.
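The space-saving behavior of truncate mentioned above is easy to verify: the file reports its full 16 GiB apparent size while almost no blocks are actually allocated until data is written to it:

```shell
# Create a sparse 16 GiB file and compare apparent size vs allocated blocks.
f=$(mktemp)
truncate -s 16G "$f"
apparent=$(stat -c %s "$f")          # reported size: 17179869184 bytes
allocated=$(du -k "$f" | cut -f1)    # blocks actually used: ~0 KB
echo "apparent=$apparent bytes, allocated=${allocated}K"
rm "$f"
```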
| Is it possible to use a file as Filesystem? |
1,598,624,210,000 |
I'm having a hard time reading the output of the ip a command.
Normally it prints something like this:
3: enp0s25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UP group default qlen 1000
link/ether aa:bb:cc:dd:ee:ff brd ff:ff:ff:ff:ff:ff
Which is fine.
But inside the LXC container (not always) I can see something like that:
11: eth0@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether XX:XX:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.10.44.44/16 brd 10.10.255.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::24cb:a3ff:fefe:72cc/64 scope link
valid_lft forever preferred_lft forever
13: eth1@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether XX:XX:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.1.29/24 brd 192.168.1.255 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::b471:7eff:fea7:a8bc/64 scope link
valid_lft forever preferred_lft forever
What is this @if1[2,4]?
Moreover ifconfig always prints eth[0,1]
|
You have interfaces that are part of separate macvlans.
The output of ip listed above indicates that your host has two interfaces, eth0@if12 and eth1@if14, each a member of a separate macvlan configured in bridge mode (one physical interface to multiple virtual network interfaces that each have a separate MAC address).
I believe the notation is <interfaceNickname>@<macvlanID>.
As far as why the interfaces are not always formatted as such I can see at least two possible reasons.
The interface is not a part of a macvlan.
The host does not have at least two interfaces that are on different macvlans.
So if your container host has one macvlan interface, it would not display the macvlan ID, just the interface nickname. But if your host had two interfaces on different macvlans, then at least one of the interfaces would be tagged with the <nic>@<macvlan> format.
For a great explanation of LXC networking that dives into macvlan configuration check out this well written article (about a third of the way down, in the section entitled 'Macvlan', the author dives into your particular configuration).
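Whatever produced the suffix, it is easy to strip off when scripting; a small bash sketch (the interface string is just a sample copied from the output above):

```shell
iface='eth0@if12'
name=${iface%%@*}    # everything before the '@'
suffix=${iface##*@}  # everything after the '@'
printf '%s %s\n' "$name" "$suffix"
```

This is also why ifconfig shows plain eth0/eth1: the part before the '@' is the interface name proper.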
| ip address "@" (at) in output |
1,598,624,210,000 |
Can I install Docker on Red Hat server 5.9 so I can take an image container from an installed software and move it to another server?
Or should it be enough to update the Kernel up to version 3 so Docker can run?
|
Docker's website states that it can be installed on any 64 bit distribution of RHEL.
However, the kernel must be 3.10 at a minimum.
Check your kernel version first with the following command: uname -r.
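A hedged sketch of that check, comparing a kernel version string against the 3.10 minimum with GNU sort -V; the sample version is hard-coded for illustration, substitute "$(uname -r)" on a real host:

```shell
required=3.10
kernel=2.6.18-398.el5   # a typical RHEL 5.x kernel, for illustration
# sort -V orders version strings; if the required version does not sort
# first, the running kernel is older than the minimum.
if [ "$(printf '%s\n' "$required" "$kernel" | sort -V | head -n1)" = "$required" ]; then
    echo "kernel $kernel is new enough for Docker"
else
    echo "kernel $kernel is too old for Docker"
fi
```

On a stock RHEL 5.9 kernel (2.6.18 series) this reports the kernel as too old, which is why Docker is not an option there without a much newer kernel.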
| Docker container on Red Hat Linux server 5.9 |
1,598,624,210,000 |
Does a virtual bridge (added to /etc/network/interfaces) limit the transfer speed of data from the memory of one lxc/docker container to another?
For example does the memory throughput drop to that of 1G/10G Ethernet or is there no significant difference? That is would the throughput between two processes running on the same machine be almost identical to the two processes running on individual lxc containers on the same host?
|
Virtual interfaces do not artificially limit throughput to a particular data rate, like a physical interface would. However, they do incur software overhead, so you should expect transfer rates to be lower compared to simpler inter-process communication mechanisms, unless the bottleneck is some other factor.
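If the actual rate matters for your workload, it is straightforward to measure instead of guessing; assuming iperf3 is installed in both containers (the address below is just an example):

```shell
# in container A (server)
iperf3 -s
# in container B (client), pointing at A's address
iperf3 -c 10.0.3.100 -t 10
```

Comparing that figure against a loopback run on the host gives a direct sense of the bridge's software overhead.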
| What is the network connection speed between two containers communicating via a virtual bridge running on the same host? |
1,598,624,210,000 |
We are using a Centos LXC container with the rootfs contained in a squashfs filesystem. I really like the fact that a user cannot edit the rootfs from the host.
During the development, developers would infact like to make changes to the filesystem, and I'd like to move to an overlayfs. But I notice that although the upper layer can be used to make changes to the lower layer, it is also possible to make changes to lower layer rootfs by simply editing the files on the host. How can I prevent this?
|
The lxc.hook.pre-mount hook gets executed before the rootfs gets mounted:
lxc.hook.pre-mount = /var/lib/lxc/container0/mount-squashfs.sh
lxc.rootfs.path = overlayfs:/var/lib/lxc/container0/rootfs:/var/lib/lxc/container0/delta0
And in the mount script:
#!/bin/bash
mount -nt squashfs -o ro /var/lib/lxc/container0/rootfs.sqsh /var/lib/lxc/container0/rootfs
| LXC Container with Overlayfs/Squashfs |
1,598,624,210,000 |
I've installed a Proxmox VE 5.1 on a VirtualBox in macOS (10.12).
The guest OS, Debian Stretch (Proxmox is built on Debian), has 2 "physical" network interfaces (configured from VirtualBox), Host-Only and NAT; I can access the internet through the NAT interface:
root@proxmox:~# traceroute 1.1.1.1
traceroute to 1.1.1.1 (1.1.1.1), 30 hops max, 60 byte packets
1 10.0.3.2 (10.0.3.2) 0.792 ms 0.694 ms 0.625 ms
2 1dot1dot1dot1.cloudflare-dns.com (1.1.1.1) 2.829 ms 2.818 ms 3.318 ms
The /etc/network/interfaces in the debian host contains:
auto lo
iface lo inet loopback
auto enp0s3
iface enp0s3 inet static
address 192.168.56.101
netmask 255.255.255.0
auto enp0s8
iface enp0s8 inet static
address 10.0.3.15
netmask 255.255.255.0
gateway 10.0.3.2
#NAT
auto vmbr0
iface vmbr0 inet static
address 172.16.1.1
netmask 255.255.255.0
bridge_ports dummy1
bridge_stp off
bridge_fd 0
The "guest", debian sees to macOS ("host") from both interfaces (macOS IPs: 192.168.56.1, 10.0.3.2).
The vmbr0 virtual interface was created for the Proxmox LXC containers; I've added an iptables rule to masquerade traffic from vmbr0 out through the enp0s8 interface (the NAT interface in VirtualBox).
iptables -A POSTROUTING -s 172.16.1.0/24 -o enp0s8 -j MASQUERADE -t nat
The problem is that when I create an LXC container inside Proxmox, using vmbr0 as the network interface, the LXC container has no internet access; I can ping the Proxmox "master" (IP: 172.16.1.1) but nothing else.
I've also tried to use enp0s8 as bridge_ports parameter, same result.
The file /etc/network/interfaces in the LXC container (Ubuntu 16.04) contains:
auto eth0
iface eth0 inet static
address 172.16.1.100
netmask 255.255.255.0
gateway 172.16.1.1
I have a quite similar config in another proxmox server (but in bare metal, not VirtualBox installation) and It works ok.
Can anyone tell me what is incorrect or missing in the network configuration to allow the containers access to the internet?
|
In order to make routing from a VM work you need
a correct routing configuration on the host (which seems to be the case here)
to enable routing in general via /proc/sys/net/ipv4/ip_forward (which can be done on a permanent basis with the distro network tools or directly in /etc/sysctl.*)
to allow the routing of packets with iptables (i.e. filter/FORWARD chain and / or its children)
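Concretely, points 2 and 3 could look like this on the Proxmox host; interface names are taken from the question, and this is a sketch rather than a drop-in configuration:

```shell
# point 2: enable IPv4 forwarding, now and across reboots
sysctl -w net.ipv4.ip_forward=1
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.d/99-forwarding.conf

# point 3: allow forwarded traffic between the container bridge and the NAT uplink
iptables -A FORWARD -i vmbr0 -o enp0s8 -j ACCEPT
iptables -A FORWARD -i enp0s8 -o vmbr0 -m state --state ESTABLISHED,RELATED -j ACCEPT
```

Together with the existing MASQUERADE rule from the question, this covers the full routed path out of the containers.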
| Internet access from LXC containers in proxmox 5.1 on VirtualBox |
1,504,805,717,000 |
I am trying to use chroots under lxc for development. I have enabled the "nesting" option in the lxc container configuration and bound mounted proc and devpts into my chroot as I would if the chroots were on a normal Linux box.
Unfortunately when I try and use stuff in the chroot that needs ptys (for example the "script" command) I get errors like
root@manualdev:~# chroot /chroots/jessie-staging/
root@manualdev:/# script
script: openpty failed: No such file or directory
Terminated
root@manualdev:/#
System information:
Host kernel is 4.4.0-79-generic
Host distro is Ubuntu xenial
Host architecture is arm64
Container distro is Debian stretch
Container and chroot architecture is armhf
Chroot distro is Raspbian (tested with jessie, stretch and buster)
|
The fix for this (found by educated guesswork) was to execute the following commands in the chroot.
rm /dev/ptmx
ln -s /dev/pts/ptmx /dev/ptmx
I'm not 100% sure but I believe the reason this is needed is that lxc is using "multiple instance mode" for /dev/pts . As per the documentation at https://github.com/torvalds/linux/blob/v4.4/Documentation/filesystems/devpts.txt
If CONFIG_DEVPTS_MULTIPLE_INSTANCES=y and 'newinstance' option is specified,
the mount is considered to be in the multi-instance mode and a new instance
of the devpts fs is created. Any ptys created in this instance are independent
of ptys in other instances of devpts. Like in the single-instance mode, the
/dev/pts/ptmx node is present. To effectively use the multi-instance mode,
open of /dev/ptmx must be a redirected to '/dev/pts/ptmx' using a symlink or
bind-mount.
Looking at more recent versions of that file it seems that this may not be needed with more recent kernels.
| ptys not working in chroot under lxc |
1,504,805,717,000 |
In a web application I'm developing, users will be able to upload java code and I will need to compile and run that. For security reasons, I'd like to do that inside an LXC container, and for footprint reasons I'd like that to be a busybox. So, I created a busybox container successfully with:
lxc-create -n my-box -t busybox
It's up and running fine. Then, I downloaded jdk-8u31-linux-i586.rpm from here and ran rpm -i jdk-8u31-linux-i586.rpm, which returned no output but created /usr/java/jdk1.8.0_31 which all looks good.
However, when I go to /usr/java/jdk1.8.0_31/bin and run ./javac -version, I get:
/usr/java/jdk1.8.0_31/bin # ./javac -version
Error occurred during initialization of VM
java/lang/NoClassDefFoundError: java/lang/Object
I figured this may be because of the classpath or java_home not being the right setting, so, I created a /etc/profile:
JAVA_HOME=/usr/java/jdk1.8.0_31
CLASSPATH=/usr/java/jdk1.8.0_31/lib
PATH=$PATH:$JAVA_HOME/bin
export JAVA_HOME
export CLASSPATH
export PATH
This works fine, when I echo the variables they have the values I set to them. However, the problem with javac persists. java has the exact same output.
What did I miss here?
The host system is Ubuntu Server 14.04. I have tried also the x64 version, with the same result.
|
As it turns out, the problem was the same as here: I still had to unpack the *.pack files from the lib and jre/lib folders in the java installation. unpack200, the program used to unpack *.pack files to .jar files isn't available in busybox, but it's shipped with java.
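A sketch of that unpacking step, run inside the container; the path follows the JDK layout from the question, and unpack200 itself ships in the JDK's bin directory, so no extra busybox applet is needed:

```shell
cd /usr/java/jdk1.8.0_31
# Convert every remaining *.pack file into the .jar the JVM expects.
find . -name '*.pack' | while read -r p; do
    ./bin/unpack200 "$p" "${p%.pack}.jar" && rm "$p"
done
```

After this, java -version and javac -version should both find java/lang/Object in the unpacked rt.jar.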
| Installing Java JDK in busybox in an LXC container - java/lang/NoClassDefFoundError: java/lang/Object |
1,504,805,717,000 |
I'm having trouble sending shutdown -h 0 to an LXC Debian container (i.e. executing this command in the LXC) with the python pexpect module (in a python script). In this module the user can "expect" (= wait for process output) a certain substring, amongst others EOF, which leads me to this question: in order to debug further why EOF isn't recognized in the output, I need to know what I can "expect" after termination of the process, so as to wait for the process to end. I can't simply wait for the process because the pexpect module hides the non-blocking functions for that.
The pexpect module (see http://www.bx.psu.edu/~nate/pexpect/pexpect.html#pexpect.spawn.expect for details) wraps the reception of EOF in the read system call in a (duck)type and makes it usable in pexpect.expect (an encapsulation of possible output/feedback of a process).
I've been wondering that because some processes like ls are expected to terminate with EOF, i.e. the pexpect sense of EOF (example at http://pexpect.sourceforge.net/pexpect.html).
|
EOF indicates that no further data is to be expected on a resource which could otherwise provide an endless amount of data (e.g. a stream). On Unix-like systems this is not literally a character written into the stream; it is a condition the reader observes (e.g. the read system call returning 0) once the writing side has closed its end.
As processes use streams for inter-process communication, they indicate the end of their output by closing the stream, and the receiving process then sees EOF. The underlying system forwards this condition through its process-handling mechanisms, making EOF available for evaluation in the program/on the system (pexpect exposes it as its EOF pattern).
Note about pexpect use case in the question: shutil.pexpect doesn't seem to be suitable to copy files of a lxc container. It got stuck and the time offset of the pexpect output causes the confusion.
| Do all Linux processes write EOF to stdout when they are terminating/have finished terminating? |
1,504,805,717,000 |
Summary: I have a mail server (exim 4, Debian 10) in an LXC container. The host is running Debian 11. Since yesterday evening spam traffic has been coming in that appears to come from the LXC Host. However, tcpdump logs show that it is actually remote traffic. What is going on?
This is an example of an exim4 log entry on the mail server, for a spam mail seemingly coming from the lxc host:
2023-07-23 11:15:51 1qNX42-009wSW-VR <= [email protected] H=LXCHOST (prvzvtrfnh) [LXCHOSTIPV4] P=esmtp S=615 [email protected]
Yet on the tcpdump logs on the host I see corresponding entries like this:
14:06:07.165374 IP 39.170.36.149.34307 > MAILSERVERCONTAINER.smtp: Flags [P.], seq 5672:5702, ack 1397, win 27, options [nop,nop,TS val 1151815058 ecr 475541370], length 30: SMTP: MAIL FROM:<[email protected]>
So the traffic appears to come from the (Chinese) IP 39.170.36.149. (This IP does not appear at all in the container logs.) So why does this traffic appear as coming from the host to the mail server?
The relevant network interfaces on the host are:
eno1, the physical interface
br0, a bridge connecting the phyiscal interface with several lxc containers
The tcpdump command on the host that shows the spammy traffic is:
tcpdump -i br0 port 25 and dst host [MAILSERVERIPV4]
The bridge interface is setup like this in /etc/network/interfaces:
auto br0
iface br0 inet static
bridge_ports regex eth.* regex eno.*
bridge_fd 0
address HOSTADDRES
netmask 255.255.255.192
gateway HOSTGATEWAY
Both container and host are up to date with security updates. But the host's uptime is 248 days, so it is possible that it is running outdated binaries.
UPDATE
I think the problem was caused by an iptables rule on the host, -t nat -A POSTROUTING -o br0 -j MASQUERADE. This rule is intended for containers without an external IP to reach the internet. I have apparently misunderstood what this does. Shouldn't it only masquerade traffic that is routed from internal IPs to the internet? As I understand it, external traffic to the mail server is bridged and not routed at all. Also, it's only one particular spammer that was able to exploit my setup. The normal traffic to my mail server shows external IPs. How did the spammer do this?
UPDATE 2: The problems started after installing docker on the host. Could it be that docker and lxc interact in a way to create these problems?
|
I think the problem was caused by an iptables rule on the host
iptables -t nat -A POSTROUTING -o br0 -j MASQUERADE
This rule is intended for containers without an external IP to reach the internet.
What this rule does is masquerade any traffic going out through br0. It could be traffic going out from the host to a container, or it could (as intended) be traffic leaving the host and heading off to the wider Internet.
The problems started after installing docker on the host. Could it be that docker and lxc interact in a way to create these problems?
Yes, I would say that's quite likely. You will need to modify the rule to avoid masquerading local traffic.
As an example, let's assume your host is 192.168.1.1 (and maybe also has a public IPv4 address), and you have a hidden container subnet of 192.168.1.0/24. Docker has come along and grabbed 172.17.0.0/16.
We might suppose that this rule is intended to masquerade anything leaving the Docker subnet,
iptables -t nat -A POSTROUTING -o br0 --src 172.17.0.0/16 -j MASQUERADE
| Remote SMTP traffic appears to come from LXC Host to container |
1,504,805,717,000 |
Today I modified an LXC container to add an extra "bind mount", but forgot to create the mount directory in the container root filesystem.
As a result the container startup failed, and left the system in a strange state.
The startup had already created the "veth" interface for the container, and renamed another interface that I was binding to the container with the "phys" method from the system "predictable interface names" name of ensXfY to the container name of eth1.
But after the crash it didn't clean this up.
So even after fixing the underlying problem, the container still couldn't start, because the host networking was messed up.
This happened to me on Ubuntu 16.04 running LXC package 2.0.11-0ubuntu1~16.04.3, but it would probably affect some other versions of LXC on other Linux distros too.
|
This had created two separate issues - a stale "veth" pair, and a physical interface not being named correctly.
I solved the issue by combining bits of this post:
https://stackoverflow.com/questions/31989426/how-to-identify-orphaned-veth-interfaces-and-how-to-delete-them
for the "veth" issue, and this post:
CentOS 7 - Rename network interface without rebooting
For renaming network interfaces.
The two commands I ended up using ended up just being:
ip link delete vethXYZ
ip link set eth1 name ensXfY
After fixing the real original problem (by creating the mount point directory), and running these commands, I was then able to start the container correctly.
| How to cleanup network interfaces after an LXC container crashes on startup |
1,504,805,717,000 |
I also submitted this as a Fedora bug question, here.
I'm not sure if this is a bug, but here is my sudden issue.
The Linux/LXC single-box cluster setup
I use Fedora x86_64 (currently Fedora-25) as the LXC/HOST O/S. I use
CentOS-6 x86_64 (currently CentOS-6.9 Final) for the six (qty. 6)
LXC/GUEST O/S'.
This was working for a long time (a few years), but suddenly does not
after a 'sudo dnf -y update' (HOST) and 'sudo yum -y update'
(GUESTS). It has been a few months since I booted this HOST/GUESTS
LXC "cluster" and, as usual, O/S updates are the first thing that I
perform. This may provide a hint if some underlying system-level
component(s)/behavior(s) changed during that time.
The Fedora HOST and CentOS-6 GUESTS are on the same subnet, and share
the same default router: 192.168.0.0/24; 192.168.0.1 (all standard
stuff).
The Fedora Host does not have any firewall/firewalld RPM packages
installed, and therefore does not run a firewall. I removed this
long ago to simplify things.
The issue
After performing the above O/S updates to the HOST and GUESTS, from within any GUEST, I can no longer (a) successfully ping/ssh guest-to-guest or (b) ping the default router.
I can, however, ping/ssh HOST-to-GUEST and GUEST-to-HOST with no issue.
From any computer outside this setup -- which, by the way, are also on the same subnet and share the same default router as above -- I can ping/ssh to the HOST but cannot to any of the GUESTS.
Other than performing the aforementioned O/S updates, I didn't alter
anything.
Some output
Here is output from the HOST: HOST.txt
Here is output from a GUEST: ONE_GUEST.txt
Note that the GUESTS are named vps00, vps01, vps02, vps03, vps04 and
vps10, and have identical configurations except MAC and IP addresses
(so I only provided output for one of them). While the HOST is named
lxc-host. Throughout the attachments, you'll see some in-line notes
that I annotated them with.
Any ideas? Thank you in advance. :)
|
Thanks to the accepted answer in this POST, I was able to finally figure out the iptables(1M) entries that were missing. Here they are:
sudo iptables -A INPUT -i eth0 -j ACCEPT
sudo iptables -A INPUT -i br0 -j ACCEPT
sudo iptables -A FORWARD -i br0 -j ACCEPT
I don't know what Fedora HOST O/S changes occurred to make these entries not be there suddenly (meaning after doing "dnf -y update; reboot" after a few months of not doing that), but would sure love to know because now I have to hardcode these entries in somewhere (which I'm not thrilled about LoL).
I hope this helps others who bridge their LXC guests like I do.
ADDITION-1:
Here are the sequence of commands I used to permanently incorporate them,
after interactively executing the above commands (in-memory):
root# cd /etc/sysconfig/
root# cp iptables iptables.FCS # Backup the current contents.
root# iptables-save > ./iptables # Overwrite with in-memory contents.
root# reboot
root# sudo systemctl start iptables.service # Not yet committed.
root# sudo iptables -L -n -v | more               # Inspect in-memory changes.
root# sudo systemctl enable iptables.service # If all looks good, commit them permanently.
| Fedora-25 HOST + CentOS-6 GUESTS Linux/LXC: Guests can't connect to each other or to default router |
1,431,619,835,000 |
I'd like to archive an existing LXC container that has been configured to run as unprivileged LXC container (see this question).
How can I conserve all the file system meta-data that is used to store the mapped UID and GID for file/folder ownership?
NB: I know that the mapping itself happens on the host, but what I mean is that inside the userns there are a number of UIDs and GIDs which all map on the host to an unprivileged user, but which in the guest still resolve to different UIDs and GIDs. So whatever that magic sauce is that keeps these things connected at the file system level, I'd like to conserve it in an archive (tar or 7z or similar).
|
I have investigated that topic since I asked my question and it turns out that the ranges of sub-GIDs/sub-UIDs are indeed used as file-group/-owner for the files. There is no additional metadata (unless you'd be using SELinux or the like) which is relevant to this.
Also keep in mind that ACLs may have to be modified, if you make use of ACLs.
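In practice that means a plain tar archive suffices, provided the numeric IDs are preserved; a minimal sketch (the paths are made up, and you would run this as root on the host against the real container rootfs):

```shell
# --numeric-owner stores and restores raw uid/gid values instead of
# resolving them through /etc/passwd, which is exactly what shifted
# userns IDs need; -p keeps permissions. Add --xattrs/--acls if you use them.
tmp=$(mktemp -d)
mkdir -p "$tmp/rootfs/etc"
echo hello > "$tmp/rootfs/etc/motd"
tar --numeric-owner -cpf "$tmp/rootfs.tar" -C "$tmp" rootfs
tar -tf "$tmp/rootfs.tar"
rm -rf "$tmp"
```

Extract with the same --numeric-owner -p flags (as root) and the shifted ownership survives the round trip.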
| How do I conserve the userns UID/GID mappings when archiving an LXC guest? |
1,431,619,835,000 |
To generate images LXD compresses files using gzip, which can only use one core. Thus, creating images can be very slow with large containers. I would like to use other compressors (e.g., pigz). What options I have to speed up the creation of images? A similar question was discused in this mailing list. However, that discussion was two years ago. Maybe, the status have changed.
I imagine using something like:
$ lxc publish $container --alias $container --compression pigz
If parallel compressors are not available, is it possible to specify the compression level?
|
You say "files". It is possible to compress files in parallel, using a non-parallel compressor. But this will require modifying the code that calls the compressor (does it have an option to do this already?).
Using a parallel compressor for each file may be possible, but compression ratios will be reduced, e.g. by independently compressing the two halves of a file: if the two halves are identical, the compressor will not see it, and that compression is lost.
If lxc publish is compressing across files (for more compression), then even the first option will reduce compression, for the same reason outlined in the 2nd paragraph.
Edit:
Having said that, I just looked at some benchmarks for pigz. Using its default overlapping blocks (I have now read the manual), the compression ratios are no worse than gzip's.
Hope you find a solution.
| It is possible to use parallel compression method in lxc publish? |
1,431,619,835,000 |
I have several lxc containers that need network access. At the moment I am manually allocating them IP addresses in the relevant config file as so:
lxc.network.type = veth
lxc.network.flags = up
lxc.network.name = eth0
lxc.network.link = br0
lxc.network.ipv4 = 192.168.1.6/24
lxc.network.ipv4.gateway = 192.168.1.1
This works but does not scale and can conflict with my router's DHCP allocation. I tried to use my router's DHCP by leaving out the lxc.network.ipv4 lines (as described online elsewhere), and the container starts but dhcpcd reports no carrier. lxc-ls --fancy also does not show that my container has an IP address. The bridge is up and lxc.network.link is set in the config file.
How can I use DHCP with my containers? Is it possible to use my routers DHCP, or do I need to run a server on my host? Some of my containers do need to be accessible from the outside, where as some only need to communicate to other containers/host.
I'm running arch linux, most of the help online seems ubuntu specific.
|
Make sure netctl and dhcpcd are installed inside the container (pacman -Q netctl dhcpcd) then run the following in the container:
cat > /etc/netctl/eth0
Connection=ethernet
IP=dhcp
Interface=eth0
Press CTRL-D to write the file. Then enable the profile by running:
netctl enable eth0
Finally restart the container and you should have a DHCP assigned IP address.
| Set up DHCP for LXC containers |
1,431,619,835,000 |
When trying LXD, I tried to share a folder from my computer with the LXC Container, but I could not write in the folder in the container because ls -l shows that it belongs to user nobody and group nobody.
How to know the ID of this user and group?
|
You can also use the id command to lookup uids and gids:
# Get the numeric uid of the user 'nobody'
$ id -u nobody
65534
# Get the numeric gid of the user 'nobody'
$ id -g nobody
65534
With no options, it'll print the uid and all the gids to which the user belongs:
$ id nobody
uid=65534(nobody) gid=65534(nogroup) groups=65534(nogroup)
| What is the ID of nobody user and nogroup group? |
1,431,619,835,000 |
I'm trying to make use of Ansible for configuration management and centralized administration.
All the machines I'm interested about are actually containers on the host which is going to run Ansible.
Currently I am writing a dynamic inventory script that groups the different hosts and makes certain hostvars available per group and also per host.
How can I use the inventory information to run local tasks?
Example: I have a container named foo and the dynamic inventory defines certain items like IP address, cgroup limits and so on for it. How can I reuse that information before the guest container is even up, in order to generate (using the usual Jinja2 templates) the container configuration on the host?
|
If I understand correctly you need to access some ansible variables defined for a generic host. You can access all hosts variables by the dictionary hostvars, that has hostname as primary key, for your example:
{{ hostvars['foo']['ipv4']['address'] }}
Credit goes to:
https://docs.ansible.com/playbooks_variables.html#magic-variables-and-how-to-access-information-about-other-hosts
https://serverfault.com/questions/638507/how-to-access-host-variable-of-a-different-host-with-ansible
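Applied to the example in the question, a hypothetical local play could render the container config on the host before the container exists; the file names and the exact variable layout here are assumptions that depend on how the dynamic inventory is structured:

```yaml
- hosts: localhost
  connection: local
  tasks:
    - name: render LXC config for container foo from its inventory vars
      template:
        src: lxc-container.conf.j2
        dest: /var/lib/lxc/foo/config
      vars:
        container_ip: "{{ hostvars['foo']['ipv4']['address'] }}"
```

The key point is that hostvars['foo'] is readable from any play, including one that targets only localhost, so the container's facts can be consumed before the container is up.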
| How can I reuse the Ansible inventory for local tasks? |
1,431,619,835,000 |
I have an lxc-container with eth0 and IP 172.17.0.2/16. The host has a bridge br0 with 172.17.0.1/16. Both can ping each other. Further, the host has a VPN wg0 and IP 172.16.0.1/16. If I ping from inside the container to the VPN I get:
# ping 172.16.0.1
PING 172.16.0.1 (172.16.0.1) 56(84) bytes of data.
64 bytes from 172.16.0.1: icmp_seq=1 ttl=64 time=0.051 ms
Why is this? I expected no connection between both interfaces as forwarding and NAT are not enabled.
|
Unless I am mistaken, all local addresses (those belonging to the host) will react to a ping from any interface. It's not a question of forwarding, it's a question of recognizing the destination address as one of the local ones.
You can test this with tcpdump, and I would expect no packets to show up on wg0. You can also test by pinging some other host in 172.16.0.0/16, and you should get no answer. Another test is to use ip addr add ... to add a few other addresses to wg0 (or any other interface), and see if you can ping them after they are added.
| Why can some container ping an interface of the host which is not inside same LAN? |
1,431,619,835,000 |
I am pretty green when it comes to grep, can someone point out how I can get an array in bash of the list of snapshot names (NOTE: just the names) when I do a lxc info mycontainer ?
My current results are:
root@hosting:~/LXC-Commander# lxc info mycontainer --verbose
Name: mycontainer
Remote: unix:/var/lib/lxd/unix.socket
Architecture: x86_64
Created: 2017/05/01 21:27 UTC
Status: Running
Type: persistent
Profiles: mine
Pid: 23304
Ips:
eth0: inet 10.58.122.150 vethDRS01G
eth0: inet6 fd9b:16e1:3513:f396:216:3eff:feb1:c997 vethDRS01G
eth0: inet6 fe80::216:3eff:feb1:c997 vethDRS01G
lo: inet 127.0.0.1
lo: inet6 ::1
Resources:
Processes: 1324
Memory usage:
Memory (current): 306.63MB
Memory (peak): 541.42MB
Network usage:
eth0:
Bytes received: 289.16kB
Bytes sent: 881.73kB
Packets received: 692
Packets sent: 651
lo:
Bytes received: 1.51MB
Bytes sent: 1.51MB
Packets received: 740
Packets sent: 740
Snapshots:
2017-04-29-mycontainer (taken at 2017/04/29 21:54 UTC) (stateless)
2017-04-30-mycontainer (taken at 2017/04/30 21:54 UTC) (stateless)
2017-05-01-mycontainer (taken at 2017/05/01 21:54 UTC) (stateless)
With my ultimate goal of simply containing an array such as: 2017-04-29-mycontainer 2017-04-30-mycontainer 2017-05-01-mycontainer
|
With lxc list --format=json you get a JSON document with a lot of information about all the various available containers.
lxc list mycontainer --format=json limits this to the containers whose names start with the string mycontainer (use 'mycontainer$' for an exact match).
Parsing JSON is generally safer than parsing a text document that is almost free form.
To extract the names of the snapshots using jq:
$ lxc list mycontainer --format=json | jq -r '.[].snapshots[].name'
This will give you a list like
2017-04-29-mycontainer
2017-04-30-mycontainer
2017-05-01-mycontainer
To put this into an array in bash:
snaps=( $( lxc list mycontainer --format=json | jq -r '.[].snapshots[].name' ) )
Just be aware that if you do this, snapshot names with characters that are special to the shell (*?[) will cause file name globbing to happen. You can prevent this with set -f before the command (and set +f after).
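That caveat can be demonstrated directly with an invented snapshot name containing a glob character:

```shell
names='snap-a
snap-*'
set -f             # disable pathname expansion so 'snap-*' stays literal
snaps=( $names )   # word splitting still happens, globbing does not
set +f
printf '%s\n' "${snaps[@]}"
```

Without set -f, the 'snap-*' element would be expanded against whatever files happen to sit in the current directory.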
If you just want to loop over the snapshots:
lxc list mycontainer --format=json | jq -r '.[].snapshots[].name' |
while read snap; do
# do something with "$snap"
done
| Get Array of LXD Snapshot Names |
1,431,619,835,000 |
I'd like to connect to a LXC container through Proxmox via SSH without having SSH access to the container itself, so I can get the desired outcome by connecting to the Proxmox host first and then running lxc-attach <ID> to connect to my container.
Now I'd like to do this in one go. For this I have a function in my rc file:
sshc() { ssh $1 "lxc-attach $2; bash -i" }
It works, but in the terminal it looks like this:
This should look different, i.e.:
root@root1543:~# lxc-attach 1111
root@container:~# pwd
/root
root@container:~#
I want to see the user and host in the current shell, which I do not see in my solution.
I also considered altering RemoteCommand in the SSH config, but apparently it's impossible to pass an argument into RemoteCommand, so I ditched that attempt.
|
Add the "-t" option to your ssh invocation:
sshc() { ssh -t "$1" "lxc-attach $2; bash -i"; }
When ssh is invoked with a command to run on the remote system, it doesn't allocate a TTY for the session by default. Adding "-t" tells ssh to requesta TTY for the session.
Interactive sessions normally operate through a TTY to provide certain features like the ability to use the backspace key to edit what you've typed. Your shell also uses the presence of a TTY to determine whether to operate interactively, e.g. by printing command-line prompts.
| How to SSH to a host and attach an LXC container in one command, properly? |
1,431,619,835,000 |
On my host Ubuntu 18.04 I am running two lxc containers using default setups. Containers use Ubuntu 18.04 as well. I have an app running on container1 that offers an https based service on https://localhost:3000/. Container2 is not able to even establish a connection with container1.
Container2 can ping container1 and read the html of the default Apache2 server running on localhost (for container1). Testing with netcat, I can establish connection with a few main ports, however I get connection refused for port 3000.
root@c2:~# nc -zv c1 22
Connection to c1 22 port [tcp/ssh] succeeded!
root@c2:~# nc -zv c1 80
Connection to c1 80 port [tcp/http] succeeded!
root@c2:~# nc -zv c1 443
nc: connect to c1 port 443 (tcp) failed: Connection refused
nc: connect to c1 port 443 (tcp) failed: Connection refused
root@c2:~# nc -zv c1 3000
nc: connect to c1 port 3000 (tcp) failed: Connection refused
nc: connect to c1 port 3000 (tcp) failed: Connection refused
The same situation applies between my host and any of my containers. Only ports 22 and 80 seem to be reachable by default. I tried enabling ufw on all containers, but it still doesn't work out:
root@c1:~# ufw status
Status: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
22/tcp ALLOW Anywhere
22 ALLOW Anywhere
443 ALLOW Anywhere
873 ALLOW Anywhere
3000 ALLOW Anywhere
Anywhere on eth0@if16 ALLOW Anywhere
Apache ALLOW Anywhere
80 ALLOW Anywhere
20 ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
22/tcp (v6) ALLOW Anywhere (v6)
22 (v6) ALLOW Anywhere (v6)
443 (v6) ALLOW Anywhere (v6)
873 (v6) ALLOW Anywhere (v6)
3000 (v6) ALLOW Anywhere (v6)
Anywhere (v6) on eth0@if16 ALLOW Anywhere (v6)
Apache (v6) ALLOW Anywhere (v6)
80 (v6) ALLOW Anywhere (v6)
20 (v6) ALLOW Anywhere (v6)
Anywhere ALLOW OUT Anywhere on eth0@if16
Anywhere (v6) ALLOW OUT Anywhere (v6) on eth0@if16
Even testing via curl clearly shows that the connection to the port is refused, and that's the issue:
root@c2:~# curl https://10.155.120.175:3000/
curl: (7) Failed to connect to 10.155.120.175 port 3000: Connection refused
I have been stuck in this issue for a week, can anyone help me troubleshoot this?
Edit (additional data):
results for netstat on container1:
root@c1:~# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 289/systemd-resolve
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1385/sshd
tcp 0 0 127.0.0.1:3000 0.0.0.0:* LISTEN 293/MyApp
tcp6 0 0 :::80 :::* LISTEN 310/apache2
tcp6 0 0 :::22 :::* LISTEN 1385/sshd
|
Port 3000 is only listening on localhost (see the 127.0.0.1:3000 line in your netstat output). You need to configure your application so that it listens on 0.0.0.0, as the other services do.
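A minimal way to see the difference, assuming python3 is available (its built-in server is only a stand-in for the asker's app; in /proc/net/tcp, 0100007F:0BB8 is hex for 127.0.0.1:3000):

```shell
#!/bin/sh
# Start a throwaway server bound to loopback only, then inspect the
# kernel's socket table: 0100007F:0BB8 == 127.0.0.1:3000, which other
# hosts/containers cannot reach; 00000000:0BB8 would mean 0.0.0.0:3000.
python3 -m http.server 3000 --bind 127.0.0.1 >/dev/null 2>&1 &
pid=$!
sleep 1
grep -i '0100007F:0BB8' /proc/net/tcp > /dev/null && echo "bound to loopback only"
kill "$pid"
```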
| Connection refused between 2 linux containers |
1,431,619,835,000 |
I have a bind problem with lxd init. Port 8443 is not used by any other application; therefore I think lxd init tries to bind this port twice.
My lxd version is 3.14 and I am using Gentoo.
Do you have any idea how to solve this please?
alpha /var/log # lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: yes
What name should be used to identify this node in the cluster? [default=alpha]: alpha.stty.cz
What IP address or DNS name should be used to reach this node? [default=171.25.220.247]: alpha.stty.cz
Are you joining an existing cluster? (yes/no) [default=no]:
Setup password authentication on the cluster? (yes/no) [default=yes]:
Trust password for new clients:
Again:
Do you want to configure a new local storage pool? (yes/no) [default=yes]:
Name of the storage backend to use (btrfs, dir, lvm) [default=btrfs]:
Would you like to create a new btrfs subvolume under /var/lib/lxd? (yes/no) [default=yes]:
Do you want to configure a new remote storage pool? (yes/no) [default=no]:
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to configure LXD to use an existing bridge or host interface? (yes/no) [default=no]: yes
Name of the existing bridge or host interface: ovs-br0
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
Error: Failed to update server configuration: cannot listen on https socket: listen tcp 171.25.220.247:8443: bind: address already in use
The output of sudo netstat -pna | grep 8443 is
unix 3 [ ] STREAM CONNECTED 28443 7135/konsole
The issue is also published on Github. (https://github.com/lxc/lxd/issues/7560)
|
More than likely you will need to update your version of lxd to 3.19 or higher; searching through bugs and issues brought me to this thread:
github.com/lxc/lxd/issues/6682
netstat does not show any process listening on 8443, and nothing in your configuration looks pear-shaped. It simply looks to be a bug that you've caught in an older version of lxd.
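Worth noting: the netstat check in the question actually matched a unix-socket inode (28443), not TCP port 8443, which is a grep false positive. Anchoring the pattern (or using a filter such as `ss -tlnp 'sport = :8443'`) avoids it:

```shell
#!/bin/sh
# The line netstat printed is a unix socket whose inode happens to
# contain "8443"; an anchored pattern avoids the false match.
line='unix 3 [ ] STREAM CONNECTED 28443 7135/konsole'
echo "$line" | grep -q '8443'   && echo "naive grep: false match"
echo "$line" | grep -q ':8443 ' || echo "anchored grep: no match"
```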
| lxd init: bind: address already in use |
1,431,619,835,000 |
I am trying to explore the page fault behavior of Linux.
I made an lxc container, restricting its memory to 1GB
(by adding 'lxc.cgroup.memory.limit_in_bytes = 1G' to /etc/lxc/default.conf).
Then I ran a simple program which accesses 2GB of data.
#include <stdio.h>
#include <stdlib.h>

int main() {
    char* buf = malloc(1024*1024*1024);
    char* buf2 = malloc(1024*1024*1024);
    if (buf == 0 || buf2 == 0) {
        printf("Malloc failed!\n");
        return 0;
    }
    int i,j,k;
    for (i=0; i<1024; i++)
        for (j=0; j<1024; j++)
            for (k=0; k<1024; k++)
                buf[i*1024*1024 + j*1024 + k] = i+j+k;
    for (i=0; i<1024; i++)
        for (j=0; j<1024; j++)
            for (k=0; k<1024; k++)
                buf2[i*1024*1024 + j*1024 + k] = i+j+k;
    free(buf);
    free(buf2);
    while(1);
    return 0;
}
The code is compiled with -O0 and ran inside the container.
When the program reaches the while(1);, I check how many page fault it experienced with
ps -eo maj_flt,cmd | grep a.out
Where a.out is the compiled executable.
Sometimes I get 200~300 page faults; however, sometimes I only see 10~20 page faults.
Because memory is limited to 1G, I think at least 1G/4K = 256K page faults should always be happening.
Why am I only seeing 10~20 page faults sometimes? I confirmed my Linux uses 4K pages by default.
I am new to Linux. Any insights will be very helpful! Thank you.
|
I figured out the problem.
A major problem with my code was that on the first write to a malloc'ed page, a major page fault does not occur, because Linux does not have to read anything from disk (it just maps a fresh zero page). I changed the code so that it runs the looping part twice.
Also, I disabled Linux readahead (by echo "0" >> /proc/sys/vm/page-cluster)
With the two changes, I was able to see roughly 2G / 4K = 524,288 page faults
(precisely 524,304).
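The distinction between minor and major faults can be checked quickly without the C program (a sketch; python3 stands in for it, and bytearray's zero-fill touches every page):

```shell
#!/bin/sh
# Touching freshly allocated anonymous memory accumulates only *minor*
# faults: no disk read is needed, so maj_flt barely moves, which is why
# the maj_flt counts in the question stayed so low.
python3 -c '
import resource
buf = bytearray(64 * 1024 * 1024)            # allocate and touch 64 MiB
ru = resource.getrusage(resource.RUSAGE_SELF)
print("minor faults:", ru.ru_minflt)
print("major faults:", ru.ru_majflt)
'
```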
| Why am I not seeing as many page faults as I expect? |
1,431,619,835,000 |
I have a system with Proxmox VE 5.1 and a LXC container with Fedora 27.
The container has 2 disk (raw format), the rootfs and an additional mount point, both of them are in ext4, I want to format to xfs the second mount point.
I've tried to use the typical mkfs.xfs, but I don't know where the Linux block device is; it isn't in the /dev directory.
The mount command shows:
/var/lib/vz/images/111/vm-111-disk-1.raw on / type ext4 (rw,relatime,data=ordered)
/var/lib/vz/images/111/vm-111-disk-2.raw on /var/db_data type ext4 (rw,relatime,data=ordered)
The df -h shows:
/dev/loop6 20G 1.1G 18G 6% /
/dev/loop7 9.8G 37M 9.3G 1% /var/db_data
However the "loopX" devices doesn't exist in container disk.
I've searched in proxmox forums without luck, so I don't know if this is a proxmox limitation...
|
In your example, the block device is /dev/loop7; it's a loop device backed by the file /var/lib/vz/images/111/vm-111-disk-2.raw. Per Wikipedia:
In Unix-like operating systems, a loop device, vnd (vnode disk), or
lofi (loop file interface) is a pseudo-device that makes a file
accessible as a block device.
There's no indication that your disk images contain partitions, so you can either create the filesystem from:
Within the container (recommended): mkfs.xfs /dev/loop7
From the host while the container is NOT running: mkfs.xfs /var/lib/vz/images/111/vm-111-disk-2.raw
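A sketch of option 2 on a throwaway file (substitute the real image path; the container must be stopped first, and mkfs destroys whatever is on the image):

```shell
#!/bin/sh
# mkfs tools operate happily on a regular file, which is all a raw image
# is. (mkfs.* often lives in /sbin, hence the PATH line; -F skips the
# "not a block device" prompt. ext4 is used only so this sketch runs
# anywhere; mkfs.xfs -f works the same way on the real image.)
PATH="$PATH:/sbin:/usr/sbin"
img=$(mktemp)
truncate -s 64M "$img"
mkfs.ext4 -q -F "$img" && echo "filesystem created"
rm -f "$img"
```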
| Format a raw image to XFS in Proxmox VE |
1,431,619,835,000 |
I have a problem which for the past several months has progressively been getting worse, and now I'm in a state where, if I try to do almost any IO-intensive operation on my virtualization host (Proxmox VE 5.1-41), like a backup or even cp/rsync/dd, the speed of the transfer will drop to KB/s and the server will hang pretty much indefinitely, giving me lots of "task hung for more than 120s" messages etc.
For a long time I thought that it was a problem with the disks; I'm running 2x SSD in ZFS RAID 1 for VM storage, but recently I really started to get desperate because I'm now not able to do any backups anywhere other than to the SSDs themselves (speed when copying from one pool to the same pool is OK).
I then tried the same speed tests that I run on the host inside KVM/LXC, and behold, the speeds showed no problems at all: no slowdowns, everything working as expected.
This finding also explained why I never found out about this problem before, because I was always testing the performance of the VMs, never thinking that the performance of the host would be worse than the guests'.
I already posted about this problem on the Proxmox forums, but I'm not entirely sure that it's actually their system's fault, and I would love to hear what some of you would propose as tests to find out what is causing this.
I already tested with all guest OSes turned off and nothing changed.
The machine has plenty of free resources available in normal usage.
There is enough space on disk and in RAM.
CPU is: Intel Xeon E5-2620 v4
RAM: 64 GB
DATA DISKS: 2x 1TB SSD in ZFS RAID 10
BOOT DISK: 2x satadom 32 GB in ZFS RAID 10
EDIT: The only thing that is abnormal on the graphs inside Proxmox during high IO on the host is the server load, which rockets to around 50; then most of the time all graphs cut out because of the load. Actual CPU load and RAM usage stay quite low.
Many thanks for any idea!
EDIT 2:
These are the stats during a data transfer (with rsync) from the data SSDs, sdd & sde (ZFS RAID 1), to a test HDD pair, sda & sdb (BTRFS RAID 1); but the actual load lands on sdf & sdg (and zd0, the swap), which are the system SSDs (ZFS RAID 1).
(The load can be seen from the second measurement onwards.)
iostat -x -d 2
Linux 4.13.13-2-pve (klaas) 01/03/2018 _x86_64_ (16 CPU)
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
loop0 0.00 0.00 0.04 2.00 0.17 869.78 850.88 0.01 4.13 5.71 4.10 1.46 0.30
sda 0.00 0.00 0.00 0.00 0.00 0.00 40.94 0.00 2.98 2.98 0.00 1.96 0.00
sdb 0.00 0.00 0.00 0.00 0.00 0.00 40.94 0.00 3.58 3.58 0.00 2.96 0.00
sdc 0.00 2.00 0.02 0.71 0.26 108.82 297.28 0.02 22.87 7.26 23.33 9.11 0.67
sdd 0.00 0.01 12.79 39.53 794.05 645.26 55.02 0.02 0.29 0.71 0.15 0.19 0.99
sde 0.00 0.00 12.80 39.00 794.16 645.26 55.58 0.02 0.30 0.72 0.17 0.20 1.04
sdf 0.00 0.00 0.88 10.16 10.27 139.85 27.22 0.13 11.66 4.42 12.28 5.96 6.57
sdg 0.00 0.00 0.89 10.39 10.32 139.85 26.63 0.14 12.53 4.38 13.24 6.41 7.23
zd0 0.00 0.00 0.04 0.24 0.16 0.94 8.00 0.02 87.75 5.03 101.71 35.04 0.97
zd16 0.00 0.00 0.33 0.46 2.82 8.71 28.95 0.00 1.17 0.28 1.80 0.11 0.01
zd32 0.00 0.00 0.03 5.96 0.77 88.80 29.88 0.00 0.19 0.31 0.18 0.02 0.01
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
loop0 0.00 0.00 0.00 0.50 0.00 2.00 8.00 0.00 0.00 0.00 0.00 0.00 0.00
sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdc 0.00 0.50 0.00 1.00 0.00 6.00 12.00 0.01 6.00 0.00 6.00 6.00 0.60
sdd 0.00 0.00 17.50 16.50 24.00 162.00 10.94 0.01 0.35 0.69 0.00 0.35 1.20
sde 0.00 0.00 16.50 16.50 18.00 162.00 10.91 0.01 0.30 0.61 0.00 0.30 1.00
sdf 0.00 0.50 0.50 2.50 0.00 22.00 14.67 2.70 754.67 792.00 747.20 333.33 100.00
sdg 0.00 0.00 2.50 3.00 8.00 30.00 13.82 0.39 73.45 128.00 28.00 35.64 19.60
zd0 0.00 0.00 0.00 1.50 0.00 6.00 8.00 3.99 728.00 0.00 728.00 666.67 100.00
zd16 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
zd32 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
loop0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdc 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdd 0.00 0.00 20.50 6.00 1566.00 104.00 126.04 0.01 0.30 0.39 0.00 0.23 0.60
sde 0.00 0.00 20.00 6.00 1690.00 104.00 138.00 0.01 0.46 0.40 0.67 0.38 1.00
sdf 0.00 0.50 13.50 44.50 10.00 646.00 22.62 2.93 68.03 78.67 64.81 16.97 98.40
sdg 0.50 0.50 19.00 44.00 40.00 630.00 21.27 2.85 44.41 34.74 48.59 15.24 96.00
zd0 0.00 0.00 0.00 11.00 0.00 44.00 8.00 2.59 375.45 0.00 375.45 91.09 100.20
zd16 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
zd32 0.00 0.00 0.00 4.00 0.00 32.00 16.00 0.00 0.00 0.00 0.00 0.00 0.00
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
loop0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdc 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdd 0.00 0.00 61.00 117.00 7028.00 3236.00 115.33 0.04 0.25 0.49 0.12 0.19 3.40
sde 0.00 0.00 40.00 84.00 4680.00 3236.00 127.68 0.07 0.55 1.20 0.24 0.40 5.00
sdf 0.00 0.50 7.00 9.50 78.00 852.00 112.73 3.64 222.18 147.71 277.05 60.61 100.00
sdg 0.00 0.00 7.00 15.50 32.00 1556.00 141.16 2.89 121.60 59.71 149.55 44.44 100.00
zd0 0.00 0.00 0.00 21.00 0.00 84.00 8.00 19.72 2074.95 0.00 2074.95 47.62 100.00
zd16 0.00 0.00 0.00 1.00 0.00 4.00 8.00 0.00 0.00 0.00 0.00 0.00 0.00
zd32 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
loop0 0.00 0.00 0.00 1.00 0.00 4.00 8.00 0.00 0.00 0.00 0.00 0.00 0.00
sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdc 0.00 0.50 0.00 1.50 0.00 8.00 10.67 0.02 13.33 0.00 13.33 13.33 2.00
sdd 0.00 0.00 10.50 4.00 832.00 50.00 121.66 0.01 0.41 0.57 0.00 0.28 0.40
sde 0.00 0.00 8.50 4.00 576.00 50.00 100.16 0.02 1.28 0.94 2.00 1.12 1.40
sdf 0.00 2.00 5.50 11.50 12.00 1534.00 181.88 2.76 160.59 110.18 184.70 58.82 100.00
sdg 0.00 1.50 6.00 13.00 48.00 1622.00 175.79 2.86 156.42 107.67 178.92 52.63 100.00
zd0 0.00 0.00 4.00 34.50 16.00 138.00 8.00 22.63 692.10 120.00 758.43 25.97 100.00
zd16 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
zd32 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
loop0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdc 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdd 0.00 0.00 5.50 17.00 4.00 378.00 33.96 0.00 0.09 0.36 0.00 0.09 0.20
sde 0.00 0.00 7.50 6.50 42.00 98.00 20.00 0.01 0.71 0.53 0.92 0.57 0.80
sdf 0.00 1.00 7.50 11.00 28.00 1384.00 152.65 3.16 152.65 105.60 184.73 54.05 100.00
sdg 0.00 0.50 4.00 8.00 16.00 976.00 165.33 3.36 208.00 192.50 215.75 83.33 100.00
zd0 0.00 0.00 7.00 17.50 28.00 70.00 8.00 25.68 592.65 231.71 737.03 40.82 100.00
zd16 0.00 0.00 0.00 3.50 0.00 14.00 8.00 0.00 0.00 0.00 0.00 0.00 0.00
zd32 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
loop0 0.00 0.00 0.00 0.50 0.00 2.00 8.00 0.00 0.00 0.00 0.00 0.00 0.00
sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdc 0.00 0.50 0.00 1.00 0.00 6.00 12.00 0.01 10.00 0.00 10.00 10.00 1.00
sdd 0.00 0.00 3.00 66.50 14.00 1308.00 38.04 0.01 0.17 1.33 0.12 0.12 0.80
sde 0.00 0.00 2.50 57.00 0.00 1588.00 53.38 0.01 0.24 1.60 0.18 0.17 1.00
sdf 0.00 0.00 1.50 1.00 6.00 128.00 107.20 3.27 1056.80 1004.00 1136.00 400.00 100.00
sdg 0.00 0.00 0.00 0.50 0.00 64.00 256.00 3.62 2176.00 0.00 2176.00 2000.00 100.00
zd0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 25.00 0.00 0.00 0.00 0.00 100.00
zd16 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
zd32 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Other than that, the system SSDs have low utilization. But I don't get why it utilizes drives that should not participate in the data transfers at all.
EDIT 3:
The transfer starts at the second measurement; data is copied from DP1 to other HDDs with BTRFS. rpool (the ZFS RAID 1 SSDs) is being utilized at 100%, but it does not look like it's due to actual bandwidth
zpool iostat 2
capacity operations bandwidth
pool alloc free read write read write
---------- ----- ----- ----- ----- ----- -----
DP1 554G 334G 16 80 1.22M 1.31M
rpool 6.69G 23.1G 0 21 17.0K 286K
---------- ----- ----- ----- ----- ----- -----
DP1 554G 334G 616 195 76.7M 4.85M
rpool 6.69G 23.1G 9 38 216K 3.87M
---------- ----- ----- ----- ----- ----- -----
DP1 554G 334G 1.05K 131 133M 1.41M
rpool 6.69G 23.1G 0 29 0 3.03M
---------- ----- ----- ----- ----- ----- -----
DP1 554G 334G 0 0 4.00K 0
rpool 6.69G 23.1G 0 25 0 3.25M
---------- ----- ----- ----- ----- ----- -----
DP1 554G 334G 1 0 8.00K 0
rpool 6.69G 23.1G 0 25 2.00K 3.14M
---------- ----- ----- ----- ----- ----- -----
DP1 554G 334G 0 0 4.00K 0
rpool 6.69G 23.1G 3 26 114K 3.10M
---------- ----- ----- ----- ----- ----- -----
DP1 554G 334G 0 0 2.00K 0
rpool 6.69G 23.1G 0 20 0 2.56M
---------- ----- ----- ----- ----- ----- -----
DP1 554G 334G 0 0 2.00K 0
rpool 6.69G 23.1G 0 15 4.00K 1.94M
---------- ----- ----- ----- ----- ----- -----
DP1 554G 334G 0 0 0 0
rpool 6.69G 23.1G 0 25 0 3.19M
---------- ----- ----- ----- ----- ----- -----
DP1 554G 334G 21 0 130K 0
rpool 6.69G 23.1G 0 14 0 1.81M
---------- ----- ----- ----- ----- ----- -----
DP1 554G 334G 0 0 8.00K 0
rpool 6.69G 23.1G 0 1 2.00K 256K
---------- ----- ----- ----- ----- ----- -----
DP1 554G 334G 0 0 2.00K 0
rpool 6.69G 23.1G 0 12 0 1.62M
---------- ----- ----- ----- ----- ----- -----
DP1 554G 334G 0 0 0 0
rpool 6.69G 23.1G 1 18 8.00K 2.37M
---------- ----- ----- ----- ----- ----- -----
DP1 554G 334G 0 0 0 0
rpool 6.69G 23.1G 8 15 84.0K 2.00M
It's related of course to some IO problem, because even when I stop the transfer, the host (Proxmox GUI) will freeze and not respond for 5-15 minutes, and commands like df in the CLI will not respond at all for the same period of time. All VMs running on the machine keep working as expected without any slowdown.
The amount of data actually written to the system SSDs is so small that the used space (21%) and swap usage (360 MB out of 3.6 GB, swappiness set to 10) barely change.
I also tried changing the disk scheduler multiple times; right now I'm on noop.
I noticed that when I'm watching top, there are multiple z_wr_iss threads running for a long time
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1967 root 1 -19 0 0 0 S 0.3 0.0 5:15.03 z_wr_iss
1969 root 1 -19 0 0 0 S 0.3 0.0 5:14.76 z_wr_iss
1974 root 1 -19 0 0 0 S 0.3 0.0 5:14.56 z_wr_iss
1975 root 1 -19 0 0 0 S 0.3 0.0 5:14.71 z_wr_iss
1981 root 0 -20 0 0 0 S 0.3 0.0 4:02.77 z_wr_int_1
1984 root 0 -20 0 0 0 S 0.3 0.0 4:02.33 z_wr_int_4
1986 root 0 -20 0 0 0 S 0.3 0.0 4:02.29 z_wr_int_6
Right now I'm not able to run iotop because the system starts freezing as soon as I run it; it's still slowed down from the previous tests.
OK, it's probably caused by the ZFS problem posted in @Mark's answer, because now when I run iotop (I had never seen this before)
3268 be/0 root 0.00 B/s 0.00 B/s 0.00 % 99.99 % [z_null_int]
is definitely there.
|
It sounds as though you are having a problem similar to one described by various people over the last 8 or so months. In essence, the version of ZFS shipped with Proxmox 5.1 is reported to have a bug which in certain circumstances results in cripplingly high IO (search for "z_null_int high disk I/O #6171").
Two current options are to run Proxmox 4.1 (ZFS 0.6) or to use an alternate file system on your Proxmox 5.1 host.
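To check whether a host is on an affected ZFS release, something like the following can be used (a hedged sketch; paths and tools vary by distribution):

```shell
#!/bin/sh
# The loaded module's version is exported under /sys; fall back to
# modinfo, and finally to a message on machines without ZFS at all.
cat /sys/module/zfs/version 2>/dev/null \
  || modinfo zfs 2>/dev/null | grep -i '^version' \
  || echo "zfs module not available on this machine"
```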
| High IO on KVM/LXC host will hang server but not on guest (Proxmox/Debian) |
1,431,619,835,000 |
I set up lxd on my host system with the lxdbr0 bridge as NIC. Now my containers get their IP addresses via DHCP from lxdbr0 (in the range 10.204.x.x).
I also have 2 public IP addresses: one for the host (x.x.x.x) and one for the container (b.b.b.b). The container should use the second public IP for outgoing and incoming traffic. Both public IP addresses go to the host system, so my host system gets all traffic in the first place.
I already accomplished setting up a preroute (on the host) from my public IP to the private IP, so that all incoming traffic for the public IP goes to a specific container.
BUT I can't figure out how to route the outgoing traffic FROM the container to the public IP. I've tried to set up a preroute like I did with the incoming traffic, but with no result.
iptables -L shows
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT tcp -- anywhere anywhere tcp dpt:domain /* managed by lxd-bridge */
ACCEPT udp -- anywhere anywhere udp dpt:domain /* managed by lxd-bridge */
ACCEPT tcp -- anywhere anywhere tcp dpt:bootps /* managed by lxd-bridge */
ACCEPT udp -- anywhere anywhere udp dpt:bootps /* managed by lxd-bridge */
Chain FORWARD (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere /* managed by lxd-bridge */
ACCEPT all -- anywhere anywhere /* managed by lxd-bridge */
iptables -t nat -L shows
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
DNAT all -- anywhere ip-b.ip-b-b-b.eu to:10.204.119.5
DNAT all -- anywhere 10.204.119.5 to:b.b.b.b
b.b.b.b --> second public ip (for the container)
10.204.119.5 --> containers (private) ip in the lxdbr0 bridge
Incoming traffic on the public IP gets routed to the container, but the outgoing traffic from the container doesn't.
I also set LXD_IPV4_NAT="false" in the lxd bridge config, since NAT made the containers use my host's IP address for outgoing traffic (which I don't want).
EDIT #1:
route -n shows
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 x.x.x.1 0.0.0.0 UG 0 0 0 ens3
10.204.119.0 0.0.0.0 255.255.255.0 U 0 0 0 lxdbr0
x.x.x.1 0.0.0.0 255.255.255.255 UH 0 0 0 ens3
x.x.x.1 --> gateway of my hosts ip (x.x.x.x)
EDIT #2: Example
- pIP1 = public ip 1, should be used for host
- pIP2 = " " 2, should be used for the container
the container runs on the host system.
container = 10.204.119.5 (device lxdbr0)
host = pIP1 (device ens3) and pIP2 (device ens3:0)
Outgoing packets from the container come with the source IP 10.204.119.5.
Now these packets should have their source IP changed to pIP2 and then be sent to the
gateway (so it appears to the router that the packet from the container
comes from pIP2).
|
All you need to do is NAT the traffic coming from the container's private IP so that it leaves the host with the container's public IP ($publicIP2):
iptables -t nat -A POSTROUTING -s 10.204.119.5/32 -j SNAT --to-source $publicIP2
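For completeness, a hedged sketch of the full 1:1 mapping as a rule pair (the PREROUTING half corresponds to what is already in place; $publicIP2 is the container's second public address):

```
# inbound: anything arriving for the container's public IP goes to its private IP
iptables -t nat -A PREROUTING -d $publicIP2 -j DNAT --to-destination 10.204.119.5
# outbound: traffic leaving the container's private IP is rewritten to its public IP
iptables -t nat -A POSTROUTING -s 10.204.119.5/32 -j SNAT --to-source $publicIP2
```

With both halves in place, the container appears to the outside world as $publicIP2 in both directions.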
| Route outging traffic from private network (lxdbr0) |
1,431,619,835,000 |
From my understanding of container technology (like lxc or docker), when you create a container it by default uses a "private network" like 10.0.1.x.
I do not understand why you would not want it on the same network (behind the same NAT) as your host OS. Why would you not want your container bridged to your primary network interface by default?
Is it for security reasons? And is there a faster command to bridge the container interface, or do I really have to apt-get install bridge-utils and configure everything for lxc?
Please help me understand.
|
Setting up a bridge is extra work. Someone has to do this work. Since some containers should be bridged and others shouldn't be bridged, container utilities don't systematically set up a bridge.
It's very common for a container not to be bridged. The point of a container is to isolate the container from the rest of the world. This often means that the container should not have unconstrained network access. If the host does the firewalling, then the container must not be bridged.
| Why would a container not be bridged to the local network by default? |
1,431,619,835,000 |
I'm trying to get to grips with Incus, because it looks like it is a fork of Canonical's LXD, which I can run fairly easily on Debian 12 with a deb package, rather than using snaps.
I have it all set up in a virtual machine on my KVM, running with both a basic directory based storage pool, and a zfs storage pool. I have spun up a test container called test that I want to take a stateful snapshot of, but it tells me that:
To create a stateful snapshot, the instance needs the migration.stateful config set to true
After reading the documentation on configuring an instance's options, I have tried running these various commands (the first being the one that I think is the most likely to be correct):
incus config set test migration.stateful=true
incus config set test config.migration.stateful=true
incus config set test.migration.stateful=true
incus config set migration.stateful=true
... but I always get an error message similar to below about an unknown configuration key:
Error: Invalid config: Unknown configuration key: migration.stateful
I have also tried setting the option through the YAML configuration, but it just gets stuck on "Processing..."
How is one supposed to enable stateful snapshots of incus linux containers? Perhaps this is just not possible because I am running inside a virtual machine, rather than the physical box?
|
In the end I tested out Incus on a physical box and did not experience this issue. I would recommend that anyone using Incus with the goal of running virtual machines, rather than just containers, do this on a physical host.
| Incus - Setting migration.stateful for stateful snapshots |
1,431,619,835,000 |
My preliminary actions:
setting up a ddns hostname with noip service (ok)
configured to automatically keep alive the association on my home router(ok)
installed a proxmox server v8 (ok)
create a lxc container with model "debian11-turnkey-wordpress" with a static IP (ok)
configured port forwarding 80,443 on router to point the lxc wordpress container(ok)
The current situation:
The website works fine in the LAN
When I try to access it from the internet with the DDNS hostname, it works the first time, and then both http and https give an SSL certificate error
When I try to get a Let's Encrypt certificate with the selfconsole panel, it fails with a fatal error
My questions: how can I correctly implement the SSL certificate in the container so that HTTPS works from the internet to the LAN container inside Proxmox, with forwarding from outside to the home network?
Must the configuration be applied only on the containers, or is there something to do in the Proxmox OS itself so that all new containers are SSL-encrypted by default?
|
The problem was a bad configuration of port 443 on the router. Problem solved.
| Obtain a Wordpress Website with a Proxmox container available from outside with https (ssl encryption) |
1,431,619,835,000 |
OS: Debian Buster 10.10 inside lxc
I am attempting to install a new package (I tried different packages) and apt (and dpkg) complains with the following error message(s):
/etc/etckeeper/pre-install.d/README: 1: /etc/etckeeper/pre-install.d/README: Files: not found
/etc/etckeeper/pre-install.d/README: 2: /etc/etckeeper/pre-install.d/README: etc.: not found
/etc/etckeeper/pre-install.d/README: 3: /etc/etckeeper/pre-install.d/README: uncommitted: not found
E: Problem executing scripts DPkg::Pre-Invoke 'if [ -x /usr/bin/etckeeper ]; then etckeeper pre-install; fi'
E: Sub-process returned an error code
I decided to uninstall etckeeper and got the exact same error message.
My googling / searching seems to be lacking. My reasoning is that the problem lies with etckeeper, although I could be wrong.
|
edit: update with more information found from here. I renamed the following directories and recreated them:
/etc/etckeeper/
pre-install.d
post-install.d
unclean.d
And it allowed me to install new packages.
If you want to get rid of etckeeper all together:
rm -rf /var/lib/dpkg/info/etckeeper.*
rm -rf /usr/share/etckeeper
rm -rf /etc/default/etckeeper
rm -rf /etc/init.d/etckeeper
apt-get purge etckeeper
mv /usr/bin/etckeeper /usr/bin/etckeeper.bak
mv /etc/etckeeper/ /etc/etckeeper.bak
| How can I fix etckeeper or uninstall it so apt will install / remove new packages? |
1,431,619,835,000 |
Debian Buster amd64
Two containers, 192.168.122.2 and 192.168.122.3; both can resolve DNS names but cannot get to the Internet.
Both containers can ping / interact with the host server.
Here is what I have in iptables.
# Generated by xtables-save v1.8.2 on Sat Mar 6 17:16:16 2021
*filter
:INPUT ACCEPT [47377:13690982]
:FORWARD ACCEPT [419:628058]
:OUTPUT ACCEPT [24929:4008372]
:POSTROUTING - [0:0]
-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 8080 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 4430 -j ACCEPT
-A INPUT -i virbr0 -p udp -m udp --dport 53 -j ACCEPT
-A INPUT -i virbr0 -p tcp -m tcp --dport 53 -j ACCEPT
-A INPUT -i virbr0 -p udp -m udp --dport 67 -j ACCEPT
-A INPUT -i virbr0 -p tcp -m tcp --dport 67 -j ACCEPT
-A FORWARD -d 192.168.122.2/32 -p tcp -m tcp --dport 80 -j ACCEPT
-A FORWARD -d 192.168.122.2/32 -p tcp -m tcp --dport 443 -j ACCEPT
-A FORWARD -d 192.168.122.3/32 -p tcp -m tcp --dport 8080 -j ACCEPT
-A FORWARD -d 192.168.122.3/32 -p tcp -m tcp --dport 4430 -j ACCEPT
-A FORWARD -d 192.168.122.0/24 -o virbr0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -s 192.168.122.0/24 -i enxd03745c9b08e -j ACCEPT
COMMIT
# Completed on Sat Mar 6 17:16:16 2021
# Generated by xtables-save v1.8.2 on Sat Mar 6 17:16:16 2021
*nat
:PREROUTING ACCEPT [2101:142603]
:INPUT ACCEPT [1480:106813]
:POSTROUTING ACCEPT [430:29500]
:OUTPUT ACCEPT [329:23520]
-A PREROUTING -i enxd03745c9b08e -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.122.2:80
-A PREROUTING -i enxd03745c9b08e -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.122.2:80
-A PREROUTING -i enxd03745c9b08e -p tcp -m tcp --dport 443 -j DNAT --to-destination 192.168.122.2:443
-A PREROUTING -i enxd03745c9b08e -p tcp -m tcp --dport 8080 -j DNAT --to-destination 192.168.122.3:8080
-A PREROUTING -i enxd03745c9b08e -p tcp -m tcp --dport 4430 -j DNAT --to-destination 192.168.122.3:4430
COMMIT
# Completed on Sat Mar 6 17:16:16 2021
# Generated by xtables-save v1.8.2 on Sat Mar 6 17:16:16 2021
*mangle
:PREROUTING ACCEPT [49751:14725298]
:INPUT ACCEPT [47442:13695764]
:FORWARD ACCEPT [1555:987308]
:OUTPUT ACCEPT [24929:4008372]
:POSTROUTING ACCEPT [26484:4995680]
COMMIT
# Completed on Sat Mar 6 17:16:16 2021
|
I found the fix for it. https://discuss.linuxcontainers.org/t/internet-access-issue-inside-container/5258
I had to use iptables-legacy and do the following:
/sbin/iptables-legacy -t nat -A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -p tcp -j MASQUERADE --to-ports 1024-65535
/sbin/iptables-legacy -t nat -A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -p udp -j MASQUERADE --to-ports 1024-65535
/sbin/iptables-legacy -t nat -A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -j MASQUERADE
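Background, as a hedged note: on Debian Buster, `iptables` defaults to the nf_tables backend, while some tools still program the legacy backend, so rules can end up split across two rule sets that do not see each other. Which backend a binary drives shows up in its version string:

```shell
#!/bin/sh
# Each binary reports its backend in parentheses, e.g.
# "iptables v1.8.2 (nf_tables)" vs "iptables v1.8.2 (legacy)".
PATH="$PATH:/sbin:/usr/sbin"
iptables -V 2>/dev/null || echo "iptables not installed here"
iptables-legacy -V 2>/dev/null || echo "iptables-legacy not installed here"
```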
| lxc containers can ping host and can resolve dns internet addresses but cannot get to the internet |
1,431,619,835,000 |
I have a Linux server (Debian Buster) with LXC configured for unprivileged containers.
I also see a lot of crashes from sssd_be; dmesg on the server shows:
dmesg|grep segfault
sssd_be[6739]: segfault at 8 ip 00007f080b190714 sp 00007ffc24a170a8 error 4 in libdbus-1.so.3.19.7[7f080b15d000+52000]
sssd_be[7517]: segfault at 8 ip 00007fec6ca4f714 sp 00007ffc71eec028 error 4 in libdbus-1.so.3.19.7[7fec6ca1c000+52000]
sssd_be[8853]: segfault at 8 ip 00007f9d181be714 sp 00007ffd42f784e8 error 4 in libdbus-1.so.3.19.7[7f9d1818b000+52000]
sssd_be[15961]: segfault at 8 ip 00007f1560855714 sp 00007ffc784710e8 error 4 in libdbus-1.so.3.19.7[7f1560822000+52000]
sssd_be[16728]: segfault at 8 ip 00007fa83b9df714 sp 00007fff1432b228 error 4 in libdbus-1.so.3.19.7[7fa83b9ac000+52000]
sssd_be[30789]: segfault at 8 ip 00007f0c21213714 sp 00007ffd37808908 error 4 in libdbus-1.so.3.19.7[7f0c211e0000+52000]
sssd_be[13515]: segfault at 8 ip 00007f67fd079714 sp 00007ffdae2dac78 error 4 in libdbus-1.so.3.19.7[7f67fd046000+52000]
sssd_be[26637]: segfault at 8 ip 00007fa775531714 sp 00007ffd3bd1b9a8 error 4 in libdbus-1.so.3.19.7[7fa7754fe000+52000]
sssd_be[4466]: segfault at 8 ip 00007f10cc150714 sp 00007ffe8e909a08 error 4 in libdbus-1.so.3.19.7[7f10cc11d000+52000]
sssd_be[11382]: segfault at 8 ip 00007f0bcddee714 sp 00007fffcd021998 error 4 in libdbus-1.so.3.19.7[7f0bcddbb000+52000]
I want to identify the container in which the crashing process is running; how can I do that?
htop reports OpenVZ container names, but not LXC container names.
So I tried ps, but a very strange result appears:
ps -efww -O lxc
PID LXC S TTY TIME COMMAND
31825 - S pts/9 00:00:00 bash
1478 - R pts/9 00:00:00 \_ ps
Is there any way to see the name of the container?
|
I found a workaround.
Using Ansible, I created a script that reports which domain is running sssd_be:
#!/bin/sh
pgrep -a sssd
cat /etc/hostname
I run the script with this "playbook", which shows the result on stdout:
- name: Transfer and execute a script.
hosts: all
become_user: root
tasks:
- script: script.sh
register: results
- debug:
var: results.stdout
After running the script I found two domains with the sssd_be process, so I disabled sssd on those machines and now everything works fine.
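As a hedged alternative from the host, a process's cgroup path often embeds the LXC container name, e.g. ".../lxc.payload.<name>" on cgroup v2 or "/lxc/<name>" on v1:

```shell
# Hypothetical sketch: inspect a PID's cgroup path on the host.
# $$ (the current shell) is a placeholder just so the command runs anywhere;
# substitute one of the crashing PIDs, e.g. 6739 from the dmesg output above.
cat /proc/$$/cgroup
```

Whether the container name appears, and in what form, depends on the LXC and cgroup versions in use.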
| Is it possible to identify the LXC unprivileged container owner of a process? |
1,431,619,835,000 |
I am having some trouble with LXC unprivileged containers on Debian.
I follow this method:
a) I create the unprivileged user with its home in /var/lxcunpriv
useradd -m -d /var/lxcunpriv lxcunpriv
b) I install the required packages
apt -y install lxc libvirt0 libpam-cgroup libpam-cgfs bridge-utils cgroupfs-mount
c) I edit the file lxc-net
vim /etc/default/lxc-net
USE_LXC_BRIDGE="true"
d) I restart lxc-net
systemctl restart lxc-net
e) Check: all green (works fine)
lxc-checkconfig
f) I apply this:
sh -c 'echo "kernel.unprivileged_userns_clone=1" > /etc/sysctl.d/80-lxc-userns.conf'
sysctl -w -p --system
g) As a non-root user I ran:
cat /etc/s*id|grep $USER
h) It returns 100000-165536, so...
usermod --add-subuids 100000-165536 lxcunpriv
usermod --add-subgids 100000-165536 lxcunpriv
i) I give some permissions on /var/lxcunpriv:
cd /var/lxcunpriv
setfacl -m u:100000:x . .local .local/share
l) I configure the usernet; bridge1 is the name of my bridge:
echo "lxcunpriv veth bridge1 10"| tee -i /etc/lxc/lxc-usernet
m) I create the dirs:
su - lxcunpriv
mkdir -p .config/lxc
n) Then:
echo \
'lxc.include = /etc/lxc/default.conf
# Subuids and subgids mapping
lxc.id_map = u 0 100000 65536
lxc.id_map = g 0 100000 65536
# "Secure" mounting
lxc.mount.auto = proc:mixed sys:ro cgroup:mixed
lxc.apparmor.profile = unconfined
# Network configuration
lxc.network.type = veth
lxc.network.link = bridge1
lxc.network.flags = up
lxc.network.hwaddr = 00:FF:xx:xx:xx:xx'>.config/lxc/default.conf
o) I edit /etc/lxc/default.conf:
lxc.network.type = veth
lxc.network.link = bridge1
p) I update .config/lxc/default.conf:
lxc-update-config -c .config/lxc/default.conf
q) I create the first container:
lxc-create --name mylinux -t download
lxc-start --name mylinux
lxc-attach --name mylinux
Now the problem: when I start the container...
lxc-start --name mylinux
lxc-start: mylinux: lxccontainer.c: wait_on_daemonized_start: 833 No such file or directory - Failed to receive the container state
lxc-start: mylinux: tools/lxc_start.c: main: 330 The container failed to start
lxc-start: mylinux: tools/lxc_start.c: main: 333 To get more details, run the container in foreground mode
lxc-start: mylinux: tools/lxc_start.c: main: 336 Additional information can be obtained by setting the --logfile and --logpriority options
Searching on forums, I found this workaround:
#!/bin/sh
printf '\n\033[42mCreating cgroup hierarchy\033[m\n\n' &&
for d in /sys/fs/cgroup/*; do
f=$(basename $d)
echo "looking at $f"
if [ "$f" = "cpuset" ]; then
echo 1 | sudo tee -a $d/cgroup.clone_children;
elif [ "$f" = "memory" ]; then
echo 1 | sudo tee -a $d/memory.use_hierarchy;
fi
sudo mkdir -p $d/$USER
sudo chown -R $USER $d/$USER
# add current process to cgroup
echo $PPID > $d/$USER/tasks
done
sh workaround.sh
gives me a "permission denied" on the line echo $PPID > $d/$USER/tasks,
but it works anyway:
lxc-start -n mylinux
echo $?
0
Now the problem.
I want the containers to start at boot; since they are unprivileged, lxc-autostart
doesn't work.
I have created the file /etc/rc.local, but it fails.
I have tried it this way:
#!/bin/bash
# Action at boot
start() {
su - lxcunpriv -c "lxc-start -n mylinux"
su - lxcunpriv -c "lxc-start -n myothercontainer"
....
}
In this case it failed with the error:
lxc-start: mylinux: lxccontainer.c: wait_on_daemonized_start: 833 No such file or directory - Failed to receive the container state
lxc-start: mylinux: tools/lxc_start.c: main: 330 The container failed to start
lxc-start: mylinux: tools/lxc_start.c: main: 333 To get more details, run the container in foreground mode
lxc-start: mylinux: tools/lxc_start.c: main: 336 Additional information can be obtained by setting the --logfile and --logpriority options
and also this, to execute the "workaround" script from rc.local:
su - lxcunpriv <<EOF
sh workaround.sh
lxc-start -n myothercontainer
EOF
In this case the workaround runs, but the lxc-start command fails with the same error:
lxc-start --name mylinux
lxc-start: mylinux: lxccontainer.c: wait_on_daemonized_start: 833 No such file or directory - Failed to receive the container state...
Of course if I do
su - lxcunpriv
sh workaround.sh
lxc-start -n mylinux
it works. Why doesn't it also work from rc.local?
|
Solution found.
I edited rc.local.
Instead of these lines:
su - lxcunpriv <<EOF
sh workaround.sh
lxc-start -n myothercontainer
EOF
the correct lines are these:
start() {
su - lxcunpriv <<EOF
/var/lxcunpriv/workaround.sh
lxc-start --name mycontainer
lxc-start --name myothercontainer
...
EOF
}
The containers start.
The problem was the word "sh" before the script, which started another subshell: the effect of the workaround script applied only to that subshell and vanished with it.
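The same effect can be seen with a small stand-in script (setvar.sh is hypothetical, just to illustrate): changes made by a script run with sh happen in a child shell and are lost, while a sourced script affects the current shell.

```shell
# setvar.sh is a hypothetical stand-in for the workaround script
cat > setvar.sh <<'EOF'
MYVAR=hello
EOF

MYVAR=
sh setvar.sh                   # runs in a child shell; MYVAR stays empty here
echo "after 'sh setvar.sh': '$MYVAR'"

. ./setvar.sh                  # runs in the current shell; MYVAR persists
echo "after '. ./setvar.sh': '$MYVAR'"
```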
| Why does this script work fine if run as a user, but fail if run from rc.local? |
1,336,906,003,000 |
I would like to monitor one process's memory / cpu usage in real time. Similar to top but targeted at only one process, preferably with a history graph of some sort.
|
On Linux, top actually supports focusing on a single process, although it naturally doesn't have a history graph:
top -p PID
This is also available on Mac OS X with a different syntax:
top -pid PID
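If you want a crude history from a script rather than an interactive view, a hedged sketch is to sample the process with ps in a loop ($$, the current shell, is a placeholder PID just so the example runs anywhere):

```shell
# Sample one process's CPU and memory a few times, one line per sample
for i in 1 2 3; do
    ps -p $$ -o pid=,%cpu=,%mem=,rss=
    sleep 1
done
```

Redirect the output to a file and you have a log you can plot later.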
| How to monitor CPU/memory usage of a single process? |
1,336,906,003,000 |
How can I immediately detect when new files are added to a folder, from within a bash script?
I would like the script to process files as soon as they are created in the folder. Are there any methods aside from scheduling a cron job that checks for new files each minute or so?
|
You should consider using inotifywait, as an example:
inotifywait -m /path -e create -e moved_to |
while read dir action file; do
echo "The file '$file' appeared in directory '$dir' via '$action'"
# do something with the file
done
In Ubuntu, inotifywait is provided by the inotify-tools package.
As of version 3.13 (current in Ubuntu 12.04) inotifywait will include the filename without the -f option. Older versions may need to be coerced.
What is important to note is that the -e option to inotifywait is the best way to do event filtering. Also, your read command can assign the positional output into multiple variables that you can choose to use or ignore. There is no need to use grep/sed/awk to preprocess the output.
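To see how that positional read splits inotifywait's output, the loop can be fed a canned line (the path and filename are placeholders), without generating real filesystem events:

```shell
# Canned event line in inotifywait's default format: "<dir> <event> <file>"
printf '/watched/dir/ CREATE hello.txt\n' |
while read -r dir action file; do
    echo "The file '$file' appeared in directory '$dir' via '$action'"
done
```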
| Tool to monitor folder for new files and run command whenever new file is detected |
1,336,906,003,000 |
I have a growing log file for which I want to display only the last 15 lines. Here is what I know I can do:
tail -n 15 -F mylogfile.txt
As the log file is filled, tail appends the last lines to the display.
I am looking for a solution that only displays the last 15 lines and get rid of the lines before the last 15 after it has been updated. Would you have an idea?
|
It might suffice to use watch:
$ watch tail -n 15 mylogfile.txt
| How to monitor only the last n lines of a log file? |
1,336,906,003,000 |
For debugging purposes I want to monitor the http requests on a network interface.
Using a naive tcpdump command line I get too much low-level information and the information I need is not very clearly represented.
Dumping the traffic via tcpdump to a file and then using wireshark has the disadvantage that it is not on-the-fly.
I imagine a tool usage like this:
$ monitorhttp -ieth0 --only-get --just-urls
2011-01-23 20:00:01 GET http://foo.example.org/blah.js
2011-01-23 20:03:01 GET http://foo.example.org/bar.html
...
I am using Linux.
|
Try tcpflow:
tcpflow -p -c -i eth0 port 80 | grep -oE '(GET|POST|HEAD) .* HTTP/1.[01]|Host: .*'
Output is like this:
GET /search?q=stack+exchange&btnI=I%27m+Feeling+Lucky HTTP/1.1
Host: www.google.com
You can obviously add additional HTTP methods to the grep statement, and use sed to combine the two lines into a full URL.
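For instance, one hedged way to do that combining, here with awk rather than sed, fed a canned sample of the output above (the real input needs live traffic):

```shell
# Pair each request line with the following Host: header to build a URL
printf 'GET /search?q=x HTTP/1.1\nHost: www.google.com\n' |
awk '/^(GET|POST|HEAD) /{method=$1; path=$2}
     /^Host: /{print method " http://" $2 path}'
```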
| On-the-fly monitoring HTTP requests on a network interface? |
1,336,906,003,000 |
I'm looking to list all ports a PID is currently listening on.
How would you recommend I get this kind of data about a process?
|
You can use ss from the iproute2 package (which is similar to netstat):
ss -l -p -n | grep "pid=1234,"
or (for older iproute2 version):
ss -l -p -n | grep ",1234,"
Replace 1234 with the PID of the program.
| List ports a process PID is listening on (preferably using iproute2 tools)? |
1,336,906,003,000 |
time is a brilliant command if you want to figure out how much CPU time a given command takes.
I am looking for something similar that can list the files being accessed by a program and its children. Either in real time or as a report afterwards.
Currently I use:
#!/bin/bash
strace -ff -e trace=file "$@" 2>&1 | perl -ne 's/^[^"]+"(([^\\"]|\\[\\"nt])*)".*/$1/ && print'
but it fails if the command to run involves sudo. It is not very intelligent (it would be nice if it could list only existing files or files that had permission problems, or group them into files that are read and files that are written). Also, strace is slow, so a faster alternative would be good.
|
I gave up and coded my own tool. To quote from its docs:
SYNOPSIS
tracefile [-adefnu] command
tracefile [-adefnu] -p pid
OPTIONS
-a List all files
-d List only dirs
-e List only existing files
-f List only files
-n List only non-existing files
-p pid Trace process id
-u List only files once
It only outputs the files so you do not need to deal with the output from strace.
https://codeberg.org/tange/tangetools/src/branch/master/tracefile
| List the files accessed by a program |
1,336,906,003,000 |
I'm getting a lot of mail in my root user's mail account. This appears to be mostly reports and errors from things like cron scripts. I'm trying to work through and solve these things, possibly even have them be piped to some sort of "dashboard" - but until then how can I have these messages go to my personal e-mail account instead?
|
Any user, including root, can forward their local email by putting the forwarding address in a file called ~/.forward. You can have multiple addresses there, all on one line and separated by comma. If you want both local delivery and forwarding, put root@localhost as one of the addresses.
The system administrator can define email aliases in the file /etc/aliases. This file contains lines like root: [email protected], /root/mailbox; the effect is the same as having [email protected], /root/mailbox in ~root/.forward. You may need to run a program such as newaliases after changing /etc/aliases.
Note that the workings of .forward and /etc/aliases depend on your MTA. Most MTAs implement the main features provided by the traditional sendmail, but check your MTA's documentation.
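A sketch of both files (the external address is a placeholder):

```
# ~root/.forward -- forward root's mail and also keep a local copy:
[email protected], root@localhost

# /etc/aliases -- the system-wide equivalent (run newaliases afterwards):
root: [email protected]
```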
| Can I change root's email address or forward it to an external address? |
1,336,906,003,000 |
Given file path, how can I determine which process creates it (and/or reads/writes to it)?
|
The lsof command (already mentioned in several answers) will tell you what process has a file open at the time you run it. lsof is available for just about every unix variant.
lsof /path/to/file
lsof won't tell you about file that were opened two microseconds ago and closed one microsecond ago. If you need to watch a particular file and react when it is accessed, you need different tools.
If you can plan a little in advance, you can put the file on a LoggedFS filesystem. LoggedFS is a FUSE stacked filesystem that logs all accesses to files in a hierarchy. The logging parameters are highly configurable. FUSE is available on all major unices. You'll want to log accesses to the directory where the file is created. Start with the provided sample configuration file and tweak it according to this guide.
loggedfs -l /path/to/log_file -c /path/to/config.xml /path/to/directory
tail -f /path/to/log_file
Many unices offer other monitoring facilities. Under Linux, you can use the relatively new audit subsystem. There isn't much literature about it (but more than about loggedfs); you can start with this tutorial or a few examples or just with the auditctl man page. Here, it should be enough to make sure the daemon is started, then run auditctl:
auditctl -w /path/to/file
(I think older systems need auditctl -a exit,always -w /path/to/file) and watch the logs in /var/log/audit/audit.log.
| How to determine which process is creating a file? [duplicate] |
1,336,906,003,000 |
$ tail -f testfile
The command is supposed to show the latest entries in the specified file in real time, right? But that's not happening. Please correct me if what I intend it to do is wrong...
I created a new file "aaa" and added a line of text and closed it. then issued this command (first line):
$ tail -f aaa
xxx
xxa
axx
The last three lines are the contents of the file aaa. While the command is still running (since I used -f), I opened the file aaa via the GUI and started adding a few more lines manually. But the terminal doesn't show the new lines added to the file.
What's wrong here? Does the tail -f command only show new entries if they are written by the system (like log files etc.)?
|
From the tail(1) man page:
With --follow (-f), tail defaults to following the file descriptor,
which means that even if a tail’ed file is renamed, tail will continue
to track its end. This default behavior is not desirable when you
really want to track the actual name of the file, not the file descrip-
tor (e.g., log rotation). Use --follow=name in that case. That causes
tail to track the named file in a way that accommodates renaming,
removal and creation.
Your text editor is renaming or deleting the original file and saving the new file under the same filename. Use -F instead.
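A small demonstration of the descriptor-following behaviour, using mv to mimic what many editors effectively do on save (file names are placeholders; run it in a scratch directory):

```shell
echo one > demo.log
tail -f demo.log > tail.out &     # follows the file DESCRIPTOR
tailpid=$!
sleep 1
mv demo.log demo.log.bak          # roughly what many editors do on save...
echo two > demo.log               # ...before writing a new file with the old name
sleep 1
kill "$tailpid"
cat tail.out                      # "one" is there; "two" never shows up
```

With tail -F (or --follow=name) the new demo.log would be picked up instead.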
| How does the "tail" command's "-f" parameter work? |
1,336,906,003,000 |
I'd like to do a health check of a service by calling a specific url on it. Feels like the simplest solution would be to use cron to do the check every minute or so. In case of errors, cron sends me an email.
I tried using curl for this, but I can't get it to output messages only on errors. If I redirect output to /dev/null, it prints out a progress report:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 5559 100 5559 0 0 100k 0 --:--:-- --:--:-- --:--:-- 106k
I tried looking through the curl options but I just can't find anything to suit the situation where you want it to be silent on success but make noise on errors.
Is there a way to make curl do what I want or is there some other tool I should be looking at?
|
What about -sSf? From the man pages:
-s/--silent
Silent or quiet mode. Do not show progress meter or error messages.
Makes Curl mute.
-S/--show-error
When used with -s it makes curl show an error message if it fails.
-f/--fail
(HTTP) Fail silently (no output at all) on server errors. This is mostly
done to better enable scripts etc to better deal with failed attempts. In
normal cases when a HTTP server fails to deliver a document, it returns
an HTML document stating so (which often also describes why and more).
This flag will prevent curl from outputting that and return error 22.
This method is not fail-safe and there are occasions where non-successful
response codes will slip through, especially when authentication is
involved (response codes 401 and 407).
For example:
curl -sSf http://example.org > /dev/null
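A hedged sketch of the matching crontab entry (schedule and URL are placeholders): since cron mails any output, and -sSf is silent on success, mail arrives only when the check fails.

```
# crontab entry: run the check every minute
* * * * * curl -sSf https://example.org/health > /dev/null
```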
| Health check of web page using curl |