1,694,871,099,000
Debian 11 here with LXDE (Xorg + lightdm). How can I change the driver used by X?

My X is using the default options; I have no /etc/X11/xorg.conf file, so I generated one with the command sudo X -configure :1 and then copied the contents of /root/xorg.conf.new to /etc/X11/xorg.conf.

After reboot I have a running X on :0 as before (ps auxww | grep Xorg returns ... /usr/lib/xorg/Xorg -s 0 :0 -seat seat0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch), but with a blank screen (no EE in the /var/log/Xorg.0.log file). If I remove that /etc/X11/xorg.conf file and reboot, I have a normal screen again.

The generated /root/xorg.conf.new is this (commented options, some FontPaths, and the Display SubSections removed for brevity):

Section "ServerLayout"
    Identifier "X.org Configured"
    Screen 0 "Screen0" 0 0
    InputDevice "Mouse0" "CorePointer"
    InputDevice "Keyboard0" "CoreKeyboard"
EndSection

Section "Files"
    ModulePath "/usr/lib/xorg/modules"
    FontPath "/usr/share/fonts/X11/misc"
    FontPath "built-ins"
EndSection

Section "Module"
    Load "glx"
EndSection

Section "InputDevice"
    Identifier "Keyboard0"
    Driver "kbd"
EndSection

Section "InputDevice"
    Identifier "Mouse0"
    Driver "mouse"
    Option "Protocol" "auto"
    Option "Device" "/dev/input/mice"
    Option "ZAxisMapping" "4 5 6 7"
EndSection

Section "Monitor"
    Identifier "Monitor0"
    VendorName "Monitor Vendor"
    ModelName "Monitor Model"
EndSection

Section "Device"
    Identifier "Card0"
    Driver "intel"
    BusID "PCI:0:2:0"
EndSection

Section "Screen"
    Identifier "Screen0"
    Device "Card0"
    Monitor "Monitor0"
    SubSection "Display"
        Viewport 0 0
        Depth 24
    EndSubSection
EndSection

I noticed that without an /etc/X11/xorg.conf file the output names returned by xrandr have a dash (DP-1, DP-2, etc.), while with that file xrandr returns those names without a dash: DP1, DP2...
The only change I want to make is to replace Driver "intel" with Driver "fbdev". I already tried removing the xserver-xorg-video-intel package, but that would break my system, removing the WM, DE, and a lot of other apps.
Some confusion on my side, so let me make the appropriate corrections to give, I think, a reasonably clear explanation. I'm working on several machines trying to make that driver change, and I mixed up the results from the machine with the radeon driver and the machine with the intel driver, my bad. The problem is that the intel driver (xserver-xorg-video-intel, not radeon, fixed already) does not generate output names with the dash (it gives DP1) when the /etc/X11/xorg.conf file is used, but without that file those names are generated with the dashes (DP-1). Since my screens are configured by a set of xrandr scripts, generated when I was using Xorg without an /etc/X11/xorg.conf file, those scripts could not set up my screens.
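As a side note for anyone attempting the same change: Xorg also merges snippet files from /etc/X11/xorg.conf.d/ into its otherwise autodetected configuration, so a minimal drop-in that overrides only the Driver line may avoid freezing the rest of the setup the way a full generated xorg.conf does. A sketch (the filename 20-fbdev.conf is just a convention, not something from the question):

```
Section "Device"
    Identifier "Card0"
    Driver "fbdev"
EndSection
```

Saved as, e.g., /etc/X11/xorg.conf.d/20-fbdev.conf, this leaves autodetection in place for everything the file does not mention.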
Change default xorg.conf video driver
I'm using Debian 8.3 jessie stable with openbox, fbpanel, and lightdm. When I'm in an Openbox session, I cannot find an option to switch user without logging out the current user. How can I switch user from Openbox without logging out of the current session?
With lightdm it can be done with the following command: /usr/bin/dm-tool switch-to-greeter
openbox - How to switch user without logout?
I have a system with multiple Desktop Environments installed (Ubuntu 14.04 with Unity and Xfce). I would like to configure (with a non-interactive script) a particular DE for a particular user. How is this controlled? Would it be the same for e.g. KDE?
I figured it out. I'm writing the LightDM configuration when configuring autologin anyway, and that's where I'm specifying the user, so the right thing is to specify the system default at the same time: wiki.ubuntu.com/LightDM#Changing_the_Default_Session

However, when this bug is fixed: https://bugs.launchpad.net/lightdm/+bug/1371710 I'll need a better way to set a per-user default, since lightdm won't be reconfigured/restarted for each user autologin.

More data: with LightDM, a desktop preference will be looked up in /var/lib/AccountsService/users/$USER (no good docs, but some tantalizing details here), and if not found it will be looked up in $HOME/.dmrc (described here). When a user logs in and chooses a DE, both of these locations are populated. So a script could provision either of these locations (either using the DBus interface or writing to the AccountsService file directly) to set a suitable default for the user.
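A minimal sketch of the second option, provisioning $HOME/.dmrc non-interactively. Here DEMO_HOME stands in for the target user's home directory, and "xfce" is an example session name that must match a .desktop file under /usr/share/xsessions:

```shell
# Sketch: set a user's default desktop session by writing their .dmrc.
# DEMO_HOME is a stand-in for the target user's home directory;
# "xfce" is an example session name.
DEMO_HOME=$(mktemp -d)
cat > "$DEMO_HOME/.dmrc" <<'EOF'
[Desktop]
Session=xfce
EOF
# The file should be owned by the user, or the DM may ignore it.
chown "$(id -un)" "$DEMO_HOME/.dmrc"
```

In a real provisioning script, DEMO_HOME would be /home/$USER and the chown would target that user.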
What configuration determines which Desktop Environment to run?
I'm on a Fedora Server 37 ISO, so no DM/DE is pre-installed. I install wayland, lightdm, and lightdm-gtk-greeter, then edit lightdm's config to use lightdm-gtk-greeter: on line 102 I change greeter-session=example-gtk-gnome to greeter-session=lightdm-gtk-greeter, and on line 107 I change user-session=default to user-session=qtile. I try to start graphical.target, and it fails. From sudo tail -n100 /var/log/lightdm/lightdm.log, here's a chunk of the relevant part. It tries to start an X server, even though I don't have one installed and want to use Wayland instead. I can't find a config option to tell it to use Wayland either. Did I miss it, or is there another way to do this?

[+0.00s] DEBUG: Using D-Bus name org.freedesktop.DisplayManager
[+0.00s] DEBUG: _g_io_module_get_default: Found default implementation local (GLocalVfs) for ‘gio-vfs’
[+0.00s] DEBUG: Using cross-namespace EXTERNAL authentication (this will deadlock if server is GDBus < 2.73.3)
[+0.00s] DEBUG: Monitoring logind for seats
[+0.00s] DEBUG: New seat added from logind: seat0
[+0.00s] DEBUG: Seat seat0: Loading properties from config section Seat:*
[+0.00s] DEBUG: Seat seat0 has property CanMultiSession=no
[+0.00s] DEBUG: Seat seat0: Starting
[+0.00s] DEBUG: Seat seat0: Creating greeter session
[+0.00s] DEBUG: Seat seat0: Creating display server of type x
[+0.00s] DEBUG: Using VT 1
[+0.00s] DEBUG: Seat seat0: Starting local X display on VT 1
[+0.00s] DEBUG: XServer 0: Logging to /var/log/lightdm/x-0.log
[+0.00s] DEBUG: XServer 0: Can't launch X server X -core -noreset, not found in path
[+0.00s] DEBUG: XServer 0: X server stopped
[+0.00s] DEBUG: Releasing VT 1
[+0.00s] DEBUG: Seat seat0: Display server stopped
[+0.00s] DEBUG: Seat seat0: Can't create display server for greeter
[+0.00s] DEBUG: Seat seat0: Session stopped
[+0.00s] DEBUG: Seat seat0: Stopping display server, no sessions require it
[+0.00s] DEBUG: Seat seat0: Stopping
[+0.00s] DEBUG: Seat seat0: Stopped
[+0.00s] DEBUG: Failed to start seat: seat0

Edit: This GitHub issue is someone's log that gets a lot farther than I do. In my log above you see Seat seat0: Creating display server of type x, but theirs is type wayland. This is the main thing I'd like to figure out. However, later in their log they still call /etc/lightdm/Xsession ... too. Is there a way to install a tiny subset of X11 to make this work, or do I need an entire X server package along with Wayland to get LightDM running?
"or do I need an entire x server package along with wayland to get LightDM running?" LightDM requires the full-featured Xorg server. You may want to install GDM, the only display manager which works in pure Wayland mode as of April 2023.

Addendum: Fedora 38 contains a Git version of SDDM which can work in Wayland mode as well. There's also an sddm-git Wayland-enabled package in Arch.
How to switch lightdm-gtk-greeter to use wayland only? (x11 not installed)
Update 2: The behavior only happens when both monitors are enabled and when lightdm-gtk-greeter is active. Disconnecting or disabling one (with xrandr) results in expected behavior.

Update: It seems to have nothing to do with light-locker, actually. Setting dpms with a display-setup script for LightDM exhibits the same behavior (sleeping and then immediately waking) before i3 is initialized and with light-locker completely removed and purged. Disabling dpms and setting the screensaver to noblank with xset results in the screensaver working as expected, but the monitors stay powered on. It seems that when the screens blank, some event is triggered, as @WayneWerner suggested below, but I'm not sure how to identify what that event would be.

I am using i3 on Debian testing/sid and have lightdm and light-locker configured. I am not using any additional power manager like tpu or xfce4-power-manager. I recently upgraded my GPU and went from a proprietary nVidia driver to AMDGPU. An unexpected side effect has occurred: now when xset initiates a lock with light-locker, lightdm is brought up, the monitors go to sleep, and 2-3 seconds afterward the monitors wake up with no additional input. When the configured lightdm-gtk-greeter screensaver-timeout is triggered, the process occurs again: monitors sleep, then immediately wake up, and this continues indefinitely. I would like the monitors to sleep and stay asleep until awoken, like they did with my nVidia card. I'm sure I have something configured incorrectly, I'm just not sure where. The GPU and graphics driver are the only things that have changed in this situation. Here are the contents of my current lightdm config files.

/etc/lightdm/lightdm.conf:

[LightDM]
[Seat:*]
greeter-hide-users=false
[XDMCPServer]
[VNCServer]

/etc/lightdm/lightdm-gtk-greeter.conf:

[greeter]
theme-name = Breeze-Dark
active-monitor = DisplayPort-1
screensaver-timeout = 10
More of a mitigation than a solution, but one I'm happy with. Disabling output on the non-primary monitor on LightDM initialization and enabling dpms results in a successful monitor timeout that remains asleep. Configured as follows:

LightDM config:

[LightDM]
[Seat:*]
greeter-hide-users=false
display-setup-script=/etc/lightdm/display-config
[XDMCPServer]
[VNCServer]

/etc/lightdm/display-config:

#!/bin/sh
xrandr --output HDMI-A-0 --off --output DisplayPort-1 --primary --mode 1920x1080 --rate 75
xset dpms 10 10 10 s 10

These settings are overwritten in the user context with the following script, run from .xinitrc:

#!/bin/sh
xrandr --output HDMI-A-0 --mode 1920x1080 --rate 75 --output DisplayPort-1 --primary --mode 1920x1080 --rate 75 --left-of HDMI-A-0
xset dpms 600 600 600 s 600

It's messier than I'd like, but fine for now.
LightDM displays wake immediately after sleep
I am using Debian desktop on a Lichee Pi, and I am new to this platform (Linux). After powering on, the graphical login screen appears, but I don't have a real or virtual keyboard. Can I enter the password from the command line? Or can I skip this screen and go straight to the desktop?
What you want is called autologin, and what you need is to update lightdm's configuration file, /etc/lightdm/lightdm.conf. Edit this file and search for the following lines:

[Seat:*]
#autologin-user=
#autologin-user-timeout=0

Uncomment the two lines (remove the #) and add your user name at the end of the autologin-user= line. There are a couple of other settings you might want to consider.
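After editing, the section would look like the fragment below ("alice" is a placeholder user name, not one from the question):

```
[Seat:*]
autologin-user=alice
autologin-user-timeout=0
```

With a timeout of 0, lightdm logs the user in immediately instead of showing the greeter first.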
How to enter password without Keyboard on graphical login screen?
The interesting part is that I found I was able to log in with xfce, but not with either lightdm or gdm. (I am using Ubuntu 16.04.)
Did you do something before this problem appeared? Maybe you ran a graphical program with sudo, which can leave ~/.Xauthority owned by root. Log in on a console with CTRL+ALT+F1 and type: sudo chown yourusername ~/.Xauthority
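A quick way to confirm this diagnosis is to check who owns the file. The snippet below is a sketch that exercises the check against a throwaway file; XAUTH stands in for ~/.Xauthority, which on an affected system would show root as the owner:

```shell
# Sketch: check the owner of the X authority file. On an affected
# system, ~/.Xauthority would be owned by root instead of the user.
XAUTH=$(mktemp)   # stand-in for ~/.Xauthority in this demo
owner=$(stat -c '%U' "$XAUTH")
echo "owner: $owner"
if [ "$owner" != "$(id -un)" ]; then
    echo "owned by $owner - run: sudo chown $(id -un) $XAUTH"
fi
```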
Stuck in a login-loop in lightdm and gdm, but not with xfce and other systems
I'm trying to define Suru++ as the cursor theme for the LightDM GTK+ greeter on Debian stretch. This is supposedly done by adding Suru++ to the list of alternatives for x-cursor-theme and choosing it as the alternative:

user@debian:~$ sudo update-alternatives --install /usr/share/icons/default/index.theme x-cursor-theme /usr/share/icons/Suru++/cursor.theme 0
user@debian:~$ sudo update-alternatives --config x-cursor-theme
There are 4 choices for the alternative x-cursor-theme (providing /usr/share/icons/default/index.theme).

  Selection    Path                                     Priority   Status
------------------------------------------------------------
  0            /etc/X11/cursors/breeze_cursors.theme     102       auto mode
  1            /etc/X11/cursors/Breeze_Snow.theme         41       manual mode
  2            /etc/X11/cursors/breeze_cursors.theme     102       manual mode
  3            /usr/share/icons/Adwaita/cursor.theme      90       manual mode
* 4            /usr/share/icons/Suru++/cursor.theme        0       manual mode

Press <enter> to keep the current choice[*], or type selection number:

If I pick any alternative in that list other than Suru++, the cursor used by the LightDM GTK+ greeter is the one from the selected alternative. However, when I pick Suru++ as the alternative in the list, the cursor used by the LightDM GTK+ greeter is from the Adwaita theme. This behaviour is weird to me because Suru doesn't inherit anything from Adwaita as far as I can see. In fact, the contents of /usr/share/icons/Suru++/cursor.theme are very simple:

[Icon Theme]
Name = Suru
Comment = A Suru-like cursor designed by Sam Hewitt
Inherits = Suru

How should I make the LightDM GTK+ greeter use this particular cursor theme?
The problem was that the value of the Inherits option in cursor.theme didn't match the name of the directory containing the theme. Setting Inherits to Suru++ instead of Suru solved the problem.
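With that fix applied, /usr/share/icons/Suru++/cursor.theme would read as follows (reconstructed from the file shown in the question, with only the Inherits value changed so it matches the theme's directory name):

```
[Icon Theme]
Name = Suru
Comment = A Suru-like cursor designed by Sam Hewitt
Inherits = Suru++
```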
LightDM GTK+ greeter doesn't use the right cursor theme
I've set up Winbind & Kerberos on my CentOS 7 server to allow network users to log in. The network users can log in fine through SSH, but not through the display manager; I've experienced the same issue with both LightDM and GDM. Local users are able to log in just fine. For the network users, it will accept their password but kick them back to the login screen. I have been scratching my head over this all day, tweaking PAM settings to see if I can get it to work. I have also disabled SELinux and rebooted the server to rule out that possibility. Does anyone know what could be wrong here? Here are the logs for a network user login.

System logs:

Jul 03 16:15:01 iisfyblabetl001.incite.local lightdm[10471]: pam_unix(lightdm:auth): authentication failure; logname= uid=0 euid=0 tty=:0 ruser= rhost= user=mmoyles
Jul 03 16:15:01 iisfyblabetl001.incite.local lightdm[10471]: pam_krb5[10471]: TGT verified using key for 'host/[email protected]'
Jul 03 16:15:01 iisfyblabetl001.incite.local lightdm[10471]: pam_krb5[10471]: authentication succeeds for 'mmoyles' ([email protected])
Jul 03 16:15:01 iisfyblabetl001.incite.local lightdm[10471]: pam_winbind(lightdm:account): user 'mmoyles' granted access
Jul 03 16:15:01 iisfyblabetl001.incite.local lightdm[9639]: pam_unix(lightdm-greeter:session): session closed for user lightdm
Jul 03 16:15:01 iisfyblabetl001.incite.local systemd-logind[679]: New session 29 of user mmoyles.
-- Subject: A new session 29 has been created for user mmoyles
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat
--
-- A new session with the ID 29 has been created for the user mmoyles.
--
-- The leading process of the session is 10471.
Jul 03 16:15:01 iisfyblabetl001.incite.local systemd[1]: Started Session 29 of user mmoyles.
-- Subject: Unit session-29.scope has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit session-29.scope has finished starting up.
--
-- The start-up result is done.
Jul 03 16:15:01 iisfyblabetl001.incite.local systemd[1]: Starting Session 29 of user mmoyles.
-- Subject: Unit session-29.scope has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit session-29.scope has begun starting up.
Jul 03 16:15:01 iisfyblabetl001.incite.local lightdm[10471]: pam_unix(lightdm:session): session opened for user mmoyles by (uid=0)
Jul 03 16:15:01 iisfyblabetl001.incite.local lightdm[10471]: pam_unix(lightdm:session): session closed for user mmoyles
Jul 03 16:15:01 iisfyblabetl001.incite.local systemd-logind[679]: Removed session 29.
-- Subject: Session 29 has been terminated
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat
--
-- A session with the ID 29 has been terminated.
Jul 03 16:15:01 iisfyblabetl001.incite.local lightdm[10517]: pam_unix(lightdm-greeter:session): session opened for user lightdm by (uid=0)
Jul 03 16:15:01 iisfyblabetl001.incite.local systemd-logind[679]: New session c19 of user lightdm.
-- Subject: A new session c19 has been created for user lightdm
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat
--
-- A new session with the ID c19 has been created for the user lightdm.
lightdm.log:

[+1215.10s] DEBUG: Seat: Greeter stopped, running session
[+1215.10s] DEBUG: Registering session with bus path /org/freedesktop/DisplayManager/Session6
[+1215.10s] DEBUG: Session pid=10471: Running command /etc/X11/xinit/Xsession mate-session
[+1215.10s] DEBUG: Creating shared data directory /var/lib/lightdm-data/mmoyles
[+1215.10s] DEBUG: Session pid=10471: Logging to .xsession-errors
[+1215.14s] DEBUG: Activating VT 1
[+1215.14s] DEBUG: Activating login1 session 29
[+1215.17s] DEBUG: Session pid=10471: Exited with return value 0
[+1215.17s] DEBUG: Seat: Session stopped
[+1215.17s] DEBUG: Seat: Stopping display server, no sessions require it
[+1215.17s] DEBUG: Sending signal 15 to process 9627
[+1215.24s] DEBUG: Process 9627 exited with return value 0
[+1215.24s] DEBUG: DisplayServer x-0: X server stopped
[+1215.24s] DEBUG: Releasing VT 1
[+1215.24s] DEBUG: DisplayServer x-0: Removing X server authority /var/run/lightdm/root/:0
[+1215.24s] DEBUG: Seat: Display server stopped
[+1215.24s] DEBUG: Seat: Active display server stopped, starting greeter
[+1215.24s] DEBUG: Seat: Creating greeter session
[+1215.24s] DEBUG: Seat: Creating display server of type x
[+1215.24s] DEBUG: Using VT 1
[+1215.24s] DEBUG: Seat: Starting local X display on VT 1
[+1215.24s] DEBUG: DisplayServer x-0: Logging to /var/log/lightdm/x-0.log
[+1215.24s] DEBUG: DisplayServer x-0: Writing X server authority to /var/run/lightdm/root/:0
[+1215.24s] DEBUG: DisplayServer x-0: Launching X Server
[+1215.24s] DEBUG: Launching process 10509: /usr/bin/X -background none :0 -seat seat0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt1 -novtswitch
[+1215.24s] DEBUG: DisplayServer x-0: Waiting for ready signal from X server :0
[+1215.42s] DEBUG: Got signal 10 from process 10509
[+1215.43s] DEBUG: DisplayServer x-0: Got signal from X server :0
[+1215.43s] DEBUG: DisplayServer x-0: Connecting to XServer :0
[+1215.43s] DEBUG: Seat: Display server ready, starting session authentication
[+1215.43s] DEBUG: Session pid=10517: Started with service 'lightdm-greeter', username 'lightdm'
[+1215.44s] DEBUG: Session pid=10517: Authentication complete with return value 0: Success
[+1215.44s] DEBUG: Seat: Session authenticated, running command
[+1215.44s] DEBUG: Session pid=10517: Running command /usr/sbin/lightdm-gtk-greeter
[+1215.44s] DEBUG: Creating shared data directory /var/lib/lightdm-data/lightdm
[+1215.44s] DEBUG: Session pid=10517: Logging to /var/log/lightdm/x-0-greeter.log
[+1215.44s] DEBUG: Activating VT 1
[+1215.44s] DEBUG: Activating login1 session c19
[+1215.46s] DEBUG: Session pid=10517: Greeter connected version=1.10.6
[+1215.69s] DEBUG: Session pid=10517: Greeter start authentication
[+1215.69s] DEBUG: Session pid=10535: Started with service 'lightdm', username '(null)'
[+1215.70s] DEBUG: Session pid=10535: Got 1 message(s) from PAM
[+1215.70s] DEBUG: Session pid=10517: Prompt greeter with 1 message(s)
[+1215.73s] DEBUG: User /org/freedesktop/Accounts/User1000 changed
[+1215.74s] DEBUG: User /org/freedesktop/Accounts/User11092 changed
[+1215.74s] DEBUG: User /org/freedesktop/Accounts/User1001 changed

pam.d/system-auth:

#%PAM-1.0
# This file is auto-generated.
# User changes will be destroyed the next time authconfig is run.
auth        required      pam_env.so
auth        sufficient    pam_unix.so nullok try_first_pass
#auth       requisite     pam_succeed_if.so uid >= 1000 quiet_success
auth        sufficient    pam_krb5.so use_first_pass
auth        sufficient    pam_winbind.so krb5_auth krb5_ccache_type=KEYRING use_first_pass
auth        required      pam_deny.so

account     required      pam_unix.so broken_shadow
account     sufficient    pam_localuser.so
#account    sufficient    pam_succeed_if.so uid < 1000 quiet
account     [default=bad success=ok user_unknown=ignore] pam_krb5.so
account     [default=bad success=ok user_unknown=ignore] pam_winbind.so krb5_auth krb5_ccache_type=KEYRING
account     required      pam_permit.so

password    requisite     pam_pwquality.so try_first_pass local_users_only retry=3 authtok_type=
password    sufficient    pam_unix.so sha512 shadow nullok try_first_pass use_authtok
password    sufficient    pam_krb5.so use_authtok
password    sufficient    pam_winbind.so krb5_auth krb5_ccache_type=KEYRING use_authtok
password    required      pam_deny.so

session     optional      pam_keyinit.so revoke
session     required      pam_limits.so
-session    optional      pam_systemd.so
session     optional      pam_oddjob_mkhomedir.so umask=0077
session     [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
session     required      pam_unix.so
session     optional      pam_krb5.so
session     optional      pam_winbind.so krb5_auth krb5_ccache_type=KEYRING

pam.d/lightdm:

#%PAM-1.0
auth        [success=done ignore=ignore default=bad] pam_selinux_permit.so
auth        required      pam_env.so
auth        substack      system-auth
-auth       optional      pam_gnome_keyring.so
-auth       optional      pam_kwallet5.so
-auth       optional      pam_kwallet.so
auth        include       postlogin
account     required      pam_nologin.so
account     include       system-auth
password    include       system-auth
session     optional      pam_selinux.so close
session     optional      pam_loginuid.so
session     optional      pam_console.so
-session    optional      pam_ck_connector.so
session     optional      pam_selinux.so open
session     optional      pam_keyinit.so force revoke
session     optional      pam_namespace.so
-session    optional      pam_gnome_keyring.so auto_start
-session    optional      pam_kwallet5.so
-session    optional      pam_kwallet.so
session     include       system-auth
session     optional      pam_lastlog.so silent
session     include       postlogin

The .xsession-errors file in the network user's home directory is empty, and it does appear to create an .Xauthority file in the home directory.
Well, I feel dumb... After battling with this problem for 3 days, the reason turned out to be that I had a .profile in /etc/skel that was setting SHELL=/bin/bash. pam_mkhomedir was adding this file for new domain users, but it wasn't on my local account.
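A way to spot this kind of leftover is to grep the skeleton directory for hard-coded environment settings. The sketch below recreates the situation in a temporary directory; SKEL stands in for /etc/skel:

```shell
# Sketch: scan skeleton dotfiles for hard-coded settings like SHELL=,
# which pam_mkhomedir/pam_oddjob_mkhomedir copies into every new
# domain user's home directory.
SKEL=$(mktemp -d)   # stand-in for /etc/skel in this demo
printf 'SHELL=/bin/bash\n' > "$SKEL/.profile"
grep -rn 'SHELL=' "$SKEL"
```

On the real system, the same grep against /etc/skel (and an affected user's home) would have surfaced the offending line.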
AD Users Cannot Login Through GDM/LightDM
I cannot change my login screen background to the desktop background in my elementary OS, which is based on Ubuntu. I have decrypted my home folder according to this link: http://www.howtogeek.com/116179/how-to-disable-home-folder-encryption-after-installing-ubuntu/ and when I ran the commands to check for encryption I got the following result:

~$ ls -A /home
lost+found ramiz
[note: no .ecryptfs folder there!!]
~$ sudo blkid | grep swap
/dev/sda5: UUID="00ec9e13-4cfe-4f73-905a-fef05d73caa2" TYPE="swap"

I understand this to mean that neither my home nor my swap partition is encrypted. Still, I cannot change my login screen wallpaper. I have tried the simple version given on the official website, i.e.:

1) Open "Files"
2) In Plank, right click on "Files" and click "Open new window as admin"
3) Insert your password
4) In the new ADMIN window, go to /usr/share/backgrounds
5) Paste your new image file there (make sure it's a JPEG file with at least 1920x1080 resolution)
6) Go to "System Settings", "Desktop", and your new image file should be there
7) Click on it
8) Log out

This wasn't helpful, so I went for the decryption, as I read that the reason I cannot change my login screen background to the wallpaper I downloaded is that my home folder is encrypted. (I read that here: http://elementaryos.org/answers/login-screen-isnt-the-wallpaper-i-want-i-also-want-to-delete-all-the-default-wallpapers-1 ) Please help me. I installed elementary OS for a friend of mine recently and she doesn't have this problem. I could change my login screen before (have tried only once!) but cannot anymore. I always thought that the lock icon beside my partition in the GParted app means it's encrypted. The lock icon is there. Please help me!!
Got it!

1) Copy all the pics to /usr/share/backgrounds
2) Open a terminal and change directory to that folder: cd /usr/share/backgrounds
3) For every image_name.jpg that you just added, run: sudo chmod a+rw image_name.jpg
4) Exit the terminal and check System Settings -> Desktop. Your custom wallpapers will be available there. Selecting one of them also changed the login screen background in my lightdm.
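Step 3 can be done for all the images at once. The sketch below runs against a temporary directory (BG_DIR stands in for /usr/share/backgrounds); read access alone is what the greeter, running as its own user, actually needs, though the step above grants write as well:

```shell
# Sketch: make every wallpaper world-readable so the greeter, which
# runs as its own unprivileged user, can load them.
BG_DIR=$(mktemp -d)   # stand-in for /usr/share/backgrounds in this demo
touch "$BG_DIR/custom.jpg" "$BG_DIR/other.jpg"
chmod a+r "$BG_DIR"/*.jpg
ls -l "$BG_DIR"
```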
cannot change my login screen background
I have elementary OS, and after updating my kernel to 3.11 I get a black screen after boot. If I press Ctrl+Alt+F1 to go to a command prompt and then do sudo service lightdm restart, lightdm starts correctly. Why is it not starting at first? I had trouble with my X stack and drivers before, but if that were the problem, I guess I shouldn't be able to start lightdm even from the terminal.

lightdm.log:

[+0.00s] DEBUG: Starting Light Display Manager 1.2.3, UID=0 PID=3650
[+0.00s] DEBUG: Loading configuration from /etc/lightdm/lightdm.conf
[+0.00s] DEBUG: Using D-Bus name org.freedesktop.DisplayManager
[+0.00s] DEBUG: Registered seat module xlocal
[+0.00s] DEBUG: Registered seat module xremote
[+0.00s] DEBUG: Adding default seat
[+0.00s] DEBUG: Starting seat
[+0.00s] DEBUG: Starting new display for greeter
[+0.00s] DEBUG: Starting local X display
[+0.00s] DEBUG: Using VT 7
[+0.00s] DEBUG: Activating VT 7
[+0.01s] DEBUG: Logging to /var/log/lightdm/x-0.log
[+0.01s] DEBUG: Writing X server authority to /var/run/lightdm/root/:0
[+0.01s] DEBUG: Launching X Server
[+0.01s] DEBUG: Launching process 3659: /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch
[+0.01s] DEBUG: Waiting for ready signal from X server :0
[+0.01s] DEBUG: Acquired bus name org.freedesktop.DisplayManager
[+0.01s] DEBUG: Registering seat with bus path /org/freedesktop/DisplayManager/Seat0
[+1.01s] DEBUG: Got signal 10 from process 3659
[+1.01s] DEBUG: Got signal from X server :0
[+1.01s] DEBUG: Connecting to XServer :0
[+1.01s] DEBUG: Starting greeter
[+1.01s] DEBUG: Started session 3747 with service 'lightdm', username 'lightdm'
[+1.05s] DEBUG: Session 3747 authentication complete with return value 0: Success
[+1.05s] DEBUG: Greeter authorized
[+1.05s] DEBUG: Logging to /var/log/lightdm/x-0-greeter.log
[+1.05s] DEBUG: Session 3747 running command /usr/lib/lightdm/lightdm-greeter-session /usr/share/xgreeters/pantheon-greeter
[+1.70s] DEBUG: Greeter connected version=1.2.3
[+1.70s] DEBUG: Greeter connected, display is ready
[+1.70s] DEBUG: New display ready, switching to it
[+1.70s] DEBUG: Activating VT 7
[+5.20s] DEBUG: Greeter start authentication for bruno
[+5.20s] DEBUG: Started session 4697 with service 'lightdm', username 'bruno'
[+5.20s] DEBUG: Session 4697 got 1 message(s) from PAM
[+5.20s] DEBUG: Prompt greeter with 1 message(s)
[+5.23s] DEBUG: Continue authentication
[+5.29s] DEBUG: Session 4697 authentication complete with return value 0: Success
[+5.29s] DEBUG: Authenticate result for user bruno: Success
[+5.34s] DEBUG: User bruno authorized
[+5.36s] DEBUG: Greeter requests session pantheon
[+5.36s] DEBUG: Using session pantheon
[+5.36s] DEBUG: Stopping greeter
[+5.36s] DEBUG: Session 3747: Sending SIGTERM
[+5.41s] DEBUG: Session 3747 exited with return value 0
[+5.41s] DEBUG: Greeter quit
[+5.43s] DEBUG: Dropping privileges to uid 1000
[+5.43s] DEBUG: Restoring privileges
[+5.45s] DEBUG: Dropping privileges to uid 1000
[+5.46s] DEBUG: Writing /home/bruno/.dmrc
[+5.49s] DEBUG: Restoring privileges
[+5.54s] DEBUG: Starting session pantheon as user bruno
[+5.54s] DEBUG: Session 4697 running command /usr/sbin/lightdm-session gnome-session --session=pantheon
[+5.57s] DEBUG: Registering session with bus path /org/freedesktop/DisplayManager/Session0
[+5.57s] DEBUG: Greeter closed communication channel

boot.log:

* Starting RPC portmapper replacement [ OK ]
* Starting Start this job to wait until rpcbind is started or fails to start [ OK ]
* Stopping Start this job to wait until rpcbind is started or fails to start [ OK ]
* Starting mDNS/DNS-SD daemon [ OK ]
* Stopping rpcsec_gss daemon [ OK ]
* Starting NSM status monitor [ OK ]
* Starting configure network device [ OK ]
* Stopping Failsafe Boot Delay [ OK ]
* Starting System V initialisation compatibility [ OK ]
* Starting modem connection manager [ OK ]
* Starting configure network device security [ OK ]
Skipping profile in /etc/apparmor.d/disable: usr.bin.firefox
Skipping profile in /etc/apparmor.d/disable: usr.sbin.rsyslogd
* Starting AppArmor profiles [ OK ]
* Starting CUPS printing spooler/server [ OK ]
* Stopping System V initialisation compatibility [ OK ]
* Starting network connection manager [ OK ]
* Starting System V runlevel compatibility [ OK ]
* Starting save kernel messages [ OK ]
* Starting anac(h)ronistic cron [ OK ]
* Starting regular background program processing daemon [ OK ]
* Starting deferred execution scheduler [ OK ]
* Starting ACPI daemon [ OK ]
* Starting CPU interrupts balancing daemon [ OK ]
* Stopping anac(h)ronistic cron [ OK ]
* Starting LightDM Display Manager [ OK ]
* Stopping Send an event to indicate plymouth is up [ OK ]
* Starting configure network device security [ OK ]
* Starting configure network device [ OK ]
* Starting KVM [ OK ]
* Stopping save kernel messages [ OK ]
It was a problem with the drivers. It is related to this: ElementaryOS Gala using more than 100% CPU constantly. After fixing that issue, everything works fine.
lightdm fails to start properly on boot (works from terminal)
osinfo:

cat /etc/*-release
DISTRIB_ID=neon
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="KDE neon User Edition 5.10"
NAME="KDE neon"
VERSION="5.10"
ID=neon
ID_LIKE="ubuntu debian"
PRETTY_NAME="KDE neon User Edition 5.10"
VERSION_ID="16.04"
HOME_URL="http://neon.kde.org/"
SUPPORT_URL="http://neon.kde.org/"
BUG_REPORT_URL="http://bugs.kde.org/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial

My problem: whenever I do apt-get update or install or whatever, I get the same thing (maybe slightly different). I suspect it has something to do with lightdm or sddm (I don't really know what they do), so I tried to purge and reinstall them. I'm afraid I made it a bit worse, but not too bad (everything else on the OS still seems to be working). I wasn't fooling around with anything when this first occurred.

Error:

[sudo] password for alex:
Reading package lists... Done
Building dependency tree
Reading state information... Done
Entering ResolveByKeep
50% Calculating upgrade... Done
The following packages will be upgraded:
  flashplugin-installer
1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
13 not fully installed or removed.
Need to get 6.834 B of archives.
After this operation, 0 B of additional disk space will be used.
Do you want to continue? [Y/n] Y
Get:1 http://be.archive.ubuntu.com/ubuntu xenial-updates/multiverse amd64 flashplugin-installer amd64 26.0.0.131ubuntu0.16.04.1 [6.834 B]
Fetched 6.834 B in 0s (39,0 kB/s)
Preconfiguring packages ...
(Reading database ... 191730 files and directories currently installed.)
Preparing to unpack .../flashplugin-installer_26.0.0.131ubuntu0.16.04.1_amd64.deb ...
Unpacking flashplugin-installer (26.0.0.131ubuntu0.16.04.1) over (26.0.0.126ubuntu0.16.04.1) ...
Processing triggers for update-notifier-common (3.168.4) ...
flashplugin-installer: processing...
flashplugin-installer: downloading http://archive.canonical.com/pool/partner/a/adobe-flashplugin/adobe-flashplugin_20170616.1.orig.tar.gz
Get:1 http://archive.canonical.com/pool/partner/a/adobe-flashplugin/adobe-flashplugin_20170616.1.orig.tar.gz [30,4 MB]
Fetched 30,4 MB in 8s (3.494 kB/s)
W: Can't drop privileges for downloading as file '/var/lib/update-notifier/package-data-downloads/partial/adobe-flashplugin_20170616.1.orig.tar.gz' couldn't be accessed by user '_apt'. - pkgAcquire::Run (13: Permission denied)
Installing from local file /var/lib/update-notifier/package-data-downloads/partial/adobe-flashplugin_20170616.1.orig.tar.gz
Flash Plugin installed.
Setting up lightdm (1.18.3-0ubuntu1.1) ...
insserv: warning: script 'K07smfpd' missing LSB tags and overrides
insserv: warning: script 'smfpd' missing LSB tags and overrides
insserv: There is a loop at service plymouth if started
insserv: There is a loop between service plymouth and procps if started
insserv: loop involving service procps at depth 2
insserv: loop involving service udev at depth 1
insserv: Starting smfpd depends on plymouth and therefore on system facility `$all' which can not be true!
insserv: Max recursions depth 99 reached
insserv: loop involving service bluetooth at depth 1
insserv: Starting smfpd depends on plymouth and therefore on system facility `$all' which can not be true!
insserv: There is a loop between service smfpd and hwclock if started
insserv: loop involving service hwclock at depth 1
insserv: There is a loop at service smfpd if started
insserv: Starting smfpd depends on plymouth and therefore on system facility `$all' which can not be true!
insserv: loop involving service networking at depth 4
insserv: loop involving service checkroot at depth 4
insserv: loop involving service mountdevsubfs at depth 2
insserv: Starting smfpd depends on plymouth and therefore on system facility `$all' which can not be true!
insserv: There is a loop between service plymouth and urandom if started
insserv: loop involving service urandom at depth 4
insserv: Starting smfpd depends on plymouth and therefore on system facility `$all' which can not be true!
insserv: exiting now without changing boot order!
update-rc.d: error: insserv rejected the script header
dpkg: error processing package lightdm (--configure):
 subprocess installed post-installation script returned error exit status 1
Setting up grub-common (2.02~beta2-36ubuntu3.11) ...
update-rc.d: warning: start and stop actions are no longer supported; falling back to defaults
[the same insserv warnings and loop messages as above repeat here]
insserv: exiting now without changing boot order!
update-rc.d: error: insserv rejected the script header
dpkg: error processing package grub-common (--configure):
 subprocess installed post-installation script returned error exit status 1
dpkg: dependency problems prevent configuration of grub2-common:
 grub2-common depends on grub-common (= 2.02~beta2-36ubuntu3.11); however:
  Package grub-common is not configured yet.
dpkg: error processing package grub2-common (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of grub-efi-amd64-bin:
 grub-efi-amd64-bin depends on grub-common (= 2.02~beta2-36ubuntu3.11); however:
  Package grub-common is not configured yet.
dpkg: error processing package grub-efi-amd64-bin (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of grub-efi-amd64:
 grub-efi-amd64 depends on grub-common (= 2.02~beta2-36ubuntu3.11); however:
  Package grub-common is not configured yet.
 grub-efi-amd64 depends on grub2-common (= 2.02~beta2-36ubuntu3.11); however:
  Package grub2-common is not configured yet.
 grub-efi-amd64 depends on grub-efi-amd64-bin (= 2.02~beta2-36ubuntu3.11); however:
  Package grub-efi-amd64-bin is not configured yet.
dpkg: error processing package grub-efi-amd64 (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of grub-efi:
 grub-efi depends on grub-common (= 2.02~beta2-36ubuntu3.11); however:
  Package grub-common is not configured yet.
 grub-efi depends on grub-efi-amd64 (= 2.02~beta2-36ubuntu3.11); however:
  Package grub-efi-amd64 is not configured yet.
dpkg: error processing package grub-efi (--configure): dependency problems - leaving unconfigured Setting up cgmanager (0.39-2ubuntu5) ... insserv: warning: script 'K07smfpd' missing LSB tags and overrides insserv: warning: script 'smfpd' missing LSB tags and overrides insserv: There is a loop at service plymouth if started insserv: There is a loop between service plymouth and procps if started insserv: loop involving service procps at depth 2 insserv: loop involving service udev at depth 1 insserv: Starting smfpd depends on plymouth and therefore on system facility `$all' which can not be true! insserv: Max recursions depth 99 reached insserv: loop involving service bluetooth at depth 1 insserv: Starting smfpd depends on plymouth and therefore on system facility `$all' which can not be true! insserv: There is a loop between service smfpd and hwclock if started insserv: loop involving service hwclock at depth 1 insserv: There is a loop at service smfpd if started insserv: Starting smfpd depends on plymouth and therefore on system facility `$all' which can not be true! insserv: loop involving service networking at depth 4 insserv: loop involving service checkroot at depth 4 insserv: loop involving service mountdevsubfs at depth 2 insserv: Starting smfpd depends on plymouth and therefore on system facility `$all' which can not be true! insserv: There is a loop between service plymouth and urandom if started insserv: loop involving service urandom at depth 4 insserv: Starting smfpd depends on plymouth and therefore on system facility `$all' which can not be true! insserv: exiting now without changing boot order! update-rc.d: error: insserv rejected the script header dpkg: error processing package cgmanager (--configure): subprocess installed post-installation script returned error exit status 1 Setting up ebtables (2.0.10.4-3.4ubuntu2) ... 
update-rc.d: warning: start and stop actions are no longer supported; falling back to defaults insserv: warning: script 'K07smfpd' missing LSB tags and overrides insserv: warning: script 'smfpd' missing LSB tags and overrides insserv: There is a loop at service plymouth if started insserv: There is a loop between service plymouth and procps if started insserv: loop involving service procps at depth 2 insserv: loop involving service udev at depth 1 insserv: Starting smfpd depends on plymouth and therefore on system facility `$all' which can not be true! insserv: Max recursions depth 99 reached insserv: loop involving service bluetooth at depth 1 insserv: Starting smfpd depends on plymouth and therefore on system facility `$all' which can not be true! insserv: There is a loop between service smfpd and hwclock if started insserv: loop involving service hwclock at depth 1 insserv: There is a loop at service smfpd if started insserv: Starting smfpd depends on plymouth and therefore on system facility `$all' which can not be true! insserv: loop involving service networking at depth 4 insserv: loop involving service checkroot at depth 4 insserv: loop involving service mountdevsubfs at depth 2 insserv: Starting smfpd depends on plymouth and therefore on system facility `$all' which can not be true! insserv: There is a loop between service plymouth and urandom if started insserv: loop involving service urandom at depth 4 insserv: Starting smfpd depends on plymouth and therefore on system facility `$all' which can not be true! insserv: exiting now without changing boot order! update-rc.d: error: insserv rejected the script header dpkg: error processing package ebtables (--configure): subprocess installed post-installation script returned error exit status 1 dpkg: dependency problems prevent configuration of libvirt-bin: libvirt-bin depends on cgmanager | cgroup-lite | cgroup-bin; however: Package cgmanager is not configured yet. Package cgroup-lite is not installed. 
Package cgroup-bin is not installed. libvirt-bin depends on ebtables; however: Package ebtables is not configured yet. dpkg: error processing package libvirt-bin (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of os-prober: os-prober depends on grub-common; however: Package grub-common is not configured yet. dpkg: error processing package os-prober (--configure): dependency problems - leaving unconfigured Setting up sddm (0.14.0-1~2.gbpf70012+16.04+xenial+build8) ... dpkg: error processing package sddm (--configure): subprocess installed post-installation script returned error exit status 1 Setting up virtualbox (5.0.40-dfsg-0ubuntu1.16.04.1) ... insserv: warning: script 'K07smfpd' missing LSB tags and overrides insserv: warning: script 'smfpd' missing LSB tags and overrides insserv: There is a loop at service plymouth if started insserv: There is a loop between service plymouth and procps if started insserv: loop involving service procps at depth 2 insserv: loop involving service udev at depth 1 insserv: Starting smfpd depends on plymouth and therefore on system facility `$all' which can not be true! insserv: Max recursions depth 99 reached insserv: loop involving service bluetooth at depth 1 insserv: Starting smfpd depends on plymouth and therefore on system facility `$all' which can not be true! insserv: There is a loop between service smfpd and hwclock if started insserv: loop involving service hwclock at depth 1 insserv: There is a loop at service smfpd if started insserv: Starting smfpd depends on plymouth and therefore on system facility `$all' which can not be true! insserv: loop involving service networking at depth 4 insserv: loop involving service checkroot at depth 4 insserv: loop involving service mountdevsubfs at depth 2 insserv: Starting smfpd depends on plymouth and therefore on system facility `$all' which can not be true! 
insserv: There is a loop between service plymouth and urandom if started insserv: loop involving service urandom at depth 4 insserv: Starting smfpd depends on plymouth and therefore on system facility `$all' which can not be true! insserv: exiting now without changing boot order! update-rc.d: error: insserv rejected the script header dpkg: error processing package virtualbox (--configure): subprocess installed post-installation script returned error exit status 1 dpkg: dependency problems prevent configuration of virtualbox-qt: virtualbox-qt depends on virtualbox (= 5.0.40-dfsg-0ubuntu1.16.04.1); however: Package virtualbox is not configured yet. dpkg: error processing package virtualbox-qt (--configure): dependency problems - leaving unconfigured Setting up flashplugin-installer (26.0.0.131ubuntu0.16.04.1) ... Errors were encountered while processing: lightdm grub-common grub2-common grub-efi-amd64-bin grub-efi-amd64 grub-efi cgmanager ebtables libvirt-bin os-prober sddm virtualbox virtualbox-qt E: Sub-process /usr/bin/dpkg returned an error code (1) note : i deleted a few : nsserv: Starting smfpd depends on plymouth and therefore on system facility `$all' which can not be true! lines so the questions wouldn't be too big. How would i best go about fixing this?
Thanks to steeldriver and his comment, I was able to find this question on Ask Ubuntu: 16.04 LTS Update fails - Errors were encountered while processing util-linux. I don't have a Samsung printer/driver, but it did lead me to try uninstalling my Dell printer driver. To uninstall:

cd /opt/DELL/mfp/uninstall
sudo ./uninstall.sh

Doing this solved the problem.
insserv: warning: script 'K07smfpd' missing LSB tags and overrides
1,694,871,099,000
I need to edit /etc/lightdm/lightdm.conf using sed, inside a specific section, to uncomment a line and set its value. The section is [Seat:*] and the line is #autologin-user=. I expect this change:

Before:

[LightDM]
. . .
[Seat:*]
. . .
#autologin-user=
. . .

After:

[LightDM]
. . .
[Seat:*]
. . .
autologin-user=pi
. . .

I've tried this command:

sed -i.bak '/^\[Seat:*]/{s/#autologin-user/autologin-user=pi/}' /etc/lightdm/lightdm.conf

But without success.

PS: There are more occurrences of #autologin-user, so selecting the [Seat:*] section is really important.
Try with this, given an altered input file sample:

[LightDM]

[Seat:*]
#autologin-user=

[Foo:*]
#autologin-user=

[Bar:*]
#autologin-user=

The command:

$ sed '/^\[Seat:\*\]$/,/\[/s/^#autologin-user=$/autologin-user=pi/' foo.txt
[LightDM]

[Seat:*]
autologin-user=pi

[Foo:*]
#autologin-user=

[Bar:*]
#autologin-user=
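The same range trick works with the in-place flag from the question. A hedged sketch, demonstrated on a temporary copy rather than the real /etc/lightdm/lightdm.conf (the sample layout below is an assumption):

```shell
# Build a throwaway config with #autologin-user= in three sections.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[LightDM]
#autologin-user=
[Seat:*]
#autologin-user=
[Bar:*]
#autologin-user=
EOF

# Restrict the substitution to the [Seat:*] section: the address range runs
# from that section header to the next line starting with "[".
sed -i.bak '/^\[Seat:\*\]$/,/^\[/{s/^#autologin-user=$/autologin-user=pi/}' "$cfg"

grep -n 'autologin-user' "$cfg"
# -> 2:#autologin-user=
#    4:autologin-user=pi
#    6:#autologin-user=
```

Only the occurrence inside [Seat:*] is rewritten; the others stay commented, and `-i.bak` leaves a backup of the original next to the file.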
Enable lightdm autologin using sed
1,694,871,099,000
I am making an ansible check to see if lightdm is up before continuing a task in my playbook. Is there a port number that lightdm creates once it's up?
Q: "Check to see if lightdm is up before continuing a task"

A: It's possible to use service_facts and select attributes of a particular service. For example the playbook

- hosts: localhost
  vars:
    my_service: 'lightdm.service'
  tasks:
    - service_facts:
    - set_fact:
        my_state: "{{ services| dict2items| selectattr('key', 'match', my_service)| map(attribute='value.state')| list| first }}"
    - debug:
        msg: "{{ my_service }} is {{ my_state }}"

gives

"msg": "lightdm.service is running"
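If the goal is to wait for lightdm rather than just check it once, a lighter-weight alternative is to poll systemd from a task until the unit reports active. This is a sketch and not part of the original answer; it assumes a systemd host and the task/loop keywords shown:

```yaml
# Hedged sketch: block until lightdm.service is active, then continue.
- name: wait for lightdm to come up
  command: systemctl is-active lightdm
  register: ldm_state
  until: ldm_state.stdout == 'active'
  retries: 10
  delay: 3
  changed_when: false
  failed_when: false
```

`failed_when: false` keeps intermediate "inactive" results from failing the task mid-loop; after the retries are exhausted the play stops at this task.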
lightdm port to listen if it is up
1,601,415,487,000
So I thought about switching to LightDM. I switched by doing

sudo systemctl disable sddm
sudo systemctl enable lightdm

Then I restarted Arch Linux and everything ran fine. But then I thought about getting a better-looking display manager, so I uncommented the line in /etc/lightdm/lightdm.conf that says something like:

...-session=...slick...

I can't remember exactly what was there. Now, I think that I didn't install that theme, because when I booted up Arch Linux, it said "cannot boot light display manager" or something like that. However, I can't figure out how to access the terminal. If I can get to a terminal, I can just change back the display manager or re-comment that line. How do I fix this? All I need to do is access the terminal and then I'm good to go, but I can't seem to figure out how.
If your system boots at all, you can always switch to a terminal and back to the DE with the Ctrl+Alt+F1, Ctrl+Alt+F2, etc. key combos. There is no need to edit GRUB. From a terminal you can edit the file you need, then reboot.
Arch Linux Not Working After I Change Display Manager To Lightdm
1,601,415,487,000
I accidentally erased the plymouth package, which removed many more packages. The system would still boot, but the console hung. I added the packages lightdm and lightdm-gtk-greeter, and now I do get a kind of graphical desktop. Still, things are not what they should be:

- I do not have the vertical line of icons at the left of the screen; isn't that called the "launcher"? Well, it is missing.
- I can start a terminal after right-clicking on the desktop, and from that terminal I can start an application, Firefox or whatever. The application is however started at the same screen position as where it was at last closure, and I cannot drag it around. It also doesn't have the top bar where I can click to close/minimise/maximise it.
- The mouse cursor looks normal inside an application, but when hovering over the desktop it becomes a big X, like in the early days of X.
- The desktop doesn't show the usual top line, where the time and such are displayed.

How do I get back to my usual desktop? Environment is Ubuntu 16.04.5 / lightdm 1.18.3.

Things tried: dpkg-reconfigure lightdm and dpkg-reconfigure lightdm-gtk-greeter. Both returned silently, with no errors and no messages.
All actions with apt (apt-get) are logged. These files are available in /var/log/apt/. To view the most recent history log, execute:

less /var/log/apt/history.log

Then you can reinstall all of the removed packages manually:

apt install package1 package2 package3 ...
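To turn a logged removal back into an install command, the "Remove:" lines in history.log can be stripped of their architecture and version annotations. A hedged sketch; the sample line below is invented for illustration, and the real input would come from something like grep '^Remove:' /var/log/apt/history.log | tail -n 1:

```shell
# Hypothetical history.log entry; apt logs removals as
# "Remove: pkg:arch (version), pkg:arch (version), ..."
line='Remove: plymouth:amd64 (0.9.2-3ubuntu13.5), unity:amd64 (7.4.0+16.04)'

# Drop the "Remove: " prefix, the :arch suffixes, and the "(version)," parts,
# leaving a space-separated package list usable with "apt install".
printf '%s\n' "$line" \
  | sed -e 's/^Remove: //' -e 's/:[a-z0-9]*//g' -e 's/ ([^)]*),\{0,1\}//g'
# -> plymouth unity
```

Note this simple version does not handle epoch colons in version strings (e.g. "1:2.02"); review the list before feeding it to apt.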
lightdm config damaged
1,601,415,487,000
I am using Debian Stretch with lightdm. Suddenly, I am no longer asked for a password when I log in. As soon as I fill in my username and press Tab, the password field is removed and I can log in without ever entering it. Moreover:

- With a command-line login, the situation is the same: I only have to enter my username to log in.
- No password is asked when I use sudo, even across reboots and after executing sudo -K.
- Some applications, such as the root terminal in Thunar, warn me that due to my 'authentication mechanism setup', I do not need to enter a password.
- My password is no longer accepted by programs which have a graphic prompt for a password, such as synaptic-pkexec. sudo synaptic-pkexec does work.

The answers I'm finding are specific to settings in lightdm, or in /etc/sudoers. However, those files look normal, and this problem seems to be more general. Can anyone offer pointers on how to find out what has changed in my system?
The root cause was that there was no password set anymore for my user. I executed:

sudo journalctl -b | grep pam

and noticed:

lightdm[1193]: gkr-pam: no password is available for user

The problem was then solved by executing passwd at the terminal and setting a new password.
Password for user is not asked anymore [closed]
1,601,415,487,000
I am hoping someone can explain what dbus_enable is and what benefits including it would bring. Additionally, I am not sure where this would be placed; however, I feel it may belong in the ~/.bash_profile file, like so:

# User specific environment and startup programs
dbus_enable=YES

The reason I ask is that I stumbled across some old posts, which I've listed below:

dbus_enable and hald_enable
KDE and dbus_enable
dbus_enable is a FreeBSD configuration option which controls whether dbus-daemon is started. This provides an interprocess communications bus, D-Bus, used by some desktop environments (including GNOME and KDE) and various other tools commonly used on Linux systems (not so much FreeBSD). The setting is stored in /etc/rc.conf. Your tags suggest you’re using RHEL or Alma Linux; dbus_enable is irrelevant there.
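For completeness, on FreeBSD the setting is a plain rc.conf variable, not a shell environment variable; a sketch (FreeBSD only, not applicable on RHEL/Alma, and not something to put in ~/.bash_profile):

```shell
# /etc/rc.conf (FreeBSD): start dbus-daemon at boot
dbus_enable="YES"
```

It can also be set non-interactively with FreeBSD's sysrc utility (sysrc dbus_enable=YES), which appends or updates the variable in /etc/rc.conf.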
What is dbus_enable?
1,601,415,487,000
Using lightdm, I'm trying to set a wallpaper on the login screen using lightdm-settings. If I set an image under /usr/share/backgrounds, it works. If I set an image under /home/me/Images, the image is not loaded and the background color is shown instead. Testing the image with lightdm --test-mode shows it works fine, but it won't work on the true login screen. Tested under Linux Mint 21.1 (nb: it worked fine with Linux Mint 21).
You need to make sure the files are world-readable, and that any directories leading up to them are executable for all users. To recap: execute permission is required on a directory in order to read the files within it; merely making the files themselves readable to everyone is not enough if they are in a directory which lacks execute permission. Lightdm itself is a system process, and so has some component of it running as root; but as dictated by security concerns, the parts we are discussing are probably running as a low-privilege system user (like nobody) with limited access to your personal files. Ideally, if you want to use these files for the entire system, they should be owned by the system and stored in a system location (somewhere like /usr/local/lib, etc.). But if this is just your personal computer, your current arrangement is probably acceptable.
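The requirement can be sketched in a throwaway tree (paths and mode bits here are illustrative; for the real case the commands would target /home/me and /home/me/Images):

```shell
# Every directory on the path needs the others-execute bit (traverse),
# and the image itself needs the others-read bit, roughly:
#   chmod o+x /home/me /home/me/Images
#   chmod o+r /home/me/Images/<image>
# Demonstrated below on a temporary tree with explicit modes:
dir=$(mktemp -d)
mkdir -p "$dir/Images"
touch "$dir/Images/wallpaper.png"

chmod 711 "$dir" "$dir/Images"          # rwx for owner, traverse-only for others
chmod 644 "$dir/Images/wallpaper.png"   # read for everyone, write for owner

perm=$(stat -c '%A' "$dir/Images/wallpaper.png")
echo "$perm"    # -> -rw-r--r--
```

With that in place, any unprivileged process (including the greeter) can open the file even though it cannot list or read anything else under the home directory.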
Lightdm do not have access to files under /home
1,601,415,487,000
I am trying to let the login greeter of my Debian unstable machine show a custom wallpaper. It shows up briefly at boot, for about half a second, but then it fades into the default wallpaper. What could be the reason? Here is how I configured it, and some info about my machine:

> sudo cat /etc/lightdm/lightdm-gtk-greeter.conf | ag -v \#
[greeter]
background=/usr/share/wallpaper/leaf.png

> ls -lisah /usr/share/wallpaper/leaf.png
18350307 1,8M -rw-rw-rw- 1 foo foo 1,8M 10. Jul 08:51 /usr/share/wallpaper/leaf.png

> apt info lightdm
Package: lightdm
Version: 1.26.0-8

> apt info lightdm-gtk-greeter
Package: lightdm-gtk-greeter
Version: 2.0.8-2+b1

> uname -a && lsb_release -a
Linux foo 5.18.0-2-amd64 #1 SMP PREEMPT_DYNAMIC Debian 5.18.5-1 (2022-06-16) x86_64 GNU/Linux
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux bookworm/sid
Release: unstable
Codename: sid
I had a similar problem and managed to solve it by adding:

user-background=false

so the full file looks like this:

[greeter]
background=/usr/share/wallpaper/leaf.png
user-background=false
My light-dm greeter only quickly shows my wallpaper then changes to the default one
1,601,415,487,000
Upgrading Debian from stretch to buster, I've had trouble starting xmonad.

What doesn't work

I previously started from the login manager, but now, with either gdm3 or lightdm, I get thrown out directly.

Workaround

After I log in from a raw console (Ctrl+Alt+F2), I can start xmonad if I create a file ~/.xinitrc and run startx.

What I want

I would like to be able to select xmonad in gdm3, then log in, as I used to do in stretch. I'm using a default minimalist config (it works from the console), so I don't think that's the problem. Please tell me what log files might be relevant to send!

Logs

/var/log/syslog at the attempt of login:

Aug 13 09:12:24 vinden systemd[1]: session-c8.scope: Killing process 15753 (lightdm) with signal SIGTERM.
Aug 13 09:12:24 vinden systemd[1]: session-c8.scope: Killing process 15756 (lightdm-gtk-gre) with signal SIGTERM.
Aug 13 09:12:24 vinden systemd[1]: session-c8.scope: Killing process 15769 (uim-helper-serv) with signal SIGTERM.
Aug 13 09:12:24 vinden systemd[1]: Stopping Session c8 of user lightdm.
Aug 13 09:12:24 vinden systemd[1]: session-66.scope: Failed to add inotify watch descriptor for control group /user.slice/user-1000.slice/session-66.scope: No space left on device
Aug 13 09:12:24 vinden systemd[1]: Started Session 66 of user gauthier.
Aug 13 09:12:24 vinden at-spi-bus-launcher[8381]: dbus-daemon[8385]: Activating service name='org.a11y.atspi.Registry' requested by ':1.5' (uid=1000 pid=15983 comm="trayer --edge top --align right --SetDockType true") Aug 13 09:12:24 vinden at-spi2-registr[15994]: Could not open X display Aug 13 09:12:24 vinden at-spi-bus-launcher[8381]: dbus-daemon[8385]: Successfully activated service 'org.a11y.atspi.Registry' Aug 13 09:12:24 vinden at-spi-bus-launcher[8381]: SpiRegistry daemon is running with well-known name - org.a11y.atspi.Registry Aug 13 09:12:24 vinden at-spi2-registr[15994]: AT-SPI: Cannot open default display Aug 13 09:12:24 vinden dbus-daemon[7613]: [session uid=1000 pid=7613] Activating service name='org.freedesktop.portal.IBus' requested by ':1.22' (uid=1000 pid=15977 comm="/usr/bin/ibus-daemon --daemonize --xim ") Aug 13 09:12:24 vinden dbus-daemon[7613]: [session uid=1000 pid=7613] Successfully activated service 'org.freedesktop.portal.IBus' Aug 13 09:12:25 vinden systemd[1]: session-66.scope: Killing process 15897 (lightdm) with signal SIGTERM. Aug 13 09:12:25 vinden systemd[1]: session-66.scope: Killing process 15903 (gnome-keyring-d) with signal SIGTERM. Aug 13 09:12:25 vinden systemd[1]: session-66.scope: Killing process 15906 (xmonad-session) with signal SIGTERM. Aug 13 09:12:25 vinden systemd[1]: session-66.scope: Killing process 15960 (ssh-agent) with signal SIGTERM. Aug 13 09:12:25 vinden systemd[1]: session-66.scope: Killing process 15977 (ibus-daemon) with signal SIGTERM. Aug 13 09:12:25 vinden systemd[1]: session-66.scope: Killing process 15988 (ibus-dconf) with signal SIGTERM. Aug 13 09:12:25 vinden systemd[1]: session-66.scope: Killing process 16007 (dropbox) with signal SIGTERM. Aug 13 09:12:25 vinden systemd[1]: session-66.scope: Killing process 16008 (dropbox) with signal SIGTERM. Aug 13 09:12:25 vinden systemd[1]: Stopping Session 66 of user gauthier. 
Aug 13 09:12:25 vinden acpid: client 15746[0:0] has disconnected Aug 13 09:12:25 vinden acpid: client connected from 16014[0:0] Aug 13 09:12:25 vinden acpid: 1 client rule loaded Aug 13 09:12:26 vinden systemd[1]: session-c9.scope: Failed to add inotify watch descriptor for control group /user.slice/user-118.slice/session-c9.scope: No space left on device Aug 13 09:12:26 vinden systemd[1]: Started Session c9 of user lightdm.
Problem solved, but it was unrelated. I'll post how I debugged this problem. I needed to check /var/log/Xorg.0.log, but the file was somehow truncated so that I missed the error.

1. Log in on another tty with Ctrl+Alt+F1.
2. Save all new input to the log file in a separate file: tail -f /var/log/Xorg.0.log >> ~/tmp/Xorg.log
3. Return to the login manager with Ctrl+Alt+F7.
4. Attempt to log in; in my case it crashed and returned to the login manager.
5. Return to your other tty with Ctrl+Alt+F1 and observe what happened in ~/tmp/Xorg.log.

In my case, if you must know, xkbcomp crashed with a segmentation fault because of an error in my keyboard config file, taking the whole session down with it. I suppose this xkbcomp is a newer version than the one I previously had (because I upgraded Debian), and is less resilient to config errors than the older version.
Error when using xmonad on Debian Buster after distribution upgrade
1,601,415,487,000
Symptom Upgrading from a WQHD ViewSonic display to a giant 4K UHD ViewSonic display yields a No Signal error (via HDMI) after my workstation successfully makes it through GRUB and Plymouth screens. The Debian-branded Plymouth splash screen seems to display at 4K, but then it never displays the LightDM greeter; the new monitor loses the signal at that point. I can switch over to TTY1 where that TTY seems to be running at 4K, based on tiny font size and its sharpness. Furthermore a forest-style ps reveals that the greeter is hanging off a TTY7 X process: root 810 1 0 59886 9164 2 16:09 ? 00:00:00 /usr/sbin/lightdm root 817 810 0 138738 82472 2 16:09 tty7 00:00:00 \_ /usr/lib/xorg/Xorg :0 -seat seat0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch root 843 810 0 41710 7412 2 16:09 ? 00:00:00 \_ lightdm --session-child 16 19 lightdm 858 843 0 196049 133484 2 16:09 ? 00:00:01 | \_ /usr/sbin/lightdm-gtk-greeter root 897 810 0 4687 5112 0 16:09 ? 00:00:00 \_ lightdm --session-child 12 19 Configuration The sole GPU: 01:00.0 VGA compatible controller: NVIDIA Corporation GK107 [GeForce GT 640] (rev a1) (prog-if 00 [VGA controller]) Subsystem: eVga.com. Corp. 
GK107 [GeForce GT 640] Flags: bus master, fast devsel, latency 0, IRQ 27 Memory at f6000000 (32-bit, non-prefetchable) [size=16M] Memory at e0000000 (64-bit, prefetchable) [size=256M] Memory at f0000000 (64-bit, prefetchable) [size=32M] I/O ports at e000 [size=128] Expansion ROM at 000c0000 [disabled] [size=128K] Capabilities: [60] Power Management version 3 Capabilities: [68] MSI: Enable+ Count=1/1 Maskable- 64bit+ Capabilities: [78] Express Endpoint, MSI 00 Capabilities: [b4] Vendor Specific Information: Len=14 <?> Capabilities: [100] Virtual Channel Capabilities: [128] Power Budgeting <?> Capabilities: [600] Vendor Specific Information: ID=0001 Rev=1 Len=024 <?> Capabilities: [900] #19 Kernel driver in use: nouveau Kernel modules: nouveau And Debian 10 Kernel Debian 4.19.132-1 (2020-07-24) x86_64 GNU/Linux lightdm 1.26.0-4 xfce4 4.12.5 xserver-xorg 1:7.7+19 Troubleshooting Verified that the new monitor can be driven via HDMI at 4K by connecting it to a notebook running Pop_OS + Gnome. Back on my main workstation I collected various LightDM, X11, and XFCE config and log files after booting separately with the WQHD and 4K monitors, looking for a smoking gun. 
X11 seems to try to drive the monitor at 4K, :~/troubleshooting/xfce_at_4k_uhd$ egrep -C2 "3840.+2160" * Xorg.0.log-[ 26.169] (II) modeset(0): 0000000000000000000000000000001a Xorg.0.log-[ 26.169] (--) modeset(0): HDMI max TMDS frequency 600000KHz Xorg.0.log:[ 26.169] (II) modeset(0): Not using default mode "3840x2160" (bad mode clock/interlace/doublescan) Xorg.0.log-[ 26.169] (II) modeset(0): Not using default mode "2560x1440" (bad mode clock/interlace/doublescan) Xorg.0.log:[ 26.169] (II) modeset(0): Not using default mode "3840x2160" (bad mode clock/interlace/doublescan) Xorg.0.log:[ 26.169] (II) modeset(0): Not using default mode "3840x2160" (bad mode clock/interlace/doublescan) Xorg.0.log-[ 26.169] (II) modeset(0): Printing probed modes for output HDMI-1 Xorg.0.log:[ 26.169] (II) modeset(0): Modeline "3840x2160"x60.0 533.00 3840 3888 3920 4000 2160 2163 2168 2222 +hsync -vsync (133.2 kHz d) Xorg.0.log:[ 26.169] (II) modeset(0): Modeline "3840x2160"x24.0 297.00 3840 5116 5204 5500 2160 2168 2178 2250 +hsync +vsync (54.0 kHz e) Xorg.0.log:[ 26.169] (II) modeset(0): Modeline "3840x2160"x24.0 296.70 3840 5116 5204 5500 2160 2168 2178 2250 +hsync +vsync (53.9 kHz e) Xorg.0.log:[ 26.169] (II) modeset(0): Modeline "3840x2160"x30.0 262.75 3840 3888 3920 4000 2160 2163 2168 2191 +hsync -vsync (65.7 kHz e) Xorg.0.log-[ 26.169] (II) modeset(0): Modeline "3200x1800"x60.0 492.00 3200 3456 3800 4400 1800 1803 1808 1865 -hsync +vsync (111.8 kHz d) Xorg.0.log-[ 26.169] (II) modeset(0): Modeline "3200x1800"x59.9 373.00 3200 3248 3280 3360 1800 1803 1808 1852 +hsync -vsync (111.0 kHz d) -- Xorg.0.log-[ 26.170] (II) modeset(0): Output HDMI-1 connected Xorg.0.log-[ 26.170] (II) modeset(0): Using exact sizes for initial modes Xorg.0.log:[ 26.170] (II) modeset(0): Output HDMI-1 using initial mode 3840x2160 +0+0 Xorg.0.log-[ 26.170] (==) modeset(0): Using gamma correction (1.0, 1.0, 1.0) Xorg.0.log-[ 26.170] (==) modeset(0): DPI set to (96, 96) But it's unclear to me where things go 
wrong. Can anybody suggest what to look for, or what else to post here as clues? Thanks
I finally got to the bottom of most of this. For whatever reason this card, stock nouveau driver, etc. can't truly drive the monitor at 4K. XFCE and LightDM/GTK Greeter do work at 3K (2880x1620) by dialing back the resolution in the XFCE Display settings widget and configuring a resolution override via xrandr in /etc/lightdm/lightdm.conf:

display-setup-script=/usr/bin/xrandr --output HDMI-1 --primary --mode 2880x1620

More details of my troubleshooting are on another forum.
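For context, display-setup-script is read from a seat section of lightdm.conf, so the override would normally sit under a section header. A sketch (the [Seat:*] name applies to newer lightdm releases; older ones used [SeatDefaults], and HDMI-1 is the output name taken from this machine's Xorg log):

```
[Seat:*]
display-setup-script=/usr/bin/xrandr --output HDMI-1 --primary --mode 2880x1620
```

The script runs as root after X starts but before the greeter is shown, which is why it can force a mode the greeter then inherits.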
No HDMI signal on TTY7 when running Buster + NVIDIA GT 640 + nouveau + 4K UHD
1,601,415,487,000
On a normal working Linux machine, the command w reports 2 users (because 2 users are connected):

w
 19:23:19 up 1:53, 2 users, load average: 0,44, 0,63, 0,81
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT

After a reboot it correctly reports 1 user:

w
 19:26:44 up 1:03, 1 users, load average: 0,44, 0,73, 0,90
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT

On my Slackware current, w reports the sum(!) of the previous users plus the actual user connected; after 4 reboots it reports 4 users:

w
 19:28:16 up 1:58, 4 users, load average: 0,26, 0,59, 0,76

Why? I use LightDM. Is it possible to use Xreset with a proper sessreg line, as with XDM?
Solution found, using an Xreset script.

First, edit the script (create it if it does not exist):

vim /etc/lightdm/Xreset

#!/bin/sh
sessreg -d -l $DISPLAY $USER

chmod 755 /etc/lightdm/Xreset

Then edit /etc/lightdm/lightdm.conf and add:

session-cleanup-script=/etc/lightdm/Xreset

After a reboot, w reports the correct number of users, which is 2: one for the X session and one for the opened shell.
Why don't my wtmp/utmp reset the user count?
1,601,415,487,000
I think I've done more than a dozen system reboots in the last few minutes to check this strange behavior. It seems that the dropbox service succeeds or fails at startup depending on how much time I spend at the login screen where I enter the password. I haven't done any timing, so I don't know exactly how much time that is, but it's no more than half a dozen seconds.

When I enter the password quickly, Dropbox starts up; indeed I verify the following:

$ systemctl status [email protected]
● [email protected] - Dropbox
   Loaded: loaded (/usr/lib/systemd/system/[email protected]; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/[email protected]
           └─override.conf
   Active: active (running) since Wed 2018-06-13 17:07:04 CEST; 6min ago
 Main PID: 1006 (dropbox)
    Tasks: 84 (limit: 19660)
   Memory: 148.0M
   CGroup: /system.slice/system-dropbox.slice/[email protected]
           ├─1006 /usr/bin/dropbox
           ├─1052 dbus-launch --autolaunch 9f3c6fabb4aa40d1b7d5b3a3881af003 --binary-syntax --close-stderr
           ├─1053 /usr/bin/dbus-daemon --syslog-only --fork --print-pid 5 --print-address 7 --session
           └─1057 /usr/bin/dunst

Jun 13 17:07:05 greywarden dropbox[1006]: dropbox: load fq extension '/opt/dropbox/PyQt5.QtWebKitWidgets.so'
Jun 13 17:07:05 greywarden dropbox[1006]: dropbox: load fq extension '/opt/dropbox/PyQt5.QtWidgets.so'
Jun 13 17:07:05 greywarden dropbox[1006]: dropbox: load fq extension '/opt/dropbox/PyQt5.QtPrintSupport.so'
Jun 13 17:07:05 greywarden dropbox[1006]: dropbox: load fq extension '/opt/dropbox/PyQt5.QtDBus.so'
Jun 13 17:07:05 greywarden dbus-daemon[1053]: [session uid=1000 pid=1051] Activating service name='org.freedesktop.Notifications' requested by ':1.0' (uid=1000 pid=1006 comm="/usr/bin/dropbox ")
Jun 13 17:07:05 greywarden org.freedesktop.Notifications[1053]: Warning: 'allow_markup' is deprecated, please use 'markup' instead.
Jun 13 17:07:05 greywarden org.freedesktop.Notifications[1053]: Warning: The frame section is deprecated, width has been renamed to frame_width and moved to the global section.
Jun 13 17:07:05 greywarden org.freedesktop.Notifications[1053]: Warning: The frame section is deprecated, color has been renamed to frame_color and moved to the global section.
Jun 13 17:07:05 greywarden org.freedesktop.Notifications[1053]: Warning: Unknown keyboard shortcut: mod4+grave
Jun 13 17:07:05 greywarden dbus-daemon[1053]: [session uid=1000 pid=1051] Successfully activated service 'org.freedesktop.Notifications'

On the other hand, when I'm too slow entering the password (e.g. type, backspace, retype, ...) or I'm simply not looking at the screen while reading a book, the output is

$ systemctl status [email protected]
● [email protected] - Dropbox
   Loaded: loaded (/usr/lib/systemd/system/[email protected]; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/[email protected]
           └─override.conf
   Active: failed (Result: exit-code) since Wed 2018-06-13 17:15:03 CEST; 39s ago
  Process: 1031 ExecStart=/usr/bin/dropbox (code=exited, status=250)
 Main PID: 1031 (code=exited, status=250)

and dropbox is indeed not running (no tray icon). I'm running i3 on Arch Linux with the LightDM login manager. I followed the Prevent automatic updates and Autostart sections at the wiki page, but it's not unlikely that I confused Autostart on boot with systemd with Autostart on login with systemd, and that could be the root of the problem.
As suggested in the first comment to the question, this wikipage solved the issue.
dropbox service fails or succeeds depending on time spent at the login screen
Do changes in /etc/security/limits.conf require a reboot before taking effect? For example, if I have a script that sets the following limits in /etc/security/limits.conf, does this require a system reboot before those limits will take effect?

* hard nofile 94000
* soft nofile 94000
* hard nproc 64000
* soft nproc 64000
No, but you should close all active session windows; they still remember the old values. In other words, log out and back in. Every new remote session or local login shell picks up the changed limits.
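As a quick sanity check after re-logging in, you can read the current process's limits programmatically instead of trusting a stale shell; a minimal Python sketch (the resource module reads the same per-process limits that ulimit reports):

```python
import resource

# The limits ulimit reports are per-process: a shell that was already
# running when limits.conf changed keeps the old values until you log
# out and back in, so check from a freshly started session.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("nofile: soft=%d hard=%d" % (soft, hard))
```

If the printed values still show the old limits, the session was started before the change.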
do changes in /etc/security/limits.conf require a reboot?
Right now, I know how to:

find open files limit per process: ulimit -n
count all opened files by all processes: lsof | wc -l
get maximum allowed number of open files: cat /proc/sys/fs/file-max

My question is: Why is there a limit of open files in Linux?
The reason is that the operating system needs memory to manage each open file, and memory is a limited resource - especially on embedded systems. As root you can change the maximum open-file count per process (via ulimit -n) and per system (e.g. echo 800000 > /proc/sys/fs/file-max).
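You can watch the per-process limit bite with a short Python sketch (Linux; the value 64 is an arbitrary small soft limit chosen for the demonstration):

```python
import errno, resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
# Lower the soft limit for this process only; 64 is an arbitrary small value.
resource.setrlimit(resource.RLIMIT_NOFILE, (64, hard))

files, err = [], None
try:
    while True:
        files.append(open("/dev/null"))
except OSError as e:
    err = e.errno

# open() fails with EMFILE once the process runs out of file descriptors
print("open() failed after", len(files), "files with", errno.errorcode[err])

# Clean up: close everything and restore the original soft limit.
for f in files:
    f.close()
resource.setrlimit(resource.RLIMIT_NOFILE, (soft, hard))
```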
Why is number of open files limited in Linux?
How can I limit a process to one CPU core? Something similar to ulimit or cpulimit would be nice. (Just to be sure: I do NOT want to limit percentage usage or time of execution. I want to force an app, with all its children, processes and threads, to use one CPU core, or 'n' CPU cores.)
Under Linux, use the sched_setaffinity system call. The affinity of a process is the set of processors on which it can run. There's a standard shell wrapper: taskset. For example, to pin a process to CPU #0 (you need to choose a specific CPU):

taskset -c 0 mycommand --option  # start a command with the given affinity
taskset -c -p 0 1234             # set the affinity of a running process

There are third-party modules for both Perl (Sys::CpuAffinity) and Python (affinity) to set a process's affinity. Both of these work on both Linux and Windows (Windows may require other third-party modules with Sys::CpuAffinity); Sys::CpuAffinity also works on several other unix variants. If you want to set a process's affinity from the time of its birth, set the current process's affinity immediately before calling execve. Here's a trivial wrapper that forces a process to execute on CPU 0.

#!/usr/bin/env perl
use POSIX;
use Sys::CpuAffinity;
Sys::CpuAffinity::setAffinity(getpid(), [0]);
exec @ARGV or die "exec: $!";
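On recent Python versions no third-party module is needed on Linux: os exposes the same system call directly. A minimal sketch pinning the current process (and any children it forks afterwards) to CPU 0:

```python
import os

before = os.sched_getaffinity(0)   # pid 0 means "the calling process"
os.sched_setaffinity(0, {0})       # pin to CPU #0
print(os.sched_getaffinity(0))     # the set is now {0}
os.sched_setaffinity(0, before)    # restore the original affinity
```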
How to limit a process to one CPU core in Linux? [duplicate]
I am running a docker server on Arch Linux (kernel 4.3.3-2) with several containers. Since my last reboot, both the docker server and random programs within the containers crash with a message about not being able to create a thread, or (less often) to fork. The specific error message differs depending on the program, but most of them seem to mention the specific error Resource temporarily unavailable. See the end of this post for some example error messages.

Now there are plenty of people who have had this error message, and plenty of responses to them. What's really frustrating is that everyone seems to be speculating how the issue could be resolved, but no one seems to point out how to identify which of the many possible causes for the problem is present. I have collected these 5 possible causes for the error and how to verify that they are not present on my system:

1. There is a system-wide limit on the number of threads configured in /proc/sys/kernel/threads-max (source). In my case this is set to 60613.

2. Every thread takes some space in the stack. The stack size limit is configured using ulimit -s (source). The limit for my shell used to be 8192, but I have increased it by putting

   * soft stack 32768

   into /etc/security/limits.conf, so ulimit -s now returns 32768. I have also increased it for the docker process by putting LimitSTACK=33554432 into /etc/systemd/system/docker.service (source), and I verified that the limit applies by looking into /proc/<pid of docker>/limits and by running ulimit -s inside a docker container.

3. Every thread takes some memory. A virtual memory limit is configured using ulimit -v. On my system it is set to unlimited, and 80% of my 3 GB of memory are free.

4. There is a limit on the number of processes, using ulimit -u. Threads count as processes in this case (source). On my system, the limit is set to 30306, and for the docker daemon and inside docker containers, the limit is 1048576. The number of currently running threads can be found by running ls -1d /proc/*/task/* | wc -l or by running ps -elfT | wc -l (source). On my system they are between 700 and 800.

5. There is a limit on the number of open files, which according to some sources is also relevant when creating threads. The limit is configured using ulimit -n. On my system and inside docker, the limit is set to 1048576. The number of open files can be found using lsof | wc -l (source); on my system it is about 30000.

It looks like before the last reboot I was running kernel 4.2.5-1; now I'm running 4.3.3-2. Downgrading to 4.2.5-1 fixes all the problems. Other posts mentioning the problem are this and this. I have opened a bug report for Arch Linux.

What has changed in the kernel that could be causing this?

Here are some example error messages:

Crash dump was written to: erl_crash.dump
Failed to create aux thread

Jan 07 14:37:25 edeltraud docker[30625]: runtime/cgo: pthread_create failed: Resource temporarily unavailable

dpkg: unrecoverable fatal error, aborting: fork failed: Resource temporarily unavailable
E: Sub-process /usr/bin/dpkg returned an error code (2)

test -z "/usr/include" || /usr/sbin/mkdir -p "/tmp/lib32-popt/pkg/lib32-popt/usr/include"
/bin/sh: fork: retry: Resource temporarily unavailable
/usr/bin/install -c -m 644 popt.h '/tmp/lib32-popt/pkg/lib32-popt/usr/include'
test -z "/usr/share/man/man3" || /usr/sbin/mkdir -p "/tmp/lib32-popt/pkg/lib32-popt/usr/share/man/man3"
/bin/sh: fork: retry: Resource temporarily unavailable
/bin/sh: fork: retry: No child processes
/bin/sh: fork: retry: Resource temporarily unavailable
/bin/sh: fork: retry: No child processes
/bin/sh: fork: retry: No child processes
/bin/sh: fork: retry: Resource temporarily unavailable
/bin/sh: fork: retry: Resource temporarily unavailable
/bin/sh: fork: retry: No child processes
/bin/sh: fork: Resource temporarily unavailable
/bin/sh: fork: Resource temporarily unavailable
make[3]: *** [install-man3] Error 254

Jan 07 11:04:39 edeltraud docker[780]: time="2016-01-07T11:04:39.986684617+01:00" level=error msg="Error running container: [8] System error: fork/exec /proc/self/exe: resource temporarily unavailable"

[Wed Jan 06 23:20:33.701287 2016] [mpm_event:alert] [pid 217:tid 140325422335744] (11)Resource temporarily unavailable: apr_thread_create: unable to create worker thread
The problem is caused by the TasksMax systemd attribute. It was introduced in systemd 228 and makes use of the cgroups pid subsystem, which was introduced in Linux kernel 4.3. A task limit of 512 is thus enabled in systemd if kernel 4.3 or newer is running. The feature is announced here and was introduced in this pull request, and the default values were set by this pull request.

After upgrading my kernel to 4.3, systemctl status docker displays a Tasks line:

# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/etc/systemd/system/docker.service; disabled; vendor preset: disabled)
   Active: active (running) since Fri 2016-01-15 19:58:00 CET; 1min 52s ago
     Docs: https://docs.docker.com
 Main PID: 2770 (docker)
    Tasks: 502 (limit: 512)
   CGroup: /system.slice/docker.service

Setting TasksMax=infinity in the [Service] section of docker.service fixes the problem. docker.service is usually in /usr/lib/systemd/system, but it can also be put/copied in /etc/systemd/system to avoid it being overridden by the package manager.

A pull request is increasing TasksMax for the docker example systemd files, and an Arch Linux bug report is trying to achieve the same for the package. There is some additional discussion going on on the Arch Linux Forum and in an Arch Linux bug report regarding lxc.

DefaultTasksMax can be used in the [Manager] section in /etc/systemd/system.conf (or /etc/systemd/user.conf for user-run services) to control the default value for TasksMax.

Systemd also applies a limit for programs run from a login shell. These default to 4096 per user (will be increased to 12288) and are configured as UserTasksMax in the [Login] section of /etc/systemd/logind.conf.
Creating threads fails with “Resource temporarily unavailable” with 4.3 kernel
So I have 4 GB RAM + 4 GB swap. I want to create a user with limited RAM and swap: 3 GB RAM and 1 GB swap. Is such a thing possible? Is it possible to start applications with limited RAM and swap available to them without creating a separate user (and without installing any special apps - having just a default Debian/CentOS server configuration, and not using sudo)?

Update: So I opened a terminal and typed in the ulimit command:

ulimit -v 1000000

which should be about a 976.6 MB limitation. Next I ran ulimit -a and saw that the limitation is "on". Then I started a bash script that compiles and starts my app in nohup, a long one:

nohup ./cloud-updater-linux.sh >& /dev/null &

...but after some time I saw that it had downloaded some large lib and started to compile it (which would be OK if no limitations were applied). But I thought I had applied limitations to the shell and all processes launched with/from it with ulimit -v 1000000? What did I get wrong? How do I make a terminal and all sub-processes it launches limited in RAM usage?
ulimit is made for this. You can set up defaults for ulimit on a per-user or per-group basis in /etc/security/limits.conf

ulimit -v KBYTES sets the max virtual memory size. I don't think you can give a max amount of swap. It's just a limit on the amount of virtual memory the user can use. So your limits.conf would have the line (to a maximum of 4G of memory):

luser hard as 4000000

UPDATE - CGroups

The limits imposed by ulimit and limits.conf are per process. I definitely wasn't clear on that point. If you want to limit the total amount of memory a user uses (which is what you asked), you want to use cgroups.

In /etc/cgconfig.conf:

group memlimit {
    memory {
        memory.limit_in_bytes = 4294967296;
    }
}

This creates a cgroup that has a max memory limit of 4GiB.

In /etc/cgrules.conf:

luser memory memlimit/

This will cause all processes run by luser to be run inside the memlimit cgroup created in cgconfig.conf.
How to create a user with limited RAM usage?
I operate a Linux system which has a lot of users, but sometimes abuse occurs: a user might run a single process that uses up more than 80% of the CPU/memory. So is there a way to prevent this from happening by limiting the amount of CPU usage a process can use (to 10%, for example)? I'm aware of cpulimit, but it unfortunately applies the limit only to the processes I instruct it to limit (e.g. single processes). So my question is, how can I apply the limit to all of the running processes and processes that will be run in the future, without needing to provide their id/path, for example?
While it can be an abuse for memory, it isn't for CPU: when a CPU is idle, a running process (by "running", I mean that the process isn't waiting for I/O or something else) will take 100% CPU time by default. And there's no reason to enforce a limit. Now, you can set up priorities thanks to nice. If you want them to apply to all processes for a given user, you just need to make sure that the user's login shell is run with nice: the child processes will inherit the nice value. This depends on how the users log in. See Prioritise ssh logins (nice) for instance. Alternatively, you can set up virtual machines. Indeed setting a per-process limit doesn't make much sense since the user can start many processes, abusing the system. With a virtual machine, all the limits will be global to the virtual machine. Another solution is to set /etc/security/limits.conf limits; see the limits.conf(5) man page. For instance, you can set the maximum CPU time per login and/or the maximum number of processes per login. You can also set maxlogins to 1 for each user.
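The inheritance of the nice value mentioned above is easy to demonstrate; a small Python sketch (an unprivileged process can only raise its niceness, and any child it forks or execs afterwards inherits the new value):

```python
import os

base = os.nice(0)   # an increment of 0 just reports the current niceness
new = os.nice(10)   # raise niceness by 10; children started from here inherit it
print(base, "->", new)
```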
Limiting processes to not exceed more than 10% of CPU usage
I'm looking for a way to limit a process's disk IO to a set speed limit. Ideally the program would work similar to this:

$ limitio --pid 32423 --write-limit 1M
Limiting process 32423 to 1 megabyte per second hard drive writing speed.
That is certainly not a trivial task, and it can't be done in userspace. Fortunately, it is possible on Linux, using the cgroup mechanism and its blkio controller. Setting up cgroups is somewhat distribution-specific, as they may already be mounted or even used somewhere. Here's the general idea, however (assuming you have a proper kernel configuration):

mount -t tmpfs cgroup_root /sys/fs/cgroup
mkdir -p /sys/fs/cgroup/blkio
mount -t cgroup -o blkio none /sys/fs/cgroup/blkio

Now that you have the blkio controller set up, you can use it:

mkdir -p /sys/fs/cgroup/blkio/limit1M/
echo "X:Y 1048576" > /sys/fs/cgroup/blkio/limit1M/blkio.throttle.write_bps_device

Now you have a cgroup limit1M that limits the write speed on the device with major/minor numbers X:Y to 1MB/s. As you can see, this limit is per device. All you have to do now is to put some process inside that group and it should be limited:

echo $PID > /sys/fs/cgroup/blkio/limit1M/tasks

I don't know if/how this can be done on other operating systems.
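The X:Y pair is the device's major:minor number. One way to look it up (besides lsblk -o NAME,MAJ:MIN for block devices) is from stat(2)'s st_rdev field; a sketch using /dev/null as an example device node:

```python
import os

# st_rdev encodes the device number of a device node; os.major/os.minor
# split it apart. /dev/null is character device 1:3 on Linux.
st = os.stat("/dev/null")
print(os.major(st.st_rdev), os.minor(st.st_rdev))
```

For a real block device you would stat e.g. /dev/sda instead, and write the resulting decimal "major:minor" pair into the blkio.throttle file.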
How to Throttle per process I/O to a max limit?
We have an Ubuntu 12.04 server with httpd on port 80 and we want to limit:

the maximum connections per IP address to httpd to 10
the maximum new connections per second to httpd to 150

How can we do this with iptables?
iptables -A INPUT -p tcp --syn --dport 80 -m connlimit --connlimit-above 15 --connlimit-mask 32 -j REJECT --reject-with tcp-reset

This will reject connections above 15 from one source IP.

iptables -A INPUT -m state --state RELATED,ESTABLISHED -m limit --limit 150/second --limit-burst 160 -j ACCEPT

Here, 160 new connections (packets, really) are allowed before the limit of 150 NEW connections (packets) per second is applied.
Limit max connections per IP address and new connections per second with iptables
Is there a method of slowing down the copy process on Linux? I have a big file, say 10GB, and I'd like to copy it to another directory, but I don't want to copy it with full speed. Let's say I'd like to copy it with the speed of 1mb/s, not faster. I'd like to use a standard Linux cp command. Is this possible? (If yes, how?) Edit: so, I'll add more context to what I'm trying to achieve. I have a problem on the ArchLinux system when copying large files over USB (to a pendrive, usb disk, etc). After filling up the usb buffer cache, my system stops responding (even the mouse stops; it moves only sporadically). The copy operation is still ongoing, but it takes 100% resources of the box. When the copy operation finishes, everything goes back to normal -- everything is perfectly responsive again. Maybe it's a hardware error, I don't know, but I do know I have two machines with this problem (both are on ArchLinux, one is a desktop box, second is a laptop). Easiest and fastest "solution" to this (I agree it's not the 'real' solution, just an ugly 'hack') would be to prevent this buffer from filling up by copying the file with an average write speed of the USB drive, for me that would be enough.
You can throttle a pipe with pv -qL (or cstream -t provides similar functionality):

tar -cf - . | pv -q -L 8192 | tar -C /your/usb -xvf -

-q removes stderr progress reporting. The -L limit is in bytes. More about the --rate-limit/-L flag from man pv:

-L RATE, --rate-limit RATE
    Limit the transfer to a maximum of RATE bytes per second. A suffix of "k", "m", "g", or "t" can be added to denote kilobytes (*1024), megabytes, and so on.

This answer originally pointed to throttle, but that project is no longer available so has slipped out of some package systems.
Make disk/disk copy slower
I have a systemd service (a CI runner) that tends to bog the system down with very CPU-intensive jobs. I caught the load average flying over 100 just now and want to put a stop to that nonsense. Nothing else on the system is limited in any way, so the deal is that I want everything else to continue as it runs now, but either: (a) every process running as the unique user the CI jobs run as, or (b) any child processes instantiated by the systemd service daemon ... to play second fiddle to everything else on the system. In fact I'd like them to have something like a 90% absolute cap even when nothing else on the system needs the remaining 10% of CPU cycles, but if anything else at all requests CPU time I would like them to get as much as they want first. What is the best way to configure this? I'm running Arch Linux on EC2 and have cgroups available (including cgmanager) but have never used them.
First of all, most of what comes up in a web search has been deprecated. For example cgmanager is no longer supported on new systemd versions. Don't follow 99% of what comes up in web searches as far as using cpulimit, nice, cgset or other tools for this job. They either won't work at all as advertised (as in the case of cgroup management tools that expect you to create your own hierarchy) or won't get the job done without resorting to lots of hacks (as in the case of using 'nice' levels to manage whole groups of processes).

The good news is that along with those deprecations (and pursuing the traditional all-devouring octopus monster modus operandi of systemd) a default configuration is in place for everything on the system, and tweaking it for systemd services is trivial. Just add an overlay configuration to the service you want to limit:

$ sudo systemctl edit <servicename>

Add a section with whatever resource control values you want to override. In my case I came up with this:

[Service]
CPUWeight=20
CPUQuota=85%
IOWeight=20
MemorySwapMax=0

These values are not all necessary, but the first two answer the question as asked:

CPUWeight defaults to 100 for all processes on the system. Setting a low value still lets the process use the CPU if nothing else is, effectively keeping the system responsive for other tasks while not slowing down the results much. This is an arbitrary weight integer.

CPUQuota is an absolute limit on how much CPU time is granted even if nothing else is going on. This is a percent value. In my case it wasn't really necessary to set this to fix the resource-hogging issue. I ended up setting it anyway to keep the CPU temperature down when lots of CI jobs pile up.

IOWeight is much the same as CPUWeight, in this case used to keep disks free for system tasks and only keep them busy with CI jobs when nothing else is going on.

MemorySwapMax also isn't in scope for the question; in my case I ended up adding it because the ray tracer (povray) running in some of the CI jobs seems to think using 30+ gigs of swap in addition to the 30+ gigs of RAM in this system is a good idea just because it is there. It runs faster if you don't let it use swap at all. This is probably something better configured in povray, but this way I don't have to police what happens inside CI jobs and don't have to disable the system swap.

Lastly, these values can be changed on the fly without restarting services by running systemctl daemon-reload. This is quite handy to watch the effect of the changes right away.
How to limit a systemd service to "play nice" with the CPU?
On Unix systems path names usually have virtually no length limitation (well, 4096 characters on Linux)... except for socket file paths, which are limited to around 100 characters (107 characters on Linux).

First question: why such a low limitation?

I've checked that it seems possible to work around this limitation by changing the current working directory and creating, in various directories, several socket files all using the same path ./myfile.sock: the client applications seem to correctly connect to the expected server processes even though lsof shows all of them listening on the same socket file path. Is this workaround reliable or was I just lucky? Is this behavior specific to Linux or may this workaround be applicable to other Unixes as well?
Compatibility with other platforms, or compatibility with older stuff to avoid overruns while using snprintf() and strncpy().

Michael Kerrisk explains in his book at page 1165 - Chapter 57, Sockets: Unix domain:

    SUSv3 doesn't specify the size of the sun_path field. Early BSD implementations used 108 and 104 bytes, and one contemporary implementation (HP-UX 11) uses 92 bytes. Portable applications should code to this lower value, and use snprintf() or strncpy() to avoid buffer overruns when writing into this field.

Docker guys even made fun of it, because some sockets were 110 characters long:

    lol 108 chars ETOOMANY

This is why Linux uses a 108-char socket path. Could this be changed? Of course. And this is the reason why this limitation was created on older operating systems in the first place: Why is the maximal path length allowed for unix-sockets on linux 108? Quoting the answer:

    It was to match the space available in a handy kernel data structure.

    Quoting "The Design and Implementation of the 4.4BSD Operating System" by McKusick et. al. (page 369):

    The memory management facilities revolve around a data structure called an mbuf. Mbufs, or memory buffers, are 128 bytes long, with 100 or 108 bytes of this space reserved for data storage.

Other OSs (unix domain sockets):

OpenBSD: 104 characters
FreeBSD: 104 characters
Mac OS X 10.9: 104 characters
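You can see the limit in action without writing C: binding an AF_UNIX socket to a path longer than sun_path allows fails immediately. A sketch (the 200-character path is deliberately far over the ~107-byte Linux limit):

```python
import socket

s = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
long_path = "/tmp/" + "x" * 200   # well beyond the 108-byte sun_path field
try:
    s.bind(long_path)
except OSError as e:
    print(e)                      # complains that the AF_UNIX path is too long
finally:
    s.close()
```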
Why is socket path length limited to a hundred chars?
How can I limit the size of a log file written with >> to 200MB?

$ run_program >> myprogram.log
If your application (i.e. run_program) does not support limiting the size of the log file, then you can check the file size periodically in a loop with an external application or script.

You can also use logrotate(8) to rotate your logs; it has a size parameter which you can use for your purpose:

    With this, the log file is rotated when the specified size is reached. Size may be specified in bytes (default), kilobytes (sizek), or megabytes (sizem).
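A sketch of a matching logrotate rule (the path and rotation count are illustrative). Note copytruncate: a program appending with >> keeps its file descriptor open, so without it the program would keep writing to the renamed file after rotation:

```
/path/to/myprogram.log {
    size 200M
    rotate 5
    copytruncate
    missingok
}
```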
How to limit log file size using >>
I'm tuning the nofile value in /etc/security/limits.conf for my oracle user and I have a question about its behavior: does nofile limit the total number of files the user can have open across all of its processes, or does it limit the total number of files the user can have open in each of its processes? Specifically, for the following usage:

oracle hard nofile 65536
Most of the values¹ in limits.conf are limits that can be set with the ulimit shell command or the setrlimit system call. They are properties of a process. The limits apply independently to each process. In particular, each process can have up to nofile open files. There is no limit to the number of open files cumulated by the processes of a user.

The nproc limit is a bit of a special case, in that it does sum over all the processes of a user. Nonetheless, it still applies per-process: when a process calls fork to create a new process, the call is denied if the number of processes belonging to the process's euid would become larger than the process's RLIMIT_NPROC value.

The limits.conf man page explains that the limits apply to a session. This means that all the processes in a session will have these same limits (unless changed by one of these processes). It doesn't mean that any sum is done over the processes in a session (that's not even something that the operating system tracks — there is a notion of session, but it's finer-grained than that; for example each X11 application tends to end up in its own session). The way it works is that the login process sets itself some limits, and they are inherited by all child processes.

¹ The exceptions are maxlogins, maxsyslogins and chroot, which are applied as part of the login process to deny or influence login.
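The per-process nature is easy to see with a small Python sketch: a forked child gets a copy of the parent's (soft, hard) pair, but changing the limit in the child does not touch the parent (64 is an arbitrary small value for the demo):

```python
import os, resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)

pid = os.fork()
if pid == 0:
    # The child starts with a copy of the parent's limits...
    inherited = resource.getrlimit(resource.RLIMIT_NOFILE)
    # ...and lowering them here affects only the child.
    resource.setrlimit(resource.RLIMIT_NOFILE, (64, hard))
    os._exit(0 if inherited == (soft, hard) else 1)

_, status = os.waitpid(pid, 0)
print("child inherited the same limits:", os.WEXITSTATUS(status) == 0)
print("parent unchanged:", resource.getrlimit(resource.RLIMIT_NOFILE) == (soft, hard))
```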
Are limits.conf values applied on a per-process basis?
I am considering using btrfs on my data drive so that I can use snapper, or something like snapper, to take time based snapshots. I believe this will let me browse old versions of my data. This would be in addition to my current off site backup since a drive failure would wipe out the data and the snapshots. From my understanding btrfs snapshots do not take up much space (meta data and the blocks that have changed, plus maybe some overhead), so space doesn't seem to be a constraint. If I have a million snapshots (e.g., a snapshot every minute for two years) would that cause havoc, assuming I have enough disk space for the data, the changed data, and the meta data? If there is a practical limit on the number of snapshots, does it depend on the number of files and/or size of files?
While technically there is no limit on the number of snapshots, I asked on the BTRFS mailing list: The (practical) answer depends to some extent on how you use btrfs. Btrfs does have scaling issues due to too many snapshots (or actually the reflinks snapshots use, dedup using reflinks can trigger the same scaling issues), and single to low double-digits of snapshots per snapshotted subvolume remains the strong recommendation for that reason. But the scaling issues primarily affect btrfs maintenance commands themselves, balance, check, subvolume delete. While millions of snapshots will make balance for example effectively unworkable (it'll sort of work but could take months), normal filesystem operations like reading and saving files doesn't tend to be affected, except to the extent that fragmentation becomes an issue (tho cow filesystems such as btrfs are noted for fragmentation, unless steps like defrag are taken to reduce it). It appears that using snapshots as an archival backup similar to time machine/snapper is not a good idea.
Practical limit on the number of btrfs snapshots?
I have /etc/security/limits.conf that seems not to be applied:

a soft nofile 1048576 # default: 1024
a hard nofile 2097152
a soft noproc 262144 # default 128039
a hard noproc 524288

where a is my username. When I run ulimit -Hn and ulimit -Sn, they show:

4096
1024

There's only one other file in /etc/security/limits.d, whose content is:

scylla - core unlimited
scylla - memlock unlimited
scylla - nofile 200000
scylla - as unlimited
scylla - nproc 8096

I also tried appending those values to /etc/security/limits.conf, then restarting, and doing this:

echo 'session required pam_limits.so' | sudo tee -a /etc/pam.d/common-session

but it didn't work. My OS is Ubuntu 17.04.
https://superuser.com/questions/1200539/cannot-increase-open-file-limit-past-4096-ubuntu/1200818

There's a bug since Ubuntu 16, apparently. Basically:

Edit /etc/systemd/user.conf for the soft limit, and add DefaultLimitNOFILE=1048576.
Edit /etc/systemd/system.conf for the hard limit, and add DefaultLimitNOFILE=2097152.

Credit goes to @mkasberg.
/etc/security/limits.conf not applied
There are plenty of questions and answers about constraining the resources of a single process, e.g. RLIMIT_AS can be used to constrain the maximum memory allocated by a process that can be seen as VIRT in the likes of top. More on the topic e.g. here Is there a way to limit the amount of memory a particular process can use in Unix? setrlimit(2) documentation says: A child process created via fork(2) inherits its parent's resource limits. Resource limits are preserved across execve(2). It should be understood in the following way: If a process has a RLIMIT_AS of e.g. 2GB, then it cannot allocate more memory than 2GB. When it spawns a child, the address space limit of 2GB will be passed on to the child, but counting starts from 0. The 2 processes together can take up to 4GB of memory. But what would be the useful way to constrain the sum total of memory allocated by a whole tree of processes?
I am not sure if this answers your question, but I found this perl script that claims to do exactly what you are looking for. The script implements its own system for enforcing the limits by waking up and checking the resource usage of the process and its children. It seems to be well documented and explained, and has been updated recently.

As slm said in his comment, cgroups can also be used for this. You might have to install the utilities for managing cgroups; assuming you are on Linux you should look for libcgroup.

sudo cgcreate -t $USER:$USER -a $USER:$USER -g memory:myGroup

Make sure $USER is your user. Your user should then have access to the cgroup memory settings in /sys/fs/cgroup/memory/myGroup. You can then set the limit to, let's say, 500 MB by doing this:

echo 500000000 > /sys/fs/cgroup/memory/myGroup/memory.limit_in_bytes

Now let's run Vim:

cgexec -g memory:myGroup vim

The vim process and all its children should now be limited to using 500 MB of RAM. However, I think this limit only applies to RAM and not swap. Once the processes reach the limit they will start swapping. I am not sure if you can get around this; I can not find a way to limit swap usage using cgroups.
How to limit the total resources (memory) of a process and its children
When I enter text on stdin in an OS X Terminal, a single line is limited to 1024 characters. For example, cat > /dev/null beeps after I type (or paste) a line longer than this, and refuses to accept more characters. A problematic example is when I want to count characters from pasted text with cat | wc -c: the cat blocks at the first long line. This seems to be a general problem with pasting to stdin. Can this observed stdin limitation of 1024 characters per line be removed or pushed to a higher limit? I need this because I want to paste text that has lines longer than 1024 characters. I could also use a "heredoc" << EOT and paste my long lines without any problem, but then the text appears in my shell history, which I don't want.
Probably a limit of the terminal device line discipline's internal line editor buffer. You should be able to enter long lines by pressing Ctrl+D in the middle of one (so the currently entered part is sent to cat and the line editor flushed), or by disabling that line editor altogether. For instance, if using zsh:

STTY=-icanon cat > file

Note that then you can't use Backspace or any other editing capability. You'd also need to press Ctrl+C to stop cat. With other shells:

s=$(stty -g); stty -icanon; cat > file

Followed by:

stty "$s"

Or just:

stty -icanon; cat > file
stty sane

Of course, things like cat | wc -l or wc -l won't work, because Ctrl+C kills all the processes in the foreground process group. You could do:

STTY=-icanon cat | (trap '' INT; wc -l)

Or as suggested by @mikeserv:

STTY='eol " "' wc -l

That way, the buffer will be flushed every time you enter a space. You're still in canonical mode, so you can still edit words (as opposed to lines) and use Ctrl+D to signify EOF. Or:

STTY='-icanon min 0 time 30' wc -l

EOF will come 3 seconds after you stop typing. Or:

STTY=-icanon sed -n '/^EOF$/q;p' | wc -l

And enter EOF (the 3 letters on a line on their own) to end the input. As suggested by Gilles, where possible (as in, generally not in a telnet/ssh session), use pbpaste instead of pasting. (That's on OS X; under X11, call xsel or xclip.):

pbpaste | wc -l

That will also avoid problems with some control characters (like ^C) that may be found in the copy-paste buffer.
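One more point worth demonstrating: the 1024-byte ceiling lives entirely in the tty line discipline, so data arriving over a pipe or from a file (which is what the pbpaste trick achieves) is never subject to it. A quick self-contained sanity check:

```shell
# A 5000-character "line" passes through a pipe untouched; only the
# terminal's canonical-mode editor imposes the 1024-byte ceiling.
long_line=$(printf 'x%.0s' $(seq 1 5000))   # 5000 x's, no newline
printf '%s\n' "$long_line" | wc -c          # all 5001 bytes arrive
```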
Terminal does not accept pasted or typed lines of more than 1024 characters [duplicate]
1,389,293,469,000
I tested this on different GNU/Linux installations:

perl -e 'while(1){open($a{$b++}, "<" ,"/dev/null") or die $b;print " $b"}'

System A and D

The first limit I hit is 1024. It is easily raised by putting this into /etc/security/limits.conf:

* hard nofile 1048576

and then running:

ulimit -n 1048576
echo 99999999 | sudo tee /proc/sys/fs/file-max

Now the test goes to 1048576. However, it seems I cannot raise it above 1048576. If I put 1048577 in limits.conf it is simply ignored. What is causing that?

System B

On system B I cannot even get to 1048576:

echo 99999999 | sudo tee /proc/sys/fs/file-max

/etc/security/limits.conf:

* hard nofile 1048576

Here I get:

$ ulimit -n 65537
bash: ulimit: open files: cannot modify limit: Operation not permitted
$ ulimit -n 65536   # OK

Where did that limit come from?

System C

This system also has the 1048576 limit in limits.conf and 99999999 in /proc/sys/fs/file-max. But here the limit is 4096:

$ ulimit -n 4097
-bash: ulimit: open files: cannot modify limit: Operation not permitted
$ ulimit -n 4096    # OK

How do I raise that to (at least) 1048576?

(Note to self: Don't do: echo 18446744073709551616 | sudo tee /proc/sys/fs/file-max)
Check that /etc/ssh/sshd_config contains:

UsePAM=yes

and that /etc/pam.d/sshd contains:

session required pam_limits.so

In the comment below @venimus states the 1M limit is hardcoded: the kernel 2.6.x source has ./fs/file.c:30: int sysctl_nr_open __read_mostly = 1024*1024; which is 1048576. The 1048576 is per process, so by having multiple processes this limit can be overcome.
Fixing ulimit: open files: cannot modify limit: Operation not permitted
1,389,293,469,000
I want to update my Linux system in one shell, but by default wget or axel in the updater use all the bandwidth. How can I limit the speed in this shell? I want other shells to have a fair share, and to limit everything in that shell – something like a proxy! I use Zsh and Arch Linux. This question focuses on process-wide or session-wide solutions. See How to limit network bandwidth? for system-wide or container-wide solutions on Linux.
Have a look at trickle, a userspace bandwidth shaper. Just start your shell with trickle and specify the speed, e.g.:

trickle -d 100 zsh

which tries to limit the download speed to 100 KB/s for all programs launched inside this shell. As trickle uses LD_PRELOAD, this won't work with statically linked programs, but that isn't a problem for most programs.
Limiting a specific shell's internet bandwidth usage
1,389,293,469,000
We have a regular job that does du summaries of a number of subdirectories, picking out worst offenders, and we use the output to spot things that are rising rapidly and flag potential problems. We use diff against snapshots to compare them. There is a top-level directory with a number (a few hundred) of subdirectories, each of which may contain tens of thousands of files (or more). A "du -s" in this context can be very IO-aggressive, causing our server to drop its cache, followed by massive IO spikes, which are a very unwelcome side effect. What strategy can be used to get the same data without the unwanted side effects?
Take a look at ionice. From man ionice:

This program sets or gets the io scheduling class and priority for a program. If no arguments or just -p is given, ionice will query the current io scheduling class and priority for that process.

To run du with the "idle" I/O class, which is the lowest priority available, you can do something like this:

ionice -c 3 du -s

This should stop du from interfering with other processes' I/O. You might also want to consider renicing the program to lower its CPU priority, like so:

renice -n 19 "$duPid"

You can also do both at initialisation time:

nice -n 19 ionice -c 3 du
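Both knobs can be tried without root, since lowering priority is always allowed to an unprivileged user. A small check — `nice` with no arguments prints the niceness the current process inherited:

```shell
# Reducing CPU priority never requires privileges:
nice -n 19 nice                  # prints the inherited niceness (19 from a 0 base)
# Combine with the idle I/O class where ionice exists (util-linux):
command -v ionice >/dev/null && nice -n 19 ionice -c 3 du -sk /tmp
```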
Can the "du" program be made less aggressive?
1,389,293,469,000
In our cluster, we are restricting our processes resources, e.g. memory (memory.limit_in_bytes). I think, in the end, this is also handled via the OOM killer in the Linux kernel (looks like it by reading the source code). Is there any way to get a signal before my process is being killed? (Just like the -notify option for SGE's qsub, which will send SIGUSR1 before the process is killed.) I read about /dev/mem_notify here but I don't have it - is there something else nowadays? I also read this which seems somewhat relevant. I want to be able to at least dump a small stack trace and maybe some other useful debug info - but maybe I can even recover by freeing some memory. One workaround I'm currently using is this small script which frequently checks if I'm close (95%) to the limit and if so, it sends the process a SIGUSR1. In Bash, I'm starting this script in background (cgroup-mem-limit-watcher.py &) so that it watches for other procs in the same cgroup and it quits automatically when the parent Bash process dies.
It's possible to register for a notification for when a cgroup's memory usage goes above a threshold. In principle, setting the threshold at a suitable point below the actual limit would let you send a signal or take other action. See: https://www.kernel.org/doc/Documentation/cgroup-v1/memory.txt
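The notification API itself is an eventfd/C-level interface, but the spirit of the questioner's own watcher script — poll usage against 95% of the limit, then signal — fits in a few lines of shell. This is only a sketch; the file paths shown in the comment are the cgroup-v1 names (under v2 they are memory.current and memory.max), and the 95% threshold is arbitrary:

```shell
# Sketch: send SIGUSR1 to a process when a cgroup's usage crosses 95%
# of its limit. The file paths are parameters, so it works for the v1
# layout (usage_in_bytes / limit_in_bytes) or v2 (current / max).
check_threshold() {
    usage_file=$1 limit_file=$2 target_pid=$3
    usage=$(cat "$usage_file")
    limit=$(cat "$limit_file")
    if [ "$usage" -ge $(( limit * 95 / 100 )) ]; then
        kill -USR1 "$target_pid"
    fi
}
# e.g. run in a loop:
#   check_threshold /sys/fs/cgroup/memory/myGroup/memory.usage_in_bytes \
#                   /sys/fs/cgroup/memory/myGroup/memory.limit_in_bytes "$pid"
```

The real eventfd mechanism is better because the kernel wakes you exactly when the threshold is crossed, instead of you burning cycles polling.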
receive signal before process is being killed by OOM killer / cgroups
1,389,293,469,000
Is there a (technical or practical) limit to how large you can configure the maximum number of open files in Linux? Are there some adverse effects if you configure it to a very large number (say 1-100M)? I'm thinking server usage here, not embedded systems. Programs using huge amounts of open files can of course eat memory and be slow, but I'm interested in adverse effects if the limit is configured much larger than necessary (e.g. memory consumed by just the configuration).
I suspect the main reason for the limit is to avoid excess memory consumption (each open file descriptor uses kernel memory). It also serves as a safeguard against buggy applications leaking file descriptors and consuming system resources. But given how absurdly much RAM modern systems have compared to systems 10 years ago, I think the defaults today are quite low. In 2011 the default hard limit for file descriptors on Linux was increased from 1024 to 4096. Some software (e.g. MongoDB) uses many more file descriptors than the default limit. The MongoDB folks recommend raising this limit to 64,000. I've used an rlimit_nofile of 300,000 for certain applications. As long as you keep the soft limit at the default (1024), it's probably fairly safe to increase the hard limit. Programs have to call setrlimit() in order to raise their limit above the soft limit, and are still capped by the hard limit. See also some related questions: https://serverfault.com/questions/356962/where-are-the-default-ulimit-values-set-linux-centos https://serverfault.com/questions/773609/how-do-ulimit-settings-impact-linux
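On Linux you can see the actual per-process cost directly: every open descriptor appears as an entry under /proc/&lt;pid&gt;/fd, so comparing that count against the soft limit shows how much headroom a process really has (Linux-only paths, shown for the current shell):

```shell
# Descriptors currently open in this shell, and its soft limit:
ls /proc/$$/fd | wc -l           # at least 3: stdin, stdout, stderr
ulimit -Sn
```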
Largest allowed maximum number of open files in Linux
1,389,293,469,000
I'm trying to increase the maximum number of open files for the current user > ulimit -n 1024 I attempt to increase and fail as follows > ulimit -n 4096 bash: ulimit: open files: cannot modify limit: Operation not permitted So I do the natural thing and try to run with temp permission, but fail > sudo ulimit -n 4096 sudo: ulimit: command not found Questions How to increase ulimit? Why is this happening? Using Fedora 14
ulimit is a shell built-in, not an external command. It needs to be built in because it acts on the shell process itself, like cd: the limits, like the current directory, are a property of that particular process. sudo bash -c 'ulimit -n 4096' would work, but it would change the limit for the bash process invoked by sudo only, which would not help you. There are two values for each limit: the hard limit and the soft limit. Only root can raise the hard limit; anyone can lower the hard limit, and the soft limit can be modified in either direction with the only constraint that it cannot be higher than the hard limit. The soft limit is the actual value that matters. Therefore you need to arrange for all your processes to have a hard limit for open files of at least 4096. You can keep the soft limit at 1024. Before launching the process that requires a lot of files, raise the soft limit. In /etc/security/limits.conf, add the lines:

paislee hard nofile 4096
paislee soft nofile 1024

where paislee is the name of the user you want to run your process as. In the shell that launches the process for which you want a higher limit, run

ulimit -Sn unlimited

to raise the soft limit to the hard limit.
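The soft/hard distinction is easy to see in action without touching limits.conf, because a shell may always lower its own limits or move the soft limit around beneath the hard one — and nobody, not even root, can set soft above hard. Played out in a throwaway subshell so the real shell keeps its limits:

```shell
(
    echo "hard=$(ulimit -Hn) soft=$(ulimit -Sn)"
    ulimit -Sn 256          # lowering the soft limit: always allowed
    echo "soft now $(ulimit -Sn)"
    ulimit -Hn 512          # lowering the hard limit: allowed, but one-way
    ulimit -Sn 1024 2>/dev/null \
        || echo "cannot raise soft above the new hard limit"
)
```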
ulimit: "Operation not permitted" and "command not found"
1,389,293,469,000
I am using convert to create a PDF file from about 2,000 images:

convert 0001.miff 0002.miff ... 2000.miff -compress jpeg -quality 80 out.pdf

The process terminates reproducibly when the output file reaches 2^31−1 bytes (2 GB − 1) with the message convert: unknown `out.pdf'. The PDF file specification allows for ≈10 GB. I tried to pull more information from -debug all, but I didn't see anything helpful in the logging output. The file system is ext3, which allows files at least up to 16 GiB (maybe more). As to ulimit, file size is unlimited. /etc/security/limits.conf only contains commented-out lines. What else can cause this, and how can I increase the limit? ImageMagick version: 6.4.3 2016-08-05 Q16 OpenMP. Distribution: SLES 11.4 (i586)
Your limitation does not stem from the filesystem, or from package versions; it comes from your using a 32-bit version of your OS. The way past this limit is to install a 64-bit version, if the hardware supports it. See Large file support:

Traditionally, many operating systems and their underlying file system implementations used 32-bit integers to represent file sizes and positions. Consequently, no file could be larger than 2^32 − 1 bytes (4 GB − 1). In many implementations, the problem was exacerbated by treating the sizes as signed numbers, which further lowered the limit to 2^31 − 1 bytes (2 GB − 1).
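If you want to confirm that the 2 GB wall is in the process rather than the filesystem, a sparse file makes a cheap probe: it occupies almost no disk but has a logical size past 2^31−1. (This sketch uses GNU dd's count=0/seek idiom and mktemp; adjust for other toolchains.)

```shell
# Create a 3 GiB sparse file: instant, near-zero actual disk usage.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1 count=0 seek=3G 2>/dev/null
size=$(wc -c < "$f")
echo "logical size: $size bytes"          # 3221225472 = 3 * 2^30
rm -f "$f"
```

If this succeeds, the filesystem handles large files fine and the limit is in the (32-bit) program writing the file.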
Get over 2 GB limit creating PDFs with ImageMagick
1,389,293,469,000
According to Wikipedia, ZFS has the following limits:

Max. volume size: 256 trillion yobibytes (2^128 bytes)
Max. file size: 16 exbibytes (2^64 bytes)
Max. number of files: per directory: 2^48; per file system: unlimited
Max. filename length: 255 ASCII characters (fewer for multibyte character encodings such as Unicode)

Why does it have these limits? What internally limits these things? Why couldn't ZFS have a theoretically unlimited volume size, or filename length, and so on?
What internally limits these things?

Long answer

ZFS's limits are based on fixed-size integers because that's the fastest way to do arithmetic in a computer. The alternative is called arbitrary-precision arithmetic, but it's inherently slow. This is why arbitrary-precision arithmetic is an add-on library in most programming languages, not the default way of doing arithmetic. There are exceptions, but these are usually mathematics-oriented DSLs like bc or Wolfram Language. If you want fast arithmetic, you use fixed-size words, period. The speed hit from arbitrary-precision arithmetic is bad enough inside a computer's RAM, but when a filesystem doesn't know how many reads it needs to make in order to load all of the numbers it needs into RAM, that would be very costly. A filesystem based on arbitrary-sized integers would have to piece each number together from multiple blocks, requiring a lot of extra I/O from multiple disk hits relative to a filesystem that knows up front how big its metadata blocks are. Now let's discuss the practical import of each of those limits:

Max. volume size

2^128 bytes is effectively infinite already. We can write that number instead as roughly 10^38 bytes, which means in order to hit that limit, you'd have to have a single Earth-sized ZFS pool where every one of its 10^50 atoms is used to store data, and each byte is stored by an element no larger than 10^12 atoms. 10^12 atoms sounds like a lot, but it's only about 47 picograms of silicon. The density of data in grams is 2.5×10^-13 g/byte for microSD storage, as of this writing: the largest available SD card is 1 TB, and it weighs about 0.25 g.¹ A microSD card isn't made of pure silicon, but you can't ignore the packaging, because we'll need some of that in our Earth-computer, too; we'll assume that the low density of the plastic and the higher density of the metal pins average out to about the same density as silicon. We also need some slop here to account for inter-chip interconnects, etc.
A pico-anything is 10^-12, so our 47 pg and 2.5×10^-13 g/B numbers above are about an order of magnitude apart. That means that to a first approximation, to construct a single maximally-sized ZFS pool out of the current largest-available microSD cards, you might have to use an entire Earth-sized planet's worth of atoms, and then only if you start off with something close to the right mix of silicon, carbon, gold, etc. such that you don't end up with so much slag that you blow the estimate. If you think it's unfair that I'm using flash storage here instead of something denser like tape or disk, consider the data rates involved, as well as the fact that we haven't even tried to consider redundancy or device replacement. We have to assume that this Earth-sized ZFS pool will be composed of vdevs that never need replacing, and that they can transfer data fast enough that you can fill the pool in a reasonable time. Only solid-state storage makes sense here. The approximation above is quite rough, and storage densities continue to climb, but keep things in perspective: in the future, to pull off this stunt of constructing maximally-sized ZFS pools, we'll still need to use the total crust-to-core resources of small planets.

Max. file size

So we've got a filesystem the size of a planet now. What can we say about the size of the files stored within it? Let's give every person on the planet their own equally-sized slice of that pool:

10^38 ÷ 10^10 ≈ 10^28, and 10^28 ÷ 10^19 ≈ 10^9

That's the size of the pool divided by the population of Earth² divided by the maximum file size, in round numbers. In other words, every person can store about a billion maximally-sized files in their tiny personal slice of our Earth-sized ZFS storage array. (If it's bothering you that our storage array is still the size of a planet here in this example, remember that it had to be that big in order to hit the first limit above, so it is fair to continue to use it for this example here.)
That per-file maximum file size is 16 EiB under ZFS, which is 16× larger than the maximum volume size of ext4, which is considered ridiculously large today in its own right. Imagine someone using their slice of Planet ZFS (formerly known as Earth) to store backups of maximally-sized ext4 disk images. Further, this demented customer (there's always one) has decided to tar them up, 16 per file, just to hit the ZFS maximum file size limit. Having done so, that customer will still have room to do that again about a billion more times. If you're going to worry about this limit, that's the sort of problem you have to imagine needing to solve. And that's without even getting into the data bandwidth required to transfer that file to the online backup service once. Let's also be clear about how improbable that Earth-computer is. First you'd have to figure out how to construct it without allowing it to collapse in on itself under the force of gravity and become molten at the center. Then you'd have to figure out how to manufacture it using every single atom on Earth without any leftover slag. Now, since you've turned the surface of the Earth-computer into a hellscape, all the people trying to make use of that computer would have to live somewhere else, a place where you'd frequently hear people cursing the speed-of-light delays that add latency to every transaction between the Earth-computer and wherever they live now. If you think your ~10 ms Internet ping time is a problem today, imagine putting 2.6 light-seconds between your keyboard and the computer if we move the population of Earth to the moon so we can make this Earth-computer. ZFS's volume and file size limitations are science fiction big.

Max. number of files per directory

2^48 is roughly 10^14 files per directory, which is only going to be a problem for applications that try to treat ZFS as a flat filesystem. Imagine an Internet researcher who is storing files about each IP address on the Internet.
Let's say there are exactly 2^32 IPs being tracked after first subtracting the slack spaces in the old IPv4 space and then adding in the hosts now using IPv6 addresses to make the arithmetic come out nice. What problem is this researcher trying to tackle which requires him to construct a filing system that can store more than 2^16 — 65536! — files per IP? Let's say this researcher is storing files per TCP port as well, so that with just one file per IP:port combination, we've eaten up our 2^16 multiplier. The fix is simple: store the per-IP files in a subdirectory named after the IP, and store the per-port files in a subdirectory of the directory holding the per-IP files. Now our researcher can store 10^14 files per IP:port combination, sufficient for a long-term global Internet monitoring system. ZFS's directory size limit isn't what I'd call "science fiction big," as we know of real applications today that can hit this limit, but the power of hierarchy means you can just add another directory layer if you run up against the limit. This limit is probably set as low as this purely to avoid making the data structures needed to find files in a given directory too big to fit into RAM. It encourages you to organize your data hierarchically to avoid this problem in the first place.

Max. filename length

While this one limit does seem stringent, it actually makes sense. This limit doesn't originate with ZFS. I believe it dates back to FFS in 4.2BSD. I can't find the quote, but when this limit was young, someone pointed out that this is enough space for "a short letter to grandma." So, that begs the question: why do you need to name your files more descriptively than that? Any true need greater than that probably calls for hierarchy, at which point you multiply the limit by the number of levels in the hierarchy, plus one. That is, if the file is buried 3 levels deep in the hierarchy, the limit on the name of the full path is 4 × 255 = 1020 characters.
Ultimately, this limit is a human limit, not a technological limit. File names are for the human's use, and humans really don't need more than 255 characters to usefully describe the content of a file. A higher limit simply wouldn't be helpful. The limitation is old (1983) because humans haven't acquired the ability to cope with longer file names since then. If you're asking where the odd-looking "255" value comes from, it's a limitation based on the size of an 8-bit byte. 2^8 is 256, and the N−1 value used here probably means they're using a null terminator to mark the end of the file name string in a 256-byte field in the per-file metadata.

Short answer

Practically speaking, what limits?

Footnotes:

I measured this using a scale specified with an accuracy of 0.01 g.
7.55 billion, as of this writing. Above, we're rounding this off to 10^10, which we should hit by mid-century.
What is the sense behind ZFS's limits?
1,389,293,469,000
I want to backup 1 terabyte of data to an external disk. I am using this command:

tar cf /media/MYDISK/backup.tar mydata

PROBLEM: My poor laptop freezes and crashes whenever I use 100% CPU or 100% disk (if you want to react about this please write here). So I want to stay at around 50% CPU and 50% disk max. My question: how to throttle CPU and disk with the tar command? rsync has a --bwlimit option, but I want an archive because 1) there are many small files 2) I prefer to manage a single file rather than a tree. That's why I use tar.
You can use pv to throttle the bandwidth of a pipe. Since your use case is strongly IO-bound, the added CPU overhead of going through a pipe shouldn't be noticeable, and you don't need to do any CPU throttling.

tar cf - mydata | pv -L 1m >/media/MYDISK/backup.tar
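You can convince yourself that pv's limiter behaves (and calibrate a rate for your laptop) with a small timed dry run. This sketch assumes GNU date and a pv new enough for -q/-L, and simply skips itself where pv isn't installed:

```shell
# Push 30 KiB through a 10 KiB/s limiter: should take about 3 seconds.
if command -v pv >/dev/null; then
    start=$(date +%s)
    dd if=/dev/zero bs=1k count=30 2>/dev/null | pv -q -L 10k >/dev/null
    elapsed=$(( $(date +%s) - start ))
    echo "30 KiB at 10 KiB/s took ${elapsed}s"
fi
```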
Preventing tar from using too much CPU and disk (old laptop crashes if 100%)
1,389,293,469,000
We have a script which runs on our web servers, triggered by customer action, which initiates a unix process to generate some cache files. Because this process acts upon files supplied by our customer, it sometimes misbehaves, running so long that the PHP process which spawns it times out or using so much CPU time that a sysadmin will kill it. Is there any command which I could run which would limit the CPU time / runtime of the process? I am looking for a command like /usr/bin/time, where I could run that command and pass it the commandline I want it to run and limit.
In addition to Gilles' answer, there is the cpulimit tool, which does exactly what you want – including modifying the limit at runtime. Additionally it can restrict a process to only certain CPUs/cores, IIRC.
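For the CPU-time half of the question there is also the classic resource limit, RLIMIT_CPU via ulimit -t. It charges actual CPU seconds (not wall-clock time), and the kernel kills the process with SIGXCPU/SIGKILL when the budget runs out — no watcher needed. A subshell keeps the limit contained:

```shell
# Cap CPU time at 1 second; the busy loop is then killed by the kernel.
( ulimit -t 1; sh -c 'while :; do :; done' ) 2>/dev/null
st=$?
echo "spinner stopped with status $st"    # non-zero: killed on limit
```

Note this caps total CPU time consumed, not a percentage of CPU; cpulimit is the tool for the latter.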
Can I limit a process to a certain amount of time / CPU cycles?
1,389,293,469,000
I used to work with an HP-UX system and the old admin told me there is an upper limit on the number of zombie processes you can have on the system, I believe 1024. Is this a hard fact ceiling? I think you could have any number of zombies just as if you can have any number of processes...? Is it different value from distro to distro? What occurs if we hit the upper limit and try to create another zombie?
I don't have HP-UX available to me, and I've never been a big HP-UX fan. It appears that on Linux, a per-process or maybe per-user limit exists on how many child processes you can create. You can see it with the limit Zsh built-in (seems to be analogous to ulimit -u in bash):

1002 % limit
cputime         unlimited
filesize        unlimited
datasize        unlimited
stacksize       8MB
coredumpsize    0kB
memoryuse       unlimited
maxproc         16136
...

That's on an Arch Linux laptop. I wrote a little program to test that limit:

#include <stdio.h>
#include <signal.h>
#include <unistd.h>
#include <errno.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>

volatile int sigchld_cnt = 0;

void sigchld_hdlr(int signo)
{
    ++sigchld_cnt;
}

int main(int ac, char **av)
{
    int looping = 1;
    int child_cnt = 0;
    int status;

    signal(SIGCHLD, sigchld_hdlr);
    printf("Parent PID %d\n", getpid());

    while (looping) {
        switch (fork()) {
        case 0:
            _exit(0);
            break;
        case -1:
            fprintf(stderr, "Problem with fork(), %d children: %s\n",
                child_cnt, strerror(errno));
            looping = 0;
            break;
        default:
            ++child_cnt;
            break;
        }
    }
    fprintf(stderr, "Sleeping, forked %d child processes\n", child_cnt);
    fprintf(stderr, "Received %d sigchild\n", sigchld_cnt);
    sleep(10);
    looping = 1;
    do {
        int x = wait(&status);
        if (x != -1)
            --child_cnt;
        else if (errno != EINTR) {
            fprintf(stderr, "wait() problem %d children left: %s\n",
                child_cnt, strerror(errno));
            looping = 0;
        }
    } while (looping);
    printf("%d children left, %d SIGCHLD\n", child_cnt, sigchld_cnt);
    return 0;
}

It was surprisingly difficult to "collect" all the zombies by calling wait(2) enough times. Also, the number of SIGCHLD signals received is never the same as the number of child processes forked: I believe the Linux kernel sometimes sends 1 SIGCHLD for a number of exited child processes. Anyway, on my Arch Linux laptop, I get 16088 child processes forked, and that has to be the number of zombies, as the program doesn't do wait(2) system calls in the signal handler.
On my Slackware 12 server, I get 6076 child processes, which closely matches the value of maxproc 6079. My user ID has 2 other processes running, sshd and Zsh. Along with the first, non-zombie instance of the program above that makes 6079. The fork(2) system call fails with a "Resource temporarily unavailable" error. I don't see any other evidence of what resource is unavailable. I do get somewhat different numbers if I run my program simultaneously in 2 different xterms, but they add up to the same number as if I run it in one xterm. I assume it's process table entries, or swap or some system-wide resource, and not just an arbitrary limit. I don't have anything else running to try it on right now.
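As an aside, on Linux you can watch a single zombie get created without any C at all. The trick is to make the parent a process that never calls wait(2): fork a child that exits immediately, then exec over the shell so nothing is left to reap it. (A sketch; ps --ppid is procps/Linux syntax.)

```shell
# Parent execs into sleep (which never wait()s), so its exited child
# stays a zombie -- state "Z" -- until the parent dies and init reaps it.
sh -c 'sleep 0 & exec sleep 3' &
parent=$!
sleep 1                                   # let the child exit
zstat=$(ps -o stat= --ppid "$parent")
echo "child state: $zstat"                # contains "Z" (defunct)
wait "$parent"
```

Once the parent's sleep 3 finishes, the zombie is reparented to init and reaped, which is why zombies from short-lived parents never pile up.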
Is there an upper limit to the number of zombie processes you can have?
1,389,293,469,000
I want a user to run a specific process on the system with a negative nice value. I can't simply fork the process to the background, as this specific program is a Minecraft server and I rely on the command line to control the server. My current bash script looks like this (the important part):

sleep 10 && \
sudo renice -n $NICENESS $(ps -u $(id -u) -o "%p:%c" | sed -n "s/:java$//p") & \
java -Xmx8G -Xms2G -jar minecraft_server.jar nogui

sleep simply delays execution of renice. renice itself uses ps to check for a java process using the user's own ID. There might be other instances of java spawning under different users, but the minecraft server runs under its own user, minecraft. I obviously don't want to enter a password every time I start the server. From /etc/sudoers:

minecraft ALL = NOPASSWD: /etc/renice

Is there a more elegant way to do this? Simply using nice is not an option; sudo nice bash in combination with the NOPASSWD: option would be a great security issue.
The pam_limits.so module can help you there. It allows you to set certain limits on specific individual users and groups, or wildcards or ranges of users and groups. The limits you can set are typically ulimit settings, but also the number of concurrent login sessions, processes, CPU time, default priority and maximum priority (renice). Check the limits.conf man page for more. For example, you can configure your minecraft group to have all their processes started with an increased default priority, and you can allow them to use the nice and renice commands to raise the priority of their important jobs manually as well, instead of only being able to reduce priority.

# /etc/security/limits.conf
# increase default and max prio for members of the minecraft group
@minecraft hard priority -10
@minecraft hard nice -18
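Whether such a policy has taken effect is visible from a login shell: the nice limit set via limits.conf is RLIMIT_NICE, which bash shows as "scheduling priority" under ulimit -e, and nice with no arguments reports the niceness you are actually running at. Lowering priority, as opposed to raising it, never needs any of this machinery:

```shell
ulimit -e            # RLIMIT_NICE, bash's "scheduling priority" (may be absent in other shells)
nice                 # the niceness this shell is running at
nice -n 5 nice       # dropping priority is always permitted, even unprivileged
```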
How can I allow a user to prioritize a process to negative niceness?
1,389,293,469,000
Occasionally some processes on my GNU/Linux desktop (such as gv and gnash) use up the physical memory and cause thrashing. Since these processes aren't important, I want them to be automatically killed if they use too much memory. I think the /etc/security/limits.conf file and the -v option could be used for this. The question is whether it limits the amount of available memory per process of a particular user, or the sum for all the processes of a user. Also I would like to ask how to make change to that file in effect without rebooting.
There's also the ulimit mechanism. There's a system call (in Linux, it's a C library function) ulimit(3), and a Bash builtin ulimit. Type ulimit -a to see all the things you can limit. To see the current virtual memory limit, say ulimit -v. You can set it by saying ulimit -v INTEGER-KILOBYTES. Running ulimit changes things for your current shell, and you can only select a value smaller than the current one. To run a command with limited virtual memory, you can just use a Bash sub-shell:

( ulimit -v 131072; some-app )
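You can watch the mechanism work with a stand-in for a memory-hungry program. dd mallocs its block-size buffer up front, so (as a sketch of the failure mode, not an endorsement of huge block sizes) a 200 MB buffer under a 64 MB virtual-memory cap fails cleanly instead of thrashing:

```shell
# With virtual memory capped at 64 MB, dd's 200 MB buffer allocation
# is refused (ENOMEM) rather than driving the machine into swap.
( ulimit -v 65536; dd if=/dev/zero of=/dev/null bs=200M count=1 ) 2>/dev/null
st=$?
echo "big allocation: status $st"     # non-zero: malloc refused
( ulimit -v 65536; dd if=/dev/zero of=/dev/null bs=1M count=1 ) 2>/dev/null
ok=$?
echo "small allocation: status $ok"   # fits under the limit
```

Note ulimit -v caps address space, so memory-mapped files and shared libraries count toward it too — it is a blunt instrument compared to cgroup RSS limits.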
How to limit available virtual memory per process [duplicate]
1,389,293,469,000
Could someone tell me how to set the default nice value (as displayed by top) of a user? I have found that /etc/security/limits.conf is the place, but if I put either:

username_of_a_guy - nice 19
username_of_a_guy soft nice 19
username_of_a_guy hard nice 19

it doesn't work (while it should, right?). Note that I've rebooted since then. Thank you very much in advance for any help. I'm using Debian unstable (up to date). Context: at my work, we have a local network: everyone has their own computer and everyone can create an account on someone else's machine if they like. The rule of thumb is simply that if you work on someone else's computer, please nice your processes (nice 19). I would like to set the default nice value for a given user to 19 once and for all.
I believe the correct format is:

@users   - priority 10
username - priority 19

This is an example of the settings I am using in production (obviously with real users/groups). The nice setting determines the minimum nice value (i.e. maximum priority) someone can set their process to, not their default priority.
Set default nice value for a given user (limits.conf)
1,389,293,469,000
The default PID max number is 32768. To get this information, type:

cat /proc/sys/kernel/pid_max
32768

or

sysctl kernel.pid_max
kernel.pid_max = 32768

Now, I want to change this number... but I can't. Well, actually I can change it to a lower value, or the same one. For example:

linux-6eea:~ # sysctl -w kernel.pid_max=32768
kernel.pid_max = 32768

But I can't set a value greater than 32768. For example:

linux-6eea:~ # sysctl -w kernel.pid_max=32769
error: "Invalid argument" setting key "kernel.pid_max"

Any ideas? PS: My kernel is Linux linux-6eea 3.0.101-0.35-pae #1 SMP Wed Jul 9 11:43:04 UTC 2014 (c36987d) i686 i686 i386 GNU/Linux
The value can only be extended up to a theoretical maximum of 32768 for 32-bit systems or 4194304 for 64-bit. From man 5 proc:

/proc/sys/kernel/pid_max — This file (new in Linux 2.5) specifies the value at which PIDs wrap around (i.e., the value in this file is one greater than the maximum PID). The default value for this file, 32768, results in the same range of PIDs as on earlier kernels. On 32-bit platforms, 32768 is the maximum value for pid_max. On 64-bit systems, pid_max can be set to any value up to 2^22 (PID_MAX_LIMIT, approximately 4 million).
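You can check both the current ceiling and which cap applies to your machine without root (Linux paths):

```shell
# Current pid_max and the architecture word size that bounds it:
cat /proc/sys/kernel/pid_max
getconf LONG_BIT        # 32 -> cap is 32768, 64 -> cap is 2^22 = 4194304
```

The questioner's i686 (pae) kernel is 32-bit, which is exactly why 32769 was rejected.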
How to change the kernel max PID number? [duplicate]
1,389,293,469,000
getrlimit(2) has the following definition in the man pages: RLIMIT_AS The maximum size of the process's virtual memory (address space) in bytes. This limit affects calls to brk(2), mmap(2) and mremap(2), which fail with the error ENOMEM upon exceeding this limit. Also automatic stack expansion will fail (and generate a SIGSEGV that kills the process if no alternate stack has been made available via sigaltstack(2)). Since the value is a long, on machines with a 32-bit long either this limit is at most 2 GiB, or this resource is unlimited. What is meant by "automatic stack expansion" here? Does the stack in a Linux/UNIX environment grow as needed? If yes, what's the exact mechanism?
Yes, stacks grow dynamically. The stack is at the top of memory, growing downwards towards the heap. -------------- | Stack | -------------- | Free memory| -------------- | Heap | -------------- . . The heap grows upwards (whenever you do malloc) and the stack grows downwards as new functions are called. The heap is present just above the BSS section of the program, which means the size of your program and the way it allocates memory in the heap also affect the maximum stack size for that process. Usually the stack size is unlimited (until the heap and stack areas meet and/or overwrite each other, which gives a stack overflow and SIGSEGV :-) This is only for user processes; the kernel stack is always fixed (usually 8KB)
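To see how far automatic expansion may go for your own processes, you can inspect the stack resource limit from the shell; a quick sketch (the numbers printed vary per system):

```shell
# Soft limit in KiB: growing the stack past this raises SIGSEGV
ulimit -s
# Hard limit: the ceiling a non-root user may raise the soft limit to
ulimit -Hs
```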
What is "automatic stack expansion"?
1,389,293,469,000
I know Linux supports multiple users being logged in at the same time. But what's the maximum number of users that can be logged into Linux at the same time? I see there are 69 tty files (ttyn or ttysn, where n is an integer, such as tty0, tty1, tty2... ) in my /dev directory. I assume that these files are the shells. So I am thinking that this Linux system will support only 69 users logged in simultaneously. Is my thinking correct? If my assumption is wrong, please explain the user limit of Linux, including how it's implemented. Also, how do I access the details of already logged in users? I know the commands w and who, but I am looking for sophisticated tools.
When logging in using SSH, you use a pseudo-terminal (a pty) allocated to the SSH daemon, not a real one (a tty). Pseudo-terminals are created and destroyed as needed. You can find the number of ptys allowed to be allocated at one time at /proc/sys/kernel/pty/max, and this value can be modified using the kernel.pty.max sysctl variable. Assuming that no other ptys are in use, that would be your limit. w, who, and users are the canonical tools for accessing information about logged in users. last and lastlog also contain historical data.
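A minimal sketch of checking and raising the pty ceiling (reading kernel.pty.nr for the in-use count is an assumption about your kernel exposing it, though it is standard on Linux):

```shell
# Ceiling on simultaneously allocated ptys
cat /proc/sys/kernel/pty/max
# Ptys currently allocated
cat /proc/sys/kernel/pty/nr
# Raise the ceiling (root required)
sudo sysctl -w kernel.pty.max=8192
```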
How many users does Linux support being logged in at the same time via SSH?
1,389,293,469,000
In a python script, I am creating a bunch of symbolic links chained together. example: link1->link2->link3->.......->somefile.txt I was wondering how you can change the max number of symlinks to be greater than 20?
On Linux (3.5 at least), it's hardcoded to 40 (see follow_link() in fs/namei.c), and note that it's the number of links followed when resolving all the components of a path, you can only change it by recompiling the kernel. $ ln -s . 0 $ n=0; repeat 50 ln -s $((n++)) $n $ ls -LdF 39 39/ $ ls -LdF 40 ls: cannot access 40: Too many levels of symbolic links $ ls -LdF 20/18 10/10/10/6 10/10/10/6/ 20/18/ $ ls -LdF 20/19 10/10/10/7 ls: cannot access 20/19: Too many levels of symbolic links ls: cannot access 10/10/10/7: Too many levels of symbolic links
How do you increase MAXSYMLINKS
1,389,293,469,000
I am using Fedora 17 and over the last few days I am having an issue with my system. Whenever I try to start httpd it shows me: Error: No space left on device When I execute systemctl status httpd.service, I receive the following output: httpd.service - The Apache HTTP Server (prefork MPM) Loaded: loaded (/usr/lib/systemd/system/httpd.service; disabled) Active: inactive (dead) since Tue, 19 Feb 2013 11:18:57 +0530; 2s ago Process: 4563 ExecStart=/usr/sbin/httpd $OPTIONS -k start (code=exited, status=0/SUCCESS) CGroup: name=systemd:/system/httpd.service I tried to Google this error and all links point to clearing the semaphores. I don't think this is the issue as I tried to clear the semaphores but that didn't work. Edit 1 here is the output of df -h [root@localhost ~]# df -h Filesystem Size Used Avail Use% Mounted on rootfs 50G 16G 32G 34% / devtmpfs 910M 0 910M 0% /dev tmpfs 920M 136K 920M 1% /dev/shm tmpfs 920M 1.2M 919M 1% /run /dev/mapper/vg-lv_root 50G 16G 32G 34% / tmpfs 920M 0 920M 0% /sys/fs/cgroup tmpfs 920M 0 920M 0% /media /dev/sda1 497M 59M 424M 13% /boot /dev/mapper/vg-lv_home 412G 6.3G 385G 2% /home Here is the detail of the httpd error log [root@localhost ~]# tail -f /var/log/httpd/error_log [Tue Feb 19 11:45:53 2013] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec) [Tue Feb 19 11:45:53 2013] [notice] Digest: generating secret for digest authentication ... [Tue Feb 19 11:45:53 2013] [notice] Digest: done [Tue Feb 19 11:45:54 2013] [notice] Apache/2.2.23 (Unix) DAV/2 PHP/5.4.11 configured -- resuming normal operations [Tue Feb 19 11:47:23 2013] [notice] caught SIGTERM, shutting down [Tue Feb 19 11:48:00 2013] [notice] SELinux policy enabled; httpd running as context system_u:system_r:httpd_t:s0 [Tue Feb 19 11:48:00 2013] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec) [Tue Feb 19 11:48:00 2013] [notice] Digest: generating secret for digest authentication ... 
[Tue Feb 19 11:48:00 2013] [notice] Digest: done [Tue Feb 19 11:48:00 2013] [notice] Apache/2.2.23 (Unix) DAV/2 PHP/5.4.11 configured -- resuming normal operations tail: inotify resources exhausted tail: inotify cannot be used, reverting to polling Edit 2 here is the output of df -i [root@localhost ~]# df -i Filesystem Inodes IUsed IFree IUse% Mounted on rootfs 3276800 337174 2939626 11% / devtmpfs 232864 406 232458 1% /dev tmpfs 235306 3 235303 1% /dev/shm tmpfs 235306 438 234868 1% /run /dev/mapper/vg-lv_root 3276800 337174 2939626 11% / tmpfs 235306 12 235294 1% /sys/fs/cgroup tmpfs 235306 1 235305 1% /media /dev/sda1 128016 339 127677 1% /boot /dev/mapper/vg-lv_home 26984448 216 26984232 1% /home Thanks
Here we see evidence of a problem: tail: inotify resources exhausted By default, Linux only allocates 8192 watches for inotify, which is ridiculously low. And when it runs out, the error is also No space left on device, which may be confusing if you aren't explicitly looking for this issue. Raise this value with the appropriate sysctl: fs.inotify.max_user_watches = 262144 (Add this to /etc/sysctl.conf and then run sysctl -p.)
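Before (or after) raising the limit, it can help to confirm watch exhaustion is really the culprit; a rough sketch — counting watches relies on /proc/*/fdinfo being readable, so run it as root for a complete total:

```shell
# Current per-user watch limit
cat /proc/sys/fs/inotify/max_user_watches

# Approximate number of watches in use across all visible processes
grep -c '^inotify' /proc/*/fdinfo/* 2>/dev/null | awk -F: '{s += $2} END {print s}'

# Apply the raised limit immediately (root required)
sudo sysctl -w fs.inotify.max_user_watches=262144
```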
Httpd : no space left on device
1,389,293,469,000
The default open file limit per process is 1024 on - say - Linux. For certain daemons this is not enough. Thus, the question: How to change the open file limit for a specific user?
On Linux you can configure it via limits.conf, e.g. via # cd /etc/security # echo debian-transmission - nofile 8192 > limits.d/transmission.conf (which sets both the hard and soft limit for processes started under the user debian-transmission to 8192) You can verify the change via: # sudo -u debian-transmission bash -c "ulimit -a" [..] open files (-n) 8192 [..] If a daemon is already running, it has to be restarted such that the new limit is picked up. In case the daemon is manually started from a user session, the user has to re-login to get the new limit. Alternatively, you can also specify additional limits directly in /etc/security/limits.conf, of course - but I prefer the .d directory approach for better maintainability. For enforcing different soft/hard limits use two entries, e.g. debian-transmission soft nofile 4096 debian-transmission hard nofile 8192 (rationale behind this: the soft value is set after the user logs in, but a user's process is allowed to increase the limit up to the hard limit) The limits.conf/limits.d configuration is used by pam_limits.so, which is enabled by default on current Linux distributions. Related There is also a system-wide limit on Linux, /proc/sys/fs/file-max: This file defines a system-wide limit on the number of open files for all processes. For example the default on Ubuntu 10.04: $ cat /proc/sys/fs/file-max 786046 The pseudo file /proc/sys/fs/file-nr provides more information, e.g. $ cat /proc/sys/fs/file-nr 1408 0 786046 the number of allocated file handles (i.e., the number of files presently opened); the number of free file handles; and the maximum number of file handles Thus, on the one hand, you also may have to adjust the system-wide file-max limit, in case it is very small and/or the system is already very loaded. On the other hand, just increasing file-max is not sufficient, because it does not influence the soft/hard limits enforced by the pam_limits mechanism. 
To change file-max on the command line (no reboot necessary): # sysctl -w fs.file-max=786046 For permanent changes add fs.file-max=786046 to /etc/sysctl.conf or /etc/sysctl.d. The upper limit on fs.file-max is recorded in fs.nr_open. For example, (again) on Ubuntu 10.04: $ sysctl -n fs.nr_open 1048576 (which is 1024*1024) This sysctl is also configurable.
How to configure the process open file limit of a user?
1,389,293,469,000
In Linux, there is an open file limit. I can use ulimit -n to see the open file limit, which is 1024 by default. Then I can also see the per-process open file soft/hard limit by looking at /proc/$PID/limits. I see soft = 1024 and hard = 4096. I am wondering what the difference between these two outputs is? Also, do setrlimit() and getrlimit() apply system-wide or per process?
ulimit -n sets the soft limit by default; you can add the -H option to view/set the hard limit. For the most part, soft and hard limits behave like this: root's processes (actually, any process with CAP_SYS_RESOURCE) may raise or lower any limit on any process. any user's processes may lower any limit on other processes owned by that user. any user's processes may raise the soft limit up to the hard limit on processes owned by that user. If a process attempts to exceed its soft limit, the attempt will fail. So, hard limits function as a cap on soft limits (except for root, who as normal can do anything). There is an exception: A soft CPU limit sends a SIGXCPU signal. A process may choose to ignore that, or spend time doing cleanup, etc. Once the hard CPU limit is crossed, the kernel sends SIGKILL, which is not catchable, handleable, or ignorable. So in this case, the soft limit functions as a warning "you're out of CPU time, finish up and exit promptly, or else!" and the hard limit is the "or else." Most limits are per-process, but a few (such as RLIMIT_NPROC) are per user. The getrlimit(2) manual page specifies the scope for each limit.
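The cap relationship is easy to demonstrate from a bash shell (a sketch; the actual numbers depend on your configuration):

```shell
ulimit -Sn                   # soft limit: what currently applies
ulimit -Hn                   # hard limit: the cap on the soft limit
ulimit -Sn "$(ulimit -Hn)"   # a normal user may raise soft all the way up to hard
ulimit -Sn                   # now equals the hard limit
```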
Difference between ulimit -n and /proc/$PID/limits
1,389,293,469,000
Following ARG_MAX, maximum length of arguments for a new process it seems like ARG_MAX is wrongly (or at least ambiguously) defined on my Mac Mini 3,1 running Ubuntu 12.04: $ getconf ARG_MAX # arguments 2097152 $ locate limits.h | xargs grep -ho 'ARG_MAX[ \t]\+[0-9]\+' | uniq | cut -d ' ' -f 8 131072 The actual limit seems to be somewhere between these: $ cd "$(mktemp -d)" $ touch $(seq 1 131072) && find . -mindepth 1 -printf x | wc -c && rm * 131072 $ touch $(seq 1 131073) && find . -mindepth 1 -printf x | wc -c && rm * 131073 $ touch $(seq 1 $(getconf ARG_MAX)) && find . -mindepth 1 -printf x | wc -c && rm * bash: /usr/bin/touch: Argument list too long I did a small search: cd "$(mktemp -d)" min=131072 max=2097152 while true do search=$((min + (max - min) / 2)) if touch $(seq 1 $search) 2>/dev/null then min=$search else max=$search fi [[ $((max - min)) -le 1 ]] && echo "ARG_MAX = $min" && break done Eventually this resulted in ARG_MAX = 314290, which doesn't seem to have any relation to either of the ARG_MAX values found before. Is this normal? Is there a simpler way to find the actual ARG_MAX? Did I misunderstand the definition of ARG_MAX? It seems it's actually the byte (or possibly character) length of the arguments with or without (?) the separating spaces. If it's really the byte length, are there also other restrictions?
Yes, it's the length in bytes, including the environment. Very roughly: $ { seq 1 314290; env; } | wc -c 2091391 linux sysconf The maximum length of the arguments to the exec(3) family of functions. Must not be less than _POSIX_ARG_MAX (4096). POSIX 2004 limits.h Maximum length of argument to the exec functions including environment data. Minimum Acceptable Value: {_POSIX_ARG_MAX}
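That means the space actually left for arguments can be estimated by subtracting the current environment from ARG_MAX; a rough sketch (it ignores per-argument pointer overhead, which is one reason the measured limit sits below this figure):

```shell
# Total budget shared by argv and the environment
getconf ARG_MAX
# Bytes the environment already consumes
env | wc -c
# Approximate bytes remaining for arguments
echo $(( $(getconf ARG_MAX) - $(env | wc -c) ))
```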
What is a canonical way to find the actual maximum argument list length?
1,360,673,307,000
I have a standard Linux (Debian testing) laptop, with a swap partition. I do a lot of experiments with it. Some of them are really memory hungry and the way Linux behaves by default is an issue for me... Let's give a stupid example: Sit in front of the laptop Open a terminal Type python, then a = [0]*100000000 Now chances are high that you won't have enough RAM to handle that big list. Linux will fill the RAM, then the swap and, a couple of minutes later, the OOM killer will be triggered and kill (almost) random services and hopefully, if you hit Ctrl+C at the right time, python, and if the terminal still had focus, the computer will become responsive again. I'd like to enforce some memory limits to avoid that unwanted swapping and to refuse a process the right to allocate more memory than I have (in RAM). Unless the memory demand stays below a certain limit or comes from root, just kill the most memory-hungry process of any user except root. ulimit -Sv [mem] I hear in the back! Ho Ho! "Use cgroups via cgexec!" someone says in the front row! Yes, you are right: these are indeed very good solutions. But: They do not apply system-wide The limits are set per-process The limits are static, disregarding the real amount of free RAM (AFAIK) Here and there, they say these are not really a good solution to enforce hard limits. What I'd like is that the kernel says: "You belong to user foo (not root), you use a lot of memory and we're gonna run out of memory. Sorry dude... die now!" Or: "What the hell are you doing? You need x MB and there is only y MB available. Yes, SWAP is empty, but you don't intend to use the SWAP to do your dirty work, do you? No, I said no! No memory for you! If you insist, you're gonna die!"
Someone suggested cgroups in your ear. Well, try that direction, as they can provide you with: limits applied to a group of tasks you choose (thus not system-wide, but not per-process either) limits set for the group static limits the ability to enforce hard limits on memory and/or memory+swap Something like that could bring you closer to your goals: group limited { memory { memory.limit_in_bytes = 50M; memory.memsw.limit_in_bytes = 50M; } } This says that the tasks under this cgroup can use at most 50M of memory and 50M of memory+swap, so when the memory is full, it won't swap; but if the memory is not full and some data could be mapped in swap, this could be allowed. Here is an excerpt from the cgroup memory documentation: By using the memsw limit, you can avoid system OOM which can be caused by swap shortage.
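On a system with the cgroup v1 memory controller, that configuration can also be applied by hand, roughly like this (the mount path and the cgcreate/cgexec tools from libcgroup are assumptions; distros using cgroup v2 expose memory.max instead):

```shell
# Create the group and set the limits from the config above (root required)
sudo cgcreate -g memory:limited
echo 50M | sudo tee /sys/fs/cgroup/memory/limited/memory.limit_in_bytes
echo 50M | sudo tee /sys/fs/cgroup/memory/limited/memory.memsw.limit_in_bytes

# Run the memory-hungry experiment inside the group
cgexec -g memory:limited python
```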
How to solve this memory issue gracefully?
1,360,673,307,000
SERVER:/etc # ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited file size (blocks, -f) unlimited pending signals (-i) 96069 max locked memory (kbytes, -l) 32 max memory size (kbytes, -m) unlimited open files (-n) 1024 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 96069 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited SERVER:/etc # How can I set the limit of the root user from 1024 to something else, PERMANENTLY? How can I set the ulimit globally? Will the changes take effect in the moment? p.s.: I already googled for it but can't find the file where I can set it permanently: SERVER:/etc # grep -RiI ulimit * 2>/dev/null | egrep -v ":#|#ulimit" init.d/boot.multipath: ulimit -n $MAX_OPEN_FDS init.d/multipathd: ulimit -n $MAX_OPEN_FDS rc.d/boot.multipath: ulimit -n $MAX_OPEN_FDS rc.d/multipathd: ulimit -n $MAX_OPEN_FDS and..: SERVER:/etc # grep -RiI 'MAX_OPEN_FDS' * 2>/dev/null init.d/boot.multipath:MAX_OPEN_FDS=4096 init.d/boot.multipath: if [ -n "$MAX_OPEN_FDS" ] ; then init.d/boot.multipath: ulimit -n $MAX_OPEN_FDS init.d/multipathd:MAX_OPEN_FDS=4096 init.d/multipathd: if [ -n "$MAX_OPEN_FDS" ] ; then init.d/multipathd: ulimit -n $MAX_OPEN_FDS rc.d/boot.multipath:MAX_OPEN_FDS=4096 rc.d/boot.multipath: if [ -n "$MAX_OPEN_FDS" ] ; then rc.d/boot.multipath: ulimit -n $MAX_OPEN_FDS rc.d/multipathd:MAX_OPEN_FDS=4096 rc.d/multipathd: if [ -n "$MAX_OPEN_FDS" ] ; then rc.d/multipathd: ulimit -n $MAX_OPEN_FDS SERVER:/etc #
Use pam_limits(8) module and add following two lines to /etc/security/limits.conf: root hard nofile 8192 root soft nofile 8192 This will increase RLIMIT_NOFILE resource limit (both soft and hard) for root to 8192 upon next login.
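pam_limits only applies these values at session start, so log out and back in, then verify; a quick sketch:

```shell
# In a fresh root login shell, both values should now read 8192
ulimit -Sn
ulimit -Hn
```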
How to modify ulimit for open files on SUSE Linux Enterprise Server 10.4 permanently?
1,360,673,307,000
If I want to see all relevant log files of my apache2 server at once, I use tail -f /var/kunden/logs/*log /var/kunden/logs/*log /var/log/apache2/*log |grep -v robots|grep -v favicon But since those are too many files by now, I would like to increase that limit. How can I increase it for one ssh session? And how could I increase it globally, system-wide? I can see the open files limit is 1024 on my machine: ulimit -n 1024
It is important to know that there are two kinds of limits: A hard limit is configurable by root only. This is the highest possible value (limit) for the soft limit. A soft limit can be set by an ordinary user. This is the actual limit in effect. Solution for a single session In the shell set the soft limit: ulimit -Sn 2048 This example will raise the actual limit to 2048 but the command will succeed only if the hard limit (check: ulimit -Hn) is the same or higher. If you need higher values, raise the hard limit using one of the methods below. The limits are set per process and they are inherited by newly spawned processes, so anything you run after this command in the same shell will have the new limits. Changing hard limit in a single session This is not easy because only root can change a hard limit and after switching to root you have to switch back to the original user. Here is the solution with sudo: sudo sh -c "ulimit -Hn 9000 ; exec su \"$USER\"" System-wide solution In Debian and many other systems using pam_limits you can set the system-wide limits in /etc/security/limits.conf and in files in /etc/security/limits.d. The conf file contains description. Example lines: @webadmins hard nofile 16384 @webadmins soft nofile 8192 This will set the hard limit and default soft limit for users in group webadmins after login. Other limits The hard limit value is limited by the global limit of open file descriptors in /proc/sys/fs/file-max, which is pretty high by default in modern Linux distributions. This value is limited by the NR_OPEN value used during kernel compilation. Is there not a better solution? Maybe you could check if all the *log files you feed to tail -f are really active files which need to be monitored. It is possible that some of them are already closed for logging and you can just open a smaller number of files.
How to circumvent "Too many open files" in debian
1,360,673,307,000
Multiple sessions of the same user. When one of them gets to the point that it can no longer run new programs, none of them can, not even a new login of that user. Other users can still run new programs just fine, including new logins. Normally user limits are in limits.conf, but its documentation says "please note that all limit settings are set per login. They are not global, nor are they permanent; existing only for the duration of the session." I'm nowhere close to running out of ram (44GB available), but I can't figure out what else to look at. What limits exist that would have a global effect on all sessions using the same UID, but not other UIDs? Edited on 6/12/16 at 8:45p to add: While writing the below I realized that the problem could be X11 related. This user account on this box is used nearly exclusively for GUI applications. Is there a good text based program I can try to run from bash that will use lots of resources and give good error messages? The box does not get to the point where it cannot even run ls. Unfortunately, the GUI programs this problem normally affects (Chrome and Firefox) do not do a good job of leaving error messages behind. Chrome tabs will start showing up blank or with the completely useless "Aw, Snap!" error. Firefox simply will refuse to start. 
The only even partially helpful error messages I managed to obtain came from trying to start Firefox from bash: [pascal@firefox ~]$ firefox --display=:0 --safe-mode Assertion failure: ((bool)(__builtin_expect(!!(!NS_FAILED_impl(rv)), 1))) && thread (Should successfully create image decoding threads), at /builddir/build/BUILD/firefox-45.2.0/firefox-45.2.0esr/image/DecodePool.cpp:359 #01: ???[/usr/lib64/firefox/libxul.so +0x10f2165] #02: ???[/usr/lib64/firefox/libxul.so +0xa2dd2c] #03: ???[/usr/lib64/firefox/libxul.so +0xa2ee29] #04: ???[/usr/lib64/firefox/libxul.so +0xa2f4c1] #05: ???[/usr/lib64/firefox/libxul.so +0xa3095d] #06: ???[/usr/lib64/firefox/libxul.so +0xa52d44] #07: ???[/usr/lib64/firefox/libxul.so +0xa4c051] #08: ???[/usr/lib64/firefox/libxul.so +0x1096257] #09: ???[/usr/lib64/firefox/libxul.so +0x1096342] #10: ???[/usr/lib64/firefox/libxul.so +0x1dba68f] #11: ???[/usr/lib64/firefox/libxul.so +0x1dba805] #12: ???[/usr/lib64/firefox/libxul.so +0x1dba8b9] #13: ???[/usr/lib64/firefox/libxul.so +0x1e3e6be] #14: ???[/usr/lib64/firefox/libxul.so +0x1e48d1f] #15: ???[/usr/lib64/firefox/libxul.so +0x1e48ddd] #16: ???[/usr/lib64/firefox/libxul.so +0x20bf7bc] #17: ???[/usr/lib64/firefox/libxul.so +0x20bfae6] #18: ???[/usr/lib64/firefox/libxul.so +0x20bfe5b] #19: ???[/usr/lib64/firefox/libxul.so +0x21087cd] #20: ???[/usr/lib64/firefox/libxul.so +0x2108cd2] #21: ???[/usr/lib64/firefox/libxul.so +0x210aef4] #22: ???[/usr/lib64/firefox/libxul.so +0x22578b1] #23: ???[/usr/lib64/firefox/libxul.so +0x228ba43] #24: ???[/usr/lib64/firefox/libxul.so +0x228be1d] #25: XRE_main[/usr/lib64/firefox/libxul.so +0x228c073] #26: ???[/usr/lib64/firefox/firefox +0x4c1d] #27: ???[/usr/lib64/firefox/firefox +0x436d] #28: __libc_start_main[/lib64/libc.so.6 +0x21b15] #29: ???[/usr/lib64/firefox/firefox +0x449d] #30: ??? (???:???) 
Segmentation fault [pascal@firefox ~]$ firefox --display=:0 --safe-mode -g 1465632860286DeferredSave.extensions.jsonWARNWrite failed: Error: Could not create new thread! (resource://gre/modules/PromiseWorker.jsm:173:18) JS Stack trace: [email protected]:173:18 < [email protected]:292:9 < [email protected]:315:40 < [email protected]:933:23 < [email protected]:812:7 < this.PromiseWalker.scheduleWalkerLoop/<@Promise-backend.js:746:1 < [email protected]:770:1 < [email protected]:284:9 1465632860287addons.xpi-utilsWARNFailed to save XPI database: Error: Could not create new thread! (resource://gre/modules/PromiseWorker.jsm:173:18) JS Stack trace: [email protected]:173:18 < [email protected]:292:9 < [email protected]:315:40 < [email protected]:933:23 < [email protected]:812:7 < this.PromiseWalker.scheduleWalkerLoop/<@Promise-backend.js:746:1 < [email protected]:770:1 < [email protected]:284:9 1465632860288addons.xpi-utilsWARNFailed to save XPI database: Error: Could not create new thread! (resource://gre/modules/PromiseWorker.jsm:173:18) JS Stack trace: [email protected]:173:18 < [email protected]:292:9 < [email protected]:315:40 < [email protected]:933:23 < [email protected]:812:7 < this.PromiseWalker.scheduleWalkerLoop/<@Promise-backend.js:746:1 < [email protected]:770:1 < [email protected]:284:9 1465632860289addons.xpi-utilsWARNFailed to save XPI database: Error: Could not create new thread! (resource://gre/modules/PromiseWorker.jsm:173:18) JS Stack trace: [email protected]:173:18 < [email protected]:292:9 < [email protected]:315:40 < [email protected]:933:23 < [email protected]:812:7 < this.PromiseWalker.scheduleWalkerLoop/<@Promise-backend.js:746:1 < [email protected]:770:1 < [email protected]:284:9 1465632860289addons.xpi-utilsWARNFailed to save XPI database: Error: Could not create new thread! 
(resource://gre/modules/PromiseWorker.jsm:173:18) JS Stack trace: [email protected]:173:18 < [email protected]:292:9 < [email protected]:315:40 < [email protected]:933:23 < [email protected]:812:7 < this.PromiseWalker.scheduleWalkerLoop/<@Promise-backend.js:746:1 < [email protected]:770:1 < [email protected]:284:9 1465632860290addons.xpi-utilsWARNFailed to save XPI database: Error: Could not create new thread! (resource://gre/modules/PromiseWorker.jsm:173:18) JS Stack trace: [email protected]:173:18 < [email protected]:292:9 < [email protected]:315:40 < [email protected]:933:23 < [email protected]:812:7 < this.PromiseWalker.scheduleWalkerLoop/<@Promise-backend.js:746:1 < [email protected]:770:1 < [email protected]:284:9 1465632860358DeferredSave.addons.jsonWARNWrite failed: Error: Could not create new thread! (resource://gre/modules/PromiseWorker.jsm:173:18) JS Stack trace: [email protected]:173:18 < [email protected]:292:9 < [email protected]:315:40 < [email protected]:933:23 < [email protected]:812:7 < this.PromiseWalker.scheduleWalkerLoop/<@Promise-backend.js:746:1 < [email protected]:770:1 < [email protected]:284:9 1465632860359addons.repositoryERRORSaveDBToDisk failed: Error: Could not create new thread! 
(resource://gre/modules/PromiseWorker.jsm:173:18) JS Stack trace: [email protected]:173:18 < [email protected]:292:9 < [email protected]:315:40 < [email protected]:933:23 < [email protected]:812:7 < this.PromiseWalker.scheduleWalkerLoop/<@Promise-backend.js:746:1 < [email protected]:770:1 < [email protected]:284:9 Segmentation fault [pascal@firefox ~]$ [pascal@localhost ~]$ ulimit -aH core file size (blocks, -c) unlimited data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 579483 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 65536 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) unlimited cpu time (seconds, -t) unlimited max user processes (-u) 579483 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited [pascal@localhost ~]$ ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 579483 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 32768 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 4096 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited [pascal@localhost ~]$ set /proc/*/task/*/cwd/.; echo $# 306 [pascal@localhost ~]$ prlimit RESOURCE DESCRIPTION SOFT HARD UNITS AS address space limit unlimited unlimited bytes CORE max core file size 0 unlimited blocks CPU CPU time unlimited unlimited seconds DATA max data size unlimited unlimited bytes FSIZE max file size unlimited unlimited blocks LOCKS max number of file locks held unlimited unlimited MEMLOCK max locked-in-memory address space 65536 65536 bytes MSGQUEUE max bytes in POSIX mqueues 819200 819200 bytes NICE max nice prio allowed to raise 0 0 
NOFILE max number of open files 32768 65536 NPROC max number of processes 4096 579483 RSS max resident set size unlimited unlimited pages RTPRIO max real-time priority 0 0 RTTIME timeout for real-time tasks unlimited unlimited microsecs SIGPENDING max number of pending signals 579483 579483 STACK max stack size 8388608 unlimited bytes Edited on 6/13/16 at 10:24p to add: Not a GUI problem. When I tried to su to the user today, that doesn't even work. Root is fine. I can ls, vi, create a new user, su to that user, everything works fine for that user, I exit and try to su to the problem user and no go. Bash kinda loaded the first time, but even exit didn't work. I had to reconnect to get back to root. [root@firefox ~]# su - pascal Last login: Sat Jun 11 03:08:47 CDT 2016 on pts/1 -bash: fork: retry: No child processes -bash: fork: retry: No child processes -bash: fork: retry: No child processes -bash: fork: retry: No child processes -bash: fork: Resource temporarily unavailable -bash-4.2$ ls -bash: fork: retry: No child processes -bash: fork: retry: No child processes -bash: fork: retry: No child processes -bash: fork: retry: No child processes -bash: fork: Resource temporarily unavailable -bash: fork: retry: No child processes -bash: fork: retry: No child processes -bash: fork: retry: No child processes -bash: fork: retry: No child processes -bash: fork: Resource temporarily unavailable -bash-4.2$ exit logout -bash: fork: retry: No child processes -bash: fork: retry: No child processes -bash: fork: retry: No child processes -bash: fork: retry: No child processes -bash: fork: Resource temporarily unavailable -bash-4.2$ [root@firefox ~]# ls -l / total 126 lrwxrwxrwx. 1 root root 7 Jan 28 23:53 bin -> usr/bin ---- snip ---- drwxr-xr-x. 19 root root 23 May 27 18:03 var [root@firefox ~]# vi /etc/rc.local [root@firefox ~]# useradd test [root@firefox ~]# su - test [test@firefox ~]$ cd [test@firefox ~]$ ls -l total 0 [test@firefox ~]$ ls -l / total 126 lrwxrwxrwx. 
1 root root 7 Jan 28 23:53 bin -> usr/bin ---- snip ---- drwxr-xr-x. 19 root root 23 May 27 18:03 var [test@firefox ~]$ vi /etc/rc.local [test@firefox ~]$ exit logout [root@firefox ~]# su - pascal Last login: Mon Jun 13 22:12:12 CDT 2016 on pts/1 su: failed to execute /bin/bash: Resource temporarily unavailable [root@firefox ~]#
nproc was the problem: [root@localhost ~]# ps -eLf | grep pascal | wc -l 4068 [root@localhost ~]# cat /etc/security/limits.d/20-nproc.conf # Default limit for number of user's processes to prevent # accidental fork bombs. # See rhbz #432903 for reasoning. * soft nproc 4096 root soft nproc unlimited [root@localhost ~]# man limits.conf states: Also, please note that all limit settings are set per login. They are not global, nor are they permanent; existing only for the duration of the session. One exception is the maxlogin option, this one is system wide. But there is a race, concurrent logins at the same time will not always be detected as such but only counted as one. It appears to me that nproc is only enforced per login but counts globally. So a login with nproc 8192 and 5000 threads would have no problems, but a simultaneous login of the same UID with nproc 4096 and 50 threads would not be able to create more because the global count (5050) is above its nproc setting. [root@localhost ~]# ps -eLf | grep pascal | grep google/chrome | wc -l 3792
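Given that behaviour, one hedged workaround is to count the user's threads and override the packaged default with a higher soft limit (the file name and value are illustrative; limits.d files are read in lexical order, so the override must sort after 20-nproc.conf):

```shell
# Threads currently owned by the user -- what nproc is effectively checked against
ps -eLf | awk '$1 == "pascal"' | wc -l

# Override the 4096 default for this user (takes effect on next login)
echo 'pascal soft nproc 16384' | sudo tee /etc/security/limits.d/90-pascal-nproc.conf
```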
How can I tell which user limit I am running into?
1,360,673,307,000
I recently installed Scientific Linux 7 (64-bit) on my Dell box which has 2 cores (i.e. 2 logical CPUs). I haven't opened the computer to clean the fan, cooler, etc. for a while and the computer shuts down when using high (e.g. 100%) cpu on either (or both) cores for a few minutes (usually when nouveau corrupts the display graphics, when loading a PDF document in Firefox, or during any long-running command on the terminal like make). Until I clean up the system, I want to limit the CPU usage % to about 75 or 80% per core (not per process!) so I don't get unexpected shutdowns. That way, processes can still take advantage of multiple CPUs but not push either of them above the CPU usage % limit. Any ways to do this?
After a few days of intensive research I found two methods of lowering the CPU usage of processes. Generally, if you want to lower the CPU usage of the entire machine, there are probably a few programs consuming most of the CPU over time, and you should restrict those rather than burden the entire machine. And if you're doing this to save battery life, you might also want to control the hardware's power usage with e.g. tuned or powertop; most distros have tools to help you with that. Stop/Continue signals Signals have been around since UNIX. You send a SIGSTOP or SIGTSTP to a process (the difference: SIGSTOP cannot be caught, so it may interrupt a process that needs to do cleanup work, while SIGTSTP can be caught or ignored; use the one that suits your process) to pause it (freeing the CPU and possibly lowering temperature). Then you send a SIGCONT to the process to resume it, taking the CPU again. This method will make a series of "spikes" on the CPU graph and will stop the processor from overheating, because pausing processes denies it the sustained load needed to heat up. A consequence of this method is that the pauses are not smooth, meaning video playback and even web browsing won't be smooth either, so you may want to use this method with shell commands (multi-process programs or commands like Google Chrome or make won't work well with this method either). Obviously, it's not recommended to pause/resume system processes like systemd. Although you could do this manually, cpulimit is a nice small program that uses this method (it uses SIGSTOP/SIGCONT). Contrary to the description, the cpu % you specify is between 0 and 100 even if you have multiple cores. And you can always suspend a job in the terminal with Ctrl-Z. cpupower (highly recommended) This one ships with the Linux kernel sources, so most distros should provide it (get it here if you don't have it). 
This command-line utility manages the CPU frequency, so it pretty much controls the entire CPU using governor states (e.g., performance, powersave, etc.); it can also do other things. Unlike the pause/resume method, processes run much more smoothly with this. You'll need to set the maximum frequency for the processor. Run cpupower frequency-info to see your available processor states. As root, type cpupower frequency-set -u <frequency>; start with the lowest one you have and then try to find the highest frequency that doesn't overheat. (This is optional.) If you want, you can install the lm_sensors package, which lets you see your system temperature. Then run sensors-detect and answer 'yes' to all the questions. Last, run sensors to see the current and critical (beyond which the system overheats) temperatures. At this point the current temperature should be much lower. Be aware, though, that some performance-intensive programs like games might hang after typing the above command; if you get a popup window with that message, you should wait for the program rather than force quit. Note that you have to type this command every time the system reboots (unless you set it up to run automatically). See this and this for more information on cpupower.
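The manual stop/continue approach described above can be sketched with plain kill; here sleep just stands in for a CPU-hungry job:

```shell
# Pause and resume a process with job-control signals.
sleep 30 &                    # stand-in for a busy process
pid=$!
kill -STOP "$pid"             # pause it: it stops consuming CPU time
ps -o stat= -p "$pid"         # the state column shows 'T' (stopped)
kill -CONT "$pid"             # resume it
kill "$pid"                   # clean up the example
```

cpulimit automates exactly this stop/continue cycle, sending the signals many times per second so the average CPU usage lands near the requested percentage.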
Limit CPU usage % for all processes and cores
1,360,673,307,000
I have multiple users on a server. They upload and download their files through FTP. Sometimes a heavy transfer causes high load on the server. I am wondering if there is any way to limit the FTP speed to avoid high load. Any help would be much appreciated.
I found a way to limit FTP speed. In /etc/proftpd.conf insert this line: TransferRate RETR,STOR,APPE,STOU 2000 This limits FTP transfers to 2000 KB/s, roughly 2 megabytes per second (the TransferRate value is in kilobytes per second). After changing the file you should restart the proftpd service: /etc/init.d/proftpd restart
How to limit ftp speed
1,360,673,307,000
I need to read a large log file and send it over a local network using (netbsd) netcat between two VMs on the same host workstation. I know that netcat has an interval, but as far as I can tell, the smallest interval you can use is 1 line/second. Most of the files I need to send this way have hundreds of thousands of lines, and some close to a million lines, so one line per second isn't feasible. If I just use cat, my host computer/workstation winds up getting bogged down to the point of being unusable. Using bash and common *nix tools, is there a way I can send the files, but feed it to netcat at a rate of say, 5-10 lines/second or something like that? The end goal of this is to allow me to do some proof of concept testing for a centralized log database I am considering.
If you use bash and pipes, and are looking for a quick and dirty solution, you can try using sleep. The following acts like cat but pauses after each line: while IFS= read -r i; do printf '%s\n' "$i"; sleep 0.01; done (IFS= and -r keep the lines intact; printf avoids echo mangling backslashes). Here is an example at a little less than 100 lines per second. $ time (seq 1 100 | while read i; do echo "$i"; sleep 0.01; done) [...] real 0m1.224s user 0m0.012s sys 0m0.052s
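The loop can be wrapped into a small reusable function (the delay value, log file name, host and port below are illustrative; fractional sleep requires GNU sleep):

```shell
# Emit stdin one line at a time with a fixed delay between lines.
# delay=0.1 gives roughly 10 lines/second.
rate_limit_lines() {
  delay="$1"
  while IFS= read -r line; do
    printf '%s\n' "$line"
    sleep "$delay"
  done
}

# e.g.: rate_limit_lines 0.1 < big.log | nc 192.168.1.10 514
seq 1 5 | rate_limit_lines 0.01
```

If pv is available, it can do this without a loop: with -l, pv counts lines instead of bytes, so the -L rate limit becomes lines per second, e.g. pv -q -l -L 10 big.log | nc host port.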
How can I read lines at a fixed speed?
1,360,673,307,000
I've just mounted a microSD card which has 17 partitions in my laptop and I'm getting the following error in the YaST partitioner: Your disk /dev/mmcblk0 contains 17 partitions. The maximum number of partitions that the kernel driver of the disk can handle is 7. Partitions above 7 cannot be accessed and indeed - I have only /dev/mmcblkp0...7. Well, actually I have only 3 partitions because an extended partition starts at partition number 5, so it's p0, p5, p6, p7. I've formatted this card using a card reader in a printer on another PC which was exposing the microSD card as /dev/sdxY and allowed me to create 17 partitions. Now I've put it into a laptop with a built-in card reader and it gives the above error. Why is that? It's suggesting to use LVM but come on, LVM on microSD is overkill and inconvenient as hell for removable storage.
LVM is not overkill if you have 17 partitions. (IMHO) As for the partition limit, it just happens to be the default. Probably no one expected that many partitions on a device that used to have only a few megs. /usr/src/linux/Documentation/devices.txt: 179 block MMC block devices 0 = /dev/mmcblk0 First SD/MMC card 1 = /dev/mmcblk0p1 First partition on first MMC card 8 = /dev/mmcblk1 Second SD/MMC card ... The start of next SD/MMC card can be configured with CONFIG_MMC_BLOCK_MINORS, or overridden at boot/modprobe time using the mmcblk.perdev_minors option. That would bump the offset between each card to be the configured value instead of the default 8. So it might work if you recompile your kernel with CONFIG_MMC_BLOCK_MINORS=18 or with the mmcblk.perdev_minors=18 kernel parameter. (Or 32 in case it has to be a power of 2). Doing so will reduce the total number of mmcblkX you may have in your system. Personally I'd rather lower the number of partitions so it will work everywhere and not just your customized system.
/dev/mmcblk0 partitions limit
1,360,673,307,000
When using Google Chrome I frequently get “Sorry Jim” tabs. The browser also frequently freezes and crashes. Running it from a terminal emulator shows a long series of "Too many open files" errors: [...:ERROR:shared_memory_posix.cc(231)] Creating shared memory in /dev/shm/.com.google.Chrome.0A3O7D failed: Too many open files [...:ERROR:shared_memory_posix.cc(231)] Creating shared memory in /dev/shm/.com.google.Chrome.gr0r3Q failed: Too many open files Google Chrome (32-bit, not sure if it affects 64-bit).
Increase the hard/soft limits in /etc/security/limits.conf. Thus far a limit of 8192 seems to be enough; 4096 has proven to be too small. Optionally, only increase the hard limit (if needed) and run: ulimit -Sn 8192 from the shell in which Chrome is started. Note that the somewhat widespread approach sudo sh -c "ulimit -n 8192 && exec su -i $LOGNAME" might not be what you want, as it also strips your environment.
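A sketch of the corresponding limits.conf entries (the username and values here are illustrative, not from the original answer):

```
# /etc/security/limits.conf
alice  soft  nofile  8192
alice  hard  nofile  8192
```

A fresh login is needed for pam_limits to apply the new values to the session Chrome is started from.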
Chrome too many files open / crash / Sorry Jim
1,360,673,307,000
My website is being DoS'ed by Google webspiders. Google is welcome to index my site, but sometimes it is querying a tagcloud on my site faster than my webserver can produce the results, making my webserver run out of resources. How can I limit access to my webserver in such a way that normal visitors are not affected? robots.txt is no option because it would block the whole site from being indexed. iptables -m recent is tricky, because some pages have a lot of images or other data files and 'recent' triggers on those too (typically my RSS aggregator, loading images and feeds). iptables -m limit has the same disadvantage and on top of that, I wasn't able to be selective per IP source address. How can I limit visitors that cause my server load to rise too high? I am running apache2 on Ubuntu server in a VirtualBox VM.
Try the mod_qos Apache module. The current version has the following control mechanisms. The maximum number of concurrent requests to a location/resource (URL) or virtual host. Limitation of the bandwidth, such as the maximum allowed number of requests per second to a URL or the maximum/minimum of downloaded kbytes per second. Limits the number of request events per second (special request conditions). It can also "detect" very important persons (VIP) who may access the web server without or with fewer restrictions. Generic request line and header filter to deny unauthorized operations. Request body data limitation and filtering (requires mod_parp). Limitations on the TCP connection level, e.g., the maximum number of allowed connections from a single IP source address or dynamic keep-alive control. Prefers known IP addresses when the server runs out of free TCP connections. This sample conditional rule from the documentation should get you going in the right direction. # set the conditional variable to spider if detecting a # "slurp" or "googlebot" search engine: BrowserMatch "slurp" QS_Cond=spider BrowserMatch "googlebot" QS_Cond=spider # limits the number of concurrent requests to two applications # (/app/b and /app/c) to 300 but does not allow access by a "spider" # if the number of concurrent requests exceeds the limit of 10: QS_LocRequestLimitMatch "^(/app/b/|/app/c/).*$" 300 QS_CondLocRequestLimitMatch "^(/app/b/|/app/c/).*$" 10 spider
Throttling web crawlers
1,360,673,307,000
I want to run a task with limits on the kernel objects that they will indirectly trigger. Note that this is not about the memory, threads, etc. used by the application, but about memory used by the kernel. Specifically, I want to limit the amount of inode cache that the task can use. My motivating example is updatedb. It can use a considerable amount of inode cache, for things that mostly won't be needed afterwards. Specifically, I want to limit the value that is indicated by the ext4_inode_cache line in /proc/slabinfo. (Note that this is not included in the “buffers” or “cache” lines shown by free: that's only file content cache, the slab content is kernel memory and recorded in the “used” column.) echo 2 >/proc/sys/vm/drop_caches afterwards frees the cache, but that doesn't do me any good: the useless stuff has displaced things that I wanted to keep in memory, such as running applications and their frequently-used files. The system is Linux with a recent (≥ 3.8) kernel. I can use root access to set things up. How can I run a command in a limited environment (a container?) such that the contribution of that environment to the (ext4) inode cache is limited to a value that I set?
Following my own question on LKML, this can be achieved using Control Group v2: Prerequisites Make sure your Linux kernel has MEMCG_KMEM enabled, e.g. grep CONFIG_MEMCG_KMEM "/boot/config-$(uname -r)" Depending on the OS (and systemd version), enable the use of cgroup v2 by specifying systemd.unified_cgroup_hierarchy=1 on the Linux kernel command line, e.g. via /boot/grub/grub.cfg. Make sure the cgroup2 file system is mounted on /sys/fs/cgroup/, e.g. mount -t cgroup2 none /sys/fs/cgroup or the equivalent in /etc/fstab. (systemd will do this for you automatically by default.) Invocation Create a new group my-find (once per boot) for your process: mkdir /sys/fs/cgroup/my-find Attach the (current) process (and all its future child processes) to that group: echo $$ >/sys/fs/cgroup/my-find/cgroup.procs Configure a soft limit, e.g. 2 MiB: echo 2M >/sys/fs/cgroup/my-find/memory.high Finding the right value requires tuning and experimenting. You can get the current values from memory.current and/or memory.stat. Over time you should see high incrementing in memory.events, as the Linux kernel is now repeatedly forced to shrink the caches. Appendix Notice that the limit applies both to user-space memory and kernel memory. It also applies to all processes of the group, which includes child processes started by updatedb, which basically does a find | sort | frcode, where: find is the program thrashing the dentry and inode caches, which we want to constrain; otherwise its user-space memory requirement is (theoretically) constant. sort wants lots of memory, otherwise it falls back to using temporary files, which results in additional IO. frcode writes the result to disk (e.g. a single file), which requires constant memory. So basically you should put only find into a separate cgroup to limit its cache thrashing, but not sort and frcode.
Post scriptum: it does not work with cgroup v1, as setting memory.kmem.limit_in_bytes is both deprecated and results in an "out-of-memory" event as soon as the processes go over the configured limit, which gets your processes killed immediately instead of forcing the Linux kernel to shrink the memory usage by dropping old data. Quoting from the kernel documentation section on CONFIG_MEMCG_KMEM: Currently no soft limit is implemented for kernel memory. It is future work to trigger slab reclaim when those limits are reached.
Limit the inode cache used by a command
1,360,673,307,000
I need to externally limit a process/session to a certain number of cores. Are there any other possibilities than CPU affinity (I don't like the need to specify the actual cores) and cgroups (hard to integrate into our project)?
We went with cgroups in the end, since there really doesn't seem to be any other approach that would accomplish this. Cgroups allow limiting CPU utilization through the kernel scheduler, using cpu.cfs_period_us and cpu.cfs_quota_us. This avoids having to specify CPU cores explicitly.
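A minimal sketch of how quota and period relate to a core count (cgroup v1 cpu controller; the group name and values are illustrative, and writing under /sys/fs/cgroup requires root):

```shell
# A quota of N x period lets the group use up to N cores' worth of CPU time
# per period, without pinning it to any particular cores.
period_us=100000                     # typical cpu.cfs_period_us default
cores=2                              # cap the group at two cores' worth
quota_us=$((period_us * cores))
echo "$quota_us"                     # 200000, the value for cpu.cfs_quota_us
# As root, something like:
#   echo $period_us > /sys/fs/cgroup/cpu/mygroup/cpu.cfs_period_us
#   echo $quota_us  > /sys/fs/cgroup/cpu/mygroup/cpu.cfs_quota_us
```

Fractional caps work the same way, e.g. a quota of half the period limits the group to 50% of one core on average.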
Externally limiting the number of CPU cores used
1,360,673,307,000
In a Debian lenny server running postgresql, I noticed that a lack of semaphore arrays is preventing Apache from starting up. Looking at the limits, I see 128 arrays used out of 128 arrays maximum, for semaphores. I know this is the problem because it happens on a semget call. How do I increase the number of arrays? PS: I need Apache running to make use of phppgadmin.
If you read the manpage for semget, in the Notes section you'll notice: System wide maximum number of semaphore sets: policy dependent (on Linux, this limit can be read and modified via the fourth field of /proc/sys/kernel/sem). On my system, cat /proc/sys/kernel/sem reports: 250 32000 32 128 So do that on your system, and then echo it back after increasing the last number: printf '250\t32000\t32\t200' >/proc/sys/kernel/sem (There are tab characters between the numbers, so I'm using printf to generate them.)
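To make the change survive a reboot, the same four fields can be set through sysctl; a sketch using the example values from above:

```
# /etc/sysctl.conf (or a drop-in file under /etc/sysctl.d/)
kernel.sem = 250 32000 32 200
```

Apply it without rebooting via sysctl -p.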
How do I increase the number of semaphore arrays in Linux?
1,360,673,307,000
I have a packet rate limit (max. 10 per second) which is set by my internet provider. This is a problem if I want to use the AceStream player, because if I exceed the limit I get disconnected. How can I restrict the internet access of this program? I tried the suggested command: iptables -A OUTPUT -m limit --limit 10/s -j ACCEPT but I get a fatal error message: FATAL: Error inserting ip_tables (/lib/modules/3.2.0-67-generic/kernel/net/ipv4/netfilter/ip_tables.ko): Operation not permitted iptables v1.4.12: can't initialize iptables table `filter': Table does not exist (do you need to insmod?) Perhaps iptables or your kernel needs to be upgraded. With administrator rights: sudo iptables -A OUTPUT -m limit --limit 10/s -j ACCEPT there is no error message anymore. But it is still not working; I get disconnected. Is there an error in the command line? Or do I have to use other arguments of iptables? Below is the actual message that I get when I exceed the limits of the provider. Up to now I have tried different approaches, but none of them worked. sudo iptables -A INPUT -p tcp --syn --dport 8621 -m connlimit --connlimit-above 10 --connlimit-mask 32 -j REJECT --reject-with tcp-reset sudo iptables -A INPUT -m state --state RELATED,ESTABLISHED -m limit --limit 9/second --limit-burst 10 -j ACCEPT sudo iptables -A INPUT -p tcp --destination-port 8621 --syn -m state --state NEW -m limit --limit 9/s --limit-burst 10 -j ACCEPT This approach does not seem to help me keep using the application, so I posted another question: set connection limit via iptables.
The solution you found was correct: iptables -A OUTPUT -m limit --limit 10/s -j ACCEPT But it assumes a default policy of DROP or REJECT, which is not usual for OUTPUT. You need to add: iptables -A OUTPUT -j REJECT Be sure to add this rule after the ACCEPT one: either execute them in this order, or use -I instead of -A for the ACCEPT. Also, depending on the application this might kill the connection; in that case try with DROP instead of REJECT, or try a different --reject-with (the default is icmp-port-unreachable). I just tested with telnet against a DVR server and it didn't kill the connection. Of course, since a new connection is an output packet, trying to reconnect right after hitting the limit will fail right away if you use REJECT. I gather from the comments that your ISP also expects you to limit your INPUT packets... you cannot do this. By the time you are able to stop them they've already reached your NIC, which means they were already accounted for by your ISP. The INPUT packet count will also increase considerably when you limit your OUTPUT, because most of the ACKs won't make it out, causing lots of retransmissions. 10 packets per second is insane.
set packet rate limit via iptables
1,360,673,307,000
I am facing a problem where I have a fleet of servers which contain a lot of data. Each of the host runs many instances of a specific process p1, which makes several scp connections to other hosts in parallel to get the data it has to process. This in turn puts a lot of load on these hosts and many times they go down. I am looking for ways through which I can limit the number of concurrent scp processes that can be run on a single host. Most of the links pointed me to MaxStartup & MaxSessions settings in /etc/ssh/sshd_config which were more to do with limiting the number of ssh sessions that can be made/initiated at any given point etc. Is there a specific config file for scp which can be used here? Or is there a way at the system level to limit the number of instances of a specific process/command that can run concurrently at a time?
scp itself has no such feature. With GNU parallel you can use the sem command (from semaphore) to arbitrarily limit concurrent processes: sem --id scp -j 50 scp ... For all processes started with the same --id, this applies a limit of 50 concurrent instances. An attempt to start a 51st process will wait (indefinitely) until one of the other processes exits. Add --fg to keep the process in the foreground (the default is to run it in the background, but this doesn't behave quite the same as a shell background process). Note that the state is stored in ${HOME}/.parallel/, so this won't work quite as hoped if you have multiple users using scp; you may need a lower limit for each user. (It should also be possible to override the HOME environment variable when invoking sem, make sure umask permits group write, and modify the permissions so they share state; I have not tested this heavily though, YMMV.) parallel requires only perl and a few standard modules. You might also consider using scp -l N, where N is a transfer limit in Kbit/s, selecting a specific cipher (for speed, depending on your required security), or disabling compression (especially if the data is already compressed) to further reduce CPU impact. For scp, ssh is effectively a pipe and an scp instance runs on each end (the receiving end runs with the undocumented -t option). Regarding MaxSessions, this won't help: "sessions" are multiplexed over a single SSH connection. Despite copious misinformation to the contrary, MaxSessions limits only the multiplexing of sessions per TCP connection, not any other limit. The PAM module pam_limits supports limiting concurrent logins, so if OpenSSH is built with PAM, and UsePAM yes is present in the sshd_config, you can set limits by username, group membership (and more). You can then set a hard maxlogins in /etc/security/limits.conf to limit the logins.
However, this counts all logins per user, not just new logins using ssh, and not just scp, so you might run into trouble unless you have a dedicated scp user id. Once enabled, it will also apply to interactive ssh sessions. One way around this is to copy or symlink the sshd binary, calling it sshd-scp; then you can use a separate PAM configuration file, i.e. /etc/pam.d/sshd-scp (OpenSSH calls pam_start() with the "service name" set to that of the binary it was invoked as). You'll need to run this on a separate port (or IP), and using a separate sshd_config is probably a good idea too. If you implement this, then scp will fail (exit code 254) when the limit is reached, so you'll have to deal with that in your transfer process. (Other options include ionice and cpulimit; these may cause scp sessions to time out or hang for long periods, causing more problems.) The old-school way of doing something similar is to use atd and batch, but that doesn't offer tuning of concurrency; it queues and starts processes when the load is below a specific threshold. A newer variation on that is Task Spooler, which supports queueing and running jobs in a more configurable sequential/parallel way, with runtime reconfiguration supported (e.g. changing queued jobs and concurrency settings), though it offers no load- or CPU-related control itself.
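If GNU parallel isn't available, a similar per-invocation cap can be sketched with xargs -P (here scp is replaced by echo so the sketch is a dry run; the file names, host and destination path are illustrative):

```shell
# Run at most 3 transfers at a time; each input line becomes one invocation.
printf '%s\n' file1 file2 file3 file4 |
  xargs -P 3 -I{} echo scp {} backuphost:/data/
```

The difference from sem: xargs caps concurrency only within one invocation of itself, whereas sem --id enforces the limit across independent invocations, which matters when many unrelated processes each launch their own scp.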
Limit maximum number of concurrent scp processes running on a host
1,360,673,307,000
Let's say I'm running on a resource-constrained system, and I want to ensure that the applications I run open no more than 10 files total. If I try to do it using setrlimit, something like: if (fork() == 0) { struct rlimit l = { 10, 10 }; setrlimit(RLIMIT_NOFILE, &l); execl(EVIL_PROGRAM, args); } then EVIL_PROGRAM will inherit the limit of 10 open file descriptors. However, what's to stop a malicious/poorly coded application from spawning X child processes, all with 10 open files? (This is a real-life scenario). I don't want to prevent it from creating child processes entirely (this should be governed by the global limits.conf), just to set a reasonable limit on the number of open files. I found references to using cgroups for this purpose, but I think you have to be root to use this feature?
Specifically for setrlimit Here are some of the more useful options that you may wish to look into; I pulled them from the man pages. RLIMIT_NOFILE Specifies a value one greater than the maximum file descriptor number that can be opened by this process. RLIMIT_NPROC The maximum number of processes (or, more precisely on Linux, threads) that can be created for the real user ID of the calling process. Upon encountering this limit, fork(2) fails with the error EAGAIN. RLIMIT_SIGPENDING Specifies the limit on the number of signals that may be queued for the real user ID of the calling process. Both standard and real-time signals are counted for the purpose of checking this limit. There also seem to be other really cool limitations that can be set, so I'm thankful I ran across your question, as it has shown me yet another tool for keeping processes in check. General Unix/Linux I believe the general term for the application-limitation tool you are looking for is a sandbox. For UNIX it looks like Contractor and Passenger are solid options, and for Linux I've seen Docker, KVM and Firejail used on systems as constrained as the Raspberry Pi B+ v2 or dual-core netbooks. For most of the sandboxing action you'll need a system and kernel capable of virtualization. On systems such as Android I've seen SELinux used on the latest CyanogenMod ROMs (a frustrating bit to get around if you want to use a chroot app), and on some systems running Ubuntu I've run across AppArmor popping errors when a newly installed program tries to phone home with a persistent connection. Suffice it to say there are lots of options for controlling what a specific program or set of programs may do, see, or communicate with, and how much of the CPU's and GPU's resources may be used. The best of the bunch for your usage scenario, if you can get it working (kinda iffy, as I'm still working with the dev to get ARMhf binaries working), would be Firejail, as the guide hosted on the dev's home page covers a dual-gaming rig that could be modified to suit your needs. It has a low memory footprint compared to the others mentioned (from what I've seen) and is highly configurable as to what files a process has access to and whether or not persistence is allowed. This would be good for testing, as you would have a set working environment that is repeatable, customizable, and ultimately deletable if needed. For systems without full virtualization support, I've seen SELinux usually used to define stricter rules on top of the user/group permission settings that are already in place for read and write permissions. The term to search for there is Linux namespace permissions; it turns out there are lots of hidden ways one can restrict actions, but the biggest hole in all these options is root: even in a well-constructed chroot jail, if there are ways to obtain root permissions within a jail or sandbox, then there are ways to escalate into the user ID that is running the jailed process. Basically there should be multiple layers for a process to have to break out of. For a web server, for example, I'd set up a restrictive set of firewall rules; log readers that dynamically add rules and change firewall settings (fail2ban with custom actions and scripts); then a chroot jail that has only the dependencies required for a web server in its directory structure, bound to a port above 1024 so that it doesn't even request root-level permissions for socket binding; and wrap those inside a virtualized sandbox (likely with Firejail) on a host running intrusion-detection measures such as tripwire and honeyd within their own respective jails. All so that if .php and similar code that should not be modified on the public server does get tampered with, the change is ignored, backups are restored, and the offender is banned from future access. In your example code it doesn't look like you're doing much with networking, but more than likely it will be called from another script or function, and because it is obviously calling up child processes you'll want to figure out how to sanitize input and catch errors at every step (look up the link that killed the Chrome browser for why), and ensure that unsanitized input is not read or interpreted by a privileged user (look up how to add Shellshock to Firefox's browser ID for why). If there is networking involved with calling or returning output, then the ports the process is bound to should be unprivileged ports (use iptables/firewall forwarding if it's a web-app kind of thing). While there is a plethora of options to consider for locking a system's services down, there also seem to be many options for testing your code's breakability; Metasploit and drone.io are two fairly well-known pentesting and code-testing options that you may wish to look into before someone does it for you.
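A shell sketch of the same fork+setrlimit pattern from the question (values illustrative): limits set in a subshell are inherited by whatever it execs, and RLIMIT_NPROC (ulimit -u) is the per-real-UID counter that actually bounds a runaway tree of children, whereas -n only caps files per process.

```shell
(
  ulimit -n 10      # RLIMIT_NOFILE: max open file descriptors per process
  ulimit -u 100     # RLIMIT_NPROC: max processes for this real user ID
  ulimit -n         # prints 10
  # exec /path/to/evil_program args...   (hypothetical target)
)
```

The parentheses matter: the limits die with the subshell, so the invoking shell is unaffected.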
Can I set a resource limit for the current process tree?
1,360,673,307,000
Anyone understand the following code , running in bash ? :(){ :|:& };: It seems to be a "fork" bomb on Linux.
It's not that difficult to decipher in fact. This piece of code just defines a function named : which calls two instances of itself in a pipeline: :|:&. After the definition an instance of this function is started. This leads to a fast increasing number of subshell processes. Unprotected systems (systems without a process number limit per user) will be severely affected by such fork bombs since legitimate processes will quickly be outnumbered and thus deprived of most CPU resources.
Why is the following command killing a system?
1,360,673,307,000
Which value is correct?(or they are all correct, but which one will take effect?) $ cat /proc/sys/kernel/pid_max 32768 $ ulimit -a |grep processes max user processes (-u) 77301 $ cat /proc/1/limits |grep processes Max processes 77301 77301 p
All values are correct but have different meanings: /proc/sys/kernel/pid_max is the maximum value for a PID, while ulimit -u is the maximum number of processes. From man 5 proc: /proc/sys/kernel/pid_max (since Linux 2.5.34) This file specifies the value at which PIDs wrap around (i.e., the value in this file is one greater than the maximum PID). The default value for this file, 32768, results in the same range of PIDs as on earlier kernels. On 32-bit platforms, 32768 is the maximum value for pid_max. On 64-bit systems, pid_max can be set to any value up to 2^22 (PID_MAX_LIMIT, approximately 4 million). From man bash: ulimit [-HSTabcdefilmnpqrstuvx [limit]] ..... -u The maximum number of processes available to a single user ..... Note: when a new process is created, it is assigned the next available number from the kernel's process counter. When the counter reaches pid_max, the kernel restarts it at 300. From the Linux source code, pid.c file: .... #define RESERVED_PIDS 300 .... static int alloc_pidmap(struct pid_namespace *pid_ns) { int i, offset, max_scan, pid, last = pid_ns->last_pid; struct pidmap *map; pid = last + 1; if (pid >= pid_max) pid = RESERVED_PIDS;
how to determine the max user process value?
1,360,673,307,000
I would like to run a backup script with low CPU and disk I/O priority. Is there any difference between this: #!/bin/bash ulimit -e 19 ionice -c3 -p $$ and this: #!/bin/bash ionice -c3 -p $$ renice -n 19 -p $$
There is a big difference between them. ulimit -e only sets RLIMIT_NICE, which is an upper bound on the nice value the process can set using setpriority or nice. renice actually alters the priority of a running process. Doing an strace: $ cat test.sh #!/bin/bash ulimit -e 19 Then: $ strace ./test.sh ................................................... read(255, "#!/bin/bash\n\nulimit -e 19\n", 26) = 26 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 getrlimit(RLIMIT_NICE, {rlim_cur=0, rlim_max=0}) = 0 getrlimit(RLIMIT_NICE, {rlim_cur=0, rlim_max=0}) = 0 setrlimit(RLIMIT_NICE, {rlim_cur=19, rlim_max=19}) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 read(255, "", 26) = 0 exit_group(0) You can see that ulimit only calls the setrlimit syscall to change the value of RLIMIT_NICE, nothing more. Note: man setrlimit has a good explanation of RLIMIT_NICE.
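A quick way to watch nice values change without renice: nice with no arguments prints the current niceness, and nice -n starts a child that many points nicer.

```shell
# Compare the shell's niceness with that of a child started via nice -n.
base=$(nice)                 # current niceness of this shell
child=$(nice -n 5 nice)      # the child runs 5 points nicer
echo "shell: $base, child: $child"
```

renice performs the equivalent adjustment on a process that is already running, which is why the backup script applies it to $$ after startup.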
Difference between `ulimit -e` and `renice`?
1,360,673,307,000
Sometimes when I write code I find that I've made a stupid mistake and some loop takes almost all the CPU time forever. Is there a way to limit the running time of a program in bash, for example to 10 seconds?
The timeout command will do this for you, i.e. timeout 10s command It will kill command after 10 seconds. Instead of s for seconds, you can also use m for minutes, h for hours or d for days.
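For example (GNU timeout exits with status 124 when it has to kill the command):

```shell
# sleep 10 would run for 10 seconds, but timeout kills it after 2.
timeout 2s sleep 10
echo "exit status: $?"       # prints: exit status: 124
```

If the command finishes before the deadline, timeout passes through the command's own exit status instead.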
How to limit the running time of a program?
1,360,673,307,000
I'm using Dropbox without the GUI in Linux. I would like to limit the upload rate; sometimes large files eat my internet bandwidth. Does anyone know how I can do that?
You can start the Dropbox executable under trickle, a simple program that limits the bandwidth used by the program it starts: trickle -u 42 dropbox.py This limits the upload rate to 42 KB/s (trickle's rates are in kilobytes per second; there is a matching -d option for downloads).
Control Dropbox upload rate on the command line
1,360,673,307,000
I'm trying to write a bash script that will let me save a backup code (lots of numbers) in a file. I've finished the script, but it's only letting me save 4096 digits of the code. I tried to do this: # Ask for backup code read -p "Backup code:" backupcode # Check backup code length l="${#backupcode}" m=4096 if (( l > m )); then echo -e "${RED}ERROR:${NC} Backup is too large! The limit is 4096 digits." else # Save backup code in the file echo $backupcode > "${path[$i]}" fi I think this didn't detect that the backup was too large, so I suspect it's something to do with the read command. If there's a limit in read, are there any alternatives I can use?
There's no limit for the read command itself. But there's a limit for how much you can type on a single line in the terminal. To see this, try running the command wc -c and typing a very long line. You'll hit that same limit at 4096 bytes. To input more than the limit, either arrange to make the code multi-line with each line being short enough, or input it in some other way than directly reading from the terminal in cooked mode. If you enable readline, bash reads characters one by one, and there's no limit to the line length other than available memory. read -e -p "Backup code:" backupcode However, reading such a long input from the terminal is a very bad user interface. A user isn't going to sit there and type thousands of characters. Instead, read input from the clipboard or from a file.
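To see that the 4096-byte ceiling belongs to the terminal rather than to read, feed a long line through a pipe instead (the length 10000 is chosen arbitrarily):

```shell
# A 10000-character line read through a pipe, well past the 4096 tty limit.
long=$(printf 'x%.0s' $(seq 1 10000))
printf '%s\n' "$long" | { IFS= read -r code; echo "${#code}"; }   # prints 10000
```

The same read command fails at 4096 only when the line is typed into a terminal in cooked mode, because the tty line discipline, not read, imposes the limit.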
Is there a limit for the read command?
1,360,673,307,000
What are the length limitations for using here-docs as part of bash command lines? I am finding that short here-docs work fine, but when they get longer there is some point or structure after which they break. When it fails, there are two symptoms. One is that the here-doc is abandoned early. The other is that one or more of the here-doc lines is truncated in the output, and it is not necessarily the last line. Below you will see a contrived example command with which I can reproduce the problem. The here-doc is a boring 58 lines each containing 79 'x' characters and a newline. If I copy and paste all 60 lines, NOT including the newline from the EOF, to an interactive bash shell, then I must manually hit Enter and the entire command will run successfully every time. However, if I copy and paste all 60 lines, including the EOF newline, then this is when failures occur. If I do the latter in the Terminal application on the desktop, it always fails. If I do it in a remote SSH client, it sometimes works and sometimes fails. Significantly reduce the number of here-doc lines and the problem disappears. The abandoning of the here-doc part way through means that all remaining here-doc lines and the EOF line are seen by bash as new individual command lines, which just results in a flurry of "command not found" errors (or potentially much worse if the here-doc includes legitimate command lines). 
cat > ~/test.txt << 'EOF' xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx EOF I understand that bash has command line length limits, but this example is well below those limits. In fact, no one line in the above multi-line command is longer than 80 characters and each line gets its own prompt from the shell, so I'm not even sure what limits would apply. It seems odd to me that a here-doc should be limited in length. I am using Linux Mint 17 (Ubuntu 14.04). $ bash --version GNU bash, version 4.3.11(1)-release (x86_64-pc-linux-gnu) ... $ uname -a Linux QWERTY 3.13.0-24-generic #46-Ubuntu SMP Thu Apr 10 19:11:08 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux This problem does not occur when such commands are used from within a bash script.
I am not aware of any size limits for here-doc. I'm running kernel 3.9.1 and I've been experiencing the same issue here: when pasting large chunks of text in terminal some lines are truncated or missing. I found out (after some googling) that if you turn off line editing, pasting works fine (discussion here: Pasting large amounts of text into readline-enabled programs truncates parts of the lines of the text being pasted). Further investigation reveals that the root of this problem was in fact a kernel bug (drivers/tty) and apparently it's fixed in kernels >= 3.14. This probably explains why people with more up-to-date installs could not replicate the behaviour we both experience. UPDATE: I can confirm that after installing kernel 3.16.6 everything works fine so patching your kernel or upgrading to a more recent version (>=3.14) should fix this problem.
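For reference, the size of the pasted block in the question is easy to check (the path is arbitrary), and it comes out just past a 4096-byte tty buffer, which fits the truncation behaviour described:

```shell
# Recreate the question's here-doc payload: 58 lines of 79 'x' plus a newline.
for i in $(seq 58); do printf 'x%.0s' $(seq 79); echo; done > /tmp/test.txt
# 58 * 80 = 4640 bytes, i.e. just over a 4096-byte buffer.
wc -c < /tmp/test.txt
```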
What are the bash shell length limitations for here-docs?
1,360,673,307,000
I made a very simple bash script (echo at start, runs commands, echos at end) to add approx 7300 rules to iptables blocking much of China and Russia, however it gets through adding approximately 400 rules before giving the following error for every subsequent attempt to add a rule to that chain: iptables: Unknown error 18446744073709551615 I even tried manually adding rules afterwards and it won't let me add them (it gives the same error). The command to add each rule looks like this: /sbin/iptables -A sshguard -s x.x.x.0/x -j DROP sshguard is a chain I created for use with the sshguard daemon, and I wanted to add the rules there so I wasn't muddying up the INPUT chain. The ip ranges I am supplying are not to blame here, as I have supplied valid ranges to test and they are met with the same error. Flushing the chain of rules and adding individual ones work, but again, not after ~400 entries. I did some googling beforehand, but the others having this issue don't seem to be having it for the same reasons I am. Is there some kind of rule limit per chain with iptables? Also, is this the proper way to go about blocking these ranges (errors aside)? # iptables -V iptables v1.3.5 # cat /etc/issue CentOS release 5.8 (Final) # uname -a Linux domain.com 2.6.18-028stab101.1 #1 SMP Sun Jun 24 19:50:48 MSD 2012 x86_64 x86_64 x86_64 GNU/Linux Edit: To clarify, the bash script is running each iptables command individually, not looping through a file or list of IPs. Also, my purpose for blocking these ranges is preventative -- I am trying to limit the amount of bots that scrape, crawl, or attempt to create spam accounts on a few of my websites. I am already using sshguard to block brute force attempts on my server, but that does not help with the other bots, obviously.
OK, I figured it out. I should have mentioned that I had a Virtuozzo container for my VPS. http://kb.parallels.com/en/746 mentions the following: Also it might be required to increase numiptent barrier value to be able to add more iptables rules: ~# vzctl set 101 --save --numiptent 400 FYI: The container has to be restarted for this to take effect. This explains why I hit the limit at around 400. If I had CentOS 6, I would install the ipset module (EPEL) for iptables instead of adding all these rules (because ipset is fast). As it stands now, on CentOS 5.9, I'd have to compile iptables > 1.4.4 and my kernel to get ipset. Since this is a VPS and my host may eventually upgrade to CentOS 6, I am not going to pursue that.
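For completeness, a sketch of what the ipset approach would look like (the set name and range are made up; it requires root, the ipset tool, and an iptables with the set match, so this is illustrative only):

```shell
# Create a hash-based set of networks: membership lookup is a hash probe
# instead of a linear walk through thousands of chain rules, and the
# chain itself stays at a single entry.
ipset create blockranges hash:net
ipset add blockranges 203.0.113.0/24    # example range (TEST-NET-3)
# One iptables rule then covers every range in the set.
iptables -A sshguard -m set --match-set blockranges src -j DROP
```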
Can't add large number of rules to iptables
1,360,673,307,000
I run DB2 on Linux where I have to allocate the vast majority of memory on the machine to shared memory segments. This page is typical of the info that I've found about shmall/shmmax: http://www.pythian.com/news/245/the-mysterious-world-of-shmmax-and-shmall/ My system is running fine now, but I'm wondering if there's a historical or philosophical reason why shared memory is so low by default. In other words, why not let shmall default to the max physical memory on the machine? Or in other words, why should a typical admin need to be 'protected from himself' if an app happens to use a lot of shared memory, and have to go in and change these settings? The only thing I can think of is that it does let me set an upper bound to how much memory DB2 can use, but that's a special case.
Shared memory is not always a protected resource. As such, many users can allocate shared memory. It is also not automatically returned to the memory pool when the process which allocated it dies. This can result in shared memory segments which remain allocated but unused, a memory leak that may not be obvious. By keeping shared memory limits low, most processes which use shared memory (in small amounts) can run, while the potential damage is limited. The only systems I have used which require large amounts of shared memory are database servers. These usually are administered by system administrators who are aware of the requirements. If not, the DBA usually is aware of the requirement and can ask for appropriate configuration changes. The database installation instructions usually specify how to calculate and set the appropriate limits. I have had databases die and leave large amounts of shared memory allocated, but unused. This created problems for users of the system, and prevented restarting the database. Fortunately, there were tools which allowed the memory to be located and released.
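The tools in question are ipcs and ipcrm; a short sketch of the cleanup procedure (the segment id is whatever ipcs reports):

```shell
# List SysV shared memory segments; a segment with nattch == 0 has no
# process attached and is a candidate leak from a dead database.
ipcs -m
# Release a leaked segment by its shmid (left commented out here,
# since it is destructive):
# ipcrm -m <shmid>
```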
Linux - why is kernel.shmall so low by default?
1,360,673,307,000
Chrome/Chromium won't load any websites and just shows the "Aw, Snap! Something went wrong..." page. Some sub-processes segfault. When started in a Terminal, it will show lots of these: [...ERROR:platform_thread_posix.cc(126)] pthread_create: Resource temporarily unavailable While Chrome is still running, starting another program sometimes triggers the same error: Resource temporarily unavailable This is on Arch Linux with systemd 229, but similar behavior has been reported on Fedora Linux. What is causing these crashes? At first glance, the process limit does not seem to be the issue: $ ulimit -a core file size (blocks, -c) unlimited data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 23870 max locked memory (kbytes, -l) unlimited max memory size (kbytes, -m) unlimited open files (-n) 1024 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 99 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 23870 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited
While investigating another issue, I may have found something relevant. It wasn't possible to switch to another tty (Ctrl + Alt + F2): A start job is running for Login Service... Turns out this may be another systemd issue, which has its own limits. The following config file was created, which apparently fixed the issue: # mkdir /etc/systemd/logind.conf.d/ # /etc/systemd/logind.conf.d/systemd-stupid-limits.conf LimitNOFILE=500000 LimitNPROC=100000 UserTasksMax=100000 After a reboot, Chrome does not crash anymore and switching to another tty is working again. Not sure if this is the right solution, but it does seem to work so far. If someone has a better idea, please post an answer. For future reference, this was logged in /var/log/daemon.log when the tty wasn't working: systemd[1]: Starting Login Service... systemd[1]: systemd-logind.service: Main process exited, code=exited, status=1/FAILURE systemd[1]: Failed to start Login Service. systemd[1]: systemd-logind.service: Unit entered failed state. systemd[1]: systemd-logind.service: Failed with result 'exit-code'. systemd[1]: systemd-logind.service: Service has no hold-off time, scheduling restart. systemd[1]: Stopped Login Service.
Chrome/Chromium crashes ("Aw, snap!", segfault): "Resource temporarily unavailable"
1,360,673,307,000
In the shell, as explained in this Q&A in the context of expansion, depending on the system, the maximum length of a command's argument is initially constrained by the kernel setup. The maximum value is revealed at runtime using the getconf command (see also IEEE Std 1003.1, 2013 Edition): # getconf ARG_MAX 2097152 vs. value found in limits.h on my setup: #define ARG_MAX 131072 /* # bytes of args + environ for exec() */ Indeed: The sysconf() call supplies a value that corresponds to the conditions when the program was either compiled or executed, depending on the implementation; the system() call to getconf always supplies a value corresponding to conditions when the program is executed. The manpages reference POSIX, from the prolog alluding to the POSIX Programmer's manual, to the description itself: The value of each configuration variable shall be determined as if it were obtained by calling the function from which it is defined to be available by this volume of POSIX.1-2008 or by the System Interfaces volume of POSIX.1-2008 (see the OPERANDS section). The value shall reflect conditions in the current operating environment. The basic variables which can be queried appear in the table for the sysconf function specification and there is more information about the values in the limits.h header documentation: {ARG_MAX} Maximum length of argument to the exec functions including environment data. Minimum Acceptable Value: {_POSIX_ARG_MAX} ...(nb you cannot be POSIX compliant under a certain value...) {_POSIX_ARG_MAX} Maximum length of argument to the exec functions including environment data. 
Value: 4 096 The xargs --show-limits command confirms some of this: Your environment variables take up 3134 bytes POSIX upper limit on argument length (this system): 2091970 POSIX smallest allowable upper limit on argument length (all systems): 4096 Maximum length of command we could actually use: 2088836 Size of command buffer we are actually using: 131072 sysconf was initially designed to find the system value for the PATH variable, then it was extended to other variables. Now, the Open Group documentation explores the rationale for having such a framework where applications can poll for system variables at runtime, and the related practical considerations about the baseline...: (...) If limited to the most restrictive values in the headers, such applications would have to be prepared to accept the most limited environments offered by the smallest microcomputers. Although this is entirely portable, there was a consensus that they should be able to take advantage of the facilities offered by large systems, without the restrictions associated with source and object distributions. During the discussions of this feature, it was pointed out that it is almost always possible for an application to discern what a value might be at runtime by suitably testing the various functions themselves. And, in any event, it could always be written to adequately deal with error returns from the various functions. In the end, it was felt that this imposed an unreasonable level of complication and sophistication on the application developer. ...as well as the shortcomings of such a setup as it relates to some file variables with fpathconf: The pathconf() function was proposed immediately after the sysconf() function when it was realized that some configurable values may differ across file system, directory, or device boundaries. For example, {NAME_MAX} frequently changes between System V and BSD-based file systems; System V uses a maximum of 14, BSD 255. 
On an implementation that provides both types of file systems, an application would be forced to limit all pathname components to 14 bytes, as this would be the value specified in on such a system. So the intent was to relieve developers of some burden for the baseline while also acknowledging variety in the filesystems and generally enabling some customizing on different variants of the platform. The evolution of hardware, Unix and related standards (C and POSIX) plays a role here. Questions: The command getconf doesn't have a "list" option, and set, printenv or export don't show those variables. Is there a command which lists their value? Why facilities like fpathconf were seemingly built to introduce more flexibility, but only for PATH and file related system variables? Is it just because at that time getconf was only about PATH? What is the current Linux implementation, and is it POSIX compliant? In the linked Q there is reference in the answers to ARG_MAX varying with the stack size ("on Linux 3.11... a quarter of the limit set on the stack size, or 128kiB if that's less than 512kiB"): What is the rationale for this? Is this choice (1/4 of the stack size) a Linux specific implementation or just a feature on top of the basic implementation or did the historical UNIX implementation always yield basically that 1/4th of the stack size? Are many other variables besides ARG_MAX a function of the stack size or similar resources or does the importance of this variable warrant a special treatment? Practically, does one deliver a POSIX compliant Linux system/solution and there's configuration of the stack size limit for example to allow some application to go beyond the basic maximum spec if it scales up with the hardware or is it a practice to customize directly limits.h and compile for specific needs? What is the difference for something like ARG_MAX between using limits.h vs. changing the variable at runtime with something like the ulimit -s command vs. 
having the kernel manage it directly? In particular, is the (low) value of that variable in my limits.h obsolete on Linux because of kernel changes, i.e. has it been superseded? The command line supposedly has shell-specific length restrictions which are not related to expansion and ARG_MAX; what are they in bash?
There is no standard way to retrieve the list of configuration variables that are supported on a system. If you program for a given POSIX version, the list in that version of the POSIX specification is your reference list. On Linux, getconf -a lists all available variables. fpathconf isn't specific to PATH. It's about variables that are related to files, which are the ones that may vary from file to file. Regarding ARG_MAX on Linux, the rationale for depending on the stack size is that the arguments end up on the stack, so there had better be enough room for them plus everything else that must fit. Most other implementations (including older versions of Linux) have a fixed size. Most limits go together with resource availability, with different resources depending on the limit. For example, a process may be unable to open a file even if it has fewer than OPEN_MAX files open, if the system is out of memory that can be used for the file-related data. Linux is POSIX-compliant on this point by default, so I don't know what you're getting at. If you use ulimit -s to restrict the stack size to less than ARG_MAX, you're making the system no longer compliant. A POSIX system can typically be made non-compliant in any number of ways, including PATH=/nowhere (making all standard utilities unavailable) or rm -rf /. The value of ARG_MAX in limits.h provides a minimum that applications can rely on. A POSIX-compliant system is allowed to let execve succeed even if the arguments exceed that size. The guarantee related to ARG_MAX is that if the arguments fit in that size then execve will not fail due to E2BIG.
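The stack-size dependence is easy to observe on a modern Linux/glibc system (the exact numbers assume the 1/4-of-stack rule discussed above):

```shell
# Lower the soft stack limit in a subshell and watch ARG_MAX follow:
( ulimit -s 1024; getconf ARG_MAX )    # 1024 KiB stack / 4 = 262144
( ulimit -s 8192; getconf ARG_MAX )    # 8192 KiB stack / 4 = 2097152
```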
Is the Linux implementation of the system configuration "variable" ARG_MAX different from other system variables and is it POSIX compliant?
1,360,673,307,000
I'm trying to increase the default file descriptor limits for processes on my system. Specifically, I'm trying to get the limits to apply to the Condor daemon and its sub-processes when the machine boots. But the limits are never applied on machine boot. I have the limits set in /etc/sysctl.conf: [root@mybox ~]# cat /etc/sysctl.conf # TUNED PARAMETERS FOR CONDOR PERFORMANCE # See http://www.cs.wisc.edu/condor/condorg/linux_scalability.html for more information # Allow for more PIDs (to reduce rollover problems); may break some programs kernel.pid_max = 4194303 # increase system file descriptor limit fs.file-max = 262144 # increase system IP port limits net.ipv4.ip_local_port_range = 1024 65535 And in /etc/security/limits.conf: [root@mybox ~]# cat /etc/security/limits.conf # TUNED PARAMETERS FOR CONDOR PERFORMANCE # See http://www.cs.wisc.edu/condor/condorg/linux_scalability.html for more information # Increase the limit for a user continuously by editing etc/security/limits.conf. * soft nofile 32768 * hard nofile 262144 #65536 The trouble I run into is, on system reboot, the limits don't seem to apply to Condor and its processes. 
After a reboot, if I look at the file descriptor limit for a Condor process I see: [root@mybox proc]# cat /proc/`/sbin/pidof condor_schedd`/limits | grep 'Max open files' Max open files 1024 1024 But if I restart the condor_schedd process after a reboot the limits are increased as expected: [root@mybox proc]# cat /proc/`/sbin/pidof condor_schedd`/limits | grep 'Max open files' Max open files 32768 262144 The boot.log indicates these limits are being set before my Condor daemon and its processes are being started: May 18 07:51:52 mybox sysctl: net.ipv4.ip_forward = 0 May 18 07:51:52 mybox sysctl: net.ipv4.conf.default.rp_filter = 1 May 18 07:51:52 mybox sysctl: net.ipv4.conf.default.accept_source_route = 0 May 18 07:51:52 mybox sysctl: kernel.sysrq = 0 May 18 07:51:52 mybox sysctl: kernel.core_uses_pid = 1 May 18 07:51:52 mybox sysctl: kernel.pid_max = 4194303 May 18 07:51:52 mybox sysctl: fs.file-max = 262144 May 18 07:51:52 mybox sysctl: net.ipv4.ip_local_port_range = 1024 65535 May 18 07:51:52 mybox network: Setting network parameters: succeeded May 18 07:51:52 mybox network: Bringing up loopback interface: succeeded May 18 07:51:57 mybox ifup: Enslaving eth0 to bond0 May 18 07:51:57 mybox ifup: Enslaving eth1 to bond0 May 18 07:51:57 mybox network: Bringing up interface bond0: succeeded May 18 07:52:17 mybox hpsmhd: smhstart startup succeeded May 18 07:52:17 mybox condor: Starting up Condor May 18 07:52:17 mybox rc: Starting condor: succeeded May 18 07:52:17 mybox crond: crond startup succeeded Obviously I'd like to avoid having to boot a machine and then restart process that I need these increased limits to apply to -- what have I done wrong that's preventing these limits from applying to the processes when the machine boots?
Add ulimit -n 262144 to the condor init script.
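For the record, why this works (the numbers here are arbitrary): resource limits are inherited across fork and exec, so a ulimit call placed in the init script before the daemon starts applies to condor_schedd and everything it spawns:

```shell
# The outer shell lowers its limit; the inner (child) shell inherits it,
# just as condor's daemons inherit the limit set by the init script.
sh -c 'ulimit -n 512; sh -c "ulimit -n"'    # prints 512
```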
File descriptor limits are lost after a system reboot
1,360,673,307,000
On Mac or Linux, if you use the command ulimit -n you can see the open file limit for what seems to be an individual process, according to this stackoverflow post. So if a parent process spawns child processes and those child processes open files, do those files count against the open file limit for the parent?
The RLIMIT_NOFILE is on the maximum file descriptor value you may obtain/allocate, not on how many files may be open at a time. Child processes inherit the limit, but other than that there's nothing that a child may do to influence the parent here. If the parent has some free fds in the range 0->limit-1, then it will still be able to open new files (wrt to that limit) regardless of what any of its children does (you may run into other global limits though). In any case, note that if the limit is say 500, you can still have more than 500 file descriptors open if you had some that were open (including in parent processes) prior to the limit being lowered. $ bash -c 'exec 1023> /dev/null; ulimit -n 500; command exec 600> /dev/null; ls -l /proc/self/fd; exit' bash: 600: Bad file descriptor total 0 lrwx------ 1 chazelas chazelas 64 Jun 17 08:40 0 -> /dev/pts/1 lrwx------ 1 chazelas chazelas 64 Jun 17 08:40 1 -> /dev/pts/1 l-wx------ 1 chazelas chazelas 64 Jun 17 08:40 1023 -> /dev/null lrwx------ 1 chazelas chazelas 64 Jun 17 08:40 2 -> /dev/pts/1 l-wx------ 1 chazelas chazelas 64 Jun 17 08:40 3 -> /dev/null lr-x------ 1 chazelas chazelas 64 Jun 17 08:40 4 -> /proc/8034/fd That process running ls has a limit of 500 there inherited from its parent (so can't get a new fd bigger than 499). Still it does have the fd 1023 open.
Do files opened by child processes count against the file open limit for the parent process?
1,360,673,307,000
I've been seeing a strange behavior when messing with environment variables. I'm setting up a very long environment variable and this prevents launching any command: ( Ubuntu 14.04.5 LTS (GNU/Linux 4.4.0-66-generic x86_64) ) $ export A=$(python -c "print 'a'*10000000") $ env -bash: /usr/bin/env: Argument list too long $ ls -bash: /bin/ls: Argument list too long $ cat .bashrc -bash: /bin/cat: Argument list too long $ id -bash: /usr/bin/id: Argument list too long What is happening here?
The list of arguments and the environment of a command are copied in the same space in memory when a program starts. The error message is “Argument list too long”, but in fact the exact error is that the argument list plus the environment is too long. This happens as part of the execve system call. Most if not all unix variants have a limit on the size of this temporary space. The reason for this limit is to avoid a buggy or malicious program causing the kernel to use a huge amount of memory outside of that program's own memory space. The POSIX standard specifies that the maximum size of this memory space must be at least ARG_MAX, and that the minimum value of that (_POSIX_ARG_MAX) is 4096. In practice most Unix variants allow more than that, but not 10MB. You can check the value on your system with getconf ARG_MAX. On modern Linux systems, the maximum is 2MB (with typical settings). Traditionally many systems had a 128kB limit. Linux also still has a 128kB limit for the value of a single argument or the definition of an environment variable. If you need to pass more than a few hundred bytes of information, pass it in a file.
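A contained way to see this (variable names are arbitrary; the 128 kB single-string cap is Linux-specific):

```shell
# ~200 kB string: over Linux's per-string limit (32 pages = 131072 bytes).
big=$(head -c 200000 /dev/zero | tr '\0' a)
# Exported (per-command environment): execve fails with E2BIG.
A=$big /bin/true 2>/dev/null || echo "exec failed: string too long"
# Unexported shell variable: execve never sees it, so commands still run.
B=$big; /bin/true && echo "unexported variable: no problem"
```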
Setting a long environment variable breaks a lot of commands
1,360,673,307,000
Please help me find out how to limit the number of concurrent executions of a program. I mean, a particular program should be able to run only, for example, 5 times at once. I know how to limit the number of processes per user, but how can I do that for a particular program, using PAM?
PAM is used to authorize logins and account modifications. It is not at all relevant to restricting a specific program. The only way to apply a limit to the number of times a program can be executed is to invoke it through a wrapper that applies this limit. Users can of course bypass this wrapper by having their own copy of the program; if you don't want that, don't give those users accounts on your machine. To restrict a program to a single instance, you can make it take an exclusive lock on a file. There's no straightforward way to use a file to allow a limited number of instances, but you can use 5 files to allow 5 instances, and make the wrapper script try each file in turn. Create a directory /var/lib/myapp/instances (or wherever you want to put it) and create 5 files in it, all world-readable but only writable by root. umask 022 mkdir -p /var/lib/myapp/instances touch /var/lib/myapp/instances/{1,2,3,4,5} Wrapper script (replace myapp.original by the path to the original executable), using Linux's flock utility: #!/bin/sh for instance in /var/lib/myapp/instances/*; do flock -w 0 -E 128 "$instance" myapp.original "$@" ret=$? if [ "$ret" -ne 128 ]; then exit "$ret"; fi done echo >&2 "Maximum number of instances of myapp reached." exit 128
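The locking primitive the wrapper relies on can be tried in isolation (the lock file here is a throwaway temp file):

```shell
lockfile=$(mktemp)
exec 9> "$lockfile"     # keep the file open on fd 9...
flock -n 9              # ...and take the exclusive lock (succeeds)
# A second attempt with -w 0 gives up immediately and exits with the
# code given to -E, which is how the wrapper detects a busy instance.
flock -w 0 -E 128 "$lockfile" true
echo "second attempt exited with $?"    # prints 128
```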
Limit number of program executions
1,360,673,307,000
From Robert Love's Linux System Programming (2007, O'Reilly), this is what is given in the first paragraph (Chapter 1, Page 10): The file position’s maximum value is bounded only by the size of the C type used to store it, which is 64-bits in contemporary Linux. But in the next paragraph he says: A file may be empty (have a length of zero), and thus contain no valid bytes. The maximum file length, as with the maximum file position, is bounded only by limits on the sizes of the C types that the Linux kernel uses to manage files. I know this might be very, very basic, but is he saying that the file size is limited by the FILE data type or the int data type?
He's saying it's bound by a 64-bit type, which has a maximum value of (2 ^ 64) - 1 unsigned, or (2 ^ 63) - 1 signed (1 bit holds the sign, +/-). The type is not FILE; it's what the implementation uses to track the offset into the file, namely off_t, which is a typedef for a signed 64-bit type.1 (2 ^ 63) - 1 = 9223372036854775807. If a terabyte is 1000 ^ 4 bytes, that's ~9.2 million TB. Presumably the reason a signed type is used is so that it can hold a value of -1 (for errors, etc), or a relative offset. Functions like fseek() and ftell() use a signed long, which on 64-bit GNU systems is also 64-bits. 1. See types.h and typesizes.h in /usr/include/bits.
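The number itself is easy to produce, since shell arithmetic on a 64-bit GNU system is also a signed 64-bit type:

```shell
# Largest value representable in a signed 64-bit integer, i.e. the
# maximum file offset off_t can hold:
echo $(( (1 << 63) - 1 ))    # prints 9223372036854775807
```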
FILE size limitation according to Robert Love's textbook
1,360,673,307,000
The following error occurs while using the commands below. Guide me to overcome this issue. rpasa-vd1-363: cd /home/rpasa/DDEMO No more processes.
You have hit the per-user process limit. You will have to talk to your system administrator, or you can try running the ps command to see how many processes you have running under your user account.
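To see where you stand, compare the processes you own against the per-user limit (the user name is resolved at run time):

```shell
# Number of processes running under your account (plus one header line):
ps -u "$(id -un)" | wc -l
# The per-user process limit you ran into ("unlimited" if none is set):
ulimit -u
```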
Why does the following error occur in the terminal while using Linux commands?
1,428,257,586,000
How can I limit SSH login attempts per minute per IP? I want to disable login attempts for 5 seconds after a failure. Is this possible? I'm not talking about banning a user after parsing logs, like Fail2ban does.
Question 1 This can be done with the module hashlimit. iptables -A INPUT -p tcp --dport 22 -m hashlimit \ --hashlimit-mode srcip --hashlimit-above 3/minute -j DROP Question 2 Netfilter does not see login failures, only connections. You need a tool (like Fail2ban) which is active on both levels. You could create a chain with blocked IPs and run a script after each login failure which would do something like iptables -A blocked_ips -s $evil_ip -j DROP sleep 5 iptables -D blocked_ips -s $evil_ip -j DROP
How to temporarily ban an IP address, after "n" number of SSH login failures?
1,428,257,586,000
I am running CentOS 7 on my VPS and I would like to limit bandwidth on a specific port. I have looked around extensively, and out of the solutions I can find, either it's a limit placed on an interface, or it's a vaguely described iptables setup that seems to have only been tried on CentOS 6. In my case, the Shadowsocks (a proxy application) server side is listening on ports 1080, 1081, and 1082 on eth0. I would like to allow 1080 unlimited bandwidth, but limit both 1081 and 1082 to around 1 MBps. Since it's a proxy application, the inbound and outbound traffic is roughly equal. Note that it is a single instance of Shadowsocks listening on 3 ports, NOT 3 instances listening on 1 port each, so limiting bandwidth by process is not applicable. But otherwise any solution is on the table for me, whether it's something CentOS supports out of the box, or some kind of intermediate monitoring layer. As long as it gets the job done I'm open to it. Thanks in advance.
Traffic can be limited using only Linux's Traffic Control.

Just to clarify: shadowsocks creates a tunnel with one side acting as a SOCKS5 proxy (sslocal — I'm assuming that's what is running on the OP's server, considering the given ports), communicating with a remote endpoint (ssserver) which itself communicates with the actual target servers. shadowsocks handles SOCKS5 UDP ASSOCIATE, and then uses (SOCKS5) UDP on the same port as the (SOCKS5) TCP port.

This solution works as is (see note 1) for both TCP and UDP, except UDP might pose additional challenges: if a source creates "bigger than MTU" sized UDP packets (which a well-behaved client or server probably shouldn't do), they get fragmented. tc, which works earlier than netfilter on ingress and later than netfilter on egress, will see the fragments. The UDP port is not available in fragments, so no filter will be able to catch them and almost no limitation will happen. TCP, which naturally uses the MTU as its packet size limit (and does path MTU discovery anyway), doesn't suffer from this issue in most settings.

Here's a packet flow ascii picture (the whole picture would typically represent one client activity resulting in two flows, one to the left and one to the right of the proxy):

         traffic controlled            TCP self-adjusting / no UDP control
          ------------->                  <-------------
         /             \                 /              \
  clients               proxy            remote          ====== real servers
         \             / (sslocal)       \              / (ssserver)
          <-------------                  ------------->
         traffic controlled            already rate limited

There's no need to worry about the traffic with the remote server: outgoing traffic from the proxy to the remote server will of course be limited by the clients' incoming traffic, and incoming TCP traffic from the remote/servers to the proxy will typically adjust and behave like the traffic on the clients' side. UDP has no such possibility, unless the application protocol provides it. E.g.: if two video feeds over plain UDP arrive from the server side and exceed the limit on the client side, both clients' flows will likely be corrupted.
There would need to be application-level feedback to reduce bandwidth, which is out of this scope. In any case it would become much more complex, probably involving changes inside shadowsocks, to link the remote/server side's traffic to the client side's for tc usage. For SOCKS5 clients only sending data, limiting ingress from them is required to limit bandwidth; for SOCKS5 clients only receiving data, limiting egress to them is required. Unless the application in use is well known, both directions should be traffic controlled.

Traffic Control is a complex topic which I can barely scratch. I'll give two kinds of answers: a simple and crude one doing policing only (dropping excess), and a more complex one doing shaping (incl. delaying packets before having to drop them), with an IFB interface to work around the limitations of ingress.

The documentation below should be read to understand the concepts and the Linux implementation: http://www.tldp.org/HOWTO/Traffic-Control-HOWTO/

Also, this command implemented as a shell script (using mechanisms similar to those in this answer) can really do wonders too: https://github.com/magnific0/wondershaper

Simple and crude

A police action is used to drop any excess packet matching the ports (which is a crude method). It's usually used on ingress but works on egress too. Traffic is rate limited, but there might be fluctuations and unfair sharing among the various rate-limited clients (especially if UDP vs TCP is involved).

egress (outgoing packets)

The simplest qdisc that allows attaching filters is the prio qdisc, whose specific features won't really be used.
tc qdisc add dev eth0 root handle 1: prio

Simply adding the following filters (with 8 mbit/s <=> 1 MByte/s), one per port (u16 at 0 layer transport means "source port"), will get it done for TCP and UDP (see also note 2):

tc filter add dev eth0 parent 1: protocol ip basic match 'cmp(u16 at 0 layer transport eq 1081)' action police rate 8mibit burst 256k
tc filter add dev eth0 parent 1: protocol ip basic match 'cmp(u16 at 0 layer transport eq 1082)' action police rate 8mibit burst 256k

In case I misunderstood and there should be only one common limit for 1081 and 1082, use this instead of the two above, grouping them in the same action (which is easy with the basic/ematch filter), which will then handle them in a single token bucket:

tc filter add dev eth0 parent 1: protocol ip basic match 'cmp(u16 at 0 layer transport eq 1081) or cmp(u16 at 0 layer transport eq 1082)' action police rate 8mibit burst 256k

ingress (incoming packets)

Ingress is more limited than egress (it can't do shaping), but shaping wasn't done in the simple case anyway. Using it just requires adding an ingress qdisc (see note 3):

tc qdisc add dev eth0 ingress

The equivalent filters (u16 at 2 layer transport means "destination port"):

tc filter add dev eth0 ingress protocol ip basic match 'cmp(u16 at 2 layer transport eq 1081)' action police rate 8mibit burst 256k
tc filter add dev eth0 ingress protocol ip basic match 'cmp(u16 at 2 layer transport eq 1082)' action police rate 8mibit burst 256k

or for a single limit, instead of the two above:

tc filter add dev eth0 ingress protocol ip basic match 'cmp(u16 at 2 layer transport eq 1081) or cmp(u16 at 2 layer transport eq 1082)' action police rate 8mibit burst 256k

Cleaning

The tc egress, ingress or both settings can be replaced with their improved versions below; the previous settings should be cleaned first. To remove previously applied tc settings, simply delete the root and ingress qdiscs. Everything below them, including filters, will also be removed.
The default interface root qdisc with the reserved handle 0: will be put back.

tc qdisc del dev eth0 root
tc qdisc del dev eth0 ingress

More complex setup with classful qdiscs and an IFB interface

Using shaping, which can delay packets before having to drop them, should improve overall results. Hierarchy Token Bucket (HTB), a classful qdisc, will handle bandwidth, while below it Stochastic Fairness Queueing (SFQ) will improve fairness between clients when they compete within the restricted bandwidth.

egress

Here's an ascii picture describing the settings that follow:

              root 1:              HTB classful qdisc
             /   |   \
            /    |    \
           /     |     \
          /     1:20   1:30        HTB classes
         /     8mibit 8mibit
        /        |      |
       /        20:    30:
      /         SFQ    SFQ
  still 1:
  default       port   port
  incl. port    1081   1082
  1080

The limited bandwidths will not borrow extra available traffic (that was not asked for by the OP): that's why they aren't subclasses of a "whole available bandwidth" default class. The remaining default traffic, including port 1080, just stays at 1:, with no special handling. In different settings, where classes are allowed to borrow available bandwidth, those classes should be put below a parent class whose rate is set to an accurate value of the maximum available bandwidth, so they know how much to borrow. The configuration would then require fine-tuning for each case, so I kept it simple.
The htb classful qdisc:

tc qdisc add dev eth0 root handle 1: htb

The htb classes, attached sfq, and the filters directing to them:

tc class add dev eth0 parent 1: classid 1:20 htb rate 8mibit
tc class add dev eth0 parent 1: classid 1:30 htb rate 8mibit
tc qdisc add dev eth0 parent 1:20 handle 20: sfq perturb 10
tc qdisc add dev eth0 parent 1:30 handle 30: sfq perturb 10
tc filter add dev eth0 parent 1: protocol ip prio 1 basic match 'cmp(u16 at 0 layer transport eq 1081)' flowid 1:20
tc filter add dev eth0 parent 1: protocol ip prio 1 basic match 'cmp(u16 at 0 layer transport eq 1082)' flowid 1:30

or for a single limit, instead of the 6 commands above:

tc class add dev eth0 parent 1: classid 1:20 htb rate 8mibit
tc qdisc add dev eth0 parent 1:20 handle 20: sfq perturb 10
tc filter add dev eth0 parent 1: protocol ip prio 1 basic match 'cmp(u16 at 0 layer transport eq 1081)' flowid 1:20
tc filter add dev eth0 parent 1: protocol ip prio 1 basic match 'cmp(u16 at 0 layer transport eq 1082)' flowid 1:20

ingress

The ingress qdisc can't be used for shaping (e.g. delaying packets), only for dropping packets with filters as in the simple case. To get better control, a trick is available: the Intermediate Functional Block, which appears as an artificial egress interface where ingress traffic can be redirected with filters, but which otherwise has little interaction with the rest of the network stack. Once in place, egress features can be applied on it, even if some of them might not always be helpful, considering the real control of incoming traffic is not in the hands of the receiving system. So here I set up the ifb0 interface and then duplicate the above (egress) settings on it, to get a sort of ingress shaping that behaves better than mere policing.
Creating ifb0 (see note 4) and applying the same settings as the previous egress:

ip link add name ifb0 type ifb 2>/dev/null || :
ip link set dev ifb0 up
tc qdisc add dev ifb0 root handle 1: htb

Classes and filters directing to them:

tc class add dev ifb0 parent 1: classid 1:20 htb rate 8mibit
tc class add dev ifb0 parent 1: classid 1:30 htb rate 8mibit
tc qdisc add dev ifb0 parent 1:20 handle 20: sfq perturb 10
tc qdisc add dev ifb0 parent 1:30 handle 30: sfq perturb 10
tc filter add dev ifb0 parent 1: protocol ip prio 1 basic match 'cmp(u16 at 2 layer transport eq 1081)' flowid 1:20
tc filter add dev ifb0 parent 1: protocol ip prio 1 basic match 'cmp(u16 at 2 layer transport eq 1082)' flowid 1:30

or for a single limit, instead of the 6 commands above:

tc class add dev ifb0 parent 1: classid 1:20 htb rate 8mibit
tc qdisc add dev ifb0 parent 1:20 handle 20: sfq perturb 10
tc filter add dev ifb0 parent 1: protocol ip prio 1 basic match 'cmp(u16 at 2 layer transport eq 1081)' flowid 1:20
tc filter add dev ifb0 parent 1: protocol ip prio 1 basic match 'cmp(u16 at 2 layer transport eq 1082)' flowid 1:20

The redirection from eth0's ingress to ifb0's egress is done below. To optimize, only redirect the intended ports instead of all traffic; the actual filtering and shaping is done above in ifb0 anyway:

tc qdisc add dev eth0 ingress
tc filter add dev eth0 ingress protocol ip basic match 'cmp(u16 at 2 layer transport eq 1081)' action mirred egress redirect dev ifb0
tc filter add dev eth0 ingress protocol ip basic match 'cmp(u16 at 2 layer transport eq 1082)' action mirred egress redirect dev ifb0

Notes:

1. Tested using a few network namespaces on Debian 10 / kernel 5.3. The command syntax was also tested on a CentOS 7.6 container / kernel 5.3 (rather than 3.10).

2. u32 match ip sport 1081 0xffff could have been used instead to match source port 1081.
But it wouldn't handle the presence of an IP option. u32 match tcp src 1081 0xffff could handle it, but it actually requires the complex usage of three u32 filters, as explained in the man page. So I chose basic match in the end.

3. ingress has the reserved handle ffff: whether specified or not (any specified handle value is ignored), so I'd rather not specify it. Referencing ingress by parent ffff: can be replaced by just ingress, so that's what I chose.

4. When creating an IFB interface for the first time, the ifb module gets loaded, which by default automatically creates the ifb0 and ifb1 interfaces in the initial namespace, resulting in an error if the interface name ifb0 is requested, since it was already created as a side effect of loading the module. At the same time, this interface doesn't appear in a network namespace (e.g. a container) when the module is merely loaded, so creating it explicitly is still needed there. Adding 2>/dev/null || : solves it for both cases. Of course I'm assuming IFB support is really available.
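Putting the "simple and crude" variant together: a sketch of an idempotent apply function (clean first, then police both directions). The TC and DEV overrides are my additions for illustration; set TC="echo tc" to preview the generated commands without root or a real interface.

```shell
# Apply per-port egress+ingress policing at 8 mibit/s on ports 1081/1082.
# Deleting the root/ingress qdiscs first also removes attached filters,
# so rerunning the function never stacks duplicate rules.
apply_policing() {
    TC="${TC:-tc}"
    DEV="${DEV:-eth0}"
    $TC qdisc del dev "$DEV" root 2>/dev/null || :
    $TC qdisc del dev "$DEV" ingress 2>/dev/null || :
    $TC qdisc add dev "$DEV" root handle 1: prio
    $TC qdisc add dev "$DEV" ingress
    for port in 1081 1082; do
        # source port (offset 0) for egress, destination port (offset 2) for ingress
        $TC filter add dev "$DEV" parent 1: protocol ip basic \
            match "cmp(u16 at 0 layer transport eq $port)" \
            action police rate 8mibit burst 256k
        $TC filter add dev "$DEV" ingress protocol ip basic \
            match "cmp(u16 at 2 layer transport eq $port)" \
            action police rate 8mibit burst 256k
    done
}
```

Run `apply_policing` as root for real, or `TC="echo tc" apply_policing` to inspect the exact tc invocations first.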
Limit bandwidth on a specific port in CentOS 7?
1,428,257,586,000
I have observed the following behavior:

[ec2-user@ip-10-66-68-55 ~]$ uname -a
Linux ip-10-66-68-55 3.4.103-76.114.amzn1.x86_64 #1 SMP Fri Sep 12 00:57:39 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
[ec2-user@ip-10-66-68-55 ~]$ ulimit -H -n; ulimit -S -n
1000
1000
[ec2-user@ip-10-66-68-55 ~]$ cat /etc/security/limits.d/80-nofile.conf
ec2-user hard nofile 123456
ec2-user soft nofile 123456
[ec2-user@ip-10-66-68-55 ~]$ bash -c 'ulimit -H -n; ulimit -S -n'
# I expect this limit to be 1000, same as above. It is.
1000
1000
[ec2-user@ip-10-66-68-55 ~]$ sudo -u ec2-user bash -c 'ulimit -H -n; ulimit -S -n'
# I expected this limit to be 1000, the same as if I had not called sudo -u ec2-user.
# Instead, I get the one from limits.d/80-nofile.conf.
123456
123456

A quick search through the sudo man page did not show any direct reference to /etc/security/limits. I want to rely on this behavior to ensure that a daemon I care about runs with the limits set in 80-nofile.conf, but I don't know how reliable this is. Is it documented somewhere? What kinds of config files / variables affect this behavior? Can I tell sudo not to override the current limits?
You should probably take a look inside the /etc/pam.d/sudo file and check whether pam_limits.so is required in it or in any of the other files it includes. For example, the /etc/pam.d/sudo file on my system looks like this:

#%PAM-1.0
auth       required   pam_env.so readenv=1 user_readenv=0
auth       required   pam_env.so readenv=1 envfile=/etc/default/locale user_readenv=0
@include common-auth
@include common-account
@include common-session-noninteractive

Now you can look for pam_limits.so in the other files that are pulled in via the @include directive.
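A quick way to run that check, assuming the usual /etc/pam.d layout (the included file names vary: common-session* is the Debian convention, system-auth the Red Hat one, so adjust the list to what your @include lines actually reference):

```shell
# Look for pam_limits in the sudo PAM stack and in files it commonly
# includes; prints the matching lines, or a note when nothing matches.
grep -H pam_limits /etc/pam.d/sudo /etc/pam.d/common-session* \
    /etc/pam.d/system-auth 2>/dev/null \
    || echo "pam_limits not referenced in the checked files"
```

If a `session ... pam_limits.so` line shows up, sudo sessions will apply /etc/security/limits.conf and limits.d, which explains the 123456 values observed in the question.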
Does sudo always set process limits to the numbers from /etc/security/limits.d?
1,428,257,586,000
I am increasing the nproc limit for a development user account on my RHEL 6 system. After searching for a robust solution, I zeroed in on editing /etc/security/limits.conf with these two lines: @dev_user hard nproc 4096 @dev_user soft nproc 4096 In some cases I have to deal with a very large number of threads, which is why I want those numbers high, and this solution serves the purpose well. BUT my problem is that whenever I edit that file with sudo permissions, the change only takes effect after a system restart. This dev_user has been given root access via sudo only. Please suggest a solution that applies the change without a restart. The increased limits should also persist unless someone edits the file again.
The settings specified in /etc/security/limits.conf are applied by pam_limits.so (man 8 pam_limits). The PAM stack is only involved when a new session is created (login). Thus you only need to log out and back in for the settings to take effect; a full system restart is not required.
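To confirm this without rebooting, check the limit from inside a freshly created session. A minimal sketch (per the other answer here, sudo sessions on typical setups also run pam_limits, so `sudo -i -u dev_user` counts as a fresh session; also note that the @ prefix in limits.conf denotes a group, so `@dev_user` matches the group dev_user, not the user):

```shell
# Print the soft per-user process limit of the current session; run it
# inside a new login to verify the edited limits.conf values took effect.
current_nproc() {
    ulimit -u
}
current_nproc
```

For the question's setup, `sudo -i -u dev_user sh -c 'ulimit -u'` should print 4096 once the limits.conf lines match the account (or its group).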
Increasing nproc limit for a non-root user. Only effective after restart