date: int64 (values 1,220B to 1,719B)
question_description: string (lengths 28 to 29.9k)
accepted_answer: string (lengths 12 to 26.4k)
question_title: string (lengths 14 to 159)
1,656,029,979,000
Usually, ldapadduser accepts only one group name: # ldapadduser sysuser2 sysusers Can I add this user to two groups while creating the user? If I try running: # ldapadduser sysuser2 sysusers,wheel Warning : using command-line passwords, ldapscripts may not be safe Cannot resolve group sysusers,wh...
ldapadduser sets the user's primary group, which is unique. You should use ldapaddusertogroup for secondary groups.
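For example, a sketch using the ldapscripts tools, with the group names taken from the question:
# create the user with its unique primary group
ldapadduser sysuser2 sysusers
# then add it to any secondary groups
ldapaddusertogroup sysuser2 wheel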
LDAP: ldapadduser - can I add to two different groups?
1,656,029,979,000
I am trying to set up a couple of Linux workstations (RedHat 7), and I am trying to figure out how to set up authentication against an LDAP server with some unusual requirements. I basically know how to set up LDAP authentication using sssd, but I don't know how to restrict authentication to only certain users to meet...
Put those users into a group, then use a pam_access rule in /etc/security/access.conf to only allow logins if the user is in that group (and also for root, any sysadmins, and monitoring, if necessary) e.g. + : root wheel nagios : ALL + : yourusergrouphere : ALL - : ALL : ALL
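Note that pam_access only consults access.conf if it is enabled in the PAM stack; a minimal sketch, assuming a RHEL 7 style account section in /etc/pam.d/system-auth:
account     required      pam_access.so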
Using openldap authentication only for some users
1,656,029,979,000
I have a problem with creating my own attribute (e.g. dateOfExpire, generalized time), then adding this attribute to my own objectClass (e.g. dormitory), and after that adding this attribute and objectClass to the existing inetorgperson schema. This is what I added to the inetorgperson.ldif file: olcAttributeTypes: ( 2.5.18.1 NA...
You've marked the attribute as operational (with USAGE directoryOperation), hence the error. Operational attributes are not supposed to be modifiable by users; they require code running within OpenLDAP to update them based on some sort of event. Also, I would recommend against altering the standard schemas, such as in...
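For comparison, a sketch of a non-operational (userApplications) attribute definition in a custom schema; the OID is a placeholder you would replace with one from your own arc:
olcAttributeTypes: ( 1.3.6.1.4.1.99999.1.1 NAME 'dateOfExpire'
  DESC 'Expiry date'
  EQUALITY generalizedTimeMatch
  ORDERING generalizedTimeOrderingMatch
  SYNTAX 1.3.6.1.4.1.1466.115.121.1.24
  SINGLE-VALUE )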
Problem with creating own attribute on openldap
1,656,029,979,000
I have configured a local LDAP server running CentOS 7, using this article: https://www.itzgeek.com/how-tos/linux/centos-how-tos/step-step-openldap-server-configuration-centos-7-rhel-7.html. My LDAP server is now running without any issue. On the LDAP server the firewall is disabled; however, SELinux is enabled. Also, I migr...
Note that for a successful login two things have to work: the Name Service Switch, configured in the file /etc/nsswitch.conf, and the PAM config as defined in various files in the directory /etc/pam.d. Since getent seems to return correct data, your /etc/nsswitch.conf seems to be correct. Then I'd check the configuration in /etc/pam.d/common* w...
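For reference, the relevant /etc/nsswitch.conf lines typically look like this (a sketch; the service name depends on whether you use sssd or nslcd):
passwd: files sss
shadow: files sss
group:  files sss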
ldap users unable to ssh to the server
1,656,029,979,000
I'm currently administering OpenLDAP via the command line. I added user John and group devgroup, and I assigned John to the devgroup group. When I deleted the user (John) via the command line: ldapdelete -Y EXTERNAL -H ldapi:/// -D "cn=admin,dc=example,dc=local" "uid=john,dc=example,dc=local" The user is gone but not in previously ...
I believe you're just deleting the user with that command but not all their entries from the OU. It's my understanding that LDAP doesn't maintain linkages between disparate objects like you're thinking; rather, you're expected to do an ldapsearch first to produce lists of objects that you then want to act on, either using l...
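A sketch of the follow-up cleanup, assuming the group is a posixGroup and its DN follows the flat layout used in the question:
ldapmodify -Y EXTERNAL -H ldapi:/// <<EOF
dn: cn=devgroup,dc=example,dc=local
changetype: modify
delete: memberUid
memberUid: john
EOF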
OpenLDAP: Deleted user is still listed in the group
1,656,029,979,000
I have installed an OpenLDAP server with the memberof function on CentOS via slapd.conf. The relevant part of the config:
index objectClass eq,pres
index ou,cn,surname,givenname eq,pres,sub
index uidNumber,gidNumber,loginShell eq,pres
index uid,memberUid eq,pres,sub
index nisM...
I fixed this warning by reindexing:
systemctl stop slapd
rm /var/lib/ldap/alock
slapindex
chown -R ldap:ldap /var/lib/ldap/
systemctl start slapd
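To stop the warning from recurring, you can also add an equality index for memberOf next to the index lines shown in the question (slapd.conf style; remember to re-run slapindex after changing indexes):
index memberOf eq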
openLDAP bdb_equality_candidates: (memberOf) not indexed
1,381,441,038,000
We have LDAP and NFS set up in the lab. The lab has 16 machines and a server. All the LDAP users' home directories are on the server. Whenever an LDAP user logs in from any of the 16 machines, their home directory is mounted from the server on the client machine through NFS automounting. On all the client machines, we...
This was due to an incorrect group ID being assigned. When the system was installed freshly, it arbitrarily assigned the group ID 501 to another group. On all the remaining machines of the lab, the group ID 501 was assigned to vboxusers. That was the reason the LDAP users were unable to access the Virtual...
LDAP user not present in the desired group
1,381,441,038,000
I have installed OpenLDAP on an Ubuntu 20.04 Server using https://www.techrepublic.com/article/how-to-install-openldap-and-phpldapadmin-on-ubuntu-server-20-04/, and an Ubuntu 20.04 Desktop has a client installed using https://computingforgeeks.com/how-to-configure-ubuntu-as-ldap-client/. Login of local user "client" work...
Okay, got it working. I started all over on a new VM, replaced libnss-ldap with libnss-ldapd (note the d), as the comment on the instructions above says, and selected only passwd, group and shadow during the configuration step when the install asks which services to configure.
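In other words, a sketch of the package swap on Ubuntu 20.04 (the debconf services question comes from libnss-ldapd):
sudo apt-get remove libnss-ldap
sudo apt-get install libnss-ldapd
sudo dpkg-reconfigure libnss-ldapd   # select only passwd, group and shadow when asked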
LDAP client mixing up credentials
1,381,441,038,000
I'm trying to set up sudo-ldap in a clean CentOS 7 Docker environment. I've successfully set up sssd and PAM authentication, and it works. However, sudo-ldap works only if !authenticate is set:
dn: cn=test,ou=SUDOers,ou=People,dc=srv,dc=world
objectClass: top
objectClass: sudoRole
cn: test
sudoUser: test
sudoHost: ALL
s...
Interesting, order does matter in PAM. It works if pam_unix comes before pam_sss:
auth     sufficient pam_unix.so try_first_pass nullok
auth     sufficient pam_sss.so use_first_pass
password sufficient pam_unix.so try_first_pass use_authtok nullok sha512 shadow
password sufficient pam_sss.so us...
sudo-ldap works with !authenticate only
1,381,441,038,000
I have the following query against my directory: ldapsearch -x -H ldaps://example.com -D "cn=gitlab,ou=Service Accounts,dc=example,dc=com" -w foobar -b "ou=Persons,dc=example,dc=com" With the following olcAccess I get following results: dn: olcDatabase={1}mdb,cn=config olcAccess: {0}to dn.subtree="ou=Persons,dc=examp...
olcAccess: {0}to dn.children="ou=Persons,dc=example,dc=com" attrs=entry,uid,cn,userPassword,mail by dn="cn=gitlab,ou=Service Accounts,dc=example,dc=com" tls_ssf=128 read by * none break
olcAccess: {1}to dn.subtree="ou=Persons,dc=example,dc=com" by dn="cn=gitlab,ou=Service Accounts,dc=example,dc=com" tls_ssf=128 search...
Restricting openldap ldapsearch on attributes
1,381,441,038,000
I have a Mac I have installed OpenLDAP on (using MacPorts). I have gotten the system up and am able to create objects. The only schema I have configured in slapd.conf is core.schema. I am looking to add nis.schema, but when I try this the slapd -d3 command won't work for me. Specifically, it says: 5b994529 @(#)...
Based on https://github.com/openshift/openldap/blob/master/2.4.41/contrib/config/schema/nis.schema (among other references) saying: Depends upon core.schema and cosine.schema you'll need to include those before including nis.schema:
include /opt/local/etc/openldap/schema/core.schema
include /opt/local/etc/openldap/s...
openLDAP won't start after including second schema
1,381,441,038,000
I recently joined an organization and got privileges to add/remove entries and attributes in the LDAP (OpenDJ, an open-source Linux-based LDAP). So far I have made thousands of modifications and added attributes with no issues, but things got very awkward when I added an attribute to the LDAP which was created soon after I s...
The account cn=Directory Manager is created at installation time and used to run the rest of the setup. (I forget the details, but OpenDJ allows having several such admin entries.) The point is that those admin entries are not subject to any access control or constraints. In particular, they can mess up the datab...
OpenDJ, an open-source LDAP database: handling and issues
1,381,441,038,000
I am PXE-installing Ubuntu over a network, unattended. I want LDAP installed as well, but I need to provide the LDAP DB root password in the seed: ldap-auth-config ldap-auth-config/rootbindpw password How can I keep this secure? I don't want to provide the plain-text password on this line.
AFAIK, it's not possible. You can preseed a pre-encrypted password for the root and the first user accounts. You can even do it with the grub password (and a few others too). e.g.
d-i passwd/root-password-crypted password [MD5 hash]
d-i passwd/user-password-crypted password [MD5 hash]
d-i grub-installer/password-cry...
How to provide password in a secure way to LDAP seed?
1,381,441,038,000
Initially I installed FreeRADIUS from the stable branch as follows:
apt-get install python-software-properties
apt-add-repository ppa:freeradius/stable-3.0
apt-get update
apt-get install freeradius make
And I thought that all modules were also installed; but now, when I need to get FreeRADIUS to authenticate against...
To have support for LDAP in FreeRADIUS, install the corresponding package with the command: sudo apt-get install freeradius-ldap Also, relating to your doubts about mixed versions, to check the installed version, do: dpkg -l | grep freeradius and/or: dpkg -l freeradius-ldap
Installing Freeradius-LDAP 3.x from PPA - Repository
1,381,441,038,000
I have a couple of LDAP servers, redundant with replication enabled. I'm having trouble with Apache Directory Studio not being able to fetch the base DN of one of these LDAP servers, showing an empty Root DSE. For the other server, however, it shows the whole DIT without problems. I found that the problem is the root...
Having multi-master replication in place, modifying the problematic domain entry by just adding a simple description on the working server seemed to solve the issue. The entry got replicated to the troublesome server automatically. dn: dc=example,dc=com changetype: modify add: description description: example - Now i...
Existing LDAP object not showing in ldapsearch
1,381,441,038,000
I can't find an example of how to use the ldapscripts command ldapmodifyuser, and I'm not familiar enough with ldapmodify to figure it out. For example, how can I use ldapmodifyuser to change a user's givenName? Here's my attempt:
~$ sudo ldapmodifyuser 9928892
# About to modify the following entry :
dn: uid=9928892,ou...
You did not specify changetype: modify and replace: givenName. It should have been:
sudo ldapmodifyuser 9928892
# About to modify the following entry :
dn: uid=9928892,ou=Users,dc=thisplace,dc=com
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: shadowAccount
uid: 9928892
sn: FUJI
givenName: GABUTO
...
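A complete invocation might look like this (a sketch, assuming ldapmodifyuser reads the LDIF change statements from standard input, as the answer implies):
sudo ldapmodifyuser 9928892 <<EOF
changetype: modify
replace: givenName
givenName: GABUTO
EOF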
how to use ldapmodifyuser from ldapscripts to change a value
1,381,441,038,000
I have installed OpenLDAP on Debian 11 as well as LDAP Account Manager 8.2, which seems to work well, but I have a question: On my login page, it says No default profile set. Please set it in the server profile configuration. - but I can't find where to do that. Where do I set this? I have tried googling it, but it ju...
On the login page of LDAP Account Manager (online demo here), click on the LAM configuration link in the right top corner. Then choose Edit server profiles followed by Manage server profiles. Here, you can add/rename/delete existing profiles and set a default profile.
LDAP Account Manager: Default profile?
1,381,441,038,000
I need to practice with LDAP, so I think it is a good idea to install an LDAP server just to do some tests. On the client side I'm using a Linux Mint distribution, and I have installed all the software packages as described in this link. In my company an Active Directory service is available, but obviously my user has no...
You don't need anything other than a normal user account to query Active Directory through LDAP. (Some fields will be inaccessible, but the majority of them are relatively public access.)
authacct='[email protected]'   # Authentication (your AD account)
ldapuri='ldap://ad.contoso.com'      # Address of any AD server
...
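Putting those variables to use, a query might look like this (a sketch; the base DN, filter and attribute list are assumptions):
ldapsearch -H "$ldapuri" -D "$authacct" -W \
    -b 'dc=contoso,dc=com' '(sAMAccountName=jdoe)' cn mail memberOf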
What LDAP server can I install only for test LDAP authentication?
1,381,441,038,000
I'm in a Docker container, in a VM (Ubuntu Server). I have created an OpenLDAP server. I simply want to know how to create groups and users. My DIT tree: dc=company,dc=com with ou=group1, ou=group2, ou=group3, ou=group4, and in each group I have many users: cn=user1, cn=user2, etc. Thank you
Create an LDIF for each object like this:
dn: uid=cx,ou=group1,ou=People,dc=company,dc=com
loginShell: /bin/bash
objectClass: account
objectClass: posixAccount
objectClass: top
objectClass: shadowAccount
userPassword:: ...pwhash.....
cn: Your Name
gecos: The Gecos field (infos),,,
gidNumber: 100
uid: xy
homeDirectory:...
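For the groups themselves, a matching sketch (the DN, gidNumber and member are assumptions):
dn: cn=group1,ou=Groups,dc=company,dc=com
objectClass: top
objectClass: posixGroup
cn: group1
gidNumber: 10001
memberUid: xy
Load either LDIF with something like: ldapadd -x -D cn=admin,dc=company,dc=com -W -f entry.ldif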
How I can create accounts with command lines for my OpenLDAP server?
1,381,441,038,000
Installed OpenLDAP with this command:
# yum -y install openldap openldap-clients openldap-servers
Copied reference data structures:
# cp /usr/share/openldap-servers/DB_CONFIG.example /var/lib/ldap/DB_CONFIG
Generated a password hash for 'test' by:
# slappasswd
In the file /etc/openldap/slapd.d/cn=config/olcDatabase={2}...
You also need to change the following lines in the file /etc/openldap/slapd.d/cn=config/olcDatabase={0}config.ldif:
# olcAccess: {0}to * by dn.base="gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth" manage by dn.base="cn=admin,dc=mydomain,dc=com" manage by * none
Note the added manage by dn.base="cn=admin,dc=my...
CentOS 7: ldap_add: Insufficient access (50)
1,381,441,038,000
So I have set up LDAP login on every server at my work successfully except one. Of course there has to be that one! And I want to close my Jira ticket, but I can't figure out what the issue is. The system is an Ubuntu 10 x32. Here is the output of auth.log: Oct 29 10:56:33 localhost sshd[2560]: Invalid user LDAPUSERN...
For some reason /etc/nslcd.conf was not created during installation. I copied it from another Ubuntu 10 server which had a working LDAP setup, but it wouldn't start because of the line #nss_initgroups_ignoreusers ALLLOCAL, which is odd, because it's on the other Ubuntu 10 server, which I set up as well with the same co...
Ldap SSH Login not working - Same configs worked on 20+ other servers - Ubuntu
1,680,785,256,000
I have successfully added the recipe openldap to my Yocto-based Linux distribution, with the instruction: IMAGE_INSTALL += "openldap" After that I created a path/to/my-layer/recipes-support/openldap/openldap_%.bbappend file and put in it the instruction: INSANE_SKIP_${PN} += "already-stripped" The previous setting s...
The recipe meta-openembedded/meta-oe/recipes-support/openldap_2.4.50.bb present in my Yocto base system build contains many packages, not only the package openldap. One of the others is openldap-bin, and this is the package which adds ldapsearch to the image. So I changed the assignment to IMAGE_INST...
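Presumably the finished assignment looks something like this (a sketch completing the truncated answer):
IMAGE_INSTALL += "openldap openldap-bin"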
How to add utility ldapsearch to yocto image?
1,680,785,256,000
I have configured OpenLDAP 2.4.23 as a proxy to multiple separate Active Directory servers; it works fine when each AD has a different suffix/search base. I have a use case to fulfil: one application server is only able to check ONE LDAP server and it allows only ONE search base, but the users are in several sep...
Okay, found a solution: an entry point of type meta, and several subentries of type ldap with suffixmassage.
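One way to realize this with back-meta alone, per slapd-meta(5), looks roughly like the following slapd.conf sketch (all names are placeholders; whether the original setup used subordinate back-ldap databases instead is not shown):
database        meta
suffix          "dc=merged,dc=example"

uri             "ldap://ad1.example.com/ou=ad1,dc=merged,dc=example"
suffixmassage   "ou=ad1,dc=merged,dc=example" "dc=ad1,dc=local"

uri             "ldap://ad2.example.com/ou=ad2,dc=merged,dc=example"
suffixmassage   "ou=ad2,dc=merged,dc=example" "dc=ad2,dc=local"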
LDAP : one suffix : search multiple separate Active Directory
1,680,785,256,000
I can start slapd on FreeBSD 11 perfectly fine, but it won't run on startup. Here is what I put in my rc.conf:
slapd_enable="YES"
slapd_flags='-h "ldap://1.2.3.4/ ldapi://%2fvar%2frun%2fopenldap%2fldapi/"'
slapd_sockets="/var/run/openldap/ldapi"
1.2.3.4 is replaced with my actual public IP. I have tried ...
I didn't post this until I had searched for days, and I just now found the answer. If no one else finds this useful, I'll end up deleting, but here it is: https://forums.freebsd.org/threads/58365/ Basically, if networking isn't up yet, then it cannot bind and will fail. The solution is to edit /usr/local/etc/rc.d/slap...
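Per that thread, the edit amounts to something like this in /usr/local/etc/rc.d/slapd (a sketch; the exact REQUIRE line may differ on your system):
# before:
# REQUIRE: FILESYSTEMS ldconfig
# after:
# REQUIRE: NETWORKING FILESYSTEMS ldconfig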
slapd doesn't start automatically despite rc.conf entry
1,680,785,256,000
I have several machines running Debian that I'm configuring to work with Kerberos and LDAP. I thought I would automate using rsync. At first I tried a basic rsync clone excluding directories and files such as /run, /sys, /etc/fstab, /etc/hosts, etc. That failed -- somehow, some file specifying a UUID got copied (and ...
In general, you should use the distribution's package manager (apt) to install software, and proper configuration management tools (eg, Ansible/Chef/Puppet as mentioned in a comment, or Debian's local debconf) to propagate site-specific information and files. Copying files in bulk is not a good approach. If you have r...
How to automate (copy) LDAP/Kerberos install
1,680,785,256,000
I keep getting an invalid syntax error when trying to create a user in OpenLDAP (CentOS 7). This is a new install of OpenLDAP for testing purposes. So far I've managed to create a group called "Lab Staff", and now I'm trying to add a user to it. Here is the LDIF file:
dn: uid=lsuarez,ou=Lab Staff,dc=sftest,dc=net
objec...
You need to import the schema for inetOrgPerson into slapd. I have no idea about OpenLDAP installation on CentOS 7, but if you have a file /etc/ldap/schema/inetorgperson.ldif and dynamic slapd configuration (/etc/ldap/slapd.d/), it might accept the following command (run as root). ldapadd -Y EXTERNAL -H ldapi:/// -f /...
OpenLDAP: Invalid syntax error when trying to add LDIF
1,680,785,256,000
I'm using openldap + nslcd to connect to a LDAP server for authentication of some users (these users would want their passwords and most of their configuration shared over many devices). I don't control the LDAP server. However, the synchronization came after the users already had dual accounts, so names of home folde...
I think you cannot implement individual overrides with the map directive in nslcd.conf(5); such a mapping is applied to the whole passwd map. However, depending on the order of module names in /etc/nsswitch.conf, you could set a local passwd entry with a different home directory, which has higher precedence, in the file /etc/passwd. Ex...
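For instance, a sketch (the UID/GID and paths are made up):
# /etc/nsswitch.conf: files consulted before LDAP
passwd: files ldap

# /etc/passwd: local entry shadowing the LDAP one, with a different home
alice:x:10001:10001:Alice:/home/alice-local:/bin/bash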
LDAP per-user overrides
1,680,785,256,000
In our organization we have set up LDAP using OpenLDAP; I access it with the phpLDAPadmin GUI. We have a requirement to allow some users access only from a specific IP address. I searched but am still not able to find the exact solution. example.ldif:
dn: cn=xyz,ou=Person,dc=example,dc=com
cn: xyz
gidnumber: 570
homedirectory: /home/users...
You can do this by creating appropriate ACLs in your directory. Take a look at this forum thread, in which the OP wants to have IP address-based (and also filter-based) access control to the directory. There are examples of IP-based ACLs which might help you. Perhaps something like this: access to * by peer...
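For example, an ACL of roughly this shape (a sketch; the address and the protected subtree are assumptions):
access to dn.subtree="ou=Person,dc=example,dc=com"
    by peername.ip=192.168.1.50 read
    by * none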
How to restrict user based on ip address in openldap
1,680,785,256,000
I want to apply a password policy to one particular user, using an LDIF file. Here is my test.ldif file:
dn: cn=purval,ou=Users,dc=xxx,dc=com
changetype: modify
add: pwdPolicySubentry
pwdPolicySubentry: Defaultpolicy
The command is:
ldapmodify -x -D "cn=admin,dc=xxx,dc=com" -w password -f /tmp/addpolicy.ldif
The error...
ldapmodify is telling you that the word Defaultpolicy is not a valid value for the element pwdPolicySubentry. To fix this you need to identify what your schema tells you the valid values can be, and use one of those valid values. Here is an example value: pwdPolicySubentry: cn=Default Password Policy,cn=Password Polic...
error during applying password policy with ldapmodify
1,680,785,256,000
As I mentioned in another thread, I have an LDAP system supporting two dozen Linux servers. When the LDAP server is down for various reasons (firewall rule changes, power outage, etc.), the rest of my systems hang. I am hoping to build in some redundancy, and stumbled upon articles on using nscd or sssd for caching lo...
If I understood correctly, nscd caches password for 10 min for each login. No, it only caches the user information that would be found in /etc/passwd, which is generally everything except the password. That would be the shadow map – but most LDAP configurations do not use shadow; they don't reveal the password hash ...
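For reference, the setting under discussion lives in /etc/nscd.conf and is per-map (values in seconds; these numbers are just an illustration):
positive-time-to-live passwd 3600
positive-time-to-live group  3600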
Understanding risks of setting nscd positive-time-to-live to a longer duration
1,680,785,256,000
I'm trying to configure LDAP using slapd and phpLDAPadmin (the non-official version from GitHub) on my Ubuntu server. slapd and phpLDAPadmin work perfectly, but I have a problem when logging in with the admin user from slapd: phpLDAPadmin wants to use an email for login, but slapd doesn't have a mail attribut...
Use ldapadd to create a "normal" LDAP entry with this DN, with whatever attributes you like. The DN will retain its "superuser" privileges. (However, once it becomes a normal entry, its password will be checked against the 'userPassword' attribute – no longer against the rootPW from configuration. So when you're creat...
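A minimal sketch of such an entry, assuming the rootdn is cn=admin,dc=example,dc=com and using inetOrgPerson so it can carry the mail attribute phpLDAPadmin asks for:
dn: cn=admin,dc=example,dc=com
objectClass: inetOrgPerson
cn: admin
sn: admin
mail: admin@example.com
userPassword: {SSHA}...generate-with-slappasswd...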
Cant login on PhpLdapAdmin using admin user
1,680,785,256,000
I have created a new schema which looks like this:
attributetype ( 2.25.3236588 NAME 'x-candidateNumber'
    DESC 'Candidate number'
    EQUALITY caseIgnoreMatch
    SYNTAX 1.3.6.1.4.1.1466.115.121.1.15{32768} )
attributetype ( 2.25.3536282 NAME 'x-candidateFullName'
    DESC 'Candidate ...
The first problem I see is that you're trying to create an object with this distinguishedName: dn: cn=ekcrlegalofficer That's invalid; the schema needs to exist "inside" an existing hierarchy. The "no global superior knowledge" error means "I have no idea where to place this object". Take a look at the existing schem...
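With the dynamic configuration, custom schema is normally added under cn=schema,cn=config, along these lines (a sketch; the attribute definition is abbreviated from the question):
dn: cn=candidate,cn=schema,cn=config
objectClass: olcSchemaConfig
cn: candidate
olcAttributeTypes: ( 2.25.3236588 NAME 'x-candidateNumber'
  DESC 'Candidate number' EQUALITY caseIgnoreMatch
  SYNTAX 1.3.6.1.4.1.1466.115.121.1.15{32768} )
Loaded with: ldapadd -Y EXTERNAL -H ldapi:/// -f candidate.ldif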
Getting "ldap_add: Server is unwilling to perform (53) additional info: no global superior knowledge" when trying to add new objectClass to openLDAP
1,680,785,256,000
I have set up OpenLDAP 2.4 on Debian 11, and I want to change a few parameters, like the log level and log file, which appears to be really simple: # man slapd.conf SLAPD.CONF(5) File Formats Manual SLAPD.CONF(5) ...
See the slapd-config(5) manual page. Nearly all settings retain the same names in the LDAP configuration backend, only with the olc namespace prefix. (And you're really supposed to edit these via LDAP or at least via slapmodify, not by hand.) olcLogFile: <filename> Specify a file for recording sla...
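For example, changing the log level through the config backend (a sketch; 'stats' is one common level):
ldapmodify -Y EXTERNAL -H ldapi:/// <<EOF
dn: cn=config
changetype: modify
replace: olcLogLevel
olcLogLevel: stats
EOF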
How to set loglevel in slapd.d?
1,680,785,256,000
I have a system with Debian 11, and want to experiment with setting it up as an LDAP client for user authentication, following this: https://linuxhint.com/configure-ldap-client-debian/. However, about half-way through the configuration that happens as part of the installation, I pressed the wrong key, which terminate...
The dialog boxes were likely from debconf to help configure the installed packages. If so, the options were saved into the debconf database and you should be able to see them using debconf-get-selections | grep ldap. You can change the options using debconf-set-selections. The debconf options should be removed when r...
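In practice this usually boils down to re-running the packages' configuration dialogs, e.g. (package names assumed from the linked guide):
sudo dpkg-reconfigure libnss-ldap
sudo dpkg-reconfigure libpam-ldap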
how to re-install LDAP client in Debian?
1,680,785,256,000
I have compiled the current version of OpenLDAP on a fresh RHEL 8 instance and am setting it up with a signed SSL certificate. When I start slapd, I get unable to open pid file "/var/run/openldap/slapd.pid": 2 (No such file or directory). Surprise, surprise, the openldap directory does not exist. I created the directo...
Your first error is: unable to open pid file "/var/run/openldap/slapd.pid": 2 (No such file or directory) There are a couple of ways to resolve this error. Fix the filesystem: slapd is trying to write a pid file to /var/run/openldap/slapd.pid, but the directory /var/run/openldap doesn't exist. /var/run is a symlink to...
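On a systemd-based system such as RHEL 8, the directory can be recreated on every boot with a tmpfiles.d entry (a sketch; owner ldap:ldap assumed):
# /etc/tmpfiles.d/slapd.conf
d /run/openldap 0755 ldap ldap -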
Installing OpenLDAP on RHEL 8 -- slapd.pid problem
1,680,785,256,000
I'm finally moving from RHEL 7 to 8. I have a new 8.6 installation and I've compiled OpenLDAP 2.5.13 and done the basic setup. As I'm moving from an existing OpenLDAP instance, I exported my LDAP settings on the old server. This new OpenLDAP uses mdb instead of hdb so I changed all instances of that in the exported ld...
Using OpenLDAP 2.5.13 from https://ltb-project.org/documentation/index.html on CentOS 8 stream, I'm able to get your LDIF to load with the following changes: I commented out all of the path-related configurations from cn=config because they didn't apply on my system, and I didn't want to bother setting up certificate...
Moving OpenLDAP to new server -- getting olcBackend error
1,680,785,256,000
I am trying to build SimGear from the FlightGear project using the download_and_compile.sh script (which uses CMake to build the binaries). The build went fine so far, but when the script tried linking the built object files together into a library, I got tons of //usr/lib/x86_64-linux-gnu/libldap_r-2.4.so.2: warning: und...
At least in Debian (and derivatives thereof), a shared library's development files are split off into a separate binary package: If there are development files associated with a shared library, the source package needs to generate a binary development package named libraryname-dev, or if you need to support multiple ...
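So the missing symbols are typically resolved by installing the development package for libldap (Debian/Ubuntu package name):
sudo apt-get install libldap2-dev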
Weird linking issue with libldap using cmake
1,680,785,256,000
I configured sudoers LDAP (with OpenLDAP as the backend LDAP) using the instructions provided on the official sudoers website (link). I also restricted /etc/sudo-ldap.conf with 600 root:root permissions so that the normal users on the machine won't be able to know the LDAP server to which they are talking. But the LDAP ser...
I figured out the right approach: create a new user in LDAP, say cn=sudoread,dc=example,dc=com:
cat > /tmp/tmplif <<EOF
dn: cn=sudoread,dc=example,dc=com
objectClass: top
objectClass: person
cn: sudoread
sn: read
userPassword: sudoread
EOF
$ ldapadd -H ldap://localhost -f /tmp/tmplif -D 'cn=root,dc=example,dc=com' -W ...
Ideal way to restrict querying of sudoers ldap configuration by anonymous users
1,409,764,135,000
This answer explains the actions taken by the kernel when an OOM situation is encountered based on the value of sysctl vm.overcommit_memory. When overcommit_memory is set to 0 or 1, overcommit is enabled, and programs are allowed to allocate more memory than is really available. Now what happens when we run out of m...
If memory is used up by processes to the extent that it can threaten the stability of the system, the OOM killer comes into the picture. NOTE: It is the task of the OOM Killer to continue killing processes until enough memory is freed for the smooth functioning of the rest of the process that...
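You can inspect the current ranking yourself; a small sketch that lists the highest-scored processes (most likely to be killed first):
for d in /proc/[0-9]*; do
  printf '%s %s\n' "$(cat "$d/oom_score")" "$(cat "$d/comm")"
done 2>/dev/null | sort -rn | head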
How does the OOM killer decide which process to kill first?
1,409,764,135,000
The following report is thrown in my messages log:
kernel: Out of memory: Kill process 9163 (mysqld) score 511 or sacrifice child
kernel: Killed process 9163, UID 27, (mysqld) total-vm:2457368kB, anon-rss:816780kB, file-rss:4kB
It doesn't matter whether this problem is for httpd, mysqld or postfix, but I am curious how can I ...
The kernel will have logged a bunch of stuff before this happened, but most of it will probably not be in /var/log/messages, depending on how your (r)syslogd is configured. Try:
grep oom /var/log/*
grep total_vm /var/log/*
The former should show up a bunch of times and the latter in only one or two places. That is ...
Debug out-of-memory with /var/log/messages
1,409,764,135,000
My computer recently ran out of memory (a not-unexpected consequence of compiling software while working with large GIS datasets). In the system log detailing how it dealt with the OOM condition is the following line: Out of memory: Kill process 7429 (java) score 259 or sacrifice child What is that or sacrifice chil...
From the source file oom_kill.c I found that the OOM Killer, after such a message is written to the system log, checks the children of the identified process and evaluates whether it is possible to kill one of them in place of the process itself. Here is a comment extracted from the source file explaining this:
/*
 * If any of p's children has a differ...
What is the Out of Memory message: sacrifice child?
1,409,764,135,000
In our cluster, we are restricting our processes' resources, e.g. memory (memory.limit_in_bytes). I think, in the end, this is also handled via the OOM killer in the Linux kernel (it looks like it from reading the source code). Is there any way to get a signal before my process is killed? (Just like the -notify option...
It's possible to register for a notification for when a cgroup's memory usage goes above a threshold. In principle, setting the threshold at a suitable point below the actual limit would let you send a signal or take other action. See: https://www.kernel.org/doc/Documentation/cgroup-v1/memory.txt
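On a current system with cgroup v2 a simpler route is available: set memory.high below memory.max and watch memory.events, whose counter changes generate file-modified notifications. A sketch, assuming the cgroup path and that inotify-tools is installed:
CG=/sys/fs/cgroup/mygroup
echo 900M > "$CG/memory.high"   # throttle/notify point below the hard limit
echo 1G   > "$CG/memory.max"    # hard limit (OOM kill)
# react whenever the event counters change, e.g. signal the job
inotifywait -m -e modify "$CG/memory.events" |
while read -r _; do
  grep '^high ' "$CG/memory.events"   # increments each time the high mark is hit
done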
receive signal before process is being killed by OOM killer / cgroups
1,409,764,135,000
Having some problems with httpd (Apache/2.2.29) memory usage. Over time, memory usage in the httpd processes creep up until it's eventually at 100%. Last time I restarted httpd was about 24 hours ago. Output from free -m is: [ec2-user@www ~]$ free -m total used free shared buffers c...
Here's what I've done to 'solve' it: set MaxClients 7, based on (1740.8 MB memory on the server - 900 MB for MySQL + other stuff) / 111 MB average usage per httpd process, which is roughly 7.57. Therefore:
<IfModule prefork.c>
StartServers       8
MinSpareServers    5
MaxSpareServers   20
ServerLimit      256
...
httpd memory usage
1,409,764,135,000
Running some Linux servers with single or just a few vital system service daemons, I would like to adjust the OOM killer for those daemonized processes in case something odd happens. For example, today some Ubuntu server running MySQL got a killed MySQL daemon because tons of apt-checker processes were consuming all m...
Several modern dæmon supervision systems have a means for doing this. (Indeed, since there is a chain-loading tool for the job, arguably they all have a means for doing this.) Upstart: use oom score in the job file: oom score -500. systemd: use the OOMScoreAdjust= setting in the service unit. You can use service unit...
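For the systemd case, a minimal drop-in sketch (the service name and path are placeholders):
# /etc/systemd/system/mysql.service.d/oom.conf
[Service]
OOMScoreAdjust=-500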
How to set OOM killer adjustments for daemons permanently?
1,409,764,135,000
I have 5 million files which take up about 1TB of storage space. I need to transfer these files to a third party. What's the best way to do this? I have tried reducing the size using .tar.gz, but even though my computer has 8GB RAM, I get an "out of system memory" error. Is the best solution to snail-mail the files ov...
Additional information provided in the comments reveals that the OP is using a GUI method to create the .tar.gz file. GUI software often includes a lot more bloat than the equivalent command-line software, or performs additional unnecessary tasks for the sake of some "extra" feature such as a progress bar. ...
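From the command line, tar streams the data and needs almost no memory regardless of the archive size (a sketch; paths are placeholders):
tar -czf archive.tar.gz /path/to/files/
# or split the stream into chunks for easier transfer:
tar -cz /path/to/files/ | split -b 4G - archive.tar.gz.part-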
Memory problems when compressing and transferring a large number of small files (1TB in total)
1,409,764,135,000
So, I thought this would be a pretty simple thing to locate: a service / kernel module that, when the kernel notices userland memory is running low, triggers some action (e.g. dumping a process list to a file, pinging some network endpoint, whatever) within a process that has its own dedicated memory (so it won't fail...
What you are asking for is, basically, a kernel-based callback on a low-memory condition, right? If so, I strongly believe that the kernel does not provide such a mechanism, and for a good reason: when low on memory, it should immediately run the only thing that can free some memory, the OOM killer. Any other programs can...
How to trigger action on low-memory condition in Linux?
1,409,764,135,000
On one of our MySQL masters, the OOM Killer was invoked and killed the MySQL server, which led to a big outage. Following is the kernel log:
[2006013.230723] mysqld invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0
[2006013.230733] Pid: 1319, comm: mysqld Tainted: P 2.6.32-5-amd64 #1
[2006013.230735] Call Trace:...
Linux does memory overcommit. That means it allows processes to request more memory than is really available on the system. When a program tries to malloc(), the kernel says "OK, you got the memory", but doesn't reserve it. The memory will only be reserved when the process writes something into this space. To see the diffe...
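You can see the difference between committed and actually available memory in /proc/meminfo:
grep -E 'CommitLimit|Committed_AS' /proc/meminfo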
OOM Killer - killed MySQL server
1,409,764,135,000
It is explained here: Will Linux start killing my processes without asking me if memory gets short? that the OOM-Killer can be configured via overcommit_memory and that: 2 = no overcommit. Allocations fail if asking too much. 0, 1 = overcommit (heuristically or always). Kill some process(es) based on some heuristics ...
Consider this scenario: You have 4GB of memory free. A faulty process allocates 3.999GB. You open a task manager to kill the runaway process. The task manager allocates 0.002GB. If the process that got killed was the last process to request memory, your task manager would get killed. Or: You have 4GB of memory free...
Why can't the OOM-Killer just kill the process that asks for too much?
1,409,764,135,000
I'm trying to use the systemd infrastructure to kill my memory-leaking service when its memory usage reaches some value. The configuration file used is this:
[Unit]
Description="Start memory gobbler"
After=network.target
MemoryAccounting=true
MemoryHigh=1024K
MemoryMax=4096K

[Service]
ExecStart=/data/memgoble 8388600...
You have the config parameters in the wrong section. If you look in your logs, you should see:
Unknown lvalue 'MemoryAccounting' in section 'Unit'
Unknown lvalue 'MemoryHigh' in section 'Unit'
Unknown lvalue 'MemoryMax' in section 'Unit'
https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html ...
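Moved into the [Service] section, the unit would read (a sketch based on the question's values):
[Unit]
Description="Start memory gobbler"
After=network.target

[Service]
ExecStart=/data/memgoble 8388600
MemoryAccounting=true
MemoryHigh=1024K
MemoryMax=4096K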
systemd memory limit not working/example
1,409,764,135,000
I have a perplexing problem. I have a library which uses sg for executing customized CDBs. There are a couple of systems which routinely have issues with memory allocation in sg. Usually, the sg driver has a hard limit of around 4mb, but we're seeing it on these few systems with ~2.3mb requests. That is, the CDBs ...
The 1 GiB limit for Linux kernel memory in a 32-bit system is a consequence of 32-bit addressing, and it's a pretty stiff limit. It's not impossible to change, but it's there for a very good reason; changing it has consequences. Let's take the wayback machine to the early 1990s, when Linux was being created. Back in t...
memory limit of the Linux kernel
1,409,764,135,000
I have a 250 MB text file, all in one line. In this file I want to replace a characters with b characters: sed -e "s/a/b/g" < one-line-250-mb.txt It fails with: sed: couldn't re-allocate memory It seems to me that this kind of task could be performed inline without allocating much memory. Is there a better tool for ...
Yes, use tr instead: tr 'a' 'b' < file.txt > output.txt sed deals in lines so a huge line will cause it problems. I expect it is declaring a variable internally to hold the line and your input exceeds the maximum size allocated to that variable. tr on the other hand deals with characters and should be able to handle ...
Basic sed command on large one-line file: couldn't re-allocate memory
1,409,764,135,000
I'm running Gentoo on my server, and I've just upgraded from kernel 4.4.39 to 4.9.6, with the kernel configuration essentially unchanged. My system log is filling up with error reports such as the following: [50547.483577] ksoftirqd/0: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK) [50547....
Your problem is shown in this line: [50547.483932] Normal free:1376kB min:3660kB low:4572kB high:5484kB active_anon:0kB inactive_anon:0kB active_file:227508kB inactive_file:96kB unevictable:0kB writepending:4104kB present:892920kB managed:855240kB mlocked:0kB slab_reclaimable:531548kB slab_unreclaimable:25576kB kernel...
System unable to allocate memory even though memory is available
1,409,764,135,000
I have found that when running into an out-of-memory (OOM) situation, my Linux box's UI freezes completely for a very long time. I have set up the magic SysRq key using echo 1 | tee /proc/sys/kernel/sysrq, and on encountering an OOM / UI-unresponsive situation I was able to press Alt-SysRq-f, which, as the dmesg log showed, causes...
The reason the OOM killer is not automatically called is that the system, albeit completely slowed down and unresponsive already when close to out-of-memory, has not actually reached the out-of-memory situation. Oversimplified, the almost-full RAM contains 3 types of data: kernel data, that is essential pages of e...
Why does linux out-of-memory (OOM) killer not run automatically, but works upon sysrq-key?
1,409,764,135,000
If I disable memory overcommit by setting vm.overcommit_memory to 2, by default the system will allow to allocate the memory up to the dimension of swap + 50% of physical memory, as explained here. I can change the ratio by modifying vm.overcommit_ratio parameter. Let's say I set it to 80%, so 80% of physical memory m...
What will the system do with the remaining 20%? The kernel will use the remaining physical memory for its own purposes (internal structures, tables, buffers, caches, whatever). The memory overcommitment setting handles userland applications' virtual memory reservations; the kernel doesn't use virtual memory but physica...
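You can watch how the ratio feeds into the commit limit directly (a sketch; per proc(5), CommitLimit is roughly swap + physical RAM * overcommit_ratio / 100):
sysctl -w vm.overcommit_memory=2 vm.overcommit_ratio=80
grep CommitLimit /proc/meminfo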
Where the remaining memory of vm.overcommit_ratio goes?
1,409,764,135,000
On my Debian VM with 512 MB RAM and 348 MB swap, what will happen if I open a 1 GB file in an editor and run out of memory? Will it crash the system? Or if not, how will Linux handle this? Wouldn't it be wise to install Swapspace so that, if needed, enough swap is created automatically and dynamically? sud...
It depends on the settings you're running with, in particular memory overcommit (/proc/sys/vm/overcommit_memory; see man 5 proc for details). If memory overcommit is disabled, the editor's (and possibly other programs attempting at the same time) attempt to allocate memory will fail. They'll get a failure result from ...
Out of swap - what happens?
1,409,764,135,000
I am using Linux 5.15 with Ubuntu 22.04. I have a process that uses a lot of memory. It requires more memory than I have RAM in my machine. The first time that I ran it, it was killed by the OOM Killer. I understand this: the system ran out of memory, the OOM Killer was triggered, my process was killed. This makes sen...
First of all, I'd like to thank MC68020 for taking the time to look into this for me. As it happens, their answer didn't include what was really happening in this situation - but they got the bounty anyway as it's a great answer and a helpful reference for the future. I'd also like to thank Philip Couling for his answ...
Why do processes on Linux crash if they use a lot of memory, yet still less than the amount of swap space available?
1,409,764,135,000
I use CentOS 7 with kernel 3.10.0. I know there is a hitman in Linux called the OOM killer, which kills a process that uses too much memory when available space runs out. I want to configure it to log its activities so that I can check whether it happens or not. How can I set it up? Thanks,
The OOM killer's activities are guaranteed to be in /var/log/dmesg (at least for a time). Usually the system logger daemon will also put them in /var/log/messages by default on most distributions with which I've worked. These commands might be of help in tracking the logs down:
grep oom /var/log/*
grep total_vm /var/log/*
...
How to make OOM killer log into /var/log/messages when it kills any process?
1,409,764,135,000
Some of my jobs are getting killed by the OS for some reason. I need to investigate why this is happening. The jobs that I run don't show any error messages in their own logs, which probably indicates the OS killed them. Nobody else has access to the server. I'm aware of the OOM killer; are there any other process killer...
oom is currently the only thing that kills automatically. dmesg and /var/log/messages should show oom kills. If the process can handle that signal, it could log at least the kill. Normally memory hogs get killed. Perhaps more swap space can help you, if the memory is only getting allocated but is not really needed. ...
what process killers does linux have? [closed]
1,409,764,135,000
I wanted to backup all my .vcf files from my carddav server (ownCloud). The script is very simple and is the following: $ wget -Avcf -r -np -l0 --no-check-certificate -e robots=off --user=user \ --password='password' https://cloud.domain.com/foo/carddav The total number of .vcf files is about 400, after downloadi...
It seems the wget loop causes a memory overflow. The natural first suggestion is to increase the memory of your cloud instance again, from 1 GB to 2 GB. This solved a similar issue recently. If this is not possible or doesn't solve the problem, the second solution is to run wget in 2 steps: Retrieve the files list. As I see in yo...
Wget out of memory error kills process
1,409,764,135,000
For many years I set up my Linux machines with no swap, as they had enough memory to do what I needed and I would rather a process get killed if it used too much memory, instead of growing larger and larger and quietly slowing everything down. However I found out I required swap in order to use hibernate on a laptop, ...
Is there some way I can tell Linux to use a given swap partition for hibernation only, and not to use it for swapping during normal operation? Remove or comment out the corresponding line in /etc/fstab. Example on my system:
$ grep swap /etc/fstab
/dev/mapper/NEO--L196--vg-swap_1 none swap sw ...
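To keep hibernation working after removing the fstab entry, the kernel still needs to be told where to resume from, typically via a resume= parameter (a sketch; device path taken from the answer's example, then run update-grub):
# e.g. in /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet resume=/dev/mapper/NEO--L196--vg-swap_1"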
Hibernate to a swap partition without using it as actual swap space
1,409,764,135,000
Can using bash's globstar (**) operator cause an out of memory error? Consider something like: for f in /**/*; do printf '%s\n' "$f"; done When ** is being used to generate an enormous list of files, assuming the list is too large to fit in memory, will bash crash or does it have a mechanism to handle this? I know I'...
Yes, it can, and this is explicitly accounted for in the globbing library:
/* Have we run out of memory? */
if (lose)
  {
    tmplink = 0;
    /* Here free the strings we have got. */
    while (lastlink)
      {
        /* Since we build the list in reverse order, the first N entries w...
Can ** (bash's globstar) run out of memory?
1,409,764,135,000
I need to run some memory heavy tests in a remote computer through SSH. Last time I did this, the computer stopped responding, and it was necessary for someone to physically reboot it. Is there a way I can set it up so that the system restarts instead of freezing if too much memory is being used? (I do have root acces...
To monitor/recover control of an "unstable"/starved server, I would advise using a hardware or, failing that, a software watchdog; in Debian you can install it with: sudo apt-get install watchdog Then you edit /etc/watchdog.conf and add thresholds or tests; from the top of my head, the watchdog is also activated ...
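A couple of illustrative /etc/watchdog.conf thresholds (values are placeholders; min-memory is measured in pages):
# reboot if the 1-minute load average exceeds 24
max-load-1 = 24
# reboot if free memory (incl. free swap) drops below this many pages
min-memory = 2048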
Restart system if it runs out of memory?
1,409,764,135,000
Setting it directly with echo 1000 > /proc/<pid>/oom_score_adj is unreliable because the target program is already running; in this case the target program may have caused an OOM before the echo 1000 > /proc/<pid>/oom_score_adj ran.
oom_score_adj is inherited on fork, so you can set its initial value for new children by setting the desired value on the parent process. Thus if you’re starting the target from a shell script, echo 1000 > /proc/$$/oom_score_adj will change the shell’s value to 1000, and any process subsequently forked by the shell w...
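Since exec replaces the shell without forking, a tiny wrapper guarantees the target never runs with the default score (a sketch; the target path is a placeholder):
#!/bin/sh
# raise our own score; exec keeps the same process, so the value carries over
echo 1000 > /proc/$$/oom_score_adj
exec /path/to/target "$@"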
How set the "oom_score_adj" when(before) run target program?
1,409,764,135,000
I have an ARM-based server with just under 2 GB of addressable memory and 4 GB of swap activated:
root@bang:~> free -m
       total  used  free  shared  buff/cache  available
Mem:    1976   388    48      15        1539       1487
Swap:   4095     1   409...
This was caused by a kernel bug present in Linux kernels 4.7.0 to 4.7.4 (it's fixed by this commit in 4.7.5 and this commit in 4.8.0).
Why is the OOM killer killing processes when swap is hardly used?
1,409,764,135,000
I am currently trying to remove all newlines that are not preceded by a closing parenthesis, so I came up with this expression: sed -r -i -e ":a;N;$!ba;s/([^\)])\n/\1/g;d" reallyBigFile.log It does the job on smaller files, but on this large file I am using (3GB), it works for a while then returns with an out of memo...
Your first three commands are the culprit:
:a
N
$!ba
This reads the entire file into memory at once. The following script should only keep one segment in memory at a time:
% cat test.sed
#!/usr/bin/sed -nf
# Append this line to the hold space.
# To avoid an extra newline at the start, replace instead of append.
1h
...
Out of memory while using sed with multiline expressions on giant file
1,409,764,135,000
I do a lot of work in the cloud running statistical models that take up a lot of memory, usually on Ubuntu 18.04. One big headache for me is when I set up a model to run for several hours or overnight, and I check on it later to find that the process was killed. After doing some research, it seems like this i...
You can ask the kernel to panic on OOM:
sysctl vm.panic_on_oom=1
or for future reboots:
echo "vm.panic_on_oom=1" >> /etc/sysctl.conf
You can adjust a process's likeliness to be killed, but presumably you have already removed most processes, so this may not be of use. See man 5 proc for /proc/[pid]/oom_score_adj. Of c...
Trigger a script when OOM Killer kills a process
1,409,764,135,000
Is there any way to exclude some users from the out-of-memory killer in Unix? Alternatively, can I set a priority per user?
There is no way to instruct the OOM killer to ignore specific users' processes. However, you can instruct it to ignore a specific process, and based on that you can construct a loop which checks all processes of a specific user and updates them, via cron or whatever way you like. The cycle itself will look something like this: while read ...
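A sketch of such a loop (run as root, e.g. from cron; the user name is a placeholder, and -1000 exempts a process from the OOM killer entirely):
for pid in $(pgrep -u someuser); do
  echo -1000 > "/proc/$pid/oom_score_adj"
done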
Exclude user from OOM killer in unix
1,409,764,135,000
I have ~100000 files, each one with unique rows such as:
File1.txt
chr1_1_200
chr1_600_800
...
File2.txt
chr1_600_800
chr1_1000_1200
...
File3.txt
chr1_200_400
chr1_600_800
chr1_1000_1200
...
Every file has around ~30 million rows, and when it's time to perform the command: cat *txt | sort ...
If the files are already sorted in an acceptable way, you could merge-sort them and then uniq them: sort -t_ -k2,2n -k3,3n -m -- *.txt | uniq > Unique_Position.txt ... which sorts numerically on the second field (as delimited by underscores _) and if those keys are unique, by the third field. The resulting output is ...
Concatenate thousands of files already sorted and re-sort the output file quickly
1,453,749,351,000
I have a problem with my Linux machine where system now seems to run easily out of RAM (and trigger OOM Killer) when it normally can handle similar load just fine. Inspecting free -tm shows that buff/cache is eating lots of RAM. Normally this would be fine because I want to cache the disk IO but it now seems that kern...
I'm the author of the question above, and even though a full answer hasn't surfaced so far, here's the best known explanation so far: With modern Linux kernels, the Cached value of /proc/meminfo no longer describes the amount of disk cache. However, the kernel developers considered that changing this at this point is...
How to diagnose high `Shmem`? (was: Why `echo 3 > drop_caches` cannot zero the cache?)
1,453,749,351,000
Our group is all programmers and exclusively use Linux or MacOS, but a customer uses Solaris 10, and we need our code to work there. So we scrounged up an old SunFire V240, and a rented Solaris 10 VM to test on. The code compiles just fine on the VM, but on the SunFire it fails. Our code has a giant autogenerated C++ ...
It turned out to be a bug in gmake 3.81. When I ran the compile command directly without make, it was able to use much more memory. It seems there was a known bug in 3.80: something like this. That bug was supposed to be fixed in 3.81, but I was getting a very similar error. So I tried gmake 3.82. The compile proceed...
Solaris 10: Virtual Memory Exhausted
1,453,749,351,000
When OOM Killer or kernel reports memory state, it uses the next abbreviations Node 0 DMA: 26*4kB (M) 53*8kB (UM) 33*16kB (ME) 23*32kB (UME) 6*64kB (ME) 7*128kB (UME) 1*256kB (M) 2*512kB (ME) 0*1024kB 0*2048kB 0*4096kB = 4352kB Node 0 DMA32: 803*4kB (UME) 3701*8kB (UMEH) 830*16kB (UMH) 2*32kB (H) 0*64kB 0*128kB 1*256...
These are migration types, defined in mm/page_alloc.c in the kernel:
static const char types[MIGRATE_TYPES] = {
    [MIGRATE_UNMOVABLE]   = 'U',
    [MIGRATE_MOVABLE]     = 'M',
    [MIGRATE_RECLAIMABLE] = 'E',
    [MIGRATE_HIGHATOMIC]  = 'H',
#ifdef CONF...
What do the abbreviations in OOM Killer memory statistics report mean?
1,453,749,351,000
I am running a complex workflow via bash scripts, which are using external programs/command to do different things. It runs fine for several hours, but then suddenly the OOM killer terminates programs of my workflow or the entire bash scripts, even though there is still plenty of memory available. I have logged the me...
Memory cgroup out of memory: you need to avoid filling the memory cgroup that you are running within.
Task in /slurm/uid_11122/job_58003653/step_0 killed as a result of limit of /slurm/uid_11122/job_58003653
memory: usage 8388608kB, limit 8388608kB, failcnt 3673
memory+swap: usage 8388608kB, limit 16777216kB...
Why is the Linux OOM killer terminating my programs?
1,453,749,351,000
I am experiencing a weird issue lately: Sometimes (I cannot reproduce it on purpose), my system is using all its swap, despite there being more than enough free RAM. If this happens, the systems then becomes unresponsive for a couple of minutes, then the OOM killer kills either a "random" process which does not help m...
shared: Memory used (mostly) by tmpfs (Shmem in /proc/meminfo, available since kernel 2.6.32, displayed as zero if not available). So the manpage definition of Shared is not as helpful as it could be :(. If the tmpfs use does not reflect this high value of Shared, then the value must represent some process(es) "who d...
Linux using whole swap, becoming unresponsive while there is plenty of free RAM
1,453,749,351,000
On my work laptop with an SSD and no swap, I sometimes run out of memory when running RAM-expensive applications (virtual machine, etc). When that happens, the system becomes slow (expected) but what I don't understand is why the disk usage LED lights up and stays that way until I manage to kill some tasks to free up ...
As you fill the memory with apps various block/filesystem caches are getting pushed out of the same memory. These caches are crucial for fast look up of files and other stuff. When there is no space for caches the kernel will try to look up all the information directly from the filesystem which is utterly slow and hen...
Why is IO so high when almost out of memory
1,453,749,351,000
The server has about 24 GB memory. By running free -g I find the memory is used up:
                total  used  free  shared  buffers  cached
Mem:               23    23     0       0        0      18
-/+ buffers/cache:     4    19
Swap:              56     2    53
Then I...
You're misinterpreting the output of free. What you posted shows that you have 19 GB of RAM free. The 23 GB you're seeing is used by the system as cache but is still readily available for applications. That is also why top shows the memory as free. See linuxatemyram.com for a more detailed explanation.
cannot find what has used all the memory
1,453,749,351,000
I am trying to set the oom_adj value for the out of memory killer, and each time I do (regardless of the process) I get back exactly one less than I set (at least for positive integers. I haven't tried negative integers since I want these processes to be killed by OOM Killer first). [root@server ~]# echo 10 > /proc/1...
oom_adj is deprecated and provided for legacy purposes only. Internally Linux uses oom_score_adj which has a greater range: oom_adj goes up to 15 while oom_score_adj goes up to 1000. Whenever you write to oom_adj (let's say 9) the kernel does this: oom_adj = (oom_adj * OOM_SCORE_ADJ_MAX) / -OOM_DISABLE; and stores th...
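The off-by-one is just integer truncation applied in both directions; you can see it by reading back both files (OOM_DISABLE is -17, OOM_SCORE_ADJ_MAX is 1000):
echo 10 > /proc/$$/oom_adj
cat /proc/$$/oom_score_adj   # 10 * 1000 / 17 = 588 (truncated)
cat /proc/$$/oom_adj         # 588 * 17 / 1000 = 9 (truncated again)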
OOM Killer value always one less than set
1,453,749,351,000
I'm playing with MongoDB clusters. After few OOM killers, I decided to ulimit mongoDB with memory to 4G of RAM. After few hours, it was killed again with OOM. So my question is not about MongoDB, it's about memory management in linux. Here is an HTOP just a few minutes before OOM. Why are there 4.2T of VIRT and only ...
Combination of three facts causes your oom problem: small page size, large VIRT, pagetables. Your logs clearly show that almost all the RAM was used by pagetables, not by process memory (for example, not by RESident pages - these got mostly pressed out to swap). The bummer about x86_64/x86 pagetables is that when you...
How can the OOM killer kill ulimit(ed) process?
1,453,749,351,000
It is a useful feature for systems which are using (or still have to use) 32-bit binaries, where the 4G limit comes into consideration. It essentially means that the 32-bit user-space code, the 32-bit user-space data and the (32-bit with PAE, or 64-bit) kernel live in different address spaces, which essentially enables...
On a 64-bit kernel you already have the full 4G accessible by a 32-bit userspace program. See for yourself by entering the following in the terminal (WARNING: your system may become unresponsive if it doesn't have a free 4 GiB of RAM when running this):
cd /tmp
cat > test.c <<"EOF"
#include <stdlib.h>
#include <stdio.h>
int main...
How can I enable 4G/4G split in Linux?
1,453,749,351,000
When visualizing some memory related metrics on server level, I get a chart which looks like this: The area below the blue line is RAM Used. The area below the red line and above the blue line is RAM Cache + Buffer. The area below the black line and above the red line is RAM Free. The area below the orange line and a...
Free RAM is wasted RAM; the fact that the amount of free RAM is low on your system is a good sign, not a bad one. What’s important is the amount of RAM used by applications, and stalls related to excessive swap use. In your case, the amount of RAM used is low compared to the amount installed, and there isn’t anything ...
RAM Free decreases over time due to increasing RAM Cache + Buffer
1,453,749,351,000
I was following a guide for automatically decrypting the hard drive on boot, using self-generated keys, and tpm2 variables, and near the end it makes this point that seems to make sense: https://blastrock.github.io/fde-tpm-sb.html#disable-the-magic-sysrq-key The magic SysRq key allows running some special kernel acti...
In the absence of any process writing something to /proc/sys/kernel/sysrq (possibly via the sysctl command) at any point since boot (including in the initramfs)¹, the default value will be as configured at kernel compilation time. You can find that out with:
$ grep -i sysrq "/boot/config-$(uname -r)"
CONFIG_MAGIC_SYSR...
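Since SysRq-f falls under the "signalling of processes" group (bit value 64 in the kernel's SysRq bitmask), a mask that enables everything except that group would be 511 minus 64 (a sketch; double-check against Documentation/admin-guide/sysrq.rst for your kernel, and note that this also disables SysRq-e and SysRq-i, which share the same bit):
sysctl kernel.sysrq=447    # 511 (all) minus 64 (term/kill/oom-kill group)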
Disable sysrq f (OOM-killer) but leave other sysrq keys operational
1,453,749,351,000
I read through the docs of man proc. When it comes to overcommit_memory, the heuristic used with overcommit_memory=0 isn't well understood. What does the heuristic actually mean? Does "calls of mmap(2) with MAP_NORESERVE are not checked" mean that the kernel only allocates virtual memory without being aware of even the exis...
The overcommit_memory setting is taken into account in three places in the memory-management subsystem. The main one is __vm_enough_memory in mm/util.c, which decides whether enough memory is available to allow a memory allocation to proceed (note that this is a utility function which isn’t necessarily invoked). If o...
What does heuristics in Overcommit_memory =0 mean?
1,453,749,351,000
My computer has been freezing a lot lately, and with no apparent reason. It freezes even if my usage is 3% CPU and 9% RAM. I was using Windows 8 until I installed Ubuntu 14.04. It was really slow, and after some researching, I adopted the idea that Ubuntu 14.04 wasn't really that stable, so I decided I'd download a le...
Your problem is that you don't have any swap space. Operating systems require swap space so that they are able to free up RAM and store its contents on the hard drive. What you are going to need to do is reformat your hard drive. Red Hat has a suggested swap size chart here. Load up the Arch live CD and repartition, and...
Linux freezing randomly
1,453,749,351,000
Whilst reading both https://lwn.net/Articles/391222/ and http://man7.org/linux/man-pages/man5/proc.5.html I have come across the terms oom_score and badness. Both numbers have the same basic meaning; the higher they are, the more likely the associated task is to be OOM-killed when the host is under memory pressure. Wh...
It looks like it is: oom_score = badness * 1000 / totalpages based on the kernel code https://github.com/torvalds/linux/blob/master/fs/proc/base.c#L549. static int proc_oom_score(struct seq_file *m, struct pid_namespace *ns, struct pid *pid, struct task_struct *task) { unsigned long totalpages = to...
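Both sides of that formula are easy to observe from userspace; a quick sketch inspecting the current shell:

  cat /proc/self/oom_score       # badness normalized to the 0..1000 range (plus any adjust)
  cat /proc/self/oom_score_adj   # the userspace adjustment added on top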
What is the relationship between oom_score and badness?
1,453,749,351,000
While developing some software, a program under test sometimes eats all the memory, then proceeds to yomp into the swap space and start thrashing the disk, leading to a predictable drop in responsiveness to the point that I generally switch to another terminal to log in and kill the process manually. What I'd like is ...
I don't think there is a way to limit swap space, unless you modify the program to only request non-swappable memory, which even if possible would probably be impractical. However what you can and should do is limit the total amount of memory available to the process. You can use cgroups (the new-ish general way), uli...
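On a systemd system the cgroup route needs no manual setup: a transient scope can cap memory and deny swap in one line, at which point the kernel OOM-kills the job instead of letting it thrash. A sketch (cgroup v2; the 2G cap and program name are assumptions):

  systemd-run --scope -p MemoryMax=2G -p MemorySwapMax=0 ./memory-hungry-job
  # MemorySwapMax=0 forbids swap for the scope; exceeding MemoryMax then
  # triggers the OOM killer for the processes inside it.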
Can I deny use of swap space to a specific process (and have it just get killed)?
1,453,749,351,000
I'm trying to figure out why my Linux machine is so slow and I found this: $ free --human total used free shared buff/cache available Mem: 7,3Gi 6,6Gi 168Mi 1,0Gi 1,8Gi 746Mi Swap: 9,3Gi 2,7Gi 6,6Gi When I run top -n1 -b...
The RES column in your output from top shows the amount of physical memory used by each process. (The memory used by a process that is RESident in physical memory. This is distinct from VIRTual memory allocated by each process.) Just in the subset shown there is 4GB used. There is 1.8GB used as cache. This can be disc...
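To put numbers on that, resident memory can be tallied straight from ps; a sketch (note that shared pages are counted once per process, so the total overstates somewhat):

  ps -eo rss= | awk '{s+=$1} END {printf "%.1f GiB resident\n", s/1024/1024}'
  ps -eo pid,rss,comm --sort=-rss | head   # biggest consumers first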
Linux uses 6.6Gi RAM for nothing
1,453,749,351,000
My software runs a command that looks something like: find | xargs do a potentially memory hungry job The problem is that sometimes a potentially memory hungry job gets too hungry, the system gets unresponsive and I have to reboot it. My understanding is that it happens due to the memory allocation over commitment. ...
I think what you are looking for is --memfree in GNU Parallel: find ... | parallel --memfree 1G dostuff This will only start dostuff if there is 1G of RAM free. It will keep starting more jobs until either less than 1G of RAM is free or there is 1 job running per CPU thread. If there is 0.5G of RAM free (50% of 1G) the youngest job w...
Prevent the machine being slowed down by running out of memory
1,453,749,351,000
Context: an AIX LPAR with very low memory (no forking possible, so only the shell's builtins (cd, echo, kill) will work). I can get an (HMC) console to it, but I need a better way to start freeing memory in AIX when memory is too low to even allow you to run "ps -ef". (I have a way, but it is a way to randomly kill exist...
Please try this. From the list of builtins included in ksh: $ ksh -c 'builtin' these are the only builtins useful for answering your question: echo kill print printf read So it seems that the only way to "read a file" is to use read. Let's define a couple of functions (copy and paste into the CLI): function Usage { echo "...
AIX - use ksh builtins to free memory when fork not possible
1,453,749,351,000
If I type in my shell x=`yes`, eventually I will get cannot allocate 18446744071562067968 bytes (4295032832 bytes allocated) because yes tries to write into x forever until it runs out of memory. I get a message cannot allocate <memory> because the kernel's OOM-killer told xrealloc there are no more bytes to allocate,...
Really, the best solution for the OOM killer is not to have one. Configure your system not to use overcommitted memory, and refuse to use applications and libraries that depend on it. In this day of infinite disk, why not supply infinite swap? No need to commit to swap unless the memory is used, right? The answer...
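Turning that advice into configuration is two sysctls; a sketch that makes strict accounting persistent (the 80% ratio is a starting point, not a recommendation):

  printf 'vm.overcommit_memory = 2\nvm.overcommit_ratio = 80\n' | sudo tee /etc/sysctl.d/90-no-overcommit.conf
  sudo sysctl --system   # apply now; CommitLimit becomes swap + 80% of RAM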
Why does OOM-killer sometimes fail to kill resource hogs?
1,453,749,351,000
I have 100+ boxes running FreeBSD 8.4 amd64 RELEASE (p9) with the same configuration, and only one of them sometimes behaves strangely: the load average (generally 4~6, which is fine since the box has 8 CPU cores) grows to 30-40, the system runs slowly, and top starts to print the kvm_open: cannot open /proc/[some_numbers]/mem message...
It's caused by the process exiting between top getting the process list and top trying to get info on that particular process. It's more common on a very busy box but generally safe to ignore. You might consider it a bug, you might not.
kvm_open: cannot open /proc
1,453,749,351,000
I have an Ubuntu 20.04.4 server with 32GB RAM. The server is running a bunch of LXD containers and two VMs (libvirt+qemu+kvm). After startup, with all services running, the RAM utilization is about ~12GB. After 3-4 weeks the RAM utilization reaches ~90%. If I stop all containers and VMs the utilization is still ~20GB....
In the end, it was a problem with the nct6775 driver. It was loaded by the /etc/modules file. After removing it the error disappeared.
Linux server high memory usage without applications
1,453,749,351,000
I want to install a data manipulation solution. The solution is deployed in a folder in the home directory. Free disk space is outside my control and can shrink at any moment (other users' data). How can I reserve, say, 100 gigabytes for just one folder? Is it possible? If yes, how?
One solution would be to create a separate partition on the disk for this special application's data storage. You can set the partition size to whatever you want and then mount the partition under your home directory. Then, as long as no one else has access to the partition (i.e. write permissions), you should eff...
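A variant that works without repartitioning is a fixed-size filesystem image mounted at the folder, which reserves the space up front; a sketch (paths and the 100G size are assumptions):

  fallocate -l 100G ~/appdata.img
  mkfs.ext4 -F ~/appdata.img              # -F: it's a regular file, not a block device
  sudo mount -o loop ~/appdata.img ~/appdata
  # add a loop-mount line to /etc/fstab to make it permanent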
Is there a way in Linux to preserve folder size for safety
1,453,749,351,000
I have a dedicated MySQL server equipped with 128 GB RAM. MySQL recently gets killed by the oom-killer, although MySQL is configured to use 95 GB in the worst case. In my research I came across this: # cat /proc/11895/status Name: mysqld State: S (sleeping) Tgid: 11895 Pid: 11895 PPid: 24530 TracerPid: ...
So, it turns out a colleague was experimenting with large-page-support and didn't revert all changes he made. When I ran sysctl -w vm.nr_hugepages=0 and commented out this section in the /etc/sysctl.conf # Hugepage Support MySQL #vm.hugetlb_shm_group = 27 #kernel.shmmax = 10737418240 #kernel.shmall = 23689185 #vm.nr_...
VmHWM only 25% whereas it should be around 80%
1,712,316,189,000
I am working on an embedded Linux system which is on SOC platform. I have 2 machines ran the same memory workload, and I got following memory output. Machine 1. total used free shared buff/cache available Mem: 50616 35304 2516 48 12796 1310...
free memory is completely unused, while available memory can be freed by the kernel immediately if it is needed. It contains things such as file system cache, avoiding reads from the disk and speeding up the system. If you look closely, you can see that the available amount is similar to buff/cache. Thus, the kernel s...
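The same distinction is visible directly in /proc/meminfo, which is where free gets its numbers; a quick sketch:

  grep -E 'MemFree|MemAvailable' /proc/meminfo
  # MemAvailable is the kernel's estimate of memory claimable without swapping;
  # it, not MemFree, is what matters for memory pressure and the OOM killer.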
Which is the trigger of OOM killer, free or availaible memory in Linux?
1,712,316,189,000
I have an important process that the OOM Killer has taken a fancy to with unfortunate results. I would like to make this less likely. All google turns up is stuff like: echo -1000 > /proc/${PID}/oom_score_adj while I would like to do it in the program source itself. Is there a library call or syscall to do this, o...
There’s no system call, or library function, as far as I’m aware. No need for getpid() though, you can open /proc/self/oom_score_adj directly.
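A common pattern that avoids touching the program source at all is a tiny wrapper that adjusts its own score and then execs the real binary, since oom_score_adj survives execve; a sketch (the program path is an assumption, and values below 0 need CAP_SYS_RESOURCE):

  #!/bin/sh
  echo -1000 > /proc/self/oom_score_adj
  exec /usr/local/bin/important-program "$@"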
Is there a library call or syscall to set /proc/self/oom_score_adj?
1,712,316,189,000
Memory/Swap Ratio on a Company System: Today, a monitoring system indicated that one of the systems in the company had run out of memory. Executing htop on this system showed that the memory was nearly full (~8GB), as was the swap space (~0.5GB). Stopping some of the services decreased the memory use, but the Swap Space...
If you are using LVM, you can allocate a new logical volume and format it as swap space: lvcreate -n swap2 -L 2G VG_NAME mkswap /dev/VG_NAME/swap2 swapon -a E.g. the above will create a 2G logical volume in the volume group named VG_NAME, format the LV as swap, and activate it.
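One caveat: swapon -a only activates swap areas listed in /etc/fstab, so the new volume needs an entry there first; a sketch following the answer's naming:

  echo '/dev/VG_NAME/swap2 none swap sw 0 0' >> /etc/fstab
  swapon -a
  swapon --show   # verify the new area is active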
Fastest and Safest way to increase Swap Space on Scientific Linux
1,712,316,189,000
I have a program Vuze that is written in Java, which I use to download very large files, and I'm having a problem with it. I need to increase the amount of memory it uses. I've followed the directions for the application but it doesn't change the real memory usage. I would think this would then be because Java (JVM) i...
I found the solution to my problem here. The workaround: I increased the memory used to 1024M with these instructions. I set the "Maximum files opened for read/write" to 101. I ran the application from the command line with this command (the ulimit has to be raised in the same shell that launches the program, otherwise it only affects a throwaway shell): sudo bash -c 'ulimit -n 8192; exec sudo -u username ./azureus'
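For reference, the JVM heap ceiling behind that 1024M figure is set per invocation with the standard -Xmx flag; a sketch (the jar name is an assumption):

  java -Xms256m -Xmx1024m -jar Azureus2.jar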
How to increase the memory used by Java in linux?
1,712,316,189,000
We're struggling with mysql being killed by OOMKiller since upgrading from Debian 9 to Debian 11. I see that several .service files have OOMScoreAdjust=### defined, but they don't seem to be honored, and choom tells me the score adjust values for these services are 0. The value is also ignored for other services besid...
I found that I had not nested OOMScoreAdjust under the [Service] heading, and so it was not applied. That explains why it worked for some processes (ones where the value was properly nested under [Service],) but not others. Values set by choom don't appear to persist across reboots.
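The safe way to get the directive into the right section is a drop-in, which also survives package upgrades; a sketch (the unit name and value are assumptions):

  sudo systemctl edit mysql.service      # opens a drop-in; add:
  #   [Service]
  #   OOMScoreAdjust=-600
  sudo systemctl daemon-reload && sudo systemctl restart mysql.service
  choom -p "$(systemctl show -p MainPID --value mysql.service)"   # verify the live value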
OOMScoreAdjust in .service files is ignored?
1,712,316,189,000
In Ubuntu 20.04 I can find oom_kill counter at file /proc/vmstat. Where I can find this metric in CentOS 7?
This feature is not available in Linux 3.10, which comes with CentOS 7.0. The change was committed two years later: "mm/oom_kill: count global and memory cgroup oom kills"
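As a workaround on such kernels, OOM kills can still be counted from the kernel log, since each one logs a "Killed process" line; a sketch (the log path assumes the default rsyslog setup):

  dmesg | grep -c 'Killed process'
  grep -c 'Killed process' /var/log/messages   # survives ring-buffer wrap-around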
oom_kill counter in CentOS 7
1,712,316,189,000
I'm trying to compress a large archive with multi-threading enabled, however, my system keeps freezing up and runs out of memory. OS: Manjaro 21.1.0 Pahvo Kernel: x86_64 Linux 5.13.1-3-MANJARO Shell: bash 5.1.9 RAM: 16GB |swapon| NAME TYPE SIZE USED PRIO /swapfile file 32G 0B -2 I've tried th...
From man xz: Memory usage Especially users of older systems may find the possibility of very large memory usage annoying. To prevent uncomfortable surprises, xz has a built-in memory usage limiter, which is disabled by default. The memory usage limiter can be enabled with the command line option --memlimit=limit. ...
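Putting that together for a machine like the one above, the limiter plus threading looks like this; a sketch (the 75% figure is an assumption; xz scales the thread count and dictionary size down to fit):

  xz -9 -T0 --memlimit=75% big.tar     # limit as a percentage of physical RAM
  xz -9 -T4 --memlimit=8GiB big.tar    # or an absolute limit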
xz: OOM when compressing 1TB .tar
1,712,316,189,000
foobar.exe invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0 What is an order=0 allocation? That's less than one page, so is it like a kmalloc32 or something smaller than page_size? Linux 3.x kernel x86_64
An order of 0 is one page. Page allocation order: the 'order' of a page allocation is its logarithm to the base 2, and the size of the allocation is 2^order, an integral power-of-2 number of pages. 'Order' ranges from 0 to MAX_ORDER-1. The smallest - and most frequent - page allocation is 2^0, or 1 page. (https://...
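The resulting sizes are just 2^order pages; a quick sketch of the arithmetic on a typical 4 KiB-page system:

  getconf PAGESIZE                         # usually 4096
  echo $(( $(getconf PAGESIZE) << 0 ))     # order-0 allocation: 4096 bytes (one page)
  echo $(( $(getconf PAGESIZE) << 3 ))     # order-3 allocation: 32768 bytes (8 pages)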
What does order=0 mean in mem-info data (Orders are powers of two allocations, so does it mean no pages were being allocated?)