1,687,753,520,000
I am trying to install FreeRADIUS 3.0.16 on Debian 9 from my local repository. However, when trying to install it I get this error:

The following packages have unmet dependencies:
 freeradius : Depends: libssl1.0.0 (>= 1.0.1e-2+deb7u5) but it is not installable

The culprit is in the original source code at freeradius-server/debian/rules:

# Add dependency on distribution specific version of openssl that fixes Heartbleed (CVE-2014-0160).
ifeq ($(shell dpkg-vendor --derives-from Ubuntu && echo yes),yes)
SUBSTVARS = -Vdist:Depends="libssl1.0.0 (>= 1.0.1f-1ubuntu2)"
else
SUBSTVARS = -Vdist:Depends="libssl1.0.0 (>= 1.0.1e-2+deb7u5)"
endif

Hard-coding a check for Debian 9 is not the ideal solution, as the package can be compiled for several Debian flavours. So, short of checking for the Debian version, is there any way to define the Depends for both Debian versions, e.g. as an alternative dependency on libssl1.1 (>= 1.1)?
I would just remove those lines of code; it’s not up to individual packages to force security upgrades in other packages. If you look at the Debian package’s rules, you’ll see it doesn’t have anything like this. In any case, as you point out the dependencies can’t work on Debian 9 since that uses a different package name for OpenSSL. (It should be possible to work out a disjunction which would enforce the right package upgrades, but I don’t think it’s worth the effort.)
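If you did want the disjunction mentioned above rather than deleting the lines entirely, a sketch might look like this (untested; dpkg accepts versioned alternatives separated by |, and the version strings are the ones quoted in the question):

```make
# debian/rules sketch: accept either OpenSSL runtime, preferring libssl1.1
SUBSTVARS = -Vdist:Depends="libssl1.1 (>= 1.1) | libssl1.0.0 (>= 1.0.1e-2+deb7u5)"
```

Either package then satisfies the generated dependency, at the cost of no longer forcing the specific security upgrade on either distribution.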
Defining multiple library dependencies
What is the easiest/most intuitive way to get the code for a Python package that is distributed with Debian if I am not on Debian (no apt-get here)? For example, there was a bug with pip on Debian and I want to see whether it has been fixed by comparing its code with upstream.
I would go to https://packages.debian.org/source/<release>/<package>, where <release> is the particular release of Debian I was interested in, and <package> is the package name. For example: https://packages.debian.org/source/stable/python-pip

From there, I would scroll down to the bottom of the page and download the compressed tar archives (original source, and Debian's additions to this source).

If I didn't know the name of the package, only that it should contain the executable pip, I would go to https://packages.debian.org/ and search for "packages that contain files named like this" and enter bin/pip in the search box at the bottom of the page. It is also possible to use a URL in the following format to get to the search results directly: https://packages.debian.org/search?mode=exactfilename&searchon=contents&keywords=bin/pip
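Since the URLs follow a fixed pattern, they can be built in a couple of lines of shell (the release and package names here are just the ones from the question):

```shell
# Build the two packages.debian.org URLs described above
release=stable
package=python-pip
source_url="https://packages.debian.org/source/${release}/${package}"
search_url="https://packages.debian.org/search?mode=exactfilename&searchon=contents&keywords=bin/pip"
echo "$source_url"
echo "$search_url"
```

Swap in any other release codename (e.g. bookworm) or package name as needed.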
Get the Debian version of the sources of a Python package
I am trying to package an Ubuntu package as a Debian package. For maintainability I am trying to use sbuild. Following the steps here I get through the first five steps, but when I try to build I get chroot errors. These are the steps:

1. sudo apt-get install sbuild
2. sudo mkdir /root/.gnupg # To work around #792100
3. sudo sbuild-update --keygen
4. sudo sbuild-adduser $LOGNAME
5. ... *logout* and *re-login* or use `newgrp sbuild` in your current shell
6. sudo sbuild-createchroot --make-sbuild-tarball=/srv/chroot/unstable-amd64.tar.gz unstable `mktemp -d` http://httpredir.debian.org/debian

The sbuild-createchroot command that I use is:

sudo sbuild-createchroot --make-sbuild-tarball=/srv/chroot/jessie-amd64.tar.gz jessie `mktemp -d` http://httpredir.debian.org/debian

I: SUITE: jessie
I: TARGET: /tmp/tmp.uLbQox2R0X
I: MIRROR: http://httpredir.debian.org/debian
I: Running debootstrap --arch=amd64 --variant=buildd --verbose --include=fakeroot,build-essential,debfoster --components=main --resolve-deps jessie /tmp/tmp.uLbQox2R0X http://httpredir.debian.org/debian
I: Retrieving Release
I: Retrieving Release.gpg
I: Checking Release signature
I: Valid Release signature (key id 75DDC3C4A499F1A18CB5F3C8CBF8D6FD518E17E1)
I: Retrieving Packages
I: Validating Packages
I: Resolving dependencies of required packages...
I: Resolving dependencies of base packages...
I: Found additional required dependencies: acl adduser dmsetup insserv libaudit-common libaudit1 libbz2-1.0 libcap2 libcap2-bin libcryptsetup4 libdb5.3 libdebconfclient0 libdevmapper1.02.1 libgcrypt20 libgpg-error0 libkmod2 libncursesw5 libprocps3 libsemanage-common libsemanage1 libslang2 libsystemd0 libudev1 libustr-1.0-1 procps systemd systemd-sysv udev
I: Found additional base dependencies: binutils bzip2 cpp cpp-4.9 debian-archive-keyring dpkg-dev g++ g++-4.9 gcc gcc-4.9 gnupg gpgv libapt-pkg4.12 libasan1 libatomic1 libc-dev-bin libc6-dev libcilkrts5 libcloog-isl4 libdpkg-perl libfakeroot libgc1c2 libgcc-4.9-dev libgdbm3 libgmp10 libgomp1 libisl10 libitm1 liblsan0 libmpc3 libmpfr4 libquadmath0 libreadline6 libstdc++-4.9-dev libstdc++6 libtimedate-perl libtsan0 libubsan0 libusb-0.1-4 linux-libc-dev make patch perl perl-modules readline-common xz-utils
I: Checking component main on http://httpredir.debian.org/debian...
I: Retrieving acl 2.2.52-2
I: Validating acl 2.2.52-2
I: Retrieving libacl1 2.2.52-2
I: Validating libacl1 2.2.52-2
I: Retrieving adduser 3.113+nmu3
I: Validating adduser 3.113+nmu3
I: Retrieving apt 1.0.9.8.2
I: Validating apt 1.0.9.8.2
I: Retrieving libapt-pkg4.12 1.0.

It continues until it finishes. I am not sure if these are errors, but this happens right before I regain control over the terminal.

I: Base system installed successfully.
I: Configured /etc/hosts:
┌────────────────────────────────────────────────────────────────────────
│127.0.0.1 hn localhost
└────────────────────────────────────────────────────────────────────────
I: Configured /usr/sbin/policy-rc.d:
┌────────────────────────────────────────────────────────────────────────
│#!/bin/sh
│echo "All runlevel operations denied by policy" >&2
│exit 101
└────────────────────────────────────────────────────────────────────────
I: Configured APT /etc/apt/sources.list:
┌────────────────────────────────────────────────────────────────────────
│deb http://httpredir.debian.org/debian jessie main
│deb-src http://httpredir.debian.org/debian jessie main
└────────────────────────────────────────────────────────────────────────
I: Please add any additional APT sources to /tmp/tmp.uLbQox2R0X/etc/apt/sources.list
I: Setting reference package list.
I: Updating chroot.
Ign http://httpredir.debian.org jessie InRelease
Hit http://httpredir.debian.org jessie Release.gpg
Hit http://httpredir.debian.org jessie Release
Get:1 http://httpredir.debian.org jessie/main Sources [7058 kB]
Get:2 http://httpredir.debian.org jessie/main amd64 Packages [6763 kB]
Get:3 http://httpredir.debian.org jessie/main Translation-en [4582 kB]
Fetched 18.4 MB in 21s (837 kB/s)
Reading package lists... Done
Reading package lists... Done
Building dependency tree... Done
Calculating upgrade... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
I: chroot /tmp/tmp.uLbQox2R0X has been removed.
I: Successfully set up jessie chroot.
I: Run "sbuild-adduser" to add new sbuild users.
After that I try to run: sbuild -d jessie filename.dsc

Then I get this error:

╔══════════════════════════════════════════════════════════════════════════════╗
║ simplescreenrecorder 0.3.6+1~ppa1~wily1 (amd64)            07 Feb 2016 04:15 ║
╚══════════════════════════════════════════════════════════════════════════════╝

Package: simplescreenrecorder
Version: 0.3.6+1~ppa1~wily1
Source Version: 0.3.6+1~ppa1~wily1
Distribution: jessie
Machine Architecture: amd64
Host Architecture: amd64
Build Architecture: amd64

E: /etc/schroot/schroot.conf: Failed to stat file: No such file or directory

┌──────────────────────────────────────────────────────────────────────────────┐
│ Summary                                                                      │
└──────────────────────────────────────────────────────────────────────────────┘

Then the cursor just sits there blinking. What is wrong with the chroot? How come sbuild isn't setting it up properly? How do I fix this to use sbuild?

sbuild:
  Installed: 0.65.2-1
  Candidate: 0.65.2-1
  Version table:
     0.66.0-5~bpo8+1 0
        100 http://httpredir.debian.org/debian/ jessie-backports/main amd64 Packages
 *** 0.65.2-1 0
        500 http://httpredir.debian.org/debian/ jessie/main amd64 Packages
        100 /var/lib/dpkg/status
schroot:
  Installed: 1.6.10-1+b1
  Candidate: 1.6.10-1+b1
  Version table:
 *** 1.6.10-1+b1 0
        500 http://httpredir.debian.org/debian/ jessie/main amd64 Packages
        100 /var/lib/dpkg/status

Edit: I do not have the below file or folders, nor do I really know how to manually create them. The wiki didn't really say much about these files.
E: /etc/schroot/schroot.conf: Failed to stat file: No such file or directory

Edit: this is the contents of my /etc/schroot folder:

tree /etc/schroot/
/etc/schroot/
├── buildd
├── chroot.d
│   └── jessie-amd64-sbuild-k92zq_
├── default
├── desktop
├── minimal
├── sbuild
└── setup.d
    └── 99check -> 00check

7 directories, 2 files

This is the content of that jessie-amd64 file:

cat /etc/schroot/chroot.d/jessie-amd64-sbuild-k92zq_
[jessie-amd64-sbuild]
type=file
description=Debian jessie/amd64 autobuilder
file=/srv/chroot/jessie-amd64.tar.gz
groups=root,sbuild
root-groups=root,sbuild
profile=sbuild

When I run:

schroot -c jessie-amd64-sbuild
E: /etc/schroot/schroot.conf: Failed to stat file: No such file or directory
schroot -c jessie-amd64
E: /etc/schroot/schroot.conf: Failed to stat file: No such file or directory

They still just give an error even though I am using the -c option; it's complaining about the schroot.conf file. I've tried to write this to the schroot.conf file:

cat /etc/schroot/schroot.conf
[jessie-amd64]
type=file
description=Debian jessie/amd64 autobuilder
file=/srv/chroot/jessie-amd64.tar.gz
groups=root,sbuild
root-groups=root,sbuild
profile=sbuild

then tried to run:

schroot -c jessie-amd64
E: /srv/chroot/jessie-amd64.tar.gz: Failed to stat file: No such file or directory

then I get the above error.
You are expected to rename the /etc/schroot/chroot.d/jessie-amd64-sbuild-k92zq_ file created by sbuild-createchroot to jessie-amd64-sbuild (that is, just drop the random suffix). You may also edit it if you wish. Then you should be able to run schroot -c jessie-amd64-sbuild and sbuild -d jessie whatever.dsc. You are using tarball chroots, which do require some time to instantiate and clean up; make sure you're patient enough.
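The rename is a plain mv; here is the idea demonstrated in a throwaway directory (on the real system the directory is /etc/schroot/chroot.d, and the random suffix will differ):

```shell
# Simulate dropping sbuild-createchroot's random suffix from the config name
dir=$(mktemp -d)
touch "$dir/jessie-amd64-sbuild-k92zq_"
mv "$dir/jessie-amd64-sbuild-k92zq_" "$dir/jessie-amd64-sbuild"
ls "$dir"
```

After the rename, schroot finds the chroot under the name jessie-amd64-sbuild.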
sbuild schroot fails
I used dh_make to create a basic Debian package install. I added a dependency, which is the actual program I want. All I want to do is overwrite the conf file this package installed with a new file with a bunch of custom parameters. I tried to use the install file, which looks like this:

file.conf /etc/destination/file.conf

but repeatedly got an error saying:

dh_install: cp -a debian/tmp/file.conf debian/custom-package//etc/package// returned exit code 1

I can't figure out why it won't find the file. I know debian/tmp is created by the builder, but I don't know why it won't find my file; then it tries to copy not to the directory I want, but prepends debian/custom-package/. I also tried to use a Makefile, but while it builds and runs, the file isn't copied to the directory. I'm not sure the Makefile is right or is even getting called (dh_make didn't originally include a Makefile, and I'm not sure where to call it if it doesn't get called). The makefile just has the install directive and looks like this:

install:
	cp file.conf /etc/destination/

The rules file is the basic file that was built with dh, as recommended by the Debian guide:

%:
	dh $@
I ended up solving this problem by using a package called config-package-dev. While I didn't realize it at the time, there was a big flaw in what I was trying to do previously: updates or changes to the package could have overwritten my custom .conf files on update, and the system would have been broken. config-package-dev solves this issue by making symlinks to my custom .conf files, which insulates the configuration from changes (among other things). In addition it accomplished all the things I was trying to do and made the entire process much cleaner. I ended up throwing away my old solution and making an entirely new package with the Debian package building tools. Thanks everyone.
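For anyone following the same route, the moving parts look roughly like this; the file names and the .custom extension are illustrative, not taken from my package, so check the config-package-dev documentation for the exact conventions:

```
# debian/rules: enable the helper
%:
	dh $@ --with config-package

# debian/custom-package.displace: paths to divert, carrying the chosen extension
/etc/destination/file.conf.custom
```

The package then ships its own /etc/destination/file.conf.custom, and the helper diverts the original file and symlinks the custom one into place.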
Use a package to install dependency and copy custom conf file
I've downloaded the .tar.gz, .dsc and .diff.gz of the bash package from Wheezy: https://packages.debian.org/wheezy/bash. Then I ran dpkg-source -x on the dsc file to unpack it, and this is the result:

$ ls -l
total 2696
-rw-rw-r-- 1 pgimeno pgimeno 2748840 Dec 30  2012 bash-4.2dfsg.tar.xz
drwxrwxr-x 3 pgimeno pgimeno    4096 Apr  3 23:36 debian

UPDATE: I can extract the archive manually; the problem is not that. Please read carefully below. I've added emphasis on some key sentences.

That has confused me. What should I do to get a fully unpacked archive with the Debian patches applied, so that I can work on the source and produce a modified package that builds? Do I have to unpack it myself and apply the patches by hand? If so, into what directory should I unpack it? The default after running tar -xf bash-4.2dfsg.tar.xz is bash-4.2; should I leave it like that or move the files to the main directory? And then what?
The central place that records what to do with a source package is debian/rules. It's a makefile, and a few targets are mandatory or standardized, including build, which will of necessity unpack and patch any source archive, and patch, which should unpack and patch any source archive. Many packages use helper scripts, the main ones being debhelper (dh_* and the newer dh frontend) and cdbs. The bash package in wheezy uses a few debhelper scripts and doesn't provide a patch target. It doesn't provide an unpack target either (a common convention), but it does provide some targets that it uses internally:

make -f debian/rules bash_src=bash unpack-bash
make -f debian/rules bash_src=bash patch-bash
Unpacking source Debian package that has a tar.xz
Mono 3.0 was released yesterday. I am really excited by this release and am curious to know when it will be available in Debian testing (Wheezy). Is there a standard timeline set by the Debian project for when the newest release of a piece of software will be made available in Testing, or in general in any of the branches except Stable?
Currently, Debian testing is in a freeze state. This means that new uploads must be approved by the release team, and generally must fix RC (release critical) bugs. It is very rare for the release team to accept new upstream releases (rather than patches specifically for RC bugs) after the freeze. So the answer to this question is: after the following has occurred:

1. The Mono team packages and uploads Mono 3.0 to unstable.
2. Wheezy is released as stable and Jessie becomes the new testing.
3. 2-10 days have passed since the upload to unstable (depending on the urgency set on the package).

In addition to this, if an RC bug is filed against the unstable package before it migrates to testing, the RC bug will block migration. The severity of the bug will need to be downgraded, or a new version of the package which fixes the RC bug will need to be uploaded. Outside of a time in which testing is frozen, the answer to your question is "2-10 days after the maintainer or team has time to do the work and upload to unstable". Maintainers or teams own packages in Debian, and they are all volunteers, so it is really dependent on the individuals involved. Unfortunately, I do not know of any direct sources where this process is clearly laid out; I have this knowledge from years of working with the OS and lurking around the development community.
How soon do new releases get packaged into Debian Testing?
In Linux, according to the Filesystem Hierarchy Standard, /opt is the designated location for add-on application software packages. Thus, when developing my own software package, which is not a dependency for anything else, I would place that in /opt/somepackage, with a hierarchy of my choice underneath. FreeBSD, according to the link above, does not strictly follow the FHS but installs third-party packages into /usr/local. OPNsense, which is based on FreeBSD, installs its own code (at least in part) into /usr/local/opnsense. The hier manpage on FreeBSD makes no mention of /opt – thus a package installing itself in that location would be unlikely to collide with anything else, but would introduce a top-level path that is almost as exotic as installing straight to /somepackage. What would be the appropriate installation location in FreeBSD? /usr/local/somepackage instead of /opt/somepackage, again with a hierarchy of my choice underneath? Note that I have seen the following posts, which provide some insight but don’t fully answer my question: In Linux I'd use "/opt" for custom software. In FreeBSD? – asks specifically about software not managed by the package manager, whereas I am asking about developing my own .pkg. What might be an equivalent to Linux /opt/ in OpenBSD? – asks about OpenBSD, which may be different from FreeBSD
You install user programs in /usr/local/. There is no /opt in the standard install, nor is it mentioned in the hier man page. From the man page:

/usr/    contains the majority of user utilities and applications

And:

local/    local executables, libraries, etc. Also used as the default destination for the ports(7) framework. Within local/, the general layout sketched out by hier for /usr should be used.

And:

NOTES
This manual page documents the default FreeBSD file system layout, but the actual hierarchy on a given system is defined at the system administrator's discretion. A well-maintained installation will include a customized version of this document.

If you are installing an executable that includes other things like documentation, data, helper files, etc., I would put this in /usr/local/$package, because it's a package of things. If it's the executable alone, it should go in /usr/local/bin, because it's a binary.
Where to install custom software packages on FreeBSD?
I have a package where I want the administrator to enter a list of interface names. I'd like that list to have a default. Only each system has a different list (eth0, enp0s3, eno1, to list a few). Here is an example about just that:

Template: iplock/public_interfaces
Type: string
Default: eth0
Description: Public Interfaces
 Enter a comma separated list of interface names that are connected to the Internet (public). For example: "eth0, eno1, enp0s3" (without the quotes). This will be saved in the system settings file. If necessary, you will be able to override these values by creating another file with different values or use "sudo dpkg-reconfigure iplock" to change the package settings.

Could the Default: eth0 be set dynamically? Are there examples of such in existing Debian packages?

Note 1: I'm specifically using Ubuntu.
Note 2: The template above can be found here on github.
The value of a question can be set dynamically, but not using the template default: Don't make the mistake of thinking that the default field contains the "value" of the question, or that it can be used to change the value of the question. It does not, and cannot, it just provides a default value for the first time the question is displayed. To provide a default that changes on the fly, you'd have to use the SET command to change the value of a question. There are two ways to handle this. If you can determine a suitable value equivalently when your package is installed or when your program runs, set the default to a placeholder value, and have your program use that at runtime. I.e., don’t store eno1 in the field, have eno1 calculated at runtime unless the user has specified their own value. If you want to provide a suitable value before the user is prompted, use db_set in your maintainer script. See the “Libraries” section in man debconf-devel for an example.
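For the interface case specifically, a small sketch of the runtime-detection part (the /sys path is Linux-specific, and eth0 is just a last-resort fallback of my own choosing):

```shell
# Pick the first non-loopback interface name as a default value
default_iface=$(ls /sys/class/net 2>/dev/null | grep -v '^lo$' | head -n1)
default_iface=${default_iface:-eth0}
echo "$default_iface"
```

A maintainer script could feed this value to db_set before the question is displayed, as described above.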
How do you define a dynamic default value in a Debian package template?
I am trying to debianize a collection of shell scripts. The build itself is quite simple, as there are no binaries to build: every file that will be installed on the target system is already present in the source tree. Now the build process fails with an error message I cannot make sense of:

dpkg-genbuildinfo: error: badly formed line in files list file, line 1
dpkg-buildpackage: error: dpkg-genbuildinfo subprocess returned exit status 25
debuild: fatal error at line 1182: dpkg-buildpackage -us -uc -ui failed

What is the files list file mentioned in the error message? Does that refer to the .install file? That file looks like this:

src/main/shell/autorecover opt/autorecover
src/main/shell/autorecover.d/lib/* opt/autorecover.d/lib/
src/main/shell/autorecover.d/mods-available/* opt/autorecover.d/mods-available/

The debian dir is copied and stripped down from another (more complex) package I built earlier, which completed without such errors, iirc. Nonetheless, I do end up with a deb package which seems to have everything in place, scripts as well as files to install. (Which indicates that the .install file cannot have been all that wrong; at least the files are getting collected.) What is wrong here? What do I need to fix, or where should I start looking for the error?
As further research showed, the "files list file" is an auto-generated file named debian/files. As I suspected, there was no issue with the .install file. Looking at debian/files, the first line read:

autorecover_0.0.1_all.deb net # FIXME optional

This line should have the form:

name_version_arch.deb section priority

The section is taken from debian/control. Here I had indeed added "# FIXME" after the section name, as I wasn't sure what to use. I found the correct section (admin sounds like a good choice), fixed the value, and the package builds.
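For reference, the matching debian/control stanza after the fix carries a real section name (package name as in my project; only the Section line was the problem):

```
Source: autorecover
Section: admin
Priority: optional
```

With that in place, the build regenerates debian/files with a well-formed first line (autorecover_0.0.1_all.deb admin optional).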
dpkg-genbuildinfo: error: badly formed line in files list file, line 1
The documentation for creating RPM packages in Fedora Linux states that:

There are potentially four fields which comprise the structured Release: tag:

package release number (<pkgrel>)
extra version information (<extraver>)
snapshot information (<snapinfo>)
minor release bump (<minorbump>)

However, I cannot find any information about how to actually use those fields in a Specfile. The documentation's example page gives examples of valid formats for version strings, but not of creating them. So how would I have to write a Specfile for an artifact with Version 1, Release 2, Minor Version 3 and Package Release 4 that is a beta (aiming for 1.2.3-4-beta)?
The fields describe the structure of the release tag; how you construct it is largely up to you. In your case, I'll assume the upstream version is 1.2.3 beta, and this is the 4th packaging update (so your release would be 4, ignoring the beta part). The traditional approach would be to write

Version: 1.2.3
Release: 4.beta%{?dist}

or, with more structure,

%global rctag beta
Version: 1.2.3
Release: 4%{?rctag:%{rctag}}%{?dist}

Alternatively, you could use tildes; this has the advantage (in my mind) that all the upstream-controlled version components are part of the Version rather than the Release (which is supposed to reflect packaging concerns):

Version: 1.2.3~beta
Release: 4

This only works if you have never packaged any release of version 1.2.3.
How are the pkgrel, extraver, snapinfo and minorbump fields of RPM's Release tag used?
I see on some sites they mention a centos-7 template that ships with mock:

/etc/mock/centos-7-aarch64.cfg
/etc/mock/centos-7-armhfp.cfg
/etc/mock/centos-7-i386.cfg
/etc/mock/centos-7-ppc64.cfg
/etc/mock/centos-7-ppc64le.cfg
/etc/mock/centos-7-x86_64.cfg

However, my install of mock on CentOS 7 lacks these files. Which template do I use? From yum info mock I see:

yum info mock
Installed Packages
Name        : mock
Arch        : noarch
Version     : 1.4.21
Release     : 1.el7
Size        : 741 k
Repo        : installed
From repo   : epel
Summary     : Builds packages inside chroots
URL         : https://github.com/rpm-software-management/mock/
License     : GPLv2+
Description : Mock takes an SRPM and builds it in a chroot.

And from yumdb info mock:

# yumdb info mock
Loaded plugins: fastestmirror
mock-1.4.21-1.el7.noarch
     checksum_data = 1e1b04f2009acef02f05aaf1af5b32a4cb5bce49eb0029803d8990f832bf09e4
     checksum_type = sha256
     command_line = install mock
     from_repo = epel
     from_repo_revision = 1574209186
     from_repo_timestamp = 1574209353
     installed_by = 1000
     origin_url = https://dfw.mirror.rackspace.com/epel/7/x86_64/Packages/m/mock-1.4.21-1.el7.noarch.rpm
     reason = user
     releasever = 7
     var_contentdir = centos
     var_cp_centos_major_version = 7
     var_infra = stock
     var_uuid = 54909cf4-080d-4942-bf31-ef058b297752

It seems these files aren't in the git repo pointed to by the RPM.
You seem to have installed a newer version of mock provided by EPEL. For that package, the config files are changed from /etc/mock/centos-7-x86_64.cfg to /etc/mock/epel-7-x86_64.cfg and are provided by the RPM package mock-core-configs. The newer mock seems to be for the transition to dnf and python3, so they may have removed architectures that they no longer wanted to maintain. If you want the full list of configs that you referenced, you may need to downgrade to the CentOS version of mock.
What is the mock template for centos-7?
I have inherited a software project which builds a set of RPMs to be installed on a RHEL server. When I attempt to install the packages on a server, I get a "transaction check vs depsolve" error saying the package requires libc.so.6. I have found that the error will go away if I install glibc.i686. The problem here is that this package is supposed to be for the x86_64 architecture and shouldn't depend on 32-bit libraries. Is there a way I can find what is triggering the error? All of the binaries in the package I have checked so far are built for x86_64.
I ended up extracting the RPM and using a one-liner to find the offending binaries:

find . -print0 | xargs -0 file | grep 'ELF 32'

This listed all of the 32-bit binaries in the directory.
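When file(1) isn't available (e.g. in a minimal chroot), the same check can be done by reading the ELF class byte directly; this sketch classifies files by offset 4 of the header (1 = 32-bit, 2 = 64-bit), using a scratch copy of a known binary since the RPM contents aren't reproduced here:

```shell
# Classify ELF files in a tree by the EI_CLASS byte instead of file(1)
dir=$(mktemp -d)
cp /bin/ls "$dir/"
for f in $(find "$dir" -type f); do
  class=$(od -An -j4 -N1 -tu1 "$f" | tr -d ' ')
  case "$class" in
    1) echo "$f: ELF 32-bit" ;;
    2) echo "$f: ELF 64-bit" ;;
  esac
done
```

Pointed at an extracted RPM tree, any "ELF 32-bit" line is an offender on an x86_64-only package.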
How can I see what is triggering a "transaction check vs depsolve" error?
I am back-porting the Debian package for openldap to jessie and have run into some problems with our local Debian repository. Using git-buildpackage, building the Debian package goes fine, but when I get to the dput step I get an error. We have a local Debian repository where I am uploading this package; it uses reprepro. The first part of the dput works, but the second part fails:

Checking signature on .changes
gpg: Signature made Fri Feb 10 09:17:41 2017 PST using RSA key ID 53913E0C
gpg: Good signature from "Horace Linxster <[email protected]>"
Good signature on /srv/scratch/hlinxster/openldap/build-area/openldap_2.4.44+dfsg-3.1_amd64.changes.
Checking signature on .dsc
gpg: Signature made Fri Feb 10 09:17:28 2017 PST using RSA key ID 53913E0C
gpg: Good signature from "Horace Linxster <[email protected]>"
Good signature on /srv/scratch/hlinxster/openldap/build-area/openldap_2.4.44+dfsg-3.1.dsc.
Uploading to local (via scp to debian-local.example.com):
openldap_2.4.44+dfsg-3.1.dsc                  100% 2612   2.6KB/s   00:00
openldap_2.4.44+dfsg-3.1.debian.tar.xz        100%  153KB 152.9KB/s 00:00
slapd_2.4.44+dfsg-3.1_amd64.deb               100% 1401KB   1.4MB/s 00:00
slapd-smbk5pwd_2.4.44+dfsg-3.1_amd64.deb      100%   88KB  87.8KB/s 00:00
ldap-utils_2.4.44+dfsg-3.1_amd64.deb          100%  188KB 188.0KB/s 00:00
libldap-2.4-2_2.4.44+dfsg-3.1_amd64.deb       100%  218KB 218.5KB/s 00:00
libldap-common_2.4.44+dfsg-3.1_all.deb        100%   83KB  82.6KB/s 00:00
libldap-2.4-2-dbg_2.4.44+dfsg-3.1_amd64.deb   100%  454KB 454.2KB/s 00:00
libldap2-dev_2.4.44+dfsg-3.1_amd64.deb        100%  324KB 323.8KB/s 00:00
slapd-dbg_2.4.44+dfsg-3.1_amd64.deb           100% 4803KB   4.7MB/s 00:00
openldap_2.4.44+dfsg-3.1_amd64.changes        100% 4409   4.3KB/s   00:00
Successfully uploaded packages.
file 'openldap_2.4.44+dfsg.orig.tar.gz' is needed for 'openldap_2.4.44+dfsg-3.1.dsc', not yet registered in the pool and not found in 'openldap_2.4.44+dfsg-3.1_amd64.changes'
There have been errors!
Error: post upload command failed.
It is true that openldap_2.4.44+dfsg.orig.tar.gz is not included in the .changes file; the only tar file listed in the .changes file is openldap_2.4.44+dfsg-3.1.debian.tar.xz. What do I need to do during the package build process to ensure that the tar file is listed in the .changes file properly?
You need to tell dpkg-genchanges to include the original source, using its -sa option. You can give the option to git-buildpackage and it will pass it on: gbp buildpackage -sa (or git-buildpackage -sa with the Jessie version). You only need to do that the first time you upload a given upstream version to a repository; if the version is "obviously" a new upstream (-1 or -0.1) then dpkg-genchanges figures it out on its own.
Missing tar file in changes file after Debian package build
I am working on a Debian package which usually installs into /tftpboot/linux/. This package is also distributed on UCS (Univention Corporate Server), a Debian-based server system. They need these files in another directory (/var/lib/univention-client-boot). How do I adapt the corresponding debian files to make the package recognize if the system is UCS and then move the files to that directory, or link the directories, during installation of this deb file?
You could do this in a .postinst script: check to see if it is being installed on a UCS server and create the required directory structure (under /var/lib/univention-client-boot) and symlinks. Note that if you want to follow Debian policy, the symlinks should be made relative (to the directory containing the symlinks), not with absolute paths. For a private package, strict adherence to Debian policy isn't necessary. You should also have a .postrm or .prerm script to remove the symlinks (and the directories, if they are empty) when the package is removed.
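The relative-symlink part is the only fiddly bit; here it is demonstrated in a temp tree standing in for the real filesystem (on a real system the paths would be /tftpboot/linux and /var/lib/univention-client-boot, and the file name is invented):

```shell
# Link a boot file into the UCS directory with a relative target
root=$(mktemp -d)
mkdir -p "$root/tftpboot/linux" "$root/var/lib/univention-client-boot"
echo kernel > "$root/tftpboot/linux/vmlinuz"
# Three levels up from univention-client-boot reaches the root of the tree
ln -s ../../../tftpboot/linux/vmlinuz "$root/var/lib/univention-client-boot/vmlinuz"
cat "$root/var/lib/univention-client-boot/vmlinuz"
```

A .postinst would loop over the files in /tftpboot/linux and create one such link per file, guarded by whatever UCS-detection check you choose.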
Install deb Package files in another directory
I'm trying to merge the build configuration for different versions of Debian (as it's hard to maintain when separated). There are only a few minor changes, but the main issue I run into is that I don't know how to detect the current version of Debian inside the "rules" file. Do I just parse the /etc/debian_version file, or is there a more sensible approach?
You should use /usr/bin/lsb_release for that. lsb_release -rs is probably what you want.
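In debian/rules you could capture that with $(shell lsb_release -rs) and branch on it. As plain shell, a sketch with a fallback chain (the fallback to /etc/debian_version and then "unknown" is my own addition, in case lsb_release isn't installed in the build environment):

```shell
# Detect the release: prefer lsb_release, fall back to /etc/debian_version
release=$( { lsb_release -rs 2>/dev/null || cat /etc/debian_version 2>/dev/null || echo unknown; } | head -n1 )
echo "$release"
```

Note that lsb_release comes from the lsb-release package, so it would need to be a build dependency if you rely on it in rules.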
Detect debian version in "rules" package file
I'm looking for a way of having my package searchable by other Debian users from the command line. I don't want to go through the process of having to get a sponsor. Is there some kind of community repo where I could self publish it to like Arch has?
No, there is not. You could create your own repository, like many projects do, but that alone won't help your visibility.
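If you do go the self-hosted route, the heart of a reprepro setup is the conf/distributions file; a minimal sketch (the codename, origin and signing choices here are assumptions, not requirements):

```
Origin: example.org
Label: My packages
Codename: bookworm
Architectures: amd64 source
Components: main
Description: Self-published Debian packages
SignWith: default
```

Users then add a matching deb line to their APT sources and install with apt as usual; discoverability is still entirely up to you.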
Is there an Arch community repository system for Debian?
Is there a good reason why apt-get remove leaves installed cron files in place where an apt-get purge or apt-get remove --purge is actually required to completely remove them? Example files may be: /etc/cron.d/<packagename>, or /etc/cron.hourly/<packagename> The man pages and everything else I've seen seems to indicate that only configuration files should remain after a remove command, and purge will only remove those configuration files in addition to the package. If these are considered configuration files, then why? Is it possible to have customised (/configured) versions of these files based on the installation?
Yes, those files are considered configuration files. Generally, (at least) everything in /etc is considered a configuration file in Debian; that's why it takes a purge to remove them. The reason they are considered configuration files is that anything the system administrator is reasonably expected to customize or edit should be treated as a configuration file, and that generally includes anything in /etc, and especially a crontab file.
Why does apt-get remove leave package-installed cron files lying around?
1,687,753,520,000
The guidelines for Fedora remixes suggest replacing the packages fedora-logos, fedora-release, and fedora-release-notes with the generic-* equivalents when making a remix. Most tutorials online focus on creating a live-cd remix; I'm interested in creating a VM, without making a live CD first. I tried the following: yum install generic-logos generic-release generic-release-notes: the generic-* packages conflict with the fedora-* packages, preventing installation. yum erase fedora-logos fedora-release fedora-release-notes: this removes about 200MB of dependencies, most of which I want to keep. rpm -e --no-deps fedora-logos fedora-release fedora-release-notes: this completes successfully, but unsets the $releasever variable that yum uses, causing it to complain when I try to run yum install generic-release Could not parse metalink https://mirrors.fedora.org/metalink?repo=fedora-$releasever/&arch=x86_64 error was No repomd file How can I replace the fedora-logos, fedora-release and fedora-release-notes packages with their generic equivalents in a VM, without creating a live CD first?
What about this? # yum shell > remove fedora-logos fedora-release fedora-release-notes > install generic-logos generic-release generic-release-notes > run --> Running transaction check ---> Package fedora-logos.x86_64 0:21.0.5-1.fc21 will be erased ---> Package fedora-release.noarch 0:21-2 will be erased --> Processing Dependency: fedora-release = 21-2 for package: fedora-release-nonproduct-21-2.noarch ---> Package fedora-release-notes.noarch 0:21.08-1.fc21 will be erased ---> Package generic-logos.noarch 0:17.0.0-6.fc21 will be installed ---> Package generic-release.noarch 0:21-7 will be installed --> Processing Dependency: system-release-product for package: generic-release-21-7.noarch ---> Package generic-release-notes.noarch 0:21-7 will be installed --> Running transaction check ---> Package fedora-release-nonproduct.noarch 0:21-2 will be erased ---> Package generic-release-cloud.noarch 0:21-7 will be installed --> Finished Dependency Resolution ============================================================================== Package Arch Version Repository Size ============================================================================== Installing: generic-logos noarch 17.0.0-6.fc21 fedora 615 k generic-release noarch 21-7 fedora 14 k generic-release-notes noarch 21-7 fedora 12 k Removing: fedora-logos x86_64 21.0.5-1.fc21 installed 8.8 M fedora-release noarch 21-2 installed 4.1 k fedora-release-notes noarch 21.08-1.fc21 installed 603 k Installing for dependencies: generic-release-cloud noarch 21-7 fedora 12 k Removing for dependencies: fedora-release-nonproduct noarch 21-2 installed 1.0 k Transaction Summary ============================================================================== Install 3 Packages (+1 Dependent package) Remove 3 Packages (+1 Dependent package) Total download size: 653 k Is this ok [y/d/N]: Or maybe downloading these generic... 
packages and, after the rpm -e --no-deps fedora-logos fedora-release fedora-release-notes you mentioned, installing them via rpm again - something like this: rpm -ivh generic-*. After this you probably want to check all is fine with package-cleanup --problems (a utility from the yum-utils package).
Installing generic-logos and generic-release on Fedora
1,687,753,520,000
We're using HP DataProtector for our backup environment. The installation method leaves something to be desired, and we're attempting to automate it in such a way that it makes our Unix admins cringe less often. We're a SLES/OpenSUSE shop, so we're attempting to make up a YUM repository with the DP patches. I can make the repo just fine, it's just that the patch RPMs aren't configured right. The 'Revision' field in the RPM is not set correctly, they're all "1" even though the master RPM I pulled them out of is correctly incrementing. I would really like to be able to rebuild these RPMs with the correct Revision, as that would allow the normal update process to deal with these patches instead of the strange way HP wants to handle these. The strange way HP wants to handle these requires: Setting up an Installation Server with all of the software. No problem. Allowing root to ssh into client stations to install software that affects xinetd config Which in turn requires a passwordless SSH public-key to be placed on all target machines so the install process can remote in w/o prompting. Before any deployments can be made, each client must be manually SSHed to by root on the repo server in order to populate known_hosts Since we don't allow root logins via SSH, every time we get a patch we have to touch each server's sshd_config to allow them temporarily. We've also proven that after the initial install, subsequent patches can be installed via rpm just peachy. So, we'd like to get that into a YUM repo if at all possible.
Rather than re-package the existing RPM, inspired by HP I packaged it in an additional RPM. The new RPM is very simple in that it just has the single patch-RPM inside it, and invokes the rpm command to install it.
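A sketch of that wrapper idea is below; every name in it is hypothetical. Note that invoking rpm from a %post scriptlet is generally discouraged, because the RPM database is locked during the transaction, so treat this purely as an illustration of the approach described:

```
# Hypothetical wrapper spec carrying a vendor patch RPM
Name:     dp-patch-wrapper
Version:  1.0
Release:  1
Summary:  Carries and applies a DataProtector patch RPM
License:  Proprietary
Source0:  vendor-patch.rpm

%description
Wrapper that ships the vendor patch RPM and applies it after installation.

%install
mkdir -p %{buildroot}/opt/dp-patches
cp %{SOURCE0} %{buildroot}/opt/dp-patches/

%post
rpm -Uvh /opt/dp-patches/vendor-patch.rpm || :

%files
/opt/dp-patches
```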
Repackaging RPMs
1,687,753,520,000
I'm trying to understand how the creation of packages (rpm, deb, dpkg) works and what the architecture supports and doesn't. Right now I struggle figuring out what happens when the installation or upgrade of a package fails at different points of the process (i.e., error in a scriptlet, not enough disk space - is this checked before starting?). From my current understanding, there's no automatic rollback to a previous working version if there was any. So my question would be, how do packages deal with these scenarios? Aren't scriptlets used at all to back up files and restore them post transaction if some error occurred? (I couldn't find examples so far) Thank you.
For Debian (and typically, derivatives), this is described in the Debian Policy chapter on maintainer scripts and the corresponding flowcharts. Errors are handled in combination by dpkg (the tool which handles package extraction etc.) and maintainer scripts. When upgrading a package: the existing package’s pre-removal script is called the new package’s pre-installation script is called the new package’s files are unpacked the existing package’s post-removal script is called the existing package’s files are removed the new package’s post-installation script is called At any point, errors are handled, and various maintainer scripts are invoked with “undo” parameters to revert the changes made in previous steps. Ultimately, the package can end up in one of the following states: fully installed in the new version (when everything goes well) fully installed in the existing version (when something fails but all the changes could be backed out) unpacked, needing configuration failed, needing re-installation For many packages, the default handling is sufficient, and no help from maintainer scripts is required. Other packages are much more complex, e.g. those with databases and schema changes between versions (see the slapd maintainer scripts for example); in some cases they won’t actually handle aborted upgrades themselves, and will instead leave a backup of the existing state and ask the administrator to sort the situation out.
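To make the "undo" parameters concrete, here is a minimal, self-contained sketch of a postinst skeleton; the package logic is a placeholder, but the argument names (configure, abort-upgrade, ...) are the ones dpkg actually passes:

```shell
# Write a skeleton postinst to a temp file and exercise its "configure" path.
cat > /tmp/demo-postinst <<'EOF'
#!/bin/sh
set -e
case "$1" in
    configure)
        : # normal post-installation setup would go here
        ;;
    abort-upgrade|abort-remove|abort-deconfigure)
        : # dpkg calls these to back out changes after a failed step
        ;;
esac
EOF
sh /tmp/demo-postinst configure && echo "postinst ok"
```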
Package installation failure and rollback options
1,687,753,520,000
When writing a .spec file for Fedora, I ran into a problem. I can't seem to be able to do fedpkg mockbuild at all. No matter what source I use, HTTPS or local, I keep running into this error: Failed to get repository name from Git url or pushurl Failed to get ns from Git url or pushurl Could not execute mockbuild: ('Could not download sources: %s', AttributeError("'NoneType' object has no attribute 'head'")) What's going on? The relevant part of my .spec file: Name: purple-telegram-tdlib # The main maintainer has not merged #154 for TDLib 1.8.0 Version: 0.8.1-BenWiederhake Release: 1%{?dist} Summary: New libpurple plugin for Telegram License: GPLv2 URL: https://github.com/ars3niy/tdlib-purple Source0: tdlib-purple-BenWiederhake-master.zip BuildRequires: gcc-c++ BuildRequires: git BuildRequires: make BuildRequires: cmake BuildRequires: tdlib-devel == 1.8.0 BuildRequires: tdlib-static == 1.8.0 BuildRequires: libpurple-devel BuildRequires: libwebp-devel BuildRequires: libpng-devel BuildRequires: gettext-devel
You are using the character - in the version. According to the specification: The version string consists of alphanumeric characters, which can optionally be segmented with the separators ., _ and +, plus ~ and ^ (see below). Tilde (~) can be used to force sorting lower than base (1.1~201601 < 1.1). Caret (^) can be used to force sorting higher than base (1.1^201601 > 1.1). These are useful for handling pre- and post-release versions, such as 1.0~rc1 and 2.0^a. Don't confuse the tilde (~) for the dash (-)! The dash is not a valid character. In this case, you should be using ^ instead, like so: Version: 0.8.1^BenWiederhake It builds after that change.
Could not execute mockbuild: Could not download sources
1,660,032,310,000
I'm aiming to translate a Debian package to an RPM package to install it on a CentOS Linux 7 (Red Hat). I used alien to accomplish it: alien --to-rpm --scripts --keep-version --generate debian_pkg.deb. I use the --generate flag to create a directory for building a package from, because I want to add the runtime dependencies to the spec file. To do so, I add this line: Requires: nodejs tomcat8 java-1.8.0-openjdk java-1.8.0-openjdk-devel. Then I try to create the package: rpmbuild -ba <package_name>.spec, but it ends abruptly with this error: Processing files: <package_name> error: Directory not found: /root/rpmbuild/BUILDROOT/<package_name>/srv error: Directory not found: /root/rpmbuild/BUILDROOT/<package_name>/srv/tmp error: File not found: /root/rpmbuild/BUILDROOT/<package_name>/srv/tmp/file.tar.gz error: File not found: /root/rpmbuild/BUILDROOT/<package_name>/usr/share/doc/frontend/README.Debian error: File not found: /root/rpmbuild/BUILDROOT/<package_name>/usr/share/doc/frontend/changelog.Debian.gz error: File not found: /root/rpmbuild/BUILDROOT/<package_name>/usr/share/doc/frontend/copyright RPM build errors: Directory not found: /root/rpmbuild/BUILDROOT/<package_name>/srv Directory not found: /root/rpmbuild/BUILDROOT/<package_name>/srv/tmp File not found: /root/rpmbuild/BUILDROOT/<package_name>/srv/tmp/file.tar.gz File not found: /root/rpmbuild/BUILDROOT/<package_name>/usr/share/doc/frontend/README.Debian File not found: /root/rpmbuild/BUILDROOT/<package_name>/usr/share/doc/frontend/changelog.Debian.gz File not found: /root/rpmbuild/BUILDROOT/<package_name>/usr/share/doc/frontend/copyright I searched the internet and found that it's linked to the %install section and more specifically %{buildroot}, but I can't get my head around the problem and fix it. Can somebody give me a hand? Thanks! 
UPDATE Here is the spec file in essence: Buildroot: /home/<package_dir> Version: 1.0 Release: 849 Distribution: Debian Group: Converted/misc Requires: nodejs tomcat8 java-1.8.0-openjdk java-1.8.0-openjdk-devel %define _rpmdir ../ %define _rpmfilename %%{NAME}-%%{VERSION}-%%{RELEASE}.%%{ARCH}.rpm %define _unpackaged_files_terminate_build 0 %pre # some shell script %post # some shell script %install mkdir -p %{buildroot}/usr/share/doc/ mkdir -p %{buildroot}/usr/share/doc/frontend/ %files %dir "/srv/" %dir "/srv/tmp/" "/srv/tmp/file.tar.gz" %dir "/usr/" %dir "/usr/share/" %dir "/usr/share/doc/" %dir "/usr/share/doc/frontend/" "/usr/share/doc/frontend/README.Debian" "/usr/share/doc/frontend/changelog.Debian.gz" "/usr/share/doc/frontend/copyright"
UPDATE: The problem you're having is with the Buildroot: tag in the specfile. In modern systems (and perhaps yours included), Buildroot: in a spec file is no longer supported and it's now ignored. See this post on LinuxQuestions about that: Fedora (as of F-10) does not require the presence of the BuildRoot tag in the spec and if one is defined it will be ignored. You can work around this by passing rpmbuild a --buildroot argument so it uses /home/<package_dir>. (It's possible this might have adverse side effects such as removing those contents after the build is done, which apparently is also the default now.) In fact, passing rpmbuild an explicit --buildroot is what alien started doing since rpm 4.7.0 started ignoring Buildroot:, as you can see in this commit. The specfile isn't really installing any sources or creating any files. The only thing happening in the %install section is creating an (empty) /usr/share/doc/frontend/ directory. Since the specfile starts by specifying Buildroot: /home/<package_dir>, I imagine it expected that directory to be previously populated, so that the rpmbuild step would be able to simply pick up the already staged contents from there and just package them. If you run alien again from the same .deb (in other words, start over), do you get a /home/<package_dir> that is populated with e.g. srv/tmp/file.tar.gz? If so, rpmbuild will work when you run it at that point. It is possible some macro in your rpmbuild is cleaning %{buildroot} after building the rpm (though I'd say that's unusual, since typically that requires a %clean section in your specfile.) Check whether that's the case, if right after a first (successful) rpmbuild, the files under /home/<package_dir> are gone, in which case further runs of rpmbuild will fail as you describe...
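In command form, the workaround sketched above would look something like this (using the same placeholders as the question):

```
rpmbuild -bb --buildroot /home/<package_dir> <package_name>.spec
```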
CentOS 7 - problem encountered with the setup of spec file during creation of RPM package
1,660,032,310,000
Here's a question and answers on what delta RPMs actually are: What is DRPM and How does it differ from RPM? I'm interested in the algorithms and data structures server side. I've Googled but nothing useful shows up. Do they (Red Hat) generate the requested delta on the fly, or are all possible deltas pregenerated and made available as soon as a new RPM comes out?
Deltas are generated when the repo is created/updated. Usually: createrepo_c --update --deltas --num-deltas 1 . For more information see man createrepo_c and man makedeltarpm
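For a single pair of packages, the underlying tools can also be invoked directly; the file names here are hypothetical:

```
# Create a delta between two builds of the same package ...
makedeltarpm foo-1.0-1.x86_64.rpm foo-1.1-1.x86_64.rpm foo-1.1-1.x86_64.drpm
# ... and reconstruct the new rpm from the delta on the client side:
applydeltarpm foo-1.1-1.x86_64.drpm foo-1.1-1.x86_64.rpm
```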
How do delta RPMs work at the server end?
1,660,032,310,000
I'd like to install libuv on a Ubuntu WSL instance, and I specifically need version 1.45.0 or later. My understanding (from this tutorial article) is that the command to find out what versions of a package are available to install is apt list | grep: $ apt list | grep libuv WARNING: apt does not have a stable CLI interface. Use with caution in scripts. libuv1-dev/jammy,now 1.43.0-1 amd64 [installed] libuv1/jammy,now 1.43.0-1 amd64 [installed,automatic] libuvc-dev/jammy 0.0.6-1.1 amd64 libuvc-doc/jammy 0.0.6-1.1 all libuvc0/jammy 0.0.6-1.1 amd64 ...which leads me to think only 1.43.0-1 is available for installation by apt-get, and that it's already installed. But libuv's GitHub site indicates that there are newer releases. How can I get libuv v1.45.0 (or later) on my Ubuntu instance with apt-get? Although my immediate need is specific to libuv, I actually want to learn about this aspect of the Unix/Linux ecosystem in general: what is the relationship between drivers/packages/etc. that seem to be "released" (e.g. according to their GitHub pages) vs. what is "available" to package-managers, like apt-get? What should users do if they want a newer version of a package that isn't available from their package-manager? Should they download source and compile locally? Update: Why do I need 1.45.0, i.e. why do I need a version more advanced than what's available from my package manager? My Linux box is a development environment, where I need to compile to host (i.e. we can disregard cross-compiling for the context of this question). The application I need to compile (not written by me) has a reliance on uv_timespec64_t, which was apparently introduced in libuv v1.45.0 (per this git merge/diff). Thus, that is the premise of this question: I need to compile (to the host) an application that has a dependency on a feature from a newer version of libuv than my Linux distro's package manager offers.
Update: This question has a related, follow-up question here: Why do different Linux distros have different package formats (and package managers)?
Trunks and Trees While @TechLoom answered your question regarding libuv, allow me to answer your question regarding the Linux ecosystem. Let's start with the Linux distribution family tree. Open that up in a new browser tab, and note that there are 5 major trunks: Slackware Debian Red Hat, who went corporate, and the free variant is referred to as Fedora Enoch, who had a very short life, and became Gentoo Arch Each of those branches, and hence all the child branches are defined mainly by the package management system in use. The base Slackware uses zip/unzip/tar. The package is unzipped and compiled manually Debian, of which Ubuntu is a child, uses APT - the Advanced Packaging Tool IIRC Red Hat/Fedora uses RPM - RPM Package Manager Enoch and its children compile everything from scratch using scripting. Imagine Slackware, but automated and configurable Arch is a hybrid, that has held up really well over the years. Users can have a completely binary packaged system, i.e. packaged like Debian, or a completely compiled system like Enoch, or a mixture of both. Now, as for how the ecosystem is connected. Each package manager is configured to connect only to its specified package tree, also called a repository. Debian-based systems connect to APT-based trees, where you can download and install .deb files. Each of these package managers allows users to install custom trees. Ubuntu-based systems call them PPAs, or Personal Package Archives. Release Cycles Each distribution, regardless of the package manager, has a release cycle. Release cycles can be thought of as "locked repositories" or using the tree analogy: "healthy trees that don't grow anymore." The repositories only contain packages that belong to that certain release.
In general, packages installed by the release are never altered, and only updated in rare cases, i.e., when a critical bugfix is released (which is why you cannot find libuv-1.45 in the jammy release), but if you look in mantic and noble, the next few codenames for Future Releases, you can see the package has been bumped (see Version Bumps below). Software like browsers and mail clients are generally released to the contrib (short for contributed) repository, which doesn't interfere with the main repository. In Ubuntu's terminology, I believe they call their contributed repository universe, which fits rather well (every piece of user software under the sun can be put there). Each added package is compiled against the libraries and tools in the main repository, packaged and uploaded to the universe repository. This imitates the "locked repository" concept for software added by users or companies. In Ubuntu's case the release cycle generally occurs twice a year (once in April, and once in October, barring a minor version update [the 3rd version number]). You can see this by looking at the Ubuntu Release List. The accepted practice is to only use packages available in your release until that package is updated by the maintainer. Note that some distributions use a rolling release model, specifically those that are source based. I can issue an update command on my source compiled Gentoo box, rebuild packages, and an hour later perform the same update and possibly update the same packages again. Even our friends, or enemies depending on your preference, over at Microsoft have adopted a Release Model starting with Windows 10. The current release of Windows 11 is versioned as 23H2, or the 2nd half of 2023. The first major version of Windows 10 I remember was 1903, referring to March 2019. While Microsoft's "tree" isn't publicly available it is used to integrate updates based on bug fixes and user requests into future versions of Windows.
Why Version Locking Is Important To understand why the locked concept is so important, imagine you owned a BMW anything. BMWs are mostly all hand built and hand tuned. Let's say that we decide we want to add Feature X from the 2025 model to our model we bought in 2023 without talking to BMW first. We buy the part needed to enable Feature X from the web We go to install that part in the Dashboard and hook into the electrical system, and realize that the part for Feature X requires a special connector. We attempt to connect Feature X's part without the connector and end up damaging our 6-figure car We contact BMW, and tell them we tried to connect Feature X, but did more damage than upgrading. BMW sends us a connector and tells us to have the authorized mechanic put everything together I bolded that last bullet to tie all this together. In the above "upgrade" BMW is the maintainer of the "locked repository." BMW owns or can access all the parts for most all of its cars when asked. The special connector doesn't exist outside the tree. In the same way you can only add software or upgrades to your Ubuntu version, a.k.a. the cars in this story, if the software exists in the repository. Mixing repositories from different versions causes complete breakage. If Feature X needs to be added or upgraded in a previous version, the software is backported, and gets added to that particular version's main repository (the mechanic), and the version continues operating as intended (the car is repaired). Version Bumps Version bumps, i.e., going from 1.43 to 1.45, are usually done as follows: The maintainer notices the new release, or a user files a version bump request with the maintainer The requested version is sent to the testing release to test for stability.
In Ubuntu, this is known as the Future Release. The package is tested, and either revoked or approved and released when the Future Release becomes the Current Release. LTS Releases For users that need stability, some distributions use the concept of a Long Term Support (LTS) Release. These are usually preferred in areas with mission-critical applications like a web server, firewall, or domain controller.
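Coming back to the asker's immediate need: when the packaged version is too old, building the library from source is the usual fallback. A sketch for libuv (installing under /usr/local so it stays out of apt's way; assumes git, cmake and a compiler toolchain are present):

```
git clone --branch v1.45.0 --depth 1 https://github.com/libuv/libuv.git
cd libuv
mkdir -p build && cd build
cmake .. -DCMAKE_INSTALL_PREFIX=/usr/local
make -j"$(nproc)"
sudo make install
sudo ldconfig   # refresh the dynamic linker cache
```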
How to install package versions not available with apt-get?
1,660,032,310,000
Unison is cross-platform file synchronisation software. The latest version available on Debian is outdated: Debian has version 2.40.102-2 while other systems have version 2.48.3. The thing is, Unison will not work between two machines that use different versions, so Unison 2.48 on Mac OS will not sync with a machine using version 2.40 like Debian for instance. This makes Unison on Debian and Ubuntu useless for cross-platform synchronization. Of course, one can download the source from Unison and build it on Debian, but this isn't convenient compared to a simple apt-get install unison, as straightforward as the build is. So my question is, as it looks like the maintainers don't update the Unison package anymore, what can I do to help? I've never been involved in packaging for Debian, and I have no reputation at all in the Debian community. All I have is an updated Unison binary that I'd like to make available for a simple apt-get install.
You don't need reputation in the Debian community to help out, just the willingness to help, and patience while you wait for the current maintainer to react. Given that Unison officially still has a maintainer in Debian, you should indicate your interest on the existing bug, and email the maintainer (both the OCaml maintainers' mailing list and Stéphane) to offer to help. You probably know this already, but for general reference, there's a lot of documentation available you should read, starting from How to join. Unison is team-maintained so you should also read the corresponding wiki page. If you have a package ready, you can upload it to http://mentors.debian.net/ so that others can review it, and point the current maintainer to that for review as well.
Update Unison in the default Debian repositories
1,660,032,310,000
Is there a service that guides how to wrap your software for all flavors of Linux, and provides a build farm that cross-compiles them?
The Open Build Service does just that: it allows you to build packages for all the main Linux distributions, on all the major architectures. You can build .deb, .rpm or .pkg packages, on RHEL, Fedora, CentOS, Debian, Ubuntu, Suse, OpenSuse, Arch Linux etc., on x86, ARM, PowerPC, MIPS...
Collective service to build packages for Linux
1,660,032,310,000
Here is my codelite.spec file. It compiles the sources fine, but then it gives this error (and yes I am including a few extra lines to give context): Processing files: codelite-10.0-1.fc25.x86_64 Executing(%doc): /bin/sh -e /var/tmp/rpm-tmp.zYPKNH + umask 022 + cd /home/username/rpmbuild/BUILD + cd codelite-10.0 + DOCDIR=/home/username/rpmbuild/BUILDROOT/codelite-10.0-1.fc25.x86_64/usr/share/doc/codelite + export DOCDIR + /usr/bin/mkdir -p /home/username/rpmbuild/BUILDROOT/codelite-10.0-1.fc25.x86_64/usr/share/doc/codelite + cp -pr AUTHORS /home/username/rpmbuild/BUILDROOT/codelite-10.0-1.fc25.x86_64/usr/share/doc/codelite + cp -pr LICENSE /home/username/rpmbuild/BUILDROOT/codelite-10.0-1.fc25.x86_64/usr/share/doc/codelite + cp -pr COPYING /home/username/rpmbuild/BUILDROOT/codelite-10.0-1.fc25.x86_64/usr/share/doc/codelite + exit 0 Finding Provides: /bin/sh -c " while read FILE; do echo "${FILE}" | /usr/lib/rpm/rpmdeps -P; done | /bin/sort -u " Finding Requires(interp): Finding Requires(rpmlib): Finding Requires(verify): Finding Requires(pre): Finding Requires(post): Finding Requires(preun): Finding Requires(postun): Finding Requires(pretrans): Finding Requires(posttrans): Finding Requires: /bin/sh -c " while read FILE; do echo "${FILE}" | /usr/lib/rpm/rpmdeps -R; done | /bin/sort -u | /usr/bin/sed -e 'libcodeliteu.so; libpluginu.so; libwxscintillau.so; libwxsqlite3u.so;'" /usr/bin/sed: -e expression #1, char 2: extra characters after command Provides: application() application(codelite.desktop) codelite = 10.0-1.fc25 codelite(x86-64) = 10.0-1.fc25 libdatabaselayersqlite.so()(64bit) liblibcodelite.so()(64bit) libplugin.so()(64bit) libwxshapeframework.so()(64bit) libwxsqlite3.so()(64bit) mimehandler(application/x-codelite-project) mimehandler(application/x-codelite-workspace) Requires(interp): /bin/sh /bin/sh /bin/sh Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PartialHardlinkSets) <= 4.0.4-1 
rpmlib(PayloadFilesHavePrefix) <= 4.0-1 Requires(post): /bin/sh Requires(postun): /bin/sh Requires(posttrans): /bin/sh Processing files: codelite-debuginfo-10.0-1.fc25.x86_64 error: Empty %files file /home/username/rpmbuild/BUILD/codelite-10.0/debugfiles.list RPM build errors: Empty %files file /home/username/rpmbuild/BUILD/codelite-10.0/debugfiles.list There's only one sed command in this spec file, on line 87, which is in the %build macro, but this error occurs later, around the time %files is processed. Any ideas where this sed error comes from? I have tried the following efforts to fix this error: Removing L118-120 Removing L122 and changing L127 from %files -f %{name}.lang to %files. Neither attempt has succeeded, or even changed the error message I get. I am building this package locally (with rpmbuild -ba codelite.spec) on my 64-bit Fedora 25 system.
It's the %filter_from_requires on line 60 that is wrong. According to EPEL:Packaging Autoprovides and Requires Filtering: The %filter_from_requires macro is used to filter "requires"; it does for requires what the %filter_from_provides macro does for provides and is invoked in the same fashion. Regarding the %filter_from_provides macro, it says This macro can be fed a sed expression to filter from the stream of auto-found provides. On line 60, you do not provide a sed expression. I guess you could use %filter_from_requires /lib\(codelite\|plugin\|wxscintilla\|wxsqlite3\)u\.so/d ... or something similar.
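To see what the corrected filter expression actually does, you can feed sed some sample "requires" lines by hand; the input list here is made up:

```shell
# The /.../d sed command deletes matching lines: the bundled private
# libraries are filtered out, ordinary requires pass through.
printf 'libcodeliteu.so\nlibc.so.6\nlibwxsqlite3u.so\n' \
  | sed '/lib\(codelite\|plugin\|wxscintilla\|wxsqlite3\)u\.so/d'
# only libc.so.6 remains
```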
How do I fix this spec file: it keeps giving me sed errors yet the only sed is long before the error occurs?
1,660,032,310,000
I am making a new package from a project that I've been working on for practice. I've been using git, and I've noticed similarities between it and the Debian packaging system. What's confusing me is when I make a change to any of the files and do not manually update the .orig.tar.xz file, dpkg wants me to add a patch. Making patches is very annoying and the source code in the .orig is not updated so it's a nightmare to extract. Now if I manually update .orig.tar.xz as well as make a new entry in changelog, it seems to be much cleaner, and it increments the build count as well (i.e. 3.2-2 -> 3.2-3). When should I use a patch vs when should I update .orig and changelog?
For the now standard Debian source format 3.0 (quilt), the correct procedure when you make changes to the original/upstream source, is to add corresponding patches in the debian/patches directory, not to .orig.tar.xz. This is commonly done using quilt, but you can alternatively use a "proper" version control system like Git, if you want. The Debian build system will automatically recreate the .debian.tar.xz based on the contents of the debian directory (including the patches subdirectory). The .orig.tar.xz file should not be modified. It's the upstream source. And as for updating the changelog, that's up to you. Updating the changelog will increment the Debian version number. It has no direct bearing on patching the source.
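The quilt workflow can be sketched as follows; the patch and file names are hypothetical, and QUILT_PATCHES=debian/patches is the convention Debian packaging uses:

```
export QUILT_PATCHES=debian/patches
quilt new fix-typo.patch        # start a new patch on top of the series
quilt add src/main.c            # record the file before editing it
$EDITOR src/main.c              # make the change
quilt refresh                   # write the diff into debian/patches/fix-typo.patch
```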
Debian packaging: what's the difference between patches and changelog?
1,660,032,310,000
I'm trying to create a flatpak application for Instant Meshes; in particular, the manifest is the following: id: org.flatpak.InstantMeshes runtime: org.gnome.Platform runtime-version: "44" sdk: org.gnome.Sdk command: InstantMeshes desktop-file-name-suffix: " (Nightly)" finish-args: - --share=ipc - --socket=x11 - --socket=wayland - --filesystem=home - --device=dri modules: - name: zenity buildsystem: meson sources: - type: archive url: https://download.gnome.org/sources/zenity/3.41/zenity-3.41.0.tar.xz sha256: 19b676c3510e22badfcc3204062d432ba537402f5e0ae26128c0d90c954037e1 - name: build instant meshes buildsystem: cmake build-commands: - cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/app - make -j 4 - install -D "./Instant Meshes" /app/bin/InstantMeshes sources: - type: git url: https://github.com/wjakob/instant-meshes.git branch: master build-options: append-path: "/usr/lib/" - name: Intant Meshes buildsystem: simple build-commands: # - install /app/bin/zenity /usr/bin/zenity - install -D InstantMeshes.desktop /app/share/applications/org.flatpak.InstantMeshes.desktop - install -D instantmeshes.png /app/share/icons/hicolor/256x256/apps/instantmeshes.png ensure-writable: - /app/bin/zenity sources: - type: file path: InstantMeshes.desktop - type: file path: instantmeshes.png But at runtime I get an error when I try to open folders: sh: line 1: /usr/bin/zenity: No such file or directory That's because zenity is in the /app/bin path, but it is read-only and I cannot modify it from a basic flatpak manifest. Are there any possibilities to introduce zenity into the /usr/bin folder? Probably with runtimes?
Are there any possibilities to introduce zenity into the /usr/bin folder? Not really, the things that flatpak exports for usage belong in /app. You're solving the wrong end: The error here is that your software expects zenity in a specific folder. Fix that – and you're done. And fixing it is easy; find the script that starts with /usr/bin/zenity and replace that with /usr/bin/env zenity, done.
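The replacement can be scripted; here it is demonstrated on a throwaway launcher file (the real script name inside the flatpak build will differ):

```shell
# Create a stand-in launcher that hardcodes the zenity path ...
printf '#!/bin/sh\n/usr/bin/zenity --info "hi"\n' > /tmp/launcher.sh
# ... and rewrite it to resolve zenity through PATH instead.
sed -i 's|/usr/bin/zenity|/usr/bin/env zenity|' /tmp/launcher.sh
grep zenity /tmp/launcher.sh
# shows: /usr/bin/env zenity --info "hi"
```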
Put zenity into the flatpak /usr/bin folder
1,660,032,310,000
I am learning how to create deb packages for a small project of mine. I have been able to create the deb package for the binary. So far so good. After the process is finished I can see this: $ dpkg -c gitmod_0.10-1_amd64.deb drwxr-xr-x root/root 0 2024-06-01 13:57 ./ drwxr-xr-x root/root 0 2024-06-01 13:57 ./usr/ drwxr-xr-x root/root 0 2024-06-01 13:57 ./usr/bin/ -rwxr-xr-x root/root 31400 2024-06-01 13:57 ./usr/bin/gitmod drwxr-xr-x root/root 0 2024-06-01 13:57 ./usr/share/ drwxr-xr-x root/root 0 2024-06-01 13:57 ./usr/share/doc/ drwxr-xr-x root/root 0 2024-06-01 13:57 ./usr/share/doc/gitmod/ -rw-r--r-- root/root 154 2024-06-01 13:57 ./usr/share/doc/gitmod/changelog.Debian.gz -rw-r--r-- root/root 45 2024-06-01 13:57 ./usr/share/doc/gitmod/copyright I want to be able to generate the package for different releases of debian (or even other distros) so I'd like to be able to have packages like: gitmod_0.10-1_bullseye_amd64.deb gitmod_0.10-1_bookworm_amd64.deb So I need to be able to provide a suffix in a parameterized fashion (even if I need to use template files to generate the files used by debuild to generate the package). Is it possible to achieve this in a standard fashion?
Package names are codified: they contain the package name, version (including Debian revision), and architecture. So the only way to add a suffix in the style you’re after is to add it to the Debian revision. For example, you could specify 0.10-1+deb11u1 in debian/changelog for your Bullseye package, and 0.10-1+deb12u1 for your Bookworm package. This would produce packages in files gitmod_0.10-1+deb11u1_amd64.deb and gitmod_0.10-1+deb12u1_amd64.deb. For packages targeting different releases of a given distribution, it’s best to use a numeric version so that the package versions are sorted in a way that makes sense. If you use bookworm and bullseye suffixes, and a user ends up with both available through their repositories (e.g. during an upgrade from Debian 11 to 12), apt will prefer the bullseye version because it’s “higher” than the bookworm version. Using deb11u1 and deb12u1 avoids that. See What does the version string from dpkg/aptitude/apt-show-versions mean? for an example.
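The sorting pitfall is easy to see with sort -V, whose version comparison is close to dpkg's for strings like these (a rough illustration, not dpkg's exact algorithm):

```shell
# Codename suffixes sort alphabetically: bullseye comes *after* bookworm,
# so apt would treat the Debian 11 build as the higher version.
printf '0.10-1bookworm\n0.10-1bullseye\n' | sort -V

# Numeric suffixes sort in actual release order: deb11u1 < deb12u1.
printf '0.10-1+deb12u1\n0.10-1+deb11u1\n' | sort -V
```

On a Debian system you could verify the same orderings authoritatively with dpkg --compare-versions.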
creating deb packages - is there an standard way to setup a suffix to the final deb packages?
1,660,032,310,000
During the installation of mysql-server on Raspbian Buster, apt says that mysql-server isn't available and suggests mariadb-server-10.0 as a replacement. How does apt know if a package is replaced by another one? In other words, where should this information be set during package creation?

Package mysql-server is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
However the following packages replace it:
  mariadb-server-10.0
E: Package 'mysql-server' has no installation candidate
If you run apt show mariadb-server-10.3 (I have a newer version), you will see this:

Package: mariadb-server-10.3
Version: 1:10.3.27-0+deb10u1
Priority: optional
Section: database
Source: mariadb-10.3
Maintainer: Debian MySQL Maintainers <[email protected]>
Installed-Size: 66,6 MB
Provides: virtual-mysql-server
Pre-Depends: adduser (>= 3.40), debconf, mariadb-common (>= 1:10.3.27-0+deb10u1)
Depends: galera-3 (>= 25.3), gawk, iproute2, libdbi-perl, lsb-base (>= 3.0-10), lsof, mariadb-client-10.3 (>= 1:10.3.27-0+deb10u1), mariadb-server-core-10.3 (>= 1:10.3.27-0+deb10u1), passwd, perl (>= 5.6), psmisc, rsync, socat, debconf (>= 0.5) | debconf-2.0, libc6 (>= 2.28), libgnutls30 (>= 3.6.6), libpam0g (>= 0.99.7.1), libstdc++6 (>= 5.2), zlib1g (>= 1:1.2.0)
Recommends: libhtml-template-perl
Suggests: mailx, mariadb-test, netcat-openbsd, tinyca
Conflicts: mariadb-tokudb-engine-10.0, mariadb-tokudb-engine-10.1, mariadb-tokudb-engine-5.5, mysql-server-core-5.5, mysql-server-core-5.6, mysql-server-core-5.7, virtual-mysql-server
Breaks: cqrlog (<< 1.9.0-5~), mariadb-galera-server, mariadb-galera-server-10.0, mariadb-galera-server-5.5, mariadb-server-10.0, mariadb-server-10.1, mariadb-server-10.2, mariadb-server-5.5, mariadb-tokudb-engine-10.0, mariadb-tokudb-engine-10.1, mariadb-tokudb-engine-5.5, mysql-client-5.5, mysql-server-5.5, mysql-server-5.6, mysql-server-5.7
Replaces: mariadb-galera-server, mariadb-galera-server-10.0, mariadb-galera-server-5.5, mariadb-server-10.0, mariadb-server-10.1, mariadb-server-10.2, mariadb-server-5.5, mariadb-tokudb-engine-10.0, mariadb-tokudb-engine-10.1, mariadb-tokudb-engine-5.5, mysql-client-5.5, mysql-server-5.5, mysql-server-5.6, mysql-server-5.7, virtual-mysql-server
Homepage: https://mariadb.org/
Download-Size: 4 202 kB
APT-Manual-Installed: no
APT-Sources: http://ftp.cz.debian.org/debian buster/main amd64 Packages
Description: MariaDB database server binaries
 MariaDB is a fast, stable and true multi-user, multi-threaded SQL database
 server. SQL (Structured Query Language) is the most popular database query
 language in the world. The main goals of MariaDB are speed, robustness and
 ease of use.
 .
 This package includes the server binaries.

The line starting with Replaces: lists the packages which this one can replace.
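A minimal sketch of how the Replaces: relationship can be read out of such a control stanza — here with awk over a shortened, hand-written stanza rather than real apt output:

```shell
# A trimmed-down control stanza (hand-written sample, not real apt output).
cat > stanza.txt <<'EOF'
Package: mariadb-server-10.3
Provides: virtual-mysql-server
Replaces: mariadb-server-10.2, mysql-server-5.7, virtual-mysql-server
Homepage: https://mariadb.org/
EOF

# Print each package name listed in the Replaces: field on its own line.
awk -F': ' '/^Replaces:/ { n = split($2, pkgs, ", "); for (i = 1; i <= n; i++) print pkgs[i] }' stanza.txt
```

This is essentially what apt does when it suggests replacements: it scans the Replaces/Provides fields of available packages for the name you asked for.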
How does apt know if an obsolete package is replaced by another one?
1,660,032,310,000
So lately I've been trying to find a way to make an apt or .deb package, to make it easier for people to install and use packages without using the command line. But to even start doing that, I need to know how to make a package!

Questions

How do you make a package?
Is there an IDE that makes it easier to make packages?
What programming languages do I have to learn to make this?
Are there any tutorials online?

Links I Found But Did Not Help Me

Websites
HowToPackageForDebian
Chapter 6. Building the package

Youtube Videos
How to Create .deb Packages for Debian, Ubuntu, and Linux Mint
How to build a simple Debian package (*.Deb)

Extra Info

OS: Chromebook (I'm using "Linux (Beta)", a virtual machine that runs Debian for Chromebooks)
OS Version: 86.0.4240.199 (Official Build) (64-bit)
Linux (Beta) Version: Debian GNU/Linux 10 (buster)
How do you make a package?

I have three solutions for you:

1. dh_make. This is the way most official Debian archives are put together. If you're having problems following along with the official guides, it's not because they aren't well written (they are); it's just a complicated process, and so I'm not sure that a video tutorial will help too much more. You really just need to put in the effort, and don't be dismayed if it takes a couple of weeks. dh_make generates a skeleton debian/* directory. Fill out your debian/copyright, debian/control, debian/rules, debian/{post|pre}{inst|rm}, etc. Then use dpkg-buildpackage to make the package. If you have questions about a specific error, we can help answer that, but I can't write a guide in this answer that would be any clearer than the official documentation.

2. dpkg-deb: This one is A LOT easier. After building your project just do make install DESTDIR=/tmp/path. Put a DEBIAN directory in that same path with the same DEBIAN/control, any maintainer scripts, copyright, etc. The main difference is that you don't need a rules file because the package is already built. Then run dpkg-deb -b.

3. cpack: If you already use cmake as a build system, then you just need to set a few CPACK_* variables and some CPACK_DEB_* variables; then after running cmake .. && cmake --build ., just run cpack.

Is there an IDE that makes it easier to make packages?

No. Your target platform (i.e. Debian) shouldn't dictate your IDE. Use the IDE that is most suited for your platform. Most of my packaging work is done in the terminal.

What programming languages do I have to learn to make this?

If you are going with solution 1 above, then you should know make so you can write a rules file.

Are there any tutorials online?

If you're using solution 1, then your best friend is the Debian New Maintainer's guide. You have a link to chapter 6, but I think chapters 2-5 are more fundamental to both solution 1 and solution 2.

If you still have problems packaging, ask a more specific question and specify:

- How you are packaging.
- What you are building and your build system (e.g. python library, java-maven, C library, C++ application, using cmake, autoconf, pybuild, etc.)
- What you are having problems with.
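The staged tree for the dpkg-deb route looks like this. A sketch with a made-up package name; the final dpkg-deb -b call is left commented out since it needs dpkg on the build machine:

```shell
# Stage the installed tree plus control metadata (hypothetical "hello-bin").
pkgroot=$(mktemp -d)
mkdir -p "$pkgroot/usr/bin" "$pkgroot/DEBIAN"

# Normally this file would come from: make install DESTDIR="$pkgroot"
printf '#!/bin/sh\necho hello\n' > "$pkgroot/usr/bin/hello"
chmod 755 "$pkgroot/usr/bin/hello"

cat > "$pkgroot/DEBIAN/control" <<'EOF'
Package: hello-bin
Version: 1.0-1
Architecture: amd64
Maintainer: Example Maintainer <maintainer@example.invalid>
Description: Toy package staged for dpkg-deb
EOF

ls -R "$pkgroot"
# On a machine with dpkg this would produce hello-bin_1.0-1_amd64.deb:
# dpkg-deb -b "$pkgroot" hello-bin_1.0-1_amd64.deb
```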
How do you make an apt or .deb package for Debian? [closed]
1,660,032,310,000
I am attempting to understand the use of a .srpm, aka "source rpm", Red Hat Package Manager package. From what I understand so far, a .srpm differs from a standard .rpm in that it provides the source code and is commonly used during development. I am puzzled as to why this is useful, because if I run rpm2cpio ./myrpm-1.1-1.x86_64.rpm | cpio -idmv, I can still view the source code. As I understand it, even standard RPMs pull down a .tar.gz/bz file which contains the source, and eventually build this. Could you please clarify how source RPMs are different and why I would want to use one?
I have never heard of an RPM that would pull down source code. Maybe there are some which do that, but I would say they are rare. srpm packages include source code and instructions to build rpm packages. rpm packages have compiled code, which you can run, and they do not include sources. There are exceptions: for instance, Perl, Python, PHP, etc. are not compiled to binary as C or C++ programs are. For these, rpm packages include source files, because they are interpreted or compiled just in time (JIT). So there are no compiled binaries, but sources are used directly.

When you want to build rpm packages, you would typically download an srpm and install it. Then you go to the rpmbuild/SPECS directory where the spec file is installed. Then you run:

rpmbuild -bb <package>.spec

That would build rpm(s). It is very common that one srpm's spec produces more than one rpm package:

<package>.rpm
<package>-devel.rpm
...

Built rpm packages are stored in rpmbuild/RPMS. Then you can install the rpm packages, which installs the actual program that you can run.

Unpacking files from an rpm package is very much different from installing an rpm package. With unpacking you get the package files. You may try to run a program from extracted package files, but there is a great chance that it will not work. Installing an rpm package means that rpm makes sure that all libraries needed for that program are installed, too. If there are any other actions needed before or right after installation, scripts in the rpm package will make sure they are executed. Some program might need a database that needs to be created on installation, another might need to rerun ldconfig to update the libraries cache, another might need to build manuals, etc.
How is a source RPM different from unpacking an RPM with rpm2cpio and cpio?
1,660,032,310,000
I've built an rpm package using rpmbuild, and the package has the following dependencies:

51f32ecb00b7:/rpm # rpm -qpR pkg.rpm
libc.so.6()(64bit)
libc.so.6(GLIBC_2.14)(64bit)
libc.so.6(GLIBC_2.17)(64bit)
libc.so.6(GLIBC_2.2.5)(64bit)
libc.so.6(GLIBC_2.3)(64bit)
libc.so.6(GLIBC_2.5)(64bit)
libc.so.6(GLIBC_2.7)(64bit)
libcrypto.so.1.0.0()(64bit)
libcurl.so.4()(64bit)
libdl.so.2()(64bit)
libdl.so.2(GLIBC_2.2.5)(64bit)
libjson-c.so.2()(64bit)
libpthread.so.0()(64bit)
libpthread.so.0(GLIBC_2.2.5)(64bit)
libpthread.so.0(GLIBC_2.3.2)(64bit)
libssl.so.1.0.0()(64bit)
rpmlib(CompressedFileNames) <= 3.0.4-1
rpmlib(FileDigests) <= 4.6.0-1
rpmlib(PayloadFilesHavePrefix) <= 4.0-1
rpmlib(PayloadIsXz) <= 5.2-1

I'm trying to install it on a machine running openSUSE. However, I'm getting the following dependency error:

51f32ecb00b7:/rpm # rpm -U pkg.rpm
error: Failed dependencies:
        libcrypto.so.1.0.0()(64bit) is needed by pkg.noarch
        libjson-c.so.2()(64bit) is needed by pkg.noarch
        libssl.so.1.0.0()(64bit) is needed by pkg.noarch

My question is regarding the libssl dependency. I have the following shared objects existing in my system:

51f32ecb00b7:/rpm # find / | grep libssl*.so
/lib64/libss.so.2
/lib64/libss.so.2.0
/usr/lib64/libss.so.2
/usr/lib64/libss.so.2.0
/usr/lib64/libssl.so.1.1

My question is: why is RPM giving me the error if I have libssl.so.1.1 installed? My RPM package depends on libssl.so.1.0.0, so shouldn't it be compatible? As far as I know, the first number defines the ABI compatibility between the shared objects, so 1.1 should work fine with a 1.0 dependency.

Finally, if I run:

51f32ecb00b7:/rpm # zypper install libopenssl1_0_0
51f32ecb00b7:/rpm # find / | grep libssl*.so
/lib64/libss.so.2
/lib64/libss.so.2.0
/usr/lib64/libss.so.2
/usr/lib64/libss.so.2.0
/usr/lib64/libssl.so.1.1
/usr/lib64/libssl.so.1.0.0

Now, having /usr/lib64/libssl.so.1.0.0, it works:

51f32ecb00b7:/rpm # rpm -U pkg.rpm
error: Failed dependencies:
        libjson-c.so.2()(64bit) is needed by pkg.noarch
My question is: Why RPM is giving me the error if I have libssl.so.1.1 installed?

Your RPM requires libssl.so.1.0.0, not libssl.so.1.1 (see below).

My RPM package depends on libssl.so.1.0.0, so shouldn't it be compatible? As far as I know, the first number defines the ABI compatibility between the shared objects, so 1.1 should work fine with 1.0 dependency (please correct me if I'm wrong).

Many projects use the major number as a backward-compatibility indicator, but it's not a requirement. What is required is that different versions of libraries with the same soname be compatible. That's why RPM requirements are expressed as sonames and library symbols (libc.so.6, GLIBC_2.14): the soname indicates which library is required, and the symbols indicate which version of the library is needed (or later). The guarantee (from the library developers) works thus: any program compiled with a given version of a library can be used with that version or any later one, as long as the soname stays the same; and any program compiled against a given set of versioned library symbols can be used with any version of the library, with the same soname, which also provides all those versioned library symbols.

libssl 1.1 isn't backwards compatible with 1.0; in fact the migration is rather difficult. A program compiled against libssl 1.0 can't be used with libssl 1.1. To mark this fact, the libraries have different sonames.
Why RPM doesn't accept my shared object as dependency?
1,660,032,310,000
I have a script built just as a way to learn bash, and it uses jq for JSON parsing. Suppose someone else downloads it and runs the file: will bash automatically prompt the user to install jq, or should I include something in the script to install it? Yes, I understand that the terminal will probably throw jq: command not found, but is there a way to handle it more gracefully? Or is this how it's usually handled? How is something like "do you want to install the package jq (Y/N)?" achieved?
You should leave it. Typically, you would only install dependencies when creating a package for a specific package manager, not as part of a program or script. There are so many different package managers, each with their own way of handling dependencies, and you want to let people choose which one to install with. That way they can be consistent. Otherwise, they could end up with problems like duplicate packages and incompatible versions of libraries. Also, your script won't know how to install dependencies on all systems, even if you compile from source (some machines don't have a compiler). You should list them in your documentation, if you have any (README file, comments, etc.)
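If you still want the failure to be friendlier than a raw command not found, a common compromise is to check for the tool up front and tell the user what to install, without installing anything yourself. A sketch — the apt hint in the message is just an example:

```shell
# check_deps: verify each named tool is on PATH; print a helpful hint
# and return 127 (the shell's "command not found" status) if one is missing.
check_deps() {
    for tool in "$@"; do
        if ! command -v "$tool" >/dev/null 2>&1; then
            echo "error: '$tool' is required but not installed (e.g. 'apt install $tool')" >&2
            return 127
        fi
    done
}

check_deps sh && echo "dependencies OK"
check_deps no-such-tool-xyz || echo "missing dependency detected"
```

Calling check_deps jq at the top of your script gives the user a clear message while leaving the choice of package manager to them.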
Should I include code to install the packages that my script requires?
1,660,032,310,000
So, I am somewhat puzzled by the rpmbuild process. I'm now maintaining a slew of scripts previously created, and while most work, there are enough differences between them that finding a consistent approach is just not happening. Some individually copy files (very tedious) to a temp location prior to packaging. Some use the original author's spec file; we're just modifying configs or code. Some are home-rolled, but were apparently created with the same level of understanding as I!

Specifically, I would love to just have the make; make install approach, but, while make builds the software just fine, make install actually installs it on my system. What I would like to do is use make install, but have the result placed in a working directory for the purposes of the packager. I want the software to install on the target machine in /usr/bin etc., but when I run make install, I want it to go to /tempDir/usr/bin -- make sense?

Basically, I just want to avoid polluting my system with software I'm packaging; it doesn't seem right that it all gets plugged in. Must be something misconfigured, or is this normal?

Excerpts from the spec file I'm working with. Copying the source file to /usr/src/redhat/SOURCES and building with rpmbuild -bb <specfilename>:

BuildRoot: /var/tmp/%{name}-%{version}-root
Source0: %{name}-%{version}.tar.gz

%prep
%setup -q

%build
./configure <config opts>
make

%install
rm -rf $RPM_BUILD_ROOT
make install
The files that are to be packaged need to be installed/isolated into a shadow tree. This is usually done by overriding DESTDIR, like make DESTDIR=%{buildroot} install in the %install section.
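The effect of DESTDIR can be sketched with plain install(1) — the same thing a well-behaved Makefile's install target does, with every destination path prefixed by $(DESTDIR):

```shell
# Simulate "make DESTDIR=$buildroot install": files land under the
# shadow tree instead of the live system.
buildroot=$(mktemp -d)

printf 'fake binary\n' > myprog

# install -D creates the leading directories, exactly what a
# DESTDIR-aware install target does for each file it installs.
install -D -m 755 myprog "$buildroot/usr/bin/myprog"

find "$buildroot" -type f
```

rpmbuild then packages everything under the buildroot as if it were rooted at /, so /usr/bin/myprog ends up in /usr/bin on the target machine.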
RPM %install section
1,660,032,310,000
I know how to extract a package, make some minor changes, and repackage it. I wanted to know if there was anything special you'd have to do if you were to repackage a deb package so it was compatible with an older OS version. For example, I want to upgrade xyz package on Ubuntu 12 but xyz package is only available on Ubuntu 14. Can I just modify the control file to change Utopic Unicorn to Precise Pangolin?
You'll get unmet dependencies if you want to take a package from a newer system and install it on an older one. You can check the dependencies in the Depends: section of the control file. You can try to resolve the dependencies by upgrading them, but by doing so you can break other applications which depend on the old versions. If you manage to get them, you can just rebuild the package, or try to install the package from source.
How do you repackage a deb package for an older OS?
1,660,032,310,000
I have CentOS and Red Hat Enterprise Linux running, where I now have a working Google App Engine setup and related Python web applications. Every week, or after any Google App Engine release, my working setup stops working (right now it's working, but after one week it won't work again; it's a very weird problem without any answer from the Google App Engine team members either). For the time being the only solution for this is to roll back the whole CentOS/RHEL to the configuration of one week back, including any kernel changes or anything related. How can I tell CentOS/RHEL to go back to an installation point of one week back, or to any restore point, so that it can revert to the old setup when it was known to be working?
You can try enabling yum's rollback feature as follows:

vi /etc/yum.conf

Add this line to the file:

tsflags=repackage

vi /etc/rpm/macros (create if non-existent)

Add this line to the file:

%_repackage_all_erasures 1

Now you can use rpm to roll back to different restore points:

$ rpm -Uvh --rollback '21:00'
$ rpm -Uvh --rollback '3 hours ago'
$ rpm -Uvh --rollback 'august 13'
$ rpm -Uvh --rollback 'yesterday'

All repackaged software is available here: /var/spool/repackage.

NOTE: You can only roll back from the point at which you enabled the above; you can't roll back prior to this!

References

Linux : How to rollback Yum updates on RHEL/CentOS
CentOS - acts abnormal every week and causes my whole setup to be down, how can I rollback / restore back to one week?
1,660,032,310,000
If I'm writing a Debian package maintainer script (such as a pre-install script) for a package I create, how can I make the script determine if it is supposed to be running in non-interactive mode (e.g. if apt-get install was invoked with -y, and things like that)?
If your maintainer scripts need to interact with the user running the installation, the recommended way to proceed is to use debconf; see Conditional file and directory installation in Debian Packages for pointers. This might seem complicated but it does bring a number of benefits — not only does debconf handle non-interactive setups (with an explicit DEBIAN_FRONTEND=noninteractive invocation, or because there is no way to interact with a user), it also supports various frontends, and settings managed by debconf can be set ahead of installation (using “pre-seeding”). This might not be relevant in your case but debconf also supports prompts in various languages. Note that apt-get flags are separate from maintainer script interactivity; see Is DEBIAN_FRONTEND=noninteractive redundant with apt-get -yqq?
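As a rough illustration of the distinction: apt's -y never reaches maintainer scripts, but the DEBIAN_FRONTEND environment variable does — it's what debconf consults to pick a frontend, and scripts that bypass debconf sometimes fall back to a check like this (a sketch, not a substitute for using debconf properly):

```shell
check_frontend() {
    # Non-interactive if DEBIAN_FRONTEND says so, or if stdin is not a
    # terminal (no way to interact with a user at all).
    if [ "${DEBIAN_FRONTEND:-}" = "noninteractive" ] || ! [ -t 0 ]; then
        echo noninteractive
    else
        echo interactive
    fi
}

DEBIAN_FRONTEND=noninteractive check_frontend
# → noninteractive
```

With debconf proper, none of this is needed in your own code: the noninteractive frontend simply answers every question with its default.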
Check for non-interactive mode in Debian package maintainer scripts
1,660,032,310,000
I'm making a file manager system for managing my projects. My package is named filesystem. I need a config file that stores the path to the root of my file system. I think I need to create a file /etc/filesystem/root.conf and add it to DEBIAN/conffiles. If the file doesn't exist I want to prompt the installer for a root directory. Else I want to keep the root directory the same (for example when upgrading). I think the prompt should be in postinst.

Should I add the file to conffiles?
Should I add the prompt in postinst?
How do I add an option to apt install that forces a prompt? (I have read about --force-confnew)
I need the file to be deleted on apt purge.
To prompt the user for a root directory, you should use debconf. This will allow you to prompt the user, while still allowing pre-configuration, and explicit re-configuration using dpkg-reconfigure. See Configuration management with debconf and man debconf-devel for details, in particular Config file handling in the latter. Purging a configuration file handled using debconf should be done in postrm; see the relevant chapter in Debian Policy. Files which are manipulated in maintainer scripts as a result of debconf prompts or stored values shouldn’t be listed in conffiles, otherwise your users will end up with confusing questions during upgrades.
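The purge-time cleanup is conventionally a case statement in postrm keyed on the argument dpkg passes in. A minimal sketch — parameterized on a temporary $root here so it doesn't touch the real /etc; a real postrm would use the absolute path and also source debconf's confmodule to call db_purge:

```shell
# Simulated environment: the config file the package generated.
root=$(mktemp -d)
mkdir -p "$root/etc/filesystem"
echo "/srv/projects" > "$root/etc/filesystem/root.conf"

# postrm-style handler: dpkg invokes postrm with "remove", "purge",
# "upgrade", etc. as its first argument.
postrm() {
    case "$1" in
        purge)
            rm -f "$root/etc/filesystem/root.conf"
            ;;
    esac
}

postrm remove
ls "$root/etc/filesystem"   # root.conf survives a plain remove
postrm purge
ls "$root/etc/filesystem"   # now empty
```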
Common practice for config files
1,660,032,310,000
As I understand it, debian packages should be named NAME_VERSION_ARCHITECTURE. But when I use git as version control and I rename the folder, it treats all files as deleted and then new. Is there a workaround to the NAME_VERSION_ARCHITECTURE name?
The “name_version_arch” scheme applies to binary packages, not source packages or VCS repositories. You can name your git repository whatever you like, although it’s common practice to name it after the source package name. See for example one of my source packages. The git repository shouldn’t contain the top-level directory named after the package; it contains the package directly.
How should versioning of debian packages work with git?
1,660,032,310,000
I can't seem to work out how to build GNU Hello on Ubuntu 22.04. To reproduce, start a Docker container using docker run --interactive --rm --tty ubuntu:22.04, then run the following:

apt-get update
apt-get install -y debhelper-compat dpkg-dev wget
cd "$(mktemp --directory)"
wget http://archive.ubuntu.com/ubuntu/pool/main/h/hello/hello_2.10.orig.tar.gz http://archive.ubuntu.com/ubuntu/pool/main/h/hello/hello_2.10-2ubuntu4.dsc http://archive.ubuntu.com/ubuntu/pool/main/h/hello/hello_2.10-2ubuntu4.debian.tar.xz
tar -xf hello_2.10-2ubuntu4.debian.tar.xz
mkdir hello_2.10-2ubuntu4
mv debian hello_2.10-2ubuntu4
cd hello_2.10-2ubuntu4
dpkg-buildpackage

At this point I get this error message:

cp: cannot stat 'NEWS': No such file or directory

followed by

dh_installdocs: error: cp --reflink=auto -a NEWS debian/hello/usr/share/doc/hello returned exit code 1

What am I doing wrong? Where is the NEWS file supposed to be? The build is aware of the upstream tarball ("dpkg-source: info: building hello using existing ./hello_2.10.orig.tar.gz"); do I need to manually unpack that?
Yes, you need to extract the main tarball too:

tar xf hello_2.10.orig.tar.gz
cd hello*/
tar xf ../hello_2.10-2ubuntu4.debian.tar.xz
dpkg-buildpackage

apt-get source hello will take care of downloading and extracting the source package for you (if the source repositories are configured), and apt-get build-dep will take care of the build dependencies:

apt-get update
apt-get source hello
apt-get build-dep hello
cd hello*/
dpkg-buildpackage
How to build GNU Hello .deb?
1,660,032,310,000
I'm often in the situation where I backport some fedora-packaged software to CentOS, or forward-port something from an older version of EPEL to Fedora, or vice versa. How can I do that with the least amount of effort?
In essence, that's what distgit is for: keeping package specs coordinated and available. So, let's use it. We'll need to set up our system to be able to build packages:

# only on CentOS and other RHEL derivatives:
sudo dnf install 'dnf-command(config-manager)'
sudo dnf config-manager --set-enabled powertools
sudo dnf install --refresh epel-release
# all:
sudo dnf install --refresh fedpkg 'dnf-command(builddep)' git

First, find your package on https://src.fedoraproject.org/browse/projects/ . I'm using guile22 as an example.

Great, at this point we have all we need to build packages! Let's go to the guile22 distgit site as we found with the search, and click on the "clone" button; we're presented with the URL https://src.fedoraproject.org/rpms/guile22.git.

pkgname=guile22

# clone the distgit
git clone "https://src.fedoraproject.org/rpms/${pkgname}.git"
cd "${pkgname}"

## Ask dnf to install build dependencies
### if this fails, read diligently – packages might have changed names and you might
### need to slightly edit the .spec file, or you might need to build the missing
### dependencies yourself, just like you're building this package, before you can
### continue.
sudo dnf builddep "${pkgname}.spec"

## build
### `fedpkg local` executes the build "unisolatedly" on this very distro.
### Instead of that, you could also omit the `dnf builddep` step above and do a
### `fedpkg mockbuild`. This will take a bit longer, as it needs to set up a clean
### build chroot. Often it's worth it.
fedpkg local

## install
### `fedpkg local` put the RPMs into $arch subdirs, so on my machine those are `x86_64`
### and `noarch`, but if you build for e.g. ppc64, that might be different.
sudo rpm -i x86_64/${pkgname}-*.rpm

And that's it!
I need a package for my version of RHEL/EPEL/CentOS/Fedora, but it's only packaged for other versions of Redhatoids
1,660,032,310,000
I need to get the list of source packages that don't have binary packages on Debian. Listing all source packages may be an answer, since we can then get the diff between the available binary and the available source packages.
As far as I’m aware, all source packages in Debian must produce at least one binary package on at least one architecture. To count the number of binary packages produced by the source packages available in the system’s configured source repositories (deb-src lines), run

awk '/Package:/{p=$2;b=0} /Binary:/{b=NF - 1} /^$/{printf "%s: %d\n", p, b} END{printf "%s: %d\n", p, b}' /var/lib/apt/lists/*Sources

This fails to find any source package with no binary packages in the current stable, testing, unstable and experimental repositories. If you want to determine which source packages don’t produce any binaries on a given architecture, you can proceed as follows.

List the unique source package names globally:

awk '/Package:/{print $2}' /var/lib/apt/lists/*_Sources | sort -u > source-packages

List the source packages used to produce a given architecture’s binaries (excluding all, which is included in the arch-specific indexes):

awk '/(Package|Source):/{source=$2}/Version:/{print source}' /var/lib/apt/lists/*-amd64_Packages | sort -u > amd64-packages

List entries present in the list of global source packages but not in those used for amd64:

comm -23 source-packages amd64-packages
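To see what the first awk program does, you can feed it a tiny hand-made Sources fragment (two fabricated stanzas; real Sources files have many more fields):

```shell
# Two fabricated Sources stanzas, separated by a blank line as in the
# real index files (no trailing blank line, so END prints the last one).
cat > Sources.sample <<'EOF'
Package: foo
Binary: foo, libfoo1
Version: 1.0-1

Package: bar
Binary: bar
Version: 2.0-1
EOF

# Same program as above: remember the source name, count the entries on
# the Binary: line, and print "name: count" at each stanza boundary.
awk '/Package:/{p=$2;b=0} /Binary:/{b=NF - 1} /^$/{printf "%s: %d\n", p, b} END{printf "%s: %d\n", p, b}' Sources.sample
# → foo: 2
# → bar: 1
```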
List all available source packages that don't have binary packages
1,299,665,584,000
I am always very hesitant to run kill -9, but I see other admins do it almost routinely. I figure there is probably a sensible middle ground, so:

When and why should kill -9 be used? When and why not?
What should be tried before doing it?
What kind of debugging of a "hung" process could cause further problems?
Generally, you should use kill (short for kill -s TERM, or on most systems kill -15) before kill -9 (kill -s KILL) to give the target process a chance to clean up after itself. (Processes can't catch or ignore SIGKILL, but they can and often do catch SIGTERM.) If you don't give the process a chance to finish what it's doing and clean up, it may leave corrupted files (or other state) around that it won't be able to understand once restarted. strace/truss, ltrace and gdb are generally good ideas for looking at why a stuck process is stuck. (truss -u on Solaris is particularly helpful; I find ltrace too often presents arguments to library calls in an unusable format.) Solaris also has useful /proc-based tools, some of which have been ported to Linux. (pstack is often helpful).
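The escalation can be watched from a shell: a process killed by SIGTERM exits with status 128+15, one killed by SIGKILL with 128+9:

```shell
# SIGTERM (plain kill's default): the process gets a chance to die cleanly.
sleep 60 & pid=$!
kill "$pid"
wait "$pid" || status=$?
echo "after TERM: $status"    # 143 = 128 + 15

# SIGKILL: the kernel just removes the process; nothing can be caught.
sleep 60 & pid=$!
kill -9 "$pid"
wait "$pid" || status=$?
echo "after KILL: $status"    # 137 = 128 + 9
```

A supervising script can use this 128+signal convention to tell a clean exit from a forced one before deciding whether to escalate.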
When should I not kill -9 a process?
1,299,665,584,000
It often baffles me that, although I have been working professionally with computers for several decades and Linux for a decade, I actually treat most of the OS' functionality as a black box, not unlike magic. Today I thought about the kill command, and while I use it multiple times per day (both in its "normal" and -9 flavor) I must admit that I have absolutely no idea how it works behind the scenes. From my viewpoint, if a running process is "hung", I call kill on its PID, and then it suddenly isn't running anymore. Magic! What really happens there? Manpages talk about "signals" but surely that's just an abstraction. Sending kill -9 to a process doesn't require the process' cooperation (like handling a signal), it just kills it off. How does Linux stop the process from continuing to take up CPU time? Is it removed from scheduling? Does it disconnect the process from its open file handles? How is the process' virtual memory released? Is there something like a global table in memory, where Linux keeps references to all resources taken up by a process, and when I "kill" a process, Linux simply goes through that table and frees the resources one by one? I'd really like to know all that!
Sending kill -9 to a process doesn't require the process' cooperation (like handling a signal), it just kills it off. You're presuming that because some signals can be caught and ignored they all involve cooperation. But as per man 2 signal, "the signals SIGKILL and SIGSTOP cannot be caught or ignored". SIGTERM can be caught, which is why plain kill is not always effective – generally this means something in the process's handler has gone awry.1 If a process doesn't (or can't) define a handler for a given signal, the kernel performs a default action. In the case of SIGTERM and SIGKILL, this is to terminate the process (unless its PID is 1; the kernel will not terminate init)2 meaning its file handles are closed, its memory returned to the system pool, its parent receives SIGCHILD, its orphan children are inherited by init, etc., just as if it had called exit (see man 2 exit). The process no longer exists – unless it ends up as a zombie, in which case it is still listed in the kernel's process table with some information; that happens when its parent does not wait and deal with this information properly. However, zombie processes no longer have any memory allocated to them and hence cannot continue to execute. Is there something like a global table in memory where Linux keeps references to all resources taken up by a process and when I "kill" a process Linux simply goes through that table and frees the resources one by one? I think that's accurate enough. Physical memory is tracked by page (one page usually equalling a 4 KB chunk) and those pages are taken from and returned to a global pool. It's a little more complicated in that some freed pages are cached in case the data they contain is required again (that is, data which was read from a still existing file). Manpages talk about "signals" but surely that's just an abstraction. Sure, all signals are an abstraction. They're conceptual, just like "processes". 
I'm playing semantics a bit, but if you mean SIGKILL is qualitatively different than SIGTERM, then yes and no. Yes in the sense that it can't be caught, but no in the sense that they are both signals. By analogy, an apple is not an orange but apples and oranges are, according to a preconceived definition, both fruit. SIGKILL seems more abstract since you can't catch it, but it is still a signal. Here's an example of SIGTERM handling; I'm sure you've seen these before:

#include <stdio.h>
#include <signal.h>
#include <unistd.h>
#include <string.h>

void sighandler (int signum, siginfo_t *info, void *context)
{
    fprintf (
        stderr,
        "Received %d from pid %u, uid %u.\n",
        info->si_signo,
        info->si_pid,
        info->si_uid
    );
}

int main (void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_sigaction = sighandler;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGTERM, &sa, NULL);
    while (1) sleep(10);
    return 0;
}

This process will just sleep forever. You can run it in a terminal and send it SIGTERM with kill. It spits out stuff like:

Received 15 from pid 25331, uid 1066.

1066 is my UID. The PID will be that of the shell from which kill is executed, or the PID of kill if you fork it (kill 25309 & echo $?). Again, there's no point in setting a handler for SIGKILL because it won't work.3 If I kill -9 25309 the process will terminate. But that's still a signal; the kernel has the information about who sent the signal, what kind of signal it is, etc.

1. If you haven't looked at the list of possible signals, see kill -l.

2. Another exception, as Tim Post mentions below, applies to processes in uninterruptible sleep. These can't be woken up until the underlying issue is resolved, and so have ALL signals (including SIGKILL) deferred for the duration. A process can't create that situation on purpose, however.

3. This doesn't mean using kill -9 is a better thing to do in practice. My example handler is a bad one in the sense that it doesn't lead to exit().
The real purpose of a SIGTERM handler is to give the process a chance to do things like clean up temporary files, then exit voluntarily. If you use kill -9, it doesn't get this chance, so only do that if the "exit voluntarily" part seems to have failed.
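As a small demonstration of the kernel performing the default action (using sleep as a stand-in for an arbitrary process; nothing here is specific to it), the shell reports a signal death as 128 plus the signal number:

```shell
# Start a process, terminate it with SIGTERM (signal 15), and read the
# exit status the parent shell collects: 128 + 15 = 143.
sleep 100 &
pid=$!
kill -TERM "$pid"
wait "$pid"
echo "exit status: $?"   # prints: exit status: 143
```

The process never got to run any code in response: sleep installs no SIGTERM handler, so the kernel terminated it directly.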
How does Linux "kill" a process?
1,299,665,584,000
I know that pkill has more filtering rules than killall. My question is: what is the difference between

pkill [signal] name

and

killall [signal] name

I've read that killall is more thorough and kills all processes (and, recursively, their subprocesses) that match the program name. Doesn't pkill do this too?
The pgrep and pkill utilities were introduced in Sun's Solaris 7 and, as g33klord noted, they take a pattern as argument which is matched against the names of running processes. While pgrep merely prints a list of matching processes, pkill will send the specified signal (or SIGTERM by default) to the processes. The common options and semantics between pgrep and pkill come in handy when you want to be careful and first review the list of matching processes with pgrep, then proceed to kill them with pkill. pgrep and pkill are provided by the procps package, which also provides other /proc file system utilities, such as ps, top, free and uptime among others.

The killall command is provided by the psmisc package, and differs from pkill in that, by default, it matches the argument name exactly (up to the first 15 characters) when determining the processes signals will be sent to. The -e, --exact option can be specified to also require exact matches for names longer than 15 characters. This makes killall somewhat safer to use compared to pkill. If the specified argument contains slash (/) characters, the argument is interpreted as a file name and processes running that particular file will be selected as signal recipients. killall also supports regular expression matching of process names, via the -r, --regexp option.

There are other differences as well. The killall command for instance has options for matching processes by age (-o, --older-than and -y, --younger-than), while pkill can be told to only kill processes on a specific terminal (via the -t option). Clearly then, the two commands have specific niches.

Note that the killall command on systems descended from Unix System V (notably Sun's Solaris, IBM's AIX and HP's HP-UX) kills all processes killable by a particular user, effectively shutting down the system if run by root.
The Linux psmisc utilities have been ported to BSD (and in extension Mac OS X), hence killall there follows the "kill processes by name" semantics.
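A minimal sketch of that careful two-step workflow (the sleep command and its distinctive argument are just stand-ins for a real target):

```shell
# Start a process with a distinctive command line, review the matches,
# then kill them with the exact same pattern.
sleep 12345 &
pgrep -a -f 'sleep 12345'   # -a lists PID plus full command line for review
pkill -f 'sleep 12345'      # same pattern; sends SIGTERM by default
```

Because pgrep and pkill share their option syntax, the reviewed list and the killed list are guaranteed to be selected the same way.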
What's the difference between pkill and killall?
1,299,665,584,000
Is there any package which shows PID of a window by clicking on it?
Yes. Try xprop; you are looking for the value of _NET_WM_PID:

xprop _NET_WM_PID | cut -d' ' -f3
{click on window}
Getting a window's PID by clicking on it
1,299,665,584,000
If you check the STAT column in the output of ps above, you will see Ss, S, S<, SN and R+. Do these indicate process states? If yes, then what is the significance of 'Ss, S<, SN and R+'?
These are indeed the process states. Process states that ps indicates are:

D    Uninterruptible sleep (usually IO)
R    Running or runnable (on run queue)
S    Interruptible sleep (waiting for an event to complete)
T    Stopped, either by a job control signal or because it is being traced.
W    paging (not valid since the 2.6.xx kernel)
X    dead (should never be seen)
Z    Defunct ("zombie") process, terminated but not reaped by its parent.

and the additional characters are:

<    high-priority (not nice to other users)
N    low-priority (nice to other users)
L    has pages locked into memory (for real-time and custom IO)
s    is a session leader
l    is multi-threaded (using CLONE_THREAD, like NPTL pthreads do)
+    is in the foreground process group

You could also find all of this by looking in the man page for ps, specifically the PROCESS STATE CODES section.
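The codes are easy to observe directly; for instance (a background sleep is used here as a throwaway process, and the extra characters may differ slightly on your system):

```shell
sleep 100 &                  # a background process: interruptible sleep
ps -o stat= -o comm= -p $!   # typically prints: S sleep
kill -STOP $!
ps -o stat= -o comm= -p $!   # now the state is:  T sleep
kill -KILL $!                # clean up
```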
What does this process STAT indicate?
1,299,665,584,000
The UNIX system call for process creation, fork(), creates a child process by copying the parent process. My understanding is that this is almost always followed by a call to exec() to replace the child process' memory space (including text segment). Copying the parent's memory space in fork() always seemed wasteful to me (although I realize the waste can be minimized by making the memory segments copy-on-write so only pointers are copied). Anyway, does anyone know why this duplication approach is required for process creation?
It's to simplify the interface. The alternative to fork and exec would be something like Windows' CreateProcess function. Notice how many parameters CreateProcess has, and many of them are structs with even more parameters. This is because everything you might want to control about the new process has to be passed to CreateProcess. In fact, CreateProcess doesn't have enough parameters, so Microsoft had to add CreateProcessAsUser and CreateProcessWithLogonW. With the fork/exec model, you don't need all those parameters. Instead, certain attributes of the process are preserved across exec. This allows you to fork, then change whatever process attributes you want (using the same functions you'd use normally), and then exec. In Linux, fork has no parameters, and execve has only 3: the program to run, the command line to give it, and its environment. (There are other exec functions, but they're just wrappers around execve provided by the C library to simplify common use cases.) If you want to start a process with a different current directory: fork, chdir, exec. If you want to redirect stdin/stdout: fork, close/open files, exec. If you want to switch users: fork, setuid, exec. All these things can be combined as needed. If somebody comes up with a new kind of process attribute, you don't have to change fork and exec. As larsks mentioned, most modern Unixes use copy-on-write, so fork doesn't involve significant overhead.
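The shell itself is the everyday user of this model: a subshell is a fork, anything done before exec adjusts the child's attributes, and exec then replaces the child's image, inheriting those attributes. A tiny illustration:

```shell
# The parentheses fork a subshell; cd changes the child's attribute
# (its working directory); exec replaces the child's image with pwd.
( cd /tmp && exec pwd )   # prints the child's working directory: /tmp
pwd                       # the parent's working directory is unchanged
```

The same pattern extends to redirections, setuid, and so on: adjust in the child, then exec.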
Why is the default process creation mechanism fork?
1,299,665,584,000
For example, we'd like to see:

PROCESS  IF     TX       RX       FILE(regular)  R/W
prog1    eth0   200kB/s  12kB/s   --             --
         wlan0  12kB/s   100kB/s  --             --
         --     --       --       file1          R
         --     --       --       file2          R
         --     --       --       file3          W
prog2    eth0   0kB/s    200kB/s  --             --
         --     --       --       file4          W
         --     --       --       file5          W

Is this possible? nethogs only shows the TX/RX, while lsof only shows the file accesses. I'm currently doing a 2-step process like so:

sudo nethogs
sudo lsof -a -d 1-999 -c hogging_program /

Is there a better way?
As far as I know, no. What you're trying to accomplish is possible by combining multiple commands as you're currently doing, though I don't know of other apps that would provide you data easier to parse (ed: another answer suggested iftop, which I did not know added a pipe-able single line text output mode). With some clever shell scripting, piped data, and a bit of manual formatting, you could get at least close to the output you're looking for. Your search for something that shows both network and file statistics – which would be provided by two different parts of the operating system – seems to be up against some tenets of 'The UNIX Philosophy':

Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new features.
Expect the output of every program to become the input to another, as yet unknown, program. Don't clutter output with extraneous information.

This is particularly evident in programs that output text, like lsof. You don't usually see *NIX console programs providing a user interface as much as data to be piped into another program, or possibly a script utilizing shell commands like cut to create their own specifically tailored outputs. Doug McIlroy summarized his earlier statement years later:

Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface.

While it may not help you get the formatted output you're looking for, The Art of UNIX Programming is a good read, and where I found sources for those quotes.
Is there a top-like command that shows the network bandwidths and file accesses of running processes
1,299,665,584,000
At first, the question seems to be a little bit silly/confusing as the OS does the job of managing process execution. However, I want to measure how much some processes are CPU/IO-bound and I feel like my OS is interfering on my experiments with, for instance, scheduled OS processes. Take as an example the following situation: I ran the process A twice and got the following output from the tool "time" (time columns in seconds): +---+-------+---------+-----------+---------+ |Run|Process|User Time|System Time|Wall time| +---+-------+---------+-----------+---------+ |1 |A |196.3 |5.12 |148.86 | |2 |A |190.79 |4.93 |475.46 | +---+-------+---------+-----------+---------+ As we can see, although the user and sys time are similar, the elapsed time of both drastically changes (diff. of ~5 min). Feels like something in my environment caused some sort of contention. I want to stop every possible background process/services to avoid any kind of noise during my experiments but I consider myself a novice/intermediate unix-user and I don't know how to guarantee this. I'm using Linux 4.4.0-45-generic with Ubuntu 14.04 LTS 64 bit. I really appreciate the assistance. If you guys need any missing information, I will promptly edit my post. CPU Info $ grep proc /proc/cpuinfo | wc -l 8 $ lscpu Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 8 On-line CPU(s) list: 0-7 Thread(s) per core: 2 Core(s) per socket: 4 Socket(s): 1 NUMA node(s): 1 Vendor ID: GenuineIntel CPU family: 6 Model: 60 Stepping: 3 CPU MHz: 4002.609 BogoMIPS: 7183.60 Virtualization: VT-x L1d cache: 32K L1i cache: 32K L2 cache: 256K L3 cache: 8192K NUMA node0 CPU(s): 0-7
There is a kernel boot option that keeps CPUs away from the general scheduler; it is called isolcpus.

isolcpus — Isolate CPUs from the kernel scheduler.
Synopsis: isolcpus=cpu_number[,cpu_number,...]
Description: Remove the specified CPUs, as defined by the cpu_number values, from the general kernel SMP balancing and scheduler algorithms. The only way to move a process onto or off an "isolated" CPU is via the CPU affinity syscalls. cpu_number begins at 0, so the maximum value is 1 less than the number of CPUs on the system.

This configuration, which I am about to describe how to set up, has far more uses than testing. Meru, for instance, uses this technology in their Linux-based AP controllers to keep the network traffic from interfering with the inner workings of the OS, namely I/O operations. I also use it on a very busy web front end, for much the same reasons: experience taught me that I lost control of that server too regularly for my taste, and had to reboot it forcefully until I moved the front-end daemon onto its own dedicated CPUs.

You have 8 CPUs, as you can check with the output of either command:

$ grep -c proc /proc/cpuinfo
8

$ lscpu | grep '^CPU.s'
CPU(s):                8

On Debian/Ubuntu, add isolcpus=7 to the GRUB_CMDLINE_LINUX option in the file /etc/default/grub:

GRUB_CMDLINE_LINUX="isolcpus=7"

(it is 7 because numbering starts at 0 and you have 8 cores). Then run:

sudo update-grub

This tells the kernel not to use one of your cores. Reboot the system, then start your process. Immediately after starting it, you can move it to the 8th CPU (7, because 0 is the 1st) and be quite sure you are the only one using that CPU. For that, use the command:

taskset -cp 7 PID_number

taskset - retrieve or set a process's CPU affinity
SYNOPSIS: taskset [options] [mask | list] [pid | command [arg]...]
DESCRIPTION: taskset is used to set or retrieve the CPU affinity of a running process given its PID or to launch a new COMMAND with a given CPU affinity.
CPU affinity is a scheduler property that "bonds" a process to a given set of CPUs on the system. The Linux scheduler will honor the given CPU affinity and the process will not run on any other CPUs. Note that the Linux scheduler also supports natural CPU affinity: the scheduler attempts to keep processes on the same CPU as long as practical for performance reasons. Therefore, forcing a specific CPU affinity is useful only in certain applications. For reading more about it, see: isolcpus, numactl and taskset Also using ps -eF you should see in the PSR column the processor being used. I have a server with CPU 2 and 3 isolated, and indeed, it can be seen with ps -e the only process in userland as intended, is pound. # ps -eo psr,command | tr -s " " | grep "^ [2|3]" 2 [cpuhp/2] 2 [watchdog/2] 2 [migration/2] 2 [ksoftirqd/2] 2 [kworker/2:0] 2 [kworker/2:0H] 3 [cpuhp/3] 3 [watchdog/3] 3 [migration/3] 3 [ksoftirqd/3] 3 [kworker/3:0] 3 [kworker/3:0H] 2 [kworker/2:1] 3 [kworker/3:1] 3 [kworker/3:1H] 3 /usr/sbin/pound If you compare it with the non-isolated CPUs, they are running many more things (the window below slides): # ps -eo psr,command | tr -s " " | grep "^ [0|1]" 0 init [2] 0 [kthreadd] 0 [ksoftirqd/0] 0 [kworker/0:0H] 0 [rcu_sched] 0 [rcu_bh] 0 [migration/0] 0 [lru-add-drain] 0 [watchdog/0] 0 [cpuhp/0] 1 [cpuhp/1] 1 [watchdog/1] 1 [migration/1] 1 [ksoftirqd/1] 1 [kworker/1:0] 1 [kworker/1:0H] 1 [kdevtmpfs] 0 [netns] 0 [khungtaskd] 0 [oom_reaper] 1 [writeback] 0 [kcompactd0] 0 [ksmd] 1 [khugepaged] 0 [crypto] 1 [kintegrityd] 0 [bioset] 1 [kblockd] 1 [devfreq_wq] 0 [watchdogd] 0 [kswapd0] 0 [vmstat] 1 [kthrotld] 0 [kworker/0:1] 0 [deferwq] 0 [scsi_eh_0] 0 [scsi_tmf_0] 1 [vmw_pvscsi_wq_0] 0 [bioset] 1 [jbd2/sda1-8] 1 [ext4-rsv-conver] 0 [kworker/0:1H] 1 [kworker/1:1H] 1 [bioset] 0 [bioset] 1 [bioset] 1 [bioset] 1 [bioset] 1 [bioset] 1 [bioset] 1 [bioset] 0 [jbd2/sda3-8] 1 [ext4-rsv-conver] 1 /usr/sbin/rsyslogd 0 /usr/sbin/irqbalance --pid=/var/run/irqbalance.pid 1 
/usr/sbin/cron 0 /usr/sbin/sshd 1 /usr/sbin/snmpd -Lf /dev/null -u snmp -g snmp -I -smux -p /var/run/snmpd.pid 1 /sbin/getty 38400 tty1 1 /lib/systemd/systemd-udevd --daemon 0 /usr/sbin/xinetd -pidfile /run/xinetd.pid -stayalive 1 [kworker/1:2] 0 [kworker/u128:1] 0 [kworker/0:2] 0 [bioset] 1 [xfsalloc] 1 [xfs_mru_cache] 1 [jfsIO] 1 [jfsCommit] 0 [jfsCommit] 0 [jfsCommit] 0 [jfsCommit] 0 [jfsSync] 1 [bioset] 0 /usr/bin/monit -c /etc/monit/monitrc 1 /usr/sbin/pound 0 sshd: rui [priv] 0 sshd: rui@pts/0,pts/1 1 -bash 1 -bash 1 -bash 1 [kworker/u128:0] 1 -bash 0 sudo su 1 su 1 bash 0 bash 0 logger -t cmdline root[/home/rui] 1 ps -eo psr,command 0 tr -s 0 grep ^ [0|1] 0 /usr/bin/vmtoolsd
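Even without isolcpus, the affinity half of this can be tried safely on any process; isolcpus only adds the guarantee that the scheduler places nothing else on that CPU. A quick sketch (pinning a throwaway sleep instead of a benchmark):

```shell
sleep 100 &
taskset -cp "$!"      # show the current affinity list (e.g. 0-7)
taskset -cp 0 "$!"    # pin the process to CPU 0
taskset -cp "$!"      # the affinity list is now just: 0
kill "$!"
```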
How to ensure exclusive CPU availability for a running process?
1,299,665,584,000
When I used killall -9 name to kill a program, its state became zombie. Some minutes later, it really stopped. So, what was happening during those minutes?
The program actually never receives the SIGKILL signal, as SIGKILL is completely handled by the operating system/kernel. When SIGKILL for a specific process is sent, the kernel's scheduler immediately stops giving that process any more CPU time for running user-space code. If the process has any threads executing user-space code on other CPUs/cores at the time the scheduler makes this decision, those threads will be stopped too. (In single-core systems this used to be much simpler: if the only CPU core in the system was running the scheduler, it by definition wasn't running the process at the same time!) If the process/thread is executing kernel code (e.g. a system call, or an I/O operation associated with a memory-mapped file) at the time of SIGKILL, it gets a bit trickier: only some system calls are interruptible, so the kernel internally marks the process as being in a special "dying" state until the system calls or I/O operations are resolved. CPU time to resolve those will be scheduled as usual. Interruptible system calls or I/O operations will check if the process that called them is dying at any suitable stopping points, and will exit early in that case. Uninterruptible operations will run into completion, and will check for a "dying" state just before returning to user-space code. Once any in-process kernel routines are resolved, the process state is changed from "dying" to "dead" and the kernel begins cleaning it up, similar to when a program exits normally. Once the clean-up is complete, a greater-than-128 result code will be assigned (to indicate that the process was killed by a signal; see this answer for the messy details), and the process will transition into "zombie" state. The parent of the killed process will be notified with a SIGCHLD signal. As a result, the process itself will never get the chance to actually process the information that it has received a SIGKILL. 
When a process is in a "zombie" state it means the process is already dead, but its parent process has not yet acknowledged this by reading the exit code of the dead process using the wait(2) system call. Basically the only resource a zombie process is consuming any more is a slot in the process table that holds its PID, the exit code and some other "vital statistics" of the process at the time of its death. If the parent process dies before its children, the orphaned child processes are automatically adopted by PID #1, which has a special duty to keep calling wait(2) so that any orphaned processes won't stick around as zombies. If it takes several minutes for a zombie process to clear, it suggests that the parent process of the zombie is struggling or not doing its job properly. There is a tongue-in-cheek description on what to do in case of zombie problems in Unix-like operating systems: "You cannot do anything for the zombies themselves, as they are already dead. Instead, kill the evil zombie master!" (i.e. the parent process of the troublesome zombies)
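Since the process never runs again after SIGKILL, the only trace it leaves is the status its parent collects: 128 + 9 = 137. A small demonstration:

```shell
# The child sends SIGKILL to itself; it gets no chance to react,
# and the parent shell collects exit status 137.
bash -c 'kill -KILL $$'
echo "exit status: $?"   # prints: exit status: 137
```

Here the parent (your interactive shell) calls wait promptly, so the child spends essentially no time as a zombie.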
What does a program do when it's sent SIGKILL signal?
1,299,665,584,000
The word "subreaper" is used in some answers. Searching Google also turns up entries where the word is simply used without being defined. How can I understand what a "subreaper" is?
This was implemented to the Linux kernel 3.4 as a flag of the system call prctl(). From the prctl(2) manpage: [...] A subreaper fulfills the role of init(1) for its descendant processes. Upon termination of a process that is orphaned (i.e., its immediate parent has already terminated) and marked as having a subreaper, the nearest still living ancestor subreaper will receive a SIGCHLD signal and be able to wait(2) on the process to discover its termination status. A process can define itself as a subreaper with prctl(PR_SET_CHILD_SUBREAPER). If so, it's not init (PID 1) that will become the parent of orphaned child processes, instead the nearest living grandparent that is marked as a subreaper will become the new parent. If there is no living grandparent, init does. The reason to implement this mechanism was that userspace service managers/supervisors (like upstart, systemd) need to track their started services. Many services daemonize by double-forking and get implicitly re-parented to PID 1. The service manager will no longer be able to receive the SIGCHLD signals for them, and is no longer in charge of reaping the children with wait(). All information about the children is lost at the moment PID 1 cleans up the re-parented processes. Now, a service manager process can mark itself as a sort of "sub-init", and is now able to stay as the parent for all orphaned processes created by the started services. All SIGCHLD signals will be delivered to the service manager. In Linux, a daemon is typically created by forking twice with the intermediate process exiting after forking the grandchild. This is a common technique to avoid zombie processes. The init script calls a child. That child forks again and thus exits immediately. The grandchild will be adopted by init, which continuously calls wait() to collect the exit status of his children to avoid zombies. With the concept of subreapers, the userspace service manager now becomes the new parent, instead of init.
What is a "subreaper" process?
1,299,665,584,000
Since google chrome/chromium spawn multiple processes it's harder to see how much total memory these processes use in total. Is there an easy way to see how much total memory a series of connected processes is using?
Given that google killed chrome://memory in March 2016, I am now using smem:

# detailed output, in kB apparently
smem -t -P chrom

# just the total PSS, with automatic unit:
smem -t -k -c pss -P chrom | tail -n 1

to be more accurate, replace chrom by the full path, e.g. /opt/google/chrome or /usr/lib64/chromium-browser
this works the same for multiprocess firefox (e10s) with -P firefox
be careful, smem reports itself in the output, an additional ~10-20M on my system.
unlike top it needs root access to accurately monitor root processes -- use sudo smem for that.
see this SO answer for more details on why smem is a good tool and how to read the output.
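If smem isn't available, a rougher total can be had with plain ps and awk. Note this sums RSS, which double-counts pages shared between the processes, so it overstates the real footprint compared to smem's PSS:

```shell
# Sum the resident set size (in kB) of every process named "chrome".
ps -o rss= -C chrome | awk '{ sum += $1 } END { print sum " kB" }'
```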
Get chrome's total memory usage
1,299,665,584,000
I have to copy files on a machine, and the data is immensely large. The server needs to keep serving normally, and it usually has a particular range of busy hours. So is there a way to run such commands so that if the server hits its busy hours, the process pauses, and when it gets out of that range, it resumes?

Intended result:

cp src dst
if time between 9:00-14:00
    pause process
After 14:00 resume cp command.
Yes, you need to acquire the process id of the process to pause (via the ps command), then do:

$> kill -SIGSTOP <pid>

The process will then show up with status "T" (in ps). To continue, do a:

$> kill -CONT <pid>
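Put together, with cp standing in for the long copy (the paths are placeholders), and the scheduling of the two kills left to e.g. cron or at:

```shell
cp -r /path/to/src /path/to/dst &   # hypothetical long-running copy
pid=$!
kill -STOP "$pid"   # at 9:00: pause; ps now shows state "T"
kill -CONT "$pid"   # at 14:00: resume; the copy carries on where it left off
```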
Is there a way to pause a running process on Linux systems and resume later?
1,299,665,584,000
I have a general question, which might be a result of misunderstanding of how processes are handled in Linux. For my purposes I am going to define a 'script' as a snippet of bash code saved to a text file with execute permissions enabled for the current user. I have a series of scripts that call each other in tandem. For simplicity's sake I'll call them scripts A, B, and C. Script A carries out a series of statements and then pauses, it then executes script B, then it pauses, then it executes script C. In other words, the series of steps is something like this: Run Script A: Series of statements Pause Run Script B Pause Run Script C I know from experience that if I run script A until the first pause, then make edits in script B, those edits are reflected in the execution of the code when I allow it to resume. Likewise if I make edits to script C while script A is still paused, then allow it to continue after saving changes, those changes are reflected in the execution of the code. Here is the real question then, is there any way to edit Script A while it is still running? Or is editing impossible once its execution begins?
In Unix, most editors work by creating a new temporary file containing the edited contents. When the edited file is saved, the original file is deleted and the temporary file renamed to the original name. (There are, of course, various safeguards to prevent dataloss.) This is, for example, the style used by sed or perl when invoked with the -i ("in-place") flag, which is not really "in-place" at all. It should have been called "new place with old name". This works well because unix assures (at least for local filesystems) that an opened file continues to exist until it is closed, even if it is "deleted" and a new file with the same name is created. (It's not coincidental that the unix system call to "delete" a file is actually called "unlink".) So, generally speaking, if a shell interpreter has some source file open, and you "edit" the file in the manner described above, the shell won't even see the changes since it still has the original file open. [Note: as with all standards-based comments, the above is subject to multiple interpretations and there are various corner-cases, such as NFS. Pedants are welcome to fill the comments with exceptions.] It is, of course, possible to modify files directly; it's just not very convenient for editing purposes, because while you can overwrite data in a file, you cannot delete or insert without shifting all following data, which would imply quite a lot of rewriting. Furthermore, while you were doing that shifting, the contents of the file would be unpredictable and processes which had the file open would suffer. In order to get away with this (as with database systems, for example), you need a sophisticated set of modification protocols and distributed locks; stuff which is well beyond the scope of a typical file editing utility. So, if you want to edit a file while its being processed by a shell, you have two options: You can append to the file. This should always work. 
You can overwrite the file with new contents of exactly the same length. This may or may not work, depending on whether the shell has already read that part of the file or not. Since most file I/O involves read buffers, and since all the shells I know read an entire compound command before executing it, it is pretty unlikely that you can get away with this. It certainly wouldn't be reliable.

I don't know of any wording in the Posix standard which actually requires the possibility of appending to a script file while the file is being executed, so it might not work with every Posix compliant shell, much less with the current offering of almost- and sometimes-posix-compliant shells. So YMMV. But as far as I know, it does work reliably with bash.

As evidence, here's a "loop-free" implementation of the infamous 99 bottles of beer program in bash, which uses dd to overwrite and append (the overwriting is presumably safe because it substitutes the currently executing line, which is always the last line of the file, with a comment of exactly the same length; I did that so that the end result can be executed without the self-modifying behaviour.)

#!/bin/bash
if [[ $1 == reset ]]; then
  printf "%s\n%-16s#\n" '####' 'next ${1:-99}' |
    dd if=/dev/stdin of=$0 seek=$(grep -bom1 ^#### $0 | cut -f1 -d:) bs=1 2>/dev/null
  exit
fi

step() {
  s=s one=one
  case $beer in
    2) beer=1; unset s;;
    1) beer="No more"; one=it;;
    "No more") beer=99; return 1;;
    *) ((--beer));;
  esac
}

next() {
  step ${beer:=$(($1+1))}
  refrain |
    dd if=/dev/stdin of=$0 seek=$(grep -bom1 ^next\  $0 | cut -f1 -d:) bs=1 conv=notrunc 2>/dev/null
}

refrain() {
  printf "%-17s\n" "# $beer bottles"
  echo
  echo ${beer:-No more} bottle$s of beer on the wall, ${beer:-No more} bottle$s of beer.
  if step; then
    echo
    echo Take $one down, pass it around, $beer bottle$s of beer on the wall.
    echo
    echo
    echo next abcdefghijkl
  else
    echo
    echo Go to the store, buy some more, $beer bottle$s of beer on the wall.
  fi
}

####
next ${1:-99}   #
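Appending (option 1) is easy to see in action: the interpreter's file offset sits at the end of what it has read, so a later read() picks up the new bytes. A small sketch (the file name is arbitrary):

```shell
# Create a script that sleeps, start it, then append to it while it runs.
cat > /tmp/growing.sh <<'EOF'
#!/bin/bash
sleep 2
EOF
bash /tmp/growing.sh &                               # bash reaches the sleep
echo 'echo "appended while running"' >> /tmp/growing.sh
wait                                                 # prints: appended while running
```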
What happens if you edit a script during execution?
1,299,665,584,000
I'm looking for a way to limit a processes disk io to a set speed limit. Ideally the program would work similar to this: $ limitio --pid 32423 --write-limit 1M Limiting process 32423 to 1 megabyte per second hard drive writing speed.
That is certainly a non-trivial task, one that can't be done in userspace. Fortunately, it is possible on Linux, using the cgroup mechanism and its blkio controller. Setting up cgroups is somewhat distribution specific as they may already be mounted or even used somewhere. Here's the general idea, however (assuming you have a proper kernel configuration):

mount -t tmpfs cgroup_root /sys/fs/cgroup
mkdir -p /sys/fs/cgroup/blkio
mount -t cgroup -o blkio none /sys/fs/cgroup/blkio

Now that you have the blkio controller set up, you can use it:

mkdir -p /sys/fs/cgroup/blkio/limit1M/
echo "X:Y 1048576" > /sys/fs/cgroup/blkio/limit1M/blkio.throttle.write_bps_device

Now you have a cgroup limit1M that limits write speed on the device with major/minor numbers X:Y to 1MB/s. As you can see, this limit is per device. All you have to do now is to put some process inside of that group and it should be limited:

echo $PID > /sys/fs/cgroup/blkio/limit1M/tasks

I don't know if/how this can be done on other operating systems.
How to Throttle per process I/O to a max limit?
1,299,665,584,000
I have a script including multiple commands. How can I group commands to run together? (I want to make several groups of commands. Within each group, the commands should run in parallel (at the same time). The groups should run sequentially, waiting for one group to finish before starting the next group.) I.e.:

#!/bin/bash
command #1
command #2
command #3
command #4
command #5
command #6
command #7
command #8
command #9
command #10

How can I run every 3 commands together? I tried:

#!/bin/bash
{
command #1
command #2
command #3
} &
{
command #4
command #5
command #6
} &
{
command #7
command #8
command #9
} &
command #10

But this didn't work properly. (I want to run the groups of commands in parallel at the same time, and I need to wait for the first group to finish before running the next group.) The script exited with an error message!
The commands within each group run in parallel, and the groups run sequentially, each group of parallel commands waiting for the previous group to finish before starting execution.

The following is a working example. Assume 3 groups of commands as in the code below. In each group the three commands are started in the background with &. The 3 commands will be started almost at the same time and run in parallel while the script waits for them to finish. After all three commands in the third group exit, command 10 will execute.

$ cat command_groups.sh
#!/bin/sh

command() {
    echo $1 start
    sleep $(( $1 & 03 ))   # keep the seconds value within 0-3
    echo $1 complete
}

echo First Group:
command 1 &
command 2 &
command 3 &
wait

echo Second Group:
command 4 &
command 5 &
command 6 &
wait

echo Third Group:
command 7 &
command 8 &
command 9 &
wait

echo Not really a group, no need for background/wait:
command 10

$ sh command_groups.sh
First Group:
1 start
2 start
3 start
1 complete
2 complete
3 complete
Second Group:
4 start
5 start
6 start
4 complete
5 complete
6 complete
Third Group:
7 start
8 start
9 start
8 complete
9 complete
7 complete
Not really a group, no need for background/wait:
10 start
10 complete
$
Run commands in parallel and wait for one group of commands to finish before starting the next
1,299,665,584,000
I was slightly confused by:

% vim tmp
zsh: suspended  vim tmp
% kill %1
% jobs
[1]  + suspended  vim tmp
% kill -SIGINT %1
% jobs
[1]  + suspended  vim tmp
% kill -INT %1
% jobs
[1]  + suspended  vim tmp

So I resigned to just "do it myself" and wondered why later:

% fg
[1]  - continued  vim tmp
Vim: Caught deadly signal TERM
Vim: Finished.
zsh: terminated  vim tmp
%

Oh! Makes sense really, now that I think about it, that vim has to be running in order for its signal handler to be told to quit, and to do so. But obviously not what I intended. Is there a way to "wake and quit" in a single command? i.e., a built-in alias for kill %N && fg %N? Why does resuming in the background not work? If I bg instead of fg, Vim stays alive until I fg, which sort of breaks my above intuition.
vi-vi-vi is of the devil. You must kill it with fire. Or SIGKILL:

kill -KILL %1

The builtin kills are kind enough to send SIGCONT to suspended processes so that you don't have to do it yourself, but that won't help if the process blocks the signal you're sending or if handling the signal causes the process to become suspended again (if a background process tries to read from the terminal, by default, it'll be sent SIGTTIN, which suspends the process if unhandled).
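Unlike SIGTERM, SIGKILL never needs the target to run any of its own code, so it works even while the process is suspended, with no fg required (a stopped sleep stands in for the suspended vim here):

```shell
sleep 100 &
kill -STOP $!   # suspend it, as Ctrl-Z would
kill -KILL $!   # dies immediately, despite being stopped
wait $!
echo $?         # prints: 137  (128 + SIGKILL's number, 9)
```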
Kill a suspended process?
1,299,665,584,000
To find the PID of the process to kill, use:

pgrep <process command>

I then use the kill command to kill the PID returned by pgrep <process command>:

kill <PID>

Can these commands be combined into one, so I can kill the PID or PIDs returned by pgrep <process command>? Or is there a method to kill multiple processes by command name? Something like:

kill(pgrep <name of process>)
You can use pkill:

pkill httpd

You may also want to use command substitution (although this isn't as clear):

kill $(pgrep command)

And you may want to use xargs:

pgrep command | xargs kill
How to kill multiple processes
1,299,665,584,000
What is the difference between the ps and top commands? I see that both can display information about running processes. Which one should be used when?
top is mostly used interactively (try reading the man page or pressing "h" while top is running) and ps is designed for non-interactive use (scripts, extracting some information with shell pipelines, etc.)
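For scripts, ps can approximate top's sorted view in a single non-interactive shot, and most top builds also have a batch mode for the same purpose (the column choices below are just one possibility):

```shell
# one-shot snapshot, sorted by CPU usage like top's default view
ps -eo pid,pcpu,pmem,comm --sort=-pcpu | head -n 5

# top itself, non-interactively: one iteration in batch mode (if supported)
command -v top >/dev/null && top -b -n 1 2>/dev/null | head -n 12 || true
```

The --sort option is GNU procps ps; on BSD-style systems the equivalent is ps aux piped through sort.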
What is the difference between ps and top command?
1,299,665,584,000
I am evaluating the expression 6^6^6 using python and bc separately.

The content of the python file is print 6**6**6. When I execute time python test.py, I get the output as

real    0m0.067s
user    0m0.050s
sys     0m0.011s

And then, I ran the command time echo 6^6^6 | bc which gave me the following output

real    0m0.205s
user    0m0.197s
sys     0m0.005s

From these results it is clear that the sys time taken by python and bc was 11ms and 5ms respectively. The bc command outperformed python at sys time level but when it comes to user and real time python was almost 4 times faster than bc. What might have happened there? I haven't given any priority to the processes as such. I am trying to understand this situation.
Python imports a large number of files at startup:

% python -c 'import sys; print len(sys.modules)'
39

Each of these requires an even greater number of attempts at opening a Python file, because there are many ways to define a module:

% python -vv -c 'pass'
# installing zipimport hook
import zipimport # builtin
# installed zipimport hook
# trying site.so
# trying sitemodule.so
# trying site.py
# trying site.pyc
# trying /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site.so
# trying /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/sitemodule.so
# trying /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site.py
# /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site.pyc matches /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site.py
import site # precompiled from /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site.pyc
# trying os.so
# trying osmodule.so
# trying os.py
# trying os.pyc
# trying /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/os.so
# trying /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/osmodule.so
# trying /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/os.py
# /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/os.pyc matches /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/os.py
import os # precompiled from /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/os.pyc
...

Each "trying", except for those which are builtin, requires os-level system calls, and each "import" seems to trigger about 8 "trying" messages. (There are ways to reduce this using zipimport, and each path in your PYTHONPATH may require another call.)
This means there are almost 200 stat system calls before Python starts on my machine, and "time" assigns that to "sys" rather than "user", because the user program is waiting on the system to do things.

By comparison, and like terdon said, "bc" doesn't have that high of a startup cost. Looking at the dtruss output (I have a Mac; "strace" for a Linux-based OS), I see that bc doesn't make any open() or stat() system calls of its own, except for loading a few shared libraries at the start, which of course Python does as well. In addition, Python has more files to read, before it's ready to process anything.

Waiting for disk is slow. You can get a sense for Python's startup cost by doing:

time python -c pass

It's 0.032s on my machine, while 'print 6**6**6' is 0.072s, so startup cost is half of the overall time and the calculation + conversion to decimal is the other half. While:

time echo 1 | bc

takes 0.005s, and "6^6^6" takes 0.184s, so bc's exponentiation is over 4x slower than Python's even though it's 7x faster to get started.
python vs bc in evaluating 6^6^6
1,299,665,584,000
I started hosting sites a while back using Cherokee. For external sources (FastCGI, etc) it has an option to launch the process if it can't find one running on the designated socket or port. This is great because it means if PHP or a Django site falls over (as they occasionally do) it restarts it automatically. On a new server using PHP-FPM I couldn't use Cherokee (it has a bug with PHP) so I've moved to NGINX. I really like NGINX (for its config style) but I'm having serious issues with processes falling over and never respawning. PHP does this sometimes but Django sites are more of a problem. I've created init scripts for them and they come up on boot but this doesn't help me if they conk out between reboots. I guess I'm looking for a FastCGI proxy. Something that, like Cherokee, knows what processes should be running on which sockets/ports and respawns them on-demand. Does such a thing exist? Is there any way to build this into NGINX (for ease of config)?
How about daemontools and specifically the supervise tool supervise monitors a service. It starts the service and restarts the service if it dies. Setting up a new service is easy: all supervise needs is a directory with a run script that runs the service.
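A sketch of what such a service directory looks like (all paths and the daemon command here are made up for illustration; real daemontools setups typically live under /service or /etc/service):

```shell
svc=/tmp/demo-service/myapp           # illustrative location
mkdir -p "$svc"

# supervise runs ./run and restarts it whenever it exits
cat > "$svc/run" <<'EOF'
#!/bin/sh
exec 2>&1
exec /usr/local/bin/myapp --foreground    # hypothetical daemon; must not fork
EOF
chmod +x "$svc/run"

# then: supervise "$svc" &    (or point svscan at the parent directory)
ls -l "$svc/run"
```

The one rule that matters is that the run script must exec a foreground process; if the daemon forks and exits, supervise sees it as dead and restarts it in a loop.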
Ensure a process is always running
1,299,665,584,000
Transmission is intermittently hanging on my NAS. If I send SIGTERM, it doesn't disappear from the process list and a <defunct> label appears next to it. If I send a SIGKILL, it still doesn't disappear and I can't terminate the parent because the parent is init. The only way I can get rid of the process and restart Transmission is to reboot. I realize the best thing I can do is try and fix Transmission (and I've tried), but I'm a novice at compiling and I wanted to make sure my torrents finished before I start messing around with it.
You cannot kill a <defunct> process (also known as a zombie process) as it is already dead. The system keeps zombie processes for the parent to collect the exit status. If the parent does not collect the exit status then the zombie processes will stay around forever. The only way to get rid of those zombie processes is by killing the parent. If the parent is init then you can only reboot.

Zombie processes take up almost no resources so there is no performance cost in letting them linger. Although having zombie processes around usually means there is a bug in some of your programs. Init should usually collect all children. If init has zombie children then there is a bug in init (or somewhere else, but a bug it is).

http://en.wikipedia.org/wiki/Zombie_process
How can I kill a <defunct> process whose parent is init?
1,299,665,584,000
I'm learning CentOS/RHEL and currently doing some stuff about process management. The RHCSA book I'm reading describes running kill 1234 as sending SIGQUIT. I always thought the kill command without adding a switch for signal type should default to kill -15 SIGTERM is kill -15 and SIGKILL is kill -9, right? Does CentOS/RHEL use a slightly different method of kill -15 or have I just been mistaken? EDIT: kill -l gives SIGQUIT as kill -3 and it seems to be associated with using the keyboard to terminate a process. man 7 signal also states that SIGQUIT is kill -3, so I can only assume that my book is wrong in stating that SIGQUIT is kill -15 default.
No, they're not the same. The default action for both is to terminate the process, but SIGQUIT also dumps core. See e.g. the Linux man page signal(7). kill by default sends SIGTERM, so I can only imagine that the mention of SIGQUIT being default is indeed just a mistake. That default is in POSIX, and so are the numbers for SIGTERM, SIGKILL and SIGQUIT.
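You can check the number-to-name mapping and the default signal from any shell (the kill builtin's -l option translates numbers to names):

```shell
kill -l 15    # prints TERM
kill -l 3     # prints QUIT
kill -l 9     # prints KILL

# plain kill with no signal option sends SIGTERM:
sleep 60 &
kill $!             # same as kill -TERM $!
wait $! ; echo $?   # 128+15 = 143: the child died from SIGTERM
```

The 128+N exit-status convention for signal deaths is another POSIX detail worth knowing when debugging scripts.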
Is SIGQUIT the same as SIGTERM?
1,299,665,584,000
As I understand it, a zombie process has died but still exists as a placeholder in the process table until its parent (or init if the zombie is itself an orphan) checks its exit status. And my understanding of orphan processes is they are processes that are still alive and running but whose parent has died. Since a zombie is already dead, its children would be considered orphans, wouldn't they? Would they be affected by reaping the zombie? Specifically, would init adopt them as its children only once the zombie was reaped, or would they be adopted as soon as the parent became a zombie?
As I understand it, a zombie process has died but still exists as a placeholder in the process table until its parent (or init if the zombie is itself an orphan) checks its exit status.

Correct.

And my understanding of orphan processes is they are processes that are still alive and running but whose parent has died.

Correct.

Since a zombie is already dead, its children would be considered orphans, wouldn't they?

Yes. When the parent dies, it's dead. With respect to its children, it doesn't matter whether the parent stays on as a zombie: the children become orphans at the time the parent dies, and then they lose any connection with their parent.

Would they be affected by reaping the zombie? Specifically, would init adopt them as its children only once the zombie was reaped, or would they be adopted as soon as the parent became a zombie?

No, and the latter, as per above.
Can a zombie have orphans? Will the orphan children be disturbed by reaping the zombie?
1,299,665,584,000
I use Ubuntu Server 10.10 and I would like to see what processes are running. I know that PostgreSQL is running on my machine but I can not see it with the top or ps commands, so I assume that they aren't showing all of the running processes. Is there another command which will show all running processes or is there any other parameters I can use with top or ps for this?
From the ps man page:

-e     Select all processes. Identical to -A.

Thus, ps -e will display all of the processes.

The common options for "give me everything" are ps -ely or ps aux, the latter is the BSD-style. Often, people then pipe this output to grep to search for a process, as in xenoterracide's answer. In order to avoid also seeing grep itself in the output, you will often see something like:

ps -ef | grep [f]oo

where foo is the process name you are looking for.

However, if you are looking for a particular process, I recommend using the pgrep command if it is available. I believe it is available on Ubuntu Server. Using pgrep means you avoid the race condition mentioned above. It also provides some other features that would require increasingly complicated grep trickery to replicate. The syntax is simple:

pgrep foo

where foo is the process for which you are looking. By default, it will simply output the Process ID (PID) of the process, if it finds one. See man pgrep for other output options.

I found the following page very helpful: http://mywiki.wooledge.org/ProcessManagement
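The [f]oo trick and pgrep, side by side, against a throwaway process (the sleep duration is arbitrary, chosen so the pattern is unique):

```shell
sleep 271828 &

# the bracket trick: the pattern [s]leep doesn't match the literal
# string "[s]leep 271828" in grep's own command line
ps -ef | grep '[s]leep 271828'

# pgrep sidesteps the problem entirely and prints only the PID
pgrep -f 'sleep 271828'

kill $!
```

Both approaches find only the sleep process; the plain pattern 'sleep 271828' would usually also match the grep that is searching for it.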
How can I see what processes are running?
1,299,665,584,000
Don't normally post here but I am ripping my hair out over this one. I have a Python script that forks when it launches, and is responsible for starting a bunch of other processes. This script used to be launched at startup via sysvinit, but recently I upgraded to Debian Jessie so have adapted it to launch via systemd. Unfortunately, I'm running into an issue I can't work out. When you launch the script directly in a user shell, it launches it's child processes correctly, and when the script exits the child processes are orphaned and continue to run. When launched Via systemd, if the parent process exits, the children all exit too (Well, the screens that they launch in die and appear as Dead). Ideally I need to be able to restart the parent script without killing all the child processes, is there something that I am missing? Thanks! [Unit] Description=Server commander After=network.target [Service] User=serveruser Type=forking PIDFile=/var/Server/Server.pid ExecStart=/var/Server/Server.py ExecStop=/bin/kill -s TERM $MAINPID [Install] WantedBy=multi-user.target Edit: It's probably relevant for me to point out that the Python script is essentially a 'controller' for its child processes. It starts and stops servers in GNU screens as requested from a central server. It is normally always running, it doesn't spawn services and exit. There are cases however where I would like to be able to reload the script without killing child processes, even if that means the processes are orphaned off to pid 1. In fact, it wouldn't even matter if the Python script started off processes as a parent process, if that is even possible. 
A better explanation of how it works:

1. systemd spawns Server.py
2. Server.py forks and writes the pid file for systemd
3. Server.py then spawns server processes in gnu screen based on its instructions
4. Server.py continues to run to perform any restarts requested from the server

When launching without systemd, Server.py can be restarted and the GNU screens it launches are unaffected. When launching with systemd, when Server.py shuts down, instead of those screen processes being orphaned off to pid 1, they are killed.
I managed to fix this simply by setting KillMode to process instead of control-group (default). Thanks all!
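For reference, the resulting service section ends up looking like this; only the KillMode line is the actual fix, the rest is carried over from the unit above:

```ini
[Service]
User=serveruser
Type=forking
PIDFile=/var/Server/Server.pid
ExecStart=/var/Server/Server.py
ExecStop=/bin/kill -s TERM $MAINPID
# default is control-group: on stop, systemd kills every process in the
# unit's cgroup. process: signal only the main process, so the screens
# spawned by Server.py survive a restart.
KillMode=process
```

After editing the unit, systemctl daemon-reload is needed before the change takes effect.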
Systemd and process spawning: child processes are killed when main process exits
1,299,665,584,000
How can i view the priority of a specific process ?
The top command lists the priority of running processes under the PR heading. If you have it installed, you can also search for a process and sort by priority in htop.
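ps can report the same priority columns for a single PID, which is handier in scripts than a full-screen UI (the column selection here is just one choice):

```shell
sleep 42 &

# priority (PRI), nice (NI) and realtime priority (RTPRIO, '-' if none)
ps -o pid,pri,ni,rtprio,comm -p "$!"
ps -o pid,pri,ni,rtprio,comm -p "$$"   # this shell itself

kill "$!"
```

The rtprio column stays '-' for ordinary processes; it is only set for realtime scheduling classes.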
What is a command to find priority of process in Linux?
1,299,665,584,000
I want to run two commands simultaneously in bash on a Linux machine. Therefore in my ./execute.sh bash script I put:

command 1 &
command 2
echo "done"

However when I want to stop the bash script and hit Ctrl+C, only the second command is stopped. The first command keeps running. How do I make sure that the complete bash script is stopped? Or in any case, how do I stop both commands? Because in this case no matter how often I press Ctrl+C the command keeps running and I am forced to close the terminal.
If you type

command 1 & command 2

this is equal to

command 1 &
command 2

i.e. this will run the first command in background and then runs the second command in foreground. Especially this means, that your echo "done" is printed after command 2 finished even if command 1 is still running. You probably want

command 1 &
command 2 &
wait
echo "done"

This will run both commands in background and wait for both to complete. If you press CTRL-C this will only send the SIGINT signal to the foreground process, i.e. command 2 in your version or wait in my version. I would suggest setting a trap like this:

#!/bin/bash
trap killgroup SIGINT

killgroup(){
  echo killing...
  kill 0
}

loop(){
  echo $1
  sleep $1
  loop $1
}

loop 1 &
loop 2 &
wait

With the trap the SIGINT signal produced by CTRL-C is trapped and replaced by the killgroup function, which kills all those processes.
Ctrl-C with two simultaneous commands in bash
1,299,665,584,000
I am reading up on Linux processes from The Linux Documentation Project: https://www.tldp.org/LDP/tlk/kernel/processes.html Processes are always making system calls and so may often need to wait. Even so, if a process executes until it waits then it still might use a disproportionate amount of CPU time and so Linux uses pre-emptive scheduling. In this scheme, each process is allowed to run for a small amount of time, 200ms, and, when this time has expired another process is selected to run and the original process is made to wait for a little while until it can run again. This small amount of time is known as a time-slice. My question is, how is this time being kept track of? If the process is currently the only one occupying the CPU, then there is nothing actually checking if the time has expired, right? I understand that processes jump to syscalls and those jump back to the scheduler, so it makes sense how processes can be “swapped” in that regards. But how is Linux capable of keeping track how much time a process has had on the CPU? Is it only possible via hardware timers?
The short answer is yes. All practical approaches to preemption will use some sort of CPU interrupt to jump back into privileged mode, i.e. the linux kernel scheduler. If you look at your /proc/interrupts you'll find the interrupts used in the system, including timers. Note that linux has several different types of schedulers, and the classic periodic timer style, is seldom used - from the Completely Fair Scheduler (CFS) documentation: CFS uses nanosecond granularity accounting and does not rely on any jiffies or other HZ detail. Thus the CFS scheduler has no notion of “timeslices” in the way the previous scheduler had, and has no heuristics whatsoever. Also, when a program issues a system call (Usually by a software interrupt - "trap"), the kernel is also able to preempt the calling program, this is especially evident with system calls waiting for data from other processes.
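On Linux you can see the interrupt and context-switch counters the answer mentions directly in /proc (exact field names vary a little between kernels):

```shell
# timer-related interrupt counters, per CPU
grep -i timer /proc/interrupts || true

# total context switches since boot: one rough measure of preemption
grep '^ctxt' /proc/stat
```

Sampling the ctxt line twice a second apart gives the context-switch rate, which is what tools like vmstat report.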
How does Linux accomplish pre-emptive scheduling?
1,299,665,584,000
I'm trying to map Linux process state codes (as in ps) to states in the OS state diagram but I can't seem to map them. Is it because Linux process states don't necessarily match the theoretical OS state diagram? Specifically, I am unsure where D/S/T/I fit in the diagram.

ps process state codes from the man page:

PROCESS STATE CODES
Here are the different values that the s, stat and state output specifiers (header "STAT" or "S") will display to describe the state of a process:

D    uninterruptible sleep (usually IO)
I    Idle kernel thread
R    running or runnable (on run queue)
S    interruptible sleep (waiting for an event to complete)
T    stopped by job control signal
t    stopped by debugger during the tracing
W    paging (not valid since the 2.6.xx kernel)
X    dead (should never be seen)
Z    defunct ("zombie") process, terminated but not reaped by its parent

OS process state diagram from Wikipedia:

Similar to Does "stopped" belong to "blocked" state? but the answer is pretty incomplete.
Short answer: The states (roughly) map like this:

State  Meaning
D      Blocked
I      Blocked
R      Waiting or Running
S      Blocked
T      Blocked (more or less)
t      Blocked (more or less)
W      Blocked (obsolete since Linux 1.1.30)
X      Terminated
Z      Terminated

Long answer: The externally visible process state codes in Linux try to pack information that might be interesting for a system administrator into one character, so they also include information why a process is blocked (and thus if it can be unblocked and what may unblock it).

The distinction between "Waiting" and "Running" is blurred, because processes run in such tiny time slices that, for a human sitting in front of the computer, there is not much difference between a process ready to run and a process running. Also Linux doesn't swap out whole processes but individual memory pages, so you won't find states mapping to "Swapped out and waiting" or "Swapped out and blocked".

State  Meaning

D      The process is blocked and that state cannot be interrupted (e.g. with
       kill). Usually while in this state the kernel is performing I/O on behalf
       of the process, and the kernel code in question isn't able to handle
       interruptions.

I      The process is a kernel thread that currently has nothing to do and is
       blocked waiting for new work. This state is technically the same as D (as
       usually kernel threads aren't interruptible). It was introduced for
       accounting/cosmetic reasons, because processes in D state are considered
       contributing to the system load.

R      The process is waiting to run or running. These are all processes the
       scheduler can and will schedule on the available CPUs. Internally the
       kernel can differentiate between running and waiting processes but this
       isn't exposed through the process state codes.

S      The process is blocked and that state can be interrupted with kill. This
       state is entered with most system calls that wait for some event (sleep,
       select, poll, wait, etc.).

T      The process is blocked from being scheduled by a signal like SIGSTOP.
       This state doesn't match perfectly into the theoretical state "Blocked"
       because the process doesn't wait for an event by itself, but is usually
       blocked from further running by intervention of another process or the
       user (Ctrl+Z).

t      Similar to above. The process is blocked from being scheduled by a
       debugger or tracing process, not by itself waiting for an event.

W      Obsolete. The process is blocked waiting for a memory page to be read
       from swap into RAM. This code was used up until Linux v1.1.30. Since
       v2.3.43 there's no way to put a process in this state any more, and since
       v2.5.50 every reference to this state was removed.

X      The process is terminated and currently being removed from the process
       list. You won't see this state often as it only appears when ps runs
       exactly in the split second while the kernel cleans up a process entry on
       another CPU core.

Z      The process is terminated and the entry in the process list only exists
       so that the parent process can collect the exit status information.
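The S and T codes are easy to reproduce from a shell; the short sleeps below are only there to let the state settle before reading it:

```shell
sleep 99 &
sleep 0.5
ps -o pid,stat,comm -p "$!"   # STAT starts with S: interruptible sleep

kill -STOP "$!"
sleep 0.5
ps -o pid,stat,comm -p "$!"   # STAT starts with T: stopped by a signal

kill -CONT "$!"
kill "$!"
```

D and Z are harder to demo on purpose; D needs the kernel to be blocked mid-I/O, and Z needs a parent that refuses to wait() on its child.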
Process states in OS theory vs in Linux
1,299,665,584,000
According to the man page, and wikipedia; nice ranges from -20 to 20. Yet when I run the following command, I find some processes have a non-numerical value such as (-). See the sixth column from the left with title 'NI'. What does a niceness of (-) indicate?

ps axl
F   UID   PID  PPID PRI   NI    VSZ   RSS WCHAN  STAT TTY        TIME COMMAND
4     0     1     0  20    0  19356  1548 poll_s Ss   ?          0:00 /sbin/init
1     0     2     0  20    0      0     0 kthrea S    ?          0:00 [kthreadd]
1     0     3     2 -100   -      0     0 migrat S    ?          0:03 [migration/0]
1     0     4     2  20    0      0     0 ksofti S    ?          0:51 [ksoftirqd/0]
1     0     5     2 -100   -      0     0 cpu_st S    ?          0:00 [migration/0]
5     0     6     2 -100   -      0     0 watchd S    ?          0:09 [watchdog/0]
1     0     7     2 -100   -      0     0 migrat S    ?          0:08 [migration/1]
1     0     8     2 -100   -      0     0 cpu_st S    ?          0:00 [migration/1]
1     0     9     2  20    0      0     0 ksofti S    ?          1:03 [ksoftirqd/1]
5     0    10     2 -100   -      0     0 watchd S    ?          0:09 [watchdog/1]
1     0    11     2 -100   -      0     0 migrat S    ?          0:05 [migration/2]

I've checked 3 servers running: Ubuntu 12.04 and CentOs 6.5 and Mac OsX 10.9. Only the Ubuntu and CentOs machines have non-digit niceness values.
What does a niceness of (-) indicate? Notice those also have a PRI score of -100; this indicates the process is scheduled as a realtime process. Realtime processes do not use nice scores and always have a higher priority than normal ones, but still differ with respect to one another. You can view details per process with the chrt command (e.g. chrt -p 3). One of your -100 ones will likely report a "current scheduling priority" of 99 -- unlike nice, here high values are higher priority, which is probably where top created the -100 number from. Non-realtime processes will always show a "current scheduling priority" of 0 in chrt regardless of nice value, and under linux a "current scheduling policy" of SCHED_OTHER. Only the Ubuntu and CentOs machines have non digit niceness values. Some versions of top seem to report realtime processes with rt under PRI and then 0 under NI.
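ps can make the distinction explicit by printing the scheduling class next to the nice value; TS is the normal SCHED_OTHER class, while FF/RR are the realtime classes whose NI shows as '-'. An ordinary shell, as below, reports TS:

```shell
# CLS: scheduling class; RTPRIO: realtime priority ('-' for normal tasks)
ps -o pid,cls,rtprio,ni,comm -p "$$"

# chrt reports the same thing per PID, where util-linux is installed
command -v chrt >/dev/null && chrt -p "$$" || true
```

Scanning with ps -eo pid,cls,rtprio,ni,comm shows the kernel migration/watchdog threads in class FF with a high rtprio, matching the -100 PRI rows in the question.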
What does a niceness value of (-) mean?
1,299,665,584,000
I use pgrep from procps-3.3.10. If I have executable aout_abcdefgh_ver27, then pgrep aout_abcdefgh_ver27 returns nothing, while ps aux | grep aout_abcdefgh_ver27 returns the expected result:

ps aux | grep aout_abcdefgh_ver27
evgeniy  14806  0.0  0.0   4016   672 pts/8    S    12:50   0:00 ./aout_abcdefgh_ver27
evgeniy  15241  0.0  0.0  12596  2264 pts/8    S+   12:50   0:00 grep --colour=auto aout_abcdefgh_ver27

But if I run

$ pgrep aout_abcdefgh_v
14806

pgrep returns what I expect, so I wonder why it works in such a strange way. Maybe I should use some option for pgrep to work with the full process name? It looks like it has a very short limit for the pattern, something like ~10 symbols.
The problem is that by default, pgrep only searches the process name. The name is a truncated version of the entire command. You can see what the name is by looking at /proc/PID/status where PID is the process ID of the relevant process. For example:

$ ./aout_abcdefgh_ver27 &
[1] 14255     ## this is the PID
$ grep Name /proc/14255/status
Name:   aout_abcdefgh_v

So yes, pgrep with no flags only reads the first 15 characters of the executable's name. To search the full command line used to launch it, you need the -f flag (from man pgrep):

-f, --full
       The pattern is normally only matched against the process name. When -f is set, the full command line is used.

So, if you use -f:

$ pgrep -f aout_abcdefgh_ver27
14255
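The 15-character limit is easy to reproduce by copying a binary to a longer name (the temporary path is just for the demo):

```shell
cp "$(command -v sleep)" /tmp/aout_abcdefgh_ver27
/tmp/aout_abcdefgh_ver27 300 &

cat /proc/$!/comm                # aout_abcdefgh_v  (truncated to 15 chars)
pgrep aout_abcdefgh_ver27 || echo "no match against the truncated name"
pgrep -f aout_abcdefgh_ver27     # -f matches the full command line

kill $!
rm /tmp/aout_abcdefgh_ver27
```

The truncation comes from the kernel's fixed-size comm field (TASK_COMM_LEN, 16 bytes including the terminating NUL), which is also what /proc/PID/status reports as Name.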
pgrep full match not work, only part, why?
1,299,665,584,000
For catastrophe testing scenarios on out server environment we're looking for an easy way to make a process stuck in D (uninterruptible sleep) state. Any easy ways? An example C sample code would be a plus :) Edit - the first answer is semi-correct, as the process is shown to be in D state, but it still receives signals and can be killed
I had the same problem and resolved it by creating a kernel module that gets stuck in D state. As I don't have any experience with modules, I took the code from this tutorial with some modifications found somewhere else. The result is a device on /dev/memory that gets stuck on read but can be woken up by writing to it (it needs two writes, I don't know why but I don't care). To use it just:

# make
# make mknod
# make install
# cat /dev/memory      # this gets blocked

To unblock, from another terminal:

# echo -n a > /dev/memory
# echo -n a > /dev/memory

Makefile:

obj-m += memory.o

all:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

clean:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean

install:
	sudo insmod memory.ko

uninstall:
	sudo rmmod memory

mknod:
	sudo mknod /dev/memory c 60 0
	sudo chmod 666 /dev/memory

Code for memory.c:

/* Necessary includes for device drivers */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/kernel.h>  /* printk() */
#include <linux/slab.h>    /* kmalloc() */
#include <linux/fs.h>      /* everything... */
#include <linux/errno.h>   /* error codes */
#include <linux/types.h>   /* size_t */
#include <linux/proc_fs.h>
#include <linux/fcntl.h>   /* O_ACCMODE */
#include <asm/uaccess.h>   /* copy_from/to_user */
#include <linux/sched.h>

MODULE_LICENSE("Dual BSD/GPL");

/* Declaration of memory.c functions */
int memory_open(struct inode *inode, struct file *filp);
int memory_release(struct inode *inode, struct file *filp);
ssize_t memory_read(struct file *filp, char *buf, size_t count, loff_t *f_pos);
ssize_t memory_write(struct file *filp, char *buf, size_t count, loff_t *f_pos);
void memory_exit(void);
int memory_init(void);

/* Structure that declares the usual file */
/* access functions */
struct file_operations memory_fops = {
  .read = memory_read,
  .write = memory_write,
  .open = memory_open,
  .release = memory_release
};

/* Declaration of the init and exit functions */
module_init(memory_init);
module_exit(memory_exit);

/* Global variables of the driver */
/* Major number */
int memory_major = 60;
/* Buffer to store data */
char *memory_buffer;

int memory_init(void)
{
  int result;

  /* Registering device */
  result = register_chrdev(memory_major, "memory", &memory_fops);
  if (result < 0) {
    printk("<1>memory: cannot obtain major number %d\n", memory_major);
    return result;
  }

  /* Allocating memory for the buffer */
  memory_buffer = kmalloc(1, GFP_KERNEL);
  if (!memory_buffer) {
    result = -ENOMEM;
    goto fail;
  }
  memset(memory_buffer, 0, 1);

  printk("<1>Inserting memory module\n");
  return 0;

fail:
  memory_exit();
  return result;
}

void memory_exit(void)
{
  /* Freeing the major number */
  unregister_chrdev(memory_major, "memory");

  /* Freeing buffer memory */
  if (memory_buffer) {
    kfree(memory_buffer);
  }

  printk("<1>Removing memory module\n");
}

int memory_open(struct inode *inode, struct file *filp)
{
  /* Success */
  return 0;
}

int memory_release(struct inode *inode, struct file *filp)
{
  /* Success */
  return 0;
}

static DECLARE_WAIT_QUEUE_HEAD(wq);
static volatile int flag = 0;

ssize_t memory_read(struct file *filp, char *buf, size_t count, loff_t *f_pos)
{
  printk("<1>going to sleep\n");
  flag = 0;
  //wait_event_interruptible(wq, flag != 0);
  wait_event(wq, flag != 0);
  printk("<1>Reading from memory module\n");

  /* Transfering data to user space */
  copy_to_user(buf, memory_buffer, 1);

  /* Changing reading position as best suits */
  if (*f_pos == 0) {
    *f_pos += 1;
    return 1;
  } else {
    return 0;
  }
}

ssize_t memory_write(struct file *filp, char *buf, size_t count, loff_t *f_pos)
{
  char *tmp;

  printk("<1>wake someone up\n");
  flag = 1;
  //wake_up_interruptible(&wq);
  wake_up(&wq);
  printk("<1>Writing to memory module\n");
  tmp = buf + count - 1;
  copy_from_user(memory_buffer, tmp, 1);
  return 1;
}
Simulate an unkillable process in D state
1,299,665,584,000
I'm writing a Perl script that parses logfiles to collect PIDs and then checks whether that PID is running. I am trying to think of the best way to make that check. Obviously, I could do something like:

system("ps $pid > /dev/null") && print "Not running\n";

However, I'd prefer to avoid the system call if possible. I therefore thought I could use the /proc filesystem (portability isn't a concern, this will always be running on a Linux system). For example:

if(! -d "/proc/$pid"){
    print "Not running\n";
}

Is that safe? Can I always assume that if there's no /proc/$pid/ directory the associated PID is not running? I expect so since AFAIK ps itself gets its information from /proc anyway but since this is for production code, I want to be sure.

So, can there be cases where a running process has no /proc/PID directory or where a /proc/PID directory exists and the process is not running? Is there any reason to prefer parsing ps over checking for the existence of the directory?
The perl function kill(0,$pid) can be used.

If the return code is 1 then the PID exists and you're allowed to send a signal to it. If the return code is 0 then you need to check $!. It may be EPERM (permission denied) which means the process exists, or ESRCH in which case the process doesn't exist.

If your checking code is running as root then you can simplify this to just checking the return code of kill; 0=>error, 1=>ok.

For example:

% perl -d -e 0

Loading DB routines from perl5db.pl version 1.37
Editor support available.

Enter h or 'h h' for help, or 'man perldebug' for more help.

main::(-e:1):   0
  DB<1> print kill(0,500)
0
  DB<2> print $!
No such process
  DB<3> print kill(0,1)
0
  DB<4> print $!
Operation not permitted
  DB<5> print kill(0,$$)
1

This can be made into a simple function:

use Errno;

sub test_pid($)
{
  my ($pid)=@_;

  my $not_present=(!kill(0,$pid) && $! == Errno::ESRCH);

  return($not_present);
}

print "PID 500 not present\n" if test_pid(500);
print "PID 1 not present\n" if test_pid(1);
print "PID $$ not present\n" if test_pid($$);
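The same probe works from the shell, where kill -0 performs the signal-permission check without actually sending anything:

```shell
sleep 77 &
pid=$!

kill -0 "$pid" && echo "running"

kill "$pid"
wait "$pid" 2>/dev/null

kill -0 "$pid" 2>/dev/null || echo "gone"
# caveat: for another user's live process kill -0 fails with EPERM,
# so a failure only means "cannot signal it", as with Perl's EPERM case
```

Note that kill -0 also succeeds on zombies: they still occupy a process-table slot even though they will never run again.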
How should I check whether a given PID is running?
1,299,665,584,000
I can't find any pattern when I look at the numbering of PIDs in process table (ps -a), as the PIDs are not subsequent numbers and sometimes there are large "gaps" between those numbers. Is it because there may be some processes that run for a short time and they reserve some PIDs? Is there some range, after which the numbering of processes resets? I'm using Mac OS X but I guess that the answer should apply to UNIX in general.
Yes on both counts. Many processes are short lived. They get a PID, run, finish, and the PID disappears from the process table. Processes sometimes only live for a fraction of a second! Often when programs start they run numerous commands as part of checking the system and initializing their environment. The maximum PID number depends on the system and is sometimes configurable. Basically if you know you are going to have a huge number of processes, then you may need to increase the number, but on new operating systems I believe the maximum number is typically large enough for most any workload. PIDs are entries in the process table, and the more you have the more memory the process table takes up. Have a look at this related question: https://serverfault.com/questions/279178/what-is-the-range-of-a-pid-on-linux-and-solaris Also note that related to this is the "maximum nr of processes per user" which is a measure to protect against a malicious user intentionally creating many processes to hog the whole process table.
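On Linux, the wrap-around point the answer mentions is visible, and tunable, through procfs:

```shell
# maximum PID value; when it is reached the kernel wraps around and
# reuses the lowest free numbers (32768 was the classic default)
cat /proc/sys/kernel/pid_max
```

Writing a larger value there (as root, or via sysctl kernel.pid_max) raises the ceiling on systems that need very many concurrent processes.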
How are the processes in UNIX numbered?
1,299,665,584,000
Given a commodity PC, we would like to use it to execute some tasks in the background round the clock. Basically, we would like to have commands like: add-task *insert command here* list-tasks remove-task(s) The added tasks should simply be put in a queue and executed one after another in the background (keeping running after logout of the shell). Is there any simple script/program that does this?
There's a standard batch command that does more or less what you're after. More precisely, batch executes the jobs when the system load is not too high, one at a time (so it doesn't do any parallelization). The batch command is part of the at package. echo 'command1 --foo=bar' | batch echo 'command2 "$(wibble)"' | batch at -q b -l # on many OSes, a slightly shorter synonym is: atq -q b at -q b -r 1234 # Unschedule a pending task (atq gives the task ID)
Simple queuing system?
1,299,665,584,000
Is there a way to know which cores currently have a process pinned to them? Even processes run by other users should be listed in the output. Or, is it possible to try pinning a process to a core but fail in case the required core already has a process pinned to it? PS: processes of interest must have been pinned to the given cores, not just currently running on the given core PS: this is not a duplicate, the other question is on how to ensure exclusive use of one CPU by one process. Here we are asking how to detect that a process was pinned to a given core (i.e. cpuset was used, not how to use it).
Answer to myself: hwloc-bind from Linux (and homebrew for Macs) package hwloc. Cf. https://www.open-mpi.org/projects/hwloc/tutorials/20130115-ComPAS-hwloc-tutorial.pdf for some doc.
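On Linux alone, a lighter-weight sketch is possible with taskset from util-linux: it prints the CPU affinity list of each process, and a restricted list (e.g. "2" instead of "0-7") suggests the process was pinned. This assumes taskset is installed, which it usually is:

```shell
# Print the affinity list of every process we are allowed to
# inspect; errors for other users' processes are silenced.
for pid in $(ps -eo pid=); do
    taskset -cp "$pid" 2>/dev/null
done | head -n 3

# Our own shell's affinity, e.g. "pid 1234's current affinity list: 0-7"
aff=$(taskset -cp "$$")
echo "$aff"
```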
Linux: how to know which processes are pinned to which core?
1,299,665,584,000
I had issued the ps -ef|grep java command and this is one of the entries that I got: subhrcho 875 803 0 Jan23 pts/5 00:02:27 [java] <defunct> What does <defunct> imply here? What state is the process with PID=875 in?
From the ps manpage: Processes marked <defunct> are dead processes (so-called "zombies") that remain because their parent has not destroyed them properly. These processes will be destroyed by init(8) if the parent process exits.
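A defunct entry is easy to reproduce: let a child exit while its parent stays alive but never calls wait(). A sketch (the timings are arbitrary):

```shell
# "sleep 1 &" exits after one second; "exec sleep 3" then occupies
# the parent's PID with a program that never calls wait(), so the
# dead child lingers as a zombie (state Z, shown as <defunct>)
# until its parent exits and init reaps it.
sh -c 'sleep 1 & exec sleep 3' &
parent=$!
sleep 2
state=$(ps -o stat= --ppid "$parent" | tr -d ' ')
echo "child state: $state"   # contains Z
wait "$parent"
```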
What does <defunct> mean in the output of ps?
1,299,665,584,000
Consider a shell script executed by Sh, not Bash (and I can't change it, and can't use a shebang, it's ignored). The & operator works, but disown $! does not and makes Sh complain “disown: not found”. How do I detach a background process specifically from Sh? I mean, doing from Sh the same as disown does from Bash.
First let's see what Bash's disown command does. From Bash manual: The shell exits by default upon receipt of a SIGHUP. Before exiting, an interactive shell resends the SIGHUP to all jobs, running or stopped. Stopped jobs are sent SIGCONT to ensure that they receive the SIGHUP. To prevent the shell from sending the SIGHUP signal to a particular job, it should be removed from the jobs table with the disown builtin (see Job Control Builtins) or marked to not receive SIGHUP using disown -h. If the huponexit shell option has been set with shopt (see The Shopt Builtin), Bash sends a SIGHUP to all jobs when an interactive login shell exits. What this means is that it's the shell itself that, if it receives SIGHUP, forwards it to the background jobs. If the shell exits (and huponexit hadn't been enabled), the background processes (which don't have a controlling terminal) don't get SIGHUP on terminal closing. So if your concern is to prevent SIGHUP to a process launched from a shell wrapper, like e.g. #!/bin/sh my-command arg & then there's no need for disown-like functionality, since my-command will not receive SIGHUP unless the shell gets it before exiting. But, there still is a problem if you want to run a child from a script that will continue execution for some time after launching the child, like e.g. here: #!/bin/sh sleep 55555 & sleep 3333 # some long operation we don't care about The script above will terminate the sleep 55555 command if the script's controlling terminal closes. Since Bourne shell doesn't have the disown builtin that Bash, Ksh and some other shells have, we need another tool. That tool is nohup(1). The above script, in which we aren't interested in the stdout and stderr of the child process, can be modified to the following to avoid sleep getting SIGHUP: #!/bin/sh nohup sleep 55555 >/dev/null 2>&1 & sleep 3333 # some long operation we don't care about The redirection to /dev/null is to avoid getting the nohup.out file in the current directory. Without the redirection, this file would contain the output of the nohupped process.
Detach a process from Sh (not Bash) or “disown” for Sh (not Bash)
1,299,665,584,000
suspend is a builtin command in Bash. When would you naturally use this command and find it useful?
Let's say you lack both GNU screen and tmux (and X11, and virtual consoles) but want to switch between a login shell and another interactive shell. You would first login on the console, and then start a new shell, temporarily blocking the login shell. To get the login shell back to do some work there, you'd do suspend. Then you would fg to get the interactive shell back to continue with whatever it was you did there. In fact, with job control, the login shell could spawn a number of interactive shells as background jobs that you could switch to with fg %1, fg %2 etc., but to get back to the login shell, you would need to use suspend unless you wanted to manually kill -s STOP $$. Also note that Ctrl+Z at the prompt in an interactive shell won't suspend it. EDIT: I initially had a long hypothetical section about the use of suspend in a script, but since the command requires job control and since non-interactive shells usually don't have job control, I deleted that section. Deleted section with suspend replaced by kill -s STOP $$ (this really doesn't belong to the answer any more, but it may be interesting to others anyway): Let's say you have a background process (a script) in a script, and that this background process at some stage needs to stop and wait for the parent process to tell it to go on. This could be so that the parent has time to extract and move files into place or something like that. The child script would suspend (kill -s STOP $$), and the parent script would send a CONT signal to it when it was okay to continue. It gives you the opportunity to implement a sort of synchronisation between a parent process and a child process (although very basic as the parent shell process more or less needs to guess that the child process is suspended, although this can be fixed by having the child trap CONT and not suspend if that signal is received too early).
What is a practical example of using the suspend command in Bash?
1,299,665,584,000
When I run top -H, I see that my multiple mysql threads all have the same PID. However, in ps -eLf I see each one has a different PID: ps -eLf UID PID PPID LWP C NLWP STIME TTY TIME CMD mysql 1424 1 1424 0 17 18:41 ? 00:00:00 /usr/sbin/mysqld mysql 1424 1 1481 0 17 18:41 ? 00:00:00 /usr/sbin/mysqld mysql 1424 1 1482 0 17 18:41 ? 00:00:00 /usr/sbin/mysqld mysql 1424 1 1483 0 17 18:41 ? 00:00:00 /usr/sbin/mysqld mysql 1424 1 1484 0 17 18:41 ? 00:00:00 /usr/sbin/mysqld mysql 1424 1 1485 0 17 18:41 ? 00:00:00 /usr/sbin/mysqld mysql 1424 1 1486 0 17 18:41 ? 00:00:00 /usr/sbin/mysqld mysql 1424 1 1487 0 17 18:41 ? 00:00:00 /usr/sbin/mysqld mysql 1424 1 1488 0 17 18:41 ? 00:00:00 /usr/sbin/mysqld mysql 1424 1 1489 0 17 18:41 ? 00:00:00 /usr/sbin/mysqld mysql 1424 1 1490 0 17 18:41 ? 00:00:00 /usr/sbin/mysqld mysql 1424 1 1791 0 17 18:41 ? 00:00:00 /usr/sbin/mysqld mysql 1424 1 1792 0 17 18:41 ? 00:00:00 /usr/sbin/mysqld mysql 1424 1 1793 0 17 18:41 ? 00:00:00 /usr/sbin/mysqld mysql 1424 1 1794 0 17 18:41 ? 00:00:00 /usr/sbin/mysqld mysql 1424 1 1809 0 17 18:41 ? 00:00:00 /usr/sbin/mysqld mysql 1424 1 1812 0 17 18:41 ? 
00:00:00 /usr/sbin/mysqld and in top -H PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 1424 mysql 20 0 539m 56m 7200 S 0.0 1.5 0:00.08 mysqld 1481 mysql 20 0 539m 56m 7200 S 0.0 1.5 0:00.16 mysqld 1482 mysql 20 0 539m 56m 7200 S 0.0 1.5 0:00.33 mysqld 1483 mysql 20 0 539m 56m 7200 S 0.0 1.5 0:00.16 mysqld 1484 mysql 20 0 539m 56m 7200 S 0.0 1.5 0:00.23 mysqld 1485 mysql 20 0 539m 56m 7200 S 0.0 1.5 0:00.27 mysqld 1486 mysql 20 0 539m 56m 7200 S 0.0 1.5 0:00.15 mysqld 1487 mysql 20 0 539m 56m 7200 S 0.0 1.5 0:00.18 mysqld 1488 mysql 20 0 539m 56m 7200 S 0.0 1.5 0:00.16 mysqld 1489 mysql 20 0 539m 56m 7200 S 0.0 1.5 0:00.16 mysqld 1490 mysql 20 0 539m 56m 7200 S 0.0 1.5 0:00.34 mysqld 1791 mysql 20 0 539m 56m 7200 S 0.0 1.5 0:00.26 mysqld 1792 mysql 20 0 539m 56m 7200 S 0.0 1.5 0:00.54 mysqld 1793 mysql 20 0 539m 56m 7200 S 0.0 1.5 0:00.00 mysqld 1794 mysql 20 0 539m 56m 7200 S 0.0 1.5 0:00.00 mysqld 1809 mysql 20 0 539m 56m 7200 S 0.0 1.5 0:00.00 mysqld 1812 mysql 20 0 539m 56m 7200 S 0.0 1.5 0:00.13 mysqld What is going on and which one should I believe?
They are actually showing the same information in different ways. This is what the -f and -L options to ps do (from man ps, emphasis mine): -f               Do full-format listing. This option can be combined with many other UNIX-style options to add additional columns. It also causes the command arguments to be printed. When used with -L, the NLWP (number of threads) and LWP (thread ID) columns will be added. -L              Show threads, possibly with LWP and NLWP columns. tid              TID The unique number representing a dispatchable entity (alias lwp, spid). This value may also appear as: a process ID (pid); a process group ID (pgrp); a session ID for the session leader (sid); a thread group ID for the thread group leader (tgid); and a tty process group ID for the process group leader (tpgid). So, ps will show thread IDs in the LWP column while the PID column is the actual process identifier. top, on the other hand, lists the different threads in the PID column, though I can't find an explicit mention of this in man top.
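You can see both views side by side with ps itself. A sketch (run against your own shell, which is single-threaded, so its one LWP equals its PID; that PID/LWP number is also what top -H prints in its PID column):

```shell
# Without -L: one line per process, showing the real PID.
# With -L: one line per thread; PID stays the same, LWP differs.
pid=$$
nthreads=$(ps -o nlwp= -p "$pid" | tr -d ' ')
main_lwp=$(ps -L -o lwp= -p "$pid" | head -n 1 | tr -d ' ')
echo "pid=$pid main_lwp=$main_lwp threads=$nthreads"
```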
Why do top and ps show different PIDs for the same processes?
1,299,665,584,000
We can issue CTRL+Z to suspend any jobs in Unix and then later on bring them back to life using fg or bg. I want to understand what happens to those jobs that are suspended like this ? Are they killed/terminated ? In other words what is the difference between killing and suspending a process ?
The jobs are not killed, they are suspended. They remain exactly as they are at the time of the suspension: same memory mapping, same open file, same threads, … It's just that the process sits there doing nothing until it's resumed. It's like when you pause a movie. A suspended process behaves exactly like a process that the scheduler stubbornly refuses to give CPU time to, except that the process state is recorded as suspended rather than running.
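This is directly observable: a suspended process shows state T in ps, keeps its PID, open files and memory, and continues exactly where it left off once it receives SIGCONT. A sketch:

```shell
sleep 30 &
pid=$!
kill -STOP "$pid"; sleep 1      # what Ctrl+Z does to the foreground job
stopped=$(ps -o stat= -p "$pid" | tr -d ' ')   # contains T
kill -CONT "$pid"; sleep 1      # what fg/bg do
resumed=$(ps -o stat= -p "$pid" | tr -d ' ')   # back to S
kill "$pid"                     # clean up
echo "stopped=$stopped resumed=$resumed"
```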
What happens to suspended jobs in unix?
1,299,665,584,000
Let's assume process runs in ulimited environment : ( ulimit ... -v ... -t ... -x 0 ... ./program ) Program is terminated. There might be many reasons : memory/time/file limit exceeded ; just simple segfault ; or even normal termination with return code 0. How to check what was the reason of program termination, without modifying program? P.S. I mean "when binary is given". Maybe some wrapper (ptrace-ing etc) might help?
Generally speaking, I don't think you can unfortunately. (Some operating systems might provide for it, but I'm not aware of the ones I know supporting this.) Reference doc for resource limits: getrlimit from POSIX 2008. Take for example the CPU limit RLIMIT_CPU. If the process exceeds the soft limit, it gets sent a SIGXCPU If the process exceeds the hard limit, it gets a plain SIGKILL If you can wait() on your program, you could tell if it was killed by SIGXCPU. But you could not differentiate a SIGKILL dispatched for breach of the hard limit from a plain old kill from outside. What's more, if the program handles the XCPU, you won't even see that from outside. Same thing for RLIMIT_FSIZE. You can see the SIGXFSZ from the wait() status if the program doesn't handle it. But once the file size limit is exceeded, the only thing that happens is that further I/O that attempts to test that limit again will simply receive EFBIG - this will be handled (or not, unfortunately) by the program internally. If the program handles SIGXFSZ, same as above - you won't know about it. RLIMIT_NOFILE? Well, you don't even get a signal. open and friends just return EMFILE to the program. It's not otherwise bothered, so it will fail (or not) in whichever way it was coded to fail in that situation. RLIMIT_STACK? Good old SIGSEGV, can't be distinguished from the score of other reasons to get delivered one. (You will know that that was what killed the process though, from the wait status.) RLIMIT_AS and RLIMIT_DATA will just make malloc() and a few others start to fail (or receive SIGSEGV if the AS limit is hit while trying to extend the stack on Linux). Unless the program is very well written, it will probably fail fairly randomly at that point. So in short, generally, the failures are either not visibly different from other process death reasons, so you can't be sure, or can be handled entirely from the program in which case it decides if/when/how it proceeds, not you from the outside. 
The best you can do as far as I know is write a bit of code that forks off your program, waits on it, and: check the exit status to detect SIGXCPU and SIGXFSZ (AFAIK, those signals will only be generated by the OS for resource limit problems). Depending on your exact needs, you could assume that SIGKILL and SIGSEGV were also related to resource limits, but that's a bit of a stretch. look at what you can get out of getrusage(RUSAGE_CHILDREN,...) on your implementation to get a hint about the other ones. OS-specific facilities might exist to help out here (possibly things like ptrace on Linux, or Solaris dtrace), or possibly debugger-type techniques, but that's going to be even more tied to your specific implementation. (I'm hoping someone else will answer with some magic thing I'm completely unaware of.)
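The wait-status approach can be sketched in shell. This assumes Linux, where the soft CPU limit is checked before the hard one and SIGXCPU is signal 24, so a killed-by-SIGXCPU child is reported as 128 + 24 = 152:

```shell
# Soft CPU limit 1s, hard limit 2s: the busy loop accumulates one
# second of CPU time, receives SIGXCPU, and dies with it (default
# action). The parent then sees exit status 128 + signal number.
# Core dumps are disabled so SIGXCPU leaves no core file behind.
( ulimit -c 0; ulimit -H -t 2; ulimit -S -t 1
  exec sh -c 'while :; do :; done' )
status=$?
echo "wait status: $status"   # 152 on Linux (128 + SIGXCPU)
```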
How to check, which limit was exceeded? (Process terminated because of ulimit. )
1,299,665,584,000
In linux, from /proc/PID/stat, I can get the start_time (22:nd) field, which indicates how long after the kernel booted the process was started. What is a good way to convert that to a seconds-since-the-epoch format? Adding it to the btime of /proc/stat? Basically, I'm looking for the age of the process, not exactly when it was started. My first approach would be to compare the start_time of the process being investigated with the start_time of the current process (assuming it has not been running for long). Surely there must be way better ways. I didn't find any obvious age-related parameters when looking at https://www.kernel.org/doc/Documentation/filesystems/proc.txt So, What I have currently is: process age = (current_utime - ([kernel]btime + [process]start_time)) Any alternative ways that are more efficient from within a shell script? (Ideally correct across DST changes)
Since version 3.3.0, the ps of procps-ng on Linux has a etimes output field that gives you the elapsed time in seconds since the process was started (which by the way is not necessarily the same thing as the elapsed time since the last time that process executed a command (if at all!) (the time that process has been running the command in the process name), so may not be as useful as you thought). So you can do: ps -o etimes= -p "$pid" For the start time as Unix epoch time (with GNU date): (export TZ=UTC0 LC_ALL=C; date -d "$(ps -o lstart= -p "$pid")" +%s) Note that you cannot use the modification time of /proc/$pid. That is just the time those files were instantiated which has nothing to do with the start time of the process.
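The manual computation sketched in the question also works and is immune to DST, since everything is epoch arithmetic. A Linux-specific sketch; note that field 22 of /proc/PID/stat is in clock ticks since boot, and that naive awk field splitting assumes the command name in field 2 contains no spaces or parentheses:

```shell
pid=$$
ticks=$(getconf CLK_TCK)                        # usually 100
start=$(awk '{print $22}' "/proc/$pid/stat")    # ticks after boot
btime=$(awk '/^btime/ {print $2}' /proc/stat)   # boot time, epoch s
started=$((btime + start / ticks))              # epoch start time
age=$(( $(date +%s) - started ))
echo "process $pid started $age seconds ago"
```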
Get process age from command line [duplicate]
1,299,665,584,000
I have multiple serial ports to each of which devices are connected. They are listed as /dev/ttyUSB*. Now, I need to make sure using a python script that no other process is using any of these before I run a kermit script (so that access is not denied) login_init. I tried the ps and lsof commands. lsof gave the following output: lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1000/gvfs Output information may be incomplete. COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME login_ini 13395 user4 4u CHR 188,9 0t0 512 /dev/ttyUSB9 python 14410 user4 6u CHR 188,9 0t0 512 /dev/ttyUSB9 I got the pids of the processes alright, but when I give the killall command, it says no process found as follows: user4@user-pc-4:~/Scripts$ killall -9 13395 13395: no process found user4@user-pc-4:~/Scripts$ killall -9 14410 14410: no process found Is this the right and the only way, or are there better ways to do it?
killall expects a process name, not a process ID, as its argument. To kill a process by its process ID, use kill. You can directly kill all the processes that have a file open with the command fuser. fuser -k /dev/ttyUSB9
Find and kill the process that is using a serial port
1,299,665,584,000
If I begin a process and background it in a terminal window (say ping google.com &), I can kill it using kill %1 (assuming it is job 1). However if I open another terminal window (or tab) the backgrounded process is not listed under jobs and cannot be killed directly using kill. Is it possible to kill this process from another terminal window or tab? Note: I am using the Xfce Terminal Emulator 0.4.3 and bash (although if a solution exists in another common shell but not bash I am open to that as well)
Yes, all you need to know is the process id (PID) of the process. You can find this with the ps command, or the pidof command. kill $(pidof ping) Should work from any other shell. If it doesn't, you can use ps and grep for ping.
How can I kill a job that was initiated in another shell (terminal window or tab)?
1,299,665,584,000
I know there are a million questions on getting the process ID, but this one seems to be unique. Google has not given me the answer, so I hope Stack Exchange will help rather than close this question. When Java is involved it seems trickier to find a process ID (pgrep doesn't work afaik). Furthermore, I need to automate this in a bash script. One issue I've encountered is that when I use ps aux | grep the grep process itself always shows up, so handling the results in a simple bash script is not trivial enough for me to figure out a good solution on my own (with my limited bash skills). Some things I have tried: Example 1 - this returns a process even though there is no application by that name: $ ps aux | grep -i anythingnotreal user2 3040 0.0 0.0 4640 856 pts/3 S+ 18:17 0:00 grep --color=auto -i anythingnotreal Example 2 - this returns nothing even though "java_app" is currently running: $ pgrep java_app It returns nothing. However, here's proof that "java_app" is running: $ ps aux | grep java_app tester2 2880 0.7 2.8 733196 58444 ? Sl 18:02 0:07 java -jar /opt/java_app2/my_java_app.jar tester2 3058 0.0 0.0 4644 844 pts/3 S+ 18:19 0:00 grep --color=auto java_app What I need is a solution I can plug into a bash script that will tell me if the java application of interest (for which I know the jar file name and path) is currently running. (If it is running, I need to ask the user to close it before my script continues.)
By default, pgrep only matches the command name, not the arguments. To match the full command line, you need the -f option. $ pgrep -f java_app From the pgrep manpage: -f The pattern is normally only matched against the process name. When -f is set, the full command line is used
Find the process id of a java application in a bash script (to see if the target application is already running)
1,358,935,391,000
I know I can kill any process with kill -9 command . But sometimes i see that even if I have terminated a program with CTRL+C , the process doesn't get killed . So I want to know the difference between kill -9 vs CTRL+C
^C sends the interrupt signal (SIGINT), which a program can catch, handle, or ignore. kill -9 sends the SIGKILL signal, which the program cannot catch or ignore. That's why you can't kill some programs with ^C.
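The difference is easy to demonstrate. A sketch: the first inner shell traps SIGINT (what Ctrl+C sends) and survives it; the second cannot trap SIGKILL and dies on the spot with status 128 + 9:

```shell
# SIGINT can be caught: the trap runs and the shell keeps going.
out=$(sh -c 'trap "echo caught" INT; kill -INT $$; echo survived')
echo "$out"

# SIGKILL cannot be caught: the shell dies before the echo runs.
sh -c 'kill -KILL $$; echo never reached'
killed=$?
echo "status after SIGKILL: $killed"   # 137 = 128 + 9
```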
What is the difference between exiting a process via Ctrl+C vs issuing a kill -9 command?
1,358,935,391,000
I have a eight core machine. How can I find out how many cores are used by processes that I see in htop?
In htop, press F2 or S to enter setup, then use the arrows to navigate the Columns -> Available Columns menu, select PROCESSOR and press Enter to add a processor column. Then press q to get back to the main screen.
How to find how many cores a process is using?
1,358,935,391,000
I have often issues with processes stuck in D state, due to NFS shares behind firewalls. If I lose connections, processes get stuck in D state and I can't kill them. The only solution becomes hard reboot. I was wondering if there are any other ways but all the solutions and information I can find is "you just can't kill it". Everyone seems to be fine with and accept the way it is. I am a bit critical about this. I thought that there must be a way to scrape off the process from the memory so that there is no need for reboot. It is very annoying if this happens often. And if the resource happens to return the IO, it can simply be ignored in this case. Why isn't this possible? Linux kernel is IMHO very advanced and you should be able to do things like this. Especially, in servers... I could not find a satisfying answer, why isn't/can't this be implemented? I would also be interested in answers regarding programming and of algorithmic nature, which would explain this issue.
Killing a process while it's in a system call is possible, and it mostly works. What's difficult is to make it work all the time. Going from 99.99% to 100% is the difficult part. Normally, when a process is killed, all the resources that it uses are freed. If there's any I/O going on with the process, the code doing this I/O is notified and it exits, allowing the resources that it's using to be freed. Uninterruptible sleep happens visibly when “the code is notified and it exits” takes a non-negligible amount of time. This means that the code isn't working as it should. It's a bug. Yes, it's theoretically possible to write code without bugs, but it's practically impossible. You say “if the resource happens to return the IO, it can simply be ignored”. Well, fine. But suppose for example that a peripheral has been programmed to write to memory belonging to the process. To kill the process without cancelling the request to the peripheral, the memory must be kept in use somehow. You can't just get rid of that resource. There are resources that must stay around. And freeing the other resources can only be done if the kernel knows which resources are safe to free, which requires the code to be written in such a way that it's always possible to tell. The cases when uninterruptible sleep lasts for a visible amount of time are cases where it's impossible to tell, and the only safe thing is to wait. It is possible to design an operating system where killing a process is guaranteed to work (under certain assumptions about the hardware working correctly). For example, hard real-time operating systems guarantee that killing a process takes at most a certain fixed amount of time (assuming they offer a kill facility at all). But it's difficult, especially if the operating system must also support a wide range of peripherals and offer good common-case performance. Linux favors common-case behavior over worst-case behavior in many ways. Getting all the code paths covered is extremely difficult, especially when there wasn't a stringent framework for doing so from day 1. In the grand scheme of things, unkillable processes are extremely rare (you don't notice when they don't happen). They are a symptom of buggy drivers. A finite amount of effort has been put into writing Linux drivers. To eliminate more cases of prolonged uninterruptible sleep would either require more people on the task, or lead to less supported hardware and worse performance.
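To at least observe the symptom, D-state processes and the kernel function they are blocked in can be listed. A sketch; on a healthy machine only the header line comes back:

```shell
# STAT containing D = uninterruptible sleep; the wchan column
# shows the kernel function the process is blocked in, which is
# the first clue about which driver or filesystem is at fault.
ps -eo pid,stat,wchan:30,comm | awk 'NR == 1 || $2 ~ /D/'
```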
Why can't we kill uninterruptible D state process?
1,358,935,391,000
Possible Duplicate: What if ‘kill -9’ does not work? I guess it's a bit late to ask this, but for future reference; I was called to look at a server today after a customer was reporting that connecting with ssh was slow and executing commands was also slow (with some not working at all). After logging in I could type promptly so I didn't think it was a network issue like delay or bandwidth saturation (as I find this tends to be directly relatable through your ssh experiences). I first tried to run top, after a minute of nothing happening I cancelled this operation with CTRL+C. The prompt was hanging waiting for top to start up. free -m also was just hanging at the prompt for a minute or longer before I cancelled that. df -h did execute, and showed me that there was 60% of disk space free (I was wondering if some application had gone bananas and filled up the disks with logs). dmesg wouldn't execute either. I executed tail -n 50 /var/log/message and sadly I no longer have the output but it looked like there had been a serious problem. Lots of memory locations printed in HEX and presumably their contents (incomprehensible ramblings) on the right. It was very similar to the output in this log I found on Google, trying to find a similar example, except that in the right hand column most of the lines contain "ext4" in them, perhaps there was a file system error? Running tail -n 50 /var/log/syslog I saw in the middle of all the memory madness that was repeated here a couple of lines that said words to the effect of Info procname:pid blocked for more than 120 seconds. I executed ps aux and looked through the output until I found one process with 299% cpu usage; ps aux | grep procname procuser 8279 299 0.0 479064 41916 pts/6 Sl+ 08:05 548:31 /path/to/procname procbox 6390 6394 6395 0 So this process has gone bonkers it seems but I can't execute any command (with or without sudo) that is related to memory. For example free -m, or top.
I could cat /proc/meminfo and see that there was about 5GB out of 40GBs of RAM free. I tried kill PID but after a couple of minutes of hanging I gave up. I tried kill -9 PID but again, same thing. I can only assume this process was so busy that it couldn't answer kill messages from the kernel? I tried renice 19 PID and kill -9 PID but this didn't work either, renice would run, just hang. In the end a hard reboot was required which was not ideal. Files are now corrupt etc due to the specialist applications on the server. What other options did I have? Is there no way to simply cease a process? Rather than sending a SIGTERM, just flat out cease the processing of code, or similar?
I executed tail -n 50 /var/log/message and sadly I no longer have the output but it looked like there had been a serious problem. Lots of memory locations printed in HEX and presumably their contents (incomprehensible ramblings) on the right. It could have been nearly anything, and the contents of these kernel dumps would be important to knowing what it was. For example, you could have had a hardware problem, like a disk that was no longer responding to requests. Trying to run programs that were already cached in RAM could work fine, while running programs that needed to read from the disk could hang. It could also be that you hit a kernel bug, or some other driver problem, or had a bad bit flip in your RAM, or had virtually any other bad hardware. If a driver locked a particular resource in the kernel, and then hit a bug or error and failed to properly unlock it, then any other driver or system call that tries to obtain that lock would simply hang. It may not be a bug in the kernel. You can get this sort of behavior when e.g. using the lvm or dmsetup tools to manage disks. They can both suspend a device, which has the result that "any further I/O to that device will be postponed for as long as the device is suspended". Programs that then try to access that device will simply block in the kernel. You could trigger this manually with "dmsetup suspend", or I've seen a disk left in suspended state by accident when the LVM tool encountered an error. If this is a one time thing, don't sweat it. If it happens again, try to carefully note the kernel output so you can track down its cause. The first crash dump will be the most important. If it happens a lot and you can't get the output, consider using a netconsole to send the kernel output directly to another machine.
kill -9 hangs, unable to kill process (murder proof process) [duplicate]
1,358,935,391,000
I read in an online flash card that the command is: pkill -u bob $(pgrep -u bob) However, I think this is wrong. I think it's saying: Kill all the processes owned by bob, and 4572\n4600 Because: [bob@localhost ~]$ pgrep -u bob 4572 4600 Also, it gives an error: [bob@localhost ~]$ pkill -u bob $(pgrep -u bob) pkill: only one pattern can be provided Try `pkill --help' for more information. Which makes sense because you can't have newlines in usernames, right? I think the command should only be: pkill -u bob To "kill all processes owned by bob" While the command: pgrep -u bob Gives "all processes owned by bob" I'm wondering: Am I using the right commands as intended? Is my analysis of the incorrect way accurate?
You are correct. Wrong: pkill -u bob $(pgrep -u bob) Correct: pkill -u bob The flash card probably meant to show: kill $(pgrep -u bob) which would kill all of the processes returned by pgrep -u bob.
How to kill all processes owned by `user` on Centos 7? [duplicate]
1,358,935,391,000
I am running debian right now and sometimes I need to kill java manually from the terminal, but when I try kill #pid# or pkill java nothing happens. No console output (ok, that's normal) and java is still running (not normal). The only way to kill it is to restart the PC. Any suggestions?
Maybe it's ignoring the signal for some reason. Did you try kill -9? But please note: kill -9 cannot be ignored or trapped. If a process sees signal 9, it has no choice but to die. It can't do anything else - not even gracefully clean up its files.
'kill java' doesn't kill java
1,358,935,391,000
A thing I frequently want to do is launch some long running process or server as my own non-privileged user and then have a way to tell if it's still running and restart it if not. So for example I might set a cron job that runs every so often checking if my process is running and restart it if it has crashed. This is the essence of process management tools like djb's daemontools, supervisord, launchd etc. except that those tools are configured by default to run as root with config files in /etc but I would like a utility that lets me do the same sort of thing as my non-privileged user from the comfort of my home directory.
The daemontools you mentioned work just fine as a user. See https://cr.yp.to/daemontools/supervise.html Update: following the above suggestion, the OP got this working using the svscan program from daemontools, after trying two different methods: Put a line like this in a modern crontab: @reboot /usr/bin/svscan $HOME/.local/service 2>&1 > $HOME/.local/service/log Make ~/.config/autostart/svscan.desktop with the Exec=... line set to launch svscan with a wrapper script. My wrapper script looks like this: #!/usr/bin/env sh ( echo "Starting svscan." date /usr/bin/svscan $HOME/.local/service 2>&1 ) >> $HOME/.local/service/log Both methods work but each is good for a different situation. The first way is good if you're doing it on a headless machine where you want to allow a non-privileged user to install their own long running services and processes. The second way is good if you want all of the services to inherit the environment, ssh-agent etc. of your currently logged in X user, which means the processes effectively become a proxy of the currently logged in user themselves.
Is there a utility for daemonizing processes as non-privileged user?
1,358,935,391,000
I've realised recently that the kill utility can send any signal I want, so when I need to SIGKILL a process (when it's hanging or something), I send a SIGSEGV instead for a bit of a laugh (kill -11 instead of kill -9.) However, I don't know if this is bad practice. So, is kill -11 more dangerous than kill -9? If so, how?
The SIGSEGV signal is sent by the kernel to a process that has made an invalid virtual memory reference (a segmentation fault). One way sending a SIGSEGV could be more "dangerous" is if you kill a process on a filesystem that is low on space. The default action when a process receives a SIGSEGV is to dump core to a file and then terminate. The core file could be quite large, depending on the process, and could fill up the filesystem. As @Janka has already mentioned, you can write code to tell your program how you want it to handle a SIGSEGV signal; you can't trap a SIGKILL or a SIGSTOP. I would suggest using SIGTERM - or SIGKILL if the process won't respond to that - when you only want to terminate a process. Using a SIGSEGV usually won't have bad repercussions, but it's possible the process you want to terminate could handle a SIGSEGV in a way you don't expect.
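If you keep the habit anyway, one mitigation for the core-file concern is to disable core dumps in the sending shell before using SIGSEGV; children inherit the limit. A sketch for bash or POSIX sh - the process still dies with status 128+11 = 139, but no core file is written:

```shell
ulimit -c 0          # no core files for children of this shell
sleep 100 &
kill -SEGV $!
wait $!
echo "status: $?"    # prints: status: 139   (128 + 11)
```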
Is there any danger in using kill -11 instead of kill -9?
1,358,935,391,000
I'm trying to properly emulate POSIX signal handling and job control for my pet operating system, but it's not clear to me what should happen to a session after the session leader exits. I cannot find documentation on what happens to the session and its processes if, for example, a child kills the session leader while several background processes and a different foreground process are running. My tests show that all the processes in the session are killed, but how? Do they receive a specific signal? Is this case specified in the POSIX standard? And if so, can you provide some references?
You are not the only one puzzled by POSIX sessions; Lennart Poettering (he of systemd fame) is puzzled too. As far as anybody can tell, when a session leader dies, init inherits the orphaned session and:

- All session member processes in the foreground process group (if any) receive a SIGHUP.
- Session member processes that are not in the foreground process group don't receive any signal.

See also:

- notes.shichao.io/apue/ch9
- Chapter 10 "Processes" in The Linux Kernel by Andries Brouwer (2003):

  If the terminal goes away by modem hangup, and the line was not local, then a SIGHUP is sent to the session leader. [...] When the session leader dies, a SIGHUP is sent to all processes in the foreground process group. [...] Thus, if the terminal goes away and the session leader is a job control shell, then it can handle things for its descendants, e.g. by sending them again a SIGHUP. If on the other hand the session leader is an innocent process that does not catch SIGHUP, it will die, and all foreground processes get a SIGHUP.

  Andries Brouwer, The Linux Kernel, section 10.3 "Sessions".
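A small experiment to observe session leadership directly (a sketch, assuming the setsid utility from util-linux and a procps-style ps): a session leader's session ID equals its own PID, so a command run under setsid(1) reports sid == pid.

```shell
# setsid(1) runs the command in a brand-new session, making the
# child sh the session leader; the two numbers printed are equal:
setsid sh -c 'sid=$(ps -o sid= -p $$ | tr -d " "); echo "pid=$$ sid=$sid"'
```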
What happens to a unix session when the session leader exits?
1,358,935,391,000
ps -o command shows each command on a separate line, with space separated, unquoted arguments: $ ps -o command COMMAND bash ps -o command This can be a problem when checking whether the quoting was correct or to copy and paste a command to run it again. For example: $ xss-lock --notifier="notify-send -- 'foo bar'" slock & [1] 20172 $ ps -o command | grep [x]ss-lock xss-lock --notifier=notify-send -- 'foo bar' slock The output of ps is misleading - if you try to copy and paste it, the command will not do the same thing as the original. So is there a way, similar to Bash's printf %q, to print a list of running commands with correctly escaped or quoted arguments?
On Linux, you can get a slightly more raw list of the args to a command from /proc/$pid/cmdline for a given process id. The args are separated by the nul character. Try cat -v /proc/$pid/cmdline to see the nuls as ^@; in your case:

    xss-lock^@--notifier=notify-send -- 'foo bar'^@slock^@

The following perl script reads the proc file and replaces the nuls with a newline and a tab, giving for your example:

    xss-lock
        --notifier=notify-send -- 'foo bar'
        slock

Alternatively, if you replace the if(1) by if(0), you get a requoted command like this:

    xss-lock '--notifier=notify-send -- '\''foo bar'\''' 'slock'

The script:

    perl -e '
    $_ = <STDIN>;
    if(1){
        s/\000/\n\t/g;
        s/\t$//;                 # remove last added tab
    }else{
        s/'\''/'\''\\'\'\''/g;   # escape all single quotes
        s/\000/ '\''/;           # do first nul
        s/(.*)\000/\1'\''/;      # do last nul
        s/\000/'"' '"'/g;        # all other nuls
    }
    print "$_\n";
    ' </proc/$pid/cmdline
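If bash is available you can get a similar requoting effect without perl: bash's printf %q quotes each argument so the printed command can be pasted back into a shell. A sketch, again assuming Linux's /proc:

```shell
pid=$$                      # substitute the PID of interest
while IFS= read -r -d '' arg; do
    printf '%q ' "$arg"     # %q shell-quotes the argument
done < "/proc/$pid/cmdline"
printf '\n'
```

Newer GNU coreutils printf (8.25 and later) also understands %q, for use from non-bash shells.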
How to show quoted command list?